\section{Introduction}
Field-theoretic models with polynomial self-interaction are of growing interest in various areas of modern physics, from cosmology and high energy physics to condensed matter theory \cite{vilenkin01,manton,khare}.
In $(1+1)$-dimensional models of a real scalar field with a high-order polynomial self-interaction potential, there exist topological solutions of kink type, which can possess tails one or both of which decay as power laws towards the asymptotic states connected by the kink \cite{khare,lohe}. Properties of kinks with exponential tail asymptotics (e.g., those arising as solutions of the sine-Gordon, $\varphi^4$ or $\varphi^6$ model) are well understood \cite{GaKuPRE,aek01,christov01,GaKuLi,weigel02,GaLeLi,GaLeLiconf,dorey}. In particular, we know a lot about kink-antikink interactions in such models. Meanwhile, interactions of kinks with power-law tails have not been studied in such detail.
The study of some properties and interactions of plane domain walls in $(2+1)$ and $(3+1)$ dimensional worlds can often be reduced to studying the properties and interactions of kinks in $(1+1)$ dimensions \cite{GaLiRa,GaLeRaconf,GaKuYadFiz,Lensky,GaKsKuYadFiz01,GaKsKuYadFiz02}.
In this paper, we show that power-law asymptotics lead to long-range interaction between kinks and antikinks --- specifically, the force of the interaction can decay slowly, as some negative power of kink-antikink separation. This is a crucial difference from the (``classical'') case of kinks with exponential tail asymptotics. In the latter case, the kink-antikink interaction force always decays exponentially.
\section{A $(1+1)$-dimensional $\varphi^8$ model featuring kinks with power-law tails}
Consider the $(1+1)$-dimensional $\varphi^8$ model \cite{khare,lohe}, given by the Lagrangian field density
\begin{equation}\label{eq:largang}
\mathscr{L}=\frac{1}{2} \left( \frac{\partial\varphi}{\partial t} \right)^2-\frac{1}{2} \left( \frac{\partial\varphi}{\partial x} \right) ^2-V(\varphi),
\end{equation}
where the potential of self-interaction is
\begin{equation}\label{eq:potential8}
V(\varphi)=\varphi^4(1-\varphi^2)^2,
\end{equation}
as depicted in figure \ref{fig:potential}. The potential \eqref{eq:potential8} has three degenerate minima: $\tilde{\varphi}_1=-1$, $\tilde{\varphi}_2=0$, and $\tilde{\varphi}_3=1$, such that $V(\tilde{\varphi}_1)=V'(\tilde{\varphi}_1)=V(\tilde{\varphi}_2)=V'(\tilde{\varphi}_2)=V(\tilde{\varphi}_3)=V'(\tilde{\varphi}_3)=0$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{Potential}
\caption{Potential \eqref{eq:potential8} of the chosen $\varphi^8$ model.}
\label{fig:potential}
\end{figure}
Due to the Lorentz invariance of the $(1+1)$-dimensional field theories generated by \eqref{eq:largang} \cite{manton}, we can restrict our attention to static solutions without loss of generality. Static solutions, which are called kinks \cite{manton}, interpolate between neighboring degenerate minima of the potential (the vacuum states of the model). We use the following notation for kinks: $\varphi_{(-1,0)}(x)$ denotes the kink connecting the vacua $\tilde{\varphi}_1=-1$ and $\tilde{\varphi}_2=0$. We can also say that this kink belongs to the ``topological sector'' $(-1,0)$, and similarly for the other kinks of the model.
The $\varphi^8$ model with the potential \eqref{eq:potential8} has two kink solutions, $\varphi_{(-1,0)}(x)$ and $\varphi_{(0,1)}(x)$ shown in figures \ref{fig:kinkU} and \ref{fig:kinkD}, respectively, both of which exhibit mixed power-law and exponential tail asymptotics. To each kink there is also a corresponding antikink. The kink $\varphi_{(-1,0)}(x)$ has power-law asymptotics at $x\to +\infty$, while the kink $\varphi_{(0,1)}(x)$ has power-law asymptotics at $x\to -\infty$. All kink solutions of the chosen model can easily be obtained in the implicit closed form \cite{khare,GaLeLi}:
\begin{equation}\label{eq:kinks}
2\sqrt{2}\,x=-\frac{2}{\varphi}+\ln\frac{1+\varphi}{1-\varphi}.
\end{equation}
Figure \ref{fig:kinks} illustrates the two distinct kink solutions.
(The corresponding expressions for antikinks can be obtained from \eqref{eq:kinks} via the transformation $x\mapsto -x$.)
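Although \eqref{eq:kinks} is implicit, the kink profiles are easy to evaluate numerically, since the right-hand side is strictly increasing in $\varphi$ on each topological sector. The following minimal sketch (ours, not the authors' code; plain bisection, no external libraries) evaluates $\varphi_{(0,1)}(x)$:

```python
import math

SQRT2 = math.sqrt(2.0)

def kink_01(x):
    """Invert the implicit kink solution 2*sqrt(2)*x = -2/phi + ln((1+phi)/(1-phi))
    for the kink phi_(0,1), whose values fill the interval (0, 1).
    The right-hand side is strictly increasing in phi, so bisection applies."""
    target = 2.0 * SQRT2 * x
    f = lambda p: -2.0 / p + math.log((1.0 + p) / (1.0 - p))
    lo, hi = 1e-15, 1.0 - 1e-15
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The bisection bracket saturates for $|x| \gtrsim 10$; beyond that range the tail asymptotics \eqref{eq:kink2_asymp_minus} and \eqref{eq:kink2_asymp_plus} can be used instead.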
Standard Taylor series expansions reveal that the tail asymptotics of the kinks given by equation \eqref{eq:kinks} are:
\begin{alignat}{2}
\varphi_{(-1,0)}(x) &\sim-1+\frac{2}{e^2}\: e^{2\sqrt{2}\: x}, &\qquad x\to -\infty.\label{eq:kink1_asymp_minus}\\
\varphi_{(-1,0)}(x) &\sim-\frac{1}{\sqrt{2}\: x}, &\qquad x\to +\infty.\label{eq:kink1_asymp_plus}\\
\varphi_{(0,1)}(x) &\sim-\frac{1}{\sqrt{2}\: x}, &\qquad x\to -\infty.\label{eq:kink2_asymp_minus}\\
\varphi_{(0,1)}(x) &\sim 1-\frac{2}{e^2}\: e^{-2\sqrt{2}\: x}, &\qquad x\to +\infty.\label{eq:kink2_asymp_plus}
\end{alignat}
Clearly, the kinks $\varphi_{(-1,0)}(x)$ and $\varphi_{(0,1)}(x)$ approach the vacuum state $\tilde{\varphi}_2=0$ very slowly (as $1/x$) at $x\to+\infty$ and $x\to-\infty$, respectively. Meanwhile, the approach to the vacua $\tilde{\varphi}_1=-1$ and $\tilde{\varphi}_3=1$ is exponential.
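These leading-order behaviours can be extracted from the implicit solution \eqref{eq:kinks} directly. As a brief sketch (ours) for the kink $\varphi_{(-1,0)}(x)$: near the vacuum $\tilde{\varphi}_2=0$ one has $\ln\frac{1+\varphi}{1-\varphi}=2\varphi+O(\varphi^3)$, so that
\begin{equation*}
2\sqrt{2}\,x = -\frac{2}{\varphi}+O(\varphi) \quad\Longrightarrow\quad \varphi(x) \sim -\frac{1}{\sqrt{2}\,x}, \qquad x\to+\infty,
\end{equation*}
while near $\tilde{\varphi}_1=-1$, substituting $\varphi=-1+\varepsilon$ with $0<\varepsilon\ll 1$,
\begin{equation*}
2\sqrt{2}\,x = 2 + \ln\frac{\varepsilon}{2} + O(\varepsilon) \quad\Longrightarrow\quad \varepsilon(x) \sim \frac{2}{e^{2}}\, e^{2\sqrt{2}\,x}, \qquad x\to-\infty,
\end{equation*}
in agreement with \eqref{eq:kink1_asymp_plus} and \eqref{eq:kink1_asymp_minus}.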
\begin{figure}[h]
\centering
\subfloat[kink $\varphi_{(0,1)}(x)$]{\includegraphics[width=0.4\textwidth]{KinkU}\label{fig:kinkU}}
\hspace{15mm}
\subfloat[kink $\varphi_{(-1,0)}(x)$]{\includegraphics[width=0.4\textwidth]{KinkD}\label{fig:kinkD}}
\caption{The two kink solutions \eqref{eq:kinks} of the chosen $\varphi^8$ model with potential \eqref{eq:potential8}.}
\label{fig:kinks}
\end{figure}
\section{Long-range interaction between a kink and an antikink}
Consider a static configuration of a kink centered at some point $x=-\xi$ and an antikink centered at $x=+\xi$. Then, $\xi$ is the half-distance between the kink and the antikink. Our goal is to find how the force of kink-antikink interaction, in the chosen $\varphi^8$ model, depends upon $\xi$. Here, by the force of interaction we mean the force produced on the kink by the antikink. To estimate this force, we use two methods: the collective coordinate approximation (see, e.g., \cite{manton,aek01,GaKuLi,weigel02} and the references therein for details) and Manton's method (see, e.g., \cite{manton,kks04} and the references therein for details).
\subsection{Collective coordinate approximation}
In order to find the interaction force in the case of power-law tails, we assume the following ansatz for the kink-antikink field configuration:
\begin{equation}\label{eq:ansatz1}
\varphi(x;\xi)=\varphi_{(-1,0)}(x+\xi)+\varphi_{(0,-1)}(x-\xi),
\end{equation}
as shown in figure \ref{fig:confD}. By a standard calculation \cite{aek01}, we find that the effective potential $U_{\scriptsize\mbox{eff}}(\xi)$ and effective force $F(\xi)$ of the interaction are, respectively,
\begin{equation}\label{eq:U_eff}
U_{\scriptsize\mbox{eff}}(\xi)=\int_{-\infty}^{+\infty}\left[\frac{1}{2}\left( \frac{\partial\varphi}{\partial x} \right)^2 + V(\varphi)\right]dx, \qquad F(\xi)=\frac{dU_{\scriptsize\mbox{eff}}}{d\xi}.
\end{equation}
Here, $F(\xi)$ is the projection of the force onto the $x$-axis. Notice that we do not write the minus sign in front of the derivative of the effective potential because we are calculating the force on the left kink. Thus, a positive value of $-{dU_{\scriptsize\mbox{eff}}}/{d\xi}$ means that the force is directed to the left, i.e.\ has a negative projection onto the $x$-axis. This corresponds to repulsion between the kink and antikink. Therefore, the second formula in \eqref{eq:U_eff} is written such that $F(\xi)>0$ corresponds to attraction, and $F(\xi)<0$ corresponds to repulsion.
In figure \ref{fig:forcepow} we show the dependence of $F$ upon $\xi$ for the field configuration \eqref{eq:ansatz1}. It is seen that the kink and antikink repel each other, and the force falls off slowly with increasing separation.
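The computation just described can be sketched as follows (our own minimal reimplementation, not the authors' code; the grid, cutoffs and accuracy settings are illustrative): the kink profile is obtained from \eqref{eq:kinks} by bisection, capped by the tail asymptotics far from the core, and the energy of the ansatz \eqref{eq:ansatz1} is evaluated by the trapezoidal rule.

```python
import math

SQRT2 = math.sqrt(2.0)

def kink_m10(x):
    """Kink phi_(-1,0): invert 2*sqrt(2)*x = -2/phi + ln((1+phi)/(1-phi))
    for phi in (-1, 0) by bisection; far from the core the tail
    asymptotics are used instead (the two branches match closely)."""
    if x < -8.0:
        return -1.0 + (2.0 / math.e ** 2) * math.exp(2.0 * SQRT2 * x)
    if x > 8.0:
        return -1.0 / (SQRT2 * x)
    target = 2.0 * SQRT2 * x
    f = lambda p: -2.0 / p + math.log((1.0 + p) / (1.0 - p))
    lo, hi = -1.0 + 1e-14, -1e-14
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:   # f is strictly increasing on (-1, 0)
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def V(p):
    return p ** 4 * (1.0 - p ** 2) ** 2

def U_eff(xi, L=200.0, dx=0.05):
    """Energy of the ansatz phi(x; xi) = phi_(-1,0)(x+xi) + phi_(0,-1)(x-xi),
    where phi_(0,-1)(y) = phi_(-1,0)(-y), via the trapezoidal rule."""
    n = int(2.0 * L / dx)
    xs = [-L + i * dx for i in range(n + 1)]
    phis = [kink_m10(x + xi) + kink_m10(xi - x) for x in xs]
    u = 0.0
    for i in range(n):
        dphi = (phis[i + 1] - phis[i]) / dx       # finite-difference gradient
        u += (0.5 * dphi ** 2
              + 0.5 * (V(phis[i]) + V(phis[i + 1]))) * dx
    return u
```

For large half-separations $U_{\scriptsize\mbox{eff}}(\xi)$ tends to twice the kink mass and decreases with $\xi$, consistent with the repulsion seen in figure \ref{fig:forcepow}; $F(\xi)$ can then be estimated by a finite difference of $U_{\scriptsize\mbox{eff}}$.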
Let us now consider the following kink-antikink field configuration:
\begin{equation}\label{eq:ansatz2}
\varphi(x,\xi)=\varphi_{(0,1)}(x+\xi)+\varphi_{(1,0)}(x-\xi)-1,
\end{equation}
which is shown in figure \ref{fig:confU}. Again, we seek to determine $F(\xi)$. In this case, the kink and antikink are turned to each other by the exponential tails. The numerically calculated force of interaction is presented in figure \ref{fig:forceexp}. From this figure, it is seen that the kink and antikink attract, and the force falls off quickly with increasing separation.
\begin{figure}[h]
\centering
\subfloat[configuration \eqref{eq:ansatz1} for $\xi=15$]{\includegraphics[width=0.4\textwidth]{ConfigurationD}\label{fig:confD}}
\hspace{15mm}
\subfloat[configuration \eqref{eq:ansatz2} for $\xi=10$]{\includegraphics[width=0.4\textwidth]{ConfigurationU}\label{fig:confU}}
\caption{Kink-antikink configurations given by (a) equation \eqref{eq:ansatz1} and (b) equation \eqref{eq:ansatz2}.}
\end{figure}
\subsection{Manton's method}
Within the framework of Manton's method \cite{manton}, the force on the kink is given by the time derivative of the momentum on the semi-infinite interval $-\infty<x\leq 0$. At large $\xi$ this method allows us to use the tail asymptotics for the kink and antikink, which were given in \eqref{eq:kink1_asymp_minus}--\eqref{eq:kink2_asymp_plus}, to approximate the integrands in the various integrals with respect to $x$.
Specifically, using the tail asymptotics \eqref{eq:kink1_asymp_plus} of the kink and the tail asymptotics of the corresponding antikink ($x\mapsto-x$), we estimate the force of repulsion for the configuration \eqref{eq:ansatz1} (i.e., the kink and antikink are turned to each other by the power-law tails) to be
\begin{equation}\label{eq:force_power}
F_\mathrm{M}(\xi) \sim \frac{4}{\xi^4},\qquad \xi \gg 1.
\end{equation}
In the case that the kink and antikink are turned to each other by their exponential tails, i.e.\ the configuration \eqref{eq:ansatz2}, the force of attraction is estimated to be
\begin{equation}\label{eq:force_exp}
F_\mathrm{M}(\xi) \sim \frac{64}{e^4} e^{-4\sqrt{2}\:\xi},\qquad \xi \gg 1.
\end{equation}
The predictions of \eqref{eq:force_power} and \eqref{eq:force_exp} (Manton's method) are compared to the corresponding results from \eqref{eq:U_eff}, as shown in figure \ref{fig:forces}. While in the case of exponential tail asymptotics (figure \ref{fig:forceexp}) we observe good agreement, the case of power-law tails (figure \ref{fig:forcepow}) shows considerable disagreement between the two curves. Nevertheless, the force of interaction for the case of power-law tails calculated from the collective coordinate approximation does appear to decay as $\xi$ to a negative integer power close to $-4$ (as predicted by \eqref{eq:force_power}); however, the prefactor is off by orders of magnitude. We intend to investigate the origin of this discrepancy, and whether it is a fundamental limitation of Manton's method, in future work.
\begin{figure}[h]
\centering
\subfloat[force of interaction for configuration \eqref{eq:ansatz1}]{\includegraphics[scale=0.4]{ForcePow}
\label{fig:forcepow}}
\hspace{15mm}
\subfloat[force of interaction for configuration \eqref{eq:ansatz2}]{\includegraphics[scale=0.4]{ForceExp}\label{fig:forceexp}}
\caption{The force of the interaction $F(\xi)$ between a kink and an antikink, as a function of their half-separation $\xi$, for (a) the configuration \eqref{eq:ansatz1} (power-law tails) and (b) the configuration \eqref{eq:ansatz2} (exponential tails). The solid curves correspond to $F$ estimated by the collective coordinate approach (i.e., equation \eqref{eq:U_eff}), while the dashed curves correspond to $F$ estimated by Manton's method (i.e., equations \eqref{eq:force_power} and \eqref{eq:force_exp}).}
\label{fig:forces}
\end{figure}
\section{Conclusion}
Within a specific $\varphi^8$ model, we have shown that if a kink and an antikink interact via tails that decay as power-laws, then a long-range interaction appears in the system --- the force of the kink-antikink interaction decays much more slowly than in the (``usual'') case of exponentially decaying tails.
Using Manton's method \cite{manton}, we have calculated the asymptotic dependence of the force of interaction on the half-distance $\xi$ between the kink and antikink. In the case of power-law tails, the force decays as $\xi^{-4}$ for $\xi\gg 1$. Meanwhile, in the case of exponential tails, this force decays exponentially, as $e^{-4\sqrt{2}\,\xi}$ for $\xi\gg 1$.
Furthermore, we calculated the force between the kink and antikink numerically using the collective coordinate approximation. The results obtained by both methods are in a good agreement in the case of exponential tails. The origin of the discrepancy in the case of power-law tails will be investigated in future work.
Finally, note that understanding the long-range interactions of topological defects with power-law tails is important because such long-range interactions between kinks and antikinks can have key consequences for the dynamics of domain walls and other similar structures in field-theoretical models with polynomial self-interaction.
\section*{Acknowledgments}
This work was performed within the framework of the Center of Fundamental Research and Particle Physics supported by MEPhI Academic Excellence Project (contract No.~02.03.21.0005, 27.08.2013).
\section*{References}
\section{Introduction}\label{sec:intro}
A Coxeter group $G$ of rank $n$ is an abstract group that can be defined by the generators $S = \{ s_1, s_2, \dots, s_n \}$ and relations as follows:
\begin{equation}
G = \langle s_1, s_2, \dots, s_n\, |\, s^2_i = 1, (s_i s_j)^{m_{ij}} = 1,\, 1 \leq i < j \leq n \rangle,
\end{equation}
where $m_{ij} \in \{2, 3, \dots \} \cup \{ \infty \}$, for all $1 \leq i < j \leq n$. No relation between $s_i$ and $s_j$ is imposed if and only if $m_{ij} = \infty$.
Such a group can be conveniently described by its Coxeter diagram $\mathcal{D}$, which is a labelled graph, where each vertex $i$ corresponds to a generator $s_i$ of $G$, with $i$ and $j$ connected by an edge whenever $m_{ij} \geq 3$. Moreover, if $m_{ij} \geq 4$ then the edge joining $i$ and $j$ has label $m_{ij}$, while for $m_{ij} = 3$ it remains unlabelled.
If a connected diagram for $G$ contains more than $2$ vertices and has a spanning tree all of whose edges are labelled $\infty$, we call $G$ \textit{$\infty$-spanned}: deleting all the edges with finite labels $\geq 3$ indeed produces a graph product of order-two groups (or, equivalently, a right-angled Coxeter group). Here, however, such edges may be quite numerous, and the Coxeter group $G$ may thus be far from a right-angled one. (The trivial two-vertex case, when $G \cong \mathbb{Z}_2*\mathbb{Z}_2$ is the infinite dihedral group, is intentionally excluded.)
Given a Coxeter group of rank $n$ with generating set $S = \{ s_1, s_2, \dots, s_n \}$ of involutions (called a standard generating set, which is not necessarily unique), let us consider its Cayley graph $\mathrm{Cay}(G, S)$ with the identity element $e$ as origin and the word metric $d(g,h) = $ ``the least length of a word in the alphabet $S$ necessary to write down $gh^{-1}$''. Let the word length of an element $g \in G$ be $d(e, g)$. Then, let $w_k$ denote the number of elements in $G$ of word length $k \geq 0$ (assuming that $w_0 = 1$, so that the only element of zero word length is $e$). Also, let $g_k$ denote the number of geodesic paths in $\mathrm{Cay}(G, S)$ of length $k \geq 0$ issuing from $e$ (with the only zero length geodesic being the point $e$ itself, and thus $g_0 = 1$).
The word growth series of $G$ with respect to its standard generating set $S$ is
\begin{equation}
\omega_{(G, S)}(z) = \sum^\infty_{k=0} w_k z^k,
\end{equation}
while the geodesic growth series of $G$ with respect to $S$ is
\begin{equation}
\gamma_{(G,S)}(z) = \sum^\infty_{k=0} g_k z^k.
\end{equation}
Since we shall always use a standard generating set $S$ for $G$ in the sequel, and mostly refer to a given Coxeter diagram defining $G$ rather than to $G$ itself, we simply write $\omega_{G}(z)$ and $\gamma_{G}(z)$ for its word and geodesic growth series. Likewise, by saying that $G$ is $\infty$-spanned we shall refer to an appropriate diagram for $G$.
Both growth series above are known to be rational functions, since the corresponding sets $\mathrm{ShortLex}(G) = $ ``words over the alphabet $S$ in shortest left-lexicographic form representing all elements of $G$'' (equivalently, the language of short-lex normal forms for $G$ with its standard presentation) and $\mathrm{Geo}(G) = $ ``words over the alphabet $S$ corresponding to labels of all possible geodesics in $\mathrm{Cay}(G, S)$ issuing from $e$'' (equivalently, the language of reduced words in $G$ with its standard presentation) are regular languages. That is, there exist deterministic finite-state automata $\mathrm{ShortLex}$ and $\mathrm{Geo}$ that accept the languages of the same name. We shall use the automata due to Brink and Howlett \cite{BH}, which appear to be a convenient choice for us for several theoretical and technical reasons, although there is no canonical one.
Given any finite automaton $A$ over an alphabet $S$, let $L = L(A)$ be its accepted language. If $v_k$ is the number of length $k \geq 0$ words over $S$ that belong to $L$, then the quantity $\lambda(A) = \limsup_{k\to \infty} \sqrt[k]{v_k}$ is called the growth rate of the (regular) language $L(A)$.
The limiting value $\omega(G) = \limsup_{k\to \infty} \sqrt[k]{w_k} = \lambda(\mathrm{ShortLex})$ is called the word growth rate of $G$, while $\gamma(G) = \limsup_{k\to \infty} \sqrt[k]{g_k} = \lambda(\mathrm{Geo})$ is called the geodesic growth rate of $G$. Growth rates of many classes of Coxeter groups are known to belong to classical families of algebraic integers, in particular, to Perron numbers. Moreover, growth rates of Coxeter groups acting cocompactly on hyperbolic space $\mathbb{H}^d$, for $d \geq 4$, are specifically conjectured to belong to this class by Kellerhals and Perren \cite{KePe}. We recall that a real algebraic integer $\tau > 1$ is \textit{a Perron number} if all its other Galois conjugates are strictly less than $\tau$ in absolute value. Perron numbers often appear in the context of harmonic analysis \cite{Bertin}, dynamical systems \cite{LM}, arithmetic groups \cite{ERT}, and many others.
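As a toy illustration of these notions (our own example, not the Brink--Howlett construction): in the rank-$3$ universal Coxeter group $\mathbb{Z}_2*\mathbb{Z}_2*\mathbb{Z}_2$ (all $m_{ij}=\infty$) the geodesic words are exactly the words with no letter repeated twice in a row, so a three-state automaton accepts $\mathrm{Geo}$, and $\lambda(A)$ is the spectral radius of its transition-count matrix, computable by power iteration:

```python
def growth_rate(matrix, iters=200):
    """Spectral radius of a non-negative transition-count matrix by power
    iteration (adequate for the small symmetric examples used here);
    this equals limsup v_k^(1/k) for the accepted language."""
    n = len(matrix)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

# States = last letter read; from each state either of the *other* two
# letters may follow (a repeated letter would shorten the word, s_i^2 = 1).
M = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
rate = growth_rate(M)
```

Since reduced words in this free product are unique normal forms, the word and geodesic growth rates coincide here, $\omega(G)=\gamma(G)=2$ (there are $3\cdot 2^{k-1}$ geodesic words of length $k\geq 1$).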
It follows from the results of \cite{Floyd, Parry, Yu1, Yu2} that the growth rates of Coxeter groups acting on $\mathbb{H}^2$ and $\mathbb{H}^3$ with finite co-volume are Perron numbers. Moreover, a conjecture by Kellerhals and Perren \cite{KePe} suggests a very particular distribution of the poles of the growth function $\omega_G(z) = \sum^\infty_{k=0} w_k\, z^k$, which implies that the word growth rate $\omega(G)$ is a Perron number. The main purpose of this paper is to prove the following theorem, which partially confirms the aforementioned conjecture and also extends it to the case of geodesic growth rates.
\begin{theorem}\label{thm:Perron}
Let $G$ be an $\infty$-spanned Coxeter group. Then $\omega(G)$ and $\gamma(G)$ are Perron numbers.
\end{theorem}
Another question that arises naturally concerns the number $\gamma_G(g)$ of geodesics in $\mathrm{Cay}(G, S)$ issuing from the neutral element $e$ of $G$ and arriving at a given element $g \in G$. It is clear that $\gamma_G(g)$ depends heavily on $g\in G$: e.g.\ in many right-angled Coxeter groups $G$ we can find elements $g$ of word length $k \geq 2$ such that either $\gamma_G(g) = 1$ or $\gamma_G(g) = k!$, depending on $g$. Nevertheless, the average number of geodesics representing an element of word length $k$, i.e.\ the ratio $\frac{g_k}{w_k}$, can be analysed.
\begin{theorem}\label{thm:geodesic}
Let $G$ be an $\infty$-spanned Coxeter group which is not a free product $\mathbb{Z}_2*\ldots*\mathbb{Z}_2$. Then $g_k \sim \delta^k(G)\cdot w_k$ asymptotically\footnote{Here by writing $a_k \sim b_k$ for two sequences of positive real numbers indexed by integers, we mean $\lim_{k\to \infty}\frac{a_k}{b_k} = 1$.}, as $k\rightarrow \infty$, with $\delta(G) = \frac{\gamma(G)}{\omega(G)} > 1$. In particular, $\gamma(G)$ always strictly dominates $\omega(G)$.
\end{theorem}
The paper is organised as follows: in Section \ref{automata:construction} we describe the deterministic finite-state automata recognising the languages $\mathrm{ShortLex}$ and $\mathrm{Geo}$ (their construction is first given in the paper by Brink and Howlett \cite{BH}), and show some of their properties, essential for the subsequent proofs, in Section \ref{automata:properties}. Then, in Section \ref{proofs}, we prove Theorems \ref{thm:Perron} and \ref{thm:geodesic}. Finally, a few geometric applications are given in Section~\ref{sec:geom}.
\begin{center}
\textsc{Acknowledgements}\\
\end{center}
\noindent
{\small The authors gratefully acknowledge the support that they received from the Swiss National Science Foundation, project no.~PP00P2-170560 (for A.K.), and the Russian Foundation for Basic Research, projects no.~18-01-00822 and no.~18-51-05006 (for A.T.). They would like to thank Alexander A. Gaifullin, Ruth Kellerhals and Tatyana Smirnova-Nagnibeda for stimulating discussions.}
\section{Brink and Howlett's automata and their properties}
In this section we briefly recall the general construction of the automata $\mathrm{ShortLex}$ and $\mathrm{Geo}$ that accept, respectively, the shortlex and geodesic languages for an arbitrary Coxeter group $G$ with generating set $S = \{ s_1, s_2, \dots, s_n \}$. Then we shall concentrate on some combinatorial and dynamical properties of those automata in the case when $G$ is $\infty$-spanned.
\subsection{Constructing the automata}\label{automata:construction}
Let $G$ be a Coxeter group with generating set $S = \{ s_1, s_2, \dots, s_n \}$ with presentation
\begin{equation}
G = \langle s_1, s_2, \dots, s_n\, |\, (s_i s_j)^{m_{ij}} = 1, \mbox{ for } 1 \leq i, j \leq n \rangle,
\end{equation}
where we assume that $m_{ii} = 1$, for all $1 \leq i \leq n$, and $m_{ij} = m_{ji} \in \{2, 3, \dots \} \cup \{ \infty \}$, for all $1 \leq i < j \leq n$.
Let $V = \mathbb{R}^n$, and let $\{ \alpha_1, \dots, \alpha_n \}$ be a basis in $V$, called the set of \textit{simple roots} of $G$. The associated symmetric bilinear form $B(u,v)$ on $V\times V$ is defined by
\begin{equation}
B(\alpha_i, \alpha_j) = - \cos \frac{\pi}{m_{ij}}, \mbox{ for all } 1 \leq i, j \leq n.
\end{equation}
For each $s_i \in S$, let the corresponding \textit{simple reflection} in the hyperplane $H_i$ orthogonal to the root $\alpha_i$ be defined as
\begin{equation}
\sigma_i(v) = v - 2 B(v, \alpha_i) \alpha_i, \mbox{ for } 1 \leq i \leq n.
\label{eq:reflection-formula}
\end{equation}
Then the representation $\rho: G \rightarrow GL(V)$ given by
\begin{equation}
\rho(s_i) = \sigma_i, \mbox{ for } 1 \leq i \leq n,
\end{equation}
is a faithful linear representation of $G$ in the group of linear transformations of $V$, called \textit{the geometric representation}; cf. \cite[\S 4.2]{BB}.
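As a quick numerical sanity check (our own illustration; the helper names are ours), one can assemble the matrices of the $\sigma_i$ from a Coxeter matrix and verify the defining relations, e.g.\ for the rank-$3$ group with $m_{12} = 3$, $m_{13} = 2$, $m_{23} = \infty$:

```python
import math

def coxeter_sigmas(m):
    """Matrices of the simple reflections sigma_i of the geometric
    representation, in the basis of simple roots, for a Coxeter matrix m
    (m[i][j] = m_ij, with None standing for infinity)."""
    n = len(m)
    B = [[-math.cos(math.pi / m[i][j]) if m[i][j] is not None else -1.0
          for j in range(n)] for i in range(n)]
    sigmas = []
    for i in range(n):
        # sigma_i(alpha_j) = alpha_j - 2 B(alpha_j, alpha_i) alpha_i,
        # i.e. column j of the matrix is e_j - 2 B[j][i] e_i
        s = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
        for j in range(n):
            s[i][j] -= 2.0 * B[j][i]
        sigmas.append(s)
    return sigmas

def matmul(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]
```

One then checks numerically that $\sigma_i^2 = 1$ and $(\sigma_i\sigma_j)^{m_{ij}} = 1$ hold for the finite entries of the Coxeter matrix.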
From here on, we shall write $(u|v)$ instead of $B(u,v)$, for convenience, although this symmetric bilinear function is not necessarily positive definite.
Let us define the set $\Sigma$ of \textit{small roots}\footnote{Small roots are called \textit{minimal roots} in \cite{Casselman1, Casselman2} due to their minimality with respect to the dominance relation introduced in the original paper \cite{BH}.} of $G$ as the minimal (by inclusion) subset of vectors in $V$ satisfying the following conditions:
\begin{itemize}
\item $\alpha_i \in \Sigma$, for each $1 \leq i \leq n$, and each $v \in \Sigma$ is a non-vanishing linear combination of $\alpha_i$'s with non-negative coefficients;
\item if $v \in \Sigma$, then $\sigma_i(v) \in \Sigma$, \, for all $1\leq i \leq n$ such that $-1 < (v|\alpha_i) < 0$.
\end{itemize}
In other words, all simple roots of $G$ are small, and if $v$ is a small root of $G$, then $u = \sigma_i(v)$ is also a small root provided that the $i$-th coordinate of $u$ is strictly bigger than the $i$-th coordinate of $v$, and the (positive) difference is less than $2$.
The set $\Sigma$ of small roots is known to be finite \cite[Theorem~4.7.3]{BB}. In particular, if $\alpha_i$ and $\alpha_j$ ($i \neq j$) are two simple roots such that $m_{ij} = \infty$, then $\sigma_i(\alpha_j)$ is \textit{not} a small root. Thus, if $G$ is $\infty$-spanned, we would expect it to have ``not too many'' small roots, so that a more precise combinatorial analysis of the latter becomes possible.
The set of $\mathrm{ShortLex}$ words, as well as the set $\mathrm{Geo}$ of geodesic words, in $G$ are regular languages by \cite[Theorem~4.8.3]{BB}. Each is accepted by the corresponding finite automaton that we shall call, with slight ambiguity, $\mathrm{ShortLex}$ and $\mathrm{Geo}$, respectively. Their states (besides a single state $\star$) are subsets of $\Sigma$ and the transition functions can be described in terms of the action of generating reflections $\sigma_i$, as follows.
For $\mathrm{Geo}$, the start state is $\emptyset$, the fail state is $\star$, and the transition function $\delta(D, s_i)$, for a state $D$ and a generator $s_i$, $i=1,\dots,n$, is defined as follows:
\begin{itemize}
\item if $\alpha_i \in D$ or $D = \star$, then $\delta(D, s_i) = \star$, or otherwise
\item $\delta(D, s_i) = \{ \alpha_i \} \cup (\{ \sigma_i(v) \mbox{ for } v \in D \} \cap \Sigma)$.
\end{itemize}
All states of $\mathrm{Geo}$, except for the fail state, are accept states. The entire set of states can be obtained by applying the transition function repeatedly to the start set and its subsequent images. Then the fact that $\Sigma$ is finite \cite[Theorem~4.7.3]{BB} guarantees that the set of states is finite.
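For a concrete toy case (our own sketch): in the universal Coxeter group of rank $n$ (all $m_{ij} = \infty$) the closure conditions above yield $\Sigma = \{\alpha_1, \dots, \alpha_n\}$, since $(\alpha_j|\alpha_i) = -1$ and $\sigma_i(\alpha_j) = \alpha_j + 2\alpha_i$ is never small; the transition rule then collapses to $\delta(D, s_i) = \{\alpha_i\}$ whenever $\alpha_i \notin D$. The breadth-first closure just described can be coded as follows (function and state names are ours):

```python
from collections import deque

def geo_states(n):
    """Breadth-first closure of the Geo automaton for the universal
    Coxeter group of rank n (all m_ij = infinity). States are frozensets
    of simple-root indices; every non-start state is a singleton here."""
    start = frozenset()

    def delta(D, i):
        if i in D:
            return None          # fail state, omitted
        return frozenset({i})    # sigma_i(alpha_j) is not small for j != i

    states, queue = {start}, deque([start])
    transitions = {}
    while queue:
        D = queue.popleft()
        for i in range(n):
            E = delta(D, i)
            if E is None:
                continue
            transitions[(D, i)] = E
            if E not in states:
                states.add(E)
                queue.append(E)
    return states, transitions

states, trans = geo_states(3)
```

For $n = 3$ this yields four states (the start state and one per generator) and recovers the free-product count of $3\cdot 2^{k-1}$ geodesic words of length $k$.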
For $\mathrm{ShortLex}$, the start state is $\emptyset$, the fail state is $\star$, and the transition function $\delta(D, s_i)$, for a state $D$ and a generator $s_i$, $i=1,\dots,n$, is given by
\begin{itemize}
\item if $\alpha_i \in D$ or $D = \star$, then $\delta(D, s_i) = \star$, or otherwise
\item $\delta(D, s_i) = \{ \alpha_i \} \cup \left( \{ \sigma_i(v) \mbox{ for } v \in D \} \cup \{ \sigma_i(\alpha_j) \mbox{ for } j<i \} \right) \cap \Sigma$.
\end{itemize}
All states of $\mathrm{ShortLex}$,
except for the fail state, are accept states. Again, all other states of $\mathrm{ShortLex}$ can be obtained from the start state by iterating the transition function.
The enhanced transition function of a shortlex or geodesic automaton from a state $D$ upon reading a length $l \geq 1$ word $w$ over the alphabet $S$ will be denoted by $\widehat{\delta}(D, w)$. It is inductively defined as $\widehat{\delta}(D, s_i) = \delta(D, s_i)$, for all $i=1, \dots, n$; and in the case $l\geq 2$ we set $\widehat{\delta}(D, w) = \delta(\widehat{\delta}(D, w'), s_i)$, where $w = w' s_i$ for a word $w'$ of length $l-1$ and a generator $s_i$ with $i \in \{1, 2, \dots, n\}$.
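In code, the enhanced transition function is simply a left fold of $\delta$ over the letters of $w$; a minimal generic sketch (ours), with \texttt{None} playing the role of the omitted fail state:

```python
def delta_hat(delta, D, word):
    """Enhanced transition: feed the letters of `word` to the automaton one
    by one, starting from state D. None is the (omitted) fail state and
    absorbs every further letter; a word is accepted iff the result is
    not None."""
    for s in word:
        if D is None:
            return None
        D = delta(D, s)
    return D
```

This is equivalent to the suffix recursion above, since both consume the word left to right.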
We refer the reader to the original work \cite{BH}, and also the subsequent works \cite{Casselman1, Casselman2} for more detail on the above constructions. A very informative description of geodesic automata can be found in \cite[\S 4.7--4.8]{BB}.
For the sake of convenience, we shall omit the fail state $\star$ and the corresponding arrows in all our automata. This will make many computations in the sequel simpler, since we care only about the number of accepted words.
\subsection{Auxiliary lemmas}\label{automata:properties}
If $\Gamma$ is a tree, i.e.\ a connected graph without cycles, a vertex of $\Gamma$ having degree $1$ is called a leaf of $\Gamma$. The set of leaves of $\Gamma$, denoted by $\partial \Gamma$, is called the boundary of $\Gamma$.
\begin{lemma}[Labelling lemma]\label{lemma:labelling}
Let $\mathcal{D}$ be an $\infty$-spanned diagram with vertices $\{1, 2, \dots, n\}$, with $n\geq 3$, and $\Gamma \subset \mathcal{D}$ be its spanning tree all of whose edges have labels $\infty$. Then, up to a renumbering of vertices, we may assume that $\Gamma$ contains the edges $1 \rightarrow 2$ and $2 \rightarrow 3$, and for any non-recurring path $i_0 = 1 \rightarrow i_1 \rightarrow i_2 \rightarrow \dots \rightarrow i_k$ inside $\Gamma$, such that $i_k \in \partial \Gamma$, we have $i_0 < i_1 < i_2 < \dots < i_k$.
\end{lemma}
\begin{proof}
We construct the desired enumeration explicitly. Choose two adjacent edges of $\Gamma$ forming a connected sub-tree and label their vertices $1$, $2$ and $3$, such that vertex $2$ is between the vertices $1$ and $3$. Then assign the labels $4, 5, \dots, n$ to the remaining vertices in order of non-decreasing distance from vertex $2$ in $\Gamma$, breaking ties arbitrarily. Along any non-recurring path issuing from vertex $1$, the distance from vertex $2$ strictly increases after the first step, and two vertices lying at the same distance from vertex $2$ never occur on a common such path; hence the labels strictly increase along the path.
\end{proof}
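A labelling with the property asserted by Lemma~\ref{lemma:labelling} can also be produced programmatically, by ordering the remaining vertices by their breadth-first distance from vertex $2$ (our own sketch; the data representation is ours):

```python
from collections import deque

def label_tree(adj, base):
    """Label the vertices of a tree `adj` (vertex -> iterable of neighbours)
    so that labels increase along any simple path from the vertex labelled 1.
    `base` = (p, q, r) is a path of two adjacent edges; p, q, r get 1, 2, 3."""
    p, q, r = base
    labels = {p: 1, q: 2, r: 3}
    # breadth-first distances from the middle vertex q
    dist = {q: 0}
    queue = deque([q])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    rest = sorted((v for v in adj if v not in labels), key=lambda v: dist[v])
    for k, v in enumerate(rest, start=4):
        labels[v] = k
    return labels
```

For instance, on a path graph with the base chosen at one end, this reproduces the natural left-to-right numbering.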
From now on, we shall assume that every $\infty$-spanned diagram with $3$ or more vertices is already labelled as in Lemma~\ref{lemma:labelling}. Such a labelling will come in handy later on. By $\Gamma$ we shall denote the corresponding spanning tree.
\begin{lemma}[Hiking lemma]\label{lemma:hiking}
Let $D' = \delta(D, s_i)$ be an accept state of the automaton $\mathrm{ShortLex} = \mathrm{ShortLex}(\mathcal{D})$, resp. $\mathrm{Geo} = \mathrm{Geo}(\mathcal{D})$. Then for any vertex $j$ that is adjacent to $i$ in the tree $\Gamma$, the state $D'' = \delta(D', s_j) \neq D'$ is also an accept state of $\mathrm{ShortLex}$, resp. $\mathrm{Geo}$.
\end{lemma}
\begin{proof}
By definition, all states of $\mathrm{ShortLex}$ and $\mathrm{Geo}$, except for the fail state $\star$, are accept states. If $D = \emptyset$ is the start state, there is no sequence of transitions bringing the automaton back to it, by definition. It remains to check that $\alpha_j\notin D'$, which shows that $D''\ne \star$. Indeed, supposing the contrary, we would have $\sigma_i(\alpha) = \alpha_j$ or, equivalently, $\alpha = \sigma_i(\alpha_j) = \alpha_j + 2 \alpha_i$ (since $m_{ij} = \infty$), for a small root $\alpha \in D$. The latter is impossible, since $(\alpha_j + 2 \alpha_i|\alpha_i) = (\alpha_j|\alpha_i) + 2 = 1$, which contradicts the inequality $(\alpha|\alpha_i)<1$ that holds for any small root $\alpha\ne \alpha_i$ (see \cite[Lemma 4.7.1]{BB}). Since $\alpha_j\notin D'$, and $\alpha_j\in \delta(D', s_j)=D''$, we also obtain that $D''\ne D'$.
\end{proof}
The main upshot of Lemma~\ref{lemma:hiking} is that we can repeatedly apply the generators which are connected in $\Gamma$, and thus move between the accepting states of the automaton, be it shortlex or geodesic. As in our case the tree $\Gamma$ spans the whole diagram $\mathcal{D}$, this gives a fair amount of freedom, which will be used later to prove strong connectivity of both automata.
For any root $\alpha \in \Sigma$, let $\sigma_\alpha$ be the associated reflection. For a given set of simple roots $A = \{\alpha_{i_1}, \dots, \alpha_{i_k}\} \subset \mathbb{R}^n$, let $\mathrm{Stab}(A)$ be the set of all roots in $\Sigma$ that are fixed by $\sigma_\alpha$ for every $\alpha\in A$. Let also $\langle A \rangle$ denote the linear span of $A$, i.e.\ the set $\{ \sum_{\alpha \in A} c_\alpha \alpha \, |\, c_\alpha \in \mathbb{R} \}$.
\begin{lemma}[Stabiliser lemma]\label{lemma:stabiliser}
Let vertices $i$ and $j$ of $\mathcal{D}$ be adjacent in $\Gamma$. Then any element of $\mathrm{Stab}(\alpha_i, \alpha_j)$ belongs to the linear span of the vector $\alpha_i + \alpha_j$ and the vectors $\alpha_k$ for which $m_{ki}=2$ and $m_{kj}=2$ in the diagram $\mathcal{D}$.
\end{lemma}
\begin{proof}
Let $v \in \mathrm{Stab}(\alpha_i, \alpha_j)$, i.e.\ $\sigma_i(v)=v$ and $\sigma_j(v)=v$. Since $v$ is a positive root, we can write it as $v = \sum^n_{s=1} c_s \alpha_s$, with all $c_s \geq 0$ for $1 \leq s \leq n$ and at least one $c_s$ non-zero. The invariance of $v$ under $\sigma_i$ and $\sigma_j$, together with formula \eqref{eq:reflection-formula}, gives
\begin{equation}
0 = (v | \alpha_i) = \sum^n_{s=1} c_s (\alpha_s | \alpha_i) = c_i (\alpha_i | \alpha_i) + c_j (\alpha_j | \alpha_i) + \sum^n_{s=1, s\neq i,j} c_s (\alpha_s | \alpha_i),
\end{equation}
\begin{equation}
0 = (v | \alpha_j) = \sum^n_{s=1} c_s (\alpha_s | \alpha_j) = c_j (\alpha_j | \alpha_j) + c_i (\alpha_i | \alpha_j) + \sum^n_{s=1, s\neq i,j} c_s (\alpha_s | \alpha_j).
\end{equation}
These equalities, together with the fact that $-1 \leq (\alpha_s | \alpha_i) \leq 0$ and $-1 \leq (\alpha_s | \alpha_j) \leq 0$ for $s\neq i, j$, imply that
\begin{equation}
c_i - c_j = -\sum^n_{s=1, s\neq i,j} c_s (\alpha_s | \alpha_i) \geq 0,
\end{equation}
and, simultaneously,
\begin{equation}
c_i - c_j = \sum^n_{s=1, s\neq i,j} c_s (\alpha_s | \alpha_j) \leq 0.
\end{equation}
These two inequalities immediately imply that $c_i = c_j$. Then both sums above vanish and, since every summand is non-positive, each summand vanishes; thus $c_s = 0$ for all $s$ such that $\mathcal{D}$ has at least one edge connecting $s$ to $i$ or to $j$.
\end{proof}
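The statement of the lemma admits a quick numerical sanity check. The following Python sketch is illustrative only: it assumes the normalisation $(\alpha_i|\alpha_i)=1$, the value $(\alpha_i|\alpha_j)=-1$ on an $\infty$-edge, and a hypothetical $4$-vertex diagram in which $\alpha_3$ is orthogonal to both $\alpha_1$ and $\alpha_2$, while $\alpha_4$ is adjacent to $\alpha_1$ with label $3$.

```python
import numpy as np

# Toy Gram matrix for 4 simple roots: (a1|a2) = -1 (an infinity-edge),
# a3 orthogonal to a1 and a2 (labels m_13 = m_23 = 2), and a4 adjacent
# to a1 with label m_14 = 3, i.e. (a1|a4) = -cos(pi/3) = -1/2.
B = np.array([
    [1.0, -1.0, 0.0, -0.5],
    [-1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [-0.5, 0.0, 0.0, 1.0],
])

# A vector v (with coordinates c in the simple-root basis) is fixed by
# sigma_1 and sigma_2 iff (v|a1) = (v|a2) = 0, i.e. the first two rows
# of B annihilate c.
rows = B[:2]
_, s, vt = np.linalg.svd(rows)
null_basis = vt[2:]  # the two rows are independent here, so the kernel is 2-dim

# The lemma predicts the kernel is spanned by e1 + e2 (i.e. a1 + a2)
# and e3 (a3, the only root with labels 2 towards both a1 and a2).
predicted = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]])
for p in predicted:
    proj = null_basis.T @ (null_basis @ p)  # orthogonal projection onto kernel
    assert np.allclose(proj, p)
print("Stab(a1, a2) = span{a1 + a2, a3}, as the lemma predicts")
```

Note that $\alpha_4$ is excluded from the span precisely because it is adjacent to $\alpha_1$ in the toy diagram.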
\begin{lemma}[Cycling lemma]\label{lemma:cycling}
Let some vertices $i$ and $j$ in the diagram $\mathcal{D}$ be connected by an edge in $\Gamma$. Then for any small root $v \in \Sigma = \Sigma(\mathcal{D})$, there exists a natural number $N \geq 1$ such that $(s_i s_j)^N(v) \notin \Sigma$, unless $v \in \mathrm{Stab}(\alpha_i, \alpha_j)$.
\end{lemma}
\begin{proof}
We shall prove that for any such $i$, $j$ and any positive root $v \notin \mathrm{Stab}(\alpha_i, \alpha_j)$, we have
\begin{equation}
\lim_{k\to \infty} \| (s_i s_j)^k(v) \| = \infty
\end{equation}
in the $\ell_2$-norm. As $|\Sigma|<\infty$, this would imply the lemma.
Let $v_0 = v$, and let $R = s_i s_j$. By a straightforward computation,
\begin{equation}
R^k(v_0) = v_0 + (I + R + R^2 + \dots + R^{k-1})\, w,
\end{equation}
where
\begin{equation}
w = (-2(v|\alpha_i)-4(v|\alpha_j)) \alpha_i + (-2(v|\alpha_j)) \alpha_j = c_i \alpha_i + c_j \alpha_j.
\end{equation}
Then, by using the fact that $i$ and $j$ are connected by an edge in $\Gamma$, we compute
\begin{equation}
R(w) = R(c_i \alpha_i + c_j \alpha_j) = (3c_i - 2c_j) \alpha_i + (2c_i - c_j) \alpha_j.
\end{equation}
This means that in the subspace $S$ spanned by $\alpha_i$ and $\alpha_j$, the matrix of $R$ can be written as
\begin{equation}
R|_S = \left( \begin{array}{cc}
3& -2\\
2& -1
\end{array} \right),
\end{equation}
by using $\{\alpha_i, \alpha_j\}$ as a basis.
One can see that $R|_S=T J_R T^{-1}$ for some invertible matrix $T$, where
\begin{equation}
J_R = \left( \begin{array}{cc}
1& 1\\
0& 1
\end{array} \right)
\end{equation}
is the Jordan normal form of $R|_S$, which has the following sum of powers:
\begin{equation}
S_k=\sum^{k-1}_{i=0} J_R^i = \left( \begin{array}{cc}
k& \frac{(k-1)k}2\\
0& k
\end{array} \right).
\end{equation}
As for any non-zero vector $u$ one has $\lim_{k\to \infty} \| S_k u \| = \infty$, we also get that
\begin{equation}
\|R^k(v_0) - v_0\| = \|\Big( \sum^{k-1}_{i=0} R^i \Big) w\| \rightarrow \infty,
\end{equation}
unless $w = 0$. In the latter case, solving $c_i = c_j = 0$ for the inner products $(v|\alpha_i)$ and $(v|\alpha_j)$, we find that both inner products vanish, hence $v$ is fixed by both reflections $\sigma_i$ and $\sigma_j$, which implies $v \in \mathrm{Stab}(\alpha_i, \alpha_j)$.
\end{proof}
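The linear-algebra core of the proof is easy to verify numerically. A small Python sketch (illustrative only; $R$ is the $2\times 2$ matrix computed above under the assumption $(\alpha_i|\alpha_j) = -1$ for an $\infty$-edge):

```python
import numpy as np

# Matrix of R = s_i s_j restricted to span{a_i, a_j}, in the basis {a_i, a_j},
# assuming (a_i|a_j) = -1 (an infinity-edge of Gamma):
R = np.array([[3.0, -2.0], [2.0, -1.0]])

# R is unipotent: trace 2 and determinant 1 force a double eigenvalue 1,
# so R is conjugate to the Jordan block [[1, 1], [0, 1]].
assert np.isclose(np.trace(R), 2.0) and np.isclose(np.linalg.det(R), 1.0)

# The partial sums S_k = I + R + ... + R^{k-1} grow quadratically in k,
# matching the closed form [[k, k(k-1)/2], [0, k]] in Jordan coordinates.
def partial_sum(k):
    acc, P = np.zeros((2, 2)), np.eye(2)
    for _ in range(k):
        acc += P
        P = P @ R
    return acc

norms = [np.linalg.norm(partial_sum(k) @ np.array([0.0, 1.0]))
         for k in (10, 100, 1000)]
assert norms[0] < norms[1] < norms[2]   # ||S_k u|| grows without bound
assert norms[2] / norms[1] > 50         # roughly quadratic growth in k
print("||S_k u|| for k = 10, 100, 1000:", [round(n) for n in norms])
```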
The meaning of the lemma above is that by repeated applications of $s_i$ and $s_j$, which we informally call ``pedalling'', we can ``cycle away'' in the $\ell_2$-norm from any root $v$ and thus, in particular, escape any subset of small roots by applying Cycling lemma to its elements. We shall put this fact to essential use in one more lemma below.
In the following considerations we keep track of the coordinates in the canonical basis, so we introduce a notation $v[i]$ for the $i$-th coordinate $c_i$ of the vector $v$ written out as a sum $v = \sum^n_{s=1} c_s \alpha_s$ in the canonical basis of simple roots.
Then, for a finite set of positive roots $A \subset \mathbb{R}^n$, let us define its \textit{height} as
\begin{equation}
h(A) = \max_{v\in A}\, \{ i \, |\, v[i] \neq 0, \,\, v[j] = 0,\, \forall j > i \},
\end{equation}
and its \textit{width} as
\begin{equation}
w(A) = \mathrm{card}\, \{ v \in A\, |\, v[h(A)] \neq 0 \}.
\end{equation}
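In computational terms, both quantities are straightforward to evaluate; a minimal Python sketch (with a hypothetical encoding of roots as coordinate tuples, $1$-indexed as in the text):

```python
# Height and width of a finite set of positive roots, each root encoded
# as a tuple of its coordinates (c_1, ..., c_n) in the simple-root basis.

def height(A):
    # For each root, the index of its last non-zero coordinate (1-indexed),
    # maximised over the whole set.
    return max(max(i + 1 for i, c in enumerate(v) if c != 0) for v in A)

def width(A):
    # Number of roots whose coordinate at the height index is non-zero.
    h = height(A)
    return sum(1 for v in A if v[h - 1] != 0)

A = [(1, 1, 0, 0), (0, 1, 2, 0), (1, 0, 2, 0), (0, 1, 0, 0)]
assert height(A) == 3   # two roots have a non-zero 3rd coordinate, none beyond
assert width(A) == 2    # exactly two roots realise the height
```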
\begin{lemma}[Hydra's lemma]\label{lemma:hydra}
Let $D \neq \{ \emptyset \}, \star$ be a state of the automaton $\mathrm{ShortLex}$ or $\mathrm{Geo}$ for an $\infty$-spanned group $G$. Then there exists a word $w$ in the respective language such that $\widehat{\delta}(D, w) = \{ \alpha_1 \}$.
\end{lemma}
\begin{proof}
First we provide an argument in the case of the $\mathrm{ShortLex}$ automaton. Since by definition in each state $D\neq \{ \emptyset \}, \star$ there is a simple root, we choose some $\alpha_i \in D$. Also let $h = h(D)$ be the height of $D$ with $\mu \in D$ being some small root realising the height of $D$, i.e. $\mu[h] \neq 0$, while $\mu[k] = 0$, for all $h < k \leq n$. We also denote $S = \mathrm{Stab}(\alpha_1, \alpha_2)$.
First, consider the case $h>2$. Our goal is to form a suitable word $w$ such that $\rho(w)(\mu)\notin S$. Either $\mu \notin S$ right away, or one of the following cases holds.
\paragraph{I.} \textit{There exists $k\in \{3,\ldots, h-1\}$ such that $\mu[k] \neq 0$.} Choose the minimal $k$ with this property, and let $(i_0,i_1,\ldots,i_p)$ be the path in the tree $\Gamma$ from the vertex $i_0=i$ towards the vertex $i_p=k$. Considering the words $w_l = s_{i_l} s_{i_{l-1}} \dots s_{i_1} s_{i_0}$, with $l\in \{1,\ldots,p-1 \}$, we may find that for some $l$ the vector $\mu' = \rho(w_l)(\mu)$ is not in $S$. In this case, we move to the state $D' = \widehat{\delta}(D, w_l)$, which contains $\mu' \notin S$ and has $h(D') = h(D) = h$. Otherwise, we consider the word $w_p$, for which one has $\mu' = \rho(w_p)(\mu) \notin \mathrm{Stab}(\alpha_{i_{p-1}}, \alpha_{i_p})$, hence we can apply Cycling lemma to $\mu'$. Thus, for some sufficiently large $N$ we have $\mu'' = (\sigma_{i_{p-1}} \sigma_{i_p})^N(\mu') \notin S$, and we move to the state $D' = \widehat{\delta}(D, w)$, with $w = (s_{i_{p-1}} s_{i_{p}})^N s_{i_p} s_{i_{p-1}} \dots s_{i_1} s_{i_0}$, which contains $\mu'' = \rho(w)(\mu) \notin S$, while $h(D') = h(D) = h$, since $w$ contains only reflections $s_l$ with $l<h$.
\paragraph{II.} \textit{For all $k\in \{3,\ldots,h-1 \}$ we have $\mu[k] = 0$.} Let $(i_0,i_1,\ldots,i_p)$ be the path in the tree $\Gamma$ from the vertex $i_0=i$ towards the vertex $i_p=h$. Again, moving up the tree $\Gamma$ by reading the word $w_l = s_{i_l} s_{i_{l-1}} \dots s_{i_1} s_{i_0}$, with $2 < l < p-1$, we either obtain that the vector $\mu' = \rho(w_l)(\mu)$ has a non-zero coordinate $k$ for some $2 < k < h$, in which case the state $D' = \widehat{\delta}(D, w_l)$ containing $\mu'$ falls under Case I, or we reach $l=p-2$, while in $\mu' = \rho(w_{p-2})(\mu)$ we have $\mu'[1] = \mu'[2] = c_1$ and $\mu'[h] = c_2 \neq 0$, with $\mu'[l] = 0$ for all other $2 < l < h$ and $h < l \leq n$.
If $\mu' \notin \mathrm{Stab}(\alpha_{i_{p-2}}, \alpha_{i_{p-1}})$, we apply Cycling lemma as in Case I to remove the image of $\mu'$ from the state and thus decrease the width, and not increase the height.
If $\mu' \in \mathrm{Stab}(\alpha_{i_{p-2}}, \alpha_{i_{p-1}})$, then either $c_1=0$, or both $(\alpha_{i_{p-1}}|\alpha_1) = 0$ and $(\alpha_{i_{p-1}}|\alpha_2) = 0$. In both cases, remembering that $(\alpha_{i_{p-1}}|\alpha_{i_{p}}) = -1$, we obtain that
\begin{equation}
\begin{aligned}
\mu'' &= \sigma_{i_{p-1}}(\mu') = \mu' - 2 (\alpha_{i_{p-1}}|\mu') \alpha_{i_{p-1}} = \\
&= \mu' - 2 (c_1 (\alpha_{i_{p-1}}|\alpha_1) + c_1(\alpha_{i_{p-1}}|\alpha_2) + c_2(\alpha_{i_{p-1}}|\alpha_{i_p})) \alpha_{i_{p-1}} \\
&= \mu' + 2 c_2 \alpha_{i_{p-1}},
\end{aligned}
\label{coord-computation}
\end{equation}
where $c_2\ne 0$. Then, $\mu''[i_{p-1}]=2c_2\ne 0 = \mu''[i_{p-2}]$, hence $\mu'' \notin \mathrm{Stab}(\alpha_{i_{p-2}}, \alpha_{i_{p-1}})$. Then, again we can use the argument from Case I and apply Cycling lemma to $\mu''$. Indeed, taking $\mu''' = (\sigma_{i_{p-2}}\sigma_{i_{p-1}})^N(\mu'')$ for sufficiently big $N$ we obtain that $\mu'''\notin \Sigma$, so with the word $w = (s_{i_{p-2}}s_{i_{p-1}})^N s_{i_{p-2}} \dots s_{i_1} s_{i_0}$ we move to the state $D' = \widehat{\delta}(D, w)$, which lacks $\mu''' = \rho(w)(\mu)$, while $h(D') \le h(D) = h$.
By applying the above argument repeatedly, we arrive at an accept state $D^* = \widehat{\delta}(D, w)$ with a word $w$, possibly empty, in $\mathrm{ShortLex}$ or $\mathrm{Geo}$, such that $\lambda = \rho(w)(\mu)$ is contained in $D^*$, but not in $S$, for the above chosen $\mu \in D\cap S$. Also, we have $h(D^*) = h(D) = h$ and $w(D^*)\le w(D)$. This follows from the fact that all the roots $\lambda \in D^*$ realising the height of $D^*$ are images of height-realising roots $\mu \in D$. Indeed, no simple root $\alpha_k$ with $k\geq h$ has been added during the transition from $D$ to $D^*$, nor an image of such a root under a simple reflection $s_l$ with $l \geq h$. The word $w$ contains only simple reflections $s_k$ with $k < h$, and thus applying any of the reflections in $w$ does not change the $k$-th coordinates with $k\geq h$ for the roots in $D$ and its subsequent images.
Now, pick a height-realising root $\lambda \in D^*$ and, since $\lambda \notin S$, apply Cycling lemma to $\lambda$ in order to arrive at a state $D_* = \widehat{\delta}(D^*, (s_1 s_2)^N)$, such that $h(D_*) \leq h(D) = h$, while $w(D_*) \leq w(D^*) - 1$. By applying this argument repeatedly, we can reduce the width of the subsequent states, and thus finally arrive at a state $\overline{D}$ such that $h(\overline{D}) \leq h - 1$. However, we have no control over the magnitude of $w(\overline{D})$, since many vectors of smaller height could have been added during all the above transitions\footnote{Thus, while chopping off hydra's bigger heads, we allow it to grow many more smaller ones, and nevertheless succeed in reducing it down to a single head remaining.}.
\medskip
We can apply the above argument repeatedly, and finally bring the height of the state down to $h = 2$, so that all the roots in $D_*$ can be written as $c_1\alpha_1+c_2\alpha_2$. Due to Stabiliser lemma, all roots in $D_*$ which are in $S = \mathrm{Stab}(\alpha_1, \alpha_2)$ have $c_1 = c_2$. Since $\alpha_1+\alpha_2 = \sigma_1(\alpha_2) = \sigma_2(\alpha_1)$ is a small root, then due to the dominance relation (cf. the definitions \cite[p. 116]{BB} and \cite[Theorem~4.7.6]{BB}), this is the only option for the elements of $D_*\cap S$. Then, using Cycling lemma with powers $(s_2 s_1)^N$ for sufficiently big $N\geq 1$ we can either reach $D_0 = \{\alpha_1\}$ or arrive at one of the states $D_1 = \{ \alpha_1, \alpha_1 + \alpha_2 \}$ or $D_2 = \{ \alpha_2, \alpha_1 + \alpha_2 \}$. The states $D_1$ and $D_2$ form a two-cycle under the action of any word $w = (s_1 s_2)^N$, $N \geq 1$. Since $n \geq 3$, we use $s_3$ in order to transition instead from $D_1$ to $D_3 = \{\alpha_3, \beta_1 = \sigma_3(\alpha_1), \beta_2 = \sigma_3(\alpha_1 + \alpha_2) \}$. By Labelling lemma, vertices $2$ and $3$ are connected by an edge in $\Gamma$, and thus we can compute
\begin{equation}
\beta_1 = \alpha_1 - 2 (\alpha_1 | \alpha_3) \alpha_3 \notin S,
\end{equation}
by Stabiliser lemma, since $\beta_1[1] = 1 \neq 0 = \beta_1[2]$, and
\begin{equation}
\beta_2 = \alpha_1 + \alpha_2 + (2 - 2(\alpha_1 | \alpha_3)) \alpha_3 \notin S,
\end{equation}
once again by Stabiliser lemma, since $\beta_2[3] \neq 0$ (recall that the inner product $(\alpha_1 | \alpha_3)$ is always non-positive), and the element $s_2 s_3$ has infinite order.
Now we can apply Cycling lemma to $D_3$ in order to move $\beta_1$ and $\beta_2$ away from the set $\Sigma$ of small roots, and finally arrive at the state $D_0 = \widehat{\delta}(D_3, (s_1 s_2)^N) = \{ \alpha_1, (\sigma_1 \sigma_2)^N(\beta_1), (\sigma_1 \sigma_2)^N(\beta_2) \} \cap \Sigma = \{ \alpha_1 \}$.
A similar argument applies to the case of the $\mathrm{Geo}$ automaton, where the proof can be done by a simpler induction on $|D|$, the cardinality of $D$. Indeed, applying Hiking lemma never increases $|D|$, and applying Cycling lemma to the height-realising root reduces $|D|$.
\end{proof}
\begin{lemma}[GCD lemma]\label{lemma:GCD}
The greatest common divisor of the lengths of all cycles in the $\mathrm{ShortLex}$, resp. $\mathrm{Geo}$, automaton for an $\infty$-spanned Coxeter group equals $1$.
\end{lemma}
\begin{proof}
First of all, let us notice that there is a cycle of length $2$ in each automaton:
\begin{equation}
\{ \alpha_1 \} \rightarrow \delta(\{ \alpha_1 \}, s_2) = \{ \alpha_2 \} \rightarrow \delta(\{ \alpha_2 \}, s_1) = \{ \alpha_1 \}.
\end{equation}
Then, let us consider the following sequence of transitions in $\mathrm{ShortLex}$. Let $D_0 = \{ \alpha_1 \}$, and let $m_{13} \neq \infty$. Then $D_1 = \delta(D_0, s_3) = \{ \alpha_3, \mu = \alpha_1 + c \alpha_3 \}$, where $c = 2 \cos \frac{\pi}{m_{13}} \geq 0$. Here, $\mu \notin \mathrm{Stab}(\alpha_1, \alpha_2)$ by Stabiliser lemma. Thus, there exists a natural number $N \geq 1$ such that $(\sigma_1 \sigma_2)^N(\mu) \notin \Sigma$, and $D_{2N + 1} = \widehat{\delta}(D_1, (s_1s_2)^N) = \{ \alpha_1 \} = D_0$. This means that we obtain a cycle of odd length. If $m_{13} = \infty$, then $\mu \notin \Sigma$, and we readily obtain a cycle of length $3$ by putting $N = 1$.
A similar argument applies to the case of the $\mathrm{Geo}$ automaton.
\end{proof}
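The gcd of cycle lengths (the period) of a strongly connected digraph can be computed without enumerating cycles, via BFS levels; a small Python sketch (illustrative only, not tied to a concrete automaton):

```python
from math import gcd
from collections import deque

def period(adj, start=0):
    """gcd of all cycle lengths in a strongly connected digraph.

    BFS from `start` assigns levels; every edge u -> v contributes
    gcd(..., level[u] + 1 - level[v]) to the period."""
    level = {start: 0}
    q, g = deque([start]), 0
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                q.append(v)
            else:
                g = gcd(g, level[u] + 1 - level[v])
    return abs(g)

# A 2-cycle together with a 3-cycle through the same vertex: period 1,
# as in the lemma above.
adj = {0: [1, 2], 1: [0], 2: [3], 3: [0]}
assert period(adj) == 1
# A pure 2-cycle alone has period 2.
assert period({0: [1], 1: [0]}) == 2
```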
\section{Proofs of main theorems}\label{proofs}
In this section we use the auxiliary lemmas obtained above in order to prove the main theorems of the paper. Namely, we show that the following statements hold for a Coxeter group $G$ that is $\infty$-spanned:
\begin{itemize}
\item the word growth rate $\omega(G)$ and the geodesic growth rate $\gamma(G)$ are Perron numbers (Theorem~\ref{thm:Perron}),
\item unless $G$ is a free product of more than $2$ copies of $\mathbb{Z}_2$, we have $\gamma(G) > \omega(G)$ (Theorem~\ref{thm:geodesic}).
\end{itemize}
\paragraph{Proof of Theorem~\ref{thm:Perron}.} Below, we show that the word growth rate $\omega(G)$ of an $\infty$-spanned Coxeter group $G$ (with respect to its standard generating set) is a Perron number. A fairly analogous argument shows that the geodesic growth rate $\gamma(G)$ of $G$ is also a Perron number.
Observe that any state $D = \widehat{\delta}(\{ \emptyset \}, w)$, for a shortlex word $w$, can be reached from $\{ \alpha_1 \}$. Indeed, $\delta(\{ \alpha_1 \}, s_k) = (\{ \alpha_k \} \cup \{ s_k(\alpha_1), s_k(\alpha_l), l < k \}) \cap \Sigma = \{ \alpha_k, s_k(\alpha_l), l < k \} \cap \Sigma = \delta(\{\emptyset \}, s_k)$, for any $k > 1$. Thus, $\widehat{\delta}(\{\emptyset\}, w) = \widehat{\delta}(\{\alpha_1\}, w)$, if $w$ does not start with $s_1$, and $\widehat{\delta}(\{\emptyset\}, w) = \widehat{\delta}(\{\alpha_1\}, w')$, if $w = s_1 w'$.
Then, Hydra's lemma guarantees that we can descend in $\mathrm{ShortLex}$ from any state $D \neq \star$ to $\{ \alpha_1 \}$. Together with the above fact, we have that $\mathrm{ShortLex}\setminus \{ \emptyset \}$ is strongly connected, and then the transfer matrix $M = M(\mathrm{ShortLex}\setminus \{ \emptyset \})$ is irreducible.
By GCD lemma, $M$ is also aperiodic, and thus primitive. Then the spectral radius of $M$ is a Perron number. Since the latter equals the growth rate of the shortlex language for $G$ by \cite[Proposition 4.5.11]{LM}, we obtain that $\omega(G)$ is a Perron number.
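The Perron--Frobenius mechanism used here can be illustrated numerically. The following Python sketch (a toy $0/1$ matrix, not the actual transfer matrix of $\mathrm{ShortLex}$) checks primitivity and the dominance of the real positive eigenvalue:

```python
import numpy as np

# A toy transfer matrix of a strongly connected, aperiodic digraph.
M = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

# Primitivity: some power of M is entrywise positive.
assert (np.linalg.matrix_power(M, 3) > 0).all()

eigs = np.linalg.eigvals(M)
rho = max(abs(eigs))
# Perron--Frobenius: the spectral radius is attained by a unique real
# positive eigenvalue that strictly dominates all other eigenvalue moduli.
dominant = [e for e in eigs if np.isclose(abs(e), rho)]
assert len(dominant) == 1
assert np.isclose(dominant[0].imag, 0) and dominant[0].real > 0
print("spectral radius:", rho.real)
```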
\paragraph{Proof of Theorem~\ref{thm:geodesic}.} Next, we aim at proving that $\gamma(G) > \omega(G)$, unless $G$ is a free product of several copies of $\mathbb{Z}_2$, in which case $\gamma(G) = \omega(G)$. For convenience, let $A$ denote the shortlex automaton $\mathrm{ShortLex}$ and $B$ denote the geodesic automaton $\mathrm{Geo}$ for $G$. Let $L(F)$ be the language accepted by a given finite automaton $F$, and let $\lambda(F)$ be the exponential growth rate of $L(F)$.
We shall construct a new automaton $A'$, by modifying $A$, such that
\[L(A) \subsetneq L(A') \subseteq L(B)\]
and, moreover, $\omega(G) = \lambda(A) < \lambda(A') \leq \lambda(B) = \gamma(G)$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{automaton.pdf}
\caption{The modified automaton $A'$: transition $\{\alpha_2\} \rightarrow D'_1$ is removed and a path $p$ comprising new states $D''_i$ is added.}\label{fig:automaton}
\end{figure}
Since $G$ is not a free product of copies of $\mathbb{Z}_2$, we may assume that the edge between vertices $1$ and $3$ has label $m \geq 2$, $m \neq \infty$. Consider two cases depending on the parity of $m$. If $m$ is even, then let $w = s_1 s_2 (s_1 s_3)^{m/2}$, $w' = s_1 s_2(s_3 s_1)^{m/2-1}s_3$ and $w'' = s_1 s_2(s_3 s_1)^{m/2}$. If $m$ is odd, then $w = s_1 s_2(s_1 s_3)^{(m-1)/2} s_1$, $w' = s_1 s_2(s_3 s_1)^{(m-1)/2}$ and $w'' = s_1 s_2(s_3 s_1)^{(m-1)/2} s_3$. We shall use the straightforward equality $w=w''$, which holds for $w$ and $w''$ considered as group elements. One can also verify that in both cases $w, w' \in L(A)$ and $w'' \in L(B) \setminus L(A)$.
Let the word $w$ correspond to the directed path $\{\emptyset\} \rightarrow \{ \alpha_1 \} \rightarrow \{ \alpha_2 \} \rightarrow D_1 \rightarrow \dots \rightarrow D_k$, and the word $w'$ correspond to the directed path $\{\emptyset\} \rightarrow \{ \alpha_1 \} \rightarrow \{ \alpha_2 \} \rightarrow D'_1 \rightarrow \dots \rightarrow D'_{k-1}$ in $A$.
Then, let the graph $A'$ be obtained from $A$ in the following way, which is schematically illustrated in Figure~\ref{fig:automaton}:
\begin{itemize}
\item[1)] Add new states $D''_1$, $D''_2$, $\dots$, $D''_{k-1}$ to $A$, and create a directed path $ p =\{ \emptyset \} \rightarrow \{ \alpha_1 \} \rightarrow \{ \alpha_2 \} \rightarrow D''_1 \rightarrow \dots \rightarrow D''_{k-1} \rightarrow D_k$ in $A$ labelled with the sequence of letters in $w''$. Let $\varepsilon$ be the last edge of $p$.
\item[2)] Remove the transition $\{ \alpha_2 \} \rightarrow D'_1$ labelled by $s_3$, and for all $1 \leq i \leq k-1$ add $n-1$ transitions $D''_i \rightarrow \delta(D'_i, s_j)$, where $s_j$ runs over all labels except the one already used for the outgoing transition of $D''_i$ along the path $p$.
\item[3)] Let $A'$ be the sub-graph in the automaton above spanned by the start state $\{ \emptyset \}$ together with the strongly connected component of
$\{ \alpha_1 \}$, which (by the fact that $A\setminus \{ \emptyset \}$ is strongly connected) is equivalent to removing all inaccessible states.
\end{itemize}
Let us define yet another automaton $A''$, which is obtained from $A'$ by removing the single transition $\varepsilon$. It follows from points (2)--(3) in the definition of $A'$ above that all the states $D''_i$, $1 \leq i \leq k-1$, belong to the strongly connected component of $\{ \alpha_1 \}$, and thus we do not create any inaccessible states in $A''$ by removing $\varepsilon$ from $A'$.
Observe that $L(A'') = L(A)$. Indeed, each word $u$ accepted by $A''$ can be split into two types of sub-words: sub-words read while traversing a sub-path of $p$, and sub-words read while traversing paths that consist of the states of the original automaton $A$. However, each sub-word $v$ of $u$ obtained by traversing a sub-path of $p$ can also be obtained by traversing the states of $A$, since $v$ is a sub-word of $w''$, but $v\neq w''$. Thus, $L(A'') \subseteq L(A)$. The inclusion $L(A) \subseteq L(A'')$ follows by construction.
On the other hand, $L(A) \subsetneq L(A') \subseteq L(B)$, since $w'' \in L(A')$, while $w'' \notin L(A)$.
From the above description, we obtain that $A'\setminus \{\emptyset\}$ and $A''\setminus \{\emptyset\}$ are both strongly connected. Then the transition matrices $M' = M(A'\setminus \{ \emptyset \})$ and $M'' = M(A'' \setminus \{ \emptyset \})$ are both irreducible.
Moreover, $M'$ and $M''$ have the same size, $M' \neq M''$, and $M'$ dominates $M''$, since $A'$ and $A''$ have an equal number of states, while $A''$ has fewer transitions than $A'$. Then \cite[Corollary A.9]{B} implies that $\lambda(A) = \lambda(A'') < \lambda(A') \leq \lambda(B)$, and thus $\omega(G) = \lambda(A) < \lambda(B) = \gamma(G)$.
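The strict monotonicity of the spectral radius under domination, which the proof borrows from \cite[Corollary A.9]{B}, can be observed on a toy example. The Python sketch below (illustrative matrices only) adds a single extra transition to an irreducible $0/1$ matrix:

```python
import numpy as np

# M2 is the transition matrix of a directed 3-cycle: spectral radius 1.
M2 = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]], dtype=float)

# M1 dominates M2: same size, entrywise M2 <= M1, one extra edge 0 -> 2.
# Both digraphs remain strongly connected, so both matrices are irreducible.
M1 = M2.copy()
M1[0, 2] = 1

def rho(M):
    return max(abs(np.linalg.eigvals(M)))

assert (M2 <= M1).all() and (M1 != M2).any()
assert rho(M2) < rho(M1)   # strict inequality of spectral radii
print(rho(M2), "<", rho(M1))
```

Here $\rho(M_1)$ is the real root of $\lambda^3 - \lambda - 1$, strictly above $\rho(M_2) = 1$.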
\section{Geometric applications}\label{sec:geom}
In this section we present some applications of our results to reflection groups that act discretely by isometries on the hyperbolic space $\mathbb{H}^n$. A convex polytope $P \subset \mathbb{H}^n$, $n\geq 2$, is an intersection of finitely many geodesic half-spaces, i.e. half-spaces of $\mathbb{H}^n$ bounded by hyperplanes. A polytope $P \subset \mathbb{H}^n$ is called Coxeter if all the dihedral angles at which its facets intersect are of the form $\frac{\pi}{m}$, for integer $m\geq 2$.
The geometric Coxeter diagram $\mathcal{D}$ of $P$ is obtained by indexing its facets with a finite set of consecutive integers $F = $ $\{1$, $2$, $\dots \}$, and forming a labelled graph on the set of vertices $F$ as follows. If facets $i$ and $j$ intersect at an angle $\frac{\pi}{m_{ij}}$, then the vertices $i$ and $j$ are connected by an edge labelled $m_{ij}$, if $m_{ij} \geq 4$; by a single unlabelled edge, if $m_{ij}=3$; or no edge is present, if $m_{ij}=2$. If facets $i$ and $j$ are tangent at a point on the ideal boundary $\partial \mathbb{H}^n$, then $i$ and $j$ are connected by a bold edge. If the hyperplanes of $i$ and $j$ admit a common perpendicular, i.e. do not intersect in $\overline{\mathbb{H}^n} = \mathbb{H}^n \cup \partial \mathbb{H}^n$, then $i$ and $j$ are connected by a dashed edge.
It is known that a Coxeter polytope $P\subset \mathbb{H}^n$ gives rise to a discrete reflection group generated by reflections in the hyperplanes of the facets of $P$. The group $G = G(P)$ associated with $P$ is a Coxeter group with standard generating set $S$ given by the facet reflections. Then the word growth rate $\omega(G)$ and geodesic growth rate $\gamma(G)$ with respect to $S$ can be defined as usual. The diagram $\mathcal{D}$ of $G$ as a Coxeter group can be obtained from the diagram of $P$ by converting all bold and dashed edges, if any, into $\infty$-edges.
Usually, the polytope $P$ is assumed to be compact or finite-volume, i.e. non-compact and such that its intersection with the ideal boundary $\partial \mathbb{H}^n$ consists only of a number of vertices. This condition can be relaxed in our case, since it does not particularly influence any of the statements below.
Since the facets of a Coxeter polytope $P\subset \mathbb{H}^n$ intersect if and only if their respective hyperplanes do \cite{Andreev}, then the number and incidence of $\infty$-edges in the diagram of $G = G(P)$ is determined only by the combinatorics of $P$.
The following two facts show that many Coxeter groups acting on $\mathbb{H}^n$, $n\geq 2$, discretely by isometries have Perron numbers as their word and geodesic growth rates.
\begin{theorem}\label{thm:Coxeter1}
Let $P \subset \mathbb{H}^n$, $n\geq 2$, be a finite-volume Coxeter polytope, and $G$ its associated reflection group. If the bold and dashed edges in the diagram of $P$ form a connected subgraph, then $\omega(G)$ and $\gamma(G)$ are Perron numbers.
\end{theorem}
The above connectivity condition can be checked for the diagram of $P$ relatively easily either by hand or by using a computer algebra system. It is also clear that Theorem \ref{thm:Coxeter1} is just a restatement of Theorem \ref{thm:Perron}.
An additional fact holds as we compare the word and geodesic growth rates of Coxeter groups of the above kind.
\begin{theorem}\label{thm:Coxeter2}
Let $P \subset \mathbb{H}^n$, $n\geq 3$, be a finite-volume Coxeter polytope, and $G$ its associated reflection group. If the bold and dashed edges in the diagram of $P$ form a connected subgraph, then $\omega(G) < \gamma(G)$.
\end{theorem}
\begin{proof}
Let us notice that, unless $n=2$, it is impossible for a Coxeter polytope $P$ to have finite volume when the subgraph of $\infty$-edges in its diagram is a complete graph (in dimension $2$ we have an ideal triangle, and its reflection group is isomorphic to a free product of three copies of $\mathbb{Z}_2$). Indeed, let us consider an edge stabiliser of $P$. Since $P$ has finite volume, $P$ is \textit{simple at edges}, meaning that each edge is an intersection of $n-1$ facets. Then the edge stabiliser has a Coxeter diagram that is a subdiagram spanned by $n-1\geq 2$ vertices in the complete graph on the facets of $P$. Thus, it is itself a complete graph with $\infty$-labels on its edges. This cannot be a diagram of a finite Coxeter group, hence Vinberg's criterion \cite[Theorem~4.1]{Vinberg} is not satisfied, and $P$ cannot have finite volume. Thus, $G$ cannot be a free product of finitely many copies of $\mathbb{Z}_2$, and the conditions of Theorem~\ref{thm:geodesic} are satisfied.
\end{proof}
As follows from the results of Floyd \cite{Floyd} and Parry \cite{Parry}, if $P$ is a finite-area polygon in the hyperbolic plane $\mathbb{H}^2$, the word growth rate $\omega(G)$ of its reflection group $G$ is a Perron number. More precisely, $\omega(G)$ is a Salem number if $P$ is compact, and a Pisot number if $P$ has at least one ideal vertex. A similar result holds for the geodesic growth rate $\gamma(G)$.
\begin{theorem}\label{thm:Coxeter3}
Let $P \subset \mathbb{H}^2$ be a finite-area Coxeter polygon, and $G$ its associated reflection group. Then $\gamma(G)$ is also a Perron number whenever $P$ has more than $4$ vertices, or when $P$ is a quadrilateral with at least one ideal vertex, or a triangle with at least two ideal vertices. In all the above-mentioned cases, $\gamma(G) > \omega(G)$ unless $P$ is ideal.
\end{theorem}
\begin{proof}
The proof proceeds case-by-case based on the number of sides of $P$.
\textit{$P$ is a triangle.} If $P$ has two or three ideal vertices, then the subgraph of bold edges in the diagram of $G$ is connected. This subgraph is complete if and only if $P$ is an ideal triangle.
\textit{$P$ is a quadrilateral.} If $P$ has at least one ideal vertex, then the subgraph of bold and dashed edges in the diagram of $G$ is connected. This subgraph is complete if and only if $P$ is an ideal quadrilateral.
\textit{$P$ has $n\geq 5$ sides.} In this case, each vertex in the diagram of $G$ is connected by dashed edges to $n-3$ other vertices. It can be also connected by bold edges to one or two more vertices, depending on whether $P$ has vertices on the ideal boundary $\partial \mathbb{H}^2$. Given the vertex degrees, it is clear that the subgraph of bold and dashed edges in the diagram of $G$ is connected. This subgraph is complete if and only if each vertex in the diagram of $G$ is connected to $n-3$ vertices by dashed edges, and to two more vertices by bold edges. In this case, $P$ is an ideal $n$-gon.
Given the case analysis above, the theorem now follows from Theorems \ref{thm:Coxeter1} -- \ref{thm:Coxeter2}.
\end{proof}
Another series of examples where Theorems \ref{thm:Coxeter1} -- \ref{thm:Coxeter2} apply arises in $\mathbb{H}^3$: these are the right-angled L\"obell polyhedra originally described in \cite{Loebell} and their analogues with the same combinatorics but various Coxeter angles \cite{BMV, V}. The latter polyhedra can be obtained from the L\"obell ones by using ``edge contraction'', cf. \cite[Propositions 1 -- 2]{K}.
The word growth rates of their associated reflection groups are Perron numbers by \cite{Yu1, Yu2}, and their geodesic growth rates are Perron numbers by Theorem~\ref{thm:Coxeter1}. Indeed, any Coxeter polyhedron $P$ combinatorially isomorphic to a L\"obell polyhedron $L_n$ has the following property: each of its faces has at most $n$ neighbours, while $L_n$ has $2n+2$ faces in total. This implies that there are enough common perpendiculars between its faces to keep the subgraph of dashed edges in the Coxeter diagram of $P$ connected. Also, Theorem~\ref{thm:Coxeter2} implies that the geodesic growth rates always strictly dominate the respective word growth rates.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{polytope-19.png}
\caption{A finite-volume non-compact Coxeter polytope in $\mathbb{H}^{19}$ }\label{fig:polytope-19}
\end{figure}
In Figure~\ref{fig:polytope-19}, we present a complete Coxeter diagram of the hyperbolic finite-volume polytope $P$ in $\mathbb{H}^{19}$ discovered by Kaplinskaya and Vinberg in \cite{KapVin}. The reflection group $G$ associated with $P$ corresponds to a finite index subgroup in the group of integral Lorentzian matrices preserving the standard hyperboloid $H = \{(x_0, x_1, \ldots, x_{19}) \in \mathbb{R}^{20} \,\, | \,\, -x^2_0 + x^2_1 + \ldots + x^2_{19} = -1, \,\, x_0 > 0\}$. The latter group is isomorphic to $G \rtimes S_5$, where $S_5$ is the symmetric group on $5$ elements. The diagram in Figure~\ref{fig:polytope-19} was obtained by using AlVin \cite{Guglielmetti-1, Guglielmetti-2}. The picture does not exhibit the $S_5$ symmetry but rather renders the edges as sparsely placed as possible in order to let the connectivity properties of the graph be observed.
The dashed edges correspond to common perpendiculars between the facets, and bold edges correspond to facets tangent at the ideal boundary $\partial \mathbb{H}^{19}$. The blue edges have label $4$, and the red ones have label $3$ (because of the size of the diagram, this colour notation seems to us visually more comprehensible).
Checking that the subgraph of bold and dashed edges in the diagram of $P$ is connected can be routinely done by hand or by using SageMath. Then Theorems \ref{thm:Coxeter1} -- \ref{thm:Coxeter2} apply. We would like to stress the fact that checking whether the word and geodesic growth rates of $G$ satisfy the conclusions of Theorems \ref{thm:Coxeter1} -- \ref{thm:Coxeter2} by direct computation would be rather tedious, especially for the geodesic growth rate.
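Such a connectivity check needs nothing beyond breadth-first search; the following Python sketch (with a hypothetical encoding of the diagram as a list of bold/dashed edges) is the whole computation:

```python
from collections import deque

def infinity_subgraph_connected(n, infinity_edges):
    """Check that the subgraph formed by the bold/dashed (infinity-label)
    edges is connected and touches every vertex of the diagram."""
    adj = {v: set() for v in range(n)}
    for u, v in infinity_edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, q = {0}, deque([0])
    while q:
        u = q.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            q.append(w)
    return len(seen) == n

# Hypothetical toy diagram on 4 facets: a path of infinity-edges spans it.
assert infinity_subgraph_connected(4, [(0, 1), (1, 2), (2, 3)])
# Remove one edge and vertex 3 becomes isolated in the subgraph.
assert not infinity_subgraph_connected(4, [(0, 1), (1, 2)])
```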
\addcontentsline{toc}{section}{References}
\section{Introduction}
Cosmological observations
of type Ia supernovae \cite{Riess,Riess2,Perlmutter}, the cosmic microwave background \cite{Spergel,Komatsu},
and the large scale structures \cite{Percival,sloan,Allen,Rapetti} indicate that the universe undergoes
a phase of accelerated expansion, and this discovery
opened up a new field in cosmology.
A number of attempts to explain the origin of the present cosmic acceleration
have been proposed over the past decade.
Einstein's cosmological constant might be a possible solution; however, the smallness of its value cannot be explained naturally \cite{Weinberg,Weinberg2}.
Another possible explanation for the accelerated expansion of the universe at the present time is an alternative theory of gravity.
So far various modified gravity models have been proposed
such as the scalar-tensor theories \cite{Amendola,Uzan,Chiba,Bartolo,Perrotta},
$f(R)$ gravity \cite{HuSawicki,Starobinsky,Appleby,Nojiri},
Dvali-Gabadazde-Porrati (DGP) braneworld model \cite{DGP1,DGP2},
and Galileon gravity \cite{galileon,galileon2,galileon3,galileon4,galileon5,galileon6,galileon7,galileon8,KGB,KGB2}.
In these models, additional degrees of freedom
can mimic the cosmological constant and
lead to cosmic acceleration today.
Most of these theories are subclasses of
the most general second-order scalar-tensor theory,
which was first constructed by Horndeski \cite{Horndeski}
and also independently derived by Deffayet {\em et al.} \cite{GenGal}
as an extension of galileon theory.
The most general second-order scalar-tensor theory
has been applied to the late-time accelerated expansion \cite{AKT}
as well as to inflationary models \cite{G-inf,G-inf2,G-inf3,G-inf4}.
The Lagrangian in the most general second-order scalar-tensor theory
contains the coupling of the scalar field $\phi$
and its kinetic term $X\equiv -g^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi/2$ with gravity,
such as $G_4(\phi,X)R$ and $G_5(\phi,X)G_{\mu\nu}\nabla^{\mu}\nabla^{\nu}\phi$,
where $G_4(\phi,X)$ and $G_5(\phi,X)$ are arbitrary functions of $\phi$ and $X$.
This theory is covariant, but in the presence of a cosmological background
Lorentz invariance could be broken due to these coupling terms.
As a result, the propagation speed of gravitational waves
differs from the speed of light and also
depends on the cosmological background.
When the propagation speed of gravitational waves is less than the
speed of light, gravitons could be emitted through a process
similar to Cherenkov radiation \cite{Caves,Moore,Moore2}.
The observation of the high energy cosmic rays puts constraints
on this process, i.e., the speed of the gravitational waves.
Assuming a galactic origin for the high energy cosmic rays,
the lower bound on the propagation speed of gravity
from gravitational Cherenkov radiation is given by \cite{Moore,Moore2}
\begin{eqnarray}
c-c_T < 2\times10^{-15}c,
\label{constraint}
\end{eqnarray}
where $c_T$ is the propagation speed of gravity.
When the origin of the high energy cosmic rays is located
at a cosmological distance, the constraint is four orders of
magnitude tighter than (\ref{constraint}).
In the present paper, we show that the argument of
the gravitational Cherenkov radiation puts a
tight constraint on general second-order scalar-tensor models
on a cosmological background
with a time-varying propagation speed of gravitational waves.
As a demonstration, we consider two models:
the purely kinetic coupled gravity \cite{deriCoupling} and
the extended galileon model \cite{Tsujikawa11},
which are subclasses of the most general second-order
scalar-tensor theory.
This paper is organized as follows.
In section \ref{sec:2}, we briefly review the most general second-order
scalar-tensor theory and the tensor perturbations.
In section \ref{sec:gcr}, we derive the gravitational Cherenkov radiation
on a cosmological background.
In section \ref{sec:3}, we explicitly show that gravitational
Cherenkov radiation rules out the purely kinetic coupled gravity
model.
In section \ref{sec:4}, we briefly review the extended galileon model
and see how gravitational Cherenkov radiation
can tightly constrain model parameters.
Section \ref{sec:5} is devoted to conclusions.
In appendices \ref{App:Coefficients1}--\ref{App:otherfixedpoint}, we summarize the scalar perturbations,
derived in \cite{G-inf}, and the coefficients of the scalar
and tensor perturbations in various regimes
in the extended galileon model, derived in \cite{Tsujikawa11}.
In appendix \ref{App:negativeab}, a useful constraint on the parameters
in the extended galileon model is derived.
Throughout the paper, we use units in which the speed
of light and the Planck constant are unity, $c=\hbar=1$,
and $M_{\rm Pl}$ is the reduced Planck mass, related
to Newton's constant by $M_{\rm Pl}=1/\sqrt{8 \pi G}$.
We follow the metric signature convention $(-,+,+,+)$.
\section{The most general second-order scalar-tensor theory}
\label{sec:2}
The most general second-order scalar-tensor theory
is described by the action,
\begin{eqnarray}
S=\int d^4x \sqrt{-g} \left(\sum_{i=2}^{5}{\cal L}_{i}+{\cal L}_m\right),
\end{eqnarray}
where
\begin{eqnarray}
{\cal L}_{2} & = & K(\phi,X),\nonumber\\
{\cal L}_{3} & = & -G_{3}(\phi,X)\Box\phi,\nonumber\\
{\cal L}_{4} & = & G_{4}(\phi,X) R+G_{4,X}[(\Box\phi)^{2}
-(\nabla_{\mu}\nabla_{\nu}\phi)(\nabla^{\mu}\nabla^{\nu}\phi)],\nonumber\\
{\cal L}_{5} & = & G_{5}(\phi,X) G_{\mu\nu}(\nabla^{\mu}\nabla^{\nu}\phi)
-\frac{1}{6} G_{5,X}[(\Box\phi)^{3}-3(\Box\phi)(\nabla_{\mu}\nabla_{\nu}\phi)\,(\nabla^{\mu}\nabla^{\nu}\phi)\nonumber\\
&&+2(\nabla^{\mu}\nabla_{\alpha}\phi)(\nabla^{\alpha}\nabla_{\beta}\phi)(\nabla^{\beta}\nabla_{\mu}\phi)],
\label{Li}
\end{eqnarray}
where $K$, $G_3$, $G_4$, and $G_5$ are arbitrary functions
of the scalar field $\phi$ and the kinetic term
$X \equiv -g^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi/2$,
$G_{i,\phi}$ and $G_{i,X}$ stand for
$\partial G_i/\partial \phi$ and $\partial G_i/\partial X$,
respectively, and ${\cal L}_m$ is the matter Lagrangian.
We assume that matter is minimally coupled to gravity.
Note that for $G_4=M_{\rm Pl}^2/2$,
the Lagrangian ${\cal L}_4$ reproduces the Einstein-Hilbert term.
We now briefly review the tensor perturbations
in the most general second-order
scalar-tensor theory on a cosmological background,
following the results derived in \cite{G-inf}.
The quadratic action for the tensor perturbations can be written as
\begin{eqnarray}
S_T^{(2)}=\frac{1}{8}\int dtd^3x a^3
\left[{\cal G}_T {\dot h}_{ij}^2 -\frac{{\cal F}_T}{a^2}({\vec \nabla}h_{ij})^2\right],
\label{actiongraviton}
\end{eqnarray}
where
\begin{eqnarray}
{\cal F}_T&\equiv&2\left[G_4
-X\left( \ddot\phi G_{5X}+G_{5\phi}\right)\right],
\label{Ft}\\
{\cal G}_T&\equiv&2\left[G_4-2 XG_{4X}
-X\left(H\dot\phi G_{5X} -G_{5\phi}\right)\right].
\label{Gt}
\end{eqnarray}
Here an overdot denotes differentiation with respect to $t$,
and $H=\dot a/a$ is the Hubble parameter.
We find the propagation speed of the tensor perturbations,
\begin{eqnarray}
c_T^2&\equiv&\frac{{\cal F}_T}{{\cal G}_T}.
\label{ct2}
\end{eqnarray}
When $G_4=G_4(\phi)$ and $G_5=0$, the propagation speed of
gravitational waves is equal to the speed of light.
On the other hand, the propagation
speed of gravitational waves depends on the cosmological background
in the presence of $G_5$ or an $X$-dependent $G_4$.
If the propagation speed of gravitational waves is less than
the speed of light, it is tightly constrained
from gravitational Cherenkov radiation.
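As a quick numerical sanity check of eqs.~(\ref{Ft})-(\ref{ct2}) (a sketch, not part of the derivation; all background values below are arbitrary placeholder numbers, not solutions of the field equations):

```python
# Numerical check that c_T^2 = F_T / G_T equals 1 when G_4 = G_4(phi)
# (so G_{4,X} = 0) and G_5 = 0 (so G_{5,X} = G_{5,phi} = 0).
def c_T_squared(G4, G4_X, G5_X, G5_phi, X, H, phidot, phiddot):
    F_T = 2.0 * (G4 - X * (phiddot * G5_X + G5_phi))
    G_T = 2.0 * (G4 - 2.0 * X * G4_X - X * (H * phidot * G5_X - G5_phi))
    return F_T / G_T

# Arbitrary background values with G4 independent of X and G5 = 0:
ct2 = c_T_squared(G4=0.5, G4_X=0.0, G5_X=0.0, G5_phi=0.0,
                  X=0.3, H=0.7, phidot=1.1, phiddot=0.2)
print(ct2)  # -> 1.0
```

Any nonzero `G4_X`, `G5_X`, or `G5_phi` shifts the ratio away from unity, which is the origin of the background-dependent propagation speed discussed above.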
\def{\bf p}{{\bf p}}
\def{\bf k}{{\bf k}}
\def{\bf x}{{\bf x}}
\def{\hat h}{{\hat h}}
\def{\rm in}{{\rm in}}
\section{Gravitational Cherenkov radiation in an expanding universe}
\label{sec:gcr}
In this section, we derive the gravitational Cherenkov radiation
in a cosmological background. For simplicity, we consider a complex scalar
field with the action
\begin{eqnarray}
&&S_m=\int d^4x \sqrt{-g}\left[ -
g^{\mu\nu}\partial_\mu \Psi^* \partial_\nu\Psi
-m^2\Psi^*\Psi-{\xi}R\Psi^*\Psi
\right].
\label{SM}
\end{eqnarray}
Here we assume conformal coupling to the spacetime
curvature, $\xi=1/6$, for simplicity; this term can be
neglected as long as we focus on the subhorizon scales,
$p/a,m\gg H$, where $p$ is the comoving momentum.
The free part of $\Psi$ can be quantized as
\begin{eqnarray}
&&\hat \Psi(\eta,{\bf x})={1\over a}\int{d^3p\over (2\pi)^{3/2}}
\left[
\hat b_{\bf p} \psi_p(\eta) e^{i{\bf p}\cdot{\bf x}}
+\hat c_{\bf p}^\dagger \psi_p^*(\eta) e^{-i{\bf p}\cdot{\bf x}}\right],
\end{eqnarray}
where $\eta$ is the conformal time,
$\hat b_{\bf p}$ and $\hat c_{\bf p}^\dagger$ are the annihilation and creation
operators of the particle and anti-particle, respectively, which satisfy
the commutation relations $[\hat b_{\bf p},\hat b_{{\bf p}'}^\dagger]
=\delta({\bf p}-{\bf p}')$, $[\hat c_{\bf p},\hat c_{{\bf p}'}^\dagger]=\delta({\bf p}-{\bf p}')$,
and the mode function obeys
\begin{eqnarray}
\left({d^2\over d \eta^2}+p^2+m^2 a^2\right)\psi_p(\eta)=0.
\end{eqnarray}
The WKB approximate solution is given by (e.g., \cite{BD})
\begin{eqnarray}
\psi_p(\eta)={1\over \sqrt{2\Omega_p}}
\exp\left[-i\int_{\eta_{\rm in}}^\eta \Omega_p(\eta')d\eta'\right]
\end{eqnarray}
with $\Omega_p(\eta)=\sqrt{p^2+m^2a^2}$.
The WKB approximation is valid for
\begin{eqnarray}
\Omega_p^2\gg\biggl|
{1\over\Omega_p}{d^2 \Omega_p\over d\eta^2}-{3\over2}
{1\over\Omega_p^2}\left({d\Omega_p\over d\eta}
\right)^2
\biggr|,
\end{eqnarray}
which can be satisfied as long as $p/a,m\gg H$.
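The accuracy of the WKB solution can be illustrated numerically (a sketch with arbitrary illustrative values of $p$, $m$, and the drift rate of $a(\eta)$, not taken from the text): integrating the exact mode equation for a slowly varying $\Omega_p(\eta)$ and comparing $|\psi_p|$ with the WKB amplitude $1/\sqrt{2\Omega_p}$.

```python
import math

# Integrate psi'' + Omega(eta)^2 psi = 0 with a slowly varying Omega
# and compare |psi| with the WKB amplitude 1/sqrt(2 Omega).
p, m = 10.0, 5.0
def a(eta):            # slowly varying scale factor (adiabatic regime)
    return 1.0 + 0.01 * eta
def Omega(eta):
    return math.sqrt(p**2 + m**2 * a(eta)**2)

def rhs(eta, y):       # y = (psi, psi')
    return (y[1], -Omega(eta)**2 * y[0])

# WKB initial data at eta = 0
eta, h = 0.0, 1e-3
psi0 = 1.0 / math.sqrt(2.0 * Omega(0.0))
y = [psi0 + 0j, -1j * Omega(0.0) * psi0]

for _ in range(10000):               # RK4 up to eta = 10
    k1 = rhs(eta, y)
    k2 = rhs(eta + h/2, [y[i] + h/2*k1[i] for i in range(2)])
    k3 = rhs(eta + h/2, [y[i] + h/2*k2[i] for i in range(2)])
    k4 = rhs(eta + h,   [y[i] + h*k3[i]   for i in range(2)])
    y = [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
    eta += h

wkb_amp = 1.0 / math.sqrt(2.0 * Omega(eta))
rel_err = abs(abs(y[0]) - wkb_amp) / wkb_amp
print(rel_err)         # small (sub-percent) in the adiabatic regime
```

Here the adiabaticity parameter $\Omega_p'/\Omega_p^2$ is of order $10^{-4}$, so the exact amplitude tracks the WKB amplitude closely, as the validity condition above requires.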
On the other hand, the action of the graviton is given by
eq.~(\ref{actiongraviton});
the quantized graviton field is then
\begin{eqnarray}
&&{\hat h}_{\mu\nu}={1\over a}\sqrt{2\over {\cal G}_T}\sum_\lambda
\int{d^3k\over (2\pi)^{3/2}}
\biggl[
\varepsilon_{\mu\nu}^{(\lambda)}\hat a_{\bf k} h_k(\eta) e^{i{\bf k}\cdot{\bf x}}
+\varepsilon_{\mu\nu}^{(\lambda)}{\hat a_{\bf k}}^\dagger h_k^{*}(\eta)
e^{-i{\bf k}\cdot{\bf x}}\biggr],
\end{eqnarray}
where $\varepsilon_{\mu\nu}^{(\lambda)}$ is the polarization tensor,
$\hat a_{\bf k}^\dagger$ and $\hat a_{\bf k}$ are the creation and
annihilation operators, which satisfy the commutation relation
$[\hat a_{\bf k},\hat a_{{\bf k}'}^\dagger]=\delta({\bf k}-{\bf k}')$,
and the mode function satisfies
\begin{eqnarray}
\left({d^2\over d \eta^2}+c_s^2k^2-{a''\over a}\right)h_k (\eta)=0.
\end{eqnarray}
For the case $c_s \sim {\cal O}(1)$ and $c_sk/a\gg H$, we may write
\begin{eqnarray}
h_k(\eta)={1\over\sqrt{2\omega_k}}\exp\left[-i\int_{{\eta}_{\rm in}}^\eta\omega_k(\eta')d\eta'\right],
\end{eqnarray}
where we defined $\omega_k=c_sk$, and the approximate solution is valid as long as $c_sk/a\gg H$.
The interaction part of the action (\ref{SM}) is given by
\begin{eqnarray}
S_I&=&-\int dtd^3x a h_{ij}\partial_i\Psi\partial_j\Psi^*
\nonumber
\\
&=&- \int d\eta d^3x h_{ij}\partial_i\psi\partial_j\psi^*,
\end{eqnarray}
where we defined $\psi=a\Psi$, and the interaction Hamiltonian is
\begin{eqnarray}
H_I&=& a\int d^3x h_{ij}\partial_i\Psi\partial_j\Psi^*.
\end{eqnarray}
\begin{figure}[t]
\begin{center}
\includegraphics[width=70mm]{Feynman.eps}
\end{center}
\caption{Feynman diagram for the graviton emission process}
\label{fig:one}
\end{figure}
In order to evaluate the gravitational Cherenkov radiation, we adopt the
method developed in \cite{KNY,YN}.
Based on the in-in formalism \cite{Wein}, the lowest order contribution is
given by
\begin{eqnarray}
&&\left<Q(t)\right>=
{i^2}
\int^t_{t_{\rm in}}dt_2
\int^{t_2}_{t_{\rm in}}dt_1\left<[H_I(t_1),[H_I(t_2),Q]]\right>.
\end{eqnarray}
We consider the expectation value of the number operator in the initial
one-particle state with momentum ${\bf p}_{\rm in}$, i.e.,
${\hat b}^\dagger_{{\bf p}_{\rm in}}|0\rangle$. Then the lowest-order contribution of the process
in which one graviton with momentum ${\bf k}$ is emitted from the massive particle with
initial momentum ${\bf p}_{\rm in}$, as shown in fig.~\ref{fig:one},
is written as \cite{AEL}
\begin{eqnarray}
&&\left<{\hat a}^{\dagger(\lambda)}_{\bf k}{\hat a}^{(\lambda)}_{\bf k} \right>=
2\Re\int^t_{t_{\rm in}}dt_2\int^{t_2}_{t_{\rm in}}dt_1
\left<H_I(t_1)
{\hat a}^{\dagger(\lambda)}_{\bf k}{\hat a}^{(\lambda)}_{\bf k}
H_I(t_2)\right>.
\end{eqnarray}
Then, the total radiation energy from the scalar particle can be estimated as
$E=\sum_\lambda\sum_{\bf k} (\omega_k/ a)$ $\bigl<{\hat a}^{\dagger(\lambda)}_{\bf k}
{\hat a}^{(\lambda)}_{\bf k} \bigr>$,
which leads to
\begin{eqnarray}
&&E=\sum_\lambda \int{d^3k\over (2\pi)^3}{\omega_k\over a}
\biggl|
\int_{\eta_{\rm in}}^\eta d\eta_1
{1\over a(\eta_1)}
\sqrt{{2\over {\cal G}_T}}
h_k(\eta_1)\psi_{{\bf p}_f}(\eta_1)
\psi_{{\bf p}_{\rm in}}^*(\eta_1)\epsilon_{ij}p_{\rm in}^i p^j_f
\biggr|^2,
\end{eqnarray}
where ${\bf p}_f+{\bf k}={\bf p}_{\rm in}~ (p_f^i+k^i=p_{\rm in}^i)$. With the use of the relation
$\sum_\lambda \bigl|\epsilon_{ij}p_{\rm in}^i p_f^j\bigr|^2=p_{\rm in}^4\sin^4\theta$,
we have
\begin{eqnarray}
E&=&\int{d^3k\over (2\pi)^3}{\omega_k\over a}p_{\rm in}^4\sin^4\theta
\biggl|
\int_{\eta_{\rm in}}^\eta d\eta_1
{1\over a(\eta_1)}
\sqrt{{2\over {\cal G}_T}}
h_k(\eta_1)\psi_{{\bf p}_f}(\eta_1)
\psi_{{\bf p}_{\rm in}}^*(\eta_1)
\biggr|^2.
\nonumber
\\
\label{EA}
\end{eqnarray}
We are now interested in the subhorizon scales,
$k/a,~p/a,~m,~c_sk/a \gg H$, and
the situation in which the scale factor $a$
is nearly constant; then we can approximate
\begin{eqnarray}
&&\int_{\eta_{\rm in}}^\eta d\eta_1
{1\over a(\eta_1)}
\sqrt{{2\over {\cal G}_T}}
h_k(\eta_1)\psi_{{\bf p}_f}(\eta_1)
\psi_{{\bf p}_{\rm in}}^*(\eta_1)
\nonumber
\\
&&~~~~~~
\simeq{1\over a}\sqrt{{2\over {\cal G}_T}}
{1\over \sqrt{2\omega_k}}{1\over \sqrt{2\Omega_{{\bf p}_{\rm in}}}}{1\over \sqrt{2\Omega_{{\bf p}_f}}}
\int_{\eta_{\rm in}}^\eta d\eta_1
\exp\left[i(\Omega_{\rm in}-\Omega_f-\omega_k)(\eta_1-\eta_{\rm in}) \right].
\end{eqnarray}
Then the total radiation energy eq.~(\ref{EA}) reduces to
\begin{eqnarray}
E&\simeq&{1\over 4{\cal G}_Ta^3}\int{d^3k\over (2\pi)^3}{p_{\rm in}^4\sin^4\theta\over \Omega_f\Omega_{\rm in}}
\frac{2\pi T}{a} \delta(\Omega_{\rm in}-\Omega_f-\omega_k).
\end{eqnarray}
Here we assumed a long duration of the time integration,
\begin{eqnarray}
&&\biggl|
\int_{\eta_{\rm in}}^\eta d\eta_1
\exp\left[i(\Omega_{\rm in}-\Omega_f-\omega_k)(\eta_1-\eta_{\rm in})\right]
\biggr|^2
\simeq \frac{2\pi T}{a} \delta(\Omega_{\rm in}-\Omega_f-\omega_k),
\end{eqnarray}
where $T/a=\eta-\eta_{\rm in}$.
Then, we have the expression in the relativistic limit of the massive particle,
$p_{\rm in}/a\gg m$,
\begin{eqnarray}
{dE\over dt}&=&{p_{\rm in}^2\over 4{\cal G}_Ta^4}\int_0^\infty{dkk^2\over 2\pi}
\int_{-1}^1d(\cos\theta){\sin^4\theta}
\delta(\Omega_{\rm in}-\Omega_f-\omega_k).
\end{eqnarray}
Now consider the delta-function, which can be written as
\begin{eqnarray}
\delta(\Omega_{\rm in}-\Omega_f-\omega_k)=2\Omega_f\delta(\Omega_f^2-(\Omega_{\rm in}-\omega_k)^2),
\end{eqnarray}
where
$\omega_k=c_sk$,
$\Omega_{\rm in}=\sqrt{{\bf p}^2_{\rm in}+a^2m^2}$, and
$\Omega_f=\sqrt{({\bf p}_{\rm in}-{\bf k})^2+a^2m^2}$.
With the use of the fact
\begin{eqnarray}
&&\Omega_f^2-(\Omega_{\rm in}-\omega_k)^2
=-2p_{\rm in} k\left(
\cos\theta-{c_s\over\beta}-{(1-c_s^2)k\over 2p_{\rm in}}\right),
\end{eqnarray}
where we defined $\beta=p_{\rm in}/\sqrt{p_{\rm in}^2+m^2a^2}$ and $p_{\rm in}^2=|{\bf p}_{\rm in}|^2$,
we find (cf.~eq.~(3.2) of Moore and Nelson \cite{Moore})
\begin{eqnarray}
{dE\over dt}&=&{p_{\rm in}^2\over 4{\cal G}_Ta^4}\int_0^{k_{\rm max}}{dkk\over 2\pi}{\sin^4\theta}
\end{eqnarray}
with
\begin{eqnarray}
&&\cos\theta={c_s\over\beta}+{(1-c_s^2)k\over 2p_{\rm in}}
\end{eqnarray}
and
\begin{eqnarray}
&&k_{\rm max}={2p_{\rm in}\over 1-c_s^2}\left(1-{c_s\over \beta}\right).
\end{eqnarray}
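The kinematics above can be verified numerically (a sketch with random sample momenta; the values of $a$, $m$, $p_{\rm in}$, $c_s$, $k$, and $\theta$ below are arbitrary): the identity for $\Omega_f^2-(\Omega_{\rm in}-\omega_k)^2$, and the fact that the emission angle closes ($\cos\theta=1$) exactly at $k=k_{\rm max}$.

```python
import math, random

random.seed(1)
# Check the kinematic identity used to rewrite the delta-function, and that
# cos(theta) evaluated at k = k_max equals 1.
for _ in range(1000):
    a_, m = random.uniform(0.5, 2.0), random.uniform(0.1, 5.0)
    p_in  = random.uniform(1.0, 50.0)
    c_s   = random.uniform(0.1, 0.99)
    k     = random.uniform(0.01, 10.0)
    cth   = random.uniform(-1.0, 1.0)
    Om_in = math.sqrt(p_in**2 + a_**2 * m**2)
    Om_f  = math.sqrt(p_in**2 - 2*p_in*k*cth + k**2 + a_**2 * m**2)
    beta  = p_in / Om_in
    lhs = Om_f**2 - (Om_in - c_s*k)**2
    rhs = -2*p_in*k*(cth - c_s/beta - (1 - c_s**2)*k/(2*p_in))
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))

    k_max = 2*p_in*(1 - c_s/beta)/(1 - c_s**2)
    cth_at_kmax = c_s/beta + (1 - c_s**2)*k_max/(2*p_in)
    assert abs(cth_at_kmax - 1.0) < 1e-12
print("kinematic identities verified")
```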
Assuming $\beta\sim 1$, we have $k_{\rm max}\simeq 2p_{\rm in}/(1+c_s)$ and
\begin{eqnarray}
{dE\over dt}\simeq{p_{\rm in}^2\over 8\pi {\cal G}_T a^4}4(1-c_s)^2\int_0^{k_{\rm max}} dk k
\left(1-{k\over {k_{\rm max}}}\right)^2,
\end{eqnarray}
which yields (cf.\cite{Moore})
\begin{eqnarray}
{dE\over dt}\simeq{G_N p_{\rm in}^4\over a^4}{4(1-c_s)^2\over3(1+c_s)^2},
\end{eqnarray}
where we introduced Newton's gravitational constant as $G_N=1/(16\pi{\cal G}_T)$.
One may notice that this definition of Newton's constant
is slightly different from that in the most general second-order scalar-tensor theory
(cf. \cite{Vainstein2nd}); however, it does not affect the constraints significantly.
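The $k$-integral used in passing to the final expression can be checked numerically (a sketch; the value of $k_{\rm max}$ below is an arbitrary sample): $\int_0^{k_{\rm max}} k\,(1-k/k_{\rm max})^2\,dk = k_{\rm max}^2/12$.

```python
# Midpoint-rule check of the phase-space integral
#   int_0^{k_max} k (1 - k/k_max)^2 dk = k_max^2 / 12.
k_max = 3.7          # arbitrary sample value
n = 20000
h = k_max / n
total = sum((i + 0.5) * h * (1.0 - (i + 0.5) * h / k_max)**2
            for i in range(n)) * h
exact = k_max**2 / 12.0
print(total, exact)  # agree to high accuracy
```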
Our results are consistent with those in Ref.~\cite{Moore}.
Then, a particle with momentum $p$ cannot possibly have been traveling for
longer than
\begin{eqnarray}
t\sim{a^4\over G_N}{(1+c_s)^2\over 4(1-c_s)^2}{1\over p^3}.
\end{eqnarray}
Therefore, the highest-energy cosmic rays put a constraint on the propagation speed of
the graviton,
\begin{eqnarray}
{(1-c_s)}\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2\times10^{-17}\left({10^{11}{\rm GeV}\over p}\right)^{3/2}
\left({1 {\rm Mpc}\over ct}\right)^{1/2}.
\label{result}
\end{eqnarray}
Since we are considering the theory on a cosmological background, the sound speed of
the graviton is determined by the cosmological evolution of the background field.
This situation is slightly different from that in ref.~\cite{Moore}.
However, as we have shown in this section, the theory on a cosmological background
can be constrained from the gravitational Cherenkov radiation when the speed of the
graviton is smaller than that of light.
Also, there are no higher-order nonlinear interaction terms of the graviton, such as
the galileon cubic term that becomes important at short distances \cite{Gao}, which suggests
that the nonlinear interactions of gravitons can be ignored.
\section{Purely kinetic coupled gravity}
\label{sec:3}
We first consider the modified gravity model,
whose action contains a nonminimal derivative coupling to gravity.
The action proposed by Gubitosi and Linder \cite{deriCoupling} is given by
\begin{eqnarray}
S=\int d^4x \sqrt{-g}\left[\frac{M_{\rm Pl}^2}{2}R+X
+\frac{\lambda}{M_{\rm Pl}^2}G^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi\right],
\end{eqnarray}
where $\lambda$ is a dimensionless constant.
In this model, the arbitrary functions in eq.~(\ref{Li}) correspond to
$K=X$, $G_3=0$, $G_4=M_{\rm Pl}^2/2$, and $G_5=-\lambda\phi/M_{\rm Pl}^2$.
Using the matter density parameter $\Omega_m=\rho_m/3M_{\rm Pl}^2H^2$,
the modified Friedmann equation can be written as
$1=\Omega_m+\Omega_{\phi}$, where
\begin{eqnarray}
\Omega_{\phi}=\frac{X}{3M_{\rm Pl}^2H^2}\left(1+18C\right).
\end{eqnarray}
Here we defined the key parameter,
\begin{eqnarray}
C\equiv\frac{\lambda H^2}{M_{\rm Pl}^2} > C_{*},
\label{C_inequality}
\end{eqnarray}
where $C_*=-1/18$.
The inequality in eq.~(\ref{C_inequality}) is the condition that ensures
the positivity of the energy density of the scalar field, $\Omega_{\phi} > 0$.
Using the gravity equations and
the energy density $\rho_{\phi}$ and the pressure $p_{\phi}$ for the scalar field,
the effective equation of state, $w_{\rm eff}\equiv p_{\phi}/\rho_{\phi}$,
can be written as
\begin{eqnarray}
w_{\rm eff}=\frac{1+30C}{1+(24-6\Omega_{\phi})C+108(1+\Omega_{\phi})C^2}.
\end{eqnarray}
Gubitosi and Linder showed that if the deviation parameter at the present time,
$\delta \equiv (C_*-C)/C_*|_{z=0}$, satisfies $\delta<2/5$,
corresponding to the condition for negative pressure $w_{\rm eff}<0$,
the kinetic term $X$ behaves as the cosmological constant
around the present time.
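The correspondence between the condition on $\delta$ and the sign of $w_{\rm eff}$ can be checked numerically (a sketch; the grid scan is illustrative, not a proof): with $\delta=(C_*-C)/C_*=1+18C$, the condition $0<\delta<2/5$ maps to $-1/18<C<-1/30$, which makes the numerator $1+30C$ negative while the denominator stays positive.

```python
# Grid check: for C in (-1/18, -1/30), i.e. 0 < delta < 2/5 with
# delta = 1 + 18 C, the effective equation of state
#   w_eff = (1 + 30 C) / (1 + (24 - 6 Om) C + 108 (1 + Om) C^2)
# is negative for any 0 < Omega_phi < 1.
C_star = -1.0 / 18.0

def w_eff(C, Om):
    return (1 + 30*C) / (1 + (24 - 6*Om)*C + 108*(1 + Om)*C**2)

# endpoint mapping of delta = 1 + 18 C
assert abs(1 + 18*C_star) < 1e-12                     # delta = 0   at C = -1/18
assert abs((1 + 18*(-1.0/30.0)) - 2.0/5.0) < 1e-12    # delta = 2/5 at C = -1/30

ok = True
for i in range(1, 200):
    C = -1.0/18.0 + (1.0/18.0 - 1.0/30.0) * i / 200.0  # strictly inside range
    for j in range(1, 100):
        Om = j / 100.0
        ok = ok and (w_eff(C, Om) < 0.0)
print(ok)  # -> True
```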
The propagation speed of gravitational waves (\ref{ct2}) can be written as
\begin{eqnarray}
c_T^2&=&\frac{M_{\rm Pl}^2+2\lambda X/M_{\rm Pl}^2}{M_{\rm Pl}^2-2\lambda X/M_{\rm Pl}^2}.
\label{GW_PKCG}
\end{eqnarray}
The condition for avoiding ghosts of the tensor perturbations,
${\cal G}_T > 0$,
is $\delta > \Omega_{\phi}(\Omega_{\phi}-3)$,
which is automatically satisfied,
while the condition for avoiding instability $c_T^2 \geq 0$ is
\begin{eqnarray}
\delta \geq \frac{\Omega_{\phi}}{\Omega_{\phi}+3}.
\end{eqnarray}
Therefore, $\delta > 0$ is required for avoiding ghost-instability.
Thus the theoretically allowed parameter range is
\begin{eqnarray}
0 < \delta < {2 \over 5},
\label{condition:PKCGa}
\end{eqnarray}
which is equivalent to
\begin{eqnarray}
-{1 \over 18} < C(z=0) < -{1 \over 30}.
\label{condition:PKCGb}
\end{eqnarray}
The propagation speed of gravitational waves in terms of $\Omega_{\phi}$
is rephrased as
\begin{eqnarray}
c_T^2 &=&\frac{(3+\Omega_{\phi})\delta-\Omega_{\phi}}
{(3-\Omega_{\phi})\delta+\Omega_{\phi}}.
\label{ct2:pkcg}
\end{eqnarray}
The constraint from gravitational Cherenkov radiation,
$c_T > 1- \epsilon$ with $\epsilon=2\times10^{-15}$,
reads $\delta > 1-{\cal O}(\epsilon)$ from eq.~(\ref{ct2:pkcg}),
which contradicts the condition (\ref{condition:PKCGa}).
Equivalently, from eqs.~(\ref{C_inequality}) and (\ref{condition:PKCGb}),
$\lambda$ is always negative; therefore, the propagation speed
of gravitational waves is always smaller than unity from
eq.~(\ref{GW_PKCG}).
Thus this purely kinetic coupled gravity is inconsistent
with the constraint from the gravitational Cherenkov radiation for
any theoretically allowed parameter $\lambda$.
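This incompatibility is easy to illustrate numerically (a sketch; the present-day value $\Omega_{\phi}=0.7$ is an illustrative choice): scanning eq.~(\ref{ct2:pkcg}) over the allowed range $0<\delta<2/5$ shows $c_T$ stays far below $1-2\times10^{-15}$, while $c_T^2=1$ algebraically forces $\delta=1$.

```python
import math

# Evaluate c_T^2 = ((3+Om)d - Om) / ((3-Om)d + Om) over 0 < delta < 2/5
# at a present-day Omega_phi = 0.7 (illustrative value).
def ct2_pkcg(delta, Om=0.7):
    return ((3 + Om)*delta - Om) / ((3 - Om)*delta + Om)

eps = 2e-15
# c_T^2 < 0 (instability) for small delta; clip at 0 before the square root.
max_ct = max(math.sqrt(max(ct2_pkcg(d/1000.0), 0.0)) for d in range(1, 400))
print(max_ct)              # ~0.69, far below 1 - 2e-15

# c_T^2 = 1 forces delta = 1:
# (3+Om)d - Om = (3-Om)d + Om  =>  2 Om d = 2 Om  =>  d = 1.
assert abs(ct2_pkcg(1.0) - 1.0) < 1e-12
```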
\section{Extended galileon model}
\label{sec:4}
In this section, we consider the model proposed
by De Felice and Tsujikawa \cite{Tsujikawa11},
which is an extension of the covariant galileon model \cite{CovGalileon}.
In this model, the arbitrary functions have the following forms,
\begin{eqnarray}
K&=&-c_{2}M_{2}^{4(1-p_{2})}X^{p_{2}},\nonumber\\
G_{3}&=&c_{3}M_{3}^{1-4p_{3}}X^{p_{3}},\nonumber\\
G_{4}&=&\frac{1}{2}M_{{\rm Pl}}^{2}-c_{4}M_{4}^{2-4p_{4}}X^{p_{4}},\nonumber\\
G_{5}&=&3c_{5}M_{5}^{-(1+4p_{5})}X^{p_{5}},
\label{GCGM}
\end{eqnarray}
where $c_i$ and $p_i$ are the model parameters and
$M_i$ are constants with dimensions of mass.
We impose the conditions that the tracker solution is characterized by
$H \dot{\phi}^{2q}={\rm const}$
and the energy density of the scalar field
is proportional to $\dot{\phi}^{2p}$.
These conditions
enable us to reduce the number of model parameters,
giving $p_{2}=p$, $p_{3}=p+(2q-1)/2$,
$p_{4}=p+2q$, and $p_{5}=p+(6q-1)/2$
\footnote{Kimura and Yamamoto considered the case
$p=1$, $q=n-1/2$, $c_4=0$, and $c_5=0$ \cite{KY1}.}.
Note that the covariant Galileon model corresponds to $p=1$ and $q=1/2$.
\subsection{Cosmological Dynamics}
In this subsection, we briefly review the background dynamics
in the extended galileon model.
For convenience, we write the constants with dimensions of mass as
\begin{eqnarray}
M_{2}&\equiv&(H_{{\rm dS}}M_{{\rm Pl}})^{1/2},\nonumber\\
M_{3}&\equiv&\left(\frac{{M_{{\rm Pl}}}^{1-2p_{3}}}{{H_{{\rm dS}}}^{2p_{3}}}\right)^{1/(1-4p_{3})},\nonumber\\
M_{4}&\equiv&\left(\frac{{M_{{\rm Pl}}}^{2-2p_{4}}}{{H_{{\rm dS}}}^{2p_{4}}}\right)^{1/(2-4p_{4})},\nonumber\\
M_{5}&\equiv&\left(\frac{{H_{{\rm dS}}}^{2+2p_{5}}}{{M_{{\rm Pl}}}^{1-2p_{5}}}\right)^{1/(1+4p_{5})}\,,
\end{eqnarray}
where $H_{\rm dS}$ is the Hubble parameter at the de Sitter point.
At the de Sitter point, where $\dot{H}=0$ and $\ddot{\phi}=0$,
we obtain the following relations from the gravitational and scalar field equations:
\begin{eqnarray}
c_{2}&=&\frac{3(3\alpha-4\beta+2)}{2}\left(\frac{2}{x_{{\rm dS}}^{2}}\right)^{p},\nonumber\\
c_{3}&=&\frac{\sqrt{2}\left[3(p+q)(\alpha-\beta)+p\right]}{2p+q-1}\left(\frac{2}{x_{{\rm dS}}^{2}}\right)^{p+q},
\end{eqnarray}
where $x\equiv \dot{\phi}/(HM_{{\rm Pl}})$ and
\begin{eqnarray}
\alpha&\equiv&\frac{4(2p_{4}-1)}{3}\left(\frac{x_{{\rm dS}}^{2}}{2}\right)^{p_{4}}c_{4},\nonumber\\
\beta&\equiv&2\sqrt{2}\, p_{5}\left(\frac{x_{{\rm dS}}^{2}}{2}\right)^{p_{5}+1/2}c_{5}.
\end{eqnarray}
Thus this model is characterized by only four parameters $p$, $q$, $\alpha$, and $\beta$.
In order to simplify the analysis, we introduce the following variables,
\begin{eqnarray}
r_{1}&\equiv&\left(\frac{x_{{\rm dS}}}{x}\right)^{2q}\left(\frac{H_{{\rm dS}}}{H}\right)^{1+2q},\nonumber\\
r_{2}&\equiv&\left[\left(\frac{x}{x_{{\rm dS}}}\right)^{2}\frac{1}{r_{1}^{3}}\right]^{\frac{p+2q}{1+2q}},
\end{eqnarray}
and the radiation density parameter $\Omega_{r}\equiv \rho_{r}/3H^{2}M_{{\rm Pl}}^{2}$.
Note that the de Sitter fixed point corresponds to $(r_{1},r_{2},\Omega_{r})=(1,1,0)$.
Along the tracker $r_1=1$,
the evolution of $r_{2}$ and $\Omega_{r}$ is governed by
the following differential equations,
\begin{eqnarray}
r_{2}' & = & \frac{(1+s)(\Omega_{r}+3-3r_{2})}{sr_{2}+1}\, r_{2}\,,\\
\Omega_{r}' & = & \frac{\Omega_{r}-1-3r_{2}-4sr_{2}}{sr_{2}+1}\,\Omega_{r}\,,
\end{eqnarray}
where a prime denotes a derivative with respect to $N=\ln a$,
and a single parameter $s=p/(2q)$ determines
the background dynamics in the case of the tracker solution.
In this case, the density parameter of the scalar field is
simply given by $\Omega_{\phi}=r_2$,
satisfying the constraint $1=\Omega_{\phi}+\Omega_m+\Omega_r$.
Integrating these equations yields the following
algebraic equations,
\begin{eqnarray}
r_{2} & = & b_{1}a^{4(1+s)}{\Omega_{r}}^{1+s}\,,\\
b_{1}a^{4(1+s)}{\Omega_{r}}^{1+s} & = & 1-\Omega_{r}(1-b_{2}a)\,,
\label{eom_omegar}
\end{eqnarray}
where the integration constants are given by
\begin{equation}
b_{1}=\frac{1-\Omega_{m0}-\Omega_{r0}}{\Omega_{r0}^{1+s}}\,,
\qquad b_{2}=-\frac{\Omega_{m0}}{\Omega_{r0}},
\end{equation}
and $\Omega_{m0}$ and $\Omega_{r0}$ are the
matter and radiation density parameters at present, respectively.
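That these algebraic relations follow from the differential equations can be verified numerically (a sketch; $s$, the step size, and the initial data are arbitrary illustrative choices, with the normalization $a=1$ at the start of the integration): the combination $b_1 = r_2\,a^{-4(1+s)}\,\Omega_r^{-(1+s)}$ is an exact invariant of the flow.

```python
import math

# RK4 integration of the tracker-background system
#   r2' = (1+s)(Or + 3 - 3 r2) r2 / (s r2 + 1)
#   Or' = (Or - 1 - 3 r2 - 4 s r2) Or / (s r2 + 1)
# checking that b1 = r2 * a^{-4(1+s)} * Or^{-(1+s)} stays constant
# and that the flow approaches the de Sitter point (r2, Or) = (1, 0).
s = 0.1

def rhs(y):
    r2, Or = y
    d = s*r2 + 1.0
    return ((1+s)*(Or + 3 - 3*r2)*r2/d,
            (Or - 1 - 3*r2 - 4*s*r2)*Or/d)

N, h = 0.0, 1e-3
y = [1e-8, 0.9]                      # deep in the radiation era
b1_0 = y[0] * math.exp(-4*(1+s)*N) * y[1]**(-(1+s))

for _ in range(20000):               # evolve over 20 e-folds
    k1 = rhs(y)
    k2 = rhs([y[i] + h/2*k1[i] for i in range(2)])
    k3 = rhs([y[i] + h/2*k2[i] for i in range(2)])
    k4 = rhs([y[i] + h*k3[i] for i in range(2)])
    y = [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
    N += h

b1_N = y[0] * math.exp(-4*(1+s)*N) * y[1]**(-(1+s))
print(y, b1_N / b1_0)                # r2 -> 1, Omega_r -> 0, ratio -> 1
```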
To see how the Friedmann equation is modified,
we rewrite the algebraic equation (\ref{eom_omegar})
in terms of the Hubble parameter $H$; we then find
\begin{eqnarray}
\left({H \over H_0}\right)^2&=&(1-\Omega_{m0}-\Omega_{r0})\left({H \over H_0}\right)^{-2s}
+\Omega_{m0}a^{-3}+\Omega_{r0}a^{-4}.
\label{DvaliTurnerEq}
\end{eqnarray}
This modified Friedmann equation
is known as the Dvali-Turner model \cite{DvaliTurner}.
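Eq.~(\ref{DvaliTurnerEq}) is algebraic in $H^2$ and can be solved by simple bisection (a sketch; the density parameters and $s$ below are illustrative sample values, and $E\equiv H/H_0$):

```python
# Solve the modified Friedmann equation
#   E^2 = (1 - Om0 - Or0) E^{-2s} + Om0 a^-3 + Or0 a^-4,   E = H/H0,
# by bisection; the left-minus-right residual is increasing in E.
Om0, Or0, s = 0.3, 1e-4, 0.1

def E_of_a(a, s):
    f = lambda E: E**2 - (1 - Om0 - Or0)*E**(-2*s) - Om0*a**-3 - Or0*a**-4
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

print(E_of_a(1.0, s))      # -> 1.0 (today, by construction of b1, b2)
print(E_of_a(1.0, 0.0))    # s = 0 recovers LCDM, also 1.0 today
```

At $a=1$ the equation is satisfied by $H=H_0$ identically, and for $a<1$ the expansion rate exceeds $H_0$, as expected.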
The authors in \cite{KY1} placed observational constraints
on this modified Friedmann equation (\ref{DvaliTurnerEq})
in the special case $p=1$
using type Ia supernovae and the CMB shift parameter
and showed that the model parameter $s$ has to be small, $s \ll 1$,
in order to be consistent with cosmological observations\footnote{
Observational constraints on eq.~(\ref{DvaliTurnerEq})
from type Ia supernovae, the cosmic microwave background, and
baryon acoustic oscillations,
including the cosmic curvature $K$ in the context of
the extended galileon model, have been recently studied
by De Felice and Tsujikawa \cite{ObsExtGalileon}.
They found that the parameter $s$ is constrained to be
$s =0.034_{-0.034}^{+0.327}~(95\%~{\rm CL})$
in the flat case $K=0$.
}.
\subsection{Conditions}
In this subsection, we summarize the theoretically allowed
parameter space in the extended galileon model,
discussed in \cite{Tsujikawa11},
and show that the constraint from gravitational Cherenkov
radiation is crucial.
To avoid ghost-instabilities, we must impose the conditions
${\cal G}_T>0$, $c_T^2>0$, ${\cal G}_S>0$, and $c_S^2>0$
throughout the history of the universe.
The coefficients in the tensor and scalar perturbation equations
in terms of $r_1$, $r_2$, $\Omega_r$, and the model parameters
are listed in appendix \ref{App:Coefficients2}.
We find that the propagation speed of gravitational waves
along the tracker $r_1=1$ is written
\begin{eqnarray}
c_{T}^2=\frac{2(1-2p-4q)(2q+pr_2)+3\alpha (2q+pr_2) r_2-3\beta (1-2p-4q)
(3-3r_2+\Omega_r)r_2}{(1-2p-4q)[2+3(\alpha-2\beta)r_2](2q+pr_2)}.\nonumber\\
\label{ct2tracker}
\end{eqnarray}
Note that eq.~(\ref{ct2tracker}) reduces to $c_T^2=1$
when $\alpha=\beta=0$, which corresponds to $G_4=M_{\rm Pl}^2/2$ and $G_5=0$.
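This limit can be verified numerically (a sketch; the samples of $p$, $q$, $r_2$, and $\Omega_r$ are arbitrary):

```python
import random

random.seed(2)
# Check that eq. (ct2tracker) reduces to c_T^2 = 1 when alpha = beta = 0.
def ct2_tracker(p, q, r2, Or, alpha, beta):
    A = 1 - 2*p - 4*q          # nonzero for p >= 1, q > 0
    B = 2*q + p*r2
    num = 2*A*B + 3*alpha*B*r2 - 3*beta*A*(3 - 3*r2 + Or)*r2
    den = A*(2 + 3*(alpha - 2*beta)*r2)*B
    return num/den

for _ in range(1000):
    p  = random.uniform(1.0, 3.0)
    q  = random.uniform(0.1, 3.0)
    r2 = random.uniform(0.0, 1.0)
    Or = random.uniform(0.0, 1.0)
    assert abs(ct2_tracker(p, q, r2, Or, 0.0, 0.0) - 1.0) < 1e-12
print("c_T^2 = 1 for alpha = beta = 0")
```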
We further impose the no-instability condition at $r_2=r_{2,{\rm min}}$,
where the minimum of the propagation speed of gravitational waves $c_T^2$ is
located.
Setting $r_1=1$ and $\Omega_r\simeq 0$,
the minimum of $c_T^2$ is given by eq.~(\ref{ct2tracker})
at $r_2=r_{2,{\rm min}}$,
\begin{eqnarray}
r_{2,{\rm min}}&=&\biggl[2(3+2p)(1-2p-4q)q\,\beta-8p\,q(p+2q)\alpha
\pm \sqrt{3\,\Gamma_1}\biggr]/\Gamma_2,
\end{eqnarray}
where
\begin{eqnarray}
\Gamma_1&=&(1-2p-4q)(p+2q)q \,\beta
\times[4(p+2q)(p-3q\,\alpha)\alpha\nonumber\\
&&+2(1-2p-4q)\{(3+2p)-3(3-4q)\beta\}\beta
+3[3-16q(1-2q)-2p(3-8q)]\alpha\,\beta],\nonumber\\
\Gamma_2&=&4p^2(p+2q)\alpha-18(p+2q)(1-2p-4q)\beta^2+(1-2p-4q)[2p(3+2p)+9(p+2q)\alpha]\beta.\nonumber\\
\end{eqnarray}
The conditions for avoiding ghost-instabilities
in the regimes along the tracker are given by
\begin{eqnarray}
{\cal G}_S|_{r_1=1, r_2 \ll 1} &>& 0\,, \qquad {\cal G}_S|_{\rm de~Sitter} > 0\,,\nonumber\\
c_S^2|_{r_1=1, r_2 \ll 1} &\geq& 0\,, \qquad ~c_S^2|_{\rm de~Sitter} \geq 0\,,\nonumber\\
{\cal G}_T|_{r_1=1, r_2 \ll 1} &>& 0\,, \qquad {\cal G}_T|_{\rm de~Sitter} > 0\,,\nonumber\\
c_T^2|_{r_1=1, r_2 \ll 1} &\geq& 0\,, \qquad ~ c_T^2|_{\rm de~Sitter} \geq 0\,,\nonumber\\
c_T^2|_{r_2,{\rm min}} &>& 0.
\label{condition:1}
\end{eqnarray}
If the initial condition of $r_1$ is $r_1 \ll 1$,
we then must impose the conditions for avoiding ghost-instabilities
in the regime $r_1 \ll 1$ and $r_2 \ll 1$, which are given by
\begin{eqnarray}
{\cal G}_S|_{r_1 \ll 1, r_2 \ll 1} &>& 0\,, \qquad c_S^2|_{r_1 \ll 1, r_2 \ll 1} \geq 0, \nonumber\\
{\cal G}_T|_{r_1 \ll 1, r_2 \ll 1} &>& 0\,, \qquad c_T^2|_{r_1 \ll 1, r_2 \ll 1} \geq 0.
\label{condition:2}
\end{eqnarray}
We also impose the condition that the other fixed points $r_a$ and $r_b$
(see appendix \ref{App:otherfixedpoint}) are not real or lie outside
the interval $0<r_1 \leq 1$, which is given by
\begin{eqnarray}
\Delta<0~~~~{\rm or}~~~~
r_{a,b} < 0 ~~~~{\rm or}~~~~ r_{a,b} \geq 1.
\label{condition:3}
\end{eqnarray}
Note that as long as the initial condition of $r_1$ is near
$r_1=1$ and the scalar field follows the tracker from an early stage,
the conditions (\ref{condition:2}) and (\ref{condition:3}) do
not have to be imposed.
Let us classify the constraints into four classes:
(a) the constraint from the gravitational
Cherenkov radiation, which is given by eq.~(\ref{constraint})
and eq.~(\ref{ct2tracker}) with $c_T=c_T|_{z=0}$,
(b) the theoretical constraint (\ref{condition:1}) to avoid the ghost-instabilities
when the scalar field follows the tracker solution from an early stage,
assuming that the tracker is near $r_1=1$ initially,
(c) the theoretical constraint (\ref{condition:2}) and (\ref{condition:3})
in addition to (\ref{condition:1}) to avoid the ghost-instabilities
when the scalar field does not follow the tracker solution initially,
assuming that the initial condition of $r_1$ is sufficiently small,
(d) the constraints from cosmological observations:
type Ia supernovae, the shift parameter from the cosmic microwave background,
and the baryon acoustic oscillations.
\begin{figure}[t]
\begin{tabular}{cc}
\begin{minipage}{0.5\textwidth}
\begin{center}
\includegraphics[scale=0.4]{region1_ntk.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{center}
\includegraphics[scale=0.4]{region2_ntk.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{The allowed parameter space which satisfies the constraint (a)
from the gravitational Cherenkov radiation (\ref{constraint}) and the
constraint (c) from (\ref{condition:1}), (\ref{condition:2}), and
(\ref{condition:3}).
The left panel assumes $p=1$ and $q=1/2$, while
the right panel assumes $p=1$ and $q=5/2$.
}
\label{fig:parameter1}
\end{figure}
\begin{figure}[t]
\begin{tabular}{cc}
\begin{minipage}{0.5\textwidth}
\begin{center}
\includegraphics[scale=0.4]{region1_tk.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{center}
\includegraphics[scale=0.4]{region2_tk.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{The allowed parameter space which satisfies the constraint (a)
from the gravitational Cherenkov radiation (\ref{constraint}) and the
constraint (b) from (\ref{condition:1}).
The left panel assumes $p=1$ and $q=1/2$, while
the right panel assumes $p=1$ and $q=5/2$.
}
\label{fig:parameter2}
\end{figure}
Figure~\ref{fig:parameter1} shows the regions allowed by
the constraint (a) and the constraint (c) for $p=1$ and
$q=1/2$ (left panel) and $p=1$ and $q=5/2$ (right panel), where we adopt
$\Omega_{m0}h^2=0.1344$ and $\Omega_{r0}h^2=4.17\times 10^{-5}$ with
$h=0.7$. In this case, we see that there is no overlap region
except for $\alpha=0$ and $\beta = 0$. Thus, the constraint
from the gravitational Cherenkov radiation is crucial.
Figure~\ref{fig:parameter2} is the same as figure~\ref{fig:parameter1},
but for the constraint (a) and the constraint (b).
We see that the allowed region in parameter space is significantly reduced
by combining with the constraint from gravitational Cherenkov radiation (a).
In particular, there is no overlap region with positive
values of $\alpha$ and $\beta$ in figure~\ref{fig:parameter2}.
In general, one can show that the constraints (a) and (b) together require
$\alpha$ and $\beta$ to be negative or zero
for any values of $p\geq 1$ and $q \geq 0$
(see appendix \ref{App:negativeab}).
We must further include the constraint from cosmological
observations (d).
The authors in ref.~\cite{CovGalileon2}
investigated the constraint on the covariant galileon model
($p=1$ and $q=1/2$) from the observational data of
type Ia supernovae, the shift parameter from the cosmic
microwave background, and the baryon acoustic oscillations.
They showed that the early tracking solution,
corresponding to case (b),
is disfavored by the cosmological constraints (d).
On the other hand, the solutions that approach the tracker
solution only at late times, corresponding to case (c),
are favored when small spatial curvature is taken into account
(see also \cite{ObsExtGalileon}).
However, the latter case is significantly constrained by combining
with the constraint (a), though we do not take the spatial curvature
into account.
Thus, the constraint from the gravitational Cherenkov radiation
plays a very important role in reducing the allowed
parameter space of the extended galileon model. In ref.~\cite{ISW}, it
is demonstrated that the integrated Sachs-Wolfe effect
yields a stringent constraint on a subclass of the galileon model.
Tighter constraints could be obtained by combining these
constraints.
\section{Conclusion}
\label{sec:5}
In this paper, we studied constraints on general scalar-tensor theories
on a cosmological background,
in which the propagation speed of gravitational waves
differs from the speed of light,
using the survival of high-energy cosmic rays against gravitational Cherenkov radiation.
In these theories, the coupling of the scalar field $\phi$ and its kinetic term $X$
with gravity causes the violation of Lorentz invariance in a cosmological
background, leading to a time-dependent propagation speed of gravitational
waves.
We demonstrated that such models can be tightly constrained in this way.
We first considered constraints on the purely kinetic coupled gravity
and found that the conditions for the existence of a desired late-time solution
and avoiding ghost-instability is $0<\delta<2/5$
while the constraint from the gravitational Cherenkov radiation gives
$\delta > 1-{\cal O}(\epsilon)$, where $\epsilon=2\times 10^{-15}$.
Thus the purely kinetic coupled gravity is inconsistent
with the argument of the gravitational Cherenkov radiation.
We also focused our investigation on the extended galileon model,
which is a generalization of the covariant galileon model
in the framework of the most general second-order scalar-tensor theory.
We showed that there is no allowed parameter space
except for $\alpha=\beta=0$ by combining
the condition for avoiding ghost instabilities with
the constraints from gravitational Cherenkov radiation,
if the initial condition of $r_1$ is sufficiently small.
Even if the initial condition of $r_1$ is placed near the
tracker $r_1=1$, the allowed parameter space is tightly
constrained by combining the gravitational Cherenkov radiation
with cosmological constraints such as type Ia supernovae, the shift
parameter from the cosmic microwave background,
and baryon acoustic oscillations.
Thus the constraint from gravitational Cherenkov radiation
is important for constraining general second-order scalar-tensor theories
on a cosmological background,
whose propagation speed of gravitational waves is less than the speed of light.
\acknowledgments
This work was
supported in part by JSPS Grant-in-Aid for Scientific Research
No.~21540270 and No.~21244033 and JSPS Core-to-Core Program
``International Research Network for Dark Energy''.
R.K. acknowledges support by a research assistant program
of Hiroshima University.
R.K. was also supported in part by a Grant-in-Aid for JSPS Fellows.
K.Y. thanks M. Yamaguchi for a useful discussion at a workshop held
in Takehara on the sound speed of the tensor mode
in the most general second-order scalar-tensor theory.
We also thank T. Kobayashi for useful discussions,
when the authors initiated this work.
\section{Introduction}
Learning renders robots capable of performing a variety of tasks well in diverse environments and has recently attracted worldwide attention \cite{Kaelbling915}. Many learning topics are investigated in robotics, such as imitation learning and reinforcement learning, and many up-to-date robot mechanisms are involved, such as soft robots \cite{Shah} and legged robots \cite{Hwangbo}. Although their intelligence has been tremendously improved by learning different skills (such as grasping \cite{Ficuciello}, locomotion \cite{Won}, and manipulation \cite{Fazeli}), robots cannot yet step into our daily life. One critical difficulty lies in data. Although several documents attempt to collect data directly from physical robots \cite{Levine18}, it is quite costly to acquire sufficient data from the real world, for reasons including the enormous variability of environments, wear and tear, and so forth. Training models or policies in simulation and transferring them to the real world is one feasible solution, since a simulator can in principle provide enough data. However, it faces the reality gap, which denotes the difference between the simulator and the real world, and many researchers endeavor to close this Sim2Real gap \cite{hofer}. Domain randomization trains across many environments in which parameters and properties are randomized, expecting the physical system to be one sample of the training variations. This technique is simple yet effective, especially suitable for deep networks. It is applied widely to many robotic tasks and achieves good performance.
For robot learning, dynamics is essential, especially for movement planning and policy optimization, as it relates the forces acting on a robot mechanism to the accelerations they produce. Model-based reinforcement learning algorithms, for instance, need to capture dynamics changes and may otherwise fail to provide accurate transition states \cite{moerland2021modelbased}. Many documents study the learning of robot dynamics. System identification is an early attempt that tunes parameters to match robot behaviors \cite{Yu-RSS-17}; its shortcoming is obvious: it is time-consuming and error-prone. Dynamics randomization augments the training set with a wide range of parameters for dynamics learning \cite{Peng}. This method has shown good transfer performance to reality but still needs to regenerate simulation data and reapply the Sim2Real technique whenever it meets new requirements with different robot configurations. To spare this tedious cycle of iterative sampling and modeling, this paper considers the following question: Is it possible to learn general robot dynamics (GRD), with which practitioners can simplify the process of dynamics learning? GRD should cover massive numbers of robot instances and be able to transfer to a variety of physical mechanisms. This new concept has substantial merits. It captures the essence of robot dynamics, provides a general environment for learning in simulation, and lowers the corresponding threshold, enabling a beginner to obtain robot models without attention to details.
Generative pre-training (GPT) is a transformer-based model trained on a massive dataset, exhibiting significant ability in natural language processing \cite{Brown}. It is similar to decoder-only transformers, with the difference lying in the immense model scale and training data. As an unsupervised learning method, it also applies to predicting pixels without structural knowledge \cite{Chen}, labeling data for graph neural networks \cite{Hu}, generating synthetic biological signals \cite{Bird}, and so forth. Its success may lie in the fact that instances of downstream tasks appear in the succeeding inputs, which facilitates prediction. We observe that, for any robot model, the dynamics is embodied in continuous trajectories, where the inputs are a series of coherent variables; this satisfies GPT's requirements and makes it possible to use a GPT-like structure to learn GRD.
This paper studies the learning of GRD, endeavoring to pre-train a model over a variety of serial robots. To the best of our knowledge, this is the first attempt to learn GRD. We investigate \emph{generality} from three aspects: dynamics parameters, topology configuration, and model dimension. The model dimension determines the number of robot variables; the topology configuration determines the connection of robot joints; the dynamics parameters determine the properties of robot links. We generate datasets by randomizing the above three aspects and the execution process within constraints, and apply a structure modified from GPT to learn the general dynamics.
We also extend the idea of generality to the inverse of robot dynamics. We study the general inverse dynamics, trained independently, as well as the left and the right inverses of GRD, which serve specific purposes and take the errors of GRD into account.
This paper continues to investigate a new concept, \emph{Gen2Real}, to transfer general models directly to reality. For a clear view, given a physical robot, this process can be divided into Gen2Spe and Spe2Real. The first trains a general model to fit a specific robot in simulation, and the second transfers the simulated model to reality with experimental data.
We hope the general models we provide can facilitate policy learning in simulation, since they cover enormous numbers of robot dynamics. We also hope that, with these models, practitioners can spare the tedious process of checking nominal parameters, generating large amounts of simulation data, and training networks when transferring to reality. In summary, these general models can lower the threshold, attracting beginners to robot learning.
\section{Related Work}
\subsection{Robot dynamics learning}
Robot dynamics is crucial for simulating behaviors in policy learning; model-based reinforcement learning, for example, needs the dynamics to provide state transitions whenever it attempts a policy. The dynamics cannot be perfectly accurate due to ubiquitous noises and disturbances. Many learning methods have been presented to address this, such as reusing existing experience \cite{Christopher}, employing an episodic method \cite{Folkestad}, exploiting Hopf bifurcations \cite{Khadivar}, recruiting recurrent spiking neural networks \cite{Gilra}, and so forth. The inverse dynamics also attracts attention for designing appropriate motions; an example is the design of model-based controllers, which cancel out non-linearities and track with zero error. It also bridges the torques of the simulated model and the states of the physical robot \cite{Desai}. Some identification-based methods are applied to learn the inverse dynamics of robot manipulators, such as a cascaded method imitating the recursive Newton-Euler formulation \cite{Sahand}, combining online and offline neural networks \cite{Panda}, and learning a non-minimum-phase system \cite{Zhou}. These models are good at learning one specific robot and are not suitable for direct transfer to another physical platform.
\subsection{Domain randomization}
Domain randomization varies parameters to define a simulation environment and models differences between the source and the target domains. It employs a wide range of simulated parameters and intends to make neural networks generalize well to real-world tasks. Many vision-based policy tasks apply this method and train networks on simulation data. The randomization may include scene appearance and robot kinematics, and visual control policies are learned and transferred to the non-randomized real-world images. Recent techniques utilize this randomization to deal with partial occlusions \cite{Tobin}, adapt distributions with a few real-world data \cite{Chebotar}, and use 3D CAD images as source inputs \cite{Sadeghi}.
When solving dynamics-related tasks, dynamics randomization is a prevalent method to generalize skills to reality \cite{Valassakis}. The randomized physical features may include the mass and sizes of robot bodies, joint friction, observation noise, and other properties. It applies to developing policies by randomizing simulated dynamics \cite{Peng}, learning predictive dynamics in inference \cite{Alvaro}, and designing a universal policy trained over a wide array of dynamics models \cite{Wenhao}. Another successful application is dexterous manipulation \cite{Andrychowicz}, in which parameters with high uncertainty are frequently randomized. One view of the success of domain randomization is that it is based on bilevel optimization.
Its main challenge lies in selecting the proper parameter set, which is sensitive to the manually-specified distribution \cite{Liang-RSS-20}. One approach to solving the problem is active domain randomization \cite{Mehta}, which learns a sampling strategy of parameters, searching for the most informative variations within randomization ranges. Another is automatic domain randomization \cite{Akkaya}, which attempts to generate randomized environments automatically with changeable distribution ranges.
Among existing documents on dynamics-related tasks, most focus on randomizing the dynamics parameters of the target robot, which works well for that type of physical mechanism; randomization of other factors influential to robot dynamics, such as topology configurations and model dimensions, is not considered.
\subsection{Generative pre-training}
The technique of large-scale pre-training uses transformer-based architectures and has recently achieved impressive success. GPT-2 \cite{Radford}, as an example, can improve natural language understanding on a diverse corpus of unlabeled text after training on an enormous dataset. GPT-3 \cite{Brown}, proposed recently, uses 175 billion parameters and 570GB of training data and achieves strong performance on many datasets without gradient updates. These models demonstrate the power of GPT and have been applied to many other disciplines: predicting pixels with no knowledge of the input structure \cite{Chen}, learning speech representations \cite{Chung}, classifying tumors in MR images \cite{Ghassemi}, and so forth. GPT is also employed to overcome defects of other methods, e.g., to reduce the labeling effort of graph neural networks \cite{Hu}.
\section{General Dynamics Model}
We first analyze the influential factors to robot dynamics and then describe the method to learn GRD, with an attempt to cover the dynamics of enormous serial robots. We also discuss the differences from dynamics randomization.
\subsection{Robot dynamics}
Consider robot dynamics
\begin{equation}\label{equ:dynamics}
\textbf{\emph{M}}\left(\textbf{\emph{q}}\right)\ddot{\textbf{\emph{q}}} + \textbf{\emph{C}}\left( \textbf{\emph{q}}, \dot{\textbf{\emph{q}}} \right)\dot{\textbf{\emph{q}}} + \textbf{\emph{G}}\left( \textbf{\emph{q}} \right) = \tau + \textbf{\emph{f}}
\end{equation}
where $\textbf{\emph{q}}$, $\dot{\textbf{\emph{q}}}$, and $\ddot{\textbf{\emph{q}}}$ are the vectors of joint position, velocity, and acceleration, respectively, $\tau$ is the input joint torque, $\textbf{\emph{f}}$ is the joint torque due to friction and disturbances, $\textbf{\emph{M}}$ is the inertia matrix, $\textbf{\emph{C}}$ is the centrifugal and Coriolis term, and $\textbf{\emph{G}}$ is the gravity term. Label $\textbf{\emph{s}} = \left[\textbf{\emph{q}}^{\mathrm{T}}, \dot{\textbf{\emph{q}}}^{\mathrm{T}}\right]^{\mathrm{T}}$ the robot state. Dynamics learning obtains the mapping from the current state $\textbf{\emph{s}}_t$ and the torque $\tau_t$ to the next state $\textbf{\emph{s}}_{t+1}$. From the above equation, we draw the following three observations:
\begin{itemize}
\item [1)] The value of each element in those matrices and vectors varies as robot parameters change.
\item [2)] The expression of each matrix is also different as the robot configuration changes.
\item [3)] The dimension of each matrix and vector alters as the robot link number changes.
\end{itemize}
It appears that the configuration and the dimension have more influence, since, unlike the parameters, they change the very expression of the dynamics equation.
System identification tunes parameters to match robot behaviors, which corresponds to approximating the exact dynamics equation; it tends to be time-consuming and error-prone. Dynamics randomization augments the training set with a wide range of parameters for dynamics network learning, which accounts for the variation of matrix values in Eq. \ref{equ:dynamics}. This method needs to restart the whole training process whenever it meets new requirements with different robot configurations. Another possible approach goes further, accounting for the variation of matrix values, expressions, and dimensions. It covers all three factors above, and we call the result GRD, since this model spans various dynamics parameters, topology configurations, and model dimensions. This general model builds a general environment for learning in simulation, avoids iterative sampling and modeling, and simplifies dynamics learning.
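To make the state-transition mapping concrete, the following sketch (a hypothetical illustration, not the paper's code) integrates Eq. \ref{equ:dynamics} for a single-link pendulum with a fourth-order Runge-Kutta step, mapping the current state and torque to the next state; the mass, length, and friction values are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical single-link instance of Eq. (1): M(q) = m*l^2 (point mass),
# G(q) = m*g*l*sin(q), C vanishes for one link, f = -mu*qdot (friction).
def accel(q, qd, tau, m=1.0, l=0.5, mu=0.1, g=9.81):
    M = m * l**2                      # inertia "matrix" (a scalar here)
    G = m * g * l * np.sin(q)         # gravity term
    f = -mu * qd                      # friction/disturbance torque
    return (tau + f - G) / M          # solve Eq. (1) for the acceleration

def step(s, tau, dt=0.1):
    """One fourth-order Runge-Kutta step; state s = [q, qdot]."""
    def deriv(s):
        q, qd = s
        return np.array([qd, accel(q, qd, tau)])
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2)
    k4 = deriv(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([0.3, 0.0])              # initial state [q, qdot]
s_next = step(s, tau=0.5)             # next state under a constant torque
```

A learned dynamics network approximates exactly this $(\textbf{\emph{s}}_t, \tau_t) \mapsto \textbf{\emph{s}}_{t+1}$ mapping, but without access to the analytic terms.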
\subsection{Learning of general dynamics}
With the above analysis, we present the learning process, as shown in Fig. \ref{fig:gpt}, where a network modified from GPT is employed to learn the general dynamics from a dataset. The dataset includes enormous numbers of robot models, acquired by randomizing dynamics parameters $P_p$, topology configurations $P_c$, and model dimensions $P_d$. Label $P=\{P_p,P_c,P_d\}$ the robot model.
\begin{figure}[t]
\centerline{\psfig{file=gpt_structure.eps,scale=0.95}}
\caption{The network structure and dataset for GRD learning. The data contains plentiful trajectories, each acquired by randomizing dynamics parameters, topology configurations, model dimensions, initial states, and driving torques. Constraints are added in states and torques to produce meaningful trajectories. The network is modified from GPT to adapt to robot dynamics learning. The inputs are joint states and torques of a trajectory in sequence, and the outputs are the prediction of the succeeding states.} \label{fig:gpt}\vspace{-0.5cm}
\end{figure}
The model dimensions illustrate how many links a robot has and significantly affect the scale of the dataset. We express it in the form of $P_d\in\{ 1,2,\cdots,o \}$, where $o\in \mathbb{N}^\star$. The results show that the dynamics we learn is downward compatible, which means the model trained for three links can be used directly for 2-link robots. Therefore, it is preferable in the dataset to have more trajectories of robots with more dimensions.
The topology configurations primarily address the connection type of robot links, namely the setting of each joint. The relative movement between two adjacent links can be rotational, linear, spherical, helical, and so forth. We focus on rotational joints, the elements of articulated robots, for detailed discussion, and our method also applies to other connection types. A revolute pair behaves differently depending on its rotation axis, and we can randomly select the rotational direction, with each joint spinning around an arbitrary axis. However, this arbitrariness does not comply with commonly-used industrial robots, and one can further limit the randomization by setting $P_c=\{a_i|a_i\in\{\pm \textbf{\emph{x}}, \pm \textbf{\emph{y}}, \pm \textbf{\emph{z}}\}\}$. This reduced range means that there is a state at which each joint spins along an axis of the Cartesian coordinates.
The dynamics parameters describe the properties of each link, including the mass $m_i$, the center of mass $\textbf{\emph{l}}_{cm,i}$, the length $\textbf{\emph{l}}_i$, the moment of inertia $\textbf{\emph{I}}_i$, and the friction coefficient $\mu_i$ of link $i$, expressed as $P_p = \{m_i, \textbf{\emph{l}}_{cm,i}, \textbf{\emph{l}}_i, \textbf{\emph{I}}_i, \mu_i\}$. This paper only considers serial robots, and the proposed method can also be applied to soft robots, parallel robots, and robots with closed-loop chains. We set the dynamics parameters so as to cover commonly used robots. Meanwhile, we can add constraints to the parameters to build models closer to real robots. These constraints limit the range of some parameters by relating them together, e.g., the center of mass is not at the end of the link. In other words, the constraints exclude improbable parameters, enhancing the effectiveness of the dataset. It is worth noting that a substantial range of parameters is necessary if one aims at a GRD model.
Constraints on joint states and torques are also valuable in generating meaningful movements, helping prohibit undesired behaviors such as out-of-reach accelerations and high-speed rotations. To obtain a simulation trajectory, we sequentially randomize a model dimension, a topology configuration, and dynamics parameters, randomly pick an initial state $\textbf{\emph{s}}_1$ under this robot model $P$, and continuously move the motors with randomized torques (motor bubbling). Iteratively repeating the above steps yields a dataset in which a trajectory takes the form $\left\{ \textbf{\emph{s}}_1, \tau_1, \cdots, \textbf{\emph{s}}_n, \tau_n\right\}$, where $n$ is the number of states.
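The sampling procedure above can be outlined as follows. This is a hypothetical sketch with illustrative names (`sample_model`, `sample_trajectory`), using the ranges later listed in Tab. \ref{tab:I}; the `integrate` function is a dummy stand-in for the RK4 integration of Eq. \ref{equ:dynamics}.

```python
import numpy as np

rng = np.random.default_rng(0)
AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
        (0, -1, 0), (0, 0, 1), (0, 0, -1)]   # +-x, +-y, +-z joint axes

def sample_model(max_links=6):
    """Randomize dimension P_d, configuration P_c, and parameters P_p."""
    n = rng.integers(1, max_links + 1)               # model dimension
    config = [AXES[rng.integers(6)] for _ in range(n)]  # joint axes
    params = []
    for _ in range(n):                               # per-link parameters
        m = rng.uniform(0.1, 10.0)                   # mass [kg]
        l_cm = rng.uniform(-0.1, 0.5)                # center of mass [m]
        l = l_cm * rng.uniform(10 / 7, 10 / 3)       # length tied to l_cm
        I = m * l**2 / 12 * rng.uniform(0.3, 3.0)    # inertia tied to m, l
        mu = rng.uniform(0.5, 2.5)                   # friction coefficient
        params.append((m, l_cm, l, I, mu))
    return n, config, params

def integrate(s, tau, dt=0.1):
    """Placeholder step; the paper uses RK4 integration of Eq. (1)."""
    n = len(tau)
    q, qd = s[:n], s[n:]
    qdd = 0.01 * tau                                 # dummy acceleration
    return np.concatenate([q + dt * qd, qd + dt * qdd])

def sample_trajectory(n, steps=50):
    """Motor bubbling: random initial state, random torque each step."""
    s = np.concatenate([rng.uniform(-np.pi, np.pi, n),  # initial q
                        rng.uniform(-1.0, 1.0, n)])     # initial qdot
    traj = []
    for _ in range(steps):
        tau = rng.uniform(-30.0, 30.0, n)
        traj.append((s.copy(), tau))
        s = integrate(s, tau)
    return traj
```

Repeating `sample_model` and `sample_trajectory` millions of times yields a dataset of the form $\left\{ \textbf{\emph{s}}_1, \tau_1, \cdots, \textbf{\emph{s}}_n, \tau_n\right\}$ per trajectory.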
A substantial range of dynamics parameters, together with the randomization of topology configurations and model dimensions, naturally calls for a large-scale network structure and a massive dataset for learning. Examining the robot motion sequences in the dataset, we find that the effects of the current torque appear in the next state. This satisfies GPT's requirements, so we use a structure, shown in Fig. \ref{fig:gpt}, modified from GPT, whose core idea is to approximate models via predicting data. The network input is a complete trajectory. More specifically, we can view it as a variable-size matrix, where each row corresponds to a vector $[\textbf{\emph{s}}_j^{\mathrm{T}},\tau_j^{\mathrm{T}}]^{\mathrm{T}}$ with $j\leq o$, and the column consists of each time step of a trajectory in sequence. It is worth noting that the network is downward compatible: It can accept inputs with model dimensions less than $o$. The network output is also a matrix, where each row is the predicted state of the next time step, and the column corresponds to the time sequence.
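One way to realize such a variable-size input matrix, and the downward compatibility with fewer links, is zero-padding up to the maximum dimension $o$. The encoding below is our illustrative choice, not necessarily the authors' exact one.

```python
import numpy as np

def pack_trajectory(states, torques, o=6):
    """Pack a trajectory into a (T, 3*o) input matrix.

    states: T state vectors [q, qdot] of length 2*n (n <= o links);
    torques: T torque vectors of length n. Each row holds
    [q (padded), qdot (padded), tau (padded)], zero-padded up to o links,
    so robots with fewer links fit the same fixed-width rows.
    """
    rows = []
    for s, tau in zip(states, torques):
        n = len(tau)
        q, qd = s[:n], s[n:]
        row = np.zeros(3 * o)
        row[:n], row[o:o + n], row[2 * o:2 * o + n] = q, qd, tau
        rows.append(row)
    return np.stack(rows)
```

The row count equals the trajectory length, so trajectories of any length and any dimension up to $o$ share one input format.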
We use the root mean square error (RMSE) to measure the deviation of the outputs from the succeeding inputs and apply the self-supervised learning technique for training. The network structure differs from GPT in three aspects. We replace the embedding layer with a fully-connected one for encoding, since the inputs are vectors; remove the softmax in the output layer to suit our tasks; and erase the positional encoding in the front layers (which, we believe, is not necessary for processing motion sequence data). The remaining parts are the same as GPT. $N$ blocks are sequentially connected to improve data processing, and each block is composed of a masked multi-head attention layer, a multi-head attention layer, two feedforward layers, and three ``add \& norm'' modules. A fully-connected layer follows the last block. Before processing a state, the multi-head attention improves the understanding of its relationship with other associated states, jointly attending to representation information at different positions \cite{Vaswani}. The ``masking'' hides later states so that the network predicts only from the current and preceding states.
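For intuition, the causal (``masked'') attention at the core of each block can be sketched in a simplified single-head form; the dimensions below are hypothetical, while the actual model uses multi-head attention with $d_{model}$ up to 512.

```python
import numpy as np

def masked_self_attention(X, Wq, Wk, Wv):
    """Causal self-attention: the output at step t attends only to steps <= t.

    X: (T, d) sequence of encoded [s_t, tau_t] vectors; W*: (d, d_k) projections.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (T, T) attention logits
    T = X.shape[0]
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # True above the diagonal
    scores = np.where(mask, -np.inf, scores)          # hide future time steps
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
d, dk, T = 8, 4, 5
X = rng.normal(size=(T, d))
out, w = masked_self_attention(X, *(rng.normal(size=(d, dk)) for _ in range(3)))
```

The strictly upper-triangular attention weights are zero, which is exactly the property that lets the network predict each next state from current and past inputs only.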
\subsection{Difference from dynamics randomization}
Although GRD and dynamics randomization appear similar in applying the randomization technique, they differ substantially:
\begin{itemize}
\item Research motivation: We intend to study a general dynamics model, aiming to cover various common instances of serial robots, whereas dynamics randomization focuses on a specific robot. We can thus directly apply the same general dynamics model to a variety of robots, while dynamics randomization must restart data generation and network training for each newly designated platform.
\item Configuration: We add randomization of joint pins, i.e., we cross over various topology configurations, which underpins the idea of generality. Dynamics randomization targets a specific robot configuration and obviously cannot transfer to robots with different configurations.
\item Parameters: Dynamics randomization mainly focuses on a small range around nominal parameters, e.g., $\left[ 0.25, 4\right]\times$ the default mass in \cite{Peng}, while our model covers a substantially larger parameter range to include various robot sets.
\item Dimension: Our model is compatible with robots of various link numbers; it can apply to both planar 2-link robots and spatial 6-link robots. Dynamics randomization does not address this issue and applies only to the learned dimension.
\item Network size: The two types of networks are easily distinguished. Dynamics randomization usually adopts a small network, e.g., four hidden layers of 128 neurons each \cite{Peng}, while our model employs on the order of $10^8$ parameters.
\end{itemize}
\section{Inverse Models of General Dynamics}
The state transition of various robots is acquired after learning GRD, and similarly we can investigate its inverse models. Inverse dynamics accepts the current state, $\textbf{\emph{s}}_t$, and the desired next state, $\textbf{\emph{s}}_{t+1}$, and outputs the torque, $\tau_t$, that achieves that state transition. There are two types of models relating to inverse dynamics, as shown in Fig. \ref{fig:inverse}, where the upper one learns general inverse dynamics and the lower two learn inverse models of GRD. Either model is trained with the same dataset as described in the previous section.
\begin{figure}[t]
\subfigure[]{
\label{fig:inverse:a}
\begin{minipage}[b]{0.445\textwidth}
\centering
\includegraphics[scale=0.85]{inverse_dynamics.eps}
\end{minipage}}\\
\subfigure[]{
\label{fig:inverse:b}
\begin{minipage}[b]{0.22\textwidth}
\centering
\includegraphics[scale=0.85]{inverse_dynamics-part1.eps}
\end{minipage}}
\subfigure[]{
\label{fig:inverse:c}
\begin{minipage}[b]{0.22\textwidth}
\centering
\includegraphics[scale=0.85]{inverse_dynamics-part2.eps}
\end{minipage}}
\caption{Inverse-dynamics-related models. (a) The general inverse dynamics model, which is trained independently from the dynamics. (b) The right inverse network and (c) the left inverse network of GRD, which consider the dynamics error in their learning. The first network has distinct physical meanings, and the latter ones form autoencoders with GRD.}
\label{fig:inverse}\vspace{-0.5cm}
\end{figure}
The same network structure used for GRD can be applied to acquire the general inverse dynamics, as shown in Fig. \ref{fig:inverse:a}, which also covers massive numbers of robot instances. Unlike the learning process of GRD, the outputs of the inverse dynamics do not appear in the succeeding inputs, and thus supervised learning is applied to train the network. This model has distinct physical meanings and can be used in a pre-trained environment, e.g., for trajectory planning under torque constraints and trajectory tracking.
Since general models still have non-negligible learning errors even after sufficient training, the independently-trained inverse dynamics has no apparent physical relationship with GRD; chaining them together produces larger (about twice the) model errors. To solve this problem, we view it from another angle and study inverses of the general dynamics that account for both models' errors. The lower half of Fig. \ref{fig:inverse} presents two inverse models, which differ when GRD produces apparent errors; the connection of the general dynamics and its inverse forms an autoencoder, limiting the gain of the two networks to around one. From one viewpoint, the inverse models intend to reduce the errors produced by GRD from different directions. The right inverse in Fig. \ref{fig:inverse:b} takes the current state and the outcome of GRD as input and endeavors to generate torques that equal the inputs to GRD. The combination of the dynamics and its right inverse can apply to circumstances in which torques are supposed to be maintained or tracked. Given the current and succeeding states, the left inverse in Fig. \ref{fig:inverse:c} aims to produce torques with which GRD outputs a state in a small range around the succeeding state. The combination of the left inverse and GRD is valuable in planning and tracking trajectories. The self-supervised learning technique is applied to train the two inverse models while keeping the dynamics model unchanged.
The differences among the three inverse models in Fig. \ref{fig:inverse} are: Given the current state $\textbf{\emph{s}}_t$, the inverse dynamics maps the succeeding state $\textbf{\emph{s}}_{t+1}$ to the torque $\tau_t$; the right inverse model relates the output of GRD $\hat{\textbf{\emph{s}}}_{t+1}$ to the torque $\tau_t$; the left inverse maps the succeeding state to the torque $\hat{\tau}_t$ that enables GRD to generate this succeeding state. With errors from GRD, these inverse models perform differently, but they become similar when GRD's errors approach acceptable values.
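The role of the left inverse can be illustrated with a toy linear stand-in for GRD (a purely hypothetical model; the real $F$ is the large pre-trained network described above): the left inverse recovers the torque for which the chained map $F(\textbf{\emph{s}}_t, g_L(\textbf{\emph{s}}_t, \textbf{\emph{s}}_{t+1}))$ reproduces the desired next state, so the pair behaves like an autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(4, 4)) + np.eye(4)   # toy state-transition matrix
B = rng.normal(size=(4, 2))                     # toy torque-input matrix

def F(s, tau):
    """Toy linear stand-in for GRD: s_{t+1} = A s_t + B tau_t."""
    return A @ s + B @ tau

def g_L(s, s_next):
    """Left inverse of F: the torque that makes F output s_next.

    For this linear stand-in, the least-squares solution is explicit;
    the paper instead trains a network with F held fixed.
    """
    return np.linalg.lstsq(B, s_next - A @ s, rcond=None)[0]

s, tau = rng.normal(size=4), rng.normal(size=2)
s_next = F(s, tau)
tau_hat = g_L(s, s_next)    # recovers the torque, so F(s, tau_hat) = s_next
```

The right inverse plays the symmetric role, reproducing the torque from $(\textbf{\emph{s}}_t, F(\textbf{\emph{s}}_t, \tau_t))$.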
\section{Gen2Real}
GRD and its inverse models provide a simulation environment covering massive robot models with varied parameters, configurations, and dimensions, in which one can test and optimize one's policies. After that, it is desirable to transfer models and skills to physical robots, and here we introduce a new concept, Gen2Real, which bridges general models and reality.
There are typically two ways of model transfer, as shown in Fig. \ref{fig:gen2real}. One is Gen2Real, transferring directly from general dynamics to reality (a specific target robot) with experimental data. We can view it as a transfer from a simulated model with a randomized setting (including parameters, configurations, and dimensions) to a physical, specific robot. It is convenient: Anyone can directly apply our pre-trained general models to different robots with experimental data generated by motor bubbling and obtain dynamics networks with sufficient fitting precision, sparing the dull modeling and training in simulation (as researchers do in Sim2Real).
Another transfer method is a combination of the Gen2Spe and Spe2Real processes, as the dashed green arrows in Fig. \ref{fig:gen2real} show. Gen2Spe adapts a general model toward a specific one that is closer to the target robot. To do this, we generate a simulation dataset by applying the same configuration and dimension as the target robot and randomizing the dynamics parameters in a range around the nominal values, and then train the network to fit the simulated robot model. Spe2Real transfers the specific model to reality with experimental data, similar to the process commonly used in dynamics randomization, with the difference lying in the network structure and size. The specific model is like a transfer station in a bridge between general models and reality, whose purpose is to increase the transfer performance at the expense of additional simulation data.
Both methods have merits. Gen2Real is direct, and practitioners can apply it without worrying about details such as the robot's configuration and parameters. Gen2Spe and Spe2Real take a roundabout route, requiring both training in simulation and the robot's nominal parameters, and perform well if the physical robot is a sample of the specific model.
\begin{figure}[t]
\centerline{\psfig{file=gen2real.eps,scale=0.9}}
\caption{The two processes of transferring to reality. One is Gen2Real, which transfers GRD to reality with experiment data. This process is simple, and practitioners can directly use it without trapped into details. Another is the combination of Gen2Spe and Spe2Real. Gen2Spe transfers GRD to a specific model with simulation data. This new model has the same dimension and configuration as the target robot, and the parameters are randomized in a range around the nominal values. Spe2Real transfers the specific model to reality with experiment data.} \label{fig:gen2real}\vspace{-0.5cm}
\end{figure}
Due to the massive network size, adjusting all weights during transfer will not yield good results given a small amount of experimental data; correspondingly, we only tune a few layers, which performs well. Pruning during transfer will be considered in later research.
\section{Experiments and Results}
\subsection{The pre-trained general models}
We apply an iterative process of setting a model and running trajectories to generate a dataset for learning in simulation. To set a model, we sequentially randomize a model dimension with $o\leq 6$, an axis direction for each revolute joint, and dynamics parameters for each link. We add several constraints to limit the dataset size during generation; the proposed method can apply to other instances beyond these limitations. The setting of no more than six dimensions reflects the minimum requirement for moving freely in three-dimensional space. The restriction to revolute joints focuses on the connections of articulated robots. Tab. \ref{tab:I} shows the parameter ranges, which consider both commonly used robot models and data effectiveness. We further strengthen the parameter constraints by relating some parameters: The relationship between link length and the center of mass avoids lopsided links, and the moment of inertia is related to mass and link length to restrict the material density to a reasonable range. The parameter range for GRD is thus much broader than those used in dynamics randomization. After acquiring a robot model, we sequentially randomize an initial state, $\textbf{\emph{s}}_1$, and joint torques for 50 time steps in one trajectory. During execution, based on Eq. \ref{equ:dynamics}, we use fourth-order Runge-Kutta to integrate the motion with a time interval $\Delta t = 0.1s$. This paper samples uniformly within the randomization ranges; other sampling methods, such as logarithmic sampling, can also be used. In total, the dataset has 20 million trajectories, each having 50 state points. We use $90\%$ for training and $10\%$ for testing.
\begin{table}[t]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end {tabular}}
\caption{The range of dynamics parameters, initial states, and driving torques.} \label{tab:I}
\begin{center}
\renewcommand\arraystretch{1.5}
\begin{tabular}{p{2.5cm}<{\centering} p{3.5cm}<{\centering} }
\hline\hline
Parameter & Range \\
\hline
Mass $m_i$ & $[0.1, 10]\,\mathrm{kg}$ \\
Friction coefficient $\mu_i$ & $[0.5, 2.5]$ \\
Center of mass $l_{cm,i}$ & $[-0.1, 0.5]\,\mathrm{m}$ \\
Link length $l_i$ & $l_{cm,i}\times[\frac{10}{7}, \frac{10}{3}]$ \\
Moment of inertia $I_i$ & $\frac{1}{12}m_i l_i^2\times[0.3, 3.0]$ \\
Initial position $q_{i,1}$ & $[-\pi, \pi]\,\mathrm{rad}$ \\
Initial velocity $\dot{q}_i$ & $[-1, 1]\,\mathrm{rad/s}$ \\
Joint torque $\tau_i$ & $[-30, 30]\,\mathrm{N\cdot m}$\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
Tab. \ref{tab:II} shows the GRD learning size and the corresponding performance. In this table, $n_{params}$ is the total number of parameters; $n_{blocks}$ is the number of blocks; $d_{model}$ is the data dimension of each block and also determines the neuron numbers of the linear layers, where the first layer has $d_{model}$ neurons and the second has $4\times d_{model}$ neurons; and $n_{heads}$ is the number of attention heads in the ``multi-head'' mechanism, which reshapes a long vector into matrices. We display two general dynamics models, i.e., for 2-link and 3-link robots, to show the learning performance. For the learning structure of the 3-link dynamics, as shown in Tab. \ref{tab:II}, we provide 200 million parameters in total, consisting of 64 blocks with eight heads and 512 data dimensions per block. Its RMSE is 0.1557, which combines the errors of joint positions and velocities. We also take the 3-link model as an example to test the effect of the structure scale. The medium structure contains half as many parameters, and the small one a quarter as many; we vary only the number of blocks and leave each block unchanged. The results show that increasing the structure scale improves the performance of general dynamics learning.
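The parameter counts in Tab. \ref{tab:II} can be roughly reproduced from $n_{blocks}$ and $d_{model}$ alone. The sketch below assumes a standard GPT block (four $d_{model}\times d_{model}$ attention projections plus a two-layer MLP with $4\times d_{model}$ hidden neurons) and ignores biases, layer norms, and embeddings; it is a consistency check, not the exact architecture.

```python
def gpt_block_params(d_model: int) -> int:
    """Approximate parameters of one GPT block: attention projections
    (Q, K, V, output: 4*d^2) plus a two-layer MLP with 4*d_model hidden
    units (d*4d + 4d*d = 8*d^2). Biases and layer norms are omitted."""
    attn = 4 * d_model * d_model
    mlp = 2 * 4 * d_model * d_model
    return attn + mlp

def approx_total_params(n_blocks: int, d_model: int) -> int:
    """Approximate total parameters of a stack of identical blocks."""
    return n_blocks * gpt_block_params(d_model)

# 3-link model in Tab. II: 64 blocks, d_model = 512.
print(approx_total_params(64, 512))  # -> 201326592, i.e. about 201M
```

The estimate of about 201M for 64 blocks agrees with the 200M entry, and 44 blocks give about 138M, close to the 141M reported for the 2-link model.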
We compare the performance of approximating various robot models with other approaches. To this end, we generate five thousand trajectories using both the 2-link and 3-link models and test the performance on tracking tasks. We pick two comparative methods: an LSTM with $4\times 128$ neurons, as employed in \cite{Peng}, and a 5-layer linear network with about 7200 neurons. Fig. \ref{fig:dynamics_result:a} shows the results. The linear network has the largest errors, with a mean of 1 and a standard deviation of 0.58; the LSTM is much better, with roughly half the mean and half the standard deviation. Our model performs the best on these trajectories, with a mean of 0.17 and a standard deviation of 0.16. These results demonstrate the advantages of our general dynamics model.
\begin{table}[t]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end {tabular}}
\caption{The network size and performance for different models of GRD.} \label{tab:II}
\begin{center}
\renewcommand\arraystretch{1.5}
\begin{tabular}{p{2.5cm}<{\centering} p{1.0cm}<{\centering} p{1.0cm}<{\centering}p{0.6cm}<{\centering}p{0.6cm}<{\centering}p{0.6cm}<{\centering}}
\hline\hline
Model & $n_{params}$ & $n_{blocks}$ & $d_{model}$ & $n_{heads}$ & RMSE \\
\hline
2-Link & 141M & 44 & 512 & 8 & 0.180\\
3-Link & 200M & 64 & 512 & 8 & 0.1557\\
\hline
3-Link (Small) & 50M & 16 & 512 & 8 & 0.1862\\
3-Link (Medium) & 100M & 32 & 512 & 8 & 0.1827\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[t]
\subfigure[]{
\label{fig:dynamics_result:a}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{dynamics_result.eps}
\end{minipage}}
\subfigure[]{
\label{fig:dynamics_result:b}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{inverse_dynamics_result.eps}
\end{minipage}}
\caption{The error comparison of different methods in tracking tasks with various robot a) dynamics and b) inverse dynamics.}
\label{fig:dynamics_result}
\end{figure}
\begin{table}[t]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end {tabular}}
\caption{Performance of different inverse dynamics.} \label{tab:III}
\begin{center}
\renewcommand\arraystretch{1.5}
\begin{tabular}{p{4.0cm}<{\centering} p{3.5cm}<{\centering} }
\hline\hline
Model & RMSE \\
\hline
General inverse dynamics & $4.212 N\cdot m$\\
GRD + general inverse dynamics & $8.127 N\cdot m$ \\
GRD + right inverse model & $2.784 N\cdot m$ \\
General inverse dynamics + GRD & $0.204$ \\
Left inverse model + GRD & $0.076$ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[t]
\subfigure[]{
\label{fig:inverse_results:a}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{inverse_result1.eps}
\end{minipage}}
\subfigure[]{
\label{fig:inverse_results:b}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{inverse_result2.eps}
\end{minipage}}
\subfigure[]{
\label{fig:inverse_results:c}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{inverse_result3.eps}
\end{minipage}}
\subfigure[]{
\label{fig:inverse_results:d}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{inverse_result4.eps}
\end{minipage}}
\caption{The errors of the combination of a) left inverse and GRD, b) general inverse dynamics and GRD, c) GRD and its right inverse, and d) GRD and general inverse dynamics.}
\label{fig:inverse_results}
\end{figure*}
We also present the learning results for the inverse models of general dynamics, as shown in Tab. \ref{tab:III}, taking 3-link robots as an example. The three types of inverse models have the same structure size as used in GRD learning. We train the general inverse dynamics and achieve an RMSE of $4.212 N\cdot m$. This inverse dynamics model has a physical meaning and can perform independently. However, if one connects it to the right of GRD, the RMSE is $8.127 N\cdot m$, almost double the error. This shows that, due to model errors, GRD and general inverse dynamics have no apparent physical relationship. We train the right inverse of GRD and obtain an RMSE of $2.784 N\cdot m$, about one third of the error of the combination of general inverse dynamics and GRD. Similar results hold for the left inverse. Putting general inverse dynamics to the left of GRD yields an RMSE of 0.204, about one third larger than GRD's own error. Combining the left inverse model with GRD reduces the RMSE to 0.076, approximately one third of the error of the general-inverse-dynamics-plus-GRD combination. These results show that the two dedicated inverse models cooperate with GRD with smaller accumulated errors, and their combinations are more suitable for cases where dynamics and inverse dynamics are needed simultaneously.
To analyze how these models work with various robots, we generate another one thousand trajectories on a hundred randomly selected models and test the performance of connecting GRD with the different inverse models. Fig. \ref{fig:inverse_results} shows the error distributions of the four combinations. Since the state includes position and velocity, we exhibit them separately in the first two subfigures. Fig. \ref{fig:inverse_results:a} shows the results of combining the left inverse and GRD. The mean of each joint position error is about 0.006 rad, a tiny deviation, and the average joint velocity error is about 0.02 rad/s. We conclude that, with GRD and its inverse, the position is more precise than the velocity. Fig. \ref{fig:inverse_results:b} replaces the left inverse with general inverse dynamics; the position errors show little difference, but a substantial divergence exists between the velocity errors. It appears that training the left inverse model from general inverse dynamics primarily reduces the error of the joint velocities. Fig. \ref{fig:inverse_results:c} presents the torque results of the combination of GRD and its right inverse. The base joint has a mean torque error of 2.5 $N\cdot m$, and the error means of the other two joints are around 1 $N\cdot m$. Replacing the right inverse with general inverse dynamics enlarges the torque error of every joint. These results demonstrate that the two inverse models yield smaller errors in joint velocities and torques than general inverse dynamics when connected to GRD.
To demonstrate the performance in learning various inverse dynamics, we also compare with the LSTM and the linear network, using the trajectories of the 2-link and 3-link robots tested in Fig. \ref{fig:dynamics_result:a}. Fig.~\ref{fig:dynamics_result:b} displays the error distributions of the different methods: the linear network has the largest RMSE, and our general model again performs the best, showing the superiority of our inverse dynamics models.
\subsection{Transfer to reality}
\begin{figure}[t]
\centerline{\psfig{file=exp_rob.eps,scale=0.4}}
\caption{The UR5e robot used in experiments.} \label{fig:rob}
\end{figure}
\begin{figure*}[t]
\subfigure[]{
\label{fig:transfer_results:a}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D1.eps}
\end{minipage}}
\subfigure[]{
\label{fig:transfer_results:b}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D2.eps}
\end{minipage}}
\subfigure[]{
\label{fig:transfer_results:c}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D3.eps}
\end{minipage}}
\subfigure[]{
\label{fig:transfer_results:d}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D4.eps}
\end{minipage}}
\subfigure[]{
\label{fig:transfer_results:e}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D_mean.eps}
\end{minipage}}\\
\subfigure[]{
\label{fig:transfer_results:f}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D-1.eps}
\end{minipage}}
\subfigure[]{
\label{fig:transfer_results:g}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D-2.eps}
\end{minipage}}
\subfigure[]{
\label{fig:transfer_results:h}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D-3.eps}
\end{minipage}}
\subfigure[]{
\label{fig:transfer_results:i}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D-4.eps}
\end{minipage}}
\subfigure[]{
\label{fig:transfer_results:j}
\begin{minipage}[b]{0.183\textwidth}
\centering
\includegraphics[scale=0.27]{transfer_D-_mean.eps}
\end{minipage}}
\caption{The training results of different methods in transferring to reality, where log of RMSE is used. a)-d) use a linear network, LSTM, Spe2Real, and Gen2Real to transfer dynamics models to reality. e) is the error distributions of the last 100 steps. f)-i) use the four models to transfer inverse dynamics models to reality. j) is their error distributions of the last 100 steps.}
\label{fig:transfer_results}
\end{figure*}
The UR5e robot, shown in Fig. \ref{fig:rob}, is used for transfer validation. We run the robot with motor babbling and collect a trajectory consisting of five thousand time steps, of which $80\%$ is used for training and the rest for testing. In transferring to reality, we adjust only the last layer of our network because of its massive scale. For comparison, we also employ the two comparative methods, the LSTM and the linear network, with the same structures as in the previous subsection, for Sim2Real. We use a similar parameter range, $[0.25, 4]$, as used in \cite{Peng}, and train these two networks.
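Tuning only the last layer can be illustrated with a deliberately minimal linear network; the function below is a hypothetical stand-in for the actual GPT-based model and training code, not our implementation.

```python
import numpy as np

def finetune_last_layer(layers, x, y, lr=0.01, steps=200):
    """Gradient descent on the final weight matrix only, keeping the
    earlier (frozen) layers fixed; least-squares loss. `layers` is a list
    of weight matrices applied left-to-right with no nonlinearity -- a
    minimal stand-in for a pre-trained network."""
    h = x
    for W in layers[:-1]:          # frozen part: forward pass only
        h = h @ W
    W_last = layers[-1].copy()     # only these weights are updated
    for _ in range(steps):
        grad = h.T @ (h @ W_last - y) / len(x)
        W_last -= lr * grad
    return layers[:-1] + [W_last]

# Tiny demo: recover a 2x2 target map while the first layer stays frozen.
x = np.eye(2)
target = np.array([[2.0, 0.0], [0.0, 3.0]])
tuned = finetune_last_layer([np.eye(2), np.zeros((2, 2))], x, x @ target,
                            lr=0.5, steps=200)
```

The frozen layers act as a fixed feature extractor, so only a small number of weights need to fit the limited experimental data, which is the rationale for last-layer tuning.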
Fig. \ref{fig:transfer_results} shows the transfer results of the different methods over a thousand episodes, where the star marks the error of directly applying the model to the experimental data without transfer; it indicates that the general model is more compatible with unseen robots. All models approximate the physical dynamics with acceptable learning errors, with the linear network having the largest. The upper half of Fig. \ref{fig:transfer_results} shows the results of dynamics transfer. Gen2Real has the best performance, with a mean of 0.018 and a standard deviation of 0.004. Spe2Real exhibits a moderate error, with a mean of 0.036 and a standard deviation of 0.010. The lower half of Fig. \ref{fig:transfer_results} shows the results of transferring inverse dynamics. The LSTM is the best, and Spe2Real is close, with an RMSE mean of 0.72 $N\cdot m$ and a standard deviation of 0.10 $N\cdot m$. Gen2Real is somewhat larger, with a mean of 0.83 $N\cdot m$ and a standard deviation of 0.21 $N\cdot m$. These results reveal that Gen2Real and Spe2Real can transfer the general models to reality with competitive performance.
We also test the capability of transferring to another robot set, entirely different from the UR5e model. We generate a new dataset by randomizing a 2-link robot with the $[0.25, 4]$ parameter range and different joint axes. We train the comparative methods with these data and then transfer them to the UR5e robot. Fig. \ref{fig:another_results} shows the results. The LSTM takes a long time to converge, with a mean of 5 at the end. The linear network behaves similarly to its Sim2Real case. Compared with them, Gen2Real has outstanding performance, with one quarter of the RMSE mean and one third of the standard deviation. This comparison validates that our models are superior when transferring from a different robot set.
\begin{figure}[t]
\subfigure[]{
\label{fig:another_results:a}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{another_plot.eps}
\end{minipage}}
\subfigure[]{
\label{fig:another_results:b}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{another_mean.eps}
\end{minipage}}
\caption{The results of various methods in learning a different robot dynamics and transferring to UR5e. They learn a 2-link model with randomized parameters in simulation and transfer to the UR5e robot in reality. a) The learning process of transfer. b) The distribution of RMSEs of the last 100 steps.}
\label{fig:another_results}
\end{figure}
\begin{figure}[t]
\subfigure[]{
\label{fig:DDtransfer_results:a}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{transfer_D-D.eps}
\end{minipage}}
\subfigure[]{
\label{fig:DDtransfer_results:b}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[scale=0.33]{transfer_DD-.eps}
\end{minipage}}
\caption{The transfer results of different methods in transferring a combination of robot dynamics and its inverse. a) Dynamics follows its inverse. b) Dynamics is followed by its inverse.}
\label{fig:DDtransfer_results}
\end{figure}
We also transfer the combination of robot dynamics and its inverse to reality. Fig. \ref{fig:DDtransfer_results} shows the results of the different methods. For the sequence in which dynamics is followed by its inverse, Gen2Real outperforms the other two methods, with distinct differences in approximation error. For the sequence in which dynamics follows its inverse, Gen2Real is inferior to the others, but the RMSEs do not differ much.
In summary, our general models and Gen2Real exhibit outstanding performance. In modeling various robots in simulation, GRD is superior to the comparative methods. In transferring to reality, Gen2Real is best for dynamics transfer and competitive for inverse dynamics. All in all, the ``generality'' of our models is demonstrated.
\section{Conclusion}
We study the learning of general robot dynamics and its inverse models for the first time. These models can promote robot policy learning since they incorporate the dynamics of massive numbers of robots. We consider ``generality'' from the viewpoint of different robot properties. In order of influence on dynamics, dynamics parameters change the attributes of robot links; topology configurations change the types of robot joints; and model dimensions change the number of robot links. We generate a dataset by randomizing these three elements for various robot models, along with driving torques for the motion trajectories. We employ a structure modified from GPT to learn general dynamics. We further propose and train three inverse models of GRD, each for specific applications. The results display the advantages of the proposed models, which outperform other methods in approximating various robot dynamics.
We also investigate Gen2Real to transfer the general models to reality. This method can eliminate some tedious processes in Sim2Real and thereby lower the threshold of dynamics learning. Comparisons with other methods show that Gen2Real achieves competitive performance in transferring to reality and superior results when transferring from a different robot set.
\section{Introduction}
The $\psi(3770)$ is the lowest-mass charmonium state above the $D\bar{D}$ threshold, and is generally regarded as a predominantly $1^{3}D_{1}$ charmonium state~\cite{3770}.
To investigate the nature of the $\psi(3770)$ resonance, the BESIII Collaboration
performed a
cross-section scan experiment, in which $e^+e^-$ data at 41 center-of-mass (CM) energy ($E_{\rm cm}$) points from
3.73 to 3.89~GeV were collected.
This data sample, referred to as the ``$\psi(3770)$ cross-section scan data,'' was collected during the time period from June 1st to June 16th, 2010.
The $\psi(3770)$ cross-section scan data can be used to study the line-shapes of the cross sections for
various hadronic final states produced in $e^+e^-$ annihilation in the energy region around the $\psi(3770)$.
Amplitude analyses of these line-shapes of cross sections will provide crucial information
to explore the anomalous line-shape observed by the BESII experiment in 2008~\cite{Int_ref2}.
They also benefit the measurements of the parameters of the $\psi(3770)$ resonance and shed light on the understanding of the branching fraction of $\psi(3770)\to$ non-$D\bar D$ decays~\cite{Int_nonDD1,Int_nonDD2,Int_nonDD3,Int_nonDD4,nonDDCLEO}.
In this paper, we present measurements of the integrated luminosity of the $\psi(3770)$ cross-section scan data at each $E_{\rm cm}$
by analyzing large angle Bhabha scattering events.
We follow a method similar to that used in the measurement of the integrated luminosity of the data taken at $E_{\rm cm}=$ 3.773 GeV with the BESIII detector~\cite{psi3770_lum}. Furthermore, the luminosities are checked with an independent measurement by
analyzing $e^+e^-\rightarrow(\gamma)\gamma\gamma$~events.
\section{BESIII detector}
BEPCII~\cite{ref1} is a double-ring $e^{+}e^{-}$ collider. The design peak luminosity is $1\times10^{33}$ cm$^{-2}$s$^{-1}$ at a
beam current of $0.93$ A and was achieved in 2016. The BESIII detector~\cite{ref1} has a geometrical acceptance of $93\%$ of $4\pi$ and consists of the following main
components: 1) a small-celled, helium-based main drift chamber (MDC) with 43 layers. The average single wire resolution
is 135 $\mu$m, and the momentum resolution for $1~\rm{GeV}$$/c$ charged particles in a $1~\rm{T}$ magnetic field is $0.5\%$; 2) an electromagnetic
calorimeter (EMC) made of 6240 CsI (Tl) crystals arranged in a cylindrical shape (barrel) plus two endcaps. For 1.0 GeV photons,
the energy resolution is $2.5\%$ (5\%) in the barrel (endcaps), and the position resolution is 6 mm (9 mm) in the barrel (endcaps); 3) a Time-Of-Flight system (TOF) for particle identification composed of a barrel part made of two layers with
88 pieces of 5 cm thick, 2.4 m long plastic scintillators in each layer, and two endcaps with 96 fan-shaped, 5 cm thick, plastic
scintillators in each endcap. The time resolution is 80 ps (110 ps) in the barrel (endcaps), corresponding to a $2\sigma$ K/$\pi$
separation for momentum up to about 1.0 GeV/$c$; 4) a muon chamber system (MUC) made of 1600 m$^{2}$ of Resistive Plate Chambers (RPC) arranged
in 9 layers in the barrel and 8 layers in the endcaps and incorporated in the return iron of the superconducting magnet. The position resolution is about 2 cm.
\section{Method}
In principle, any process with a well-known cross-section can be used to determine the integrated luminosity of the corresponding data set. The luminosity $\mathcal L$ can be calculated by
\begin{linenomath*}
\begin{equation}\label{eq:lum}
\mathcal L=\frac{N^{\rm obs}\times(1-\eta)}{\sigma\times\varepsilon},
\end{equation}
\end{linenomath*}
where $N^{\rm obs}$ is the number of observed events, $\eta$ is the background contamination rate, $\sigma$ is the cross section and $\varepsilon$ is the detection efficiency.
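Eq.~(\ref{eq:lum}) is straightforward to evaluate numerically. The snippet below applies it with purely illustrative inputs (none of these numbers come from this analysis) and propagates only the Poisson counting uncertainty of $N^{\rm obs}$.

```python
def integrated_luminosity(n_obs, eta, sigma, eps):
    """Eq. (1): L = N_obs * (1 - eta) / (sigma * eps).
    With sigma in nb, the result is in nb^-1."""
    return n_obs * (1.0 - eta) / (sigma * eps)

def stat_uncertainty(n_obs, eta, sigma, eps):
    """Statistical uncertainty, driven by the sqrt(N_obs) Poisson error."""
    return n_obs ** 0.5 * (1.0 - eta) / (sigma * eps)

# Illustrative inputs: 10^6 observed events, 10^-4 contamination,
# 100 nb effective cross section within acceptance, 60% efficiency.
L = integrated_luminosity(1e6, 1e-4, 100.0, 0.6)   # ~1.67e4 nb^-1
dL = stat_uncertainty(1e6, 1e-4, 100.0, 0.6)       # ~17 nb^-1
```

With $10^6$ observed events the relative statistical uncertainty is at the per-mille level, which is why the systematic uncertainties dominate in this measurement.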
In $e^+e^-$ experiments, useful processes for
the determination of integrated luminosity are the QED processes $e^+e^-\to(\gamma)e^+e^-$,
$e^+e^-\to(\gamma)\gamma\gamma$ and $e^+e^-\to(\gamma)\mu^+\mu^-$
since they have precisely calculated cross sections in QED and relatively simple, distinctive final states. Because it has the largest production cross section among them, the Bhabha scattering process~($e^+e^-\to(\gamma)e^+e^-$) is used to measure the integrated luminosity of
the $\psi(3770)$ cross-section scan data.
In this work, Babayaga v3.5~\cite{babayaga} is adopted as the generator to
determine the cross sections and the detection efficiencies.
\section{Luminosity measurement}
\subsection{Event selection}\label{sec:lummea}
The Bhabha scattering candidate events are selected by requiring exactly two oppositely-charged tracks that are well reconstructed in the MDC and satisfy $|\cos\theta|<0.70$,
where $\theta$ is the polar angle of the charged track.
Each good charged track must satisfy $|V_r|<1$~cm and $|V_z|<5$~cm.
Here $V_r$ and $V_z$ are the closest distance of the charged tracks to the interaction point in the plane perpendicular to the beam direction and along the beam direction, respectively.
To suppress the backgrounds from
$e^+e^-\to J/\psi\rm X$, where the $J/\psi$ decays into an $e^+e^-$ pair, and X refers to $\gamma_{\rm{ISR}}$, $\pi^0\pi^0$, $\eta$, $\pi^0$, or $\gamma\gamma$, the sum of the momenta of the two good charged tracks is required to be greater
than $0.9\times E_{\rm cm}/c$.
The momentum of each good charged track is also required to be less than $(E_{\rm{b}}/c+0.15)~\rm{GeV}$$/c$, where $E_{\rm b}$ is the beam energy and 0.15
GeV/$c$ is about 4 times the momentum resolution~\cite{psi3770_lum}.
The energy deposited in the EMC of each charged track ($E_{\rm EMC}$) is required to be larger than 1 GeV to reject the background from $e^+e^-\rightarrow(\gamma)\mu^+\mu^-$.
After applying the above selection criteria, most of the surviving events come from the process $e^+e^-\to(\gamma)e^+e^-$.
Taking $E_{\rm cm}=3.7358$~GeV as an example,
comparisons of the distributions of the momentum, polar angle and deposited energy in the EMC of the charged tracks between data and Monte Carlo (MC) simulation are shown in Fig.~\ref{fig:cmp}. Good agreement between data and MC simulation is observed.
\subsection{Background estimation}
Most of the surviving candidate events are from $e^+e^-\to(\gamma)e^+e^-$. Potential background contamination comes from two sources. One is the beam-associated background, such as beam-gas and beam-wall events. The other is background from $e^+e^-$ reactions, including $\psi(3770)\to D\bar{D}$, $\psi(3770)\to$ non-$D\bar{D}$,
$e^+e^-\to (\gamma)J/\psi$, $(\gamma)\psi(3686)$, $q\bar{q}$, $(\gamma)\mu^+\mu^-$ and $(\gamma)\tau^+\tau^-$. To study the beam-associated backgrounds, we analyze the separated-beam data samples collected at 3.400 GeV and 4.030 GeV with BESIII.
To estimate the background contamination rates for the other background processes, we analyze large MC samples generated at $E_{\rm cm}=3.773$~GeV.
The overall contamination rate $\eta$ is estimated by
\begin{linenomath*}
\begin{equation}
\eta=\frac{\sum\sigma^i\times\eta^i}{\sigma^{\rm Bhabha}\times\varepsilon^{\rm Bhabha}},
\end{equation}
\end{linenomath*}
where $\sigma^i$ and $\eta^i$ are the cross section and the contamination rate for a specific process $i$, respectively; and $\sigma^{\rm Bhabha}$ and $\varepsilon^{\rm Bhabha}$ are the cross section and detection efficiency, respectively, for the Bhabha scattering process.
The overall contamination rate of these backgrounds is estimated to be at the level of $10^{-4}$.
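The contamination-rate formula above can be sketched as follows; the cross sections and per-process rates below are placeholders only, chosen to show how individually small backgrounds combine to the quoted $10^{-4}$ level.

```python
def overall_contamination(backgrounds, sigma_bhabha, eps_bhabha):
    """eta = sum_i(sigma_i * eta_i) / (sigma_Bhabha * eps_Bhabha), with
    `backgrounds` a list of (cross section, contamination rate) pairs."""
    return sum(s * e for s, e in backgrounds) / (sigma_bhabha * eps_bhabha)

# Hypothetical inputs (nb, rate): a few processes with tiny selection rates.
eta = overall_contamination([(10.0, 1e-3), (5.0, 2e-4), (20.0, 1e-4)],
                            sigma_bhabha=100.0, eps_bhabha=0.6)  # ~2e-4
```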
\subsection{Numerical result}
\label{sec:rslt}
Inserting the numbers of observed Bhabha scattering events,
the contamination rates of backgrounds, the detection efficiencies
and cross sections calculated with the Babayaga v3.5 generator~\cite{babayaga} into Eq.~(\ref{eq:lum}),
we obtain the integrated luminosity at individual CM energy points for the $\psi(3770)$ cross-section scan data.
The measured integrated luminosities are summarized in the second column of Table~\ref{tab:rslt}. The total integrated luminosity of the $\psi(3770)$ cross-section scan data is determined to be $76.16\pm0.04\pm0.61$~pb$^{-1}$, where the first uncertainty is statistical and the second systematic, which will be discussed in the following.
\end{multicols}
\begin{center}
\includegraphics[width=0.4\textwidth]{Compare_p_dataMC37358_ep.eps}
\includegraphics[width=0.4\textwidth]{Compare_p_dataMC37358_em.eps}\\
\includegraphics[width=0.4\textwidth]{Compare_costheta_dataMC37358_ep.eps}
\includegraphics[width=0.4\textwidth]{Compare_costheta_dataMC37358_em.eps}\\
\includegraphics[width=0.4\textwidth]{Compare_EEMC_dataMC37358_ep.eps}
\includegraphics[width=0.4\textwidth]{Compare_EEMC_dataMC37358_em.eps}
\figcaption{\label{fig:cmp} Distributions of (a), (b) momentum, (c), (d) $\cos\theta$ and (e), (f) deposited energy in the EMC of the two charged tracks in the CM frame for selected Bhabha candidate events from the data taken at $E_{\rm cm}$ =3.7358 GeV (points with error bars) and the corresponding MC simulation (histograms). The MC entries are normalized to the experimental data.}
\end{center}
\begin{multicols}{2}
\section{Systematic uncertainty}
\label{sec:syserr}
The main sources of the systematic uncertainty are the event selection, the trigger efficiency, the generator, and the beam energy.
Due to the low luminosity at each individual energy point, we take the average value over the 41 CM energy points as the systematic uncertainty, to avoid large statistical fluctuations.
To estimate the systematic uncertainty of the $\cos\theta$ requirement, we repeat the measurements with the alternative requirements $|\cos\theta|<0.60$, $|\cos\theta|<0.65$, $|\cos\theta|<0.75$, or $|\cos\theta|<0.80$, individually. The maximum relative change of the total integrated luminosity with respect to the nominal value is taken as the systematic uncertainty.
To study the systematic uncertainty arising from the MDC information, including the tracking and momentum requirements, we select a Bhabha sample using only EMC information. Two clusters must be reconstructed in the EMC with a deposited energy larger than $0.85\times E_{\rm{b}}$ and a polar angle within $|\cos\theta|<0.7$. To remove $e^+e^-\rightarrow(\gamma)\gamma\gamma$ events, an additional requirement of $5^{\circ}<|\Delta\phi|<22^{\circ}$ is imposed, where $\Delta\phi$ is defined as $\Delta\phi=|\phi_{1}-\phi_{2}|-180^{\circ}$, and $\phi_{1}$ and $\phi_{2}$ are the azimuthal angles of the two showers in the EMC. The requirements on the MDC information are then imposed on the selected candidates, and the ratio of the surviving events is regarded as the corresponding acceptance efficiency. The difference of the acceptance efficiencies between data and MC simulation is taken as the relevant systematic uncertainty.
\begin{center}
\tabcaption{ \label{tab:rslt} Summary of integrated luminosities measured using the processes $e^+e^-\rightarrow(\gamma)e^+e^-$ ($\mathcal L^{e^+e^-}$) and $e^+e^-\rightarrow(\gamma)\gamma\gamma$ ($\mathcal L^{\gamma\gamma}$) at each individual CM energy, where the first uncertainties are statistical and the second are systematic.}
\footnotesize
\begin{tabular*}{80mm}{c@{\extracolsep{\fill}}cc}
\toprule $E_{\rm cm}$ (GeV) & $\mathcal L^{e^+e^-}$ (nb$^{-1}$) & $\mathcal L^{\gamma\gamma}$ (nb$^{-1}$)\\
\hline
3.6471 &$ 2255.4\pm 6.3\pm 18.0$ &$ 2250.3\pm15.5\pm24.8 $\\
3.6531 &$ 2214.0\pm 6.3\pm 17.7$ &$ 2184.1\pm15.3\pm24.0 $\\
3.7266 &$ 896.2\pm 4.1\pm 7.2$ &$ 879.8\pm9.9\pm9.7 $\\
3.7356 &$ 334.8\pm 2.5\pm 2.7$ &$ 340.9\pm6.2\pm3.7 $\\
3.7358 &$ 491.9\pm 3.0\pm 3.9$ &$ 484.8\pm7.4\pm5.3 $\\
3.7376 &$ 327.7\pm 2.5\pm 2.6$ &$ 324.1\pm6.0\pm3.6 $\\
3.7447 &$ 956.0\pm 4.2\pm 7.6$ &$ 933.9\pm10.3\pm10.3 $\\
3.7464 &$ 1412.2\pm 5.1\pm 11.3$ &$ 1404.1\pm12.6\pm15.4 $\\
3.7488 &$ 2270.9\pm 6.5\pm 18.2$ &$ 2267.6\pm16.0\pm24.9 $\\
3.7503 &$ 2971.8\pm 7.5\pm 23.8$ &$ 2962.7\pm18.3\pm32.6 $\\
3.7526 &$ 3310.7\pm 7.9\pm 26.5$ &$ 3308.1\pm19.4\pm36.4 $\\
3.7541 &$ 3418.1\pm 8.0\pm 27.3$ &$ 3370.0\pm19.6\pm37.1 $\\
3.7555 &$ 3878.0\pm 8.5\pm 31.0$ &$ 3824.9\pm20.9\pm42.1 $\\
3.7585 &$ 4444.8\pm 9.2\pm 35.6$ &$ 4411.9\pm22.4\pm48.5 $\\
3.7616 &$ 4494.7\pm 9.2\pm 36.0$ &$ 4456.9\pm22.5\pm49.0 $\\
3.7645 &$ 3290.3\pm 7.9\pm 26.3$ &$ 3277.4\pm19.3\pm36.1 $\\
3.7675 &$ 2449.9\pm 6.8\pm 19.6$ &$ 2419.2\pm16.6\pm26.6 $\\
3.7705 &$ 2021.7\pm 6.2\pm 16.2$ &$ 2001.7\pm15.1\pm22.0 $\\
3.7735 &$ 1833.0\pm 5.9\pm 14.7$ &$ 1818.0\pm14.4\pm20.0 $\\
3.7765 &$ 1829.4\pm 5.9\pm 14.6$ &$ 1823.1\pm14.5\pm20.1 $\\
3.7795 &$ 1956.1\pm 6.1\pm 15.6$ &$ 1933.1\pm14.9\pm21.3 $\\
3.7825 &$ 2148.3\pm 6.4\pm 17.2$ &$ 2116.8\pm15.6\pm23.3 $\\
3.7855 &$ 2546.7\pm 7.0\pm 20.4$ &$ 2538.0\pm17.1\pm27.9 $\\
3.7882 &$ 2840.9\pm 7.4\pm 22.7$ &$ 2811.2\pm18.0\pm30.9 $\\
3.7925 &$ 3537.2\pm 8.2\pm 28.3$ &$ 3506.3\pm20.1\pm38.6 $\\
3.7964 &$ 4056.9\pm 8.8\pm 32.5$ &$ 4006.1\pm21.6\pm44.1 $\\
3.8002 &$ 3931.2\pm 8.7\pm 31.4$ &$ 3911.1\pm21.3\pm43.0 $\\
3.8026 &$ 2690.5\pm 7.2\pm 21.5$ &$ 2671.3\pm17.6\pm29.4 $\\
3.8064 &$ 1762.4\pm 5.8\pm 14.1$ &$ 1732.0\pm14.2\pm19.1 $\\
3.8095 &$ 1252.3\pm 4.9\pm 10.0$ &$ 1275.1\pm12.2\pm14.0 $\\
3.8124 &$ 898.5\pm 4.2\pm 7.2$ &$ 898.5\pm10.3\pm9.9 $\\
3.8156 &$ 683.0\pm 3.6\pm 5.5$ &$ 666.6\pm8.8\pm7.3 $\\
3.8236 &$ 399.5\pm 2.8\pm 3.2$ &$ 386.3\pm6.7\pm4.2 $\\
3.8315 &$ 281.7\pm 2.3\pm 2.3$ &$ 278.5\pm5.7\pm3.1 $\\
3.8396 &$ 282.3\pm 2.4\pm 2.3$ &$ 269.6\pm5.7\pm3.0 $\\
3.8475 &$ 279.8\pm 2.4\pm 2.2$ &$ 273.8\pm5.7\pm3.0 $\\
3.8557 &$ 318.8\pm 2.5\pm 2.6$ &$ 317.8\pm6.2\pm3.5 $\\
3.8636 &$ 302.3\pm 2.5\pm 2.4$ &$ 300.6\pm6.0\pm3.3 $\\
3.8715 &$ 514.2\pm 3.2\pm 4.1$ &$ 507.7\pm7.8\pm5.6 $\\
3.8805 &$ 190.1\pm 2.0\pm 1.5$ &$ 188.1\pm4.8\pm2.1 $\\
3.8905 &$ 184.1\pm 1.9\pm 1.5$ &$ 172.2\pm4.6\pm1.9 $\\
\bottomrule
\end{tabular*}
\end{center}
To estimate the systematic uncertainties of the EMC cluster reconstruction and the $E_{\rm{EMC}}$ requirement, we select a Bhabha sample with almost the same selection requirements as those listed in Sec.~4.1 except for the deposited energy requirement. Additional requirements of $E_{\rm{EMC}}>1.0~\rm{GeV}$ and $E_{\rm{EMC}}/p>0.8$ are imposed on one charged track, and the other charged track is kept as the control sample. The differences of the acceptance efficiencies of the EMC cluster reconstruction and the $E_{\rm{EMC}}$ requirement between data and MC simulation are taken as the systematic uncertainties.
The uncertainty of the trigger efficiency is less than 0.1\%~\cite{eff_trig}. The systematic uncertainty due to background is negligible.
The uncertainty associated with the signal MC model
due to the Babayaga generator is assigned to be $0.5\%$ according to Ref.~\cite{Babayaga_syserr}. To estimate the systematic uncertainty due to beam energy, we repeat the measurement by shifting the CM energies by $\pm 0.5$, $\pm 1$
or $\pm 2\rm~MeV$, individually. The largest change in total integrated luminosity with respect to the nominal value is assigned as the
systematic uncertainty.
All of the systematic uncertainties are summarized in Table~\ref{table:syserr}.
Assuming the individual uncertainties to be independent, the total systematic uncertainty, 0.8\%, is calculated by adding them in quadrature.
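The quadrature combination can be reproduced directly from the entries of Table~\ref{table:syserr}:

```python
import math

def total_in_quadrature(uncertainties):
    """Combine independent systematic uncertainties (in %) in quadrature."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Per-source uncertainties (in %) for each process, as listed in the table.
bhabha = [0.2, 0.5, 0.2, 0.06, 0.1, 0.5, 0.11]    # e+e- -> (gamma)e+e-
digamma = [0.2, 0.2, 0.06, 0.05, 0.1, 1.0, 0.11]  # e+e- -> (gamma)gammagamma
print(round(total_in_quadrature(bhabha), 1),      # -> 0.8
      round(total_in_quadrature(digamma), 1))     # -> 1.1
```

For the Bhabha channel the tracking and generator terms dominate; for the $(\gamma)\gamma\gamma$ channel the total is driven almost entirely by the 1.0\% generator uncertainty.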
\begin{flushleft}
\tabcaption{ \label{table:syserr} Summary of systematic uncertainties in the luminosity measurement using the processes $e^+e^-\rightarrow(\gamma)e^+e^-$ and $e^+e^-\rightarrow(\gamma)\gamma\gamma$.}
\footnotesize
\begin{tabular*}{85mm}{c@{\extracolsep{\fill}}cc}
\toprule
\multirow{2}{*}{Source} & \multicolumn{2}{c}{Systematic uncertainty (\%)}\\
& $e^+e^-\rightarrow(\gamma)e^+e^-$ & $e^+e^-\rightarrow(\gamma)\gamma\gamma$\\
\hline
$|\cos\theta|<0.70$ & 0.2 & 0.2\\
Tracking and $p$ requirement & 0.5 & - \\
$E_{\rm EMC}$ requirement & 0.2 & 0.2 \\
EMC cluster reconstruction & 0.06 & 0.06 \\
$\Delta\phi$ requirement & - & 0.05 \\
Trigger efficiency & 0.1 & 0.1 \\
Generator & 0.5 & 1.0 \\
Beam energy & 0.11 & 0.11 \\
\hline
Total & 0.8 & 1.1 \\
\bottomrule
\end{tabular*}
\end{flushleft}
\section{Cross check}
As a cross check, we perform an independent measurement of the integrated luminosities of the $\psi(3770)$ cross-section scan data by analyzing the process $e^+e^-\rightarrow(\gamma)\gamma\gamma$.
To select events from the process $e^+e^-\rightarrow(\gamma)\gamma\gamma$, we require that the number of good charged tracks is zero.
Two neutral clusters are required to be within the polar angle region $|\cos\theta|<0.7$ and the deposited energy
of each cluster in the EMC should be larger than $0.4\times E_{\rm b}$. Since the
directions of photons are not affected by the magnetic field, the two photon candidates should be back-to-back, and are required to satisfy $|\Delta\phi|<2.5^{\circ}$, where $\Delta\phi$ is defined as previously described.
Figure~\ref{fig:ggdphi} shows a comparison of the $\Delta\phi$ distribution of
the $e^+e^-\rightarrow(\gamma)\gamma\gamma$ candidate events between the data taken at $E_{\rm cm}=3.7358$~GeV and the corresponding MC simulation. Good agreement is visible.
\begin{center}
\includegraphics[width=0.4\textwidth]{ggCompare_withArrow.eps}
\figcaption{\label{fig:ggdphi} The $\Delta\phi$ distributions of the $e^+e^-\rightarrow(\gamma)\gamma\gamma$ candidate events selected from the data taken at $E_{\rm cm}=3.7358$~GeV (points with error bars) and the corresponding MC simulation (histogram). The selected $\Delta\phi$ range is indicated by the two arrows. The MC entries are normalized to the experimental data.}
\end{center}
For the background estimation, we analyze the separated-beam data samples collected at 3.400 GeV and 4.030 GeV with BESIII, as well as MC samples of $\psi(3770)\to D\bar{D}$, $\psi(3770)\to$ non-$D\bar{D}$, $e^+e^-\to (\gamma)J/\psi$, $(\gamma)\psi(3686)$, $q\bar{q}$, $(\gamma)e^+e^-$, $(\gamma)\mu^+\mu^-$, and $(\gamma)\tau^+\tau^-$. The total contamination rate is estimated to be at the level of $10^{-3}$.
The integrated luminosity for the individual CM energy points is determined with Eq.~(\ref{eq:lum}) by using the numbers of observed $e^+e^-\to(\gamma)\gamma\gamma$ events, the contamination rates of backgrounds, the corresponding detection efficiencies, and cross sections calculated with the Babayaga v3.5 generator~\cite{babayaga}, as summarized in the third column of Table~\ref{tab:rslt}.
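Schematically, the luminosity determination at each energy point reduces to a one-line calculation. The sketch below assumes the standard form $\mathcal{L}=N_{\rm obs}(1-f)/(\varepsilon\,\sigma)$ for Eq.~(\ref{eq:lum}); all numerical values are placeholders for illustration, not measured inputs.

```python
# Sketch of the luminosity determination; we assume Eq. (lum) has the
# standard form L = N_obs * (1 - f) / (eps * sigma).  All numbers below
# are placeholders for illustration, not measured inputs.
N_obs = 50000    # observed (gamma)e+e- candidates (placeholder)
f_bkg = 1e-3     # background contamination rate (placeholder)
eps   = 0.60     # detection efficiency from signal MC (placeholder)
sigma = 147.0    # Babayaga cross section [nb] (placeholder)

L = N_obs * (1.0 - f_bkg) / (eps * sigma)   # [nb^-1]
print(f"L = {L:.1f} nb^-1 = {L / 1000:.4f} pb^-1")
```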
The main sources of the systematic uncertainty arise from the EMC cluster reconstruction, the requirements on $|\cos\theta|$, $E_{\rm EMC}$ and $\Delta\phi$, the trigger efficiency, the generator, and the beam energy. Most sources are the same as those in the luminosity measurement using Bhabha scattering events, and the corresponding systematic uncertainties are determined with the same approach.
To estimate the systematic uncertainty originating from the requirement on $\Delta\phi$, which is only used in the selection of $e^+e^-\rightarrow(\gamma)\gamma\gamma$ events, we repeat the measurements with the alternative requirements $|\Delta\phi|<2^{\circ}$ or $|\Delta\phi|<3^{\circ}$, individually. The maximum relative change of the integrated luminosity with respect to the nominal value is taken as the systematic uncertainty. The individual uncertainties are summarized in Table~\ref{table:syserr}, and the total systematic uncertainty, 1.1\%, is obtained by assuming the individual sources to be independent and adding their values in quadrature. The total integrated luminosity measured using $e^+e^-\rightarrow(\gamma)\gamma\gamma$ events is $75.50\pm0.09\pm0.83$~pb$^{-1}$, which is consistent with the result obtained using $e^+e^-\rightarrow(\gamma)e^+e^-$ events within uncertainties, but with relatively larger statistical and systematic uncertainties.
\section{Summary}
\label{sec:sum}
By analyzing $e^+e^-\to(\gamma)e^+e^-$ events,
we measure the integrated luminosities of the $\psi(3770)$ cross-section scan data taken at 41 CM energy points. The total integrated luminosity of the $\psi(3770)$ cross-section scan data is determined to be $76.16\pm0.04\pm0.61$~pb$^{-1}$, where the first uncertainty is statistical and the second systematic. As a cross check, we also measure the integrated luminosity of the $\psi(3770)$ cross-section scan data using $e^+e^-\to(\gamma)\gamma\gamma$ events. The result is consistent with that of the Bhabha-based measurement, but with larger uncertainties. The obtained integrated luminosities at the individual CM energy points are summarized in Table~\ref{tab:rslt}. The results provide important information needed to measure the cross sections of exclusive or inclusive hadronic production in
$e^+e^-$ annihilation and thus benefit the understanding of the anomalous line-shape of $e^+e^-\to$ inclusive hadrons observed at BESII, the nature of the $\psi(3770)$, and the origin of the large branching fraction of $\psi(3770)\to$ non-$D\bar D$ decays~\cite{Int_ref2}.
\section{Acknowledgement}
The BESIII collaboration thanks the staff of BEPCII and the computing center for their hard efforts. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11235011, 11335008, 11425524, 11625523, 11635010; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. U1332201, U1532257, U1532258; CAS Key Research Program of Frontier Sciences under Contracts Nos. QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; National 1000 Talents Program of China; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contracts Nos. Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) under Contract No. 530-4CDP03; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Science and Technology fund; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0010118, DE-SC-0010504, DE-SC-0012069; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt; WCU Program of National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.
\end{multicols}
\vspace{-1mm}
\centerline{\rule{80mm}{0.1pt}}
\vspace{2mm}
\begin{multicols}{2}
\section{Introduction}
\label{sec:intro}
The study of microswimming has exploded in recent years with the advent of
precise, well-controlled experiments. (See for instance the reviews
of~\citet{Pedley1992} and~\citet{Lauga2009}.) This has uncovered a plethora of
fascinating behavior, for example the complex interaction of microswimmers
with boundaries~\cite{Rothschild1963, Winet1984, Cosson2003, Lauga2006,
Berke2008, Drescher2009}, or the collective suspension instability (swirls
and jets) at high concentrations of `pushers,' organisms whose propulsion
mechanism is at the rear~\cite{Dombrowski2004, HernandezOrtiz2005,
Underhill2008, Underhill2011, Saintillan2007, Sokolov2009, Saintillan2012}.
Another fruitful research direction is biogenic mixing, or biomixing for
short. Does the motion of swimmers influence the effective diffusivity of
passive scalars advected by the fluid, such as the nutrients the organisms
depend on? This has been proposed as a mixing mechanism in the
ocean~\cite{Huntley2004, Dewar2006, Kunze2006, Katija2009, Dabiri2010,
Leshansky2010, Thiffeault2010b, Lorke2010, Katija2012}, though its
effectiveness is still very much open to debate~\cite{Visser2007, Gregg2009,
Kunze2011, Noss2014}. Biomixing has also been studied in suspensions of
small organisms~\cite{Ishikawa2010, Kurtuldu2011, Mino2011, Zaid2011}.
The main ingredient in formulating a theory for the enhanced diffusion due to
swimming organisms is the \emph{drift} caused by the
swimmer~\cite{Maxwell1869, Darwin1953, Lighthill1956}. \citet{Katija2009} and
\citet{Thiffeault2010b} proposed that the enhanced diffusivity is due to the
repeated displacements induced by a swimmer on a particle of fluid.
\citet{Thiffeault2010b} and~\citet{Lin2011} formulated a probabilistic model
where, given the drift caused by one swimmer, an effective diffusivity could
be computed. This model has been tested in physical and numerical
experiments~\cite{Jepson2013, Morozov2014, Kasyap2014} and modified to include
curved trajectories~\cite{Pushkin2013b} and confined
environments~\cite{Pushkin2014}. Mi\~{n}o
\emph{et~al.}~\cite{Mino2011,Mino2013} observe that effective diffusivity is
inversely related to swimming efficiency, and find increased diffusivity near
solid surfaces, both theoretically and experimentally. The drift caused by
individual microswimmers has also been studied in its own
right~\cite{Dunkel2010, Pushkin2013}. \citet{Pushkin2013b} also found an
analytical expression for stresslet displacements, valid in the far field.
The studies mentioned above have typically been concerned with the effective
diffusivity induced by the swimmers, but one can also ask more detailed
questions about the distribution of displacements of fluid particles.
\citet{Wu2000} studied the displacement of spheres larger than the swimming
organisms. More recently, \citet{Leptos2009} studied the microscopic algae
\textit{Chlamydomonas reinhardtii}. They used spheres that are much smaller
than the organisms, so their distributions can be taken to be close to the
displacements of idealized fluid particles. The probability density function
(pdf) of tracer displacements was found to be strongly non-Gaussian, though
the distributions scaled `diffusively': they collapsed onto each other if
rescaled by their standard deviation.
Several papers have dealt with these non-Gaussian distributions.
\citet{Zaid2011} examine the velocity fluctuations due to swimmers modeled as
regularized point stresslets, and obtain strongly non-Gaussian tails. The
non-Gaussianity in their case is due to the divergence of the stresslet near
the singularity, which indicates large displacements. While the broad outline
of this mechanism is surely correct, examining this singular limit is
questionable: it is never valid to evaluate terms such as the stresslet in the
singular limit, since the swimmer's body necessarily regularizes the velocity.
In addition, no direct comparison to experiments is offered beyond a comment
that the data `resemble the measurements of \citet{Leptos2009}.'
\citet{Pushkin2014} extended this work to confined environments, and we will
contrast their results to ours. As we will show here, the non-Gaussianity
arises from the rarity of interaction events --- the system is very far from
the Gaussian limit. Note also that \citet{Eckhardt2012} have fitted the
distributions of \citet{Leptos2009} very well to a continuous-time random walk
model, but this does not suggest a mechanism and requires fitting different
parameters at each concentration.
What causes the non-Gaussian form of the displacement distribution? As was
pointed out by \citet{Pushkin2014}, the experiments are run for a very short
time. Let us quantify what is meant by `short.' \citet{Leptos2009} define a
`sphere of influence' of radius~$\Reff$ around a particle: swimmers outside
that sphere do not significantly displace the particle. If swimmers with
number density~$\nd$ moves a distance~$\pal$ in random directions, the
expected number of `interactions' with a target particle is roughly
\begin{equation*}
\nd\pal\,\pi\Reff^2
\sim 0.4.
\end{equation*}
Here we took~$\pal \sim 30\,\microm$ and $\nd \sim 4\times
10^{-5}\,\microm^{-3}$, which are the largest values used in the experiments,
and~$\Reff \sim 10\,\microm$ as estimated in \citet{Leptos2009}. Hence, a
typical fluid particle feels \emph{very few} near-encounters with any swimmer.
In order for the central limit theorem to apply, the net displacement must be
the sum of many independent displacements, and this is clearly not the case
here for the larger values of the displacement. We thus expect a Gaussian
core (due to the many small displacements a particle feels) but non-Gaussian
tails (due to the rarity of large displacements), which is exactly what was
observed in the experiments.
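The order-of-magnitude estimate above is elementary arithmetic; for concreteness, with the quoted experimental values:

```python
import math

# Rough expected number of swimmer 'interactions' felt by one tracer,
# using the largest experimental parameter values quoted above.
n_d   = 4e-5    # swimmer number density [um^-3]
ell   = 30.0    # swimming path length [um]
R_eff = 10.0    # radius of the 'sphere of influence' [um]

N_int = n_d * ell * math.pi * R_eff**2
print(round(N_int, 2))   # ~0.38: far too few encounters for the CLT to apply
```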
Here, we present a calculation that quantitatively predicts essentially all
the details of the distributions obtained by~\citet{Leptos2009}. The
underlying model is not new, being based on the particle-displacement picture
of \citet{Thiffeault2010b} and~\citet{Lin2011}. However, the analysis is new:
we show how to combine multiple displacements to obtain the probability
density function due to multiple swimmers, and take the appropriate
infinite-volume limit. As we go, we discuss the mathematical assumptions that
are required. Upon comparing with experiments, we find the agreement to be
excellent, in spite of the differences between our model swimmer and the
experiments. Only a single parameter needs to be fitted: the dimensionless
stresslet strength, $\beta$.
The paper is organized as follows. In Section~\ref{sec:pdfdisp} we derive the
probability density of displacements based on the drift function of a single
swimmer, in the infinite-volume limit. We use numerical simulations of a
model swimmer (of the squirmer
type~\cite{Lighthill1952,Blake1971,Ishikawa2006, Ishikawa2007b, Drescher2009})
in Section~\ref{sec:numerics} to obtain a distribution of displacements which
we match to the experiments of \citeauthoreos{Leptos2009}. In
Section~\ref{sec:interact} we give a different interpretation of the main
formula of Section~\ref{sec:pdfdisp} in terms of `interactions' between
swimmers and a fluid particle. This alternative form can be used to obtain
some analytic results, in particular when the drift function is logarithmic.
We examine in Section~\ref{sec:largedev} the long-time (or long swimming path)
asymptotics of the model, and find what features of the drift function affect
the convergence to Gaussian. In Section~\ref{sec:diffscal} we address the
`diffusive scaling' observed in the experiments, and show that it is a
transient phenomenon. Finally, we discuss our results as well as future
directions in Section~\ref{sec:discussion}.
\section{Distribution of displacements}
\label{sec:pdfdisp}
The setting of our problem is a large volume~$\Vol$ that contains a number of
swimmers~$\Nswim$, also typically large. The swimmers move independently of
each other in random directions. In the dilute limit that we consider, the
velocity field of one swimmer is not significantly affected by the others. A
random fluid particle (not too near the edges of the domain), will be
displaced by the cumulative action of the swimmers. If we follow the
displacements of a large number of well-separated fluid particles, which we
treat as independent, we can obtain the full pdf\ of displacements. Our goal
is to derive the exact pdf\ of displacements from a simple probabilistic
model. Our starting point is the model described by~\citet{Thiffeault2010b}
and improved by~\citet{Lin2011}, which captures the important features
observed in experiments.
For simplicity, we assume the swimmers move along straight paths at a fixed
speed~$\Uc$. The velocity field induced at point~$\xv$ by a swimmer
is~$\uv(\xv - \Uv\tt)$, with the time dependence reflecting the motion of the
swimmer. The main ingredient in the model is the finite-path drift
function~$\Deltav_\pal(\etav)$ for a fluid particle, initially at~$\xv=\etav$,
affected by a single swimmer:
\begin{equation}
\Deltav_\pal(\etav) = \int_0^{\pal/\Uc}
\uv(\xv(\stime) - \Uv\stime)\dint\stime,\qquad
\dot\xv = \uv(\xv - \Uv\tt),\quad
\xv(0) = \etav\,.
\label{eq:Deltav}
\end{equation}
Here~$\Uc\tt = \pal$ is the swimming distance. To
obtain~$\Deltav_\pal(\etav)$ we must solve the differential
equation~$\dot\xv=\uv$ for each initial condition~$\etav$. Assuming
homogeneity and isotropy, we obtain the probability density of
displacements~\cite{Pushkin2014},
\begin{equation}
\probone_{\Rv_\pal^1}(\rv) = \frac{1}{\areaus\,\rc^{\sdim-1}}\int_\Vol
\delta(\rc - \Delta_\pal(\etav))
\,\frac{\!\dint\Vol_{\etav}}{\Vol}
\label{eq:rhotx}
\end{equation}
where~$\areaus=\areaus(\sdim)$ is the area of the unit sphere in~$\sdim$
dimensions: $\areaus(2)=2\pi$, $\areaus(3)=4\pi$. Here~$\Rv_\pal^1$ is a
random variable that gives the displacement of the particle from its initial
position after being affected by a single swimmer with path length~$\pal$. We
denote by $\probone_{\Rv_\pal^1}(\rv)$ the pdf\
of~$\Rv_\pal^1$. Because of the isotropy assumption, only the
magnitude~$\Delta_\pal(\etav) = \lVert\Deltav_\pal(\etav)\rVert$
enters~\eqref{eq:rhotx}.
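As an illustration, the drift function~\eqref{eq:Deltav} can be evaluated numerically by integrating $\dot\xv=\uv(\xv-\Uv\stime)$ for each initial condition. The sketch below (Python) takes the axisymmetric far-field stresslet flow of a force-free swimmer as the velocity field; the stresslet strength, path length, and initial condition are arbitrary illustrative choices, not fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# All parameter values below are illustrative assumptions, not fitted values.
U, p, ell = 1.0, 0.1, 20.0     # swim speed, stresslet strength, path length
d = np.array([0.0, 0.0, 1.0])  # swimming direction (z axis)

def stresslet(x):
    """Axisymmetric far-field flow u(x) = (p/r^2)(3 cos^2(theta) - 1) rhat
    of a force-free swimmer located at the origin."""
    r = np.linalg.norm(x)
    rhat = x / r
    return (p / r**2) * (3.0 * np.dot(rhat, d)**2 - 1.0) * rhat

def rhs(t, x):
    # the swimmer has advanced to U*t*d along its straight path
    return stresslet(x - U * t * d)

def drift(eta):
    """Finite-path drift Delta_ell(eta) = x(ell/U) - eta of Eq. (1)."""
    sol = solve_ivp(rhs, (0.0, ell / U), eta, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1] - eta

Delta = drift(np.array([2.0, 0.0, -1.0]))
print(np.linalg.norm(Delta))
```

Mapping `drift` over a grid of initial conditions $\etav$ and binning $\lVert\Deltav_\pal\rVert$ yields the single-swimmer displacement density of~\eqref{eq:rhotx}.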
Before we continue with finding the pdf\ for multiple swimmers, let us
investigate how the variance of displacements evolves. The second moment
of~$\Rv_\pal^1$ is
\begin{equation}
\avg{(\Rc_\pal^1)^2} =
\int_{\Vol}\rc^2\,\probone_{\Rv_\pal^1}(\rv)\dint\Vol_{\rv}
=
\int_\Vol \Delta_\pal^2(\etav)
\,\frac{\!\dint\Vol_{\etav}}{\Vol}.
\end{equation}
This typically goes to zero as~$\Vol\rightarrow\infty$, since a single swimmer
in an infinite volume shouldn't give any fluctuations on average. We
write~$\Rv_\pal^\Nswim$ for the random particle displacement due to~$\Nswim$
swimmers; the second moment of~$\Rv_\pal^\Nswim$ is
\begin{equation}
\avg{(\Rc_\pal^\Nswim)^2}
= \Nswim\avg{(\Rc_\pal^1)^2} =
\nd\int_\Vol \Delta_\pal^2(\etav)
\dint\Vol_{\etav}
\label{eq:r2N}
\end{equation}
with~$\nd = \Nswim/\Vol$ the number density of swimmers. This is nonzero (and
might diverge) in the limit~$\Vol\rightarrow\infty$, reflecting the cumulative
effect of multiple swimmers. Note that this expression is exact, within the
problem assumptions: it doesn't even require~$\Nswim$ to be large.
The expression~\eqref{eq:r2N} will lead to diffusive behavior if the integral
grows linearly in~$\pal$ (or if the swimmers change direction~\cite{Lin2011},
which we shall not treat here). Surprisingly, it has been found to do so in
two distinct ways. In the first, exemplified by bodies in inviscid
flow~\cite{Thiffeault2010b,Lin2011}, the support of~$\Delta_\pal$ grows
linearly with~$\pal$, but the displacements themselves become independent
of~$\pal$ when~$\pal$ is large. The intuition is that the swimmer pushes
particles a finite distance as it encounters them. As we wait longer, the
volume of such displaced particles grows linearly in~$\pal$, but once
particles are displaced they are left behind and suffer no further
displacement. This diffusive behavior is thus appropriate for very localized
interactions, where the only displaced particles are very near the axis of
swimming. This tends to occur in inviscid flow, or for spherical
`treadmillers' in viscous flow. See Fig.~\ref{fig:sphere_Delta} for an
illustration.
\begin{figure}
\begin{center}
\subfigure[]{
\includegraphics[height=.25\textheight]{sphere_Delta}
\label{fig:sphere_Delta}
}\hspace{1em}%
\subfigure[]{
\includegraphics[height=.25\textheight]{stresslet_Delta}
\label{fig:stresslet_Delta}
}
\end{center}
\caption{The natural log of~$\rho^2\Delta^2(\rho,\zc)$ (integrand
of~\eqref{eq:r2N} with~$\dint\Vol_{\etav} =
2\pi\rho^2\dint(\ln\rho)\dint\zc$) for (a) a sphere of radius~$\lsc=1$ in
inviscid flow, moving a path length~$\pal=10$ (top) and~$100$ (bottom),
plotted on the same scale. The scale of the integrand doesn't change,
only its support. Here~$\etav=(\rho,\zc)$ with~$\zc$ the swimming
direction and~$\rho$ the distance from the~$\zc$ axis. (b) Same as (a)
but for a stresslet velocity field. The integral~\eqref{eq:r2N} grows
linearly with~$\pal$ for both (a) and (b).}
\label{fig:sphere_stresslet_Delta}
\end{figure}
The second situation in which~\eqref{eq:r2N} shows diffusive behavior even
for straight swimming paths is when the far-field velocity has the form of a
stresslet, as is the case for a force-free swimmer in a viscous fluid. This
diffusive behavior was observed in \citet{Lin2011} but it was
\citet{Pushkin2013b} who provided a full explanation. For a stresslet
swimmer, the main contributions to~\eqref{eq:r2N} come
from~$\lVert\etav\rVert$ of order $\pal$, so it is appropriate to use a point
singularity model swimmer for large~$\pal$. In that case the drift function
has the scaling~$\Delta_\pal(\etav) = \Delta_\pal(\pal\zetav) =
\pal^{-1}\Deltasc(\zetav)$, where~$\zetav=\etav/\pal$ is a dimensionless
variable and the function~$\Deltasc(\zetav)$ is independent of~$\pal$ for
large~$\pal$~\cite{Pushkin2013b}. Inserting this form in~\eqref{eq:r2N}, we
find
\begin{equation}
\int \Delta_\pal^2(\etav)\dint\Vol_{\etav}
=
\int \l(\pal^{-2}\Deltasc^2(\zetav)\r)\l(\pal^3\dint\Vol_{\zetav}\r)
\sim \pal.
\label{eq:Deltasc2int}
\end{equation}
The integral of~$\Deltasc^2(\zetav)$ converges despite having
singularities~\cite{Pushkin2013b}. We thus see that the integral
in~\eqref{eq:r2N} grows linearly in~$\pal$ for very different reasons than our
first case: here the volume of particles affected by the swimmer grows
as~$\pal^3$ (particles are affected further and further away), but they are
displaced less (since they are further away, see
Fig.~\ref{fig:stresslet_Delta}). Any truncation of the integral
in~\eqref{eq:Deltasc2int} (because of finite-volume effects) will lead to a
decrease in the diffusivity, a possible origin for the decrease in diffusivity
with path length observed in \citet{Jepson2013}. Note also that the
reorientation mechanism discussed by~\citet{Lin2011} is not necessary in this
case to achieve the diffusive behavior, as pointed out by \citet{Pushkin2014}.
Having addressed the growth of the variance, we continue with finding the
pdf\ of displacements for multiple swimmers. We write~$\X_\pal^\Nswim$ for a
single coordinate of~$\Rv_\pal^\Nswim$ (which coordinate is immaterial,
because of isotropy). From~\eqref{eq:rhotx} with~$\sdim=2$ we can
compute~$\probone_{\X_\pal^1}(\xc)$, the marginal distribution for one
coordinate:
\begin{equation}
\probone_{\X_\pal^1}(\xc)
= \int_{-\infty}^\infty\probone_{\Rv_\pal^1}(\rv)\dint\yc
= \int_\Vol\int_{-\infty}^\infty\frac{1}{2\pi\rc}\,
\delta(\rc - \Delta_\pal(\etav))
\dint\yc\,\frac{\!\dint\Vol_{\etav}}{\Vol}.
\end{equation}
Since~$\rc^2=\xc^2+\yc^2$, the~$\delta$-function will capture two values
of~$\yc$, and with the Jacobian included we obtain
\begin{equation}
\probone_{\X_\pal^1}(\xc)
= \frac{1}{\pi}\int_\Vol
\frac{1}{\sqrt{\Delta_\pal^2(\etav) - \xc^2}}
\indf{\Delta_\pal(\etav) > \lvert\xc\rvert}
\,\frac{\!\dint\Vol_{\etav}}{\Vol}\,,
\label{eq:rhotxc2D}
\end{equation}
where~$\indf{A}$ is an indicator function: it is~$1$ if~$A$ is satisfied, $0$
otherwise.
The marginal distribution in the three-dimensional case proceeds the same way
from~\eqref{eq:rhotx} with~$\sdim=3$:
\begin{equation}
\probone_{\X_\pal^1}(\xc)
= \int_{-\infty}^\infty\probone_{\Rv_\pal^1}(\rv)\dint\yc\dint\zc
= \int_\Vol\int_{-\infty}^\infty\int_{-\infty}^\infty\frac{1}{4\pi\rc^2}\,
\delta(\rc - \Delta_\pal(\etav))
\dint\yc\dint\zc\,\frac{\!\dint\Vol_{\etav}}{\Vol}.
\end{equation}
Again with~$\rc^2=\xc^2+\yc^2+\zc^2$ the~$\delta$-function captures two
values of~$\zc$, and with the Jacobian included we obtain
\begin{equation}
\probone_{\X_\pal^1}(\xc)
= \frac{1}{2\pi}\int_\Vol\int_{-\infty}^\infty
\frac{1}{\Delta_\pal(\etav)}\,
\frac{1}{\sqrt{\Delta_\pal^2(\etav) - \xc^2 - \yc^2}}
\indf{\Delta_\pal^2(\etav) > \xc^2+\yc^2}
\!\dint\yc\,\frac{\!\dint\Vol_{\etav}}{\Vol}.
\end{equation}
Now we integrate over~$\yc$ to get
\begin{equation}
\probone_{\X_\pal^1}(\xc)
= \tfrac{1}{2}\int_\Vol
\frac{1}{\Delta_\pal(\etav)}\,
\indf{\Delta_\pal(\etav) > \lvert\xc\rvert}
\,\frac{\!\dint\Vol_{\etav}}{\Vol}
\label{eq:rhotxc3D}
\end{equation}
which is the three-dimensional analogue of~\eqref{eq:rhotxc2D}. The integrand
of~\eqref{eq:rhotxc3D} has an intuitive interpretation. The indicator
function says that a displacement in a random direction must at least be
larger than~$\lvert\xc\rvert$ to project to a value~$\xc$. The factor
of~$\Delta_\pal(\etav)$ in the denominator then tells us that large
displacements in a random direction are less likely to project to a
value~$\xc$. (The two-dimensional form~\eqref{eq:rhotxc2D} has essentially
the same interpretation, with a different weight.)
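This geometric interpretation is easy to verify by Monte Carlo: a displacement of fixed magnitude $\Delta$ in a uniformly random three-dimensional direction projects onto a coordinate axis with uniform density $1/(2\Delta)$ on $[-\Delta,\Delta]$ (Archimedes' theorem), which is exactly the weight appearing in~\eqref{eq:rhotxc3D}. A minimal check:

```python
import numpy as np

rng = np.random.default_rng(1)
Delta, N = 2.0, 200_000

# Uniform random directions on the unit sphere, scaled to magnitude Delta
v = rng.normal(size=(N, 3))
v *= Delta / np.linalg.norm(v, axis=1, keepdims=True)
x = v[:, 0]                      # projection onto one coordinate

# Archimedes: x is uniform on [-Delta, Delta], i.e. density 1/(2*Delta)
hist, _ = np.histogram(x, bins=20, range=(-Delta, Delta), density=True)
print(hist.round(3))             # all bins near 1/(2*Delta) = 0.25
```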
In order to sum the displacements due to multiple swimmers, we need the
characteristic function of~$\probone_{\X_\pal^1}(\xc)$, defined by
\begin{equation}
\avg{\ee^{\imi\kc\X_\pal^1}} = \int_{-\infty}^\infty\probone_{\X_\pal^1}(\xc)\,
\ee^{\imi\kc\xc}\dint\xc.
\end{equation}
For the two-dimensional pdf~\eqref{eq:rhotxc2D}, we have
\begin{equation}
\avg{\ee^{\imi\kc\X_\pal^1}}
=
\int_\Vol J_0(\kc\Delta_\pal(\etav))
\,\frac{\!\dint\Vol_{\etav}}{\Vol}
\label{eq:char2}
\end{equation}
where~$J_0(x)$ is a Bessel function of the first kind.
For the three-dimensional pdf~\eqref{eq:rhotxc3D}, the characteristic
function is
\begin{equation}
\avg{\ee^{\imi\kc\X_\pal^1}}
=
\int_\Vol
\sinc{(\kc\Delta_\pal(\etav))}\,\frac{\!\dint\Vol_{\etav}}{\Vol}
\label{eq:char3}
\end{equation}
where~$\sinc \xc \ldef \xc^{-1}\sin \xc$ for~$\xc\ne 0$, and~$\sinc 0 \ldef
1$.\footnote{Beware that this function is sometimes defined as~$(\pi
\xc)^{-1}\sin (\pi \xc)$, most notably by Matlab.} The
expression~\eqref{eq:char3} appears in~\cite{Pushkin2014}, except here we
compute it directly from a spatial integral rather than from the pdf\ of
$\Delta$. The main difference will come in the way we take the limit $\Vol
\rightarrow \infty$ below, which will allow us to study the number density
dependence directly.
We define
\begin{equation}
\K(\xc) \ldef \begin{cases}
1 - J_0(\xc),\quad &\sdim=2;\\
1 - \sinc\xc,\quad &\sdim=3,
\end{cases}
\label{eq:Ksdimdef}
\end{equation}
We have~$\K(0)=\K'(0)=0$, $\K''(0)=1/\sdim$,
so~$\K(\xi) \sim (1/2\sdim)\,\xi^2 + \Order{\xi^4}$
as~$\xi\rightarrow0$. For large argument, $\K(\xi)\rightarrow 1$.
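These limits are easily checked numerically. (Note that NumPy's \texttt{sinc}, like Matlab's, uses the $\pi$-scaled convention warned about in the footnote, hence the rescaling below.)

```python
import numpy as np
from scipy.special import j0

def K(xi, d):
    """K(xi) of Eq. (K definition): 1 - J0(xi) in 2D, 1 - sinc(xi) in 3D."""
    if d == 2:
        return 1.0 - j0(xi)
    return 1.0 - np.sinc(xi / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)

for d in (2, 3):
    xi = 1e-3
    # small argument: K(xi) ~ xi^2/(2d)
    assert np.isclose(K(xi, d), xi**2 / (2 * d), rtol=1e-5)
    # large argument: K -> 1 (up to the decaying oscillation)
    assert abs(K(100.0, d) - 1.0) < 0.1
print("K(xi) ~ xi^2/(2d) for small xi; K -> 1 for large xi")
```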
We can then write the two cases~\eqref{eq:char2}--\eqref{eq:char3} for the
characteristic function together as
\begin{equation}
\avg{\ee^{\imi\kc\X_\pal^1}}
=
1 - (\vpal/\Vol)\,\cK_\pal(\kc)
\label{eq:charf}
\end{equation}
where
\begin{equation}
\cK_\pal(\kc) \ldef
\frac{1}{\vpal}
\int_\Vol\K(\kc\Delta_\pal(\etav))\dint\Vol_{\etav}.
\label{eq:cKdef}
\end{equation}
Here~$\vpal$ is the volume `carved out' by a swimmer moving a distance~$\pal$:
\begin{equation}
\vpal = \pal\areasw
\label{eq:vpal}
\end{equation}
with~$\areasw$ the cross-sectional area of the swimmer in the direction of
motion.
Since we are summing independent particle displacements, the probability
distribution of the sum is the convolution of~$\Nswim$ one-swimmer
distributions. Using the Fourier transform convolution property, the
characteristic function for~$\Nswim$ swimmers is
thus~$\avg{\ee^{\imi\kc\X_\pal^\Nswim}} =
\avg{\ee^{\imi\kc\X_\pal^1}}^\Nswim$. From~\eqref{eq:charf},
\begin{equation}
\avg{\ee^{\imi\kc\X_\pal^1}}^\Nswim
=
\l(1 - \vpal\cK_\pal(\kc)/\Vol\r)^{\nd\Vol},
\label{eq:charfNswim}
\end{equation}
where we used~$\Nswim = \nd\Vol$, with~$\nd$ the number density of swimmers.
We will need the following simple result:
\begin{prop}
\label{prop:expid}
Let~$\yprop(\eprop) \sim \order{\eprop^{-\Mprop/(\Mprop+1)}}$
as~$\eprop\rightarrow 0$ for an integer~$\Mprop \ge 1$; then
\begin{equation}
(1 - \eprop\yprop(\eprop))^{1/\eprop}
= \exp\biggl(-\sum_{m=1}^{\Mprop}\frac{\eprop^{m-1}\yprop^m(\eprop)}{m}\biggr)
\l(1 + \order{\eprop^0}\r), \quad \eprop \rightarrow 0.
\label{eq:expid}
\end{equation}
\end{prop}
\noindent
See Appendix~\ref{apx:proof} for a short proof.
Let's examine the assumption of Proposition~\ref{prop:expid} for~$\Mprop=1$
applied to~\eqref{eq:charfNswim}, with~$\eprop=1/\Vol$
and~$\yprop=\vpal\cK_\pal(\kc)$.
For~$\Mprop=1$, the assumption of Proposition~\ref{prop:expid} requires
\begin{equation}
\cK_\pal(\kc) \sim \order{\Vol^{1/2}},\qquad\Vol\rightarrow\infty.
\label{eq:propcond2}
\end{equation}
A stronger divergence with~$\Vol$ means using a larger~$\Mprop$ in
Proposition~\ref{prop:expid}, but we shall not need to consider this here.
Note that it is not possible for~$\cK_\pal(\kc)$ to diverge faster
than~$\Order{\Vol}$, since~$\K(\xc)$ is bounded. In order
for~$\cK_\pal(\kc)$ to diverge as~$\Order{\Vol}$, the displacement must be
nonzero as~$\Vol\rightarrow\infty$, an unlikely situation that can be ruled
out.
Assuming that~\eqref{eq:propcond2} is satisfied, we use
Proposition~\ref{prop:expid} with~$\Mprop=1$ to make the large-volume
approximation
\begin{equation}
\avg{\ee^{\imi\kc\X_\pal^1}}^\Nswim
=
\l(1 - \vpal\cK_\pal(\kc)/\Vol\r)^{\nd\Vol}
\sim
\exp\l(-\nd\vpal\,\cK_\pal(\kc)\r), \quad
\Vol \rightarrow \infty.
\label{eq:largevol}
\end{equation}
If the integral~$\cK_\pal(\kc)$ is convergent as~$\Vol\rightarrow\infty$ we
have achieved a volume-independent form for the characteristic function, and
hence for the distribution of~$\xc$ for a fixed swimmer density. We define
the quantity
\begin{equation}
\Mpal \ldef \nd\vpal = \pal/\mfp
\label{eq:Mpal}
\end{equation}
where~$\mfp = (\nd\areasw)^{-1}$ is the swimmer mean free path. Since $\vpal$
is the volume carved out by a single swimmer moving a distance~$\pal$
(Eq.~\eqref{eq:vpal}), $\Mpal$ is the expected number of swimmers that will
`hit' a given fluid particle.
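The exponential limit in~\eqref{eq:largevol} is elementary and can be checked directly; in the sanity check below, $a$ stands for $\vpal\cK_\pal(\kc)$ frozen at some fixed wavenumber, with arbitrary illustrative values.

```python
import math

# Numerical check of the large-volume limit (1 - a/V)**(n*V) -> exp(-n*a),
# with a standing for v_ell * K(k) at a fixed wavenumber; n, a arbitrary.
n, a = 2.0, 0.3
for V in (1e2, 1e4, 1e6):
    print(V, (1.0 - a / V) ** (n * V), math.exp(-n * a))
```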
A comment is in order about evaluating~\eqref{eq:cKdef} numerically: if we
take~$\lvert\kc\rvert$ to~$\infty$, then~$\K(\kc\Delta) \rightarrow 1$, and
thus $\vpal\cK \rightarrow \Vol$, which then leads to~$\ee^{-\Nswim}$
in~\eqref{eq:largevol}. This is negligible as long as the number of
swimmers~$\Nswim$ is moderately large. In practice, this means
that~$\lvert\kc\rvert$ only needs to be large enough that the argument of the
decaying exponential in~\eqref{eq:largevol} is of order one, that is
\begin{equation}
\Mpal\,\cK_\pal(\kmax) \sim \Order{1}.
\label{eq:kmax}
\end{equation}
Wavenumbers~$\lvert\kc\rvert > \kmax$ do not contribute
to~\eqref{eq:largevol}. (We are assuming monotonicity of~$\cK_\pal(\kc)$
for~$\kc>0$, which will hold for our case.) Note that~\eqref{eq:kmax} implies
that we need larger wavenumbers for smaller densities~$\nd$: a typical fluid
particle then encounters very few swimmers, and the distribution should be far
from Gaussian.
Now that we've computed the characteristic function for~$\Nswim$
swimmers~\eqref{eq:largevol}, we finally recover the pdf\ of~$\xc$
for~$\Nswim=\nd\Vol$ swimmers as the inverse Fourier transform
\begin{equation}
\prob_{\X_\pal}(\xc) = \frac{1}{2\pi}\int_{-\infty}^\infty
\exp\l(-\Mpal\,\cK_\pal(\kc)\r)
\ee^{-\imi\kc\xc}\dint\kc,
\label{eq:rhotxNswim}
\end{equation}
where we dropped the superscript~$\Nswim$ from~$\X_\pal^\Nswim$ since the
number of swimmers no longer enters the expression directly.
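As a toy illustration of~\eqref{eq:rhotxNswim}, suppose (purely for simplicity, and not as a model of any particular swimmer) that every affected particle receives the same displacement magnitude $\hat\Delta$, so that $\cK_\pal(\kc)$ reduces to $\K(\kc\hat\Delta)$. The inverse transform can then be evaluated by quadrature, truncating at a wavenumber chosen as in~\eqref{eq:kmax}; for moderately large $\Mpal$ the result is close to a Gaussian of variance $\Mpal\hat\Delta^2/\sdim$, as the central limit theorem requires.

```python
import numpy as np

# Toy evaluation of the inverse transform, ASSUMING every affected particle
# gets the same displacement magnitude Dhat (so cK reduces to K itself);
# M is the expected number of 'hits'.  Values are illustrative only.
M, Dhat, d = 50.0, 0.1, 3
K = lambda xi: 1.0 - np.sinc(xi / np.pi)   # 3D kernel; np.sinc is pi-scaled

dk = 1e-3
k = np.arange(0.0, 50.0, dk)               # truncated where M*K(k*Dhat) >> 1
charfun = np.exp(-M * K(k * Dhat))

def pdf(x):
    # inverse Fourier transform; the integrand is even in k (cosine transform)
    return np.sum(charfun * np.cos(k * x)) * dk / np.pi

xs = np.linspace(-2.0, 2.0, 201)
p = np.array([pdf(x) for x in xs])
dx = xs[1] - xs[0]
print(np.sum(p) * dx)                 # ~1: normalized
print(np.sum(xs**2 * p) * dx)         # ~ M*Dhat**2/d, the expected variance
```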
\section{Comparing to experiments}
\label{sec:numerics}
We now compare the theory discussed in the previous sections to the
experiments of \citeauthoreos{Leptos2009}, in particular the observed
dependence of the distribution on the number density~$\phi$. (Another aspect
of their experiments, the `diffusive scaling' of the distributions, will be
discussed in Section~\ref{sec:diffscal}.) In their experiments they use the
microorganism \textit{C.\ reinhardtii}, an alga of the `puller' type, since
its two flagella are frontal. This organism has a roughly spherical body with
radius~$\lsc \approx 5\,\microm$. They observe a distribution of swimming
speeds with a strong peak around~$100\,\microm/\second$. They place
fluorescent microspheres of about a micron in radius in the fluid, and
optically measure their displacement as the organisms move. The volume
fraction of organisms varies from~$\phi=0\%$ (pure fluid) to~$2.2\%$.
They measure the displacement of the microspheres along a reference direction,
arbitrarily called~$\xc$ (the system is assumed isotropic). Observing many
microspheres allows them to compute the pdf\ of tracer
displacements~$\X_\pal$, which we've denoted~$\prob_{\X_\pal}(\xc)$. Thus,
$\prob_{\X_\pal}(\xc)\dint\xc$ is the probability of observing a particle
displacement~$\X_\pal \in [\xc,\xc+\dint\xc]$ after a path length~$\pal$.
(They write their density~$P(\Delta x,\Delta t)$, where~$(\Delta x,\Delta t)$
are the same as our~$(\xc,\pal/\Uc)$.)
At zero volume fraction ($\phi=0$), the pdf\ $\prob_{\X_\pal}(\xc)$ is
Gaussian, due solely to thermal noise. For higher number densities,
\citeauthor{Leptos2009} see exponential tails appear and the Gaussian core
broaden. The distribution is well-fitted by the sum of a Gaussian and an
exponential:
\begin{equation}
\prob_{\X_\pal}(\xc) = \frac{1-f}{\sqrt{2\pi\delta_{\text{g}}^2}}
\,\ee^{-\xc^2/2\delta_{\text{g}}^2}
+ \frac{f}{2\delta_{\text{e}}}\,\ee^{-\lvert\xc\rvert/\delta_{\text{e}}}.
\label{eq:nonGaussform}
\end{equation}
They observe the scalings~$\delta_{\text{g}} \approx A_{\text{g}}\tt^{1/2}$
and~$\delta_{\text{e}} \approx A_{\text{e}}\tt^{1/2}$, where~$A_{\text{g}}$
and~$A_{\text{e}}$ depend on~$\phi$. The dependence on~$\tt^{1/2}$ is
referred to as the `diffusive scaling' and will be discussed in
Section~\ref{sec:diffscal}. Exploiting this scaling, \citet{Eckhardt2012}
have fitted these distributions very well to a continuous-time random walk
model, but this does not suggest a mechanism.
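For later comparison it is convenient to have the empirical form~\eqref{eq:nonGaussform} as a function; both pieces are separately normalized, so the mixture integrates to one for any weight~$f$. A small helper (the function name and parameter values are ours, for illustration only):

```python
import numpy as np

def leptos_fit(x, f, delta_g, delta_e):
    """Gaussian core plus exponential tails, Eq. (nonGaussform)."""
    gauss = (1 - f) / np.sqrt(2 * np.pi * delta_g**2) \
        * np.exp(-x**2 / (2 * delta_g**2))
    expon = f / (2 * delta_e) * np.exp(-np.abs(x) / delta_e)
    return gauss + expon

# the two pieces are separately normalized, so the mixture integrates to one
x = np.linspace(-30.0, 30.0, 60001)
p = leptos_fit(x, f=0.2, delta_g=0.5, delta_e=1.0)
dx = x[1] - x[0]
total = (p.sum() - 0.5 * (p[0] + p[-1])) * dx   # trapezoidal rule, ~= 1
```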
We shall use a model swimmer of
the squirmer type~\cite{Lighthill1952,Blake1971,Ishikawa2006, Ishikawa2007b,
Drescher2009}, with axisymmetric streamfunction~\cite{Lin2011}
\begin{equation}
\Psi_{\text{sf}}(\rho,z)
= \tfrac12{\rho^2\,\Uc}\l\{-1 + \frac{\lsc^3}{(\rho^2+z^2)^{3/2}}
+ \tfrac32\frac{\beta \lsc^2 z}{(\rho^2+z^2)^{3/2}}
\l(\frac{\lsc^2}{\rho^2+z^2} - 1\r)\r\}
\label{eq:squirm_strfcn}
\end{equation}
in a frame moving at speed~$\Uc$. Here~$z$ is the swimming direction
and~$\rho$ is the distance from the~$z$ axis. To mimic \textit{C.\
reinhardtii}, we use~$\lsc=5\,\microm$ and $\Uc=100\,\microm/\second$.
(\citet{Leptos2009} observe a distribution of velocities but the peak is
near~$100\,\microm/\second$.) We take~$\beta=0.5$ for the relative stresslet
strength, which gives a swimmer of the puller type, just like \textit{C.\
reinhardtii}. The contour lines of the axisymmetric
streamfunction~\eqref{eq:squirm_strfcn} are depicted in
Fig.~\ref{fig:squirmer_contour_beta=0p5}. The parameter~$\beta=0.5$ is the
only one that was fitted (visually) to give good agreement later.
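The streamfunction~\eqref{eq:squirm_strfcn} is elementary to evaluate, and two limits make useful checks: the sphere~$\rho^2+z^2=\lsc^2$ is a streamline ($\Psi_{\text{sf}}=0$ there), and far from the swimmer the flow reduces to the uniform stream~$-\tfrac12\rho^2\Uc$ of the co-moving frame. A sketch with the parameter values used in the text:

```python
import numpy as np

ell, U, beta = 5.0, 100.0, 0.5    # radius (micron), speed (micron/s), stresslet

def psi(rho, z):
    """Axisymmetric squirmer streamfunction, Eq. (squirm_strfcn), co-moving frame."""
    r2 = rho**2 + z**2
    return 0.5 * rho**2 * U * (-1.0 + ell**3 / r2**1.5
                               + 1.5 * beta * ell**2 * z / r2**1.5
                               * (ell**2 / r2 - 1.0))

# the swimmer surface rho^2 + z^2 = ell^2 is a streamline: psi vanishes there;
# far away, psi approaches the uniform stream -rho^2 U / 2
```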
\begin{figure}
\begin{center}
\includegraphics[width=.7\textwidth]{squirmer_contour_beta=0p5}
\end{center}
\caption{Contour lines for the axisymmetric streamfunction of a squirmer of
the form~\eqref{eq:squirm_strfcn}, with~\hbox{$\beta=0.5$}. This swimmer
is of the puller type, as for \textit{C.\ reinhardtii}.}
\label{fig:squirmer_contour_beta=0p5}
\end{figure}
First we compute the drift function~$\Delta_\pal(\etav)$ for a single swimmer
moving along the~$\zc$ axis. The model swimmer is axially symmetric,
so~$\etav$ can be written in terms of~$\zc$ and~$\rho$, the perpendicular
distance to the swimming axis. We take~$\pal=12\,\microm$, since the time
is~$\tt=0.12\,\second$ in Fig.~2(a) of \citeauthor{Leptos2009}, and our
swimmer moves at speed~$\Uc=100\,\microm/\second$. We
compute~$\Delta_\pal(\rho,\zc)$ for a large grid of~$\ln\rho$ and~$\zc$
values, using the analytic far-field stresslet form for the
displacement~\cite{Dunkel2010,Mino2013,Pushkin2013b} when far away from the
swimmer's path.
From the drift function~$\Delta_\pal(\etav)$ we now want to
compute~$\cK_\pal(\kc)$ defined by~\eqref{eq:cKdef}. To estimate how large
a~$\kc$ value we will need, we start from the smallest volume fraction in the
experiments, $\phi \sim 0.4\%$. For spherical swimmers of radius~$\lsc \sim
5\,\microm$ (with cross-sectional area~$\areasw = \pi\lsc^2 \sim
78.5\,\microm^2$), this gives a number density of~$7.6 \times
10^{-6}\,\microm^{-3}$. We thus get~$\Mpal = \nd\areasw\pal \sim 7.2\times
10^{-3}$. The criterion~\eqref{eq:kmax} then tells us that we need~$\kmax$
large enough that $\cK_\pal(\kmax) \sim 1/\Mpal \sim 139$.
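These estimates follow from elementary bookkeeping and are easy to reproduce:

```python
import math

phi = 0.004                       # smallest volume fraction in the experiments
ell = 5.0                         # swimmer radius (micron)
lam = 12.0                        # path length (micron)

n = phi / (4/3 * math.pi * ell**3)   # number density, ~7.6e-6 micron^-3
A = math.pi * ell**2                 # cross-sectional area, ~78.5 micron^2
M = n * A * lam                      # M_lambda ~ 7.2e-3
# the criterion (kmax) then requires cK(kmax) ~ 1/M ~ 139
```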
\begin{figure}
\begin{center}
\includegraphics[width=.5\textwidth]{pdfX1_charfun}
\end{center}
\caption{\protect\ifjournal{(Color online)\ }{} The function $\cK_\pal(\kc)$ defined by
Eq.~\eqref{eq:cKdef} for (from broadest to narrowest)~$\pal=12\,\microm$,
$36\,\microm$, $60\,\microm$, and $96\,\microm$.}
\label{fig:pdfX1_charfun}
\end{figure}
Figure~\ref{fig:pdfX1_charfun} shows the numerically-computed~$\cK_\pal(\kc)$
for several values of~$\pal$, with~$\pal=12\,\microm$ the broadest curve. We
can see from the figure that choosing~$\kmax \sim 20\,\microm^{-1}$ will
ensure that~$\Mpal\cK_\pal(\kmax)$ is large enough. As~$\pal$ gets larger,
$\kmax$ decreases, reflecting the trend towards the central limit theorem
(which corresponds to the small-$\kc$ expansion of~$\cK_\pal(\kc)$, see
Section~\ref{sec:largedev}). Note also that~$\cK_\pal(\kc)$ tends to become
independent of~$\pal$ as~$\pal$ gets larger.
To obtain~$\prob_{\X_\pal}(\xc)$ and compare to \citeauthor{Leptos2009}, we
must now take the inverse Fourier transform of~$\exp(-\Mpal\cK_\pal(\kc))$, as
dictated by~\eqref{eq:rhotxNswim}. This is straightforward using Matlab's
\texttt{ifft} routine. The `period' (domain in~$\xc$) is controlled by the
spacing of the $\kc$ grid, so we make sure the grid is fine enough to give us
the largest values of~$\xc$ required. We also convolve with a Gaussian
distribution of half-width~$\sqrt{2\Diff_0\tt}=0.26\,\microm$ to mimic thermal
noise. This follows from the value~$\Diff_0=0.28\,\microm^2/\second$ measured
by \citeauthor{Leptos2009} for the diffusivity of the microspheres. The value
of~$\Diff_0$ is consistent with the Stokes--Einstein equation for the
diffusivity of thermally-agitated small spheres in a fluid.
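In Fourier space this convolution is just multiplication of the characteristic function by~$\ee^{-\Diff_0\tt\kc^2}$, so it costs nothing extra before the inverse transform. A minimal sketch (the function name is ours), checked on a Gaussian input whose variance must grow by exactly~$2\Diff_0\tt$:

```python
import numpy as np

D0, t = 0.28, 0.12                # micron^2/s and s, as in the text
sig2 = 2 * D0 * t                 # thermal variance, (0.26 micron)^2

def add_thermal_noise(charfun, k, x):
    """Multiply an (even) characteristic function by exp(-D0 t k^2),
    then invert with a cosine transform (trapezoidal rule in k)."""
    f = charfun(k) * np.exp(-0.5 * sig2 * k**2)
    g = f * np.cos(np.outer(x, k))
    dk = k[1] - k[0]
    return (g.sum(axis=1) - 0.5 * (g[:, 0] + g[:, -1])) * dk / np.pi

# Gaussian in, Gaussian out: the variances add
k = np.linspace(0.0, 12.0, 24001)
p0 = add_thermal_noise(lambda kk: np.exp(-kk**2 / 2), k, np.array([0.0]))[0]
```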
\begin{figure}
\begin{center}
\subfigure[]{
\raisebox{.1em}{
\includegraphics[width=.49\textwidth]{compare_to_Leptos}}
\label{fig:compare_to_Leptos}
}%
\subfigure[]{
\includegraphics[width=.477\textwidth]{compare_to_Eckhardt}
\label{fig:compare_to_Eckhardt}
}
\end{center}
\caption{\protect\ifjournal{(Color online)\ }{} (a) The pdf\ of particle displacements after
a path length~$\pal=12\,\microm$, for several values of the volume
fraction~$\phi$. The data is from \citet{Leptos2009}, and the figure
should be compared to their Fig.~2(a). The theoretical curves were
obtained from~\eqref{eq:rhotxNswim} for the model squirmer in
Fig.~\ref{fig:squirmer_contour_beta=0p5}, with some noise corresponding to
thermal diffusivity as measured in \citet{Leptos2009}. Inset: comparison
of (from broadest to narrowest)~$\beta = 2$, $1$, $0.5$, and~$0.1$,
for~$\phi=2.2\%$, showing the sensitivity of the fit~$\beta=0.5$. (b)
Same as (a) but on a wider scale, also showing the form suggested
by~\citet{Eckhardt2012} (dashed lines).}
\label{fig:compare_to_Leptos_both}
\end{figure}
The results are plotted in Fig.~\ref{fig:compare_to_Leptos} and compared to
the data of Fig.~2(a) of \citet{Leptos2009}. The agreement is excellent: we
remind the reader that we adjusted only one parameter, $\beta=0.5$. This
parameter was visually adjusted to the~$\phi=2.2\%$ data in
Fig.~\ref{fig:compare_to_Leptos}, since the larger concentration is most
sensitive to~$\beta$; a more careful fit is unnecessary given the
uncertainties in both model and data. (The inset shows the sensitivity of the
fit to~$\beta$.) All the other physical quantities were gleaned from
\citeauthoreos{Leptos2009}. What is most remarkable about the agreement in
Fig.~\ref{fig:compare_to_Leptos} is that it was obtained using a model
swimmer, the spherical squirmer, which is not expected to be such a good model
for \textit{C.\ reinhardtii}. The real organisms are strongly time-dependent,
for instance, and do not move in a perfect straight line. Nevertheless the
model captures very well the pdf\ of displacements, in particular the volume
fraction dependence. The model swimmer slightly underpredicts the tails, but
since the tails are associated to large displacements they depend on the
near-field details of the swimmer, so it is not surprising that our model
swimmer should deviate from the data.
In Figure~\ref{fig:compare_to_Eckhardt} we compare our results to the
phenomenological fit of \citet{Eckhardt2012} based on continuous-time random
walks: their fit is better in the tails, but the two models disagree
immediately beyond the range of the data. Our model has the realistic feature that the
distribution is cut off at the path length~$\pal = 12\,\microm$, since it is
extremely unlikely that a particle had two close encounters with a swimmer at
these low volume fractions.
\begin{figure}
\begin{center}
\includegraphics[width=.6\textwidth]{compare_to_PY}
\end{center}
\caption{\protect\ifjournal{(Color online)\ }{} The same distributions as in
Fig.~\ref{fig:compare_to_Leptos}, but on a log-log plot. The dashed line
is the $\xc^{-4}$ power law predicted by \citet{Pushkin2014}. Inset:
numerical simulation with only the stresslet far-field displacement
included.}
\label{fig:compare_to_PY}
\end{figure}
A possible explanation as to why the squirmer model does so well was provided
by \citet{Pushkin2014}. They used numerical simulations of squirmers (with a
larger value~$\beta=2$ that leads to a trapped volume) to show that the tails
of distribution scale as~$\xc^{-4}$, which is the asymptotic form of the
stresslet displacement distribution. Figure~\ref{fig:compare_to_PY} shows
that our computations have a similar tail, though we emphasize here that our
agreement with the experiments of \citet{Leptos2009} is \emph{quantitative}
and correctly reproduces the volume fraction dependence. We also point out
that though the trend in Fig.~\ref{fig:compare_to_PY} follows~$\xc^{-4}$, the
slope changes gradually and does not have a clear power law (the log scale
means the deviations are quite large). The inset in
Fig.~\ref{fig:compare_to_PY} is a numerical simulation that includes only the
singularity in the stresslet displacement, $\Delta(\etav) \sim
\lVert\etav\rVert^{-1}$, as assumed in the analysis of \citet{Pushkin2014}.
Though the~$\xc^{-4}$ tails are eventually achieved, they have far lower
probability than needed to explain the numerics. \citeauthor{Pushkin2014}'s
use of the far-field stresslet form to predict the tails is thus questionable,
at least for short path lengths.
\begin{figure}
\begin{center}
\subfigure[]{
\includegraphics[height=.25\textheight]{squirmer_Deff_vs_beta}
\label{fig:squirmer_Deff_vs_beta}
}\hspace{1em}%
\subfigure[]{
\includegraphics[height=.247\textheight]{compare_to_Leptos_fig3b}
\label{fig:compare_to_Leptos_fig3b}
}
\end{center}
\caption{(a) For the squirmer model~\eqref{eq:squirm_strfcn}, dependence of
the effective diffusivity~$\Diff_{\text{eff}}$ on the stresslet
strength~$\beta$. For small~$\beta$, we recover the value for spheres in
inviscid flow~\cite{Thiffeault2010b}. An approximate formula is also
shown as a solid curve. (b) Comparison of the effective diffusivity data
from \citet{Leptos2009}, showing their fit (solid line). The dashed line
is the prediction for~$\beta=0.5$, used in this paper.}
\label{fig:squirmer_Deff}
\end{figure}
For the effective diffusivity, \citet{Leptos2009} give the
formula~$\Diff_{\text{eff}} \simeq \Diff_0 + \alpha\,\phi$,
with~$\Diff_0=0.23\,\microm^2/\second$ and~$\alpha = 81.3\,\microm^2/\second$.
Elsewhere in their paper they also give~$\Diff_0=0.28\,\microm^2/\second$ for
the diffusivity of the microspheres in the absence of swimmers, but their
fitting procedure changes the intercept slightly.  (Here we
used~$\Diff_0=0.28\,\microm^2/\second$, but the difference is minute.)
Figure~\ref{fig:squirmer_Deff_vs_beta} shows the numerically-computed
effective diffusivity for our squirmer model, as a function of~$\beta$. This
curve is as in \cite{Lin2011}, Fig.~6(a), except that we corrected the
integrals in the far field using the analytic expression of
\citet{Pushkin2013b}, which gives a more accurate result. The Figure also
shows the fit
\begin{equation}
\frac{\Diff_{\text{eff}} - \Diff_0}{\Uc\nd\lsc^4}
\simeq 0.266 + \tfrac34\pi\beta^2,
\label{eq:Deff}
\end{equation}
which is fairly good over the whole range (keeping in mind that this is a
logarithmic plot, so the discrepancies at moderate~$\beta$ are of the order
of~$20$--$30\%$). Here the value~$0.266$ is the diffusivity due to spheres in
inviscid flow ($\beta=0$, see~\cite{Thiffeault2010b}), and
$\tfrac34\pi\beta^2$ is the large-$\beta$ analytic
expression~\cite{Pushkin2013b} for stresslets. From the data in
Fig.~\ref{fig:squirmer_Deff_vs_beta} we find~$\alpha \simeq
113\,\microm^2/\second$, significantly larger than the value of \citet{Leptos2009}, as can
be seen in Fig.~\ref{fig:compare_to_Leptos_fig3b}. The solid line is their
fit, the dashed is our model prediction for~$\beta=0.5$. The overestimate is
likely due to the method of fitting to the squared displacement: their
Fig.~3(a) clearly shows a change in slope with time, and the early times tend
to be steeper, which would increase the effective diffusivity. Note also that
their Fig.~3(a) has a much longer temporal range than their PDFs, going all
the way to~$2\,\second$ (compared to~$0.3\,\second$), raising the possibility
that particles were lost by moving out of the focal plane.
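The comparison can be made concrete: combining the approximate fit~\eqref{eq:Deff} with $\phi = \tfrac43\pi\lsc^3\nd$ gives a closed-form estimate of the slope~$\alpha = (\Diff_{\text{eff}}-\Diff_0)/\phi$. As expected from the $20$--$30\%$ accuracy of the fit, this lands somewhat below the value~$\alpha \simeq 113$ found from the full numerics:

```python
import math

U, ell, beta = 100.0, 5.0, 0.5    # swimming speed, radius, stresslet strength

# Eq. (Deff): (D_eff - D_0)/(U n ell^4) ~ 0.266 + (3/4) pi beta^2,
# and phi = (4/3) pi ell^3 n, so alpha = (D_eff - D_0)/phi becomes
alpha = U * ell * (0.266 + 0.75 * math.pi * beta**2) / (4/3 * math.pi)
# alpha ~= 102, versus ~113 from the numerics and 81.3 from the Leptos fit
```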
\section{The `interaction' viewpoint}
\label{sec:interact}
Equation~\eqref{eq:rhotxNswim} gives the exact solution for the distribution
of uncorrelated displacements due to swimmers of number density~$\nd$. In
this section we derive an alternative form, in terms of an infinite series,
which is often useful and provides an elegant interpretation
for~\eqref{eq:rhotxNswim}.
The displacement~$\Delta_\pal(\etav)$ typically decays rapidly away from the
swimmer, so that it may often be taken to vanish outside a specified
`interaction volume'~$\Volint$. Then from~\eqref{eq:cKdef}, since~$\K(0)=0$,
we have
\begin{equation}
  \cK_\pal(\kc)
  = \frac{1}{\vpal}
  \int_{\Volint}\K(\kc\Delta_\pal(\etav))\dint\Vol_{\etav}
  =
  \frac{\Volint}{\vpal}\l(1 - \cKm_\pal(\kc)\r)
  \label{eq:cKm0}
\end{equation}
where
\begin{equation}
\cKm_\pal(\kc) =
\frac{1}{\Volint}
\int_{\Volint}(1-\K(\kc\Delta_\pal(\etav)))\dint\Vol_{\etav}\,.
\label{eq:cKm}
\end{equation}
Define~$\Mpalm \ldef \nd\Volint$; we insert~\eqref{eq:cKm0}
into~\eqref{eq:rhotxNswim} and Taylor expand the exponential to obtain
\begin{equation}
\prob_{\X_\pal}(\xc) =
\sum_{m=0}^\infty\frac{\Mpalm^m}{m!}\,\ee^{-\Mpalm}\,
\frac{1}{2\pi}\int_{-\infty}^\infty
\cKm_\pal^m(\kc)\,\ee^{-\imi\kc\xc}\dint\kc.
\label{eq:interac}
\end{equation}
The factor~$\Mpalm^m\,\ee^{-\Mpalm}/m!$ is a Poisson distribution for the
number of `interactions' $m$ between swimmers and a particle: it measures the
probability of finding~$m$ swimmers inside the volume~$\Volint$. The inverse
transform in~\eqref{eq:interac} gives the $m$-fold convolution of the
single-swimmer displacement pdf. This was the basis for the model used
in~\cite{Thiffeault2010b,Lin2011} and in an earlier version of this
paper~\cite{Thiffeault2014_preprint_v1}. We have thus shown that
formula~\eqref{eq:rhotxNswim} is the natural infinite-volume limit of the
interaction picture.
Formula~\eqref{eq:interac} is very useful in many instances, such as
when~$\Mpalm$ is small, in which case only a few terms are needed
in~\eqref{eq:interac} for a very accurate representation. Note that the first
term of the sum in~\eqref{eq:interac} is a $\delta$-function, which
corresponds to particles that are outside the interaction volume~$\Volint$.
This singular behavior disappears after $\prob_{\X_\pal}(\xc)$ is convolved
with a Gaussian distribution associated with molecular noise.
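In Fourier space, the equivalence of~\eqref{eq:interac} with~\eqref{eq:rhotxNswim} is simply the power series of the exponential, so truncating after a few terms is very accurate when~$\Mpalm$ is small. A one-wavenumber sketch (the numerical values are arbitrary illustrations):

```python
import math

def charfun_series(Mbar, Khat, mmax):
    """Partial sum of the Poisson series of Eq. (interac), in Fourier space,
    at a single wavenumber where the single-interaction factor equals Khat."""
    return sum(Mbar**m / math.factorial(m) * math.exp(-Mbar) * Khat**m
               for m in range(mmax + 1))

Mbar, Khat = 0.1, 0.7                  # illustrative values, small Mbar
exact = math.exp(-Mbar * (1 - Khat))   # exp(-Mbar (1 - Khat)), cf. Eq. (cKm0)
err = abs(charfun_series(Mbar, Khat, 5) - exact)
# err is tiny: five terms already reproduce the full exponential
```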
Let us apply~\eqref{eq:interac} to a specific example. A model for cylinders
and spheres of radius~$\lsc$ traveling along the~$\zc$ axis in an inviscid
fluid~\cite{Thiffeault2010b,Lin2011} is the \emph{log model},
\begin{equation}
\Delta_\pal(\etav) =
\begin{cases}
\Clog\ln^+(\lsc/\rho),&\text{if $0 \le \zc \le \pal$,}\\
0,&\text{otherwise},
\end{cases}
\label{eq:Deltalog}
\end{equation}
where~$\rho$ is the perpendicular distance to the swimming direction
and~$\ln^+\xc\ldef\ln\max(\xc,1)$. The logarithmic form comes from the
stagnation points on the surface of the swimmer, which dominate transport in
this inviscid limit. This model is also appropriate for a spherical
`treadmiller' swimmer in viscous flow. The drift function~\eqref{eq:Deltalog}
resembles Fig.~\ref{fig:sphere_Delta}.
For the form~\eqref{eq:Deltalog} the interaction volume~$\Volint$ is the same
as~$\vpal$, the volume carved out during the swimmer's motion
(Eq.~\eqref{eq:vpal}). By changing integration variable from~$\rho$
to~$\Delta$ in~\eqref{eq:cKdef} we can carry out the integrals explicitly to
obtain (see Appendix~\ref{apx:logmodel})
\begin{equation}
\cKm_\pal(\kc) = \begin{cases}
(1 + (\Clog\kc)^2)^{-1/2},
\qquad &\text{(cylinders)}; \\
(\Clog\kc/2)^{-1}\arctan(\Clog\kc/2),
\qquad &\text{(spheres)}.
\end{cases}
\label{eq:cKlogmodel}
\end{equation}
This is independent of~$\pal$, even for short paths (but note
that~\eqref{eq:Deltalog} is not a good model for~$\pal < \lsc$).
Furthermore, for~$\sdim=2$ we can also explicitly obtain the convolutions that
arise in~\eqref{eq:interac} to find the full distribution,
\begin{equation}
\prob_{\X_\pal}(\xc) = \ee^{-\Mpal}\l(\delta(\xc) + \sum_{\nenc=1}^\infty
\frac{\Mpal^\nenc}{\nenc!}\,
\frac{1}{\Clog\sqrt{\pi}\,\Gamma(\nenc/2)}
\l(\lvert \xc\rvert/2\Clog\r)^{(\nenc-1)/2}
K_{(\nenc-1)/2}(\lvert \xc\rvert/\Clog)\r),
\label{eq:probNcyl}
\end{equation}
where~$K_\alpha(x)$ are modified Bessel functions of the second kind,
and~$\Gamma(x)$ is the Gamma function (not to be confused with~$\cK_\pal(\kc)$
above). Equation~\eqref{eq:probNcyl} is a very good approximation to the
distribution of displacements due to inviscid cylinders. Unfortunately no
exact form is known for spheres: we must numerically
evaluate~\eqref{eq:rhotxNswim} with~\eqref{eq:cKlogmodel} or use asymptotic
methods (see Section~\ref{sec:largedev}).
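The sphere entry of~\eqref{eq:cKlogmodel} is easy to verify by direct quadrature of~\eqref{eq:cKm} for the log model: the $\zc$-integral cancels against~$\Volint$, leaving an average of~$\sin(\kc\Delta)/(\kc\Delta)$ over the disc~$\rho<\lsc$. (Here we assume this sinc form for~$1-\K$ in three dimensions, consistent with the small-$\kc$ expansion; cf.\ Appendix~\ref{apx:logmodel}.)

```python
import numpy as np

C, ell, k = 1.0, 1.0, 3.0         # Clog, radius, wavenumber (test values)

# Delta = C ln+(ell/rho) inside the carved cylinder; average sin(k Delta)/(k Delta)
# over the disc rho < ell.  Note np.sinc(x) is sin(pi x)/(pi x).
rho = np.linspace(1e-9, ell, 200001)
Delta = C * np.log(ell / rho)
f = np.sinc(k * Delta / np.pi) * 2 * rho / ell**2
dr = rho[1] - rho[0]
Km_quad = (f.sum() - 0.5 * (f[0] + f[-1])) * dr   # trapezoidal rule

Km_exact = np.arctan(C * k / 2) / (C * k / 2)     # Eq. (cKlogmodel), spheres
```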
\section{Long paths: Large-deviation theory}
\label{sec:largedev}
In Section~\ref{sec:interact} we derived an alternative form of our master
equation~\eqref{eq:rhotxNswim} as an expansion in an `interaction' volume.
Here we look at another way to evaluate the inverse Fourier transform
in~\eqref{eq:rhotxNswim}, using large-deviation
theory~\cite{Gartner1977,Ellis1984,Ellis,Touchette2009}. In
essence, large-deviation theory is valid in the limit when a particle
encounters many swimmers, so that~$\Mpal$ is large (in practice `large' often
means order one for a reasonable approximation). This includes the central
limit theorem (Gaussian form) as a special case. In this section we provide a
criterion for how much time is needed before Gaussian behavior is observed,
which can help guide future experiments.
Earlier we used the characteristic function~\eqref{eq:largevol}. Here it is
more convenient to work with the moment-generating function, which in our case
can be obtained simply by letting~$\sc=\imi\kc$. The moment-generating
function of the distribution is then
\begin{equation*}
\avg{\ee^{\sc\X_\pal}} =
\exp\l(-\Mpal\,\cK_\pal(-\imi\sc)\r) =
\exp\l(\Mpal\,\Lpal(\sc)\r)
\end{equation*}
where~$\Mpal$ was defined by Eq.~\eqref{eq:Mpal}, and
\begin{equation}
\Lpal(\sc) \ldef
\frac{1}{\Mpal}\ln\avg{\ee^{\sc\X_\pal}} = -\cK_\pal(-\imi\sc)
\end{equation}
is the scaled cumulant-generating function. As its name implies, this
function has the property that its derivatives at~$\sc=0$ give the cumulants
of~$\X_\pal$ scaled by~$\Mpal$, for example
\begin{equation}
\Lpal''(0) = \Mpal^{-1}\avg{\X_\pal^2},
\qquad
\Lpal''''(0) = \Mpal^{-1}\l(\avg{\X_\pal^4} - 3\avg{\X_\pal^2}^2\r),
\label{eq:cumul}
\end{equation}
where we left out the vanishing odd moments. We left out the~$\pal$ subscript
on~$\Lpal(\sc)$ since we assume that it becomes independent of~$\pal$ for
large~$\pal$.
If~$\Lpal(\sc)$ is differentiable over some interval of
interest,~$\prob_{\X_\pal}(\xc)$ satisfies a \emph{large-deviation
principle}~\cite{Gartner1977,Ellis1984,Ellis,Touchette2009},
\begin{equation}
\prob_{\X_\pal}(\xc)
\sim \ee^{-\Mpal\,\I(\xc/\Mpal) + \order{\Mpal}},\qquad
\Mpal \gg 1,
\label{eq:rholargedev}
\end{equation}
where~$\I(\xa)$ is the \emph{rate function}, which is the Legendre--Fenchel
transformation of~$\Lpal(\sc)$:
\begin{equation}
\I(\xa) = \sup_{\sc\in \mathbb{R}}\{\sc\xa - \Lpal(\sc)\}.
\label{eq:Idef}
\end{equation}
The large-deviation principle is in essence an application of the method of
steepest descent for large~$\Mpal$.
The scaled cumulant-generating function~$\Lpal(\sc)$ is always convex,
which guarantees a unique solution to~\eqref{eq:Idef}. The rate
function~$\I(\xa)$ is also convex, with a global minimum at~$\xa=0$. This
means that for small~$\xa=\xc/\Mpal$ we can use the Taylor expansion
\begin{equation}
\I(\xa) = \tfrac12 \I''(0)\xa^2 + \tfrac1{4!} \I''''(0)\xa^4
+ \Order{\xa^6}
\end{equation}
to write
\begin{equation}
\prob_{\X_\pal}(\xc)
\sim \ee^{-\tfrac12\I''(0)\,\xc^2/\Mpal},\qquad
\xc \ll \cc\,\Mpal,\quad
\Mpal \gg 1,
\label{eq:rhoGaussian}
\end{equation}
with~$\cc = \lvert12\I''(0)/\I''''(0)\rvert^{1/2}$. This is a Gaussian
approximation with variance~$\Mpal/\I''(0)$, which can be shown to agree
with~\eqref{eq:r2N} after multiplying by~$\sdim$. To recover a Gaussian
distribution over an appreciable range of~$\xc$ (say, a standard deviation) we
insert~$\xc \sim \sqrt{\Mpal/\I''(0)}$ in the condition~$\xc \ll \cc\,\Mpal$
to find the Gaussian criterion
\begin{equation}
\Mpal \gg \frac{1}{12}\,\frac{\lvert\I''''(0)\rvert}{(\I''(0))^2}
= \frac{1}{12}\,\frac{\lvert\Lpal''''(0)\rvert}{(\Lpal''(0))^2}.
\label{eq:Gausscrit}
\end{equation}
After using~$\Lpal(\sc)$ to find the cumulants, we can rewrite this as
\begin{equation}
\Phi_\pal \ldef
\frac{(\sdim+3)}{40}\,
\frac{\volsw\int_\Vol\Delta_\pal^4(\etav)\dint\Vol_{\etav}}
{\l(\int_\Vol\Delta_\pal^2(\etav)\dint\Vol_{\etav}\r)^2}
\ll \phi,
\label{eq:Gausscrit2}
\end{equation}
where~$\volsw$ is the volume of one swimmer. When~\eqref{eq:Gausscrit}
or~\eqref{eq:Gausscrit2} is satisfied, we can expect that the distribution
will be Gaussian (except in the far tails). (The constant prefactor
in~\eqref{eq:Gausscrit2} is only valid for~$\sdim=2$ or~$3$.) The
criterion~\eqref{eq:Gausscrit2} can be interpreted as the minimum volume
fraction~$\Phi_\pal$ required to observe Gaussian behavior, roughly within a
standard deviation of the mean. We note that, at small swimmer volume
fraction, a long time (\textit{i.e.}, path length~$\pal$) is required to achieve the
Gaussian form. Figure~\ref{fig:Philambda} highlights this: the solid curve
is~$\Phi_\pal$ from Eq.~\eqref{eq:Gausscrit2} for the squirmer model in
Section~\ref{sec:numerics}, with parameter values appropriate for the
experiments of \citet{Leptos2009}. Their experiments had $\pal \lesssim
30\,\microm$, so they are in the slowly-decreasing region of
Fig.~\ref{fig:Philambda}, before more rapid~$\pal^{-1}$ convergence sets in
after~$\pal \gtrsim 50\,\microm$. It is thus not surprising that Gaussian
tails were not observed in the experiments.
\begin{figure}
\begin{center}
\includegraphics[width=.6\textwidth]{Philambda}
\end{center}
\caption{The minimum volume fraction~$\Phi_\pal$ for the threshold of
Gaussian behavior (Eq.~\eqref{eq:Gausscrit2}).  The solid line is the
squirmer model (Section~\ref{sec:numerics}) with~$\beta=0.5$ and
radius~$\lsc=5\,\microm$. The dashed line is for spherical treadmillers
(inviscid spheres) of the same radius. The latter require an order of
magnitude longer to achieve Gaussianity, due to the short range of their
velocity field.}
\label{fig:Philambda}
\end{figure}
As an illustration of the large-deviation approach, we consider again the
inviscid cylinder and sphere results~\eqref{eq:cKlogmodel}. We have then
respectively
\begin{equation}
\Lpal(\sc) = \begin{cases}
(1 - (\Clog\sc)^2)^{-1/2} - 1,
\qquad &\text{(cylinders)}; \\
(\Clog\sc/2)^{-1}\arctanh(\Clog\sc/2) - 1,
\qquad &\text{(spheres)}.
\end{cases}
\label{eq:L2cylsph}
\end{equation}
We can see from~\eqref{eq:Idef} that the singularities in~\eqref{eq:L2cylsph}
($\lvert\sc\rvert = 1/\Clog$ for cylinders, $\lvert\sc\rvert = 2/\Clog$ for
spheres) immediately lead to~$\I(\xa) \sim \lvert\xa\rvert/\Clog$
and~$2\lvert\xa\rvert/\Clog$ as~$\lvert\xa\rvert \rightarrow \infty$,
respectively, corresponding to exponential tails in~\eqref{eq:rholargedev}
independent of~$\Mpal$. These are the displacements of particles that come
near the stagnation points at the surface of the cylinder or
sphere~\cite{Lin2011}. We can also use~\eqref{eq:L2cylsph} to compute the
constant on the right-hand side of~\eqref{eq:Gausscrit}: $3/4$ (cylinders)
and~$9/10$ (spheres), which are both of order unity. This reflects the fact
that the drift function~$\Delta_\pal(\etav)$ is very localized, so convergence
to Gaussian is tied directly to the volume carved out by the swimmers. For
swimmers with a longer-range velocity field, such as squirmers, the constant
is much larger, as reflected by the large difference between the solid
(squirmers) and the dashed (inviscid spheres) curves in
Fig.~\ref{fig:Philambda}.
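The constants $3/4$ and $9/10$ can be checked directly by evaluating the right-hand side of~\eqref{eq:Gausscrit}, $\lvert\Lpal''''(0)\rvert/12(\Lpal''(0))^2$, with finite differences applied to~\eqref{eq:L2cylsph} (taking $\Clog=1$):

```python
import numpy as np

def gauss_const(L, h=0.02):
    """|L''''(0)| / (12 L''(0)^2), the right-hand side of Eq. (Gausscrit),
    via five-point central differences at the origin."""
    v = L(h * np.arange(-2.0, 3.0))
    d2 = (v[3] - 2*v[2] + v[1]) / h**2
    d4 = (v[4] - 4*v[3] + 6*v[2] - 4*v[1] + v[0]) / h**4
    return abs(d4) / (12 * d2**2)

L_cyl = lambda s: 1 / np.sqrt(1 - s**2) - 1    # Eq. (L2cylsph), Clog = 1

def L_sph(s):                                  # Eq. (L2cylsph), Clog = 1
    x = np.where(s == 0, 1e-12, s) / 2         # guard the removable 0/0
    return np.arctanh(x) / x - 1

# gauss_const(L_cyl) ~ 3/4 and gauss_const(L_sph) ~ 9/10
```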
\begin{figure}
\begin{center}
\includegraphics[width=.6\textwidth]{cyl_sph_Cramer}
\end{center}
\caption{The rate function~$\I(\xa)$ for cylinders (Eq.~\eqref{eq:Icyl},
dashed line) and spheres (solid line, numerical solution
of~\eqref{eq:rholargedev}). In both cases we used~$\Clog=1$. The linear
behavior for large~$\lvert\xa\rvert$ indicates exponential tails
in~\eqref{eq:rholargedev}. When~$\xa$ is small, expanding near the
quadratic minimum recovers the Gaussian limit.}
\label{fig:cyl_sph_Cramer}
\end{figure}
For inviscid cylinders the Legendre--Fenchel transform~\eqref{eq:Idef} can be
done explicitly to find (with \hbox{$\Clog=1$})
\begin{equation}
\I(\xa) =
1 - \sqrt{3\pi\alpha}\l(12 - \alpha^2\xa^{-2}\r)^{-1/2}
+ \tfrac12\sqrt{\pi\alpha}
\l(\l(\pi\alpha - 4\r)\alpha^{-2}\xa^2 + \tfrac13\r)^{1/2}
\label{eq:Icyl}
\end{equation}
where~$\alpha(\xa) \ge 0$ is defined by
\begin{equation}
\alpha^3(\xa) = 6\l(\sqrt{(9\pi\xa^4)^2 + 48\xa^6} - 9\pi\xa^4\r).
\end{equation}
For spheres~\eqref{eq:Idef} must be solved numerically for each~$\xa$, which
is straightforward since this is a one-dimensional problem with a unique
solution. The function~$\I(\xa)$ for both cylinders and spheres is plotted in
Fig.~\ref{fig:cyl_sph_Cramer}.
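The numerical Legendre--Fenchel transform is a one-dimensional maximization over the interval where~$\Lpal$ is finite ($\lvert\sc\rvert<2/\Clog$ for spheres). A grid-based sketch with~$\Clog=1$ that also recovers the large-$\lvert\xa\rvert$ slope of~$2$ quoted above:

```python
import numpy as np

def rate_function(chi, L, smax, ns=200001):
    """I(chi) = sup_s { s chi - L(s) }, Eq. (Idef), maximized on a grid
    that avoids the singular endpoints |s| = smax."""
    s = np.linspace(-smax * (1 - 1e-6), smax * (1 - 1e-6), ns)
    return np.max(np.outer(chi, s) - L(s)[None, :], axis=1)

def L_sph(s):                          # Eq. (L2cylsph), spheres, Clog = 1
    x = np.where(s == 0, 1e-12, s) / 2   # guard the removable 0/0 at s = 0
    return np.arctanh(x) / x - 1

chi = np.array([0.0, 100.0])
I = rate_function(chi, L_sph, smax=2.0)
# I[0] = 0 (global minimum); I[1]/100 approaches the tail slope 2
```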
\section{The diffusive scaling}
\label{sec:diffscal}
\begin{figure}
\begin{center}
\subfigure[]{
\raisebox{.6em}
{\includegraphics[height=.25\textheight]{diffusive_scaling_nodiff}}
\label{fig:diffusive_scaling_nodiff}
}
\hspace{1em}
\subfigure[]{
\includegraphics[height=.26\textheight]{diffusive_scaling_nodiff_std1}
\label{fig:diffusive_scaling_nodiff_std1}
}
\end{center}
\caption{\protect\ifjournal{(Color online)\ }{} (a) pdf{}s of particle displacements for
squirmers for different times, at a volume fraction~$\phi=2.2\%$.  (b) The
same pdf{}s rescaled by their standard deviation exhibit the `diffusive
scaling' observed in the experiments of \citet{Leptos2009}, where the curves
collapse onto one despite not being Gaussian. As in the experiments, the
scaling is worst for~$\pal=6\,\microm$.}
\label{fig:diffusive_scaling_nodiff_both}
\end{figure}
One of the most remarkable properties of the pdf{}s found by
\citeauthor{Leptos2009} is the \emph{diffusive scaling}. This is illustrated
in Fig.~\ref{fig:diffusive_scaling_nodiff_both}: the unrescaled displacement
pdf{}s are shown in Fig.~\ref{fig:diffusive_scaling_nodiff}; the same pdf{}s
are shown again in Fig.~\ref{fig:diffusive_scaling_nodiff_std1}, but rescaled
by their standard deviation. The pdf{}s collapse onto a single curve (the
shortest path length collapses more poorly).
Figure~\ref{fig:diffusive_scaling_nodiff_both} was obtained in the same manner
as Fig.~\ref{fig:compare_to_Leptos_both}, using our probabilistic approach.
Hence, the diffusive scaling is also present in our model, as it was in the
direct simulations of \citet{Lin2011} for a similar range of path lengths. In
Fig.~\ref{fig:diffusive_scaling_nodiff_both} we left out thermal diffusion
completely, which shows that it is not needed for the diffusive scaling to
emerge.
Here we have the luxury of going much further in time and examining the
probability of larger displacements, since we are simply carrying out
integrals rather than running a statistically-limited experiment or simulation.
(The numerical integrals are of course limited by resolution.)
Figure~\ref{fig:diffusive_scaling_nodiff_full_both} shows much longer runs
(maximum~$\pal=500\,\microm$ compared to~$30\,\microm$ in the experiments).
We see that, though the diffusive scaling holds in the core (as it must, since
the core is Gaussian), the tails are narrowing, consistent with convergence to
a Gaussian distribution but breaking the diffusive scaling. We now explain
why the diffusive scaling appears to hold for some time, but eventually breaks
down.
\begin{figure}
\begin{center}
\subfigure[]{
\raisebox{.6em}
{\includegraphics[height=.25\textheight]{diffusive_scaling_nodiff_full}}
\label{fig:diffusive_scaling_nodiff_full}
}
\hspace{1em}
\subfigure[]{
\includegraphics[height=.26\textheight]{diffusive_scaling_nodiff_std1_full}
\label{fig:diffusive_scaling_nodiff_std1_full}
}
\end{center}
\caption{\protect\ifjournal{(Color online)\ }{} Same as
Fig.~\ref{fig:diffusive_scaling_nodiff_both} but for longer times and with a
wider scale. In (a) the distributions broaden with time since their
standard deviation is increasing; in (b), after rescaling by the standard
deviation, the distributions' tails narrow with increasing~$\pal$ as they
converge to a Gaussian.}
\label{fig:diffusive_scaling_nodiff_full_both}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.45\textwidth]{intDeltaq}
\end{center}
\caption{\protect\ifjournal{(Color online)\ }{} The second and fourth integrated moments
of~$\Delta_\pal$. These grow ballistically ($\pal^q$) for short times,
and eventually grow linearly with~$\pal$. The slow crossover
of~$\Delta_\pal^4$ is the origin of the `diffusive scaling' of
\citet{Leptos2009}, since in their narrow range of~$\pal$ the curve is
tangent to~$\pal^2$.}
\label{fig:intDeltaq}
\end{figure}
To understand the origin of the diffusive scaling, let us first examine how
the integrated moments of~$\Delta_\pal$ change with~$\pal$.
Figure~\ref{fig:intDeltaq} shows the evolution of the spatial integrals
of~$\Delta_\pal^2$ and~$\Delta_\pal^4$ for our squirmer model. For
short~$\pal$, the moment of~$\Delta_\pal^q$ grows as~$\pal^q$. This is a
typical `ballistic' regime: it occurs because for short times the integrals
are dominated by fluid particles that are displaced proportionately to the
swimmer's path length. These particles are typically very close to the
swimmer, and get dragged along for a while. This regime is visible for~$\pal
\lesssim 2\,\microm$ in Fig.~\ref{fig:intDeltaq}.
As~$\pal$ becomes larger, the particles initially near the swimmer are left
behind, and thus undergo only a finite displacement even as~$\pal$ increases.
Eventually, for~$q=2$ the scenario illustrated in
Fig.~\ref{fig:stresslet_Delta} takes over and leads to linear growth of the
moment with~$\pal$. This can be seen in Fig.~\ref{fig:intDeltaq} (triangles)
for~$\pal \gtrsim 40\,\microm$, though the scaling already looks fairly linear
at~$\pal \sim 10\,\microm$. For~$q=4$ the moment also eventually grows
linearly with~$\pal$, but the mechanism is different: the larger power
downplays the far-field stresslet effect, and the near-field dominates. The
linear growth is thus due to a corresponding linear growth of the support
of~$\Delta_\pal^4$ as in Fig.~\ref{fig:sphere_Delta}. This can be seen in
Fig.~\ref{fig:intDeltaq} (dots) for~$\pal \gtrsim 100\,\microm$, as indicated
by a dashed line (see Appendix~\ref{apx:logmodel} for the computation of this
asymptotic form). The crucial fact is that for~$q=4$ the crossover
from~$\pal^q$ to~$\pal^1$ takes much longer than for~$q=2$. This is because
the larger power weighs the largest displacements (with~$\Delta_\pal^q \sim
\pal^q$) more heavily, so they dominate for longer before becoming too
rare. This crossover is at the heart of the diffusive scaling, as we now
show.
Let us assume that the distribution~$\prob_{\X_\pal}(\xc)$ does satisfy a
diffusive scaling, such that $\alsct\,\prob_{\X_\pal}(\alsct\tilde\xc) =
\tilde\prob_{\X_\pal}(\tilde\xc)$ is independent of~$\pal$.
From~\eqref{eq:rhotxNswim}, after changing integration variable to~$\tilde\kc
= \alsct\kc$,
\begin{equation}
\tilde\prob_{\X_\pal}(\tilde\xc)
=
\alsct\,\prob_{\X_\pal}(\alsct\tilde\xc)
=
\frac{1}{2\pi}\int_{-\infty}^\infty
\exp\l(-\Mpal\,\cK_\pal(\tilde\kc/\alsct)\r)
\ee^{-\imi\tilde\kc\tilde\xc}\dint\tilde\kc.
\end{equation}
Hence, a diffusive scaling law requires that $\Mpal\cK_\pal(\tilde\kc/\alsct)$
be independent of~$\pal$. Using this scaling in~\eqref{eq:cKdef}, we have
\begin{equation}
\Mpal\cK_\pal(\tilde\kc/\alsct) =
\nd
\int_\Vol\K(\Delta_\pal(\etav)\tilde\kc/\alsct)\dint\Vol_{\etav}\,.
\end{equation}
We Taylor expand~$\K$ (for~$\sdim=3$):
\begin{equation}
\Mpal\cK_\pal(\tilde\kc/\alsct)/\nd
=
\tfrac{1}{6}\,\tilde\kc^2\pal^{-1}
\int_\Vol\Delta_\pal^2(\etav)\dint\Vol_{\etav}
+
\tfrac{1}{120}\,\tilde\kc^4\pal^{-2}
\int_\Vol\Delta_\pal^4(\etav)\dint\Vol_{\etav}
+ \Order{\tilde\kc^6}.
\end{equation}
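The coefficients $1/6$ and $1/120$ are the small-argument expansion coefficients of the kernel. As a quick numerical aside, they are consistent with the form $\K(u) = 1 - \sin(u)/u$ (an assumption here, suggested by the coefficients rather than stated above), which can be checked directly:

```python
import math

def K(u):
    # Hypothetical d = 3 kernel: one minus the isotropic average of
    # exp(i k . Delta), i.e. K(u) = 1 - sin(u)/u. This form is an
    # assumption, consistent with the 1/6 and 1/120 coefficients above.
    return 1.0 - math.sin(u) / u

def K_series(u):
    # The two quoted expansion terms.
    return u**2 / 6.0 - u**4 / 120.0

# For small u, the mismatch is the next term in the series, ~ u**6/5040.
```

The agreement to $\Order{u^6}$ confirms that the quoted $1/6$ and $1/120$ prefactors follow from this kernel.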
The first term recovers the Gaussian approximation; the second is the first
correction to Gaussian. Again this must be independent of~$\pal$ to obtain a
diffusive scaling, so we need
\begin{equation}
\int_\Vol\Delta_\pal^2(\etav)\dint\Vol_{\etav} \sim \pal,
\qquad
\int_\Vol\Delta_\pal^4(\etav)\dint\Vol_{\etav} \sim \pal^2,
\end{equation}
and clearly in general we would need each even moment of order~$q$ to scale
as~$\pal^{q/2}$. However, we have already seen that all the moments typically
eventually scale linearly with~$\pal$, so there can be no diffusive scaling.
Because the moment transitions from a power larger than~$2$ ($\pal^4$) to one
less than~$2$ ($\pal^1$), there is a range of~$\pal$ in
Fig.~\ref{fig:intDeltaq} (roughly $10\,\microm \lesssim \pal \lesssim
60\,\microm$) where~$\pal^2$ is tangent to the~$q=4$ curve, as indicated by the line
segment. In that range the distribution will appear to have a reasonably good
diffusive scaling, consistent with
Fig.~\ref{fig:diffusive_scaling_nodiff_both}. But, as we saw in
Fig.~\ref{fig:diffusive_scaling_nodiff_full_both}, the diffusive scaling does
not persist for larger~$\pal$. It is a coincidence that the range of~$\pal$
used in the experiments of \citet{Leptos2009} falls exactly in that
intermediate regime.
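The tangency argument can be made concrete with a toy model (purely illustrative; the functional form below is an ad hoc interpolant between the two limiting powers, not derived from the squirmer calculation):

```python
import numpy as np

# Toy fourth moment: ~ lam^4 well below a crossover scale lc,
# ~ lam^1 well above it (an ad hoc interpolant, arbitrary units).
lam = np.logspace(-2, 2, 401)
lc = 1.0
I4 = lam**4 / (1.0 + (lam / lc)**3)

# Logarithmic slope d(log I4)/d(log lam): it decreases from 4 to 1,
# so it must pass through 2 -- the point where lam^2 is tangent to
# the curve on a log-log plot, mimicking an apparent diffusive scaling.
slope = np.gradient(np.log(I4), np.log(lam))
```

In this toy the slope equals $2$ at $(\pal/\pal_c)^3 = 2$; near that point the curve is locally indistinguishable from $\pal^2$ over a limited range of~$\pal$, just as in Fig.~\ref{fig:intDeltaq}.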
\section{Discussion}
\label{sec:discussion}
In this paper, we showed how to use the single-swimmer drift function to fully
derive the probability distribution of particle displacements. We took the
limit of infinite volume and discussed the underlying assumptions, such as the
need for the function~$\cK_\pal(\kc)$ in~\eqref{eq:propcond2} to not diverge
too quickly with volume. In typical cases, the function becomes independent
of volume as we make~$\Vol$ large, but it is possible for the integral to
diverge with~$\Vol$, as may occur for example in sedimentation problems. If
the divergence is rapid enough, a larger value of~$\Mprop$ would need to be
used when applying Proposition~\ref{prop:expid}, potentially leading to
interesting new distributions. Whether this can happen in practice is a topic
for future investigation.
An intriguing question is: why does the squirmer model do so well? As was
observed previously~\cite{Lin2011,Pushkin2014}, it reproduces the pdf\ very
well in the core and part of the tails (Fig.~\ref{fig:compare_to_Leptos}).
However, the high precision of our calculation reveals that the experiments
have slightly `fatter' tails. This means that the specific details of the
organisms only begin to matter when considering rather large displacements.
In future work, we shall attempt to determine the dominant cause of
large displacements in the near-field for a more realistic model of
\textit{C.~reinhardtii}. The large displacements could arise, for instance,
from the strong time-dependence of the swimming organism, or from particles
`sticking' to the no-slip body of the organism or to stagnation points.
We have not discussed at all the role of reorientation, that is,
running-and-tumbling or orientation diffusion. \citet{Pushkin2013b} showed
that some curvature in the paths does not influence the diffusivity very much,
so it is likely not a very important factor here. In experiments involving
different organisms it could matter, especially if the swimmer carries a
volume of trapped fluid.
One glaring absence from the present theory is any asymmetry between pushers
and pullers. This suggests that correlations between swimmers must be taken
into account to see this asymmetry emerge. These correlations begin to matter
as swimmer densities are increased. However, how to incorporate these
correlations into a model similar to the one presented here is a challenge.
\begin{acknowledgments}
The author thanks Bruno Eckhardt and Stefan Zammert for helpful discussions
and for providing the digitized data from \citeauthoreos{Leptos2009}. The
paper also benefited from discussions with Raymond Goldstein, Eric Lauga,
Kyriacos Leptos, Peter Mueller, Tim Pedley, Saverio Spagnolie, and Benedek
Valko. Much of this work was completed while the author was a visiting
fellow of Trinity College, Cambridge. This research was supported by NSF
grant DMS-1109315.
\end{acknowledgments}
\section{Introduction}
Rapid technological developments since the early 1990s have provided us with an enormous amount of
information about the existence of Supermassive Black Holes (\texttt{SMBH}s; $M_{bh}\thicksim 10^{5}-10^{9} M_{\sun}$)
in almost all galaxies \citep{KormendyRichstone1995}. Studies have shown that there is a correlation between the \texttt{SMBH} mass and a number
of measurable features of the host galaxy due to the interaction between the \texttt{SMBH} and its surroundings. Some
of the properties known to correlate well with the \texttt{SMBH} mass are the bulge luminosity
\citep[$L_{Bulge}$;][]{KormendyRichstone1995,MarconiHunt2003}, the bulge mass \citep[$M_{Bulge}$;][]{KormendyRichstone1995, MarconiHunt2003, HaringRix2004},
the mean velocity dispersion $(\sigma)$ of the bulge stars \citep{Ferrarese2000, Gebhardt2000},
the S\'{e}rsic index $(n)$ of the major-axis surface brightness profile \citep{GrahamDriver2007}, and the pitch angle $(P)$ of spiral arms in disk galaxies
\citep{Seigar2008, Berrier2013}.
Over the past decade, the number of galaxies with secure mass estimates has increased, because studies have revealed new scaling relations
and revised the existing ones, thus improving our understanding of galaxy-black hole coevolution. Substructures in the most commonly
cited black hole scaling relations (e.g. $M_{bh}$-$L_{Bulge}$, $M_{bh}$-$\sigma$, $M_{bh}$-$M_{Bulge}$) have been reported and attributed to barred galaxies and/or pseudobulges.
The true nature of galaxy evolution across different galaxy types still needs to be resolved.
A common practice with these correlations is to estimate the mass function of the central \texttt{SMBH}s (\texttt{BHMF}) in the local universe
\citep[e.g.][]{Salucci1999, YuTremaine2002, Marconi2004, Shankar2004, Graham2007, Vika2009, Davis2014}. A robust \texttt{BHMF} helps to describe
the evolution of the \texttt{SMBH} distribution and provides important constraints on the coevolution of the quasar and black hole populations.
The most well-known theoretical constraints are on the integrated emissivity of the quasar population, the integrated mass density of black holes, and
the average black hole accretion rate \citep{Soltan1982, Fabian1999, Elvis2002, Shankar2009}. A comparison among the recent local \texttt{BHMF}
estimates derived from different scaling relations can be seen in Figure 5 of \citet{Shankar2009}. Most of these studies use an analytic approach,
which combines the measurements of the galaxy luminosity or velocity function with one of the \texttt{SMBH} scaling relations as outlined
by \citet{HaringRix2004}. These studies rely on assumptions about the morphological type fractions and the bulge-to-total luminosity ($B/T$) ratios.
The sensitivity of the low-mass end of the \texttt{BHMF} to these assumptions is well illustrated in Figure A2 of \citet{Vika2009}.
Recently, \citet{Davis2014} estimated the \texttt{BHMF} by using the \texttt{SMBH} mass versus spiral arm pitch angle relation for a nearly complete sample of local spiral galaxies
in order to produce reliable data for the low-mass end of the local \texttt{BHMF}. In this paper, we aim to estimate a local \texttt{BHMF} for
all galaxy types within the same volume limits in order to complement this late-type \texttt{BHMF}. Therefore, we used the identical sample selection
criteria used by \citet{Davis2014}.
The structure of the paper is as follows: in Section 2, we discuss the robustness of the $M_{bh}$-$P$ relation for late-type
galaxies and the $M_{bh}$-$n$ relation for early-type galaxies (E/S0). In Section 3, we describe our sample selection and its completeness.
In Section 4, we present our methodology for estimating the \texttt{BHMF}. We first describe how we measure the S\'{e}rsic indices and how we establish the S\'{e}rsic index
distribution for the early-type galaxies in our sample. Then, we show our determination of the local \texttt{BHMF} from
the S\'{e}rsic index distribution for the early-types and the pitch angle distribution for the late-types. Finally, in Section 5 we compare our results
to the previous works.
A cosmological model with $\Omega_{\Lambda}=0.691$, $\Omega_{M}=0.307$, $\omega_{b}=0.022$ and $h_{67.77}=H_{o}/$(67.77 km s$^{-1}$ Mpc$^{-1}$)
is adopted throughout this paper.
\section{$M_{bh}$-$P$ Relation and $M_{bh}$-$n$ Relation}
A common conclusion based on observational data is that \texttt{SMBH}s are associated with the
mass of the central bulge in the host galaxy. The $M_{bh}$-$M_{Bulge}$, $M_{bh}$-$L_{Bulge}$, and $M_{bh}$-$n$ relations all depend
on the success of the measurements of the central bulge. In late-type galaxies, there can be difficulties in
isolating the central bulge from the other components of the galaxy (e.g. bars, disc, and spiral arms). In the study of disc galaxies,
a standard practice is to assume a fixed value of the $B/T$ ratio. This introduces a bias in the \texttt{BHMF} such that the \texttt{SMBH} mass is overestimated
in late-type disc galaxies and underestimated in early-type disc galaxies \citep{Graham2007}. Another approach is to use the average $B/T$ ratios
derived from $R^{1/n}$-bulge $+$ exponential-disc decompositions \citep{Graham2007}, which requires heavy image processing tools. The large scatter
in these relations to estimate \texttt{SMBH} mass can be traced back to the complexity of the decomposition in late-type galaxies, particularly in barred galaxies.
The $M_{bh}$-$\sigma$ relation has had considerable success in estimating \texttt{SMBH} masses in many galaxies. However, it requires spectroscopic measurements, which are
observationally expensive and depend on the spectroscopic bandwidth. Furthermore, a careful approach is needed such that a consistent bulge region is always sampled for the measurement of $\sigma$.
Similar to the above relations, measuring $\sigma$ is more complex for disc galaxies than it is for elliptical galaxies because the velocity dispersion from the motion of
disc and bar is coupled with $\sigma$ and they need to be handled properly \citep{Hu2008}.
Among other relations, the $M_{bh}$-$P$ relation seems promising for late-type galaxies. \citet{Berrier2013} established a linear $M_{bh}$-$P$ relation
for local spiral galaxies as $log(M/M_{\sun})=(8.21\pm0.16)-(0.062\pm0.009)|P|$ with a scatter less than 0.48 dex in all of their samples.
This is lower than the intrinsic scatter ($\approx0.56$ dex) of the $M_{bh}$-$\sigma$ relation,
using only late-types \citep{Gultekin2009}. The $P$ derived \texttt{SMBH} mass estimates also seem to be consistent in galaxies with pseudobulges, where other
relations seem to fail \citep{Berrier2013}. Although there are obvious advantages in using the $M_{bh}$-$P$ relation in late-type galaxies (see Discussion in \citet{Berrier2013}),
one needs to use a complementary relation for elliptical and S0 galaxies, since the $M_{bh}$-$P$ relation is applicable only to spiral galaxies.
Figure 6 in \citet{Berrier2013} presents evidence that $n$ and $P$ derived mass estimates are compatible for non-barred galaxies, and
a combination of these two approaches (i.e. using S\'{e}rsic index for E/S0 galaxies, and pitch angles for spiral galaxies) may produce a very
accurate \texttt{BHMF} for all galaxy types by using only imaging data.
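For concreteness, the quoted $M_{bh}$-$P$ relation translates directly into a one-line mass estimator (a sketch using the central values of \citet{Berrier2013} only, ignoring the quoted coefficient uncertainties and the intrinsic scatter):

```python
def log_mbh_from_pitch(p_deg):
    """log10(M_bh / M_sun) from the spiral-arm pitch angle P (in degrees),
    via log(M/M_sun) = 8.21 - 0.062 |P| (Berrier et al. 2013, central values)."""
    return 8.21 - 0.062 * abs(p_deg)

# Tightly wound arms (small |P|) imply a more massive central black hole:
# log_mbh_from_pitch(10.0) -> 7.59, log_mbh_from_pitch(25.0) -> 6.66
```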
\citet{Graham2001} presented evidence that the light concentration of the spheroids correlate well with their \texttt{SMBH} mass, showing that more centrally
concentrated spheroids have more massive black holes. Given that the S\'{e}rsic index, $n$, is essentially a measurement of the central light concentration,
\citet{GrahamDriver2007} found a log-quadratic relation between n and $M_{bh}$:
\begin{equation}
\log(M_{bh}) = (7.98\pm0.09)+(3.70\pm0.46)\log(\frac{n}{3})-(3.10\pm0.84)[\log(\frac{n}{3})]^2
\end{equation}
with an intrinsic scatter of $\epsilon_{intrinsic}=0.18^{+0.07}_{-0.06}$ dex.
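For reference, Equation 1 can be applied directly as a mass estimator (a minimal sketch with the central coefficient values only; no intrinsic-scatter correction):

```python
import math

def log_mbh_from_n(n):
    """log10(M_bh / M_sun) from the Sersic index n via the log-quadratic
    relation of Equation 1 (central values only)."""
    x = math.log10(n / 3.0)
    return 7.98 + 3.70 * x - 3.10 * x**2

# The quadratic peaks near n ~ 11.9 (where d log M / dn = 0), which is
# why predicted masses are effectively capped for very high-n spheroids.
```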
Recently, \citet{Sani2011}, \citet{Vika2012} and \citet{Beifiori2012} failed to recover a strong $M_{bh}$-$n$ relation. \citet{Savorgnan2013} re-investigated and recovered
the relation using a large collection of literature S\'ersic index measurements using R-band \citep{GrahamDriver2007}, I-band \citep{Beifiori2012}, K-band \citep{Vika2012}, and 3.6\micron
\citep{Sani2011} imaging data. \citet{Savorgnan2013} discussed the systematic effects associated with measuring S\'ersic index in different optical and infrared wavebands. They concluded that
the differences expected from measuring S\'ersic index in different wavebands are smaller than the differences expected due to other systematic biases such as one-dimensional decomposition
versus two-dimensional decomposition, or the differences between measuring the S\'ersic index along a minor axis versus measuring it along a major axis. Indeed, one might expect a S\'ersic index
measured using a one-dimensional fit (as performed in this paper) to be $\sim$10\% smaller than that measured using a two-dimensional fit \citep{Ferrari2004}. Furthermore, when measuring S\'ersic
index in multiple wavebands for the same galaxies, \citet{Savorgnan2013} found that wavelength bias was completely dominated by these other biases, which could be as large as 50\%. Given the
result of \citet{Kelvin2012}, we would expect the S\'ersic index measured at 3.6\micron\ to be less than 10\% higher than that measured in the $R$-band, which is significantly smaller than the 50\%
number given by \citet{Savorgnan2013}. \citet{Savorgnan2013} excluded the outlying S\'ersic indices, averaged the remaining values, and recovered the $M_{bh}$-$n$ relation by showing that elliptical
and disc galaxies follow two different linear $M_{bh}$-$n$ relations. They discussed how this relation is consistent with what would be derived by combining the $M_{bh}$-$L_{Bulge}$ and $L_{Bulge}$-$n$
relations and how this explains the log quadratic nature of the $M_{bh}$-$n$ relation reported by \citet{GrahamDriver2007}.
In this paper, we define early-type galaxies as elliptical and S0 galaxies. The sample used by \citet{GrahamDriver2007} was dominated ($\sim89\%$) by elliptical and S0 galaxies. However,
\citet{Savorgnan2013} studied S0 galaxies together with spiral galaxies. Therefore, we used the log quadratic $M_{bh}$-$n$ relation reported by \citet{GrahamDriver2007} to estimate \texttt{SMBH} masses
in our early-type sample.
\section{Data and Sample Selection}
\citet{Davis2014} based their selection criterion on the Carnegie-Irvine Galaxy Survey (\texttt{CGS}) \citep{Ho2011}; it is an almost complete sample of 605 nearby
galaxies in the southern hemisphere. Using the spiral galaxies in this parent sample plus the Milky Way, they defined a volume-limited sample consisting of spiral galaxies
within a luminosity (redshift-independent) distance of $25.4$ Mpc and a limiting absolute B-band magnitude of
$\mathfrak{M}_{B} = -19.12$. We followed the same selection criterion, except also including elliptical and S0 galaxies. As a result,
our volume-limited sample consists of 208 host galaxies (30 ellipticals and 38 S0s and 140 spiral galaxies) within a comoving volume
of $V_{c}=3.37\times10^{4}$ h$^{-3}_{67.77}$ Mpc$^3$ over a lookback time, $t_{L}\leq82.1$ Myr. We then downloaded images of selected galaxies from the \texttt{NASA/IPAC} Extragalactic
Database (\texttt{NED}).
A complete sample selection is necessary to estimate a meaningful \texttt{BHMF}. Therefore, we checked the completeness of our sample within
the limits of luminosity distance and absolute B-band magnitude in several ways. First, we compared our sample size with the maximum number of galaxies within these limits.
Figure 1 shows that the maximum number of galaxies, which is 217, appears at $D_{L}=28.05$ Mpc and $\mathfrak{M}_{B}=-19.37$, whereas our sample consists of
208 galaxies. While these two sample sizes differ by just $4\%$, using the limiting $\mathfrak{M}_{B}=-19.12$ allows us to include galaxies with dimmer intrinsic
brightness, and helps us to be more complete.
In addition, we determined the luminosity function in order to check if our volume-limited sample is a fair representation of the local galaxy population
over the absolute magnitude range $-23 \lesssim \mathfrak{M}_{B} \lesssim -19.12$. The luminosity function is determined as $\phi(\mathfrak{M}_{B})=\partial N/\partial \mathfrak{M}_{B}$,
where $N$ is the number of galaxies in our sample as a function of absolute B-band magnitude, divided by the comoving volume of the volume-limited sample.
This is illustrated in Figure 2, which shows the comparison with the luminosity functions for the overall \texttt{CGS} sample \citep{Ho2011} and the much larger sample of
\citet{Blanton2003} and \citet{Bernardi2013}.
The luminosity functions of \citet{Blanton2003} and \citet{Bernardi2013} have been shifted by $B-r=0.67$ mag, the average color of an Sbc spiral \citep{Fukugita1995}, which is roughly
the median Hubble type of both \texttt{CGS} and our volume-limited sample, and also transformed to $H_{0}=67.77$ km s$^{-1}$ Mpc$^{-1}$. While \citet{Blanton2003} derived the luminosity function
of $z\approx0.1$ galaxies from Sloan Digital Sky Survey (\texttt{SDSS}) by using the S\'{e}rsic parameters from a 1-D radial surface brightness profile, \citet{Bernardi2013} derived it
by using the 2-D fits to the whole galaxy image. The overall \texttt{CGS} sample has a luminosity function that agrees quite well with that of \citet{Blanton2003} \citep{Ho2011}.
However, our galaxy sample has a luminosity function that implies that it was observed in an overdense volume (see red data points in Figure 2). Therefore, we renormalized
our luminosity function by adding $-0.25$ to the y-axis in order to be consistent with that of \citet{Blanton2003} and \texttt{CGS} (see pink data points in Figure 2). For our \texttt{BHMF} estimation,
we used the same normalization factor (see Section 4.3). In addition, due to the sample selection criterion, our luminosity function does not extend below the magnitude limit of
$\mathfrak{M}_{B}=-19.12$. This fact is clearly relevant to our \texttt{BHMF} estimation and will be discussed further in Section 5.
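The binned estimator behind this comparison amounts to a histogram of absolute magnitudes divided by bin width and comoving volume; a minimal sketch (function and variable names are illustrative, not from the \texttt{CGS} pipeline):

```python
import numpy as np

def luminosity_function(abs_mag, v_comoving, bin_edges):
    """phi(M_B) = (dN / dM_B) / V_c from a list of absolute B-band magnitudes."""
    counts, edges = np.histogram(abs_mag, bins=bin_edges)
    return counts / (np.diff(edges) * v_comoving), edges

# e.g. phi, edges = luminosity_function(M_B, 3.37e4, np.arange(-23.0, -18.5, 0.5))
```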
Furthermore, we compared the distribution of morphological types of \texttt{CGS} and our sample. Our morphological fractions, $f_{type}$, are as such: $f_{E}=0.14$, $f_{S0}=0.18$, and
$f_{Spiral}=0.67$. This is in good agreement with the ones ($f_{E}=0.11\pm0.03$, $f_{S0}=0.21\pm0.05$, $f_{Sab+Sbc+Scd}=0.62\pm0.14$) reported by \citet{Fukugita1998}. Moreover,
Figure 3 shows that our volume-limited sample preserves the distribution of morphological types in \texttt{CGS}. In addition, we checked the T-type
distributions of \texttt{CGS} and our sample. The T-type values are taken from http://cgs.obs.carnegiescience.edu/CGS/database\_tables. The differences between the densities
of each T-type are always less than 5\% (see Figure 4).
We used imaging data taken from the \texttt{NASA/IPAC} Extragalactic Database (\texttt{NED}) (see Table 3). The absolute magnitudes were calculated from
apparent magnitudes, from \textit{HyperLeda} \citep{Paturel2003}, luminosity distances compiled from the mean redshift-independent distance from the \texttt{NED},
and extinction factors in the B-band from \citet{SchlaflyFinkbeiner2011}, as compiled by the \texttt{NED}. We used several different band images for
our measurements.
\section{Methodology}
\subsection{S\'{e}rsic Index Measurement}
In order to have a reliable S\'{e}rsic index measurement for early-type galaxies in our sample, we carefully masked the foreground stars and background galaxies
by using the \texttt{SEXTRACTOR} \citep{BertinArnouts1996}, and determined the centers of the galaxies by using the \texttt{IRAF} task \texttt{IMCNTR}.
The sky-background flux and its uncertainty were estimated from the mean and standard deviation, respectively, of five median fluxes obtained from small boxes near
the galaxy-free corners of each image. Then, the surface brightness profiles were extracted using the \texttt{IRAF} task \texttt{ELLIPSE}
\citep{Tody1986, Jedr1987} with a fixed center and allowing the isophotal position angle and ellipticity to vary. The best S\'{e}rsic bulge $+$ exponential disc
model for S0 galaxies, and the best S\'{e}rsic bulge model for elliptical galaxies were fitted by minimizing $\chi^{2}$ with an iterative procedure.
The models were derived three times for each galaxy in order to estimate the S\'{e}rsic index error. The uncertainty in the sky-background level was
added to and subtracted from the surface brightness profile data in the second and third derivations, respectively (see Figure 5).
This method for estimating the errors on the model parameters was also used by \citet{DeJong1996}. When fitting the profiles, seeing effects are particularly
relevant when the ratio between the effective half-light radius $R_{e}$ of the S\'{e}rsic model and the \texttt{FWHM} of the seeing is small \citep{Grahamdisk2001}.
When $R_{e}/$\texttt{FWHM} $> 2$, the difference between the measured S\'{e}rsic index and the actual S\'{e}rsic index is typically small, as explained
by \citet{Grahamdisk2001}. For our sample, all the derived bulge values for $R_{e}$ are greater than 1\arcsec, and the ratio $R_{e}/$\texttt{FWHM} is
greater than 2 (see Table 3, Column 6). The results of the best-fitting S\'{e}rsic bulge model for elliptical galaxies and the best-fitting S\'{e}rsic bulge $+$ exponential disc
model for S0 galaxies are shown in Figure 6 and 7, respectively.
We successfully completed the S\'{e}rsic index measurements for all 68 galaxies in our sample.
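The profile models being fitted can be written down compactly. The sketch below assumes the standard Capaccioli approximation for the S\'{e}rsic $b_n$ term (an assumption; the paper does not specify which form was used) and illustrates why the bulge and disc components must be combined in flux, not in magnitudes:

```python
import numpy as np

def b_n(n):
    # Capaccioli (1989) approximation to the Sersic b_n, valid for 0.5 < n < 10.
    return 1.9992 * n - 0.3271

def sersic_mu(r, mu_e, r_e, n):
    """Sersic bulge surface brightness profile (mag arcsec^-2)."""
    return mu_e + 2.5 * b_n(n) / np.log(10.0) * ((r / r_e)**(1.0 / n) - 1.0)

def bulge_disc_mu(r, mu_e, r_e, n, mu_0, h):
    """Sersic bulge + exponential disc: fluxes add, magnitudes do not."""
    f_bulge = 10.0**(-0.4 * sersic_mu(r, mu_e, r_e, n))
    f_disc = 10.0**(-0.4 * (mu_0 + (2.5 / np.log(10.0)) * r / h))
    return -2.5 * np.log10(f_bulge + f_disc)
```

Either model can then be fitted to the extracted profile by iterative $\chi^{2}$ minimization (e.g. with \texttt{scipy.optimize.curve\_fit}), and the fit repeated with the sky uncertainty added and subtracted to bracket the S\'{e}rsic index error, as described above.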
Before proceeding, we note that Equation 1 was constructed in the R-band \citep{GrahamDriver2007}, while our data ranges from the R-band to 4.6$\micron$. The structural parameters of
a galaxy may vary with wavelength due to the radial variations in stellar population and/or dust obscuration \citep{Kelvin2012}. This may result in different values for S\'ersic index in
different wavelengths. However, local early-type galaxies mostly have fairly small color gradients \citep[e.g.][]{Peletier1990,Taylor2005}. Using a similar fitting method to ours (S\'{e}rsic
bulge model for ellipticals; S\'{e}rsic bulge $+$ exponential disc model for disc galaxies), \citet{McDonald2011} found that the S\'ersic indices of elliptical and S0 galaxies show no
significant variation across optical and NIR wavelengths. In order to quantify how photometric and structural parameters of a galaxy vary with wavelength, recent studies used 2D single
S\'ersic fits and reported that galaxies with different S\'ersic indices and colors follow different trends with wavelength \citep[e.g.][]{Kelvin2012, Vulcani2014, Kennedy2015}. Their
common result is that high-n galaxies remain relatively stable at all wavelengths. These high-n galaxies roughly correspond to our early-type sample. However, it is worth mentioning that
the measurement of the S\'ersic index in these recent studies is different from ours: they used a single S\'ersic profile fit for all galaxies and made no attempt to remove objects
for which a two-component fit would be more appropriate. Therefore, single S\'ersic index wavelength dependence mostly gives information about bulge and disc properties of a galaxy \citep{Kennedy2016}.
For example, \citet{Vulcani2014} attributed the lack of variation in S\'ersic index with wavelength for red galaxies to the fact that they principally comprise one-component objects (i.e. ellipticals)
or two-component galaxies in which the components possess very similar colors, i.e. S0s. Although we can get some insight for the (disc-less) elliptical galaxies, the single S\'ersic galaxy
model is not suitable for quantifying possible changes with wavelength to S\'ersic indices of bulges in S0 galaxies. Therefore, following the work of \citet{McDonald2011}, we did not apply any
corrections to our S\'{e}rsic index measurements. All measured data for individual early-type galaxies in our sample are listed in Table 3.
\subsection{S\'{e}rsic index distribution}
As a result of the S\'{e}rsic index measurements, we had three S\'{e}rsic index estimates ($n_{i}$) for each of our 68 galaxies. We used two independent methods to find the best-fit
probability density function (\texttt{PDF}) for our data.
First, we employed a nominal \textit{binless} histogram, which is identical to the method in \citet{Davis2014}, in order to create the S\'{e}rsic index distribution.
We modeled each data point as a normalized Gaussian, where the mean is the average S\'{e}rsic index value $<n_{i}>$ and the standard deviation is the standard deviation
of the $n_{i}$, $\sigma_{<n_{i}>}$. The S\'{e}rsic index distribution is obtained as a normalized sum of these Gaussians. Then, we repeated the same modeling, but this time
the mean is the average logarithmic value of $n_{i}$, $<\log n_{i}>$, and the standard deviation is the standard deviation of $\log n_{i}$, $\sigma_{<\log n_{i}>}$. From the
resulting S\'{e}rsic index distributions, we were able to compute the statistical standardized moments of a probability distribution; mean ($\mu$), standard deviation (stdev),
skewness, and kurtosis. The two distributions give us almost the same statistical standardized moments: $\mu=3.10 (3.10)$, $stdev=1.38 (1.39)$, $skewness=0.95 (0.95)$, and
$kurtosis=4.17 (4.18)$, where the numbers in parentheses refer to the distribution derived from $<n_{i}>$ and $\sigma_{<n_{i}>}$. We used the \texttt{MATLAB} code \texttt{PEARSPDF}
to perform our \texttt{PDF} fitting. To explore the uncertainty in our \texttt{PDF} fit, we used a bootstrapping process. The random number generator \texttt{NORMRND} in \texttt{MATLAB}
was used for sampling (with replacement) from the original 68 data points, using the mean as $<\log n_{i}>$ and the standard deviation as $\sigma_{<\log n_{i}>}$.
The statistical standardized moments for one thousand data sets containing 68 data points each were individually calculated.
This gave one thousand new estimates for each of the parameters ($\mu$, stdev, skewness, and kurtosis). Then, the median and the standard deviation of these new estimates gave us
the uncertainty on the \texttt{PDF} fitting: $\mu=3.12\pm0.02$, $stdev=1.40\pm0.04$, $skewness=0.92\pm0.03$, $kurtosis=3.87\pm0.30$.
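The binless histogram and the bootstrap can be sketched in a few lines (a minimal illustration; the four-moment \texttt{PEARSPDF}-style fit itself is not reproduced here):

```python
import numpy as np

def binless_pdf(grid, means, sigmas):
    """Normalized sum of unit-area Gaussians, one per measurement
    (a 'binless histogram')."""
    grid = np.asarray(grid)[:, None]
    means, sigmas = np.asarray(means), np.asarray(sigmas)
    g = np.exp(-0.5 * ((grid - means) / sigmas)**2) / (sigmas * np.sqrt(2.0 * np.pi))
    return g.sum(axis=1) / means.size

def bootstrap_moments(means, sigmas, n_sets=1000, rng=None):
    """Resample galaxies with replacement, redraw each point from
    N(mean, sigma), and collect (mean, stdev) for every resampled set."""
    if rng is None:
        rng = np.random.default_rng(0)
    means, sigmas = np.asarray(means), np.asarray(sigmas)
    out = []
    for _ in range(n_sets):
        idx = rng.integers(0, means.size, means.size)
        draw = rng.normal(means[idx], sigmas[idx])
        out.append((draw.mean(), draw.std()))
    return np.array(out)
```

The spread of the collected moments across the resampled sets then gives the quoted uncertainties on $\mu$, stdev, skewness, and kurtosis.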
Then, we used the \texttt{MATLAB} code \texttt{ALLFITDIST}, which fits all valid parametric probability distributions to the data and returns the fitted distributions based on
the Bayesian information criterion. As a result, the gamma distribution is returned as the best \texttt{PDF} fit, with $\mu=3.11$, variance $=1.84$, shape $a=5.26\pm0.51$,
scale $b=0.59\pm0.06$. The resulting S\'{e}rsic distribution and its \texttt{PDF} fits are illustrated in Figure 8.
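As a quick consistency check, the fitted shape and scale agree with the method-of-moments relations for a (zero-location) gamma distribution, $\mu = ab$ and variance $= ab^{2}$:

```python
# Fitted mean and variance of the Sersic index distribution (from the text):
mu, var = 3.11, 1.84

shape_a = mu**2 / var   # ~ 5.26, matching the fitted shape parameter
scale_b = var / mu      # ~ 0.59, matching the fitted scale parameter
```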
\subsection{Estimating BHMF}
The local \texttt{BHMF} is formulated as
\begin{equation}
\phi(\log(M_{bh}))=\frac{\partial N}{\partial \log(M_{bh})}=\frac{\partial N}{\partial x}\frac{\partial x}{\partial \log(M_{bh})}=\phi(x)\frac{\partial x}{\partial \log(M_{bh})}
\end{equation}
where $N$ is the number of galaxies, $x$ is pitch angle $P$ for late-type galaxies and S\'{e}rsic index $n$ for early-type galaxies, and $M_{bh}$ is \texttt{SMBH} mass.
For the early-type galaxies, the S\'{e}rsic index measurements for the volume-limited sample give us the S\'{e}rsic index function
$\phi(n)=\frac{\partial N}{\partial n}$; and $\frac{\partial n}{\partial \log(M_{bh})}$ can be evaluated by taking the derivative of Equation 1 as follows:
\begin{equation}
\frac{d\log(M_{bh})}{dn}=\frac{(3.70\pm0.46)}{n\ln(10)}-\frac{2(3.10\pm0.84)\log(\frac{n}{3})}{n\ln(10)}
\end{equation}
As a result, we get the following equation:
\begin{equation}
\phi(\log(M))=\phi(n)[\frac{(3.70\pm0.46)}{n\ln(10)}-\frac{2(3.10\pm0.84)\log(\frac{n}{3})}{n\ln(10)}]^{-1}
\end{equation}
Using Equation 4 and dividing by the local comoving volume of $V_{c}=3.37\times10^{4}$ h$^{-3}_{67.77}$ Mpc$^3$, the S\'{e}rsic index distribution was converted into the \texttt{BHMF}
for the early-type galaxies.
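The analytic Jacobian used above can be checked against the log-quadratic relation by finite differences (a quick consistency sketch; the relation is restated here for self-containment):

```python
import math

LN10 = math.log(10.0)

def log_mbh(n):
    # The log-quadratic M_bh-n relation (central values).
    x = math.log10(n / 3.0)
    return 7.98 + 3.70 * x - 3.10 * x**2

def jacobian(n):
    # Analytic d(log M_bh)/dn used to map phi(n) onto phi(log M_bh).
    return (3.70 - 2.0 * 3.10 * math.log10(n / 3.0)) / (n * LN10)

# Central finite differences of log_mbh reproduce jacobian(n).
```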
In order to estimate the error in the \texttt{BHMF}, we ran a Markov Chain Monte Carlo (\texttt{MCMC}) sampling of the \texttt{BHMF}. The sampling uses $10^{5}$ realizations of
the S\'{e}rsic index distribution based on the errors in the previous section. The S\'{e}rsic index distributions were randomly generated from the parameters that define
the \texttt{PDF}, assuming that they are normally distributed with the $1\sigma$ uncertainties given by the estimated errors. The uncertainties in the $M_{bh}$-$n$ relation are also
allowed to vary as a Gaussian distribution around the fiducial values. We first estimated the \texttt{BHMF} without assuming any errors, then we allowed the listed errors
(four parameters in the \texttt{PDF} fit $+$ three parameters in the $M_{bh}$-$n$ relation) to be perturbed individually and collectively. This is illustrated in Figure 9 (left),
which shows that the S\'{e}rsic index distribution has no impact on the \texttt{BHMF} for $M_{bh}>10^{9}M_{\sun}$ since the mass of the \texttt{SMBH} is fixed for $n > 11.9$.
The sharp decrease at the high-mass end is a result of the curved nature of the $M_{bh}$-$n$ relation, which predicts a maximum mass for the \texttt{SMBH}s that have formed \citep{Graham2007}.
The uncertainties in the $M_{bh}$-$n$ relation dominate at this region, softening the high-mass decrease of the \texttt{BHMF}, and thus increasing the total density of the \texttt{BHMF}
for high masses.
The error region in the \texttt{BHMF} is estimated by the $16^{th}$ and $84^{th}$ percentile of the $10^{5}$ \texttt{MCMC} realizations, similar to the
method used by \citet{Marconi2004}, where the $16^{th}$ and $84^{th}$ percentiles indicate the $1\sigma$ uncertainties on the logarithm of the local \texttt{BHMF}.
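A stripped-down version of this Monte Carlo error propagation, for a single galaxy and perturbing only the three coefficients of the $M_{bh}$-$n$ relation (the full calculation also perturbs the four \texttt{PDF}-fit parameters and works with the whole distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
N_DRAWS = 100_000

def log_mbh(n, c0, c1, c2):
    x = np.log10(n / 3.0)
    return c0 + c1 * x + c2 * x**2

# Perturb the relation's coefficients within their quoted 1-sigma errors.
n_sersic = 4.0  # an example Sersic index
draws = log_mbh(n_sersic,
                rng.normal(7.98, 0.09, N_DRAWS),
                rng.normal(3.70, 0.46, N_DRAWS),
                rng.normal(-3.10, 0.84, N_DRAWS))

lo, med, hi = np.percentile(draws, [16, 50, 84])
```

The $16^{th}$ and $84^{th}$ percentiles of the draws then bracket the $1\sigma$ uncertainty on $\log M_{bh}$, exactly as the percentile bands are read off the $10^{5}$ realizations above.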
In order to deal with the intrinsic scatter in the $M_{bh}$-$P$ relation, \citet{Davis2014} used the method described in Equation 3 in the paper of \citet{Marconi2004}. However, we did
not adopt this method for our early-type \texttt{BHMF}. \citet{Graham2007} discussed that the intrinsic scatter in the $M_{bh}$-$n$ relation is not Gaussian; and the removal of the two highest
mass \texttt{SMBH}s converts the $M_{bh}$-$n$ relation into one with zero intrinsic scatter. In estimating the \texttt{BHMF} derived from the $M_{bh}$-$n$ relation, \citet{Graham2007}
did not apply any correction for the intrinsic scatter, and neither did we. Finally, we obtained our best estimate of the early-type \texttt{BHMF} by merging all
the random realizations of the \texttt{BHMF}s and considering the $16^{th}$, $50^{th}$, and $84^{th}$ percentile levels (see the right panel in Figure 9).
We note that the early-type \texttt{BHMF} is normalized by adding $-0.25$ to the y-axis, which corrects for the overdensity in our selected volume.
In order to estimate the local \texttt{BHMF} for all galaxy types, following Equation 3, we also ran the \texttt{MCMC} realizations of the \texttt{BHMF} for the spiral galaxies, but
this time using the pitch angle distribution that was derived by \citet{Davis2014}. Note that \citet{Davis2014} estimated possible \texttt{SMBH} masses from the $M_{bh}$-$P$ relation by using
the \texttt{MCMC} sampling and then fitted a \texttt{PDF} model to derive the late-type \texttt{BHMF}. In this paper, we used the best-fit \texttt{PDF} model for the pitch angle distribution
derived by \citet{Davis2014}, and then used Equation 3 by adopting the method used by \citet{Marconi2004} to estimate the late-type \texttt{BHMF} by considering the $16^{th}$, $50^{th}$, and $84^{th}$
percentile levels of the \texttt{MCMC} realizations. Similar to our early-type \texttt{MCMC} sampling, we assumed that the input parameters ($\mu$, stdev, skewness, kurtosis) of the \texttt{PDF} fit
and the uncertainties in the $M_{bh}$-$P$ relation are Gaussian distributed around the fiducial values. Then, we merged all random realizations of \texttt{BHMF}s from the early-type and
spiral galaxies. Figure 10 shows our best estimate of the local \texttt{BHMF} obtained by merging all random realizations and considering the $16^{th}$, $50^{th}$, and $84^{th}$ percentile levels.
The late-type \texttt{BHMF} and the early-type \texttt{BHMF} are also shown in Figure 10 to help visualize how the early- and late-type samples are being spliced. We note that our
\texttt{BHMF} estimates are all normalized by adding $-0.25$ to the y-axis to correct for the overdensity in our survey volume. The plotted data for Figure 9 (right) and Figure 10 are listed
for convenience in Table 1.
\subsection{\texttt{SMBH} mass density}
Integrating over the mass functions, we derived the local mass density of \texttt{SMBH}s which gives $1.74^{+0.79}_{-0.60}\times10^{5}$ h$^3_{67.77}$ M$_{\sun}$ Mpc$^{-3}$ for early-type and
$2.04^{+1.16}_{-0.75}\times10^{5}$ h$^3_{67.77}$ M$_{\sun}$ Mpc$^{-3}$ for all-type galaxies. For reference, \citet{Graham2007} and \citet{Vika2009} reported
$3.99\pm1.54\times10^{5}$ h$^{3}_{67.77}$ M$_{\sun}$ Mpc$^{-3}$ and $7.25\pm1.18\times10^5$ h$^3_{67.77}$ M$_{\sun}$ Mpc$^{-3}$ for the \texttt{SMBH} mass density in the local all-type galaxies, respectively.
In terms of the critical density of the universe, we obtained $\Omega_{BH,total}=1.61^{+0.91}_{-0.59}\times10^{-6}$ h$_{67.77}$.
This implies that $0.007^{+0.005}_{-0.003}$ h$^{3}_{67.77}$ percent of the baryons are contained in \texttt{SMBH}s at the centers of galaxies in the local universe (see Table 2).
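The quoted $\Omega_{BH,total}$ can be roughly reproduced from the all-type mass density above using the standard critical density $\rho_{\rm crit} = 2.775\times10^{11}\,h^{2}\,M_{\sun}\,$Mpc$^{-3}$. The following is a back-of-the-envelope check of ours, not part of the paper's pipeline:

```python
# Rough check of the quoted Omega_BH = rho_BH / rho_crit for h = 0.6777.
h = 0.6777
rho_bh = 2.04e5               # Msun / Mpc^3, all-type density quoted above
rho_crit = 2.775e11 * h**2    # standard critical density in Msun / Mpc^3
omega_bh = rho_bh / rho_crit  # comes out close to the quoted 1.61e-6
```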
\section{Discussion}
Figure 11 shows the comparison of our early-type \texttt{BHMF} with previously estimated early-type \texttt{BHMF}s \citep{Graham2007, Marconi2004, Vika2009}.
Our early-type \texttt{BHMF} is expected to be consistent with that of \citet{Graham2007} within the uncertainties, since they are both derived from the same $M_{bh}$-$n$ relation.
The data points are in overall good agreement within their uncertainties. There is an apparent disagreement below $M_{bh} = 10^{6.5}M_{\sun}$ (which corresponds to $n \approx 1.5$) and in the region
$10^{8}M_{\sun} < M_{bh} < 10^{8.75}M_{\sun}$. \citet{Graham2007} defined early-type galaxies as $\frac{B}{T}>0.4$ and used the GIM2D-derived $n$ values \citep{Allen2006}, which were obtained
from the logical filter for S\'{e}rsic $+$ exponential catalog. For galaxies with $n < 1.5$, this logical filter classifies galaxies as pure disk and therefore fits them with a single component.
However, we obtained $1 < n < 1.5$ for seven S0 galaxies but still performed a two-component fit. As a result, our \texttt{BHMF} has higher density for the low mass end ($M_{bh} < 10^{6.5}M_{\sun}$)
and lower density for intermediate masses ($10^{8}M_{\sun} < M_{bh} < 10^{8.75}M_{\sun}$). Differences in the definition of early-type galaxies and the profile fitting methodology may explain the
disagreement between the two \texttt{BHMF}s derived from the same relation. It should also be noted that they used a sample of 1356 early-type galaxies from the Millennium Galaxy Catalogue
(\texttt{MGC}) in the redshift range of $0.013 < z < 0.18$, and they estimated the \texttt{BHMF} by summing the \texttt{SMBH} mass distribution times the associated space-density weights,
i.e., $\phi(M)=\sum W(L)M$, where $W(L)=\phi(L)/N(L)$ is constructed for black holes derived from early-type galaxies (defined as $\frac{B}{T}>0.4$). Although the volume of their sample is
considerably higher than ours, and their sample selection and \texttt{BHMF} estimation method are different from ours, overall their \texttt{BHMF} is consistent with our findings.
We also compared our \texttt{BHMF} with the work of \citet{Vika2009}. They used a sample identical to that of \citet{Graham2007}, except that they also included galaxies with $\mathfrak{M}_{B}>-18$,
while noting that the data from this region are unreliable. They used the linear $M_{bh}$-$L_{Bulge}$ relation reported by \citet{Graham2007Lum} with dust correction applied to their sample. Other than using
the $M_{bh}$-$L_{Bulge}$ relation to derive the \texttt{BHMF}, their \texttt{BHMF} estimation method is identical to that of \citet{Graham2007}. However, their \texttt{BHMF} does not agree
well with that of \citet{Graham2007}, or with ours. They discussed the probable reasons for the discrepancy between theirs and that of \citet{Graham2007} (see Section 3.1 in
\citet{Vika2009}). In addition, \citet{Graham2013} recently revised the $M_{bh}$-$L_{Bulge}$ relation and found a log quadratic nature in the $M_{bh}$-$L_{Bulge}$ relation, which is also
expected from the linear nature of the two distinct $L_{Bulge}$-$n$ relations for elliptical galaxies and bulges, and the curved $M_{bh}$-$n$ relation. This may explain the discrepancy
between the \texttt{BHMF} derived from the linear $M_{bh}$-$L_{Bulge}$ relation and the one derived from the curved $M_{bh}$-$n$ relation.
In addition, we compared our \texttt{BHMF} with that of \citet{Marconi2004}. They estimated the local \texttt{BHMF} for early-type galaxies based on the \texttt{SDSS} sample of \citet{Bernardi2003},
by using the linear $M_{bh}$-$L_{Bulge}$ and $M_{bh}$-$\sigma$ relations reported by \citet{MarconiHunt2003} assuming the same intrinsic dispersion. They also derived the local
\texttt{BHMF} for early-type galaxies obtained from different galaxy luminosity functions, in different photometric bands. All their local \texttt{BHMF}s for early-type
galaxies are in remarkable agreement with ours within the uncertainties. However, they reported a discrepancy at $M_{bh} < 10^{8} M_{\sun}$ between the \texttt{BHMF} derived with the
\citet{Bernardi2003} luminosity function and the others (see Figure 1b in \citet{Marconi2004}). They considered this discrepancy insignificant because this is the region where the
authors adopted different functional forms to fit the data when extrapolating the luminosity functions of early-type galaxies. Our early-type \texttt{BHMF} agrees better with the one derived from the sample of
\citet{Bernardi2003} at $M_{bh} < 10^{8} M_{\sun}$ than the others.
Figure 12 shows the comparison between our \texttt{BHMF} for all galaxy types with those of \citet{Graham2007}, \citet{Vika2009}, and \citet{Marconi2004}.
Overall our \texttt{BHMF} agrees better with that of \citet{Marconi2004} within the uncertainties. It is clear that there is a disagreement between ours and
those of \citet{Graham2007} and \citet{Vika2009} at the low-mass end. Late-type galaxies make the biggest contribution to the \texttt{BHMF} at the low-mass end (see Figure 10),
where the S\'{e}rsic index is more difficult to measure due to the complex nature of these late-type galaxies as we explained earlier in this paper.
It is also worth mentioning that \citet{Vika2009} argued that their \texttt{BHMF} data below
$\log(M_{bh}/M_{\sun}) = 7.67$ (light blue circles in Figure 12) is not reliable because it is derived from galaxies with $\mathfrak{M}_{B}>-18$. Our entire sample consists of galaxies
with $\mathfrak{M}_{B}\leq-19.12$. Moreover, \citet{Davis2014} noted a possible bias in the sample of \citet{Vika2009}, pointing to the small number of late-type galaxies in their considerably
larger sample volume (see Section 7 in \citet{Davis2014}). Although our sample does not contain very faint galaxies $(\mathfrak{M}_{B} > -19.12)$, our \texttt{BHMF} results in a higher
number density for the low-mass end when compared to those of \citet{Vika2009} and \citet{Graham2007}. In addition, other relations ($M_{bh}$-$n$, $M_{bh}$-$L_{Bulge}$, and $M_{bh}$-$\sigma$ relations)
are not as accurate as the $M_{bh}$-$P$ relation in this mass regime \citep{Berrier2013}.
Finally, Figure 13 shows the comparison between our all-type \texttt{BHMF} with more recent \texttt{BHMF} estimates \citep{Shankar2013b, Sijacki2015}.
At the high-mass end, it looks as if our \texttt{BHMF} lies between those of \citet{Marconi2004} and \citet{Shankar2013b}, except for the lower mass \texttt{SMBH}s with $M_{bh} < 10^{7}M_{\sun}$.
\citet{Shankar2013b} derived the local \texttt{BHMF} based on the assumption that all local galaxies follow the early-type $M_{bh}$-$\sigma$ relation reported by \citet{McConnellMa2013}.
As shown in Figure 10, early-type galaxies dominate at the high-mass end, therefore a \texttt{BHMF} derived from a relation for early-type galaxies is expected to be more reliable at the high-mass
end. Observational uncertainties increase for low-mass (late-type) galaxies because measuring $\sigma$ in disc galaxies is not a trivial task, and one needs to properly account for the contribution
from the motion of the disc and bar that is coupled with the bulge. In addition, the majority of low-mass galaxies may host pseudobulges \citep{FisherDrory2011}, and a number of independent groups
claimed that the properties measured for galaxies with pseudobulges do not follow the typical scaling relations (e.g. $M_{bh}$-$\sigma$, $M_{bh}$-$M_{Bulge}$, $M_{bh}$-$L_{Bulge}$),
with \texttt{SMBH} masses being often significantly smaller than what is expected by these relations \citep[e.g.][]{Hu2009, Greene2010, Kormendy2011, Beifiori2012}. Therefore, the \texttt{BHMF}
of \citet{Shankar2013b} (and most of the previous ones) likely represents an upper limit on the true local \texttt{BHMF} \citep{Shankar2013b}. To address this issue, \citet{Shankar2013b} re-estimated
the \texttt{BHMF} with the same relation, but this time the authors made the odd assumption that Sa galaxies do not host any \texttt{SMBH}s. This assumption likely makes this modified \texttt{BHMF}
a lower limit on the local \texttt{BHMF} \citep{Sijacki2015}. Our \texttt{BHMF} indeed stays between the \texttt{BHMF} of \citet{Shankar2013b} and the modified one. In the comparison with
the \texttt{BHMF}s derived from accretion models, the continuity equation models of \citet{Shankar2013a} predict a local \texttt{BHMF} similar to that of \citet{Shankar2013b} when a constant
Eddington ratio is assumed (see Figure 2 of \citet{Shankar2013b}), and they predict a local \texttt{BHMF} very similar to ours in the highest mass regime when the Eddington ratio is assumed to
decrease as a function of cosmological time (see dot-dashed line in Figure 13). Finally, when compared with the Illustris Simulation, which is a large-scale cosmological simulation of
a (106.5 Mpc)$^3$ volume, our result agrees quite well with their \texttt{BHMF}. At higher masses, the simulation estimate is in remarkable agreement with our result.
Similar to the others, disagreements exist at lower masses, and \citet{Sijacki2015} already argued that the simulation results are least reliable at the low-mass end (see Section 3.3 in
\citet{Sijacki2015}). In summary, for the intermediate- and high-mass \texttt{SMBH}s ($M_{bh} > 10^{7} M_{\sun}$), the agreement between our \texttt{BHMF} and previous \texttt{BHMF}
estimates is encouraging. At the low-mass end, inconsistencies exist in the previous work that still need to be resolved, but our work is more in line with the expectations based on
accretion models \citep{Shankar2013a},
favouring steadily decreasing Eddington ratios, and semi-analytic models \citep[e.g.][]{Marulli2008}, which suggest a relatively flat distribution for $M_{bh} \lesssim 10^{8}M_{\sun}$.
Also, our results at the low-mass end of the \texttt{BHMF} are probably consistent with the claims that the majority of low-mass galaxies contain pseudobulges rather than classical
bulges \citep{FisherDrory2011}. This, in turn, may explain why the $M_{bh}$-$P$ relation is tighter than the $M_{bh}$-$\sigma$ relation for disc galaxies \citep{Berrier2013}, and therefore
why our \texttt{BHMF} result shows more promise when compared to expectations from semi-analytical models. This highlights an important need to properly account for the effects of pseudobulges
in disc galaxies when determining the local \texttt{BHMF}.
\section{Conclusion}
The observational simplicity of our approach and the use of the statistically tightest correlations with \texttt{SMBH} mass, which are the S\'{e}rsic index
for E/S0 galaxies and the pitch angle for spiral galaxies, make it straightforward to estimate a local \texttt{BHMF} from imaging data alone within
a limiting luminosity (redshift-independent) distance $D_{L}=25.4$ Mpc $(z=0.00572)$ and a limiting absolute B-band magnitude of $\mathfrak{M}_{B}=-19.12$.
Inconsistencies at the low-mass end of the local \texttt{BHMF} exist in previous works and still need to be resolved. We present our \texttt{BHMF}
as being of particular interest because it is based on a nearly complete sample within set limits and provides reliable data, especially at the low-mass end of the local \texttt{BHMF}.
\section*{Acknowledgements}
This research has made use of the \texttt{NASA}/\texttt{IPAC} Extragalactic Database (\texttt{NED}) which is operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space Administration. The authors wish to thank Joel C. Berrier for useful discussions.
MSS wishes to thank the generous support of the University of Minnesota Duluth and the Fund for Astrophysical Research. We also wish to thank the
anonymous referee whose comments greatly improved the content of this paper.
\subsection{Background}
Several of the most important conjectures in modern number theory, such as the Bloch--Kato and Beilinson conjectures, relate the special values of $L$-functions to arithmetic data. In much of the work on these conjectures to date, an important role has been played by \emph{$p$-adic $L$-functions}: measures or distributions on $\Zp^\times$, for a prime $p$, interpolating the special values of a given complex $L$-function and its twists by Dirichlet characters of $p$-power conductor. Such functions are expected to exist in wide generality, but in practice they can be difficult to construct, and there are large classes of $L$-functions which at present are not known to have a $p$-adic analogue. In this paper, we provide such a construction for a new class of $L$-functions: the \emph{Asai}, or \emph{twisted tensor}, $L$-functions attached to Bianchi modular forms (automorphic forms for $\GLt / F$, where $F$ is imaginary quadratic).
In order to construct our $p$-adic $L$-function, we use the Betti cohomology of a locally symmetric space associated to $\GLt/F$. Work of Ghate \cite{Gha99} shows that the critical values of the Bianchi Asai $L$-function and its twists are computed by certain special elements in Betti cohomology, which can be reinterpreted as pushforwards of cohomology classes for $\GLt/\Q$ associated to Eisenstein series. However, interpolating such classes $p$-adically is not straightforward. The key novelty in our construction is to \emph{simultaneously} vary two parameters: the choice of Eisenstein series, and the choice of embedding of $\GLt/\Q$ in $\GLt/F$. This allows us to reduce the interpolation problem to a (much simpler) compatibility property of the $\GLt/\Q$ Eisenstein series.
Our construction uses techniques that are closely related to those found in \cite{LLZ14} and \cite{LLZ16}, in which Lei, Zerbes and the first author constructed Euler systems (certain compatible families of \'etale cohomology classes) for Rankin--Selberg convolutions of modular forms, and for the Asai representation of a Hilbert modular form over a real quadratic field. In the Bianchi setting, there is no \'etale cohomology to consider, since Bianchi manifolds (the symmetric spaces associated to $\GLt / F$) are not algebraic varieties. However, we show in this article that applying the same techniques in this setting instead gives compatible families of classes in the Betti cohomology of these spaces. Hence the same techniques used to construct an Euler system for $\GLt / F$ when $F$ is real quadratic also give rise to a $p$-adic $L$-function when $F$ is imaginary quadratic.
We hope that these techniques can be extended to build other new $p$-adic $L$-functions as ``Betti counterparts'' of known Euler system constructions; in particular, we are presently exploring applications of this method to the standard $L$-function of (possibly non-self-dual) cohomological automorphic representations of $\operatorname{GL}_3 / \Q$.
\subsubsection*{Note} While working on this project, we learned that Balasubramanyam, Ghate and Vangala have also been working on a construction of $p$-adic Asai $L$-functions for Bianchi cusp forms \cite{BGV}. Their work is independent of ours, although both constructions rely on the same prior work \cite{Gha99} of Ghate.
\subsection{Statement of the main theorem}
Let $\Psi$ be a Bianchi modular form of weight $(k, k)$, for some $k \ge 0$, which is an eigenform for the Hecke operators. We assume that the level $\n$ of $\Psi$ is divisible by all primes $\mathfrak{p} \mid p$ of $F$; this leads to no loss of generality, since we may replace $\Psi$ by a $\mathfrak{p}$-stabilisation if necessary.
The Asai $L$-function of $\Psi$ is defined by\footnote{This should not be confused with the standard $L$-function $L^{\mathrm{std}}(\Psi, s) \defeq \sum_{\m \trianglelefteq \roi_F}c(\m, \Psi) \operatorname{Nm}(\m)^{-s}$.}
\[
L^{\mathrm{As}}(\Psi, s) \defeq L(\varepsilon_{\Psi, \Q}, 2s-2) \sum_{n\geq 1}c(n\roi_F, \Psi) n^{-s},
\]
where $\varepsilon_{\Psi, \Q}$ is the restriction to $\widehat{\Z}^\times$ of the nebentypus character of $\Psi$, and $c(\m, \Psi)$ is the Hecke eigenvalue of $\Psi$ at the ideal $\m$. Up to finitely many Euler factors, it is the $L$-function of the \emph{Asai representation} of $\Psi$, that is, the tensor induction to $\operatorname{Gal}(\overline{\Q}/\Q)$ of the compatible family of $\ell$-adic representations of $\operatorname{Gal}(\overline{F}/F)$ attached to $\Psi$.
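As an informal illustration of how this Dirichlet series is assembled, the following sketch truncates the product above. The character and coefficients used here are placeholders (trivial character and $c(n) = 1$, not actual Hecke data), in which case the product degenerates to $\zeta(2s-2)\zeta(s)$:

```python
# Toy truncation of L(eps, 2s-2) * sum_n c(n O_F) n^{-s}; the inputs below
# are placeholders, NOT actual Hecke eigenvalues or a genuine nebentypus.
def dirichlet_L(chi_vals, s, terms=2000):
    # chi_vals: values of a periodic character on residues 0, ..., q-1
    q = len(chi_vals)
    return sum(chi_vals[n % q] * n ** (-s) for n in range(1, terms))

def asai_truncated(c, eps_vals, s, terms=2000):
    return dirichlet_L(eps_vals, 2 * s - 2, terms) * sum(
        c(n) * n ** (-s) for n in range(1, terms)
    )

# Trivial character and c(n) = 1: the product is zeta(2s-2) * zeta(s),
# so at s = 4 it approximates zeta(6) * zeta(4) ~ 1.10109.
val = asai_truncated(lambda n: 1.0, [1.0], 4.0)
```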
We choose a finite extension $L/\Qp$, containing $F$ and the Hecke eigenvalues of $\Psi$, with ring of integers $R$, and we assume that $\Psi$ is \emph{ordinary} at $p$, i.e.~$c(p\roi_F, \Psi)$ is a unit in $R$.
\begin{theorem}
For any integer $c > 1$ coprime to $6\n$, there exists a $p$-adic measure
\[
\subc L_p^{\mathrm{As}}(\Psi) \in R[[\Zp^\times]]
\]
on $\Zp^\times$ satisfying the following interpolation property: if $\chi$ is a Dirichlet character of conductor $p^r$, and $0 \leq j \leq k$, then we have
\[
\int_{\Zp^\times}\chi(x)x^j \mathrm{d}\subc L_p^{\mathrm{As}}(\Psi)(x) = \left\{\begin{array}{ll}(*)L^{\mathrm{As}}(\Psi,\chibar,j+1) &: \chi(-1)(-1)^j = 1,\\
0 &: \chi(-1)(-1)^j = -1,
\end{array}\right.
\]
where $(*)$ is an explicit factor (which is always non-zero if $r \ge 1$).
\end{theorem}
See Theorem \ref{thm:interpolation} of the main text for a precise statement. Note that $\subc L_p^{\mathrm{As}}$ interpolates all critical values in the left half of the critical strip. The critical values in the right half are related to these via a functional equation, but we do not make this explicit.
It is possible to remove the dependence on $c$ entirely if the restriction to $\Q$ of the character of $\Psi$ is non-trivial and does not have $p$-power conductor. If this condition is not satisfied, then we can only remove $c$ at the cost of passing to a slightly larger space of ``pseudo-measures'', which may be interpreted as meromorphic (rather than analytic) functions on $p$-adic weight space; this is a $p$-adic counterpart of the fact that, for certain eigenforms $\Psi$, the Asai $L$-function and its twists can have poles. The details of this are contained in \S\ref{sec:getting rid of c}.
\subsection{Outline of the construction} We first give a brief outline of the construction in the simplest case, when $\Psi$ is a normalised Bianchi modular eigenform of weight $0$ (i.e.~contributing to cohomology with trivial coefficients) for some imaginary quadratic field $F$.
From $\Psi$ we construct a class $\phi_\Psi^* \in \h^1_{\mathrm{c}}(Y_{F, 1}^*(\n), R)$, where $Y_{F, 1}^*(\n)$ is a Bianchi manifold with appropriate level structure. This cohomology group is a free $R$-module of finite rank, and its $\Zp$-linear dual is $\h^2(Y_{F, 1}^*(\n), R) / (\text{torsion})$. In \cite{Gha99}, Ghate showed that critical values of the Asai $L$-function can be obtained by pairing $\phi_\Psi^*$ with certain classes in this $\h^2$ coming from classical weight 2 Eisenstein series. The main new ideas in the present paper arise in controlling integrality of these Eisenstein classes as the level varies, thus putting them into a compatible family from which we build a $p$-adic measure.
The first input in our construction is a collection of maps, one for each $m \ge 1$ and $a \in \roi_F$, defined by
\[
Y_{\Q, 1}(m^2 N) \labelrightarrow{\iota} Y_{F, 1}^*(m^2 \n) \labelrightarrow{\kappa_{a/m}} Y_{F, 1}^*(\n),
\]
where $\iota$ is the natural embedding, and $\kappa_{a/m}$ is obtained by twisting the natural quotient map by $\smallmatrd{1}{a/m}{0}{1}$. Here $Y_{\Q, 1}(m^2 N)$ is the usual (open) modular curve for $\GLt / \Q$ of level $m^2 N$, where $N = \n \cap \Z$.
The second input is a collection of special cohomology classes (``Betti Eisenstein classes'') ${}_c C_{m^2N} \in \h^1( Y_{\Q, 1}(m^2N), \Z)$. These are constructed using Siegel units. The theory of Siegel units shows that these classes satisfy norm-compatibility properties as $m$ varies, and that their images in de Rham cohomology are related to the Eisenstein series used in \cite{Gha99}. (The factor $c$ refers to an auxiliary choice of integer which serves to kill off denominators from these classes).
With these definitions, we set
\begin{align*}
{}_c \Xi_{m, \n, a} \defeq (\kappa_{a/m} \circ \iota)_*\left( {}_c C_{m^2 N} \right) &\in \h^2(Y_{F, 1}^*(\n), \Z),\\
\subc \Phi_{\n, a}^r \defeq \sum_{t \in (\Z / p^r \Z)} {}_c \Xi_{p^r, \n, at} \otimes [t] &\in \h^2(Y_{F, 1}^*(\n), \Z) \otimes \Zp[(\Z / p^r)^\times].
\end{align*}
The key theorem in our construction (Theorem \ref{norm relation}) is that the classes $\subc \Phi_{\n, a}^r$ satisfy a norm-compatibility relation in $r$. Both the statement of this norm-compatibility relation, and its proof, are very closely analogous to the norm-compatibility relations for Euler system classes in \cite{LLZ14, LLZ16}.
From this, it follows that after renormalising using the Hecke operator $(U_p)_*$ (the transpose of the usual $U_p$) the classes $\subc\Phi_{\n, a}^r$ form an inverse system. In particular, they fit together to define an element
\[
\subc\Phi_{\n,a}^\infty \in e_{\mathrm{ord}, *}\h^2(Y_{F,1}^*(\n),\Zp)\otimes_{\Zp} \Zp[[\Zp^\times]],
\]
where $e_{\mathrm{ord}, *}$ is Hida's ordinary projector associated to $(U_p)_*$. We view this as a bounded measure on $\Zp^\times$ with values in the $(U_p)_*$-ordinary part of $\h^2(Y_{F, 1}^*(\n), \Zp)$. We then define the $p$-adic Asai $L$-function to be the measure
\[
\subc L_p^{\mathrm{As}}(\Psi) \defeq \langle \phi_{\Psi}^*, \subc\Phi_{\n,a}^\infty\rangle \in R[[\Zp^\times]].
\]
That this measure interpolates the critical values of the (complex) Asai $L$-function then follows from \cite{Gha99} together with certain twisting maps (to obtain twisted $L$-values).
The case of higher-weight Bianchi forms (contributing to cohomology with non-constant coefficients) is similar, although unavoidably a little more technical. Suppose $\Psi$ is such a form of weight $(k,k)$. Using \cite{Gha99} and the same twisting methods as in the weight $(0,0)$ case, one can prove algebraicity for the critical value $L^{\mathrm{As}}(\Psi,\chi,j+1)$, where $0 \leq j \leq k$ and $\chi(-1)(-1)^j = 1$, by pairing with classes in $\h^2$ arising from Eisenstein series of weight $2k-2j+2$. For each such $j$, we define a compatible system of cohomology classes with coefficients in a suitable algebraic representation of $\GLt / F$ by applying a $p$-adic moment map to our Siegel-unit classes, obtaining classes $\subc\Phi_{\n,a}^{\infty,j}$ analogous to $\subc\Phi_{\n,a}^\infty$ in the weight $(0,0)$ construction. Again, this is a ``Betti analogue'' of a construction for \'etale cohomology which is familiar in the theory of Euler systems \cite{kings15,KLZ17}.
Pairing $\phi_\Psi^*$ with $\subc\Phi_{\n,a}^{\infty,j}$ gives a $p$-adic measure on $\Zp^\times$, as above. Using Kings' theory of $p$-adic interpolation of polylogarithms, it turns out that after a twist by the norm this measure is actually independent of $j$, and we define the $p$-adic Asai $L$-function $\subc L_p^{\mathrm{As}}(\Psi)$ to be the measure for $j=0$. Moreover, the class $\subc\Phi_{\n,a}^{\infty,j}$ can be explicitly related to weight $2k-2j+2$ Eisenstein series, so that integrating the function $\chi(x)x^j$ against $\subc L_p^{\mathrm{As}}(\Psi)$ computes the value $L^{\mathrm{As}}(\Psi,\chi,j+1)$ (under the parity condition above).
\subsection{Acknowledgements}
The authors would like to thank Aurel Page for suggesting the proof of Proposition \ref{prop:goodinclusion}; and the two anonymous referees, who provided valuable comments and corrections on an earlier draft of the paper.
\section{Preliminaries and notation}\label{preliminaries}
\subsection{Basic notation}
We fix notation for a general number field $K$, which will either be $\Q$ or an imaginary quadratic field. (We will generally denote the imaginary quadratic field by $F$ to distinguish it from the rationals.) Denote the ring of integers by $\roi_K$, the adele ring by $\A_K$ and the finite adeles by $\A_K^f$. We let $\roikhat \defeq \widehat{\Z}\otimes_{\Z}\roi_K$ be the finite integral adeles, and $K^{\times +}$ the totally-positive elements of $K^\times$ (so that $K^{\times +} = K^\times$ for $K = F$).
Let $\uhp \defeq \{z \in \C: \mathrm{Im}(z)>0\}$ be the usual upper half-plane, with $\GLt(\R)_+$ (the group of $2 \times 2$ matrices of positive determinant) acting by M\"obius transformations in the usual way; we extend this to all of $\GLt(\R)$ by letting $\smallmatrd{-1}{}{}1$ act via $x + iy \mapsto -x + iy$.
Define the \emph{upper half-space} to be
\[ \uhs \defeq \{(z,t)\in \C\times\R_{>0}\}, \]
with $\GLt(\C)$ acting via
\[ \smallmatrd{a}{b}{c}{d} \cdot (z, t) = \left( \frac{(az + b)\overline{(cz + d)} + a\bar{c} t^2}{|cz + d|^2 + |c|^2 t^2}, \frac{ |ad-bc|t}{|cz + d|^2 + |c|^2 t^2}\right).\]
We embed $\uhp$ in $\uhs$ via $x + iy \mapsto (x, y)$, which is compatible with the actions of $\GLt(\R)$ on both sides.
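The displayed formula defines a genuine left action of $\GLt(\C)$ on $\uhs$ (it is invariant under $g \mapsto \lambda g$, so it agrees with the standard quaternionic M\"obius action of matrices with positive real determinant). As an informal sanity check on the conventions, the composition law can be verified numerically; the following sketch is our own illustrative code, with arbitrary test matrices:

```python
def act(g, P):
    """Action of g = ((a, b), (c, d)) in GL_2(C) on P = (z, t) in H^3,
    implementing the displayed formula."""
    (a, b), (c, d) = g
    z, t = P
    N = abs(c * z + d) ** 2 + abs(c) ** 2 * t ** 2  # common denominator
    z_new = ((a * z + b) * (c * z + d).conjugate()
             + a * c.conjugate() * t ** 2) / N
    t_new = abs(a * d - b * c) * t / N
    return (z_new, t_new)

def matmul(g, h):
    (a, b), (c, d) = g
    (e, f), (u, v) = h
    return ((a * e + b * u, a * f + b * v), (c * e + d * u, c * f + d * v))

# Check the group-action law (g h).P = g.(h.P) on sample data,
# including a matrix with non-real determinant.
g = ((1 + 2j, 3j), (1j, 2 - 1j))
h = ((0, -1), (1, 1 + 1j))
P = (0.3 - 0.7j, 1.25)

lhs = act(matmul(g, h), P)
rhs = act(g, act(h, P))
assert abs(lhs[0] - rhs[0]) < 1e-9 and abs(lhs[1] - rhs[1]) < 1e-9
```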
Throughout, $p$ will denote a rational prime. Let $F$ be an imaginary quadratic field of discriminant $-D$, with different $\mathcal{D} = (\sqrt{-D})$, and fix a choice of $\sqrt{-D}$ in $\C$. Let $\n \subset \roi_F$ be an ideal of $F$, divisible by all the primes of $F$ above $p$; this will be the level of our Bianchi modular form. We assume throughout that $\n$ is small enough to ensure that the relevant locally symmetric space attached to $\n$ is smooth (see Proposition \ref{prop:goodinclusion}). Let $N$ be the natural number with $(N) = \Z\cap \n$ as ideals in $\Z$ (noting that $p \mid N$).
For an integer $n \ge 0$ and a ring $R$, define $V_n^{(r)}(R)$ to be the space of homogeneous polynomials of degree $n$ in two variables $X, Y$ with coefficients in $R$, with $\GLt(R)$ acting on the right via $(f \mid \gamma)(X, Y) = f(aX + bY, cX + dY)$. Similarly, we write $V_n^{(\ell)}(R)$ for the same space with $\GLt$ acting on the left, so $(\gamma \cdot f)(X, Y) = f(aX + cY, bX + dY)$.
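To make the convention for the right action concrete, the law $(f \mid g) \mid h = f \mid (gh)$ can be checked on a sample homogeneous polynomial; the following is an illustrative sketch of ours, not part of the development:

```python
def r_act(f, g):
    """Right action (f | g)(X, Y) = f(aX + bY, cX + dY) for g = ((a,b),(c,d))."""
    (a, b), (c, d) = g
    return lambda X, Y: f(a * X + b * Y, c * X + d * Y)

def matmul(g, h):
    (a, b), (c, d) = g
    (e, f_), (u, v) = h
    return ((a * e + b * u, a * f_ + b * v), (c * e + d * u, c * f_ + d * v))

# f(X, Y) = X^2 Y, a degree-3 homogeneous polynomial.
f = lambda X, Y: X ** 2 * Y
g = ((1, 2), (3, 4))
h = ((2, -1), (0, 1))

# Right-action law (f | g) | h = f | (gh), checked at a sample point.
lhs = r_act(r_act(f, g), h)(5, 7)
rhs = r_act(f, matmul(g, h))(5, 7)
assert lhs == rhs
```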
\subsection{Locally symmetric spaces}
\begin{definition}\label{def:G*}
Let $G$ be the algebraic group $\mathrm{Res}_{F/\Q}\GLt$ over $\Q$, and let $G^*$ be the subgroup
$G \times_{D}\mathbb{G}_m$, where $D \defeq\mathrm{Res}_{F/\Q}\mathbb{G}_m$ and the map $G \rightarrow D$ is determinant.
\end{definition}
(Compare \cite[Definition 2.1.1]{LLZ16} in the totally-real case.)
\begin{definition}
We define locally symmetric spaces attached to the groups $\GLt$, $G$ and $G^*$ as follows:
\begin{itemize}
\item If $U \subset \GLt(\A_\Q^f)$ is an open compact subgroup, we set
\[ Y_\Q(U) \defeq \GLt(\Q)_+\backslash \left[\GLt(\A_\Q^f)\times \uhp \right]/U,\]
where $\GLt(\Q)_+$ acts from the left on both factors in the usual way, and $U$ acts on the right of $\GLt(\A_\Q^f)$.
\item If $U \subset G(\A_\Q^f) = \GLt(\A_F^f)$ is open compact, we set
\[ Y_F(U) \defeq \GLt(F)\backslash \left[\GLt(\A_F^f)\times \uhs \right]/U.\]
\item If $U \subset G^*(\A_\Q^f) = \left\{ g \in \GLt(\A_F^f) : \det(g) \in (\A_\Q^f)^\times\right\}$ is open compact, we set
\[ Y_F^*(U) \defeq G^*(F)_+ \backslash \left[G^*(\A_\Q^f) \times \uhs\right] / U, \]
where $G^*(F)_+ = \{ g \in G^*(F) : \det(g) > 0\}$ is the intersection of $G^*(F)$ with the identity component of $G^*(\R)$.
\end{itemize}
\end{definition}
Each of these spaces has finitely many connected components, each of which is the quotient of $\uhp$ or $\uhs$ by a discrete subgroup of $\operatorname{PSL}_2(\R)$ (resp.~$\operatorname{PGL}_2(\C)$). If $U$ is sufficiently small, these discrete subgroups act freely, so in particular the quotient is a manifold.
\begin{definition}
Let $K$ be either $\Q$ or $F$, and $\m$, $\n$, $\aaa$ be ideals in $\roi_K$. We define:
\begin{itemize}
\item[(i)] $U_K(\m,\n) \defeq \{\gamma \in \GLt(\roikhat): \gamma \equiv I \newmod{\smallmatrd{\m}{\m}{\n}{\n}}\},$
\item[(ii)] $U_K(\m(\aaa),\n) \defeq \{\gamma \in \GLt(\roikhat): \gamma \equiv I \newmod{\smallmatrd{\m}{\m\aaa}{\n}{\n}}\}.$
\end{itemize}
We write $Y_K(\m, \n) \defeq Y_K(U_K(\m, \n))$ and similarly $Y_K(\m(\aaa), \n)$. We will be particularly interested in $Y_K(\m, \n)$ for $\m = (1)$, which we abbreviate as $Y_{K, 1}(\n)$.
In the case $K = F$, we write $U_F^*(\m, \n)$, $U_F^*(\m(\aaa), \n)$, $U_{F, 1}^*(\n)$ for the intersections of the above groups with $G^*$.
\end{definition}
\begin{example}
The following three locally symmetric spaces are of particular importance in the sequel, so here we describe them explicitly (and record some of their other basic properties) for reference later in the paper.
\begin{enumerate}[(i)]
\item $Y_{\Q,1}(N)$ is the usual (open) modular curve of level $\Gamma_1(N)$. It has one connected component, isomorphic to $\Gamma_{1}(N)\backslash\uhp$.
\item The space $Y_{F,1}^*(\n)$ also has a single connected component, isomorphic to $\Gamma_{F,1}^*(\n)\backslash\uhs$, where
\begin{align*}
\Gamma_{F,1}^*(\n) &\defeq G^*(F)_+ \cap U^*_{F,1}(\n)\\
&= \left\{ \smallmatrd{a}{b}{c}{d} \in \SLt(\roi_F): c = 0, a = d = 1 \bmod \n\right\}.
\end{align*}
\item Since $\det(U_{F,1}(\n)) = \roihat^\times$, the space $Y_{F,1}(\n)$ has $h_F$ connected components, where $h_F$ is the class number of $F$. The identity component is isomorphic to $\Gamma_{F,1}(\n)\backslash\uhs$, where
\[\Gamma_{F,1}(\n) \defeq \GLt(F) \cap U_{F,1}(\n).\]
\end{enumerate}
\end{example}
If $N = \n\cap \Z$, then there are natural maps
\[Y_{\Q,1}(N) \labelrightarrow{\iota} Y_{F,1}^*(\n) \labelrightarrow{\jmath} Y_{F,1}(\n)\]
induced by the natural maps $\uhp \hookrightarrow \uhs$ and $\GLt(\A_\Q^f) \hookrightarrow G^*(\A_\Q^f) \hookrightarrow G(\A_\Q^f)$. The map $\iota$ is injective in most cases:
\begin{proposition}
\label{prop:goodinclusion}
If $\n$ is divisible by some integer $q \ge 4$, then $Y_{F, 1}^*(\n)$ is a smooth manifold, and
\[ \iota: Y_{\Q, 1}(N) \hookrightarrow Y_{F, 1}^*(\n) \]
is a closed immersion.
\end{proposition}
\begin{proof}
First, the smoothness assertion. It suffices to prove that $\Gamma_{F, 1}^*(\n)$ has no non-trivial torsion elements. Since $\Gamma_{F, 1}^*(\n)$ is a subgroup of $\SLt(\roi_F)$, any torsion element $\gamma$ must have eigenvalues $\zeta, \zeta^{-1}$, where $\zeta$ is a non-trivial root of unity defined over an extension of $F$ of degree at most 2. Since $\zeta + \zeta^{-1} = a + d \equiv 2 \bmod \n$, we conclude that $\n$ divides $(\zeta + \zeta^{-1} - 2)$. A case-by-case check shows that this implies $\zeta$ has order $2$, $3$, $4$ or $6$, and that $\n$ must contain one of the integers $1$, $2$, $3$; this is impossible, since $\n$ is divisible by an integer $q \ge 4$.
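To spell out the case-by-case check for the orders that do occur: since $\zeta + \zeta^{-1} \equiv 2 \bmod \n$, we find
\begin{align*}
\operatorname{ord}(\zeta) = 3 &\implies \zeta + \zeta^{-1} = -1 \implies 3 \in \n, \\
\operatorname{ord}(\zeta) = 4 &\implies \zeta + \zeta^{-1} = 0 \implies 2 \in \n, \\
\operatorname{ord}(\zeta) = 6 &\implies \zeta + \zeta^{-1} = 1 \implies 1 \in \n,
\end{align*}
while $\operatorname{ord}(\zeta) = 2$ forces $\gamma = -I$ (both eigenvalues being $-1$ and $\gamma$ of finite order), whence $2 \in \n$.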
Let us now prove the injectivity assertion. Let $z, z' \in \uhp$ be such that $\gamma z = z'$, for some $\gamma \in \Gamma_{F, 1}^*(\n)$. Since $\uhp$ is fixed pointwise by the complex conjugation automorphism of $\uhs$, applying this automorphism gives $\bar\gamma z = z'$, and hence $\gamma^{-1}\bar\gamma z = z$; so either $\gamma^{-1}\bar\gamma = \mathrm{id}$, or $\gamma^{-1}\bar\gamma$ is a non-trivial torsion element in $\SLt(\roi_F)$. Since $\gamma$ is upper-triangular modulo some integer $q \ge 4$, the same is true of $\bar\gamma$ and thus also of $\gamma^{-1}\bar\gamma$; but we have just seen that $\Gamma_{F, 1}^*(q)$ has no non-trivial torsion elements for $q \ge 4$.
We can therefore conclude that $\gamma^{-1}\bar\gamma = \mathrm{id}$, in other words that $\gamma \in \Gamma_{F, 1}^*(\n) \cap \SLt(\Z) = \Gamma_{\Q, 1}(N)$. Hence $z = z'$ as elements of $Y_{\Q, 1}(N)$.
\end{proof}
\begin{remark-numbered}
Henceforth, we will always assume that $\n$ is divisible by such a $q$, or, more generally, is small enough to avoid the possibility that these spaces are (non-smooth) orbifolds.
\end{remark-numbered}
In contrast, the composition $\jmath\circ\iota$ is \emph{never} injective, since $\smallmatrd{-1}{0}{0}{1} \in \Gamma_{F,1}(\n)$ preserves the image of $\uhp$ in $\uhs$, and acts on $\uhp$ by $x+iy \mapsto -x + iy$; so the points $-x +iy$ and $x + iy$ of $Y_{\Q, 1}(N)$ (which are distinct for generic $x, y$) are identified when mapped to $Y_{F,1}(\n)$. This failure of injectivity is a key reason for introducing the space $Y_{F,1}^*(\n)$. In fact, one can see directly that:
\begin{proposition}\label{prop:map-j}
The map $\jmath : Y_{F,1}^*(\n) \rightarrow Y_{F,1}(\n)$ has image equal to the identity component $\Gamma_{F,1}\backslash\uhs$. Its fibres are the orbits of the finite group $\left\{\smallmatrd{\epsilon}{0}{0}{1} \hspace{2pt} : \hspace{2pt} \epsilon \in \roi_F^\times\right\}$ acting on $\Gamma^*_{F, 1} \backslash \uhs$.
\end{proposition}
\subsection{Hecke correspondences}
\label{hecke operators}
We can define Hecke correspondences on the symmetric spaces $Y_{K, 1}(\n)$, for $\n$ an ideal of $\roi_K$, as follows. Firstly, we have diamond operators $\langle w \rangle$ for every $w \in (\roi_K / \n)^\times$, which define an action of $(\roi_K / \n)^\times$ on $Y_{K,1}(\n)$; this even extends to an action of the narrow ray class group modulo $\n$, although we shall not use this.
Secondly, let $\aaa$ be a square-free ideal of $\roi_K$. Consider the diagram
\[
\begin{diagram}
&& Y_K(1(\aaa),\n) &&\\
&\ldTo^{\pi_2} && \rdTo^{\pi_1}&\\
Y_{K, 1}(\n)&&&& Y_{K, 1}(\n),
\end{diagram}
\]
where $\pi_1$ is the natural projection map, and $\pi_2$ is the `twisted' map given by the right-translation action of $\smallmatrd{\varpi}{}{}{1}$ on $\GLt(\A^f_K)$, where $\varpi \in \roikhat$ is any integral ad\`ele which generates the ideal $\aaa \roikhat$. (If $K = \Q$ and $\varpi = a$ is the positive integer generating $\aaa$, then this map $\pi_2$ corresponds to $z \mapsto z/a$ on $\uhp$.) We then define
\[(T_{\aaa})_* \defeq (\pi_2)_* \circ (\pi_1)^*\]
\[(T_{\aaa})^* \defeq (\pi_1)_* \circ (\pi_2)^*\]
as correspondences on $Y_{K, 1}(\n)$. When $\aaa$ divides the level $\n$, we denote these operators instead by $(U_\aaa)_*$ and $(U_\aaa)^*$. The definition may be extended to non-squarefree $\aaa$ in the usual way.
The same construction is valid for the more general symmetric spaces $Y_K(\m, \n)$, but it is no longer independent of the choice of generator $\varpi$ of $\aaa$ (it depends on the class of $\varpi$ modulo $1 + \m \roikhat$). We will only use this in the case where $\aaa$ is generated by a positive integer $a$, in which case we of course take $\varpi = a$. With this convention, the Hecke operators $(T_a)_*$ and $(T_a)^*$ for positive integers $a$ also make sense on the ``hybrid'' symmetric spaces $Y_F^*(\m, \n)$.
\begin{remark-numbered}
The maps $(T_\aaa)^*$ and $(U_\aaa)^*$ are perhaps more familiar, as their action on automorphic forms is given by simple formulae in terms of Fourier expansions, as we shall recall below. The lower-star versions $(T_\aaa)_*$ and $(U_\aaa)_*$ are the transpose of the upper-star versions with respect to Poincar\'e duality; this duality explains the key role played by $(U_p)_*$ in our norm relation computations.
\end{remark-numbered}
\subsection{Bianchi modular forms}
\label{bianchi modular forms}
We briefly recall the definition of Bianchi modular forms; for further details following our exact conventions, see \cite[\S 1]{Wil17}, or \cite{Gha99} for a more general treatment. As above, let $F$ be an imaginary quadratic field, and $U$ an open compact subgroup of $\GLt(\A^f_F)$. Then, for any $k \ge 0$, there is a finite-dimensional $\C$-vector space $S_{k,k}(U)$ of \emph{Bianchi cusp forms} of weight $(k, k)$ and level $U$, which are functions
\[ \Psi: \GLt(F) \backslash \GLt(\A_F) / U \longrightarrow V_{2k+2}^{(r)}(\C) \]
transforming appropriately under right-translation by the group $\C^\times \cdot \SUt(\C)$, and satisfying suitable harmonicity and growth conditions.
These forms can be described by an appropriate analogue of $q$-expansions (cf.~\cite[\S 1.2]{Wil17}). Let $e_F : \A_F/F \rightarrow \C^\times$ denote the unique continuous character whose restriction to $F \otimes \R \cong \C$ is
\[ x_\infty \longmapsto e^{2\pi i\mathrm{Tr}_{F/\Q}(x_\infty)}, \]
and let $W_\infty: \C^\times \to V_{2k+2}(\C)$ be the real-analytic function defined in 1.2.1(v) of \emph{op.cit.} (involving the Bessel functions $K_n$).
\begin{theorem}
Let $\Psi$ be a Bianchi modular form of weight $(k,k)$ and level $U$. Then there is a \emph{Fourier--Whittaker expansion}
\[\Psi\left(\matrd{\mathbf{y}}{\mathbf{x}}{0}{1}\right) = |\mathbf{y}|_{\A_F}\sum_{\zeta \in F^\times}W_f(\zeta \mathbf{y}_{f}, \Psi) W_{\infty}(\zeta \mathbf{y}_\infty) e_F(\zeta \mathbf{x}),\]
where $W_{f}(-, \Psi)$, the ``Kirillov function'' of $\Psi$, is a locally constant function on $(\A_{F}^f)^\times$, with support contained in a compact subset of $\A_F^f$.
\end{theorem}
If $U = U_{F, 1}(\n)$ for some $\n$, then $W_f(-, \Psi)$ is supported in $\mathcal{D}^{-1}\roihat$, where $\mathcal{D}$ denotes the different of $F / \Q$. For $\m$ an ideal of $\roi_F$, we define a coefficient $c(\m, \Psi)$ as the value $W_{f}(\mathbf{y}_f, \Psi)$ for any $\mathbf{y}_f$ generating the fractional ideal $\mathcal{D}^{-1} \m \roihat$; this is independent of the choice of $\mathbf{y}_f$.
Exactly as for elliptic modular forms, the space $S_{k, k}(U_{F, 1}(\n))$ has an action of (commuting) Hecke operators $(T_\m)^*$ for all ideals $\m$; and if $\Psi$ is an eigenvector for all these operators, normalised such that $c(1, \Psi) = 1$, then the eigenvalue of the $\m$-th Hecke operator on $\Psi$ is $c(\m, \Psi)$. Moreover, the space $S_{k, k}(U_{F, 1}(\n))$ is a direct sum of ``new'' and ``old'' parts, and the Hecke operators $(T_\m)^*$ are simultaneously diagonalisable on the new part.
\subsection{The Asai $L$-function}
We now define the principal object of study in this paper, the Asai $L$-function of a Bianchi eigenform.
The space $S_{k, k}(U_{F, 1}(\n))$ has an action of diamond operators $\langle d \rangle$, for all $d \in (\roi_F / \n)^\times$; and on any Hecke eigenform $\Psi$ these act via a character $\varepsilon_{\Psi}: (\roi_F / \n)^\times \to \C^\times$. Let $\varepsilon_{\Psi, \Q}$ denote the restriction of this character to $(\Z / N\Z)^\times$.
\begin{definition}
\label{def:asaiLS}
Let $\Psi$ be a normalized eigenform in $S_{k,k}(U_{F, 1}(\n))$, and $\chi$ a Dirichlet character of conductor $m$. Define the \emph{Asai $L$-function} of $\Psi$ by
\[
L^{\mathrm{As}}(\Psi, \chi, s) \defeq L^{(mN)}(\chi^2\varepsilon_{\Psi, \Q},2s-2k-2) \cdot \sum_{\substack{n\geq 1 \\ (m, n) = 1}}c(n\roi_F,\Psi)\chi(n)n^{-s},
\]
where $N = \n \cap \Z$ and $L^{(mN)}(-,s)$ is the Dirichlet $L$-function with its Euler factors at primes dividing $mN$ removed. If $\chi$ is trivial we write simply $L^{\mathrm{As}}(\Psi,s)$.
\end{definition}
This Dirichlet series is absolutely convergent for $\Re(s)$ sufficiently large\footnote{Since the eigenvalues $c(\mathfrak{l}, \Psi)$ for $\mathfrak{l}$ prime satisfy $|c(\mathfrak{l}, \Psi)| \le 2N_{F/\Q}(\mathfrak{l})^{k/2 + 1}$ \cite[\S 11]{jacquetlanglands}, it suffices to take $\Re(s) > k+3$. This bound is not optimal, but it suffices for our purposes.}, and has meromorphic continuation to all $s \in \C$. (This is proved in \cite{Fli88} or \cite{Gha99} for $\chi$ trivial, and the result for general $\chi$ can be proved similarly.) For $s$ in the half-plane of convergence, it can be written as an Euler product
\[ L^{\mathrm{As}}(\Psi, \chi, s) = \prod_{\text{$\ell$ prime}} L_\ell^{\mathrm{As}}(\Psi, \chi, s), \]
where $L_\ell^{\mathrm{As}}(\Psi, \chi, s)$ depends only on $\chi(\ell)$ and the Hecke and diamond eigenvalues of $\Psi$ at the primes above $\ell$. If $\ell \mid m$ then $L_\ell^{\mathrm{As}}(\Psi, \chi, s) = 1$; if $\ell \nmid m$, then a case-by-case check shows that $L_\ell^{\mathrm{As}}(\Psi, \chi, s)$ has the form $P_\ell(\Psi, \ell^{-s} \chi(\ell))^{-1}$, where $P_\ell(\Psi, -)$ is a polynomial of degree $\le 4$.
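To illustrate the shape of these local factors, suppose $\ell \nmid mN$ and $\ell$ is unramified in $F$, and write $X = \chi(\ell)\ell^{-s}$. If $\ell = \mathfrak{l}\bar{\mathfrak{l}}$ is split, and $\{\alpha_1, \alpha_2\}$, $\{\beta_1, \beta_2\}$ are the (suitably normalised) Satake parameters of $\Pi_{\mathfrak{l}}$ and $\Pi_{\bar{\mathfrak{l}}}$ respectively, then
\[ L_\ell^{\mathrm{As}}(\Psi, \chi, s) = \prod_{i, j \in \{1, 2\}} \left(1 - \alpha_i \beta_j X\right)^{-1}, \]
while if $\ell$ is inert and $\{\alpha, \beta\}$ are the Satake parameters of $\Pi_{\ell\roi_F}$, then
\[ L_\ell^{\mathrm{As}}(\Psi, \chi, s) = \left[(1 - \alpha X)(1 - \beta X)(1 - \alpha\beta X^2)\right]^{-1}. \]
These are the familiar shapes of the Asai Euler factors at good primes (cf.~\cite{Fli88}, \cite{Gha99}).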
If $\Psi$ is a new eigenform, and $\chi$ is trivial, then $L^{\mathrm{As}}(\Psi, \chi, s)$ coincides with the function denoted by $G(s, f)$ in \cite{Gha99}.
\begin{lemma}
Suppose $\Psi$ is a normalised eigenform of level $\n$ coprime to $p$, $\n' = \n \prod_{\mathfrak{p} \mid p} \mathfrak{p}$, and $\Psi_\alpha$ is a normalised eigenform of level $\n'$ such that $c(\m, \Psi_\alpha) = c(\m, \Psi)$ for all $\m$ coprime to $p$. Let $\alpha = c(p\roi_F, \Psi_\alpha)$. Then we have
\[
L^{\mathrm{As}}(\Psi_\alpha, s) = (1 - \alpha p^{-s})^{-1} L^{\mathrm{As}, (p)}(\Psi, s),
\]
and $L^{\mathrm{As}}(\Psi_\alpha, \chi, s) = L^{\mathrm{As}}(\Psi, \chi, s)$ for every non-trivial $\chi$ of $p$-power conductor.\qed
\end{lemma}
\subsection{The primitive Asai $L$-function}
The ``imprimitive'' Asai $L$-function of Definition \ref{def:asaiLS} is closely related to another $L$-function, the ``primitive'' Asai $L$-function. This is slightly more difficult to define, but in many ways more fundamental.
\subsubsection*{Automorphic construction:} The Langlands $L$-group of $G = \operatorname{Res}_{F/\Q}\GLt$ is the semidirect product ${}^L G = (\GLt(\C) \times \GLt(\C)) \rtimes \operatorname{Gal}(\overline{\Q}/\Q)$, with the Galois group acting on $\GLt \times \GLt$ by permuting the factors via its quotient $\operatorname{Gal}(F/\Q)$. There is a 4-dimensional representation of ${}^L G$, the \emph{Asai representation} $r^{\mathrm{As}}$, whose restriction to $\GLt \times \GLt$ is the tensor product map $\GLt \times \GLt \to \operatorname{GL}_4$ (see e.g.~\cite[\S 0]{Fli88}).
\begin{remark-numbered}
The representation $r^{\mathrm{As}}$ of ${}^L G$ factors through the quotient ${}^L G^*$, which explains the prominent role the group $G^*$ plays in our constructions.
\end{remark-numbered}
If $\Pi$ is the automorphic representation of $\GLt(\A_F)$ generated by an eigenform $\Psi$ as above, then for each rational prime $\ell$, the local factors $\Pi_v$ for primes $v \mid \ell$ of $F$ determine (via the local Langlands correspondence for $\GLt$) a Weil--Deligne representation $w_{\Pi, \ell}$ of the Weil group of $\Ql$ with values in ${}^L G$. Moreover, a Dirichlet character $\chi$ modulo $m$ determines uniquely a character $\chi_\A = \prod_\ell \chi_\ell$ of $\A_\Q^\times / \Q^\times$ such that for $\ell \nmid m$, $\chi_\ell$ is unramified and maps a uniformiser to $\chi(\ell)$. We may interpret $\chi_\ell$ also as a character of the Weil group of $\Ql$, via the Artin reciprocity map (normalised to send uniformisers to geometric Frobenius elements).
\begin{definition}
\label{def:local-lfactor}
We let $L_\ell^{\mathrm{As}}(\Pi, \chi, s)$ denote the local $L$-factor of the 4-dimensional Weil-Deligne representation $(r^{\mathrm{As}} \circ w_{\Pi, \ell}) \otimes \chi_\ell$, where $r^{\mathrm{As}}$ is the 4-dimensional tensor product representation of ${}^L G$. We define the \emph{primitive Asai $L$-function} of $\Pi$ by
\[
L^{\mathrm{As}}(\Pi, \chi, s) = \prod_{\text{$\ell$ prime}} L_\ell^{\mathrm{As}}(\Pi, \chi, s).
\]
If $\chi$ is trivial we write simply $L_\ell^{\mathrm{As}}(\Pi, s)$ and $L^{\mathrm{As}}(\Pi, s)$.
\end{definition}
\begin{remark-numbered} \
\begin{enumerate}[(i)]
\item Theorem 1.4 of \cite{Ram02} shows that $L^{\mathrm{As}}(\Pi, \chi, s)$ is an automorphic $L$-function: more precisely, there exists an automorphic representation $\mathrm{As}(\Pi)$ of $\operatorname{GL}_4(\A_{\Q})$ such that $L^{\mathrm{As}}(\Pi, \chi, s)$ coincides with the $L$-function of $\mathrm{As}(\Pi) \otimes \chi$, for every Dirichlet character $\chi$.
\item The $L$-factor $L_\ell^{\mathrm{As}}(\Pi, \chi, s)$ can also be defined as the ``lowest common denominator'' of the values of a local zeta-integral, as in \cite{Fli88}. This alternative definition is known to be equivalent to Definition \ref{def:local-lfactor}: if $\ell$ is split in $F$, this equivalence is one of the defining properties of the local Langlands correspondence (see condition (2) in the introduction of \cite{harristaylor01}); if $\ell$ is inert or ramified in $F$ the equivalence is given by \cite[Theorem 4.2]{matringe10}.
\end{enumerate}
\end{remark-numbered}
\subsubsection*{Galois representations:} This primitive $L$-function $L^{\mathrm{As}}(\Pi, \chi, s)$ can also be understood in terms of Galois representations, as follows. As we shall see in the next section, for an eigenform $\Psi$ as above, the coefficients $c(\n, \Psi)$ all lie in a number field $E \subset \C$. If $\mathfrak{p}$ is a prime of $E$, above some rational prime $p$, there is a unique semisimple 2-dimensional $E_\mathfrak{p}$-linear representation $V_{\mathfrak{p}}(\Pi)$ of $\operatorname{Gal}(\overline{\Q} / F)$, unramified outside $\n p$, on which geometric Frobenius at a prime $\mathfrak{l} \nmid \n p$ of $F$ has trace $c(\mathfrak{l}, \Psi)$. Then we may form the \emph{tensor induction} $\operatorname{As} V_{\mathfrak{p}}(\Pi)$, which is a 4-dimensional representation of $\operatorname{Gal}(\overline{\Q} / \Q)$, isomorphic as a representation of $\operatorname{Gal}(\overline{\Q} / F)$ to the tensor product of $V_{\mathfrak{p}}(\Pi)$ and its Galois conjugate.
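Concretely, the tensor induction can be realised as follows. Write $V = V_{\mathfrak{p}}(\Pi)$, with action $\rho$ of $\operatorname{Gal}(\overline{\Q}/F)$, and fix any $\tau \in \operatorname{Gal}(\overline{\Q}/\Q)$ restricting to the non-trivial element of $\operatorname{Gal}(F/\Q)$. Then $\operatorname{As} V_{\mathfrak{p}}(\Pi)$ is $V \otimes V$ as an $E_\mathfrak{p}$-vector space, with
\[ g \cdot (v \otimes w) = \rho(g)v \otimes \rho(\tau^{-1} g \tau)w \quad \text{for } g \in \operatorname{Gal}(\overline{\Q}/F), \qquad \tau \cdot (v \otimes w) = \rho(\tau^2) w \otimes v. \]
One checks directly that this defines an action of the whole of $\operatorname{Gal}(\overline{\Q}/\Q)$, whose isomorphism class is independent of the choice of $\tau$.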
The local Euler factor at $\ell$ of this Galois representation is given by
\begin{equation}
\label{eq:localEF}
L_\ell(\operatorname{As} V_{\mathfrak{p}}(\Pi) \otimes \chi, s) \coloneqq \det\left( 1 - \ell^{-s} \operatorname{Frob}_{\ell}^{-1}: \left(\operatorname{As} V_{\mathfrak{p}}(\Pi) \otimes \chi\right)^{I_\ell} \right)^{-1},
\end{equation}
where $I_\ell$ is the inertia group at $\ell$, $\operatorname{Frob}_{\ell}$ is an arithmetic Frobenius element, and $\mathfrak{p}$ is any prime not dividing $\ell$. If $V_{\mathfrak{p}}(\Pi)$ satisfies local-global compatibility at the primes above $\ell$ (which is expected, and known in many cases by the results of \cite{mok14} and \cite{varma}), then this local $L$-factor $L_\ell(\operatorname{As} V_{\mathfrak{p}}(\Pi) \otimes \chi, s)$ coincides with the automorphic local $L$-factor $L_\ell^{\mathrm{As}}(\Pi, \chi, s)$.
\subsubsection*{Relation to the imprimitive $L$-function:} The imprimitive $L$-function $L^{\mathrm{As}}(\Psi, \chi, s)$ can be seen as an ``approximation'' to the primitive $L$-function, in the following sense:
\begin{proposition}
We have
\[
L^{\mathrm{As}}(\Psi, \chi, s) = L^{\mathrm{As}}(\Pi, \chi, s) \prod_{\ell \mid mN} C_\ell(\Psi, \chi, s),
\]
where the local error terms $C_\ell(\Psi, \chi, s)$ are polynomials in $\ell^{-s}$, of degree $\le 4$. In particular, the primitive and imprimitive $L$-functions have the same local factors at all but finitely many primes.
\end{proposition}
\begin{proof}
The equality of the local factors for $\ell \nmid mN$ is Proposition 1 of \cite{Gha99}; so it remains to show that if $\ell \mid mN$, the ratio $\frac{L_\ell^{\operatorname{As}}(\Psi, \chi, s)}{L_\ell^{\operatorname{As}}(\Pi, \chi, s)}$ is a polynomial in $\ell^{-s}$ of degree $\le 4$.
If $\ell \mid m$, then $L_\ell^{\operatorname{As}}(\Psi, \chi, s)$ is identically 1. The same holds if $\Pi_v$ is supercuspidal for some prime $v$ of $F$ above $\ell$, since in this case $c(n \roi_F, \Psi)$ is zero for every $n$ divisible by $\ell$. Since $L_\ell^{\operatorname{As}}(\Pi, \chi, s)$ is (by definition) the reciprocal of a polynomial of degree $\le 4$, the result is automatic in these cases. This leaves the primes $\ell$ such that $\ell \mid N$, $\ell \nmid m$, and the local factors of $\Pi$ at the primes above $\ell$ are principal series or special; these can be handled by a case-by-case check.
\end{proof}
Perhaps surprisingly, the local error terms $C_\ell(\Psi, \chi, s)$ at the bad primes may be non-trivial, even if $\chi = 1$ and $\Psi$ is the unique \emph{new} eigenform generating $\Pi$. In Galois-theoretic terms, this corresponds to the fact that the inertia invariants of $\operatorname{As} V_{\mathfrak{p}}(\Pi)$ at a prime $\ell$ may be strictly larger than the tensor product of the inertia invariants of $V_{\mathfrak{p}}(\Pi)$ at the primes above $\ell$.
\subsection{The Coates--Perrin-Riou conjecture}
\label{sec:coates-perrin-riou}
In \cite{coatesperrinriou89} and \cite{coates89}, a general conjecture is formulated predicting the existence of $p$-adic $L$-functions associated to any motive over $\Q$ whose $L$-function has at least one critical value. In this section, we shall explain what this conjecture predicts for the (conjectural) Asai motive associated to a Bianchi eigenform.
Let us fix an automorphic representation $\Pi$ of $\GLt(\A_F)$, generated by some Bianchi eigenform $\Psi$ of weight $(k, k)$ with coefficients in a number field $E$. We shall assume (in this section only) that there exists a Chow motive $M^{\mathrm{As}}(\Pi)$ over $\Q$, with coefficients in $E$, whose $L$-function is $L^{\mathrm{As}}(\Pi, s)$, and whose $\mathfrak{p}$-adic realisation is $\operatorname{As} V_{\mathfrak{p}}(\Pi)$, for every prime $\mathfrak{p} $ of $E$. (Strictly speaking the conjectures are formulated only for motives with coefficients in $\Q$, but the extension to general $E$ is immediate.)
We shall apply the conjectures of \cite{coates89} to the motive $M = M^{\mathrm{As}}(\Pi)(1)$. Note that $M$ has weight $2k$, and its Hodge decomposition at $\infty$ is given by $\operatorname{dim} M^{(-1, 2k+1)} = \operatorname{dim} M^{(2k+1, -1)} = 1$, $\operatorname{dim} M^{(k,k)} = 2$. Moreover, complex conjugation acts as $-1$ on $M^{(k,k)}$, so $d^+(M) = 1$ and $d^-(M) = 3$. Thus $s = 0$ is a critical value of $L(M(j)(\chi), s) = L^{\mathrm{As}}(\Pi, \chi^{-1}, s + j + 1)$, for integers $0 \le j \le k$ and Dirichlet characters $\chi$ such that $(-1)^j\chi(-1) = 1$.\footnote{These are all the critical values left of the centre of symmetry of the functional equation. Right of this line, we also have critical values for $k+1 \leq j \leq 2k+2$ with $(-1)^j\chi(-1) = -1$.} We choose our square root of $-1$ (the $\rho$ of \emph{op.cit.}) to be $+i$.
We shall also assume that $p$ is unramified in $F$ and $\Pi$ has conductor coprime to $p$, and is ordinary at the primes above $p$, so there is a unique eigenvalue $\alpha$ of $\operatorname{Frob}_p^{-1}$ on $\operatorname{As} V_{\mathfrak{q}}(\Pi)$, for $\mathfrak{q}$ not dividing $p$, such that $\alpha$ is a unit at $\mathfrak{p}$. We shall take $\chi$ to be a Dirichlet character of $p$-power conductor.
\begin{proposition}
In the notation of \emph{op.cit.}, for $(j, \chi)$ as above we have
\begin{align*}
\mathcal{L}_\infty^{(\rho)}(M(j)(\chi)) &= 2(2 \pi i)^{-1-j} j!, \\
\mathcal{L}_p^{(\rho)}(M(j)(\chi)) &=
\begin{cases}
(1 - \tfrac{p^j}{\alpha}) \cdot (1 - \tfrac{\alpha}{p^{1+j}})^{-1}
& \text{if $\chi$ is trivial}, \\[2mm]
G(\chi) \left( \tfrac{p^{j}}{\alpha}\right)^{r}
& \text{if $\chi$ has conductor $p^r > 1$},
\end{cases}
\end{align*}
where $G(\chi) \coloneqq \sum_{a \in (\Z/p^r)^\times} \chi(a) e^{2\pi i a / p^r}$.\qed
\end{proposition}
The conjecture of Coates and Perrin-Riou thus predicts that there should exist a non-zero constant $\Omega_M$, and a $p$-adic measure $\mu$ on $\Zp^\times$, such that
\[ \int_{\Zp^\times} x^j \chi(x)\, \mathrm{d}\mu(x) = \mathcal{L}_p^{(\rho)}(M(j)(\chi)) \cdot \frac{j! L^{\mathrm{As}, (p)}(\Pi, \chi^{-1}, j+1)}{(2 \pi i)^{(1 + j)} \Omega_M}, \]
for $(j, \chi)$ as above. Stated in this form, the conjecture is independent of the existence of the motive $M^{\mathrm{As}}(\Pi)$.
\subsection{The modular symbol attached to $\Psi$}
\label{sect:modsymb}
The Bianchi modular forms we consider in this paper are \emph{cohomological}: they contribute to the cohomology of local systems on $Y_{F, 1}(\n)$, as we shall now explain.
\begin{definition}
Let $E$ be any field extension of $F$, and let $k \ge 0$. We define the left $E[\GLt(F)]$-module $V_{kk}(E) \defeq V_k^{(\ell)}(E)\otimes_{E} V_k^{(\ell)}(E)^\sigma$, where $\gamma \in \GLt(F)$ acts in the usual way on the first component and via its complex conjugate $\gamma^{\sigma}$ on the second component. Via this action, the space $V_{kk}(E)$ gives rise to a local system of $E$-vector spaces on $Y_{F,1}(\n)$, which we also denote by $V_{kk}(E)$.
\end{definition}
\begin{theorem}[Eichler--Shimura--Harder] There is a canonical, Hecke-equivariant injection
\[S_{k,k}(U_{F,1}(\n)) \hookrightarrow \h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(\C)). \]
Moreover, if $\Psi \in S_{k,k}(U_{F,1}(\n))$ is a normalised eigenform, this map induces isomorphisms of 1-dimensional $\C$-vector spaces
\[ S_{k,k}(U_{F,1}(\n))[\Psi] \rTo^\cong \h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(\C))[\Psi] \rTo^\cong \h^1(Y_{F,1}(\n),V_{kk}(\C))[\Psi].
\]
\end{theorem}
\begin{proof}
This was initially proved in \cite{Har87}. A treatment closer to our conventions is \cite{Hid94}, where Proposition 3.1 gives an isomorphism between the cusp forms and cuspidal cohomology, the injection of cuspidal cohomology into $\h_{\mathrm{c}}^1$ is explained at the start of \S5, and the 1-dimensionality of each of these spaces (hence the isomorphism) is explained in \S8.
\end{proof}
If $\Psi \in S_{k,k}(U_{F,1}(\n))$, write $\omega_{\Psi}$ for its image in $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(\C))$. This can be described concretely as the class of a harmonic $V_{kk}$-valued differential form constructed from $\Psi$; for a summary of the construction using our conventions, see \cite[\S2.4]{Wil17}.
We now consider integral structures at $p$. Let $\Psi \in S_{k, k}(U_{F, 1}(\n))$ be a normalised eigenform, $E$ a finite extension of $F$ containing the Hecke eigenvalues of $\Psi$, and $\mathscr{P}$ a prime of $E$ above $p$. We write $R' = \roi_{E, (\mathscr{P})}$ for the valuation ring of $E$ at $\mathscr{P}$, and $R$ for its completion (so $R$ is a finite-rank free $\Zp$-algebra). The same construction as before gives a finite-rank free $R'$-module $V_{kk}(R')$, with a left action of $\GLt(\roi_{F, (p)})$, whose base-extension to $E$ is $V_{kk}(E)$.
If $\widehat{\roi}_{F, (p)} = \prod_{v \mid p} \roi_{F, v} \times \prod_{v \nmid p}' F_v$ denotes the ring of finite ad\`eles of $F$ integral above $p$, then the natural map
\[ \GLt(\roi_{F, (p)}) \backslash \left[ \GLt(\widehat{\roi}_{F, (p)}) \times \uhs \right] / U \longrightarrow Y_F(U) \]
is a bijection for any level $U$ (by the weak approximation theorem). Thus $V_{kk}(R')$ gives a local system of $R'$-modules on $Y_F(U)$ for any sufficiently small $U$, whose base-extension to $E$ is $V_{kk}(E)$ as previously defined.
Since every connected component of $Y_F(U)$ is non-compact, its compactly-supported cohomology (with coefficients in any locally constant sheaf) vanishes in degree 0. From the cohomology long exact sequence associated to multiplication by a non-zero integer $m$, we conclude that its degree 1 compactly-supported cohomology with coefficients in a torsion-free sheaf is torsion-free. Hence we can regard $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(R'))$ as an $R'$-lattice in $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(E))$, preserved by the action of the Hecke operators $(T_{\aaa})^*, (U_{\aaa})^*$.
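Explicitly: if $V$ is a torsion-free locally constant sheaf and $m$ is a non-zero integer, the long exact sequence attached to $0 \to V \xrightarrow{\times m} V \to V/mV \to 0$ reads
\[ \cdots \to \h^0_{\mathrm{c}}(Y_F(U), V/mV) \to \h^1_{\mathrm{c}}(Y_F(U), V) \xrightarrow{\ \times m\ } \h^1_{\mathrm{c}}(Y_F(U), V) \to \cdots, \]
and the vanishing of the $\h^0_{\mathrm{c}}$ term shows that multiplication by $m$ is injective on $\h^1_{\mathrm{c}}$.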
\begin{definition}
We let $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(R'))[\Psi]$ be the intersection of the $\Psi$-eigenspace $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(E))[\Psi]$ with the lattice $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(R'))$.
\end{definition}
\begin{proposition}
There exists a complex period $\Omega_{\Psi} \in \C^\times$, uniquely determined up to multiplication by $(R')^\times$, such that the quotient
\[
\phi_{\Psi} \defeq \omega_\Psi/\Omega_\Psi
\]
is an $R'$-basis of $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(R'))[\Psi]$.
\end{proposition}
\begin{proof}
Since compactly-supported cohomology commutes with flat base extension, it follows from the Eichler--Shimura--Harder theorem that $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(E))[\Psi]$ is one-dimensional over $E$. If $\phi_\Psi$ is any basis of this space, then it is also a $\C$-basis of $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(\C))[\Psi]$, so $\omega_{\Psi}$ must be a $\C^\times$-multiple of $\phi_\Psi$.
As $R'$ is a discrete valuation ring with field of fractions $E$, the lattice $\h^1_{\mathrm{c}}(Y_{F,1}(\n),V_{kk}(R'))[\Psi]$ must be free of rank 1 over $R'$. We can therefore choose $\phi_\Psi$ to be a generator of this module, and this determines $\phi_\Psi$ (and hence $\Omega_{\Psi}$) uniquely up to units in $R'$.
\end{proof}
\begin{remarks}\
\begin{enumerate}[(i)]
\item Our normalisation of the period $\Omega_{\Psi}$ is a little different from that used by Hida in \cite[\S 8]{Hid94}: we have used the lattice given by $\h^1_{\mathrm{c}}$ with integral coefficients, while Hida uses the ordinary $\h^1$ with integral coefficients. Hence our $\phi_{\Psi}$ may not map to an $R'$-basis of $\h^1(Y_{F,1}(\n), V_{kk}(R'))[\Psi]$. We expect that the difference between these two $R'$-lattices ``detects'' congruences modulo $\mathscr{P}$ between $\Psi$ and Eisenstein series, but we have not checked this.
\item There is a convenient explicit presentation for $\h^1_{\mathrm{c}}(Y_{F, 1}(\n), M)$, for very general local systems $M$, using ``modular symbols'' (see \cite[Lemma 8.4]{BW17}, generalising \cite[Proposition 4.2]{AS86}). From this description the torsion-freeness of the cohomology, and its compatibility with base-extension, are immediate.
\end{enumerate}
\end{remarks}
The pullback $\phi_{\Psi}^* \defeq \jmath^*(\phi_{\Psi})$ lies in $\h^1_{\mathrm{c}}(Y^*_{F,1}(\n), V_{kk}(R'))$. It is this class which we shall use in the definition of our $p$-adic Asai $L$-function.
\section{Siegel units and weight 2 Asai--Eisenstein elements}
\label{siegel units}
\subsection{Modular units}
Let $U \subset \GLt(\widehat{\Z})$ be an open compact subgroup, with associated symmetric space $Y_\Q(U)$. In this section, we work exclusively over $\Q$, so we shall drop the subscript $\Q$ from the notation. As is well known, the manifolds $Y(U)$ are naturally the complex points of algebraic varieties defined over $\Q$.
\begin{definition}
A \emph{modular unit} on $Y(U)$ is an element of $\roi(Y(U))^\times$, that is, a regular function on $Y(U)$ with no zeros or poles. (This corresponds to a rational function on the compactification $X(U)$ whose divisor is supported on the cusps).
\end{definition}
Modular units are \emph{motivic} in the sense that there are \emph{realisations} of modular units in various cohomology theories. In particular, to a modular unit $\phi\in \roi(Y_1(N))^\times$ one can attach:
\begin{itemize}
\item its \emph{de Rham} realisation $C_{\mathrm{dR}}(\phi) \in \h^1_{\mathrm{dR}}(Y_1(N),\Q)$, which is the class of the differential form $d\log \phi = \frac{\mathrm{d}\phi}{\phi}$;
\item its \emph{Betti} realisation $C(\phi) \in \h^1(Y_1(N),\Z)$, which is the pullback along $\phi: Y_1(N)(\C) \to \C^\times$ of the generator $C$ of $\h^1(\C^\times, \Z) \cong \Z$ that pairs to $1$ with the homology class of a positively-oriented loop around 0.
\end{itemize}
These are closely related:
\begin{proposition}
The canonical comparison isomorphism
\[
\h^1\left(Y_1(N),\Z\right)\otimes_{\Z} \C \rTo^\cong \h^1_{\mathrm{dR}}(Y_1(N),\Q)\otimes_{\Q} \C
\]
maps $C(\phi)$ to $\tfrac{1}{2\pi i} C_{\mathrm{dR}}(\phi)$.
\end{proposition}
\begin{proof}
The comparison isomorphism identifies the de Rham cohomology class of an $n$-form $\omega$ with the Betti cohomology class mapping an $n$-simplex $\delta$ to $\int_{\delta} \omega$. By construction, this isomorphism is compatible with the pullback maps in the two cohomology theories attached to a smooth map of manifolds.
In our case, the modular unit $\phi$ gives a map $Y_1(N)(\C) \to \C^\times$. By definition, $C(\phi)$ is the pullback by $\phi$ of the generator $C$ of $\h^1(\C^\times, \Z)$; and $C_{\mathrm{dR}}(\phi)$ is the pullback of the class of the differential $\tfrac{\mathrm{d}T}{T}$, where $T$ is the coordinate on $\C$. If $\delta$ denotes a positively-oriented loop around 0, then $\int_{\delta} \tfrac{\mathrm{d}T}{T} = 2\pi i$. Since $C$ pairs to $1$ with $\delta$ (by definition), the result follows.
\end{proof}
\begin{remark}
It is conventional to ``normalize away'' the factor of $2\pi i$ by defining $C(\phi)$ as an element of $\h^1(Y_1(N), \Z(1))$, where $\Z(1) = 2\pi i\cdot \Z \subset \C$. However, this formalism does not work well when considering the cohomology of non-algebraic manifolds, so we shall not use it here.
\end{remark}
\subsection{Eisenstein series}
\label{eisenstein series}
The de Rham realisations of modular units give rise to weight 2 Eisenstein series in de Rham cohomology. In the next section, we shall exhibit a canonical system of modular units -- the \emph{Siegel units} -- whose de Rham realisations can be written down very explicitly in terms of the following Eisenstein series.
\begin{definition}[{cf.~\cite[\S3]{Kat04}}]
Let $\tau \in \uhp$, $k$ an integer $\ge 2$, and $\beta \in \Q/\Z$, with $\beta \ne 0$ if $k = 2$. Define
\[F^{(k)}_{\beta}(\tau) \defeq \frac{(k-1)!}{(-2\pi i)^k}\sideset{}{'}\sum_{(m,n)\in\Z^2}\frac{e^{2\pi i \beta m}}{(m\tau + n)^k},\]
where the prime denotes that the term $(m,n) = (0,0)$ is omitted. This is a modular form of weight $k$ and level $\Gamma_1(N)$, for any $N$ such that $N\beta = 0$.
\end{definition}
(Kato defines a slightly more general class of Eisenstein series $F^{(k)}_{\alpha,\beta}$; we shall only use the case $\alpha = 0$, so we drop it from the notation.)
For the $L$-value calculations in the appendix, we shall need to use the fact that this series is a specialisation of a real-analytic family. For $\Re(s) \ge 1 - \tfrac{k}{2}$, we define
\begin{equation}\label{eqn:Eis}
E^{(k)}_{\beta}(\tau, s) \defeq \frac{\Gamma(s+k)}{(-2\pi i)^k\pi^{s}}\sideset{}{'}\sum_{(m,n)\in\Z^2}\frac{\mathrm{Im}(\tau)^s}{(m\tau + n + \beta)^k|m\tau + n + \beta|^{2s}},
\end{equation}
where the prime denotes that the term $(m,n) = (0,0)$ is omitted if $\beta = 0$ (but included otherwise). This has analytic continuation to all $s\in \C$, and we have
\[ F^{(k)}_{\beta}(\tau) = E^{(k)}_{\beta}(\tau, 1-k).\]
(See e.g.\ \cite[4.2.2(iv)]{LLZ14}.)
\subsection{Siegel units}
\begin{definition}
For $N \ge 1$ and $c > 1$ an integer coprime to $6N$, let
\[ \subc g_N \in \roi(Y_1(N))^\times \]
be Kato's Siegel unit (the unit denoted by $\subc g_{0,1/N}$ in the notation of \cite[\S 1]{Kat04}).
\end{definition}
Slightly abusively, we shall use the same symbol $\subc g_N$ for the pullback of this unit to $Y(M, N)$, for any $M \ge 1$ (we shall only need this when $M \mid N$).
As in \emph{op.cit.}, we note that if $c, d$ are two integers that are both $>1$ and coprime to $6N$, then we have the identity
\begin{equation}
\label{eq:cdsymmetry}
(d^2 - \langle d \rangle)\subc g_N = (c^2 - \langle c \rangle){}_d g_N.
\end{equation}
It follows that the dependence on $c$ may be removed after extending scalars to $\Q$: there is an element $g_N \in \roi(Y_1(N))^\times \otimes \Q$ such that $\subc g_N = (c^2 - \langle c \rangle) \cdot g_N$ for any choice of $c$.
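To spell this out: since the diamond operator $\langle c \rangle$ has finite order, the eigenvalues of $c^2 - \langle c \rangle$ are of the form $c^2 - \zeta$ with $\zeta$ a root of unity, and these are nonzero since $c > 1$; so $c^2 - \langle c \rangle$ acts invertibly on $\roi(Y_1(N))^\times \otimes \Q$, and we may set
\[ g_N \defeq \left(c^2 - \langle c \rangle\right)^{-1} \cdot \subc g_N \in \roi(Y_1(N))^\times \otimes \Q. \]
The identity \eqref{eq:cdsymmetry} then shows that this element does not depend on the choice of $c$.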
\begin{proposition} \
\label{siegel unit properties}
\begin{itemize}
\item[(i)] The Siegel units are \emph{norm-compatible}, in the sense that if $N' \mid N$ and $N$ and $N'$ have the same prime divisors, then under the natural map
\[\mathrm{pr}: Y(M,N) \longrightarrow Y(M,N')\]
we have
\[(\mathrm{pr})_*(\subc g_N) = \subc g_{N'}.\]
\item[(ii)] The de Rham realisation of $g_N$ is the Eisenstein series
\[
d\log( g_N)(\tau) =
-2\pi i \, F^{(2)}_{1/N}(\tau)\, \mathrm{d}\tau.
\]
\end{itemize}
\end{proposition}
\begin{proof}
The first part is proved in \cite{Kat04}, Section 2.11. The second part is Proposition 3.11(2) of \emph{op.cit.}
\end{proof}
One important use of Siegel units comes in the construction of \emph{Euler systems}; for example, see \cite{Kat04}, \cite{LLZ14} and \cite{KLZ17}. The basic method in each of these cases is similar: one takes the cohomology classes attached to Siegel units under the realisation maps, pushes them forward to a different symmetric space, and then exploits the norm-compatibility of the units to prove norm relations for these cohomology classes. We will do something similar in Betti cohomology. In particular, we make the following definition:
\begin{definition}
Let $\subc C_{N} \defeq C(\subc g_{N}) \in \h^1(Y_1(N),\Z)$ be the Betti realisation of $\subc g_{N}$.
\end{definition}
From Proposition \ref{siegel unit properties}(i), we see that if $p \mid N$, the classes ${}_c C_{Np^r}$ for $r \ge 0$ are compatible under push-forward, and define a class
\[ \subc C_{Np^\infty} \in \varprojlim_r \h^1(Y_1(Np^r),\Z).\]
\begin{lemma}
\label{lemma:siegel-unit-conjugation}
If $\rho$ denotes the involution of $Y_1(N)$ corresponding to $x + iy \mapsto -x+iy$ on $\uhp$, then $\rho^*\left({}_c C_N\right) = -{}_c C_N$.
\end{lemma}
\begin{proof}
The variety $Y_1(N)$ has a canonical model over $\Q$ for which $\rho$ corresponds to complex conjugation on $Y_1(N)(\C)$. Since the Siegel units are defined over $\Q$ in this model, they intertwine the action of $\rho$ on $Y_1(N)(\C)$ with complex conjugation on $\C^\times$. So it suffices to check that complex conjugation acts as $-1$ on $\h^1(\C^\times, \Z)$, which is clear.
\end{proof}
\subsection{Asai--Eisenstein elements in weight 2}
Now let $F$ be an imaginary quadratic field, and $\n$ an ideal of $\roi_F$ divisible by some integer $\ge 4$. Recall that we have
\[ Y_{F, 1}^*(\n) = \Gamma_{F, 1}^*(\n) \backslash \uhs, \]
and that we showed in Proposition \ref{prop:goodinclusion} that the natural map
\[\iota : Y_{\Q,1}(N) \hookrightarrow Y_{F,1}^*(\n)\]
is a closed immersion. We also need the following map:
\begin{definition}
\label{def:kappaam}
Let $m\geq 1$ and $a \in \roi_F$. Consider the map
\[ \kappa_{a/m}: Y_{F, 1}^*(m^2 \n) \to Y_{F, 1}^*(\n) \]
given by the left action of $\smallmatrd{1}{a/m}{0}{1} \in \SLt(F)$ on $\uhs$. This is well-defined\footnote{
Note that $\kappa_{a/m}$ is \emph{not} in general well-defined on $Y_{F,1}(m^2\n)$, since \[\smallmatrd{1}{a/m}{0}{1}\smallmatrd{-1}{0}{0}{1}\smallmatrd{1}{-a/m}{0}{1} = \smallmatrd{-1}{2a/m}{0}{1},\] which is not in $\Gamma_{F,1}(\n)$ if $2a \notin m\roi_F$.}, since it is easy to see that
\[\matrd{1}{a/m}{0}{1} \Gamma_{F, 1}^*(m^2\n) \matrd{1}{-a/m}{0}{1} \subset \Gamma_{F, 1}^*(\n), \]
and it depends only on the class of $a$ modulo $m\roi_F$.
\end{definition}
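The displayed inclusion is a direct matrix computation: for $\smallmatrd{r}{s}{t}{u} \in \Gamma_{F, 1}^*(m^2\n)$ one has
\[ \matrd{1}{a/m}{0}{1}\matrd{r}{s}{t}{u}\matrd{1}{-a/m}{0}{1} = \matrd{r + \tfrac{a}{m}t}{\; s + \tfrac{a}{m}(u - r) - \tfrac{a^2}{m^2}t\;}{t}{u - \tfrac{a}{m}t}, \]
and since $t \equiv 0$ and $r \equiv u$ modulo $m^2\n$, each of the correction terms $\tfrac{a}{m}t$, $\tfrac{a}{m}(u-r)$ and $\tfrac{a^2}{m^2}t$ lies in $\n$, so the conjugate satisfies the congruences defining $\Gamma_{F, 1}^*(\n)$.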
The elements we care about are the following. Assume that $\n$ is divisible by some integer $q \ge 4$, as above. Then we have maps
\[ Y_{\Q, 1}(m^2 N) \labelrightarrow{\iota} Y_{F, 1}^*(m^2 \n) \labelrightarrow{\kappa_{a/m}} Y_{F, 1}^*(\n). \]
Moreover, writing $\h_*^{\mathrm{BM}}$ for Borel--Moore homology (homology with non-compact supports), there are isomorphisms
\begin{equation}
\label{eq:BMhomology}
\h^1(Y_{\Q, 1}(m^2 N), \Z) \cong \h_1^{\mathrm{BM}}(Y_{\Q, 1}(m^2 N), \Z), \quad \h^2(Y^*_{F, 1}(m^2 \n), \Z) \cong \h_1^{\mathrm{BM}}(Y^*_{F, 1}(m^2 \n), \Z),
\end{equation}
given by cap-product with the fundamental classes in $\h_2^{\mathrm{BM}}(Y_{\Q, 1}(m^2 N), \Z)$ and $\h_3^{\mathrm{BM}}(Y^*_{F, 1}(m^2 \n), \Z)$ respectively. Since Borel--Moore homology is covariantly functorial for proper maps, we can therefore define a pushforward map $\iota_*: \h^1(Y_{\Q, 1}(m^2 N), \Z) \to \h^2(Y^*_{F, 1}(m^2 \n), \Z)$.
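Explicitly, $\iota_*$ is the composite
\[ \h^1(Y_{\Q, 1}(m^2 N), \Z) \;\cong\; \h_1^{\mathrm{BM}}(Y_{\Q, 1}(m^2 N), \Z) \;\longrightarrow\; \h_1^{\mathrm{BM}}(Y^*_{F, 1}(m^2 \n), \Z) \;\cong\; \h^2(Y^*_{F, 1}(m^2 \n), \Z), \]
where the outer maps are the duality isomorphisms of \eqref{eq:BMhomology} and the middle map is the pushforward in Borel--Moore homology attached to the proper map $\iota$.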
\begin{definition}
For $a \in \roi_F/m\roi_F$, $m \ge 1$, and $c > 1$ coprime to $6 m N$, define
\[ \subc\Xi_{m,\n,a} \in \h^2\left(Y_{F,1}^*(\n),\Z\right) \]
to be the image of $\subc C_{m^2N}$ under the map $(\kappa_{a/m})_* \circ \iota_*$.
\end{definition}
\begin{proposition}
\label{prop:xi-sign}
We have $\smallmatrd{-1}{}{}{1}^* \cdot \subc\Xi_{m,\n,a} = \subc\Xi_{m,\n,-a}$.
\end{proposition}
\begin{proof}
It is clear that $\smallmatrd{-1}{}{}{1} \circ \kappa_{a/m} = \kappa_{-a/m} \circ \smallmatrd{-1}{}{}{1}$ as maps $Y_{F, 1}^*(m^2\n) \to Y_{F, 1}^*(\n)$. Moreover, the action of $\smallmatrd{-1}{}{}{1}$ on $Y_{F, 1}^*(m^2 \n)$ preserves the image of $Y_{\Q,1}(m^2 N)$, and the involution of $Y_{\Q,1}(m^2 N)$ it induces is the involution $\rho$ of Lemma \ref{lemma:siegel-unit-conjugation}, which acts as $-1$ on ${}_c C_{m^2 N}$. Finally, we have $\smallmatrd{-1}{}{}{1}^* \circ \iota_* = -\iota_* \circ \rho^*$, because $\smallmatrd{-1}{}{}{1}$ preserves the orientation of $Y^*_{F, 1}(\n)$ and thus acts as $+1$ on the fundamental class in $\h_3^{\mathrm{BM}}$, but $\rho$ reverses the orientation of $Y_{\Q, 1}(m^2N)$ and thus acts as $-1$ on the fundamental class.
\end{proof}
\begin{definition}
Define
\[\subc \Phi_{\n,a}^r \in \h^2\Big(Y_{F, 1}^*(\n),\Z\Big) \otimes_{\Z}\Zp[(\Z/p^r)^\times]\]
by
\[\subc\Phi_{\n,a}^r \defeq \sum_{t \in (\Z/p^r)^\times} \subc\Xi_{p^r,\n,at}\otimes[t].\]
\end{definition}
\begin{lemma}\label{lemma:level-compat}
If $\n \mid \n'$ are two ideals of $\roi_F$ with the same prime factors, then pushforward along the map $Y_{F, 1}(\n') \to Y_{F, 1}(\n)$ sends $\subc \Phi_{\n',a}^r$ to $\subc \Phi_{\n,a}^r$ (for any valid choices of $c$, $a$, $r$).
\end{lemma}
\begin{proof}
This is immediate from the norm-compatibility of the Siegel units; compare \cite[Theorem 3.1.2]{LLZ14}.
\end{proof}
We now come to one of the key theorems of this paper, which shows that if $m = p^r$ with $r$ varying, then the above elements fit together $p$-adically into a compatible family. We now impose the assumption that $\n$ is divisible by all primes $v \mid p$ of $\roi_F$.
\begin{theorem}\label{norm relation}
Let $r\geq 1$, let $a$ be a generator of $\roi_F/(p\roi_F+\Z)$, and let
\[\pi_{r+1}: \h^2(Y_{F, 1}^*(\n),\Z) \otimes_{\Z}\Zp[(\Z/p^{r+1})^\times] \longrightarrow \h^2(Y_{F, 1}^*(\n),\Z) \otimes_{\Z}\Zp[(\Z/p^r)^\times]\]
denote the map that is the identity on the first component and the natural quotient map on the second component. Then we have
\[\pi_{r+1}( \subc\Phi_{\n,a}^{r+1}) = (U_p)_* \cdot \subc\Phi_{\n,a}^r,\]
where the Hecke operator $(U_p)_*$ acts via its action on $\h^2(Y_{F, 1}^*(\n),\Z)$. Similarly, when $r = 0$ we have
\[ \pi_{1}( \subc\Phi_{\n,a}^{1}) = \left( (U_p)_* - 1 \right) \cdot \subc\Phi_{\n,a}^0.\]
\end{theorem}
\begin{remark-numbered}
Before embarking on the proof, which will occupy the next section of the paper, we pause to give a brief description of how this is important for the construction of $p$-adic Asai $L$-functions, in the simplest case of a Bianchi eigenform $\Psi$ of weight $(0,0)$. Define $e_{\mathrm{ord},*} \defeq \lim_{n\rightarrow\infty}(U_p^{n!})_*$ to be the ordinary projector on cohomology with $\Zp$ coefficients associated to $(U_p)_*$, so that $(U_p)_*$ is invertible on the space $e_{\mathrm{ord},*}\h^2(Y_{F,1}^*(\n),\Zp)$. Given the theorem, we see that the collection
\[
[(U_p)_*^{-r} e_{\mathrm{ord},*} \cdot \subc\Phi_{\n,a}^r]_{r\geq 1}
\]
forms an element $\subc \Phi_{\n,a}^\infty$ in the inverse limit
\[
e_{\mathrm{ord},*}\h^2(Y_{F,1}^*(\n),\Zp) \otimes \Zp[[\Zp^\times]].
\]
As in \S \ref{sect:modsymb}, the eigenform $\Psi$ gives rise to a cohomology class $\phi_\Psi^* \in \h^1_{\mathrm{c}}(Y_{F, 1}^*(\n), R)$, where $R$ is the ring of integers of some finite extension of $\Qp$. If the $(U_p)^*$-eigenvalue of $\Psi$ is a unit in $R$, then the linear functional on $\h^2(Y_{F,1}^*(\n),\Zp) \otimes R$ given by pairing with $\phi_\Psi^*$ factors through the projector $e_{\mathrm{ord},*}$. Hence pairing $\subc \Phi_{\n,a}^\infty$ with $\phi_\Psi^*$ gives a measure on $\Zp^\times$ with values in $R$. This will be our $p$-adic $L$-function. By construction, its values at finite-order characters are given by integrating $\Psi$ against linear combinations of Eisenstein series on $Y_{\Q,1}(m^2 N)$; and these will turn out to compute the special values of the Asai $L$-function.
\end{remark-numbered}
\section{Proving the norm relations (Theorem \ref{norm relation})}
\label{proof of norm relation}
Theorem \ref{norm relation} is directly analogous to the norm-compatibility relations for Euler systems constructed from Siegel units; specifically, it is the analogue in our context of \cite[Theorem 3.3.2]{LLZ14}. Exactly as in \emph{op.cit.}, it is simplest not to prove the theorem directly, but rather to deduce it from a related result concerning cohomology classes on the symmetric spaces $Y_F^*(m, m\n)$, analogous to Theorem 3.3.1 of \emph{op.cit.}. Note that these symmetric spaces are not connected for $m > 2$, but have $\phi(m)$ connected components; this will allow us to give a tidy conceptual interpretation of the sum over $t \in (\Z / p^r)^\times$ appearing in the definition of $\subc\Phi_{\n, a}^r$.
\subsection{Rephrasing using the spaces $Y_F^*(m, m\n)$}
\begin{proposition}
For any $a \in \roi_F$, the element $\matrd{1}{a}{0}{1}$ normalises $U^*(m, m\n) \subset \GLt^*(\A_F^f)$.
\end{proposition}
\begin{proof} Easy check. \end{proof}
We can therefore regard right-translation by $\smallmatrd{1}{a}{0}{1}$ as an automorphism of $Y_F^*(m, m\n)$, and we can consider the composite map
\[ \iota_{m, \n, a}: Y_{\Q}^*(m, mN) \hookrightarrow Y_F^*(m, m\n)
\labelrightarrow{\smallmatrd{1}{-a}{0}{1}} Y_F^*(m, m\n),\]
where the first arrow is injective (as soon as $m\n$ is divisible by some integer $\ge 4$) by the same argument as in Proposition \ref{prop:goodinclusion}. Note also that the components of $Y_F^*(m, m\n)$ are indexed by $(\Z / m\Z)^\times$, with the component labelled $j$ being the one containing the image of $\smallmatrd j {}{}1 \in \GLt^*(\A_F^f)$; and the action of $\smallmatrd{1}{a}{0}{1}$ preserves each component.
\begin{remark-numbered}
The change of sign appears because we are comparing left and right actions.
\end{remark-numbered}
\begin{definition}
We define $\subc\mathcal{Z}_{m, \n, a}$ to be the image of $\subc C_{mN} \in \h^1(Y_{\Q}(m, mN), \Z)$ under pushforward via $\iota_{m, \n, a}$, and $\subc\mathcal{Z}_{m, \n, a}(j)$ the projection of $\subc\mathcal{Z}_{m, \n, a}$ to the direct summand of $\h^1(Y_F^*(m, m\n), \Z)$ given by the $j$-th component, so that
\[ \subc\mathcal{Z}_{m, \n, a} = \sum_j \subc\mathcal{Z}_{m, \n, a}(j).\]
\end{definition}
Exactly as in the situation of Beilinson--Flach elements, these $\mathcal{Z}$ elements turn out to be closely related to the $\Phi$'s defined above (compare \cite[Proposition 2.7.4]{LLZ14}). We consider the map
\[ s_m: Y_F^*(m, m\n) \to Y_{F, 1}^*(\n) \]
given by the action of $\matrd{m}{0}{0}{1}$ (corresponding to $(z, t) \mapsto (z/m, t/m)$ on $\uhs$).
\begin{proposition}
\label{Z-and-Xi}
We have $(s_m)_*\left( \subc\mathcal{Z}_{m, \n, a}(j) \right) = \subc \Xi_{m, \n, j a}$, and hence
\[ \subc \Phi_{\n, a}^r = \sum_{j} (s_{p^r})_*\left( \subc\mathcal{Z}_{p^r, \n, a} (j) \right) \otimes [j].\]
\end{proposition}
Before proceeding to the proof, we note the following lemma:
\begin{lemma}
The pushforward of $\subc C_{m^2 N}$ along the map
\[ Y_{\Q, 1}(m^2 N) \to Y_\Q(1(m), mN), \]
given by $z \mapsto mz$ on $\uhp$, is $\subc C_{m N}$.
\end{lemma}
\begin{proof} This follows from the well-known norm-compatibility relations of the Siegel units, cf.\ \cite[Lemma 2.12]{Kat04}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{Z-and-Xi}]
For each $j \in (\Z / m\Z)^\times$, we have a diagram
\[
\BigCommDia
{Y_F^*(m, mN)^{(1)}}{\smallmatrd{1}{-ja}{0}{1}}{Y_F^*(m, m\n)^{(1)}}
{\smallmatrd{j}{0}{0}{1}}{\smallmatrd{j}{0}{0}{1}}
{Y_F^*(m, mN)^{(j)}} {\smallmatrd{1}{-a}{0}{1}} {Y_F^*(m, m\n)^{(j)}}.
\]
In other words, if we identify $Y_F^*(m, mN)^{(j)}$ with $\Gamma_F^*(m, mN) \backslash \uhs$ via $\smallmatrd{j}{0}{0}{1}$, the restriction to this component of the right action of $\smallmatrd{1}{-a}{0}{1}$ on the adelic symmetric space corresponds to the left action of $\smallmatrd{1}{ja}{0}{1}$ on $\uhs$.
With these identifications, we see that the map
\[ \kappa_{ja/m} : Y^*_{F, 1}(m^2 \n) \to Y^*_{F, 1}(\n) \]
used in the definition of $\Xi_{m, \n, ja}$ factors as
\begin{align*} Y^*_{F, 1}(m^2 \n) &\labelrightarrow{\smallmatrd{1}{0}{0}{m}} Y^*_{F}(1(m), m\n)\\
&\labelrightarrow{\cong} Y^*_{F}(m, m\n)^{(j)}\\
&\labelrightarrow{\smallmatrd{1}{-a}{0}{1}} Y^*_{F}(m, m\n)^{(j)}\\
&\labelrightarrow{s_m} Y^*_{F, 1}(\n).
\end{align*}
Pushforward along the first map is compatible with pushforward along the corresponding map on $\uhp$, which sends $\subc C_{m^2N}$ to $\subc C_{mN}$ by the previous lemma.
\end{proof}
\begin{corollary}
The classes $\subc \Xi_{m, \n, a}$ and $\subc \mathcal{Z}_{m, \n, a}$ depend only on the image of $a$ in the quotient $\roi_F / (m \roi_F + \Z)$.
\end{corollary}
\begin{proof}
If $b \in \Z$, the action of $\smallmatrd{1}{b}{0}{1}$ on $Y_{\Q}(m, mN)$ fixes the cohomology class ${}_c C_{mN}$, as this class is the pullback of a class on $Y_{\Q, 1}(mN)$. Since the actions of $\smallmatrd{1}{b}{0}{1}$ on $Y_\Q(m, mN)$ and $Y_{F}^*(m, m\n)$ are compatible, we see that $\subc \mathcal{Z}_{m, \n, a} = \subc \mathcal{Z}_{m, \n, a + b}$ for any $a \in \roi_F$ and $b \in \Z$, as required. The corresponding result for $\subc \Xi_{m, \n, a}$ now follows from the previous proposition.
\end{proof}
\subsection{A norm relation for zeta elements}
In this section, we formulate and prove a norm relation for the zeta elements $\subc\mathcal{Z}_{m, \n, a}$ which is analogous to Theorem \ref{norm relation}, but simpler to prove.
\begin{definition}
For $p$ prime, define a map
\[\tau_{p}: Y_F^*(p m, p m\n) \longrightarrow Y_F^*(m,m\n)\]
by composing the right-translation action of $\smallmatrd{p}{0}{0}{1} \in \GLt^*(\A^f_F)$ with the natural projection.
\end{definition}
\begin{theorem}\label{norm relation zeta}
Suppose $p$ is a prime with $p \mid m$, and suppose that $a \in \roi_F$ maps to a generator of the quotient $\roi_F / (p \roi_F + \Z) \cong \Z / p \Z$. Then we have the norm relation
\[(\tau_{p})_* (\subc \mathcal{Z}_{p m, \n, a}) = (U_p)_*(\subc\mathcal{Z}_{m, \n, a}).\]
\end{theorem}
For simplicity, we give the proof under the slightly stronger hypothesis that $p \mid \n$ (rather than just that every prime above $p$ divides $\n$, which is our running hypothesis). This only makes a difference if $p$ is ramified in $F$, and the proof can be extended to handle this extra case at the cost of slightly more complicated notation; we leave the necessary modifications to the interested reader.
Firstly, note that there is a commutative diagram
\begin{equation}
\label{tau commutative diagram}
\begin{diagram}
Y_F^*(p m, p m \n) && \rTo^{\text{\tiny{pr$_1$}}} &&Y_F^*(p m, m\n) && \rTo^{\text{\tiny{pr$_2$}}} && Y_F^*(m (p), m\n)\\
&\rdTo(4,3)_{\tau_p} &&&&&& \ldTo(4,3)_{\pi_2} &\dTo_{\pi_1} \\
&&&&&&\\
&&&& Y_F^*(m, m\n) &&&& Y_F^*(m, m\n)
\end{diagram},
\end{equation}
where the top maps are the natural projection maps, $\tau_p$ is the twisted degeneracy map of the previous section, and $\pi_1, \pi_2$ are the degeneracy maps of Section \ref{hecke operators}.
\begin{lemma}\label{first norm relation}
Let $\n' = (p)^{-1} \n.$ Under pushforward by the natural projection map
\[\mathrm{pr}_1: Y_F^*(pm, pm\n) \longrightarrow Y_F^*(pm, m\n) = Y_F^*(pm,pm\n'),\]
we have
\[(\mathrm{pr}_1)_* (\subc \mathcal{Z}_{pm,\n,a}) = \subc \mathcal{Z}_{pm,\n',a}.\]
\end{lemma}
\begin{proof}
This is immediate from the corresponding norm-compatibility property of the Siegel units, which is Proposition \ref{siegel unit properties}. Compare \cite[Theorem 3.1.1]{LLZ14}.
\end{proof}
So we need to compare the classes $\subc\mathcal{Z}_{pm,\n',a}$ and $\subc\mathcal{Z}_{m,\n,a}$. Note that these both involve the same Siegel unit $\subc g_{mN} = {}_c g_{0, 1/mN}$. Let us write $u_a$ for the element $\smallmatrd 1{-a} 0 1$.
\begin{definition}
\begin{itemize}
\item[(i)]Let $\alpha_{pm,\n,a}$ denote the composition of the maps
\[Y_\Q(pm, mN) \hookrightarrow Y_F^*(pm, m\n) \labelrightarrow{u_a} Y_F^*(pm, m\n) \labelrightarrow{\mathrm{pr}_2} Y_F^*(m(p),m\n).\]
\item[(ii)]Let $\iota_{m,\n,a}$ denote, as above, the composition of the maps
\[Y_\Q(m,m N) \hookrightarrow Y_F^*(m,m\n) \labelrightarrow{u_a} Y_F^*(m,m\n).\]
\end{itemize}
\end{definition}
The following lemma is the key component in the proof of Theorem \ref{norm relation zeta}.
\begin{lemma}\label{cartesian}
Suppose that $a \in \roi_F$ is a generator of $\roi_F/(p\roi_F + \Z)$. Then:
\begin{itemize}
\item[(i)] The map $\alpha_{pm,\n,a}$ is injective.
\item[(ii)] The diagram
\[
\begin{diagram}
Y_{\Q}(pm,mN) &&&\rInto^{\alpha_{pm,\n,a}}&&& Y_F^*(m(p),m\n)\\
\dTo &&&&&& \dTo_{\pi_1}\\
Y_{\Q}(m,mN) &&&\rInto^{\iota_{m,\n,a}}&&& Y_F^*(m,m\n)
\end{diagram}
\]
is Cartesian, where the left vertical arrow is the natural projection.
\end{itemize}
\end{lemma}
The proof of this lemma is taken essentially verbatim from \cite[Lemma 7.3.1]{LLZ16}, where the analogous result is proved for real quadratic fields.
\begin{proof}
To prove part (i), note that the image of $\alpha_{pm,\n,a}$ is the modular curve of level
\[\GLt(\A_{\Q}^f)\cap u_a^{-1}U_F(m(p),m\n)u_a.\]
This intersection is the set of $\smallmatrd{r}{s}{t}{u} \in \GLt(\widehat{\Z})$ such that
\[\matrd{r-at}{s+a(r-u)-a^2t}{t}{at+u} \equiv I \mod \matrd{m}{pm}{m\n}{m\n}.\]
We want to show that this is equal to $U_\Q(pm,mN)$. Clearly, any $\smallmatrd{r}{s}{t}{u}$ in the intersection satisfies
\[\matrd{r}{s}{t}{u} \equiv \matrd{1}{*}{0}{1} \mod \matrd{m}{*}{mN}{mN},\]
whilst
\[s + a(r-1) \equiv 0 \newmod{mp}.\]
Suppose $m = p^q m'$ with $m'$ coprime to $p$. We know that both summands are zero modulo $m$, so it suffices to check that they are both zero modulo $p^{q+1}$. Since $a$ generates $\roi_F/(p\roi_F + \Z)$, $\{1, a\}$ is a basis of $(\roi_F / p^{q+1}\roi_F) \otimes \Zp$ as a module over $\Z / p^{q+1}\Z$; so both summands must be individually zero modulo $p^{q+1}$. But this means precisely that $\smallmatrd{r}{s}{t}{u} \in U_{\Q}(pm, mN)$, as required.
Part (ii) follows from the observations that the horizontal maps are both injections and that both vertical maps are finite of degree $p^2$.
\end{proof}
\begin{remark-numbered}Since this lemma is crucial to the proof, we expand slightly on what part (i) really says. The map $\alpha_{pm,\n,a}$ is $p$-to-$1$ on connected components, in the sense that the preimage of a single component of $Y_F^*(p^r(p),p^r\n)$ contains $p$ connected components of $Y_\Q(p^{r+1},p^r N)$. The condition on $a$ ensures that the map $u_a$ `twists' these $p$ components away from each other inside that single component of the target space, so that their images are disjoint. In particular, the result would certainly fail without this condition; for instance, if $a = 0$ then the map factors through $Y_\Q(m(p), mN)$.
\end{remark-numbered}
\begin{proof}[Proof of Theorem \ref{norm relation zeta}]
The Cartesian diagram of Lemma \ref{cartesian} shows that
\[(\alpha_{pm,\n,a})_*(\subc C_{mN}) = (\pi_1)^*(\subc \mathcal{Z}_{m,\n,a}).\]
But by definition,
\begin{align}\label{zeta equality}
(\alpha_{pm,\n,a})_*(\subc C_{mN}) &= (\mathrm{pr}_2)_*(\subc \mathcal{Z}_{pm,\n',a})\notag \\
&= (\mathrm{pr}_2)_*(\mathrm{pr}_1)_*(\subc \mathcal{Z}_{pm,\n,a}),
\end{align}
where the second equality follows from Lemma \ref{first norm relation}. From the commutative diagram \eqref{tau commutative diagram}, we know that $(\tau_p)_* = (\pi_2)_*(\mathrm{pr}_2)_*(\mathrm{pr}_1)_*$, whilst by definition $(U_p)_* = (\pi_2)_*(\pi_1)^*.$ Hence applying $(\pi_2)_*$ to equation \eqref{zeta equality} gives the result.
\end{proof}
We can now deduce the proof of the main theorem:
\begin{proof}[Proof of Theorem \ref{norm relation}]
We need to show that, for each $j \in (\Z/p^r \Z)^\times$, we have
\[ \sum_{\substack{k \in \Z / p^{r + 1} \Z \\ k = j \bmod p^r}} \subc \Xi_{p^{r+1}, \n, ka} = (U_p)_* \cdot \subc \Xi_{p^r, \n, ja}. \tag{\dag}\]
We have a commutative diagram
\[
\BigCommDia
{ \bigsqcup_k Y_F^*(p^{r+1}, p^{r+1}\n)^{(k)}} {\tau_p} {Y_F^*(p^r, p^r\n)^{(j)}}
{s_{p^{r+1}}} {s_{p^r}}
{Y_{F, 1}^*(\n)} {} {Y_{F, 1}^*(\n). }
\]
The left-hand side of $(\dag)$ is exactly the pushforward of $\sum_k \subc \mathcal{Z}_{p^{r+1}, \n, a}(k)$ along the left vertical arrow (where again we are using the notation $x(k)$ for the projection of $x$ to the $k$-th component). Theorem \ref{norm relation zeta} shows that the pushforward of the same element along $\tau_p$ is $(U_p)_* \subc \mathcal{Z}_{p^{r}, \n, a}(j)$. So it suffices to check that the operators $(U_p)_*$ on $Y_F^*(p^r, p^r \n)^{(j)}$ and on $Y_{F, 1}^*(\n)$ are compatible under $s_{p^r}$, which is clear by inspecting a set of single-coset representatives (using our running assumption that all primes above $p$ divide $\n$).

The case $r = 0$ is special, since we must exclude the term $\subc \Xi_{p,\n,0}$ from the above sum; this introduces the $-1$ term of the theorem.
\end{proof}
\section{Asai--Eisenstein elements in higher weights}\label{higher weights}
In the previous sections, we have defined compatible systems of classes in the Betti cohomology of the spaces $Y_{F, 1}^*(\n)$ with trivial coefficients. We now extend this to coefficients arising from non-trivial algebraic representations.
We fix, for the duration of this section, a prime $p$ and a finite extension $L$ of $\Qp$ large enough that $F$ embeds into $L$ (and we fix such an embedding). We let $R$ be the ring of integers of $L$. We also choose an ideal $\n$ of $\roi_F$ divisible by all primes above $p$. For convenience, we also assume that $\n$ is divisible by some integer $q \ge 4$; note that this is automatic if $p$ is unramified in $F$ and $p \ge 5$.
\subsection{Coefficients and moment maps}
As above, we let $V_k(R) = \operatorname{Sym}^k R^2$ be the left $R[\GLt(\Z)]$-module of symmetric polynomials in 2 variables with coefficients in $R$. We will be interested in the dual $T_k(R) = V_k(R)^*$ (the module of symmetric tensors of degree $k$ over $R^2$). We view this as a local system of $R$-modules on $Y_{\Q, 1}(N)$, for any $N \ge 4$, in the usual way.
Similarly, we have $R[\GLt(\roi_F)]$-modules $V_{kk}(R) = \operatorname{Sym}^k R^2 \otimes (\operatorname{Sym}^{k} R^2)^\sigma$, where $\GLt(\roi_F)$ acts on the first factor via the given embedding $\roi_F \hookrightarrow R$ and on the second via its Galois conjugate. We let $T_{kk}(R)$ be the $R$-dual of $V_{kk}(R)$. These give local systems on $Y_{F}^*(U)$ and $Y_F(U)$ for sufficiently small levels $U$.
The linear functional dual to the second basis vector of $R^2$ defines a $\Gamma^*_{F, 1}(\n p^r)$-invariant linear functional on $\operatorname{Sym}^k (R/p^r)^2$, or on $\left(\operatorname{Sym}^k (R/p^r)^2\right)^\sigma$, and hence an invariant vector in $T_{kk}(R/p^r)$. This can be seen as a section of the corresponding local system, defining a class
\[
e_{F, k, r} \in \h^0\left( Y_{F, 1}^*(\n p^r), T_{kk}(R / p^r)\right).
\]
Since $R$ is $p$-adically complete, cup-product with this section defines a ``moment'' map
\[
\operatorname{mom}^{kk}: \varprojlim_t \h^\bullet(Y_{F, 1}^*(\n p^t), \Z) \otimes R \to \h^\bullet(Y_{F, 1}^*(\n), T_{kk}(R)).
\]
This is the Betti cohomology analogue of the moment maps in \'etale cohomology of modular curves considered in \cite[\S 4]{KLZ17}.
By Lemma \ref{lemma:level-compat}, the family of classes $\left( \subc\Phi_{\n p^t, a}^{r}\right)_{t \ge 0}$ is compatible under pushforward, so it is a valid input to the maps $\operatorname{mom}^{kk}$ (after base-extending from $R$ to the group ring $R[(\Z/p^r)^\times]$).
\begin{definition}
We let $\subc \Phi_{\n,a}^{k, r} \in \h^2\Big(Y_{F, 1}^*(\n),T_{kk}(R)\Big) \otimes_{R}R[(\Z/p^r)^\times]$ be the image of the compatible system
\(\left( \subc\Phi_{\n p^t, a}^{r}\right)_{t \ge 0}\)
under $\operatorname{mom}^{kk}$.
\end{definition}
The action of the Hecke operator $(U_p)_*$ is well-defined both on $\h^2\big(Y_{F, 1}^*(\n),T_{kk}(R)\big)$ and on the inverse limit $\varprojlim_t \h^2\big(Y_{F, 1}^*(\n p^t), \Zp\big)$, and the maps $\operatorname{mom}^{kk}$ commute with this operator (cf.~\cite[Remark 4.5.3]{KLZ17}). So we deduce immediately from Theorem \ref{norm relation} that the classes $\subc \Phi_{\n,a}^{k, r}$, for any fixed $k \ge 1$ and varying $r$, satisfy the same norm-compatibility relation as the $k = 0$ classes.
\subsection{Relation to the weight $2k$ Eisenstein class}
We will later relate the $\subc \Phi_{\n,a}^{k, r}$ to values of $L$-functions. For this purpose the definition above, via a $p$-adic limiting process, is inconvenient; so we now give an alternative description of the same classes via higher-weight Eisenstein series for $\GLt / \Q$, directly generalising the classes obtained in weight 2 from realisations of Siegel units.
Let $k \geq 0$. The local system $T_{k}(\C)$ is exactly the flat sections of a vector bundle $T_{k, \mathrm{dR}}$ with respect to a connection $\nabla$ (the Gauss--Manin connection). The vector bundle $T_{k, \mathrm{dR}}$ is algebraic over $\Q$, and there is a comparison isomorphism
\begin{equation}
\label{eq:dRcomparison}
\h^1(Y_{\Q, 1}(N), T_{k}(\Q)) \otimes \C \cong \h^1_{\mathrm{dR}}\left(Y_{\Q, 1}(N), T_{k, \mathrm{dR}}\right)\otimes_{\Q} \C.
\end{equation}
Moreover, the pullback of $T_{k, \mathrm{dR}}$ to the upper half-plane is the $k$-th symmetric tensor power of the relative de Rham cohomology of $\C / (\Z\tau + \Z)$, so it has a canonical section $(\mathrm{d}w)^{\otimes k}$, where $w$ is a coordinate on $\C$.
\begin{proposition}
There exists a class $\operatorname{Eis}^k_N \in \h^1(Y_{\Q, 1}(N), T_k(\Q))$ whose image under the comparison isomorphism \eqref{eq:dRcomparison} is the class of the differential form
\[
-N^k F^{(k+2)}_{1/N}(\tau)\, \mathrm{d}w^{\otimes k} \,\mathrm{d}\tau.
\]
\end{proposition}
\begin{proof}
By work of Beilinson--Levin \cite{beilinsonlevin94}, there exists a \emph{motivic Eisenstein class}
\[
\operatorname{Eis}^k_{\mathrm{mot}, N} \in \h^1_{\mathrm{mot}}\big(Y_{\Q, 1}(N), T_{k, \mathrm{mot}}(k+1)\big),
\]
where $T_{k, \mathrm{mot}}$ is a relative Chow motive over $Y_1(N)$ with coefficients in $\Q$ (cut out by a suitable idempotent inside the relative motive of the $k$-fold fibre product of the universal elliptic curve over $Y_1(N)$).
This motivic cohomology group admits a realisation map $r_{\mathrm{dR}}$ to $\h^1_{\mathrm{dR}}(Y_{\Q, 1}(N), T_{k, \mathrm{dR}})$, and $\operatorname{Eis}^k_{\mathrm{dR}, N} \coloneqq r_{\mathrm{dR}}(\operatorname{Eis}^k_{\mathrm{mot}, N})$ is given by the class of the differential form
\[
-N^k (2\pi i)^{k + 1}F^{(k+2)}_{1/N}(\tau)\, \mathrm{d}w^{\otimes k} \,\mathrm{d}\tau.
\]
(This is a restatement of a result of \cite{beilinsonlevin94}, but we refer to \cite[Theorem 4.3.3]{KLZ1} for the statement in this particular form). There is also a Betti realisation map
\[ r_B: \h^1_{\mathrm{mot}}\big(Y_{\Q, 1}(N), T_{k, \mathrm{mot}}(k+1)\big) \to \h^1(Y_{\Q, 1}(N), T_k(\Q)(k+1)),
\]
where the twist $(k+1)$ in Betti cohomology denotes tensor product with $\Q(k+1) = (2 \pi i)^{k+1} \Q \subset \C$; and these two realisations are compatible under the comparison isomorphism \eqref{eq:dRcomparison} (see \cite[\S 2.2]{KLZ1}). Identifying $\Q(k+1)$ with $\Q$ in the obvious manner, we obtain a class $\operatorname{Eis}^k_{N} \in \h^1(Y_{\Q, 1}(N), T_k(\Q))$ whose image under the comparison isomorphism \eqref{eq:dRcomparison} is $(2\pi i)^{-(k+1)}\operatorname{Eis}^k_{\mathrm{dR}, N}$, as required.
\end{proof}
Via base-extension, we can consider $\operatorname{Eis}^k_N$ as a class in $\h^1(Y_{\Q, 1}(N), T_k(\Qp))$. This class does not generally lie in the lattice $\h^1(Y_{\Q, 1}(N), T_k(\Zp))$; but for any $c > 1$ as above, there exists a class $\subc\operatorname{Eis}^k_N \in \h^1(Y_{\Q, 1}(N), T_{k}(\Zp))$ such that the equality
\[
{}_c \operatorname{Eis}^k_N = \left(c^2 - c^{-k} \langle c \rangle\right) \operatorname{Eis}^k_N
\]
holds in $\h^1(Y_{\Q, 1}(N), T_{k}(\Qp))$. (This follows from Kings' theory of $p$-adic interpolation of Eisenstein classes \cite{kings15}; see \cite[Theorem 4.4.4]{KLZ17}.)
Letting $R$ be as in the previous section, for any $j \in \{0, \dots, k\}$ we can regard $T_{2k-2j}(R)$ as a $\SLt(\Z)$-invariant submodule of the $\SLt(\roi_F)$-module $T_{kk}(R)$, via the \emph{Clebsch--Gordan map}
\begin{equation}
\label{eq:CG}
\operatorname{CG}^{[k, k, j]}: T_{2k-2j}(R) \to T_{kk}(R),
\end{equation}
(normalised as in \cite[\S 5.1]{KLZ1}). Thus we obtain a map
\[
(\iota_{m, \n, a})_* \circ \operatorname{CG}^{[k, k, j]}: \h^1\big(Y_{\Q}(m, mN), T_{2k-2j}(\Zp)\big) \otimes_{\Zp} R \to \h^2\big(Y_{F}^*(m, m\n), T_{kk}(R)\big).
\]
\begin{definition}
Let $\subc \Xi_{m, \n, a}^{k, j} \in \h^2\big(Y_{F, 1}^*(\n), T_{kk}(R)\big)$ be the image of $(\iota_{m, \n, a})_* \operatorname{CG}^{[k, k, j]}\left(\subc\operatorname{Eis}^{2k-2j}_{m N}\right)$ under restriction to the identity component followed by $(s_m)_*$. We similarly write $\Xi_{m, \n, a}^{k, j}$ (without $c$) for the analogous element with $L$-coefficients, defined using $\operatorname{Eis}^{2k-2j}_{m N}$.
\end{definition}
This definition is convenient for $p$-adic interpolation, but to relate this element to special values it is convenient to have an alternative description involving pushforward along the map $\kappa_{a/m}: Y_{F, 1}^*(m^2\n) \to Y_{F, 1}^*(\n)$, as above. (Note that if $p \mid m$, this pushforward map only acts on cohomology with coefficients in $T_{kk}(L)$, not $T_{kk}(R)$, since it corresponds to the action of a matrix whose entries are not $p$-adically integral.)
\begin{lemma}
\label{Z-and-Xi2}
As elements of $\h^2\big(Y_{F, 1}^*(\n), T_{kk}(L)\big)$ we have
\[ \Xi_{m, \n, a}^{k, j} = m^{j} \cdot (\kappa_{a/m})_*\left( \iota_*\mathrm{CG}^{[k,k,j]}(\operatorname{Eis}^{2k-2j}_{m^2 N})\right).\]
\end{lemma}
\begin{proof}
This follows from the definition of $\subc \Xi_{m, \n, a}^{k, j}$ above in exactly the same way as Proposition \ref{Z-and-Xi} (which is the case $j=k=0$), noting that the Clebsch--Gordan maps at levels $Y^*_{F}(m, m\n)$ and $Y_{F, 1}^*(m^2 \n)$ differ by the factor $m^j$; compare the proof of \cite[Theorem 5.4.1]{KLZ17}.
\end{proof}
\begin{proposition}
For any $r \ge 0$ we have
\begin{align*}
{}_c \Phi^{k, r}_{\n, a} &=
\sum_{t \in (\Z/p^r)^\times} {}_c \Xi_{p^r, \n, at}^{k, 0} \otimes [t] \\
&= \left(c^2 - c^{-2k} [c]^2 \langle c \rangle\right)\cdot \sum_{t \in (\Z/p^r)^\times} \Xi_{p^r, \n, at}^{k, 0} \otimes [t],
\end{align*}
where the first equality takes place in $\h^2(Y_{F, 1}^*(\n), T_{kk}(R)) \otimes_R R[(\Z / p^r)^\times]$ and the second after base-extension to $L$.
\end{proposition}
\begin{proof}
This proposition is very close to \cite[Proposition 5.1.2]{KLZ17} so we only briefly sketch the proof. There is a $\GLt / \Q$ moment map $\mom^{k}$ for any $k \ge 0$, and one sees easily that the maps $\mom^k$ and $\mom^{kk}$ are compatible via the inclusion $Y_{\Q, 1}(N) \hookrightarrow Y^*_{F, 1}(\n)$. However, the main theorem of \cite{kings15} shows that the higher-weight Eisenstein classes are exactly the moments of the family of Siegel-unit classes $\subc C_{Np^\infty}$, up to a factor depending on $c$.
\end{proof}
There is an analogous statement for $j \ne 0$, but this can only be formulated after reduction modulo $p^r$:
\begin{proposition}\label{prop:0-to-j}
For $r \ge 1$, as classes in $\h^2(Y_{F, 1}^*(\n), T_{kk}(R/p^r))$ we have
\[ {}_c \Xi_{p^r, \n, a}^{k, j} = (a - a^\sigma)^j j!\binom{k}{j}^2 {}_c \Xi_{p^r, \n, a}^{k, 0}.\]
\end{proposition}
\begin{proof}
The proof of this proposition is identical to the corresponding statement for \'etale cohomology of Hilbert modular varieties, which is Corollary 8.1.5 of \cite{LLZ16}.
\end{proof}
\section{The $p$-adic Asai $L$-function}
\label{p-adic L-function}
In this short section, we put together the norm-compatibility and $p$-adic interpolation relations proved above in order to define a measure on $\Zp^\times$ with values in a suitable eigenspace of the Betti $\h^2$. This will be our $p$-adic $L$-function.
To ease the notation, we will assume for the rest of the paper that $p$ is odd. Similar arguments -- with some additional care -- should also hold for $p=2$, but we leave this case to the interested reader.
\subsection{Constructing the measure}
As in \S5, let $L$ be a finite extension of $\Qp$, with a chosen embedding of $F$ into $L$, and write $R$ for the ring of integers in $L$. In previous sections, we defined the elements
\[
\subc \Phi_{\n,a}^{k, r} = \sum_{t\in(\Z/p^r)^\times} \subc \Xi^{k, 0}_{p^r,\n,at}\otimes[t] \in \h^2\big(Y_{F, 1}^*(\n),T_{kk}(R)\big)\otimes R[(\Z/p^r)^\times],
\]
for $k \ge 0$ and $r \geq 0$. We also showed that if $a$ is a generator of $\roi_F/(p\roi_F + \Z)$, then under the natural projection maps in the second factor, we have
\[
\pi_{r+1}(\subc \Phi_{\n,a}^{k, r+1}) =(U_p)_*\left(\subc \Phi_{\n,a}^{k, r}\right) \quad \text{for $r \ge 1$}.
\]
\begin{definition}
Let us write
\begin{align*}
\mathcal{L}_k(\n, R) &\defeq \h^2(Y_{F, 1}^*(\n),T_{kk}(R)) / (\text{torsion}),\\
\text{and}\quad \mathcal{L}_k^{\mathrm{ord}}(\n, R) &\defeq e_{\mathrm{ord},*}\mathcal{L}_k(\n, R),
\end{align*}
where $e_{\mathrm{ord},*} \defeq \lim_{n\rightarrow\infty}(U_p)_*^{n!}$ is the ordinary projector.
\end{definition}
Clearly $e_{\mathrm{ord},*}\h^2(Y_{F, 1}^*(\n),T_{kk}(R))$ is an $R$-direct-summand of $\h^2(Y_{F, 1}^*(\n),T_{kk}(R))$, which is a finitely-generated $R$-module, since $Y_{F, 1}^*(\n)$ is homotopy-equivalent to a finite simplicial complex. On this direct summand, $(U_p)_*$ is invertible, so we may make the following definition:
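The limit defining $e_{\mathrm{ord},*}$ can be made concrete in a toy finite-rank setting. The following sketch (a hypothetical illustration with an invented $2\times 2$ matrix, not the cohomological operator itself) computes $\lim_n U^{n!}$ modulo a power of $p$ for an integer matrix with one unit eigenvalue, and checks that the result is an idempotent commuting with $U$.

```python
# Toy model of the ordinary projector e_ord = lim_n (U_p)_*^{n!},
# computed modulo p^N by exact integer arithmetic.
# Hypothetical example: U has eigenvalues 2 (a p-adic unit) and p = 5.

p, N = 5, 6
M = p ** N
U = [[2, 1], [0, 5]]

def matmul_mod(A, B, m):
    """2x2 matrix product with entries reduced mod m."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % m
             for j in range(2)] for i in range(2)]

def matpow_mod(A, e, m):
    """Fast exponentiation A^e mod m."""
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = matmul_mod(R, A, m)
        A = matmul_mod(A, A, m)
        e >>= 1
    return R

# n! is eventually divisible by (p-1)p^(N-1), so unit eigenvalues are
# raised to powers tending p-adically to 1, while non-unit eigenvalues
# are killed; n = 25 already suffices modulo 5^6.
fact = 1
for n in range(2, 26):
    fact *= n
E = matpow_mod(U, fact, M)

# The limit is idempotent, and U acts invertibly (here by the unit 2) on its image.
assert matmul_mod(E, E, M) == E
print(E)
```

In this toy picture the invertibility of $(U_p)_*$ on the ordinary part is visible directly: on the image of $E$ the matrix acts by a $p$-adic unit.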
\begin{definition}
Define
\[
\subc \Phi_{\n, a}^{k, \infty} \defeq \big[(U_p)_*^{-r} e_{\mathrm{ord},*}(\subc \Phi_{\n,a}^{k, r})\big]_{r\geq 1} \in \mathcal{L}_k^{\mathrm{ord}}(\n, R) \otimes_R R[[\Zp^\times]],
\]
where $R[[\Zp^\times]] = \varprojlim_r R[(\Z / p^r)^\times]$ is the Iwasawa algebra of $\Zp^\times$ with $R$-coefficients.
\end{definition}
We can interpret $R[[\Zp^\times]]$ as the dual space of the space of continuous $R$-valued functions on $\Zp^\times$. For $\mu \in R[[\Zp^\times]]$ and $f$ a continuous function, we write this pairing as $(\mu, f) \mapsto \int_{\Zp^\times} f(x) \mathrm{d}\mu(x)$.
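At finite level this pairing is just a finite sum: an element of $R[(\Z/p^r)^\times]$ is a formal combination $\mu = \sum_t c_t [t]$, and $\int f\,\mathrm{d}\mu = \sum_t c_t f(t)$. A minimal sketch, with invented coefficients purely to fix conventions:

```python
# Integration against an element of the finite group algebra R[(Z/p^r)^x]:
#   mu = sum_t c_t [t]   |-->   int f dmu = sum_t c_t f(t).
# The coefficients c_t below are invented for illustration.

p, r = 5, 2
units = [t for t in range(p ** r) if t % p != 0]

mu = {t: t * t + 1 for t in units}   # hypothetical coefficients c_t

def integrate(mu, f):
    """Pair the group-algebra element mu with a function f on (Z/p^r)^x."""
    return sum(c * f(t) for t, c in mu.items())

# Integrating the characteristic function of a single unit picks out c_t:
assert integrate(mu, lambda x: 1 if x == 7 else 0) == mu[7]
# Integrating x |-> x^j gives the j-th moment of mu:
moment1 = integrate(mu, lambda x: x)
print(moment1)
```

Passing to the inverse limit over $r$ turns these finite sums into the integral of a continuous function against a measure on $\Zp^\times$.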
\begin{proposition}
\label{prop:interpPhi}
For $j$ an integer with $0 \le j \le k$, and $\chi: \Zp^\times \to \Cp^\times$ a finite-order character of conductor $p^r$ with $r \ge 1$, we have
\[
\int_{\Zp^\times} x^j \chi(x)\, \mathrm{d}\,\subc \Phi_{\n, a}^{k, \infty}(x) = \frac{1}{(a - a^\sigma)^j j!\binom{k}{j}^2} (U_p)_*^{-r} e_{\mathrm{ord},*} \sum_{t \in (\Z/p^r)^\times}\chi(t) \subc \Xi_{p^r,\n, at}^{k, j}
\]
as elements of $L(\chi) \otimes_R \mathcal{L}_k^{\mathrm{ord}}(\n, R)$. For $\chi$ trivial we have
\[
\int_{\Zp^\times} x^j \, \mathrm{d}\,\subc \Phi_{\n, a}^{k, \infty}(x) = \frac{1}{(a - a^\sigma)^j j!\binom{k}{j}^2}(1 - p^j(U_p)_*^{-1}) e_{\mathrm{ord},*} \subc \Xi_{1, \n, a}^{k, j}.
\]
\end{proposition}
\begin{proof}
For $j = 0$ this is immediate from the definition of $\subc \Phi_{\n, a}^{k, \infty}$ (with the Euler factor in the case of trivial $\chi$ arising from the fact that the norm of $(U_p)_*^{-1}\, \subc \Phi_{\n, a}^{k, 1}$ is not $\subc \Phi_{\n, a}^{k, 0}$ but $(1 - (U_p)_*^{-1})\, \subc\Phi_{\n, a}^{k, 0}$, by the base case of Theorem \ref{norm relation}).
The case $j \ge 1$ is more involved. It suffices to show the equality modulo $p^h$ for arbitrarily large $h$. Modulo $p^h$ with $h \ge r$, we have
\begin{align*}
(a - a^\sigma)^j j!\binom{k}{j}^2 \int_{\Zp^\times} x^j \chi(x)\, \mathrm{d}\,\subc \Phi_{\n, a}^{k, \infty}(x) & \\
=(a - a^\sigma)^j j!\binom{k}{j}^2 (U_p)_*^{-h} e_{\mathrm{ord},*} \sum_{t \in (\Z / p^h)^\times} t^j \chi(t) \subc \Xi_{p^h, \n, at}^{k, 0} & \qquad\text{(definition of $\subc \Phi_{\n, a}^{k, \infty}$)}\\
= (U_p)_*^{-h} e_{\mathrm{ord},*} \sum_{t \in (\Z / p^h)^\times}\chi(t) \subc \Xi_{p^h, \n, at}^{k, j} & \qquad\text{(Proposition \ref{prop:0-to-j})} \\
= (U_p)_*^{-h}e_{\mathrm{ord},*} \sum_{t \in (\Z / p^r)^\times} \chi(t) \Bigg( \sum_{\substack{s \in (\Z/p^h)^\times \\ s = t \bmod p^r}} \subc \Xi_{p^h, \n, as}^{k, j}\Bigg).
\end{align*}
The bracketed term is $(U_p)_*^{h-r} \subc \Xi_{p^r, \n, at}^{k, j}$ if $r \ge 1$, while for $r = 0$ it is $(U_p)_*^{h}(1 - p^j (U_p)_*^{-1}) \subc \Xi_{1, \n, a}^{k, j}$, by the same argument as the proof of Theorem \ref{norm relation}.
\end{proof}
Now suppose $\Psi$ is a Bianchi modular eigenform of parallel weight $(k, k)$ and level $U_{F,1}(\n)$. Recall that if $E$ is the extension of $F$ generated by the Hecke eigenvalues of $\Psi$, and $\mathscr{P}$ a prime of $E$ above $p$, we defined in \S \ref{sect:modsymb} an element
\[
\phi_\Psi^* = \jmath^*\left(\omega_{\Psi}/\Omega_{\Psi}\right) \in \h^1_{\mathrm{c}}( Y_{F, 1}^*(\n), V_{kk}(E)),
\]
well-defined up to elements of $E^\times$ that are units at $\mathscr{P}$. Enlarging $L$ if necessary, we fix an embedding $E_{\mathscr{P}} \hookrightarrow L$, and regard $\phi^*_\Psi$ as an element of $\h^1_{\mathrm{c}}(Y_{F, 1}^*(\n), V_{kk}(R))$, well-defined modulo $R^\times$.
\begin{assumption}
We shall assume that the Bianchi modular eigenform $\Psi$ is \emph{ordinary} with respect to this embedding, i.e.~that the $(U_p)^*$-eigenvalue of $\Psi$ lies in $R^\times$.
\end{assumption}
Since the adjoint of $(U_p)_*$ is $(U_p)^*$, this assumption implies that the linear functional on $\mathcal{L}_k(\n, R)$ given by pairing with $\phi^*_\Psi$ factors through projection to the $(U_p)_*$-ordinary part $\mathcal{L}_k^{\mathrm{ord}}(\n, R)$.
We also need to fix a value of $a$, which must generate the quotient $\frac{\roi_F \otimes \Zp}{\Zp}$. It suffices to take $a = \tfrac{1 + \sqrt{-D}}{2}$ if $D = -1 \bmod 4$, and $a = \tfrac{\sqrt{-D}}{2}$ if $D = 0 \bmod 4$; then we have $\roi_F = \Z + \Z a$, and $a - a^\sigma = \sqrt{-D}$.
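As a quick sanity check on this choice of $a$ (a throwaway numerical verification, not part of the argument), the following sketch confirms $a - a^\sigma = \sqrt{-D}$ in both cases, taking $\sigma$ to be complex conjugation and $\sqrt{-D} = i\sqrt{D}$ the principal branch.

```python
# Numerical check that a - a^sigma = sqrt(-D) for the two cases above,
# using complex arithmetic with the principal branch sqrt(-D) = i*sqrt(D).
import cmath

def a_and_conjugate(D):
    s = cmath.sqrt(-D)           # = i * sqrt(D)
    if D % 4 == 3:               # the case D = -1 mod 4
        a = (1 + s) / 2
    elif D % 4 == 0:             # the case D = 0 mod 4
        a = s / 2
    else:
        raise ValueError("D not covered by the two cases above")
    return a, a.conjugate()      # sigma acts as complex conjugation

for D in (3, 4, 7, 8, 11):
    a, asig = a_and_conjugate(D)
    assert abs((a - asig) - cmath.sqrt(-D)) < 1e-12
```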
\begin{definition}\label{def:p-adic l-function}
Define the \emph{$p$-adic Asai $L$-function} $\subc L_p^{\mathrm{As}}(\Psi) \in R[[\Zp^\times]]$ to be
\[\subc L_p^{\mathrm{As}}(\Psi) \defeq \big\langle \phi^*_\Psi, \subc \Phi_{\n,a}^{k, \infty}\big\rangle, \]
where $\langle-, -\rangle$ denotes the (perfect) Poincar\'e duality pairing
\begin{equation}\label{eqn:pairing}
\h^1_{\mathrm{c}}(Y_{F, 1}^*(\n), V_{kk}(R)) \times
\frac{\h^2(Y_{F, 1}^*(\n), T_{kk}(R))}{(\mathrm{torsion})} \longrightarrow R.
\end{equation}
\end{definition}
\begin{remark-numbered}
If we relax the assumption that $\Psi$ be ordinary, and let $h = v_p(c(p\roi_F, \Psi))$ (where the valuation is normalised such that $v_p(p) = 1$), then we can still make sense of $\subc L_p^{\mathrm{As}}(\Psi)$ as long as $h < 1$; however, it is no longer a measure, but a distribution of order $h$. This can be extended to $h < 1 + k$ using the same techniques as in \cite{LZ16}. However, if $k = 0$ and $h \ge 1$ (as in the case of an eigenform associated to an elliptic curve supersingular at the primes above $p$) then we are stuck.
\end{remark-numbered}
\begin{proposition}
The class $\subc L_p^{\mathrm{As}}(\Psi)$ is invariant under translation by $[-1] \in \Zp^\times$.
\end{proposition}
\begin{proof} This follows from Proposition \ref{prop:xi-sign}, since $\smallmatrd{-1}{}{}{1}_*$ acts trivially on $\omega_{\Psi}$ (and thus on $\phi^*_{\Psi}$).
\end{proof}
If we interpret $R[[\Zp^\times]]$ as the algebra of $R$-valued rigid-analytic functions on the ``weight space'' $\mathcal{W} = \Hom(\Zp^\times, \Cp^\times)$ parametrising characters of $\Zp^\times$, then this proposition shows that $\subc L_p^{\mathrm{As}}(\Psi)$ vanishes identically on the subspace $\mathcal{W}^- \subset \mathcal{W}$ parametrising odd characters.
We close this section by giving notation that will be useful when stating the interpolation properties of $L_p^{\mathrm{As}}(\Psi)$.
\begin{notation}
Let $\chi$ be a Dirichlet character of conductor $p^r$ for some $r\geq 0$, and let $j$ be any integer. We write
\[
\subc L_p^{\mathrm{As}}(\Psi,\chi,j) \defeq \int_{\Zp^\times}\chi(x)x^j\, \mathrm{d}\, \subc L_p^{\mathrm{As}}(\Psi)(x).
\]
\end{notation}
\subsection{Getting rid of $c$}\label{sec:getting rid of c}
\begin{proposition}
Suppose that the nebentypus character $\varepsilon_\Psi: (\roi_F / \n)^\times \to R^\times$ of $\Psi$ has non-trivial restriction to $(\Z / N\Z)^\times$, and moreover this restriction does not have $p$-power conductor. Then there exists a measure $L_p^{\mathrm{As}}(\Psi) \in L \otimes_R R[[\Zp^\times]]$ such that
\[ {}_c L_p^{\mathrm{As}}(\Psi) = (c^2 - c^{-2k} \varepsilon_\Psi(c) [c]^2) L_p^{\mathrm{As}}(\Psi)\]
for all valid integers $c$.
\end{proposition}
\begin{proof}
Some bookkeeping starting from \eqref{eq:cdsymmetry} shows that if $c, d$ are two integers $> 1$, both coprime to $6Np$, then the element
\[ (d^2 - d^{-2k}[d]^2 \varepsilon_{\Psi}(d)) \cdot {}_c L_p^{\mathrm{As}}(\Psi) \]
is symmetric in $c$ and $d$. Moreover, since $\varepsilon_{\Psi}$ does not have $p$-power conductor, we can choose $d$ such that $(d^2 - d^{-2k}[d]^2 \varepsilon_{\Psi}(d))$ is a unit in $L \otimes_R R[[\Zp^\times]]$. So if we define
\[ L_p^{\mathrm{As}}(\Psi) = (d^2 - d^{-2k} \varepsilon_\Psi(d) [d]^2)^{-1} {}_d L_p^{\mathrm{As}}(\Psi), \]
then this is independent of the choice of $d$ and it has the required properties.
\end{proof}
If the restriction $\varepsilon_{\Psi, \Q}$ of $\varepsilon_\Psi$ has $p$-power conductor, then the quotient $L_p^{\mathrm{As}}(\Psi)$ is well-defined as an element of the fraction ring of $R[[\Zp^\times]]$, i.e.~as a meromorphic function on $\mathcal{W}$ with coefficients in $L$. (We shall refer to such elements as \emph{pseudo-measures}.) The only points of $\mathcal{W}$ at which $L_p^{\mathrm{As}}(\Psi)$ may have poles are those corresponding to characters of the form $z \mapsto z^{k+1} \nu(z)$, where $\nu^2 = \varepsilon_{\Psi, \Q}^{-1}$.
\begin{remark-numbered}
Note that if $p = 1 \bmod 4$ and $\varepsilon_{\Psi, \Q}(\rho) = (-1)^k$, where $\rho$ is either of the square roots of $-1$ in $\Zp$, then both of the characters at which $L_p^{\mathrm{As}}$ could have a pole actually lie in $\mathcal{W}^-$, so we see immediately that $L_p^{\mathrm{As}}$ is a measure.
In the remaining cases, where one or both potential poles are in $\mathcal{W}^+$, we suspect that these potential poles are genuine poles if and only if the corresponding complex-analytic Asai $L$-functions have poles (which can only occur if $\Psi$ is either of CM type, or a twist of a base-change form). However, we have not proved this.
\end{remark-numbered}
\section{Interpolation of critical $L$-values}
\label{interpolation}
In this section, we want to show that the values of the Asai $p$-adic $L$-function at suitable locally-algebraic characters are equal to special values of the complex $L$-function.
\subsection{Automorphic forms for $G^*$}
We shall need to work with automorphic forms for the group $G^*$ of Definition \ref{def:G*} above. We refer the reader to \cite{LLZ16} for an account of automorphic forms for the group $G^*$, and their relation to those for $G$, in the analogous setting where $F$ is a \emph{real} quadratic field.
For $U^* \subset G^*(\hat\Z)$, we let $S_{kk}(U^*)$ denote the space of automorphic forms for $G^*$ of level $U^*$ and weight $(k, k)$. These are defined in the same way as for $G$; that is, they are functions
\[ G^*(\Q)_+ \backslash G^*(\A_\Q) / U^* \to V_{2k+2}(\C)\]
transforming appropriately under $\R_{>0} \cdot \SUt(\C)$, and with suitable harmonicity and growth conditions. If $U^* = U \cap G^*$ for an open compact subgroup $U$ of $G(\A_\Q^f)$, then there is a natural pullback map $\jmath^*: S_{kk}(U) \to S_{kk}(U^*)$.
Any $\f \in S_{kk}(U^*_{F, 1}(\n))$ is uniquely determined by its restriction to $G^*(\R)$, since $Y^*_{F, 1}(\n)$ is connected. This restriction can be described by a Fourier--Whittaker expansion of the form
\[
\f\left(\matrd{y_\infty}{x_\infty}{0}{1}\right) = |y_\infty| \sum_{\zeta \in F^\times} W_f(\zeta, \f) W_{\infty}(\zeta y_\infty) e_F(\zeta x_\infty),
\]
where $W_f(-, \f)$ is a function on $F^\times$ (supported in $\mathcal{D}^{-1}$). Of course, if $\f = \jmath^*(\Psi)$ for some $\Psi \in S_{kk}(U_{F, 1}(\n))$, then $W_f(-, \f)$ is simply the restriction of $W_f(-, \Psi)$ to $F^\times \subset (\A_F^f)^\times$.
\begin{remark-numbered}
The theory of automorphic representations of $G^*$ is more complicated than that of $G$: not all cuspidal representations are globally generic, and the naive formulation of strong multiplicity one is false, due to the presence of non-trivial global $L$-packets. In practice, this means that although cusp forms for $G^*$ do have Fourier--Whittaker expansions, one cannot necessarily recover all of their Fourier--Whittaker coefficients from the action of the Hecke algebra of $G^*(\A_\Q^f)$. However, this will not concern us here, since we will only consider automorphic forms for $G^*$ which are restrictions of eigenforms for $G$, or twists of these.
\end{remark-numbered}
\begin{lemma}
Let $\f \in S_{kk}(U_{F, 1}^*(\n))$, $\chi$ a Dirichlet character of conductor $m$, and $a \in \roi_F / m \roi_F$. Then the function
\[ R_{a, \chi} \f = \sum_{t \in (\Z/m)^\times} \chi(t) \kappa_{at/m}^*(\f) \]
is in $S_{kk}(U_{F, 1}^*(m^2 \n))$, and its Fourier--Whittaker coefficients for $\zeta \in \mathcal{D}^{-1}$ are given by
\[
W_f(\zeta, R_{a, \chi} \f) = G(\chi)\, \bar\chi(\operatorname{tr}_{F/\Q} a\zeta)\, W_f(\zeta, \f),
\]
where $G(\chi) \defeq \sum_{t \in (\Z/m)^\times} \chi(t) e^{2\pi i t/m}$ is the Gauss sum of $\chi$.
\end{lemma}
\begin{proof}
It is clear that $R_{a, \chi}\f$ has level $U_{F, 1}^*(m^2 \n)$, since each term in the sum is invariant under $U_{F, 1}^*(m^2 \n)$. It remains to compute its Fourier--Whittaker coefficients. We have
\begin{align*}
W_f(\zeta, R_{a, \chi} \f) &= W_f(\zeta, \f) \sum_{t \in (\Z/m)^\times} \chi(t) e_F(\zeta a t / m) \\
&= W_f(\zeta, \f) \sum_{t \in (\Z/m)^\times} \chi(t) e^{2\pi i t \operatorname{tr}(a\zeta) / m}.
\end{align*}
This is 0 unless the integer $\operatorname{tr}(a\zeta)$ is a unit modulo $m$, in which case it is $\chi(\operatorname{tr}(a\zeta))^{-1} G(\chi)$, as required.
\end{proof}
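The final step of the proof is the classical twisted Gauss-sum evaluation $\sum_{t \in (\Z/m)^\times} \chi(t) e^{2\pi i t n/m} = \bar\chi(n)\, G(\chi)$ for $\gcd(n, m) = 1$. A quick numerical spot-check, for an arbitrarily chosen order-4 character modulo 5 (purely illustrative):

```python
# Spot-check: sum_{t in (Z/m)^x} chi(t) e^{2 pi i t n / m} = conj(chi(n)) * G(chi)
# for gcd(n, m) = 1.  We take m = 5 and the character with chi(2) = i.
import cmath

m = 5
g = 2                                  # a generator of (Z/5)^x
log = {pow(g, k, m): k for k in range(4)}
chi = {t: 1j ** log[t] for t in log}   # chi(g^k) = i^k, an order-4 character

def e(x):
    """e(x) = exp(2 pi i x)."""
    return cmath.exp(2j * cmath.pi * x)

G = sum(chi[t] * e(t / m) for t in chi)          # Gauss sum G(chi)

for n in range(1, m):                            # all n coprime to 5
    lhs = sum(chi[t] * e(t * n / m) for t in chi)
    rhs = chi[n].conjugate() * G
    assert abs(lhs - rhs) < 1e-9
```

The identity follows from the substitution $t \mapsto t n^{-1}$, which is exactly the manipulation used in the proof above; the check also confirms $|G(\chi)| = \sqrt{m}$ for this primitive character.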
\subsection{An integral formula for the Asai $L$-function}
In this section, we describe an integral formula for the Asai $L$-function of a Bianchi eigenform twisted by a Dirichlet character $\chi$. This is a generalisation of the work of Ghate in \cite{Gha99} (who considers the case where $\chi$ is trivial), and we shall prove our theorem by reduction to his setting using the twisting maps $R_{a, \chi}$.
Let $0 \leq j \leq k$, and define
\[
I_{\Psi,b,m}^j \defeq \Big\langle \phi_{\Psi}^*, \hspace{4pt}(\kappa_{b/m})_*\iota_* \mathrm{CG}^{[k,k,j]}_* F^{(2k-2j+2)}_{1/m^2N}(\tau)\, \mathrm{d}w^{\otimes 2k-2j} \,\mathrm{d}\tau\Big\rangle,
\]
where $\langle -,-\rangle$ denotes the pairing of equation \eqref{eqn:pairing}, $\phi_{\Psi}^* = \jmath^*\phi_{\Psi}$ as before, and we view the Eisenstein class as an element of the Betti cohomology (with complex coefficients) using the standard comparison isomorphism.
\begin{theorem}
\label{thm:int formula}
Let $\chi$ be a Dirichlet character of odd conductor $m$, and let $0 \leq j \leq k$. Let $a \in \roi_F$ be the value we chose in the remarks before Definition \ref{def:p-adic l-function} (so that $a - a^\sigma = \sqrt{-D}$). Then
\[
\sum_{t\in(\Z/m\Z)^\times}\chi(t)I_{\Psi,at,m}^j =
\begin{cases}
\frac{C'(k,j)G(\chi)}{(m^2N)^{2k-2j} \Omega_{\Psi}} L^{\mathrm{As}}(\Psi,\chibar,j+1)& \text{if}\ (-1)^j\chi(-1) = 1,\\
0 & \text{if}\ (-1)^j\chi(-1) = -1,\end{cases}
\]
where
\begin{align*}
C'(k,j) = (-1)^{k+1}\frac{\sqrt{-D}^{j+1} (j!)^2\binomc{k}{j}^2}{2\cdot (2\pi i)^{j+1} N^{2k-2j}}.
\end{align*}
\end{theorem}
We begin by explaining how to reduce the theorem to the case $m = 1$. Note that the definition of the Asai $L$-function depends only on the pullback $\jmath^*\Psi$, and in fact makes sense for any $\f \in S_{kk}(U_{F, 1}^*(\n))$, whether or not it is in the image of $\jmath^*$, as long as it is an eigenvector for the operators $\langle x \rangle$ for $x \in (\Z / N\Z)^\times$. If these operators act on $\f$ via the character $\varepsilon_\f$, then we can define
\[
L^{\mathrm{As}}(\f, s) \defeq L^{(N)}(\varepsilon_{\f}, 2s-2k-2) \sum_{n \ge 1} W_f\left(n/\sqrt{-D}, \f\right) n^{-s}.
\]
One sees easily that if $\chi$ is a Dirichlet character of odd conductor, and $a$ is the value we chose above (so that $a - a^\sigma = \sqrt{-D}$), then
\[ L^{\mathrm{As}}(R_{a,\chi} \jmath^* \Psi, s) = G(\chi) \cdot L^{\mathrm{As}}(\Psi, \chibar, s).\]
\begin{proposition}
\label{prop:int formula untwisted}
Let $\f \in S_{kk}(U_{F, 1}^*(\n))$, and let $N = \n \cap \Z$. Then we have
\begin{multline*}
\Big\langle \omega_{\f}, \iota_* \mathrm{CG}^{[k,k,j]}_* F^{(2k-2j+2)}_{1/N}(\tau)\, \mathrm{d}w^{\otimes 2k-2j} \,\mathrm{d}\tau\Big\rangle \\=
\begin{cases}
\frac{C'(k,j)}{N^{2k-2j}} L^{\mathrm{As}}(\f,j+1)& \text{if}\ \smallmatrd{-1}{}{}{1}^* \f = (-1)^j \f,\\
0 & \text{if}\ \ \smallmatrd{-1}{}{}{1}^* \f = (-1)^{j+1} \f.\end{cases}
\end{multline*}
\end{proposition}
The proof of the proposition is very similar to the work of Ghate \cite{Gha99}, but our conventions are a little different, so we shall give a proof using our conventions in an appendix; see Corollary \ref{cor:int formula}.
Applying this proposition to $R_{a,\chi} \jmath^* \Psi$ and dividing by $\Omega_\Psi$ proves Theorem \ref{thm:int formula}, since $\smallmatrd{-1}{}{}{1}^*$ acts on $R_{a,\chi} \jmath^* \Psi$ as $\chi(-1)$.
\subsection{Interpolation of critical values}\label{sec:interpolation}
We now use the integral formula of Theorem \ref{thm:int formula} to relate the values of the measure $L_p^{\mathrm{As}}(\Psi)$ to critical values of the classical Asai $L$-function.
\begin{theorem}\label{thm:interpolation}
Let $p$ be an odd prime. Let $\Psi$ be an ordinary Bianchi eigenform of weight $(k,k)$ and level $U_{F,1}(\n)$, where all primes above $p$ divide $\n$, with $U_p$-eigenvalue $\lambda_p = c(p\roi_F, \Psi)$. Let $\chi$ be a Dirichlet character of conductor $p^r$, and let $0 \leq j \leq k$.
\begin{itemize}
\item[(a)] If $\chi(-1)(-1)^j = 1$, then
\[L_p^{\mathrm{As}}(\Psi, \chi, j) =
\frac{C(k,j) \mathcal{E}_p(\Psi, \chi, j) G(\chi)}{\Omega_{\Psi}} \cdot L^{\mathrm{As}}(\Psi,\chibar,j+1),\]
where
\[C(k,j) \defeq (-1)^{k+1}\frac{j!\cdot \sqrt{-D}}{2\cdot (2\pi i)^{j+1}},\quad
\mathcal{E}_p(\Psi, \chi, j) \defeq
\begin{cases}
\left( 1 - \tfrac{p^j}{\lambda_p}\right) & \text{if $r = 0$,}\\
\left( p^{j} \lambda_p^{-1}\right)^r & \text{if $r > 0$.}
\end{cases}
\]
\item[(b)]
If $\chi(-1)(-1)^j = -1$, then
\[ L_p^{\mathrm{As}}(\Psi,\chi,j) = 0.\]
\end{itemize}
\end{theorem}
\begin{remark}
Up to rescaling $\Omega_{\Psi}$, this is precisely the interpolation formula predicted by Coates--Perrin-Riou (see \S\ref{sec:coates-perrin-riou}).
\end{remark}
\begin{proof}
For convenience, let $e[r]$ denote the operator $(U_p^{-r})_*e_{\mathrm{ord},*}$ if $r \ge 1$, and $(1 - p^j (U_p^{-1})_*)e_{\mathrm{ord},*}$ if $r = 0$. By the definition of the measure and Proposition \ref{prop:interpPhi}, we have
\[
L_p^{\mathrm{As}}(\Psi,\chi,j) = \frac{1}{\sqrt{-D}^j j!\binomc{k}{j}^2}\sum_{t\in(\Z/p^{r})^\times} \chi(t)\bigg\langle \phi_{\Psi}^*, e[r] \Xi^{k,j}_{p^{r},\n,at}\bigg\rangle.
\]
We know that $(U_p)^*$ is the adjoint of $(U_p)_*$, and $\phi_{\Psi}^*$ is a $(U_p)^*$ eigenvector with unit eigenvalue $\lambda_p$; thus the adjoint of $e[r]$ acts on $\phi_\Psi^*$ as $p^{-jr} \mathcal{E}_p(\Psi, \chi, j)$, so we have
\[
L_p^{\mathrm{As}}(\Psi,\chi,j) = \frac{\mathcal{E}_p(\Psi, \chi, j)}{p^{jr} \sqrt{-D}^j j!\binomc{k}{j}^2}\sum_{t\in(\Z/p^{r})^\times} \chi(t)\bigg\langle \phi_{\Psi}^*, \Xi^{k,j}_{p^{r},\n,at}\bigg\rangle.
\]
Now, by Lemma \ref{Z-and-Xi2}, we have $\Xi^{k,j}_{p^{r},\n,at} = p^{jr} (\kappa_{at/p^r})_*\iota_* \mathrm{CG}^{[k,k,j]}_*\left(\operatorname{Eis}^{2k-2j}_{p^{2r}N}\right)$, and hence
\[
\big\langle \phi_{\Psi}^*, \Xi^{k,j}_{p^r,\n,at}\big\rangle =p^{jr} \big\langle (\mathrm{CG}^{[k,k,j]})^*\iota^*\kappa_{at/p^r}^*(\phi_{\Psi}^*), \operatorname{Eis}^{2k-2j}_{p^{2r}N}\big\rangle,
\]
where the first cup product is at the level of $\Gamma_{F,1}^*(\n)\backslash\uhs$, and the second cup product is at the level of $\Gamma_1(p^{2r}N)\backslash\uhp.$ Now work at the level of complex coefficients. We know that
\begin{align*}
\mathrm{Eis}^{2k-2j}_{p^{2r}N} &= - (p^{2r}N)^{2k-2j} F^{(2k-2j+2)}_{1/p^{2r}N}(\tau)\, \mathrm{d}w^{\otimes 2k-2j} \,\mathrm{d}\tau.
\end{align*}
Accordingly, we see that
\[
L_p^{\mathrm{As}}(\Psi,\chi,j) =
-\frac{ (p^{2r}N)^{2k-2j}\mathcal{E}_p(\Psi, \chi, j)}
{\sqrt{-D}^j j!\binomc{k}{j}^2} \sum_{t\in (\Z/p^r)^\times} \chi(t) I^j_{\Psi,at,p^r},
\]
where $I^j_{\Psi,b,m}$ is as defined in the previous section. Using Theorem \ref{thm:int formula}, we see that this expression vanishes unless $\chi(-1)(-1)^j = 1$, in which case we have
\begin{multline*}
L_p^{\mathrm{As}}(\Psi,\chi,j) =
-\frac{ \mathcal{E}_p(\Psi, \chi, j)}
{\Omega_{\Psi}\sqrt{-D}^j j!\binomc{k}{j}^2} \times C'(k,j)G(\chi)L^{\mathrm{As}}(\Psi,\chibar,j+1)\\
= \frac{C(k,j) \mathcal{E}_p(\Psi, \chi, j) G(\chi)}{\Omega_{\Psi}} \cdot L^{\mathrm{As}}(\Psi,\chibar,j+1),
\end{multline*}
which completes the proof of the theorem.
\end{proof}
As an immediate corollary, we get an identical interpolation formula for ${}_c L_p^\mathrm{As}(\Psi)$ with the additional factor $(c^2 - c^{2j-2k} \varepsilon_{\Psi}(c)\chi(c)^2)$.
\begin{remark-numbered} \
\begin{enumerate}
\item[(i)] The factor $\mathcal{E}_p(\Psi, \chi, j)$ is non-zero if $r \ge 1$. If $r = 0$ then this factor vanishes if and only if $k = 0$, $\Psi$ is new at the primes above $p$, and $\varepsilon_\Psi(p) = 1$. In this case the $p$-adic $L$-function has an exceptional zero at the trivial character. For exceptional zeroes of the \emph{standard} $p$-adic $L$-function of a Bianchi cusp form, a theory of $\mathcal{L}$-invariants was developed in \cite{BW17}; it would be interesting to investigate analogues of this for the Asai $L$-function.
\item[(ii)] The measure $L_p^{\mathrm{As}}(\Psi)$ depends on the choice of $\sqrt{-D}$ fixed at the start; indeed, this choice was used to pick a value of $a \in \roi_F$, which in turn was used to construct the Asai--Eisenstein elements. This choice is further encoded by the appearance of $\sqrt{-D} = i\sqrt{D}$ in the interpolation formula. Choosing the other square root simply scales the measure by $-1$. The measure also depends on the choice of period $\Omega_{\Psi}$, and again a different choice changes the measure up to a scalar.
\item[(iii)] If the Bianchi eigenform $\Psi$ (or, more precisely, the automorphic representation it generates) is the base-change lift of an elliptic modular eigenform $f$ of weight $k+2$ and character $\varepsilon_f$, then the complex Asai $L$-function factors as
\[ L^{\mathrm{As}}(\Psi, \chi, s) = L(\operatorname{Sym}^2 f, \chi, s) L(\chi \varepsilon_f \varepsilon_F, s - k - 1),\]
where $\varepsilon_F$ is the quadratic character associated to $F$. Note that all three $L$-functions in the above formula have critical values at integer points $s = 1 + j$ with $0 \le j \le k$ and $(-1)^j \chi(-1) = 1$. By a comparison of interpolating properties at these points, one can verify that if $f$ is ordinary at $p$, then there is a corresponding factorisation of $L_p^\mathrm{As}(\Psi)$ as a product of a shifted $p$-adic Dirichlet $L$-function and Schmidt's $p$-adic $L$-function for $\operatorname{Sym}^2 f$.
This factorisation shows, in particular, that the possibility of poles of the $p$-adic Asai $L$-function is a genuine aspect of the situation, rather than a shortcoming of our method: if $\varepsilon_f \varepsilon_F = 1$, then one of these factors is the $p$-adic Riemann zeta function $\zeta_p(s-k-1)$, which has a simple pole at $s = k + 2$. If $f$ has CM by an imaginary quadratic field $K$ (with $K \ne F$, so that $\Psi = \operatorname{BC}(f)$ is cuspidal), then there is a second abelian factor $L(\chi \varepsilon_f \varepsilon_K, s-k-1)$; this gives rise to examples where both of the zeros of the factor $c^2 - c^{-2k}\varepsilon_\Psi(c)[c]^2$ correspond to genuine poles of $L_p^{\mathrm{As}}(\Psi)$.
\end{enumerate}
\end{remark-numbered}
The discovery of the $D_{s0}^*(2317)$ by BaBar \cite{Aubert:2003fg} and the subsequent
discovery of the $D_{s1}(2460)$ more than 10 years ago revealed an unexpected
peculiarity: contrary to the expectations of potential models, these states turned
out to be narrow, lying below the $DK$ and $D^*K$ thresholds. Moreover, their
masses are roughly equal to those of their non-strange cousins, which immediately
sparked speculations about their quark content, with
popular options including both tetraquark and molecular structures.
The corresponding $J^P=0^+$ and $1^+$ states in the spectrum of $B_s$ hadrons have not been
established in experiment. Given the success of recent lattice QCD
calculations of the $D_{s0}^*(2317)$ and $D_{s1}(2460)$ \cite{Mohler:2013rwa,Lang:2014yfa}, it is
therefore interesting to see if a prediction of these positive parity $B_s$
states from lattice QCD is feasible.
\begin{table}[bh]
\begin{center}
\begin{tabular}{ccccccc}
$N_L^3\times N_T$ & $N_f$ & $a$[fm] & $L$[fm] & \#configs & $m_\pi$[MeV] & $m_K$[MeV]\\
\hline
$32^3\times64$ & 2+1 & 0.0907(13) & 2.90 & 196 & 156(7)(2) & 504(1)(7)\\
\end{tabular}
\caption{Gauge configurations used for the simulations in these proceedings.}
\label{configs}
\end{center}
\end{table}
\subsection{Lattice techniques}
For this study we use the 2+1 flavor gauge configurations with Wilson-Clover quarks generated by the
PACS-CS collaboration \cite{Aoki:2008sm}. Table \ref{configs} shows details of the ensemble used
in our simulation. Our quark sources are smeared with a Gaussian-like envelope
using the stochastic distillation technique
\cite{Morningstar:2011ka}. For the heavy b quarks we use the Fermilab
interpretation \cite{ElKhadra:1996mp}, tuning the heavy-quark hopping parameter
$\kappa_b$ such that the spin-averaged kinetic mass
$M_{\overline{B_s}}=(M_{B_s}+3M_{B_s^*})/4$ assumes its physical
value. With this tuning the energy splittings we determine are expected to be
close to physical. For technical details on the tuning of the
heavy-quark hopping parameter please refer to \cite{Lang:2014yfa,Lang:2015hza}. We work with a
partially quenched strange quark and use the $\phi$ meson and the $\eta_{s}$ to
set the strange quark mass, obtaining $\kappa_s=0.13666$ \cite{Lang:2014yfa}. Table
\ref{splittings} shows examples of mass splittings extracted with this setup. Notice
uncertainties only. Nevertheless the agreement with experiment is mostly
excellent, indicating that the remaining discretization
effects are small.
\begin{table}[tbh]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& Lattice [MeV] & Exp. [MeV] \cr
\hline
$m_{B^*}-m_B$ & 46.8(7.0)(0.7) & 45.78(35)\cr
$m_{B_{s^*}}-m_{B_s}$ & 47.1(1.5)(0.7)& $48.7^{+2.3}_{-2.1}$\cr
$m_{B_s}-m_{B}$ & 81.5(4.1)(1.2) & 87.35(23)\cr
$m_\Upsilon-m_{\eta_b}$ & 44.2(0.3)(0.6) & 62.3(3.2)\cr
$2m_{\overline{B}}-m_{\overline{\bar{b}b}}$ & 1190(11)(17) & 1182.7(1.0)\cr
$2m_{\overline{B_s}}-m_{\overline{\bar{b}b}}$ & 1353(2)(19)& 1361.7(3.4)\cr
$2m_{B_c}-m_{\eta_b}-m_{\eta_c}$ & 169.4(0.4)(2.4) & 167.3(4.9) \cr
\hline
\end{tabular}
\caption{Selected mass splittings (in MeV) of mesons involving bottom quarks compared to the values from the PDG \cite{Agashe:2014kda}. A bar denotes spin average. Errors are statistical and scale-setting only.\label{splittings}}
\end{center}
\end{table}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.47\textwidth,clip]{A1_linfit_dm_set_7_123}
\includegraphics[width=0.47\textwidth,clip]{T1_linfit_dm_134_set_2_134}
\caption{ Plots of $ap \cot\delta(p)$ vs. $(ap)^2$ for $B^{(*)}K$ scattering in $s$-wave. Circles are values from our simulation; red lines indicate the error band following
the L\"uscher curves (broken lines). The full line gives the linear fit to the points. Below threshold $|p|$ is added and the zero of the
combination indicates the bound state position in infinite volume. Displayed
uncertainties are statistical only. \label{fig:effrange}}
\end{center}
\end{figure}
For the construction of the correlation matrix used to extract the finite-volume
energies, our study takes into account both quark-antiquark and
$B^{(*)}K$ interpolator structures. The basis is similar to our study of the $D_s$ spectrum
\cite{Mohler:2013rwa,Lang:2014yfa}, where this approach allowed us to obtain
reliable energy levels for the $D_{s0}^{*}(2317)$ and $D_{s1}(2460)$. For
elastic s-wave scattering the L\"uscher relation \cite{Luscher} relating the finite volume
spectrum to the phase shift $\delta$ of the infinite volume scattering amplitude is
given by
\begin{align}
p \cot
\delta(p)&=\frac{2}{\sqrt{\pi}L}Z_{00}(1;q^2)\approx\frac{1}{a_0}+\frac{1}{2}r_0p^2\;.
\label{eq:luescher_z}
\end{align}
We perform an effective range approximation with the s-wave scattering
length $a_0$ and effective range $r_0$; the fits, together with the L\"uscher
curves, are shown in Figure \ref{fig:effrange}. The binding momentum, and hence
the infinite-volume bound-state mass, is determined from the condition
$\cot\delta(p)=i$. We obtain
\begin{align}
a_0^{BK}&=-0.85(10)\,\mathrm{fm} &a_0^{B^*K}=-0.97(16) \,\mathrm{fm}\;\,\\
r_0^{BK}&=0.03(15) \,\mathrm{fm} &r_0^{B^*K}=0.28(15) \,\mathrm{fm}\;\,\nonumber\\
M_{B_{s0}^*}&=5.711(13)\,\mathrm{GeV} &M_{B_{s1}}=5.750(17) \,\mathrm{GeV}\;\,\nonumber
\end{align}
where the uncertainty on the bound state mass is statistical only. A full uncertainty
estimate is given in Table \ref{errors} and explained in more detail in \cite{Lang:2015hza}.
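To make the last step concrete: the bound state corresponds to the momentum $p = i\kappa$ at which $p\cot\delta(p) = -\kappa$, so within the effective range approximation one solves $-\kappa = 1/a_0 - \tfrac{1}{2}r_0\kappa^2$ and evaluates $M = \sqrt{m_B^2-\kappa^2} + \sqrt{m_K^2-\kappa^2}$. The sketch below uses our central $BK$ values of $a_0$ and $r_0$ but only illustrative (roughly physical) meson masses, so the resulting number is indicative rather than a reproduction of the quoted result:

```python
# Bound-state mass from the effective range expansion:
# solve -kappa = 1/a0 - (r0/2) kappa^2  (i.e. cot(delta) = i at p = i*kappa),
# then M = sqrt(mB^2 - kappa^2) + sqrt(mK^2 - kappa^2).
# a0, r0 are our central BK values; mB, mK are illustrative masses in MeV.
import math

hbarc = 197.327          # MeV fm
a0 = -0.85               # fm  (s-wave scattering length, BK channel)
r0 = 0.03                # fm  (effective range, BK channel)
mB, mK = 5280.0, 504.0   # MeV (illustrative)

# Fixed-point iteration for kappa (in MeV); converges quickly since r0 is small.
kappa = -hbarc / a0
for _ in range(50):
    kappa = -hbarc / a0 + (r0 / (2.0 * hbarc)) * kappa ** 2

M = math.sqrt(mB ** 2 - kappa ** 2) + math.sqrt(mK ** 2 - kappa ** 2)
binding = mB + mK - M
print(f"kappa = {kappa:.0f} MeV, M = {M:.0f} MeV, binding = {binding:.0f} MeV")
```

With these inputs the state comes out several tens of MeV below the $BK$ threshold, in qualitative agreement with the $M_{B_{s0}^*}$ quoted above; the quantitative value depends on the precise masses and the full error analysis of \cite{Lang:2015hza}.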
\begin{table}[tbp]
\begin{center}
\begin{tabular}{cc}
source of uncertainty & expected size [MeV]\\
\hline
heavy-quark discretization & 12\\
finite volume effects & 8\\
unphysical Kaon, isospin \& EM & 11\\
b-quark tuning & 3\\
dispersion relation & 2\\
spin-average (experiment) & 2\\
scale uncertainty & 1\\
3 pt vs. 2 pt linear fit & 2\\
\hline
total (added in quadrature) & 19\\
\end{tabular}
\end{center}
\caption{Systematic uncertainties in the mass determination of the
below-threshold states with quantum numbers $J^P=0^+, 1^+$. The heavy-quark
discretization effects are quantified by calculating the Fermilab-method
mass mismatches and employing HQET power counting \cite{Oktay:2008ex} with
$\Lambda=700$~MeV. The finite volume uncertainties are estimated
conservatively by the difference of the lowest energy level and the complex pole position. The last line gives the effect of using only the two points near threshold for the effective range fit. The total uncertainty has been obtained by adding the single contributions in quadrature.\label{errors}}
\end{table}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.6\textwidth,clip]{Bs_summary_proceedings}
\end{center}
\caption{ Spectrum of s-wave and p-wave $B_s$ states from our simulation. The
blue states are naive energy levels, while the bound state energy of the
states in magenta results from an effective range approximation of the phase
shift data close to threshold. The black lines are the energy levels from the
PDG \cite{Agashe:2014kda}. The error bars on the blue states are
statistical only, while the errors on the magenta states show the full
(statistical plus systematic) uncertainties.}
\label{fig:finalbs}
\end{figure}
\subsection{Resulting prediction of positive parity $B_s$ mesons}
Figure \ref{fig:finalbs} shows our final results for the spectrum of s-wave and p-wave
$B_s$ states. For values of masses in MeV we quote $M=\Delta
M^{\mathrm{lat}}+M_{\overline{B_s}}^{\mathrm{exp}}$ where we substitute the experimental $B_s$ spin average in
accordance with our tuning. The states with blue symbols result from a naive determination
of the finite volume energy levels (statistical uncertainty only). Notice that
the $j={\frac{3}{2}}$ states agree well with the experimental $B_{s1}(5830)$
and $B_{s2}^*(5840)$ as determined by CDF/D0 and LHCb \cite{Agashe:2014kda}. The $B_s$ states
with magenta symbols indicate the bound state positions extracted using
L\"uscher's method and taking into account the sources of uncertainty detailed
in Table \ref{errors}. Notice that our lattice QCD calculation yields bound
states well below the $B^{(*)}K$ thresholds.
\section{$B_s \pi^+$ scattering and search for the X(5568)}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.43\textwidth,clip]{fig_Bspi_conecut.pdf}
\includegraphics[width=0.53\textwidth,clip]{LHCb_Fig2.pdf}
\end{center}
\caption{Left pane: $B_s^0\pi^\pm$ invariant mass distribution from D0 \cite{D0:2016mwd} (after
applying a cone cut). Right pane: $B_s^0\pi^\pm$ invariant mass distribution
by LHCb \cite{Aaij:2016iev} shown in black symbols with a signal
component corresponding to $\rho_x=8.6\%$ as observed by D0 shown in red.}
\label{fig:x5568_exp}
\end{figure}
Recently, the D0 collaboration has reported evidence for a peak in the $B_s\pi^+$
invariant mass not far above threshold \cite{D0:2016mwd}. This peak is
attributed to a resonance dubbed $X(5568)$, with resonance mass $m_X$ and
width $\Gamma_X$,
\begin{align}
m_X&=5567.8\pm2.9^{+0.9}_{-1.9}\;\mathrm{MeV}\;,\\
\Gamma_X&=21.9\pm6.4^{+5.0}_{-2.5}\;\mathrm{MeV}\;.\nonumber
\end{align}
\begin{wrapfigure}{r}{0.5\textwidth}
\includegraphics[width=0.45\textwidth,clip]{fig_analytic}
\caption{Analytic predictions for energies $E(L)$ of eigenstates as a
function of lattice size $L$.}
\label{fig:analytic}
\end{wrapfigure}
Decay of this resonance into $B_s\pi^+$ implies an exotic flavor structure
with the minimal quark content $\bar b s \bar d u$. Most model studies which
accommodate an X(5568) propose spin-parity quantum numbers $J^P=0^+$. Shortly
after D0 reported their results, the LHCb collaboration investigated the cross-section
as a function of the $B_s\pi^+$ invariant mass with increased statistics and did
not find any peak in the same region \cite{Aaij:2016iev}. Figure
\ref{fig:x5568_exp} shows both the plot from D0 (left pane) and the data from
LHCb (right pane), where the red shaded region illustrates the signal
expectation given the ratio of yields $\rho_x$ determined by D0.
\subsection{Expected signature for a resonance in $B_s\pi^+$}
The presence of an elastic resonance with the parameters of the $X(5568)$
would lead to a characteristic pattern of finite volume energy levels
corresponding to QCD eigenstates with given quantum numbers for finite spatial size $L$.
\begin{wrapfigure}{r}{0.45\textwidth}
\includegraphics[width=0.45\textwidth,clip]{fig_2new}
\caption{The eigenenergies of the $\bar bs\bar du$ system with $J^P=0^+$
from a lattice simulation for the choices detailed in the text.}
\label{fig:lattice}
\end{wrapfigure}
Figure \ref{fig:analytic} shows analytic predictions for energies of
eigenstates for an elastic resonance in $B_s\pi$ (with $J^P=0^+$) as a
function of the lattice size $L$ as determined from L\"uscher's formalism
\cite{Luscher}. Red solid lines are $B_s\pi$ eigenstates in
the scenario with the resonance $X(5568)$; orange dashed
lines are $B_s\pi$ eigenstates when $B_s$ and $\pi$ do not interact;
blue dot-dashed lines are $B^+\bar K^0$ eigenstates when $B^+$ and $\bar
K^0$ do not interact; the grey band indicates the position of $X(5568)$ from
the D0 experiment \cite{D0:2016mwd}. The lattice size $L=2.9~$fm, used in
our simulation, is marked by the vertical line. Note that the resonant
scenario predicts an eigenstate near $E\simeq m_X$ (red solid), while there is no such eigenstate for $L=2-4~$fm in
a scenario with no or small interaction between $B_s$ and $\pi^+$ (orange
dashed). In the unlikely scenario of a deeply bound $BK$ state, the
simulation would result in an eigenstate with $E\approx m_X$ up to exponentially small corrections in $L$.
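The qualitative argument behind Figure \ref{fig:analytic} can be checked with a few lines of Python (our own sketch, using physical masses in place of the lattice-tuned ones): at $L=2.9$ fm, none of the noninteracting levels sits anywhere near 5568 MeV, so an eigenstate there would have to be generated by the interaction.

```python
import math

HBARC = 197.327                      # MeV*fm
L = 2.9                              # fm, spatial lattice size
# physical masses (MeV) as stand-ins for the lattice values
m_Bs, m_pi, m_B, m_K = 5366.9, 139.6, 5279.3, 497.6

def level(m1, m2, n):
    """Noninteracting two-particle energy with back-to-back momenta
    of magnitude p = 2*pi*n/L."""
    p = 2.0*math.pi*n/L*HBARC
    return math.hypot(m1, p) + math.hypot(m2, p)

levels = [level(m_Bs, m_pi, 0),      # Bs(0) pi(0):   ~5507 MeV
          level(m_B, m_K, 0),        # B(0) Kbar(0):  ~5777 MeV
          level(m_Bs, m_pi, 1)]      # Bs(1) pi(-1):  ~5834 MeV
# no noninteracting level lies within ~60 MeV of 5568 MeV at L = 2.9 fm
print([round(E) for E in levels])
```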
\subsection{Simulation details}
In our simulation we use the PACS-CS ensemble \cite{Aoki:2008sm} from Table \ref{configs}. The interpolator basis
\begin{align}
O_{1,2}^{B_s(0)\pi(0)}&=\left[\,\bar{b}\Gamma_{1,2} s\,\right](\mathbf{p}=0)\left[\,\bar{d}\Gamma_{1,2} u\,\right](\mathbf{p}=0)\;,\nonumber\\
O_{1,2}^{B_s(1)\pi(-1)}&=\!\!\!\!\!\!\!\!\!\sum_{\mathbf{p}=\pm\mathbf{e_{x,y,z}}~2\pi/L}\!\!\!\!\!\!\! \left[\bar{b}\Gamma_{1,2} s\right](\mathbf{p})\left[\bar{d}\Gamma_{1,2} u\right](-\mathbf{p})\nonumber\;,\\
O_{1,2}^{B(0)K(0)}&=\left[\,\bar{b}\Gamma_{1,2} u\,\right](\mathbf{p}=0)\left[\,\bar{d}\Gamma_{1,2} s\,\right](\mathbf{p}=0)\nonumber\;,
\end{align}
consisting of both $B_s\pi$ and $BK$ interpolators, is employed.
Figure \ref{fig:lattice} shows the eigenstates determined from our simulation
for various choices. The sets with full symbols are from correlated fits
while open symbols result from uncorrelated fits. Notation ``all'' refers to
the full set of gauge configurations while ``all-4'' refers to the set with
four (close to exceptional) gauge configurations removed. Set A is from
interpolator basis
$O_{1}^{B_s(0)\pi(0)}$, $O_{1}^{B_s(1)\pi(-1)}$, $O_{1}^{B(0)K(0)}$ while set B
results from a larger basis
$O_{1}^{B_s(0)\pi(0)},O_{1,2}^{B_s(1)\pi(-1)},O_{1,2}^{B(0)K(0)}$. All choices
consistently result in a small scattering length $a_0$, compatible with 0
within errors.
\subsection{Conclusions from comparing analytic predictions and lattice energy levels}
Figure \ref{fig:combined} shows the eigenenergies of the $\bar bs\bar du$ system with
$J^P=0^+$ calculated on the lattice (left pane) compared to the analytic
prediction based on the $X(5568)$ as observed by D0 (right pane).
Contrary to what is expected for a resonance with the
parameters of the X(5568), our lattice simulation at close-to-physical quark masses does not yield a
second low-lying energy level. Our results therefore do not support the existence of $X(5568)$ with
$J^P=0^+$. Instead, the results appear closer to the limit where $B_s$ and
$\pi$ do not interact significantly, leading to a $B_s\pi$ scattering length
compatible with 0 within errors.
\begin{wrapfigure}{r}{0.5\textwidth}
\includegraphics[width=0.5\textwidth,clip]{fig_3}
\caption{(a) The eigenenergies of the $\bar bs\bar du$ system with $J^P=0^+$
from our lattice simulation and (b) an analytic prediction based on
$X(5568)$, both at lattice size $L=2.9~$fm. The horizontal lines show energies of eigenstates $B_s(0)\pi^+(0)$, $B^+(0)\bar K^0(0)$ and $B_s(1)\pi^+(-1)$ in absence of interactions; momenta in units of $2\pi/L$ are given in parenthesis.
The pane (a) shows the energies
$E=E^{lat}_n-E_{\overline{B_s}}^{lat}+E^{exp}_{\overline{B_s}}$ with the
spin-averaged $B_s$ ground state set to its experiment value. The pane (b) is based on the
experimental mass of the $X(5568)$ \cite{D0:2016mwd}, given by the grey band,
and experimental masses of other particles.}
\label{fig:combined}
\end{wrapfigure}
\section{The idea of Bézout's theorem for $n>2$}
The theorem for $n=2$ was known long before Bézout. Although the modern mind is inclined to think of the theorem for $n>2$ as a natural generalization of the case $n=2$, a mathematician rarely formulates a conjecture before having any clue or hope about its truth. Thus, it is not before the second half of the XVIIIth century that one finds a clear statement that the degree of the eliminand should be the product of the degrees even when $n>2$.
Lagrange, in his famous 1770-1771 memoir \textit{Réflexions sur la résolution algébrique des équations} \cite{lagrange1770}, proves Bézout's theorem for several particular systems of more than two equations, by studying the functions of the roots remaining invariant through some permutations. In the same year 1770, Waring enunciates the theorem for more than two equations in his \textit{Meditationes algebraicae} \cite{waring1770}, without demonstration. To our knowledge, these are the first occurrences of Bézout's theorem for $n>2$.
Bézout probably knew those works of Lagrange and Waring. He was directly concerned by Lagrange's memoir, where Lagrange nominally criticizes the algebraical methods of resolution of equations in one unknown that Bézout had conceived in the 1760's. Waring says, in the preface to the second edition of his \textit{Meditationes algebraicae}, that he sent a copy of its first edition, as early as 1770, to some scholars, including Lagrange and Bézout\footnote{\textit{Cf.} p.~\textit{xxi} of the 1782 edition of the \textit{Meditationes algebraicae}.}. Thus Bézout's theorem was already in the mind of those three scholars as early as 1770.\footnote{It is to be noted that Lagrange and Waring were also among the first readers of Bézout's treatise in 1779. They would, shortly afterwards, respond, Lagrange in his correspondence with Bézout, and Waring in the second edition of his \textit{Meditationes algebraicae}.}
On the contrary, Bézout was not yet aware of the formula of the product of the degrees in 1765. His works \cite{bezout1762,bezout1765} of the years 1762-1765 about the resolution of algebraic equations show several examples
of systems, of more than two equations, where the method designed by him in 1764 leads him to a final equation of a degree much higher than the product of the degrees of the initial equations, because of a superfluous factor. The discovery of Bézout's theorem for $n>2$, still as a conjecture, is thus clearly circumscribed in the years 1765-1770.
\section{Bézout's method of elimination and the superfluous factors}
In elimination theory, early works for $n=2$ already show two different and complementary methods (\cite{euler1748a,cramer1750,bezout1764}). One of them relies upon symmetrical functions of the roots. This method was used by Poisson to give an alternative demonstration of Bézout's theorem in 1802. As we have chosen to concentrate on Bézout's path, we won't describe this method in this article.\footnote{Although for $n=2$, \textit{cf.} footnote \ref{Poisson} p. \pageref{Poisson}. Poisson knew of Bézout's work, but he ascribes to it a lack of rigour, thus justifying his own recourse to a different method. \textit{Cf.} \cite{poisson1802}, p.~199; and \cite{poisson1802}, p.~203. One should understand this judgement after section \ref{Demonstration2} below.}
The other method, the one used by Bézout, is a straightforward generalization\footnote{\label{MethodeNewton}This other method was already used for systems of equations of higher degrees by Newton (\emph{cf.} \cite{newton1972} p.~584-594) and before Newton (\emph{cf.} \cite{penchevre2004a}). Bézout sometimes ascribes this method to Newton.} of the principle of substitution used to eliminate unknowns in systems of linear equations ; this principle is still taught today in high school.
This method does not dictate the order in which to eliminate unknowns and powers of the unknowns. When Bézout uses this method in 1764, for $n>2$, he eliminates the unknowns \textit{one after the other}. This necessarily leads to a superfluous factor increasing the degree of the final equation far above the product of the degrees of the initial equations. This difficulty is easily illustrated by the following system of three equations:
\label{FacteursSuperflus}
\begin{align}
&-x^2+y^2+z^2-2yz-2x-1=0\\
&z+x+y-1=0\\
&z-x+y+1=0
\end{align}
Eliminating $z$ between $(1)$ and $(2)$, one obtains
\begin{align}
4y^2+4xy-4x-4y=0
\end{align}
Eliminating $z$ between $(1)$ and $(3)$, one obtains
\begin{align}
4y^2-4xy-4x+4y=0
\end{align}
Eliminating $4xy$ between $(4)$ and $(5)$, one has $x=y^2$. Substituting it for $x$ in $(4)$, the final equation is
$$4y(y^2-1)=0$$
The root $y=0$ does not correspond to any solution of the system above.\footnote{not even an infinite solution, in $\mathbb{P}^{3}$.} In fact, the true eliminand should be $y^2-1=0$.
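This behaviour is easy to reproduce with a computer algebra system. The following sympy sketch (our own illustration, not Bézout's computation) eliminates $z$ from equations $(1)$--$(3)$ pairwise by resultants and then eliminates $x$: the result is proportional to $y(y^2-1)$, with the spurious factor $y$ of the iterative route. A simultaneous elimination (a lexicographic Gröbner basis, a tool obviously unavailable to Bézout) recovers the true eliminand $y^2-1$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f1 = -x**2 + y**2 + z**2 - 2*y*z - 2*x - 1
f2 = z + x + y - 1
f3 = z - x + y + 1

# pairwise elimination of z, as in Bezout's 1764 procedure
r12 = sp.resultant(f1, f2, z)              # equals 4*(y - 1)*(x + y)
r13 = sp.resultant(f1, f3, z)              # equals 4*(y + 1)*(y - x)
final = sp.expand(sp.resultant(r12, r13, x))
print(sp.factor(final))                    # proportional to y*(y - 1)*(y + 1)

# simultaneous elimination: the lex Groebner basis isolates the eliminand
G = sp.groebner([f1, f2, f3], x, z, y, order='lex')
eliminand = [g for g in G.exprs if g.free_symbols == {y}][0]
print(eliminand)                           # y**2 - 1
```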
Bézout was well aware of the difficulty. In 1764, he says:
\begin{quote}
(...) when, having more than two equations, one eliminates by comparing them two by two; even when each equation resulting from the elimination of one unknown would amount to the precise degree that it should have, it is vain to look for a divisor, in any of these intermediate equations, that would lower the degree of the final equation; none of them has a divisor; only by comparing them will one find an equation having a divisor; but where is the thread that would lead out of the maze?\footnote{\textit{Cf.} \cite{bezout1764}, p. 290; and also \cite{bezout1779}, p. vii.
}
\end{quote}
At the time in 1764, Bézout had not yet found the exit out of the maze.
Fifteen years later, in his 1779 treatise, Bézout gets rid of this iterative order by reformulating his method in terms of a new concept called the ``sum-equation'':
\label{EquationSomme}
\begin{quote}
We conceive of each given equation as being multiplied by a special polynomial. Adding up all those products together, the result is what we call the \textit{sum-equation}. This sum-equation will become the final equation through the vanishing of all terms affected by the unknowns to eliminate.
\footnote{\textit{Cf.} \cite{bezout1779}, \S~224.}
\end{quote}
In other words, for a system of $n$ equations with $n$ unknowns\footnote{Throughout our commentary, we shall use upper indices to distinguish equations or polynomials ($f^{(1)}$, $f^{(2)}$, ...), and lower indices to distinguish unknowns or indeterminates ($x_1$, $x_2$, ...). We are thus losely following Bézout's own notations.}, \textit{viz.}
$$\left\lbrace\begin{aligned}
&f^{(1)}=0\\
&\vdots\\
&f^{(n)}=0
\end{aligned}\right.,$$
Bézout postulates that the final equation resulting from the elimination of $(n-1)$ unknowns is an equation of smallest degree, of the form
$$\phi^{(1)}f^{(1)}+...+\phi^{(n)}f^{(n)}=0.$$
The application of the method is thus reduced to the determination of the polynomials $\phi^{(i)}$. First of all, one must find the degree of those polynomials, as well as the degree of this final equation. This is the ``node of the difficulty'' according to Bézout. Once the degree of the final equation is ascertained, elimination is reduced to the application of the method of undetermined coefficients, and thus to the resolution of a single system of \textit{linear} equations. Hence the need for Bézout's theorem predicting the degree of the final equation.
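To make the sum-equation concrete, here is a verification (our own illustration, in Python with sympy) that for the system $(1)$--$(3)$ considered earlier, the true final equation $y^2-1=0$ is indeed a sum-equation: there exist multiplier polynomials with $\phi^{(1)}f^{(1)}+\phi^{(2)}f^{(2)}+\phi^{(3)}f^{(3)}=y^2-1$. The particular multipliers below are one valid choice; they are not unique.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f1 = -x**2 + y**2 + z**2 - 2*y*z - 2*x - 1
f2 = z + x + y - 1
f3 = z - x + y + 1

# one possible choice of multiplier polynomials (they are not unique)
phi1 = sp.Rational(1, 4)
phi2 = sp.Rational(1, 8)*(x + 3*y - z + 3)
phi3 = sp.Rational(1, 8)*(-x + 3*y - z - 3)

sum_equation = sp.expand(phi1*f1 + phi2*f2 + phi3*f3)
print(sum_equation)    # y**2 - 1: all terms in x and z have vanished
```

In Bézout's procedure the coefficients of the $\phi^{(i)}$ are precisely the undetermined coefficients, fixed by demanding that every term containing $x$ or $z$ vanish in the sum.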
Although this idea of ``sum-equation'' seems a conceptual breakthrough reminiscent of the XIXth-century theory of ideals, the immediate effect of this evolution is the rather complicated structure of Bézout's treatise! For didactical reasons maybe, he introduces his concept of ``sum-equation'' only in the second part of his treatise (``livre second''). In the first part of it, his formulation is a compromise with the classical formulation of the principle of substitution. We shall analyse this order of presentation in section \ref{ThreeViews}.
\section{A treatise of ``finite algebraic analysis''}
In the dedication of his \textit{Théorie générale des équations algébriques}, Bézout says that his purpose is to ``perfect a part of Mathematical Sciences, of which all other parts are awaiting what would now further their progress''\footnote{\textit{Cf.} \cite{bezout1779}.}. In the introduction to the treatise, Bézout opposes two branches of the mathematics of his days: ``finite algebraic analysis'' and ``infinitesimal analysis''. The former is the theory of equations. Historically, it comes first; according to Bézout, infinitesimal analysis has recently drawn all the attention of mathematicians, being more enjoyable, because of its many applications, and also because of the obstacles met with in algebraic analysis. Bézout says:
\begin{quote}
The former itself [infinitesimal analysis] needs the latter to be perfected.\\
(...)\\
The necessity to perfect this part [algebraic analysis] did not escape the notice of even those to whom infinitesimal analysis is most indebted.\footnote{\textit{Cf.} \cite{bezout1779}, p. ii.}
\end{quote}
In his view, the logical priority of algebraic analysis adds thus to its historical priority. The composition of his treatise is almost entirely algebraic:
\begin{itemize}
\item Bézout only briefly alludes (\S~48) to the geometric interpretation of elimination methods as the search for the intersection locus of curves%
\footnote{although he knew well Euler's memoir of 1748, \textit{Démonstration sur le nombre des points où deux lignes des ordres quelconques peuvent se rencontrer}.};
\item he never makes any hypothesis about the existence or the arithmetical nature of the roots of algebraic equations%
\footnote{It seems that the very notion of root appears nowhere essentially in his demonstrations. The word appears in \S\S~48, 117, 280--284, but never crucially. Bézout knew, of course, that the known methods of algebraic resolution of equations do not apply beyond the fourth degree, as Lagrange had explained exhaustively in his \textit{Réflexions sur la résolution algébrique des équations}. Moreover, at the time, the status of complex numbers and the fundamental theorem of algebra were still problematic.}.
\end{itemize}
This position of his treatise as specialized research on algebraic analysis is quite singular for his time%
\footnote{H. Sinaceur has commented upon the use of the term ``analysis'' in XVIIIth century (\cite{sinaceur1991}, p.~51):
\begin{quote}
The term ``analysis'' is a generic concept for the mathematical method rather than a particular branch of it. It was then normal that no clear distinction should exist between algebra and analysis, nor any exclusive specialization. Moreover, the analysis of equations, also called ``algebraic analysis'', could be considered as a part of a whole named ``mathematical analysis''.
\end{quote}
}.
\section{A classification of equations}
In fact, Bézout does not only demonstrate his theorem for \textit{generic} equations. When the equations are not generic, the degree of the eliminand may be less than the product of the degrees of the equations. Bézout progressively studies larger and larger classes of equations, by asking that some coefficients vanish or verify certain conditions. He thus builds a classification that allows a better bound on the degree of the eliminand according to the species of the equations. The case of generic equations is thus encompassed, as a very special case, in a research of gigantic proportions. In this regard, Bézout says:
\begin{quote}
Whatever idea our readers might have conceived of the scale of the matter that we are about to study, the idea that they will soon get therefrom will probably surpass it.\footnote{\textit{Cf.} \cite{bezout1779}, \S~52.}
\end{quote}
\paragraph{An example} In \S~62 of his treatise, Bézout proves everything that was known before, in the case $n=2$. For two equations with two unknowns $x_{1}$, $x_{2}$ of the form
$$
\begin{array}{ll}
\sum\limits _{k_{1}\leq a_{1},\ k_{1}+k_{2}\leq t}{A_{k_{1}k_{2}}x_{1}^{k_{1}}x_{2}^{k_{2}}}=0\\
\sum\limits _{k_{1}\leq a_{1}^{\prime},\ k_{1}+k_{2}\leq t^{\prime}}{A_{k_{1}k_{2}}^{\prime}x_{1}^{k_{1}}x_{2}^{k_{2}}}=0\end{array}
$$
where the $A_{k_1k_2}$ and the $A'_{k_1k_2}$ are undetermined coefficients,
the degree of the final equation resulting from the elimination of $x_2$ is
$D=tt'-(t-a_{1})(t'-a_{1}^{\prime})$. Cramer in 1750, Euler,
and then Bézout himself in 1764, had known this result.
Specifying $a_1=t$, $a_1'=t'$, one obtains the case of two ``complete equations'', \textit{i.~e.} generic of their degrees:
$$D=tt'$$
In this case, the degree of the eliminand is the product of the degrees of the initial equations.
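For small degrees, the formula $D=tt'-(t-a_1)(t'-a_1')$ can be checked symbolically. The sketch below (our own, using sympy; not Bézout's procedure) builds two polynomials with fully indeterminate coefficients on the support $\{k_1\le a_1,\ k_1+k_2\le t\}$, eliminates $x_2$ by a resultant, and reads off the degree of the result in $x_1$.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def generic_poly(name, t, a1):
    """Generic polynomial with support k1 <= a1, k1 + k2 <= t,
    with a fresh indeterminate coefficient for every monomial."""
    return sum(sp.Symbol(f'{name}{k1}{k2}')*x1**k1*x2**k2
               for k1 in range(a1 + 1) for k2 in range(t - k1 + 1))

def eliminand_degree(t, a1, tp, a1p):
    f = generic_poly('A', t, a1)
    g = generic_poly('B', tp, a1p)
    return sp.degree(sp.resultant(f, g, x2), x1)

print(eliminand_degree(2, 1, 2, 1))   # 3 = 2*2 - (2-1)*(2-1)
print(eliminand_degree(2, 2, 2, 2))   # 4 = 2*2 (complete equations)
```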
\paragraph{Orders and species}
What Bézout calls a ``complete polynomial'' is a polynomial generic of its degree. Non-generic polynomials are called ``incomplete''. Bézout discriminates between several ``orders'' of incomplete polynomials. He thus defines the ``incomplete polynomials of the first order'' as those verifying the following conditions:\footnote{Bézout is either using the term ``polynomial'' or the term ``equation''; here, an equation is always of the form $f=0$ where $f$ is a polynomial.}
\begin{quote}
\begin{description}
\item[1\textsuperscript{o}] that the total number of unknowns being $n$, their combinations $n$ by $n$ should be of some degrees, different for each equation;
\item[2\textsuperscript{o}] that their combinations $n-1$ by $n-1$ should be of some degrees, different not only for each equation, but also for each combination;
\item[3\textsuperscript{o}] that their combinations $n-2$ by $n-2$ should be of some degrees, different not only for each equation, but also for each combination;
\item etc.\footnote{\textit{cf.} \cite{bezout1779}, p. xiii.}
\end{description}
\end{quote}
Among polynomials of this order, Bézout distinguishes several ``species'' (by the way, the two equations already mentioned in the example above are from the ``first species of incomplete equations''). He says:
\begin{quote}
As it is not possible to attack this problem head-on (the problem of incomplete polynomials of the first order), I took it in the inverse order, first supposing the absence of the highest degrees of the combinations one by one, then the absence of those and of the highest degrees of the combinations two by two, etc., and also supposing some restrictive conditions in order to facilitate the intelligence of the method (...)
\end{quote}
We shall soon describe the ``restrictive conditions'' alluded to.
Bézout's symbolic notations for incomplete polynomials are as follows:
\begin{align*}
&(u^{a}...n)^{t} & \text{(\textit{cf.} {\S}57--67)}\\
&\lbrack(u^{a},x^{\underset{\prime}{a}})^{b},y^{\underset{\prime\prime}{a}}...n\rbrack^{t} & \text{(\textit{cf.} {\S}74--81)}\\
&(\lbrack(u^{a},x^{\underset{\prime}{a}})^{b},(u^{a},y^{\underset{\prime\prime}{a}})^{\underset{\prime}{b}},(x^{\underset{\prime}{a}},y^{\underset{\prime\prime}{a}})^{\underset{\prime\prime}{b}}\rbrack^{c},z^{\underset{\prime\prime\prime}{a}}...n)^{t} & \text{(\textit{cf.} {\S}82--132)}\\
&...
\end{align*}
For example, the second line describes a polynomial that we could write today, in a slightly modernized notation but still keeping with Bézout's unusual underscripts:
$$
\sum\limits_{\begin{array}{l}k\leq a,\underset{'}{k}\leq\underset{'}{a},\underset{''}{k}\leq\underset{''}{a},...,\\
k+\underset{'}{k}\leq b,\\
k+\underset{'}{k}+\underset{''}{k}+...\leq t\end{array}}
A_{k\underset{'}{k}\underset{''}{k}}u^{k}x^{\underset{'}{k}}y^{\underset{''}{k}}...
$$
where $u,x,y,z,...$ are the unknowns.
Finally, in section III of book I, Bézout introduces the second, third, fourth orders, etc. of polynomials, represented by this notation:
$$
(u^{a,\overline{a},\overline{\overline{a}},...}...n)^{t,\overline{t},\overline{\overline{t}},...}
$$
where $a\leq\Bar{a}\leq\Bar{\Bar{a}}\leq...$ and $t\geq\Bar{t}\geq\Bar{\Bar{t}}\geq...$. We could write such a polynomial under the form:
\begin{align*}
& \sum\limits _{k\leq\Bar{\Bar{a}},...,k+\underset{'}{k}+\underset{''}{k}+...\leq\Bar{\Bar{t}}}{A_{k\underset{'}{k}\underset{''}{k}}u^{k}x^{\underset{'}{k}}y^{\underset{''}{k}}...}\\
+ & \sum\limits _{k\leq\Bar{a},...,\Bar{\Bar{t}}<k+\underset{'}{k}+\underset{''}{k}+...\leq\Bar{t}}{A_{k\underset{'}{k}\underset{''}{k}}u^{k}x^{\underset{'}{k}}y^{\underset{''}{k}}...}\\
+ & \sum\limits _{k\leq a,...,\Bar{t}<k+\underset{'}{k}+\underset{''}{k}+...\leq t}{A_{k\underset{'}{k}\underset{''}{k}}u^{k}x^{\underset{'}{k}}y^{\underset{''}{k}}...}\\
+ & ...
\end{align*}
The whole book I of Bézout's treatise is thus progressing in an increasing order of generality. Bézout says, several times, that the equations studied earlier are particular cases of the new forms under study\footnote{\textit{cf.} \cite{bezout1779} \S~64, 81.}.
\paragraph{Four important cases}
We don't want to describe in full generality all the cases studied by Bézout.
We shall concentrate on the four large following classes of equations:
\begin{itemize}
\item complete equations, for all $n$
\item first species of incomplete equations, for all $n$
\item second species of incomplete equations, for all $n$
\item third species of incomplete equations, for $n=3$
\end{itemize}
Only for those four classes, Bézout gives an explicit formula for the degree of the eliminand when each proposed equation is generic within its species. We shall give a detailed summary of Bézout's demonstration for the second species, slightly modernized as regards symbolic notations and algebraic terminology\footnote{\textit{Cf.} section \ref{Demonstration2} below. Some steps of the demonstration will have to await a complete justification in section \ref{Toric}. As for complete equations and for the first species of incomplete equations, results will be derived from the case of the second species; we also provide a more elementary proof in the appendix when $n=3$. As for the third species of incomplete equations, we shall say more in section \ref{Demonstration3}, with a demonstration in section \ref{Toric}.}.
To describe our notations, let us consider a system of $n$ equations in $n$ unknowns :
$$\left\lbrace\begin{aligned}
f^{(1)}=0\\
f^{(2)}=0\\
\vdots\\
f^{(n)}=0
\end{aligned}\right.$$
where the $f^{(i)}$ are elements of a polynomial ring $C=K[x_1,x_2,...,x_n]$. Bézout himself is using several kinds of indices : upper index means equation number, and lower indices mark unknown quantities\footnote{\textit{cf.} \cite{bezout1779} \S 62.}.
We shall also use multi-index notations and write $k=(k_1,k_2,...,k_n)$ and $x^k=x_1^{k_1}x_2^{k_2}...x_n^{k_n}$. The support $\text{supp}(f^{(i)})$ of a polynomial is the set of points $k\in\mathbb{Z}^n$ such that the monomial $x^k$ has non-zero coefficient in $f^{(i)}$. The main breakthrough of Bézout is thus to distinguish cases with respect to the convex hull of $\text{supp}(f^{(i)})$.
Let $t$, $a_1$, $a_2$,..., $a_n$ be integers verifying the following conditions (the ``restrictive conditions'' alluded to, in the quotation above) :
$$\left\lbrace\begin{array}{l}
(\forall i)\ a_i\leq t\\
(\forall i\neq j)\ a_i+a_j>t
\end{array}\right.$$
Let $E_{t,a}$ be the convex set in $\mathbb{Z}^n$ defined by
$$\left\lbrace\begin{array}{l}
0\leq k_1\leq a_1\\
0\leq k_2\leq a_2\\
\vdots\\
0\leq k_n\leq a_n\\
k_1+k_2+...+k_n\leq t
\end{array}\right.$$
For $n=3$, such a convex set is the top polyhedron on figure 4 at the end of this article. For any such $t$ and $a$, we define the sub-vector space of $C$ over $K$ of polynomials with support in $E_{t,a}$:
$$C_{\leq t,a}=\left\lbrace f\in K[x_1,x_2,...,x_n] \mid \text{supp}(f)\subset E_{t,a}\right\rbrace$$
An incomplete equation of the first species is a generic member of $C_{\leq t,a}$.
As for systems of equations, let it be given for any index $i\in\lbrace 1,2,...,n\rbrace$ such a set of integers $t^{(i)}$, $a^{(i)}_1$, $a^{(i)}_2$,..., $a^{(i)}_n$, and $f^{(i)}$ be a generic member of
$$C_{\leq t^{(i)},a^{(i)}}$$
In other words, $a^{(i)}_j=\deg_j f^{(i)}$ is the degree of $f^{(i)}$ with respect to $x_j$, and $t^{(i)}=\deg f^{(i)}$ is the total degree of $f^{(i)}$. For all $k\neq j$, we require that
$$\deg_jf^{(i)}+\deg_kf^{(i)}\geq\deg f^{(i)}$$
Finally, the genericity of $f^{(i)}$ actually means that
$$\text{supp}(f^{(i)})=E_{t^{(i)},a^{(i)}}$$
and that the non-zero coefficients of $f^{(i)}$ are indeterminates adjoined to a base field. For example, let us say $K$ is purely transcendant over $\mathbb{Q}$ :
$$K=\mathbb{Q}((u^{(i)}_k)_{1\leq i\leq n,\ k\in\text{supp}(f^{(i)})})$$
where every indeterminate $u^{(i)}_k$ is the coefficient of $x^k$ in $f^{(i)}$. The equations $f^{(i)}=0$ are what Bézout calls a system of ``incomplete equations of the first species''.
As for the ``second species of incomplete equations''\footnote{\textit{cf.} \cite{bezout1779} \S 74-81.}, let $t$, $a_1$, $a_2$,..., $a_n$, $b$ be non-negative integers satisfying the following restrictive conditions:\label{secondspecies}
$$\left\lbrace\begin{array}{l}
\max(a_1,a_2)\leq b\\
(\forall i\geq 3)\ a_i\leq t\\
a_1+a_2\geq b,\\
(\forall i\neq j,\ \lbrace i,j\rbrace\neq\lbrace 1,2\rbrace)\ a_i+a_j\geq t\\
b\leq t\\
(\forall i\geq 3)\ a_i+b\geq t
\end{array}\right.$$
These integers define a convex set $E_{t,a,b}$ in $\mathbb{Z}^n$:
$$\left\lbrace\begin{array}{l}
0\leq k_1\leq a_1,\quad 0\leq k_2\leq a_2,...,\ 0\leq k_n\leq a_n,\\
k_1+k_2\leq b\\
k_1+k_2+...+k_n\leq t
\end{array}\right.$$
For any such $t,a,b$, we define the sub-vector space $C_{\leq t,a,b}$ of $C$ over $K$ of polynomials with support in $E_{t,a,b}$. An incomplete equation of the second species is a generic member of $C_{\leq t,a,b}$.
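Concretely, the support $E_{t,a,b}$ is a finite set of exponent vectors that is easily enumerated. The following small Python sketch is our own illustration for $n=3$; the parameter values are merely an example satisfying the restrictive conditions above.

```python
from itertools import product

def support(t, a, b):
    """Lattice points of E_{t,a,b} for n = 3:
    0 <= k_i <= a_i, k_1 + k_2 <= b, k_1 + k_2 + k_3 <= t."""
    return [k for k in product(*(range(ai + 1) for ai in a))
            if k[0] + k[1] <= b and sum(k) <= t]

# t = 3, a = (2, 2, 2), b = 3 satisfies all the restrictive conditions
E = support(3, (2, 2, 2), 3)
print(len(E))   # 17 monomials in a generic equation of this species
```

The length of this list is the number of undetermined coefficients of a generic incomplete equation of the second species with these parameters.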
As for the ``third species of incomplete equations''\footnote{\textit{cf.} \cite{bezout1779} \S 82-132.}, when $n=3$, let $t$, $a_1$, $a_2$, $a_3$, $b_1$, $b_2$, $b_3$ be non-negative integers satisfying the following restrictive conditions:
$$\left\lbrace\begin{array}{l}
\max(a_1,a_2)\leq b_3,\quad\max(a_1,a_3)\leq b_2,\quad\max(a_2,a_3)\leq b_1\\
a_1+a_2\geq b_3,\quad a_1+a_3\geq b_2,\quad a_2+a_3\geq b_1\\
\max(b_1,b_2,b_3)\leq t\\
\min(a_1+b_1,a_2+b_2,a_3+b_3)\geq t\\
b_1+b_2+b_3\geq 2t
\end{array}\right.$$
These integers define a convex set $E_{t,a,b}$ in $\mathbb{Z}^3$:
$$\left\lbrace\begin{array}{l}
0\leq k_1\leq a_1,\quad 0\leq k_2\leq a_2,\quad 0\leq k_3\leq a_3,\\
k_1+k_2\leq b_3,\quad k_1+k_3\leq b_2,\quad k_2+k_3\leq b_1,\\
k_1+k_2+k_3\leq t
\end{array}\right.$$
An incomplete equation of the third species is a generic polynomial equation with support in any such convex set. Bézout further subdivides the third species into eight ``forms'', according to the algebraic signs of the quantities
$$t-b_1-b_2+a_3,\quad t-b_2-b_3+a_1,\quad t-b_3-b_1+a_2$$
His second, third and seventh forms are exchanged under permutation of the unknowns, as well as his fourth, fifth and eighth forms. The four lower polyhedra on figure 4 at the end of this article represent supports belonging to the first, the third, the fifth and the sixth forms.
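The subdivision into forms depends only on the signs of three integer quantities; a short Python sketch (our own, with example parameters chosen to satisfy the restrictive conditions) makes the sign test explicit.

```python
def form_signs(t, a, b):
    """Signs of t - b1 - b2 + a3, t - b2 - b3 + a1, t - b3 - b1 + a2,
    which Bezout uses to subdivide the third species into forms (n = 3)."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    q = (t - b1 - b2 + a3, t - b2 - b3 + a1, t - b3 - b1 + a2)
    return tuple('+' if v > 0 else ('-' if v < 0 else '0') for v in q)

print(form_signs(6, (3, 3, 3), (4, 4, 4)))   # ('+', '+', '+')
print(form_signs(6, (4, 4, 2), (5, 5, 6)))   # ('-', '-', '-')
```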
\section{Three different views on elimination}\label{ThreeViews}
The early article by Bézout (1764) was based on the following idea. The elimination process is split into a sequence of operations. Each operation consists in multiplying some of the previously obtained equations by suitable polynomials, and then building the sum of the products. This idea was already known. Yet, it is perfected in book II of Bézout's 1779 treatise. There, Bézout suggests representing the final equation resulting from the elimination directly as such a sum of products of the initial equations by suitable polynomials. This representation could well seem natural to the modern reader used to the concept of ``ideal'' inherited from end-of-nineteenth-century algebra. Analysing the proofs of different cases of Bézout's theorem such as he wrote them in his treatise, we are going to study three different views along the path leading to Bézout's concept of \textit{sum-equation}.
We shall limit ourselves in this section to the case of three equations with three unknowns:
$$\left\lbrace\begin{aligned}
f^{(1)}=0\\
f^{(2)}=0\\
f^{(3)}=0
\end{aligned}\right.$$
of respective degrees $t^{(1)},t^{(2)},t^{(3)}$.
Bézout first postulates the existence of a unique final equation resulting from the elimination of two unknowns (\textit{e.~g.} $x_2$ and $x_3$), of minimal degree $D$. Among the three different views that we are going to expound, the first two consist in multiplying $f^{(1)}$ by a ``multiplier-polynomial'' $\phi^{(1)}$ with undetermined coefficients, of degree $(T-t^{(1)})$, where $T$ is a large enough integer, and then making all monomials ``vanish'' in the product $\phi^{(1)}f^{(1)}$, except the monomials $1,\ x_1,\ x_1^2,...,\ x_1^D$. These monomials build up the final equation that we are looking for\footnote{Bézout proceeds in this way for ``incomplete equations of the first species'' for example, \textit{cf.} \cite{bezout1779}, \S~59--67.}. This vanishing can be operated in two steps: first use equations $(2)$ and $(3)$ to make as many monomials as possible vanish, and then use the classical method of undetermined coefficients for the rest. After having used equations $(2)$ and $(3)$, in order to apply the method of undetermined coefficients, there should remain as many vanishable terms (each one of them gives an equation) as undetermined coefficients in $\phi^{(1)}$. But the situation is complicated by the fact that only some of the undetermined coefficients provided by $\phi^{(1)}$ can, according to Bézout, serve the purpose of elimination, several of them being ``useless''.
Finally, Bézout ends up with a formula:
$$(\text{number of terms remaining in }\phi^{(1)}f^{(1)})-D=\text{nbr. of useful coefficients in }\phi^{(1)}$$
He would then use this formula to calculate $D$.
As we can see, this method remains ambiguous as long as we do not give a more precise meaning to the word ``useless''. Bézout says that the number of ``useless'' coefficients in $\phi^{(1)}$ is precisely the number of monomials that we could make vanish in $\phi^{(1)}$ using $(2)$ and $(3)$. This is, at best, mysterious, as long as it is not given a precise meaning in terms of dimensions of vector spaces. Bézout says nothing to clarify the situation, as he reserves the effective calculation for book II.
\paragraph{Substituting monomials}
We have said that there are three different views on elimination in Bézout's treatise, and we have just expounded the setting common to the first two of them. What differentiates them is the way of counting the number of monomials that can be made to vanish in a given polynomial, thanks to given equations.
In his \emph{first} proof\footnote{As he does for ``complete equations'' (complete, \textit{i. e.} generic of their degrees), \textit{cf.} \cite{bezout1779} \S~45--48.}, Bézout says that:
$$\text{nbr. of vanishable terms in }\phi^{(1)}f^{(1)}=\text{nbr. of terms divisible by }x_{2}^{t^{(2)}}\text{ or }x_{3}^{t^{(3)}}$$
This reminds us of Newton's method of elimination of highest powers of the unknown\footnote{\emph{Cf. supra} note \ref{MethodeNewton} p.~\pageref{MethodeNewton}.}, generalized to any number of equations and unknowns. It is remarkable that
in 1764, after having said that Euler's and Cramer's methods of elimination\footnote{\label{Poisson}When Bézout mentions Euler in \cite{bezout1764}, he is surely referring to \cite{euler1748a}. For a quick overview of Euler's method in \cite{euler1748a}, let there be two equations of degrees $m$ and $n$ in $y$:
$$\left\lbrace\begin{array}{l} y^m-Py^{m-1}+Qy^{m-2}-...=0\\ y^n-py^{n-1}+qy^{n-2}-...=0 \end{array}\right.$$
where $P$,$Q$,...,$p$,$q$,... are polynomials in $x$, such that:
$$\begin{array}{l} \deg P+m-1=\deg Q+m-2=...=m\\ \deg p+n-1=\deg q+n-2=...=n \end{array}$$
Suppose $A$,$B$,$C$,...,$a$,$b$,$c$,... are the roots of those two equations. The final equation resulting from the elimination of $y$ must be:
$$\left.\begin{array}{ll} &(A-a)(A-b)(A-c)...\\ \times&(B-a)(B-b)(B-c)...\\ \times&(C-a)(C-b)(C-c)...\\ \times&... \end{array}\right\rbrace=0$$
This expression is homogeneous of degree $mn$ in $A$,$B$,$C$,...,$a$,$b$,$c$,..., and symmetrical with regard to both sets of roots. We could thus obtain it in terms of the elementary symmetric functions of the roots, \textit{i. e.} in terms of the coefficients $P$,$Q$,...,$p$,$q$,...~; moreover the degree \textit{in $x$} of every coefficient coincides with its degree \textit{in the roots}, so that the degree in $x$ of the final equation is also $mn$.} can only be used on systems of two equations, Bézout adds that Newton's method has the same shortcoming:
\begin{quote}
In fact, Newton's method does not require to compare equations two by two.
Nevertheless, it has no advantage over Euler's and Cramer's method for systems
with more than two equations: then, the final equation is mixed with useless
factors.\footnote{\textit{cf.} \cite{bezout1764}, p.~290.}
\end{quote}
In 1779, Bézout does not mention ``Newton's method'' explicitly. He rather speaks of ``the principle of substitution'', and the way he uses this idea is quite ambiguous. Although the method itself is described as an effective means of calculating the final equation, it is diverted to the calculation of the \emph{degree} of the final equation. Bézout never seems to worry about the effective calculation of the final equation.%
\footnote{A simple example could illustrate this problem. Suppose $\deg f^{(2)}=\deg f^{(3)}=2$ and $\deg\phi^{(1)}=4$, and let us make as many terms as possible vanish in $\phi^{(1)}$, thanks to equations $(2)$ and $(3)$. In a naive interpretation of Newton's method, let us start by making the terms divisible by $x_3^2$ in $f^{(2)}$ and $\phi^{(1)}$ vanish thanks to $(3)$. If one then makes the terms divisible by $x_2^2$ in $\phi^{(1)}$ vanish thanks to $(2)$, other terms divisible by $x_3^2$ will reappear. A clever use of a ``monomial order'' would remedy this situation, but this was unknown to Bézout. Today, Groebner basis computations rely upon the idea of ordering monomials. About the history of Groebner bases, \textit{cf.} \cite{eisenbud1995} p.~337-338.} After his proof for the case of ``complete equations'', he adds: \label{BezoutPivot}
\begin{quote}
The idea of substitution is the nearest approximation to the elementary ideas of elimination in systems of equations of the first degree\footnote{For $t^{(1)}=t^{(2)}=t^{(3)}=1$, this method embodies what is now termed ``Gaussian elimination''.}. Although we could apply the same idea to incomplete equations, we are going to present another point of view, one that can be applied in a general way, whereas the principle of substitution would need modifications and particular attention if we were to keep to it.\footnote{\textit{cf.} \cite{bezout1779}, \S~54.}
\end{quote}
Bézout is announcing now a second view on elimination.
\paragraph{Using multiplier-polynomials}
In his second proof\footnote{as he does for ``incomplete equations of the first species'', \textit{cf.} \cite{bezout1779}, \S~59--67.}, Bézout explains that deleting terms in a given polynomial, thanks to equations $(2)$ and $(3)$, amounts to the use of new multiplier-polynomials:
\begin{quote}
\textit{We ask how many terms we could make vanish in a given polynomial, thanks to these equations, without introducing new terms.}
Suppose that there is only one equation; if, having multiplied it by a polynomial (...), we add the product (...) to the given polynomial: it is obvious that
\begin{description}
\item[1\textsuperscript{o}] this addition will not change anything to the value of the given polynomial.
\item[2\textsuperscript{o}] Supposing the multiplier-polynomial is such as not to introduce new terms, we shall be able to make vanish, in the given polynomial, as many terms as there are in the multiplier-polynomial, because each of them brings one coefficient (...)\footnote{\textit{cf.} \cite{bezout1779}, \S~60.}
\end{description}
\end{quote}
Thus, in order to make terms vanish in $\phi^{(1)}f^{(1)}$ thanks to $(2)$, Bézout is using a polynomial multiplier $\phi^{(2)}$ of degree $T-t^{(2)}$, and he studies the sum $\phi^{(1)}f^{(1)}+\phi^{(2)}f^{(2)}$.
Then, to make terms vanish thanks to $(3)$, he studies the sum $\phi^{(1)}f^{(1)}+\phi^{(3)}f^{(3)}$. As we can see, the order of operations for an effective calculation remains ambiguous.
\paragraph{The sum-equation}
The first and second views described above are, at best, heuristic. They can in no way be seen as rigorous proofs, nor as effective algorithms.
The third view on elimination in Bézout's treatise is explained in book II.
It is the only one that we shall refer to when summarizing the calculation
of the degree of the final equation, in the next section. At some point before or during the writing of his treatise, Bézout must have become aware of the following fact: the set of polynomials that are sums of products of $n$ given polynomials by multiplier-polynomials is \emph{closed under this operation}. That is to say, the result obtained after iterating several such operations is again a sum of products of the given polynomials by multiplier-polynomials. All steps become, so to speak, united:
\begin{quote}
From now on, we shall study elimination in a way that differs from what
precedes, but not essentially.
Let us think that every given equation is multiplied by a specific polynomial,
and that we add up all those products. The result is called ``sum-equation''.
The sum-equation becomes the final equation through the vanishing of all
terms containing any unknown that we should eliminate.
We shall now 1\textsuperscript{o} settle the form of every multiplier-polynomial. 2\textsuperscript{o} Determine how many coefficients, in each multiplier-polynomial, could be considered as useful for the elimination (...)\footnote{\textit{Cf.} \cite{bezout1779}, \S~224.}
\end{quote}
To calculate the degree of the final equation, one thus has to:
\begin{itemize}
\item multiply each equation $f^{(\alpha)}=0$ by a polynomial multiplier $\phi^{(\alpha)}$ with undetermined coefficients, of degree $T-t^{(\alpha)}$;
\item build the ``sum-equation'';
\item ask that all terms vanish, except 1, $x_1,\ x_1^2,...,\ x_1^D$.
\end{itemize}
In the event of a \emph{single} solution, the method of undetermined coefficients implies that:
$$
\text{nbr. of equations}\geq(\text{nbr. of undetermined coefficients})-(\text{nbr. of useless coeff.})
$$
\section{Bézout's demonstration: second species of incomplete equations}\label{Demonstration2}
We shall now summarize Bézout's demonstration concerning the degree of the eliminand of $n$ incomplete equations of the second species in $n$ unknowns:
$$\left\lbrace\begin{aligned}
f^{(1)}=0\\
f^{(2)}=0\\
\vdots\\
f^{(n)}=0
\end{aligned}\right.$$
where, for all $i$, the polynomial $f^{(i)}$ is a generic member of $C_{\leq t^{(i)},a^{(i)}}$ and the degrees $t^{(i)}$, $a^{(i)}$ satisfy the restrictive conditions given on page \pageref{secondspecies} above.
To start with, Bézout takes it for granted that there exists a unique ``final equation'' of lowest degree resulting from the elimination of $(n-1)$ unknowns, \textit{i. e.} an eliminand, that can be represented as:
$$\sum_{i=1}^n\phi^{(i)}f^{(i)}=0$$
where the $\phi^{(i)}$ are conveniently chosen ``multiplier-polynomials''
\footnote{\textit{Cf.} \cite{bezout1779} \S 224.}. This being said, we are not going to discuss its existence here. The important point is that Bézout is studying the linear map:
$$(\phi^{(1)},\phi^{(2)},...,\phi^{(n)})\longmapsto\sum_{i=1}^n\phi^{(i)}f^{(i)}$$
Doing so, he restricts himself to finite-dimensional vector subspaces by putting an upper bound on the degrees of the $\phi^{(i)}$. For any set of integers $T$, $A_1$, $A_2$,..., $A_n$, $B$ as above, let us call $(f^{(1)},...,f^{(n)})_{\leq T,A,B}$ the linear map defined by
$$\begin{array}{lrcl}
(f^{(1)},...,f^{(n)})_{\leq T,A,B}:&\displaystyle\bigoplus_{i=1}^nC_{\leq T-t^{(i)},A-a^{(i)},B-b^{(i)}}&\longrightarrow &C_{\leq T,A,B}\\
&(\phi^{(1)},\phi^{(2)},..., \phi^{(n)})&\longmapsto &\sum_{i=1}^n\phi^{(i)}f^{(i)}
\end{array}$$
For ease of notation, we shall sometimes write $f_{\leq T,A,B}=(f^{(1)},...,f^{(n)})_{\leq T,A,B}$. We shall also sometimes omit the indices $A$ and $B$ when we fear no confusion. The total number of undetermined coefficients in the $\phi^{(i)}$ polynomials is
$$\dim\bigoplus_{i=1}^nC_{\leq T-t^{(i)},A-a^{(i)},B-b^{(i)}}$$
Bézout says that this number is the number of ``useful coefficients'', plus the number of ``useless coefficients''\footnote{\textit{Cf.} \cite{bezout1779} \S 224.}. In other words:\label{rank}
$$\dim\bigoplus_{i=1}^nC_{\leq T-t^{(i)},A-a^{(i)},B-b^{(i)}}=\dim\text{im}f_{\leq T,A,B}+\dim\ker f_{\leq T,A,B}$$
Now $\text{im} f_{\leq T,A,B}$ is of special interest since any eliminand would belong to it for large enough values of $T$, $A$ and $B$. Moreover, if there exists an eliminand in $x_1$ of lowest degree $D$, then $\lbrace 1,x_1,x_1^2,...,x_1^{D-1}\rbrace$ is a linearly independent family in $C_{\leq T,A,B}/\text{im} f_{\leq T,A,B}$. Then one has $D\leq\dim\text{coker} f_{\leq T,A,B}$. We are thus naturally led to calculate\footnote{Let us re-phrase this argument in Bézout's terminology: according to his third view on elimination, the method of undetermined coefficients leads to the coefficients of the final equation, and these coefficients are the solution of a system of linear equations. In the event of a single solution, one should have:
\begin{align*}
\text{nbr. of undetermined coefficients}&=\dim\bigoplus\limits_{i=1}^nC_{\leq T-t^{(i)},A-a^{(i)},B-b^{(i)}}\\
\text{nbr. of useless coefficients}&=\dim\ker f_{\leq T,A,B}\\
\text{nbr. of linear equations}&=\dim C_{\leq T,A,B}-D\\
\text{nbr. of equations}&\geq(\text{nbr. of undetermined coefficients})-(\text{nbr. of useless coefficients})
\end{align*}
Hence, as written above:
$$D\leq \dim C_{\leq T,A,B}-\dim\bigoplus\limits_{i=1}^nC_{\leq T-t^{(i)},A-a^{(i)},B-b^{(i)}}+\dim\ker f_{\leq T,A,B}$$
As Bézout never proves the existence of the eliminand, what he actually calculates is the right side of this inequality, \textit{i. e.} $\dim\text{coker} f_{\leq T,A,B}$.
}
\begin{equation}\begin{split}
\dim\text{coker} f_{\leq T,A,B}=&\dim C_{\leq T,A,B}-\dim\text{im} f_{\leq T,A,B}\\
=&\dim C_{\leq T,A,B}-\dim\bigoplus\limits_{i=1}^nC_{\leq T-t^{(i)},A-a^{(i)},B-b^{(i)}}+\dim\ker f_{\leq T,A,B}
\end{split}\end{equation}
In \S~233, Bézout describes how to count the number of ``useless coefficients'', \textit{i. e.} $\dim\ker f_{\leq T,A,B}$. He says:
\begin{quote}
If one remembers what has been said in Book I, one will understand that
the number of useful coefficients in the first multiplier-polynomial of
the equations undergoing elimination will always be equal to the number
of coefficients in this polynomial, minus the number of terms that could be
made to vanish in this polynomial, thanks to the $n-1$ other equations,
$n$ being the total number of equations;
That the number of useful coefficients in the second multiplier-polynomial
will be the total number of coefficients of this polynomial, minus the
number of terms that could be made to vanish in this polynomial, thanks
to the $n-2$ last equations;
That the number of useful coefficients in the third multiplier-polynomial
will equal the number of terms of this polynomial, minus the number of
terms that could be made to vanish in this polynomial, thanks to the $n-3$
other equations; and so on up to the last one, where the number of
useful coefficients will be precisely equal to the number of
its terms.\footnote{\textit{Cf.} \cite{bezout1779}, \S~233.}
\end{quote}
The argument is inductive. Let us call $(1),...,(n)$ the $n$ equations.
In order to calculate the number of coefficients that could be made to
vanish in $\phi_1$ using equations $(2),...,(n)$, one should use
new multiplier-polynomials. To paraphrase what Bézout tells us:
\begin{align*}
\dim(\ker f_{\leq T,A,B}) = & \text{ nbr. of useless coefficients}\\
= & \text{ nbr. of coeff. to vanish in~}\phi_1\text{ thanks to }(2),...,(n)\\
& +\text{ nbr. of coeff. to vanish in~}\phi_2\text{ thanks to }(3),...,(n)\\
& +...\\
& +\text{ nbr. of coeff. to vanish in }\phi_{n-1}\text{ thanks to }(n)\\
= & \dim\left(\text{im}(f^{(2)},...,f^{(n)})_{\leq T-t^{(1)},A-a^{(1)},B-b^{(1)}}\right)\\
& +\dim\left(\text{im}(f^{(3)},...,f^{(n)})_{\leq T-t^{(2)},A-a^{(2)},B-b^{(2)}}\right)\\
& +...\\
& +\dim\left(\text{im}(f^{(n)})_{\leq T-t^{(n-1)},A-a^{(n-1)},B-b^{(n-1)}}\right)
\end{align*}
Alas, proving this lies beyond Bézout's means. He seems to have been
aware of the difficulty. The problem can be reduced to proving the following
statement:
\paragraph{Statement}\label{statement} For all $2\leq r\leq n$, for all $(\phi^{(1)},...,\phi^{(r)})$ with
$$\phi^{(1)}\in C_{\leq T-t^{(1)},A-a^{(1)},B-b^{(1)}},...,\phi^{(r)}\in C_{\leq T-t^{(r)},A-a^{(r)},B-b^{(r)}},$$ such that
$$\sum_{i=1}^r\phi^{(i)}f^{(i)}=0,$$
we have
$$\phi^{(1)}\in\text{im}(f^{(2)},...,f^{(r)})_{\leq T-t^{(1)},A-a^{(1)},B-b^{(1)}}.$$
Although Bézout goes to great lengths in studying this situation\footnote{\label{fictitious}See \cite{bezout1779}, \S~107-118, where he tries to convince his reader that it is impossible to increase the number of terms vanishing in $\phi^{(1)}$ by ``fictitious introduction'' (\textit{introduction fictive}) of terms of higher degree.}, there is, as far as we can tell, no proof of this statement in his treatise. For the time being, let us admit this statement (see section \ref{Toric} for the proof). Then we can write:
\begin{align*}
\dim\ker(f^{(1)},...,f^{(n)})_{\leq T,A,B}=&\dim\text{im}(f^{(2)},...,f^{(n)})_{\leq T-t^{(1)},A-a^{(1)},B-b^{(1)}}\\
&+\dim\ker(f^{(2)},...,f^{(n)})_{\leq T,A,B}
\end{align*}
By induction on the number of equations, we thus have, as written above:
\begin{equation}\begin{split}
\dim(\ker(f^{(1)},...,f^{(n)})_{\leq T,A,B}) = & \dim\left(\text{im}(f^{(2)},...,f^{(n)})_{\leq T-t^{(1)},A-a^{(1)},B-b^{(1)}}\right)\\
& +\dim\left(\text{im}(f^{(3)},...,f^{(n)})_{\leq T-t^{(2)},A-a^{(2)},B-b^{(2)}}\right)\\
& +...\\
& +\dim\left(\text{im}(f^{(n)})_{\leq T-t^{(n-1)},A-a^{(n-1)},B-b^{(n-1)}}\right)
\end{split}\end{equation}
Let us now use the following notation for finite differences\footnote{
Bézout's own notation for higher order finite differences could be defined
by recurrence as follows:
$$
d^{r}P(t)\cdots\left(\begin{array}{c} t\\
-t_{1},...,-t_{r}\end{array}\right)=
d^{r-1}P(t)\cdots\left(\begin{array}{c} t\\
-t_{2},...,-t_{r}\end{array}\right)-
d^{r-1}P(t-t_{1})\cdots\left(\begin{array}{c}t\\
-t_{2},...,-t_{r}\end{array}\right)
$$
The relation between his notation and ours could thus be expressed by:
$$
d^{r}P(t)\cdots\left(\begin{array}{c}
t\\
-t_{1},...,-t_{r}\end{array}\right)=\Delta_{t_{1}}...\Delta_{t_{r}}P(t)
$$
} of a given polynomial $P(T,A,B)$:
$$\Delta_{t,a,b}P(T,A,B)=P(T,A,B)-P(T-t,A-a,B-b)$$
From $(6)$ and $(7)$, by gathering terms, one finds:
\begin{align*}
\dim\text{coker} f_{\leq T} = & \dim C_{\leq T}-\sum\limits _{i=2}^{n}\dim C_{\leq T-t^{(i)}}+\dim\left(\text{im}(f^{(3)},...,f^{(n)})_{\leq T-t^{(2)}}\right)\\
&+...+\dim\left(\text{im}(f^{(n)})_{\leq T-t^{(n-1)}}\right)\\
& -\left(\dim C_{\leq T-t^{(1)}}-\dim\left(\text{im}(f^{(2)},...,f^{(n)})_{\leq T-t^{(1)}}\right)\right)\\
= & \dim\left(\text{coker}(f^{(2)},...,f^{(n)})_{\leq T}\right)-\dim\left(\text{coker}(f^{(2)},...,f^{(n)})_{\leq T-t^{(1)}}\right)\\
= & \Delta_{t^{(1)}}\dim\left(\text{coker}(f^{(2)},...,f^{(n)})_{\leq T}\right)\\
= & \Delta_{t^{(1)}}\Delta_{t^{(2)}}\dim\left(\text{coker}(f^{(3)},...,f^{(n)})_{\leq T}\right)\\
= & ...\\
= & \Delta_{t^{(1)}}\Delta_{t^{(2)}}...\Delta_{t^{(n)}}\dim C_{\leq T}
\end{align*}
where we omit the indices $A,B$ for brevity. For example, the third equality should read:
$$\dim\left(\text{coker}(f^{(1)},...,f^{(n)})_{\leq T,A,B}\right)=\Delta_{t^{(1)},a^{(1)},b^{(1)}}\dim\left(\text{coker}(f^{(2)},...,f^{(n)})_{\leq T,A,B}\right)$$
This recurrence formula is the heart of Bézout's computations. As $\dim C_{\leq T,A,B}$ is a polynomial of degree $n$ in $T,A,B$, after applying the operator $\Delta$ $n$ times, one must obtain a constant independent of $T,A,B$. Eventually, Bézout finds\footnote{More details of this calculation will be given in the proof of prop.~8, section~\ref{Toric}.}:
\begin{multline*}
\dim\text{coker}(f^{(1)},...,f^{(n)})_{\leq T,A,B}\\
=\prod_{i=1}^nt^{(i)}-\sum_{j=1}^n\prod_{i=1}^n(t^{(i)}-a_j^{(i)})+\prod_{i=1}^n(t^{(i)}-b^{(i)})-\sum_{i=1}^n\left\lbrack(a_1^{(i)}+a_2^{(i)}-b^{(i)})\prod_{j\neq i}(t^{(j)}-b^{(j)})\right\rbrack
\end{multline*}
In 1782, three years after Bézout, Waring takes over this formula in the preface
to the second edition of his \textit{Meditationes algebraicae}\footnote{Waring
writes (p.~\textit{xvii}-\textit{xx} of the preface to the second edition):
\begin{quote}
if there are $(h)$ equations of respective dimensions $(n,m,l,k,\text{\&c.})$,
involving as many unknown quantities $(x,y,z,v,\text{\&c.})$; and if
$p$, $q$, $r$, $s$, $\text{\&c.}$; $p'$, $q'$, $r'$, $\text{\&c.}$;
$p''$, $q''$, \&c. are the greatest dimensions to which the
unknown quantities $x$, $y$, $z$, $v$, \&c. ascend in the respective
equations of dimensions $(n,m,l,k,\text{\&c.})$; then the equation whose
root is $x$ or $y$ or $z$, \&c. ascends to no more than
$n\times m\times l\times k\times\text{\&c.}-(n-p)\times(m-p')\times(l-p'')\times\text{\&c.}-(n-q)\times(m-q')\times(l-q'')\times\text{\&c.}-(n-r)\times(m-r')\times\text{\&c.}-\text{\&c.}=P$
dimensions; if moreover the dimensions of the quantities $(x \text{ \& } y)$
taken together do not exceed the dimensions $a,a',a'',\text{\&c.}$ in the
aforesaid equations, then the equation whose root is $x$, \&c. has no more
than $P+(n-a)\times(m-a')\times\text{\&c.}-(p+q-a)\times(p'-a')\times\text{\&c.}$
\end{quote}
Waring forgets to mention the ``restrictive conditions'' (see
p.~\pageref{secondspecies} above). With his notations, those conditions require that:
$$\begin{array}{llll}
r+a\geq n & r'+a'\geq m & r''+a''\geq l & ...\\
s+a\geq n & s'+a'\geq m & s''+a''\geq l\\
\vdots & \vdots & \vdots
\end{array}$$
}.
Most importantly, in 1779, Bézout had also noticed that this $n$-th order finite difference could be written as an alternating sum\footnote{\textit{Cf.} \cite{bezout1779} p.~43.}:\label{alternatesum}
$$
\Delta_{t^{(1)}}\Delta_{t^{(2)}}...\Delta_{t^{(n)}}\dim C_{\leq T}=\dim C_{\leq T}-\dim\bigoplus_{i}C_{\leq T-t^{(i)}}+\dim\bigoplus_{i<j}C_{\leq T-t^{(i)}-t^{(j)}}-...
$$
Later, Cayley would shed light upon this alternating sum.
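In modern notation, the alternating sum is the inclusion--exclusion expansion $\Delta_{t^{(1)}}\cdots\Delta_{t^{(n)}}P(T)=\sum_{S\subseteq\{1,\dots,n\}}(-1)^{|S|}\,P\bigl(T-\sum_{i\in S}t^{(i)}\bigr)$. It is easy to check numerically in the complete case, where $\dim C_{\leq T}=\binom{T+n}{n}$ (a modern verification with illustrative parameters):

```python
from math import comb
from itertools import combinations

def alternating_sum(P, degrees, T):
    """Inclusion-exclusion expansion of the iterated finite difference:
    sum over all subsets S of the degrees of (-1)^|S| * P(T - sum(S))."""
    total = 0
    for r in range(len(degrees) + 1):
        for S in combinations(degrees, r):
            total += (-1) ** r * P(T - sum(S))
    return total

degrees, T = (2, 3, 4), 20
P = lambda s: comb(s + 3, 3)           # dim C_{<=s} for n = 3 unknowns
print(alternating_sum(P, degrees, T))  # -> 24 = 2 * 3 * 4
```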
\section{Complete equations and the first species of incomplete equations}
As was said above, one could, from the formula for the degree of the eliminand of $n$ incomplete equations of the second species, derive the formulae for complete equations and for the first species of incomplete equations.
For $n$ incomplete equations of the first species, \textit{i. e.} equations in $x_1,x_2,...,x_n$ of the form:
$$
\sum\limits_{\begin{array}{l}
\scriptstyle k_{1}\leq a_{1},\ k_{2}\leq a_{2},\ k_{3}\leq a_{3},...\\
\scriptstyle k_{1}+k_{2}+k_{3}+...\leq t\end{array}}{u_kx^k}=0
$$
where
$$\left\lbrace\begin{array}{l}
(\forall i)\ a_i\leq t\\
(\forall i\neq j)\ a_i+a_j>t
\end{array}\right.$$
the degree of the final equation resulting from the elimination of $x_{2},x_{3},...$ is:\footnote{This formula is still quoted in 1900 in Netto's \textit{Vorlesungen über Algebra} \cite{netto1900}, \S~419.}
$$D=\prod_{i=1}^nt^{(i)}-\sum_{i=1}^n\prod_{j=1}^n(t^{(j)}-a_i^{(j)})$$
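For $n=2$, this formula can be tested against the finite-difference recipe of the previous section: count the monomials of $C_{\leq T,A}$ by brute force, apply $\Delta_{t^{(1)},a^{(1)}}\Delta_{t^{(2)},a^{(2)}}$, and compare with $D$. The sketch below is a modern reconstruction, with illustrative degrees satisfying the restrictive conditions ($a_i\leq t$ and $a_1+a_2>t$ for each equation):

```python
def dim_first_species(T, A):
    """dim C_{<=T,A} for n = 2: lattice points with
    0 <= k_i <= A_i and k_1 + k_2 <= T."""
    A1, A2 = A
    return sum(1 for k1 in range(A1 + 1) for k2 in range(A2 + 1)
               if k1 + k2 <= T)

def delta(F, t, a):
    """Delta_{t,a} F(T, A) = F(T, A) - F(T - t, A - a)."""
    return lambda T, A: F(T, A) - F(T - t, tuple(x - y for x, y in zip(A, a)))

t1, a1 = 3, (2, 2)   # equation 1: degree 3, partial degree bounds (2, 2)
t2, a2 = 2, (2, 1)   # equation 2: degree 2, partial degree bounds (2, 1)

# Bezout's first-species formula for n = 2.
D_formula = t1 * t2 - (t1 - a1[0]) * (t2 - a2[0]) - (t1 - a1[1]) * (t2 - a2[1])
# The same number via the double finite difference, at large (T, A).
D_diff = delta(delta(dim_first_species, t2, a2), t1, a1)(10, (8, 7))
print(D_formula, D_diff)  # -> 5 5
```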
Now specifying, for all $i,j$, $a_j^{(i)}=t^{(i)}$, one obtains the well-known formula for the case of $n$ ``complete equations'', \textit{i. e.} generic of their degrees:
$$D=\prod_{i=1}^nt^{(i)}$$
In fact, for general $n$, only this case of Bézout's theorem would become the
object of rigorous study in the XIXth century (\textit{cf.} Serret \cite{serret1866}, Schmidt \cite{schmidt1886}, Netto \cite{netto1900} vol.~2).
\section{Bézout's theorem for three incomplete equations of the third species}\label{Demonstration3}
As for the ``third species of incomplete equations'', when $n=3$, Bézout hits another major problem: $\dim C_{\leq T,A,B}$ is no longer a polynomial in $T,A,B$. It is, so to speak, only piecewise polynomial. Let us write
$$H_1=T-B_2-B_3+A_1,\qquad H_2=T-B_3-B_1+A_2,\qquad H_3=T-B_1-B_2+A_3.$$
For each of the eight ``forms'' corresponding to different algebraic signs of those three quantities, $\dim C_{\leq T,A,B}$ is a different polynomial in $T,A,B$. More precisely, calling $P_i(T,A,B)=\dim C_{\leq T,A,B}$ when the values of $T,A,B$ belong to the $i$-th form, one has:
\begin{description}
\item[1st form] ($H_1\leq0,\quad H_2\leq0,\quad H_3\leq0$):
\begin{align*}
P_1(T,A,B) = & {3+T \choose 3}-\sum_{i=1}^3{3+T-A_i-1 \choose 3}+\sum_{i=1}^3{3+T-B_i-2 \choose 3}\\
& -\sum_{i=1}^3\left[(A_1+A_2+A_3-A_i-B_i){2+T-B_i-1 \choose 2}\right]
\end{align*}
\item[2nd form] ($H_1\leq0,\quad H_2\leq0,\quad H_3\geq0$):
$$P_2(T,A,B)=P_{1}(T,A,B)+{3+T+A_3-B_1-B_2-2 \choose 3}$$
\item[3rd form] ($H_1\geq0,\quad H_2\leq0,\quad H_3\leq0$):
$$P_3(T,A,B)=P_{1}(T,A,B)+{3+T+A_1-B_2-B_3-2 \choose 3}$$
\item[4th form] ($H_1\geq0,\quad H_2\leq0,\quad H_3\geq0$):
$$P_4(T,A,B)=P_{3}(T,A,B)+{3+T+A_3-B_1-B_2-2 \choose 3}$$
\item[5th form] ($H_1\geq0,\quad H_2\geq0,\quad H_3\leq0$):
$$P_5(T,A,B)=P_{3}(T,A,B)+{3+T+A_2-B_1-B_3-2 \choose 3}$$
\item[6th form] ($H_1\geq0,\quad H_2\geq0,\quad H_3\geq0$):
$$P_6(T,A,B)=P_{5}(T,A,B)+{3+T+A_3-B_1-B_2-2 \choose 3}$$
\item[7th form] ($H_1\leq0,\quad H_2\geq0,\quad H_3\leq0$):
$$P_7(T,A,B)=P_{1}(T,A,B)+{3+T+A_2-B_1-B_3-2 \choose 3}$$
\item[8th form] ($H_1\leq0,\quad H_2\geq0,\quad H_3\geq0$):
$$P_8(T,A,B)=P_{7}(T,A,B)+{3+T+A_3-B_1-B_2-2 \choose 3}$$
\end{description}
If the argument developed above for the second species of incomplete equations is also valid for the third species, then one should have, as above:
$$\dim\text{coker}(f^{(1)},...,f^{(n)})_{\leq T,A,B}=\Delta_{t^{(1)},a^{(1)},b^{(1)}}\Delta_{t^{(2)},a^{(2)},b^{(2)}}...\Delta_{t^{(n)},a^{(n)},b^{(n)}}\dim C_{\leq T,A,B}$$
The rest of the computation could only be done under the assumption that all vector-spaces $C_{\leq...}$ actually involved in this expression belong to the same ``form''. Let us write
$$D_i=\Delta_{t^{(1)},a^{(1)},b^{(1)}}\Delta_{t^{(2)},a^{(2)},b^{(2)}}...\Delta_{t^{(n)},a^{(n)},b^{(n)}}P_i(T,A,B)$$
Bézout calculates $D_1,D_2,...,D_8$. The actual eight formulae occupy no less than eight full pages of the treatise\footnote{\textit{Cf.} \cite{bezout1779} \S 119-127.}. He then proposes a test, or rather, ``symptoms''\footnote{See the ``symptoms enabling us to recognize, among the different expressions of the value of the degree of the final equation, those that one should choose or reject'', in \cite{bezout1779} \S 117. Here again, Bézout's own justifications lack evidence; they rely upon undemonstrated facts about the sum-equation, as in footnote \ref{fictitious} p.~\pageref{fictitious} above.} to reject some of those eight values. The other values are ``admissible''; as such, all of them must be equal to the degree of the eliminand, according to Bézout. In section \ref{Toric} below, we shall prove that Bézout's choice was right.
\section{The theory of the resultant in XIXth century}
We have just presented Bézout's slowly matured treatise and the fertile historical context of its publication. We are struck by the lack of immediate posterity of this book: sixty years separate the publication of Bézout's treatise from the first revival of what was reckoned, in the XIXth century, as the ``theory of elimination''. Bézout's treatise was complex; it was clearly perceived as such by one of its early readers, and this fate has followed it until today. Poisson recognized the importance of Bézout's work but immediately pointed out the gap between the strength of the theorem and the ``difficulties'' of its demonstration:
\begin{quote}
This important theorem is Bézout's, but the way he proves it is neither direct nor simple; nor is it devoid of any difficulty.\footnote{\textit{Cf.} \cite{poisson1802} p. 199. See also Brill and Noether, saying about Bézout's book that it is ``as well-known as lacking readers'', and that, by the time of Jacobi, ``most of it had fallen into oblivion'', \textit{cf.} \cite{brill1892} p. 143 and 147.}
\end{quote}
Three mathematicians produced, so to speak, a new beginning in elimination theory between 1839 and 1848: their names are Sylvester, Hesse and Cayley\footnote{Some historians and mathematicians have said that Sylvester and Richelot (1808-1875, a student of Jacobi) discovered the ``dialytic'' method of elimination, although this method was not very different from Bézout's method when dealing with only two equations; it is also said that Hesse re-discovered this method in 1843. \textit{Cf.} Max Noether, \cite{noether1898}, p.~136, about Sylvester \cite{sylvester1839}, \cite{sylvester1840} and \cite{sylvester1841}, and Richelot \cite{richelot1840}. Eberhard Knobloch \cite{knobloch1974,knobloch2001} noticed that Leibniz already knew this method; Leibniz was perhaps even closer to Sylvester's ideas than Euler and Bézout were. In what follows, we shall only draw a comparison between the works of Sylvester, Richelot and Hesse, and those of Bézout from 1779 concerning the \emph{sum-equation}, with an emphasis on the case of $n$ equations when $n>2$.}. The main object of study is, rather than the eliminand, the ``resultant'' of $n$ homogeneous polynomials in $n$ indeterminates.
Before entering into the works of those three scholars, it is to be noted that two special cases progressed in the first half of the XIXth century: linear equations, thanks to the theory of determinants, and the case of two equations in one or two unknowns. When eliminating one unknown between two equations in two unknowns, one obtains the eliminand. When eliminating one unknown between two equations in one unknown (or between two \emph{homogeneous} equations in two unknowns, which is the same thing), one obtains the resultant. The discriminant is the resultant of a polynomial and its derivative. The interest in the discriminant is motivated by the study of the Euclidean algorithm for polynomials, following Sturm's research on the roots of polynomials over $\mathbb{R}$. The case of two equations also benefited from the methods of analysis, for example in Ossian Bonnet's works culminating in 1847, when he defined the intersection multiplicity of two curves at a point.
\paragraph{Sylvester} Sylvester's researches were stimulated by Sturm's theorem. When applying the Euclidean division algorithm to two polynomials in one indeterminate, the successive remainders are also called ``Sturm functions''. In 1840, Sylvester gives a formula in terms of determinants to calculate Sturm functions. As was known long before, the last remainder, being of degree 0, can be seen as the result of the elimination of the indeterminate. This is what Sylvester calls the ``dialytic method of elimination''. There is no demonstration in this short article. Maybe Sylvester did not know, in 1840, of Euler's and Bézout's works on elimination. He does not refer to them; later, in 1877, he himself says that he discovered the dialytic method while teaching a pupil\footnote{In \cite{sylvester1877}, Sylvester says: ``I remember,...,
how... when a very young professor, fresh from the University of Cambridge,
in the act of teaching a private pupil the simpler parts of Algebra,
I discovered the principle now generally adopted into the higher text
books, which goes by the name of the \textit{Dialytic Method of Elimination}''.}.
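For two polynomials in a single unknown, the dialytic method amounts to what is now called the Sylvester matrix: stack the shifted coefficient rows of both polynomials and take the determinant, which is the resultant; it vanishes exactly when the polynomials have a common root. A minimal modern sketch:

```python
def det(M):
    """Determinant by Laplace expansion along the first row
    (adequate for the small matrices arising here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def sylvester_resultant(f, g):
    """Resultant of two univariate polynomials given as coefficient
    lists [leading, ..., constant], via the (m+n) x (m+n) Sylvester matrix."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = [[0] * i + f + [0] * (size - m - 1 - i) for i in range(n)]
    rows += [[0] * i + g + [0] * (size - n - 1 - i) for i in range(m)]
    return det(rows)

f = [1, -3, 2]                          # x^2 - 3x + 2 = (x - 1)(x - 2)
print(sylvester_resultant(f, [1, -3]))  # x - 3: no common root -> 2
print(sylvester_resultant(f, [1, -1]))  # x - 1: common root x = 1 -> 0
```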
In 1841, Sylvester develops the dialytic method in the case of three quadratic \emph{homogeneous} equations in three unknowns $x$, $y$, $z$. The interest in homogeneous equations is crucial to our subject, and it may well be explained by the fusion of projective geometry and algebraic geometry under the influence of Möbius and Plücker. Sylvester develops several versions of his dialytic method. In one of them\footnote{\textit{Cf.} \cite{sylvester1841}, example 4, p.~64; other versions are given in footnotes.}, he multiplies each equation by the three monomials of degree 1. Thus, for the system
$$\left\lbrace\begin{array}{l}
U=0\\
V=0\\
W=0
\end{array}\right.$$
one obtains 9 equations of degree 3:
$$
\left\lbrace \begin{array}{lll}
xU=0 & yU=0 & zU=0\\
xV=0 & yV=0 & zV=0\\
xW=0 & yW=0 & zW=0\end{array}\right.
$$
As there exist 10 monomials of degree 3, we are one equation short of being able to apply the dialytic method and build a determinant. Sylvester uses the jacobian determinant to obtain a tenth equation of degree 3:
$$
\frac{1}{8}\times\begin{vmatrix}\dfrac{\partial U}{\partial x} & \dfrac{\partial U}{\partial y} & \dfrac{\partial U}{\partial z}\\[1em]
\dfrac{\partial V}{\partial x} & \dfrac{\partial V}{\partial y} & \dfrac{\partial V}{\partial z}\\[1em]
\dfrac{\partial W}{\partial x} & \dfrac{\partial W}{\partial y} & \dfrac{\partial W}{\partial z}\end{vmatrix}=0
$$
One thus obtains 10 equations in the 10 monomials of degree 3. If one considers those monomials as the 10 independent unknowns of a system of 10 linear equations, elimination is reduced to the calculation of a single determinant. Compared with the method of the sum-equation, this method spares the use of undetermined coefficients. Moreover, it is a \emph{symbolic} method. It uses the ambivalence of the symbolic expression of the monomials: each monomial is both a monomial in the unknowns of the initial system, and an independent unknown of a system of linear equations. This allows the transfer of the symbolic methods of linear algebra (determinants, and soon, matrices) to the algebra of forms of higher degree\footnote{Sylvester is probably referring to this in particular when he says that the ``great principle of dialysis, originally discovered in the theory of elimination, in one shape or another pervades the whole theory of concomitance and invariants'', \textit{cf.} \cite{sylvester1852a}, p.~294.}.
Contemporary readers must have been surprised by the use of the jacobian determinant. In general, for equations of higher degree and systems with more unknowns, it still remains to explain the appropriate choice of linear equations. In an article \cite{sylvester1841b} published in the same year, Sylvester gives a general method. We are not going to describe it entirely. Suffice it to say that, after having obtained a first set of equations called \emph{augmentatives} by multiplying each initial equation by the monomials (like the 9 equations of degree 3 above), Sylvester builds other equations called \emph{secondary derivatives}, as follows. He writes each of the $n$ initial equations in the form $x^{\alpha}F+y^{\beta}G+z^{\gamma}H+...$ where $x$, $y$, $z$,... are the $n$ unknowns to eliminate, $F$, $G$, $H$,... are polynomials, and $\alpha$, $\beta$, $\gamma$,... is any system of integers allowing such a representation. He thus obtains a system of $n$ equations:
$$
\left\lbrace \begin{array}{l}
x^{\alpha}F+y^{\beta}G+z^{\gamma}H+...=0\\
x^{\alpha}F'+y^{\beta}G'+z^{\gamma}H'+...=0\\
\vdots\end{array}\right.
$$
The following determinant is one of the secondary derivatives:
$$
\begin{vmatrix}
F & G & H & \cdots\\
F' & G' & H' & \cdots\\
\vdots\end{vmatrix}=0
$$
The many choices possible for $\alpha$, $\beta$, $\gamma$, \textit{etc.} allow as many secondary derivatives.
When $n=3$ and $\deg U=\deg V=\deg W=m$, taking $\alpha=\beta=\gamma=1$ and
\begin{align*}
F=\frac{1}{m}\frac{\partial U}{\partial x},\quad G=\frac{1}{m}\frac{\partial U}{\partial y},\quad H=\frac{1}{m}\frac{\partial U}{\partial z}\\
F'=\frac{1}{m}\frac{\partial V}{\partial x},\quad G'=\frac{1}{m}\frac{\partial V}{\partial y},\quad H'=\frac{1}{m}\frac{\partial V}{\partial z}\\
F''=\frac{1}{m}\frac{\partial W}{\partial x},\quad G''=\frac{1}{m}\frac{\partial W}{\partial y},\quad H''=\frac{1}{m}\frac{\partial W}{\partial z}
\end{align*}
one finds the jacobian determinant, up to a constant factor.
Let us now observe the easy case $n=2$, $m=\deg U=\deg V$. Sylvester does not even mention it in his article. If $1\leq\alpha\leq m$ and $\beta=0$, taking as $G$ (resp. $G'$) the sum of terms of $U$ (resp. $V$) of degree less than $\alpha$ in $x$, one has, for each value of $\alpha$, one equation of degree $(m-1)$ in $x$:
$$
\begin{vmatrix}
F & G\\
F' & G'\end{vmatrix}=0
$$
Hence there is no need for augmentatives. The expression obtained by eliminating dialytically between the $m$ equations of degree $(m-1)$ is none other than the determinant of a matrix that Bézout had already studied \cite{bezout1764} in 1764. Bézout's matrix might thus have inspired Sylvester's general method. This matrix also played an important role in his famous later memoir \textit{On a theory of syzygetic relations}\footnote{\textit{Cf.} \cite{sylvester1853a}.}.
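The matrix Bézout studied in 1764 is known today as the Bézoutian: for $f$ and $g$ of common degree $m$, the quotient $\bigl(f(s)g(t)-f(t)g(s)\bigr)/(s-t)$ is a polynomial of degree $m-1$ in each of $s$ and $t$, and the determinant of its coefficient matrix equals the resultant up to sign. A hedged sketch in sympy (the construction is Bézout's, the code and the sample polynomials are ours):

```python
from sympy import symbols, cancel, expand, Matrix, Poly, resultant

x, s, t = symbols('x s t')
f = x**2 - 1
g = x**2 - 4
m = 2  # common degree, as in Sylvester's "easy case"

# Cayley's quotient: a polynomial of degree m-1 in each of s and t
B = expand(cancel((f.subs(x, s) * g.subs(x, t)
                   - f.subs(x, t) * g.subs(x, s)) / (s - t)))
# Bezout matrix: the coefficient of s^i t^j in B sits in row i, column j
bez = Matrix(m, m, lambda i, j: Poly(B, s, t).coeff_monomial(s**i * t**j))
print(abs(bez.det()), abs(resultant(f, g, x)))  # both equal 9
```

Here $f=x^2-1$, $g=x^2-4$ give $g(1)g(-1)=9$, matching the $2\times 2$ Bézoutian determinant in absolute value.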
Still in 1841, Sylvester also considers the possibility of building augmentatives of degree at least $\sum t_{i}-n+1$, where $t_{i}$ is the degree of the $i$-th initial equation. In this case, the augmentatives suffice to build a determinant, and there is no need for secondary derivatives. This is the basis of a method developed by Cayley in 1848 (see below).
One must say that Sylvester's elimination method often brings out a superfluous factor; Sylvester does not give any means of detecting and isolating this factor. His method leads directly to the resultant only in a few cases, such as the two cases mentioned above for two or three equations. Despite this drawback, Sylvester has clearly circumscribed a domain of research not limited to the resultant or the eliminand.\footnote{In 1997, Jean-Pierre Jouanolou gave a complete study of the secondary derivatives that he calls ``formes de Sylvester'', in \cite{jouanolou1997}, \S~3.10.}
\paragraph{Hesse}
Applying elimination to the study of plane cubics in 1844, Hesse goes back to the formalism of Bézout's ``sum-equation''. He knew of Bézout's treatise, and he does mention it\footnote{\textit{Cf.} \cite{hesse1844}. Hesse also knew of Richelot and Sylvester, and of the works of Euler. He extends to $n>2$ equations the Euler-Cramer method using symmetric functions, as Poisson had done in 1802, although he probably did not know of Poisson's article.}. Let there be three quadratic equations in two unknowns:
$$\left\lbrace\begin{array}{l}
U=0\\
V=0\\
W=0
\end{array}\right.$$
In order to eliminate the two unknowns, Bézout would have used multiplier-polynomials of degree 2. Following the way of thinking of XIXth century algebraists, let us homogenize the sum-equation of degree 4, thus getting a third unknown $z$. Then there exist multiplier-polynomials $A$, $B$, $C$ such that:
$$AU+BV+CW=z^4R$$
where $R=0$ is the resultant of $U$, $V$, $W$\footnote{It is to be noted that Sylvester had also mentioned this sum-equation, translated in his own dialytic formalism where one would multiply by monomials of degree 2, in a footnote in \cite{sylvester1841}, p.~64-65.}. We must keep in mind this sum-equation when studying the other equations derived by Hesse:
\begin{itemize}
\item First of all, Hesse recasts as a sum-equation Sylvester's method using the jacobian determinant. If $\phi$ is the jacobian determinant of $U$, $V$, $W$, one can obtain a sum-equation of degree 3 thanks to multiplier-polynomials of degree 1:
$$AU+BV+CW+\delta\phi=z^3R$$
where $\deg A=\deg B=\deg C=1$ and $\delta$ is a constant\footnote{\textit{Cf.} \cite{hesse1844}, \S~8-10.}.
\item Hesse then observes that the jacobian itself can be obtained by a sum-equation with multiplier-polynomials of degree 2:
$$AU+BV+CW=z\phi$$
\item He also proves that the partial derivatives of $\phi$ can be obtained in the same fashion, in the form
$$AU+BV+CW+\delta\phi=z\frac{\partial\phi}{\partial x}$$
\item One can thus calculate the resultant $R$ with multiplier-polynomials of degree 0, thanks to the partial derivatives of the jacobian determinant
\footnote{\textit{Cf.} \cite{hesse1844}, \S~11-14.}:
$$aU+bV+cW+d\frac{\partial\phi}{\partial x}+e\frac{\partial\phi}{\partial y}+f\frac{\partial\phi}{\partial z}=z^2R$$
\end{itemize}
As modern elimination-theorists would say\footnote{As we shall see further, with Hurwitz \cite{hurwitz1913}, one defines the ideal of \emph{Trägheitsformen} of $(U,V,W)$ as
$$\mathfrak{m}^{-\infty}(U,V,W)=\bigcup\limits_{k\geq0}\left\lbrace f\ \mid\ \mathfrak{m}^kf\subset(U,V,W)\right\rbrace$$
where $\mathfrak{m}=(x,y,z)$. This language is still in use today, \textit{cf.} \cite{jouanolou1991}.}, Hesse thus studied the resultant, the jacobian, and the partial derivatives of the jacobian, as \emph{Trägheitsformen}, or inertia forms, of the ideal $(U,V,W)$. We shall not describe how Hesse applied these calculations to the case where $U$, $V$, $W$ are the partial derivatives of the homogeneous polynomial defining a plane cubic.
\paragraph{Cayley} The concept of ``sum-equation'' appears again in 1847, in an article by Cayley, \textit{On the theory of involution in geometry}. Cayley says that a homogeneous polynomial $\Theta$ of degree $r$ is ``in involution'' with homogeneous polynomials $U$, $V$, ... of degrees $m,n,...$ if
$$\Theta=AU+BV+...,$$
\textit{i. e.} if $\Theta\in(U,V,...)_r$, where $(U,V,...)_r$ is the component of degree $r$ in the homogeneous ideal $(U,V,...)$. He says that ``there is also an analytical application of the theory, of considerable interest, to the problem of elimination between any number of [homogeneous] equations containing the same number of variables''. He conceives of this elimination as the result of Sylvester's dialytic method, but he traces back the origin of his research to ``Cramer's paradox'' and related works by Euler, Cramer, Plücker, Jacobi and Hesse. In this article and another one of 1848, according to some mathematicians of our day, ``Cayley in fact laid out the foundations of modern homological algebra''\footnote{\textit{Cf.} \cite{gelfand1994} p.~\textit{ix}.}.
Cayley asks for the number of ``arbitrary constants'' in $\Theta$. In modern language, this number is
$$\dim(U,V,...)_r$$
Let us follow Cayley's application of the dialytic method. Let us multiply $U$ by each monomial of degree $r-m$ (for large enough $r$), and $V$ by each monomial of degree $r-n$, and so on. Let $C$ be the ring of polynomials. We shall use the compact notation:
$$\sum\dim C_{r-m}=\dim C_{r-m}+\dim C_{r-n}+...$$
This number is thus the number of linear equations to be solved, when using the dialytic method. The $\dim(C_r)$ monomials of degree $r$ will be the independent unknowns of this system of linear equations. We shall represent this system by a matrix where each column corresponds to one equation\footnote{The matrices mentioned here and below, and the solution of this problem of linear algebra, only appeared in a second article \cite{cayley1848} published by Cayley in 1848. Moreover, we should rather speak of a ``matricial figure'' than of a ``matrix'', because this mathematical object was not yet seen as an \emph{operator}, and its properties were not yet completely developed in the 1840's. Figures 1, 2 and 3, at the end of our article, are loose reproductions of figures in \cite{cayley1848}.}. This matrix (\textit{cf.} figure 1 at the end of this article) is the matrix of a linear map:
$$f_1:\bigoplus C_{r-m}\longrightarrow C_r$$
Going back to the concept of ``sum-equation'' or ``involution'' as Cayley used to say, one sees that $f_1$ is defined by :
$$f_1(A,B,...)=AU+BV+...$$
Cayley notes that the columns of the matrix are not linearly independent, and he asks how many independent columns there are, \textit{i. e.}
$$\dim(\text{im}f_1)$$
In order to eliminate dialytically, one needs $\dim C_r$ independent columns; then one can extract a square submatrix and build the determinant. To check that the elimination is possible, Cayley sets out to calculate
$$N=\dim C_r-\dim(\text{im}f_1)$$
The relations of linear dependency between columns are given by families of coefficients that Cayley writes as \emph{rows} of coefficients \emph{under} the original matrix (\textit{cf.} figure 2). In terms of the sum-equation, these relations constitute the kernel of $f_1$. Cayley admits \emph{without proof}\footnote{\textit{Cf.} \cite{cayley1847a}, p.~261, ``the number $N$ must be diminished by...''.} that $\ker f_1$ is generated by elements of $\bigoplus C_{r-m}$ of the following form:
$$
\left\lbrace \begin{array}{ll}
(MV,-MU,0,...,0)\quad & \text{where}\quad M\in C_{r-m-n}\\
(MW,0,-MU,0,...,0)\quad & \text{where}\quad M\in C_{r-m-p}\\
\vdots\end{array}\right\rbrace
$$
Hence, it is generated by $\sum\dim C_{r-m-n}$ vectors. The height of the second block in the matricial figure is thus $\sum\dim C_{r-m-n}$, and $\ker f_1$ is the image of a second map $f_2$:
$$\begin{array}{llll}
f_{2}: & \bigoplus C_{r-m-n} & \longrightarrow & \bigoplus C_{r-m}\\
& (M,0,...,0) & \longmapsto & (MV,-MU,0,...,0)\\
& (0,M,0,...,0) & \longmapsto & (MW,0,-MU,0,...,0)\\
& ...
\end{array}$$
By iterating this procedure, Cayley obtains a figure that we reproduce as figure 3; but he does not explain the following steps. Following the path suggested by Cayley, one should endeavour to write the sequence of linear maps thus obtained, and prove that it is an exact sequence\footnote{\textit{Cf.} \cite{cayley1848}, p.~371. Of course, the fact that the sequence is exact is not at all obvious, and Cayley did not prove it. The first historical proof will be mentioned in section \ref{koszul} below.}:
$$...\longrightarrow\bigoplus C_{r-m-n-p}\xrightarrow{f_{3}}\bigoplus C_{r-m-n}\xrightarrow{f_{2}}\bigoplus C_{r-m}\xrightarrow{f_{1}}C_{r}$$
where $\text{im}f_{i+1}=\ker f_i$. Cayley concludes rashly:
\begin{align*}
N & = \dim C_r-\dim(\text{im} f_1)\\
& = \dim C_r-\sum\dim C_{r-m} +\sum\dim C_{r-m-n}-\sum\dim C_{r-m-n-p} +...
\end{align*}
For large enough values of $r$, this quantity can be expressed in terms of binomial coefficients. As a matter of fact, $N=0$: hence, according to Cayley, dialytic elimination is possible for any number of homogeneous equations in as many unknowns.
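Cayley's alternating sum is easy to check numerically: the space of forms of degree $r$ in $k$ variables has dimension $\binom{r+k-1}{k-1}$, and the sum runs over the subsets of the given degrees. A sketch under Cayley's hypothesis of as many equations as unknowns (the function names are ours):

```python
from math import comb
from itertools import combinations

def dim_forms(r, k):
    """Dimension of the space of forms of degree r in k variables
    (zero for negative r)."""
    return comb(r + k - 1, k - 1) if r >= 0 else 0

def cayley_alternating_sum(degrees, r):
    """N = dim C_r - sum dim C_{r-m} + sum dim C_{r-m-n} - ...,
    for forms in as many variables as there are equations."""
    k = len(degrees)
    return sum((-1) ** size * dim_forms(r - sum(subset), k)
               for size in range(k + 1)
               for subset in combinations(degrees, size))

# three quadrics in three variables: N vanishes as soon as r >= 2+2+2-3+1
print([cayley_alternating_sum([2, 2, 2], r) for r in range(4, 8)])  # → [0, 0, 0, 0]
```

Below the threshold $\sum t_i-n+1$ the sum is nonzero (here it equals 1 at $r=3$), in keeping with the fact that the quotient ring is nonzero in low degrees.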
What also distinguishes XIXth century authors from their predecessors, including Bézout, is the endeavour to give explicit formulae for the result of elimination. Cayley is looking for a formula for the resultant. In 1847, he gives the following formula:
$$R=\frac{Q_{1}Q_{3}...}{Q_{2}Q_{4}...}$$
where, for all $i$, $Q_i$ is a subdeterminant from the matrix of $f_i$. The choice of these subdeterminants obeys the following rule. There are as many columns in the matrix of $f_i$ as rows in the matrix of $f_{i+1}$. The rule is that the set of columns occurring in $Q_i$ must be the complement of the set of rows occurring in $Q_{i+1}$. As for the last one, say $Q_j$, it is the determinant of a maximal square submatrix of the matrix of $f_j$, and it must be chosen such that all $Q_i$ are non-zero. Several authors (Salmon, Netto) seem to have tried proving Cayley's formula, until Macaulay gave up this task and found another formula, simple but ``less general''\label{FormuleQuotient}, of the form $\dfrac{Q_1}{\Delta}$ where $Q_1$ is still a subdeterminant of the matrix of $f_1$, although its choice is more constrained\footnote{\textit{Cf.} \cite{macaulay1903}, and also \cite{jouanolou1997} for a recent study of Macaulay's formulae.}. In 1926, E. Fischer finally proved Cayley's formula\footnote{\textit{Cf.} \cite{piel1934}.}.
\section{Koszul complex}\label{koszul}
We have seen the role of an exact sequence leading to an alternate sum of dimensions in Cayley's work about the resultant, although Cayley describes only the first two maps of the sequence, and he omits any proof of exactness.
On the other hand, let us look back at Bézout's work on complete equations. Bézout's alternate sum\footnote{\textit{Cf. supra} p. \pageref{alternatesum}.} obtained through finite differences is quite similar to Cayley's alternate sum. There is just one more unknown in the equations given, because Bézout is studying the eliminand, whereas Cayley is studying the resultant. Bézout's result could be deduced from Cayley's through homogenizing.
As mentioned previously, Serret and Schmidt gave a rigorous proof of Bézout's theorem in the case of the eliminand of $n$ complete equations in $n$ unknowns. Their method could also apply to the case of the resultant. But Serret and Schmidt bypass the need to study the exact sequence above: their proof is an \textit{a posteriori} proof of the degree of the eliminand.
\paragraph{Hurwitz} The first in-depth study of the first two maps of the sequence described by Cayley was given by Hurwitz in 1913. Hurwitz follows the ideas of Mertens \cite{mertens1886} who had already given a complete but complicated theory of the resultant in 1886. In this paragraph and the next, we shall consider $r$ homogeneous polynomials $f_1,f_2,...,f_r$ of degrees $t_1,t_2,...,t_r$ in a polynomial ring $C=K[a,b,...][X_0,...,X_n]$ over a number field, where $X_0,...,X_n$ on one hand, and $a,b,...$ on the other hand, are indeterminates. The indeterminates $a,b,...$ will serve the purpose of building ``generic'' polynomials in $X_0,...,X_n$. Let us call $a_{\alpha k}$ the coefficient of $X_k^{t_\alpha}$ in $f_\alpha$. Consider the ideal $\mathfrak{M}=(X_0,...,X_n)$. Hurwitz introduces a new concept: he says that $f\in C$ is a \textit{Trägheitsform} if there exists an integer $k\geq 0$ such that $\mathfrak{M}^kf\subset(f_1,...,f_r)$. For all $k$, let us write, as Hurwitz does, $[f]_k$ for the result of substituting $\dfrac{a_{\alpha k}X_{k}^{t_{\alpha}}-f_{\alpha}}{X_{k}^{t_{\alpha}}}$ for $a_{\alpha k}$, for all $\alpha$, in $f$. In the case of generic homogeneous polynomials $f_1,...,f_r$, \textit{i. e.} when their coefficients are the indeterminates $a,b,...$, the following propositions are equivalent%
\footnote{\textit{Cf.} \cite{hurwitz1913}; Hurwitz proves $(\mathrm{i})\iff(\mathrm{ii})$ in his proposition 1, and $(\mathrm{ii})\iff(\mathrm{iii})\iff(\mathrm{iv})\iff(\mathrm{v})$ in proposition 9. It is then obvious that $(\mathrm{iv})\iff(\mathrm{v})\iff(\mathrm{vi})$. The importance of criterion $(\mathrm{iii})$ appears in Mertens' work. For $(\mathrm{ii})\iff(\mathrm{vi})$, see also Jouanolou \cite{jouanolou1991}, (4.2.3) p.~132.}:
\begin{description}
\item[$(\mathrm{i})$] for $t$ large enough, $(f_1,...,f_r)_t=(f,f_1,...,f_r)_t$
\item[$(\mathrm{ii})$] $f$ is a \textit{Trägheitsform}
\item[$(\mathrm{iii})$] there exists $k$ such that $[f]_k$ is zero
\item[$(\mathrm{iv})$] there exists $k,m$ such that $X_k^mf\in(f_1,...,f_r)$
\item[$(\mathrm{v})$] for all $k$ there exists $m$ such that $X_k^mf\in(f_1,...,f_r)$
\item[$(\mathrm{vi})$] there exists $m$ such that $X_0^mf\in(f_1,...,f_r)$
\end{description}
The smallest integer $k$ such that $\mathfrak{M}^kf\subset(f_1,...,f_r)$
is called the ``rank'' of $f$ (\textit{Stufe}). A \textit{Trägheitsform}
is said to be ``proper'' if its rank is nonzero. Hurwitz gives a general study
of the \textit{Trägheitsformen}. Let us first consider the case where $n=r$.
Hurwitz proves that the \textit{Trägheitsformen} of rank $\sigma$ are of degree
$\sum t_{\alpha}-n-\sigma+1$. He shows that all \textit{Trägheitsformen} of
rank 1 belong to the ideal $(J,f_1,...,f_n)$ where $J$ is the jacobian
determinant of $f_1,...,f_n$; in rank $>1$, he succeeds in giving
explicit formulae at least for some of the \textit{Trägheitsformen},
those that are linear in the coefficients of each of
the polynomials $f_{1},...,f_{n}$. He proves the existence of
a \textit{Trägheitsform} of degree 0 which generates
the ideal of all \textit{Trägheitsformen} of degree 0: this is the resultant.
In the case $r<n$, Hurwitz succeeds in proving that $(f_1,...,f_r)$ has no
proper \textit{Trägheitsform}. Consider the following assertions:
\begin{description}
\item[$(\mathrm{I}_{n})$] for all $r<n$, the ideal $(f_{1},...,f_{r})$ has no proper Trägheitsform
\item[$(\mathrm{II}_{n})$] for all $r\leq n$, if $A_{1}f_{1}+...+A_{r}f_{r}=0$, then there exists some $L_{ij}$ such that $(\forall i)\ A_{i}=\sum L_{ij}f_{j}$ and $(\forall i,j)\ L_{ij}=-L_{ji}$
\end{description}
Hurwitz proves $\mathrm{I}_n\Rightarrow \mathrm{II}_n\Rightarrow \mathrm{I}_{n+1}$. We recognize in $\mathrm{II}_n$ the assertion made by Cayley about the exactness of his sequence in degree 1. Finally, without any condition on $r$, $n$, Hurwitz also proves, with a similar argument, that $(f_1,...,f_r)$ has no proper \textit{Trägheitsform} of degree $>\sum t_\alpha-r$.
\paragraph{Koszul's complex} An explicit description of the exact sequence touched upon by Bézout, and later conjectured by Cayley, is to be found in the work of the algebraist Koszul, taking over the tools of differential geometry in the late 1940's. Koszul's complex is an avatar of de Rham's complex of differential forms\footnote{Retrospective studies on Koszul's work are to be found in \textit{Annales de l'Institut Fourier}, 37 (1987). See the allocution by H. Cartan \cite{cartan1987}.}. Let there be given $r$ homogeneous polynomials $f_1,f_2,...,f_r\in C=K[X_0,...,X_n]$. Suppose that, for all $i\leq r$, $f_i$ is not a zero divisor modulo $(f_1,...,f_{i-1})$. This hypothesis\footnote{\textit{Cf.} \cite{serre2000}, p. 59, where Serre calls \textit{$M$-sequence} such a sequence of polynomials. Some authors speak of a \textit{regular sequence} (\textit{cf.} \cite{hartshorne1977}, II.8, p.~184). See \cite{bourbaki1958} for the main properties of Koszul's complex.} follows from property $(\mathrm{II}_n)$ above. Under this hypothesis, there exists a free resolution of $C/(f_1,f_2,...,f_r)$ bearing Koszul's name. Let $M=\bigoplus\limits_{i=1}^rCe_i$ be the free $C$-module of rank $r$. Koszul's complex is the sequence of maps:
$$0\longrightarrow C=\Lambda^0M\longrightarrow\Lambda^1M\longrightarrow\Lambda^2M\longrightarrow\cdots\longrightarrow\Lambda^{r-1}M\longrightarrow\Lambda^rM\longrightarrow 0$$
where each map is defined by the exterior product:
$$u\longmapsto u\wedge(f_1e_1+f_2e_2+...+f_re_r)$$
Hence, in degree $T$, the following is an exact sequence of vector spaces:
\begin{align*}
&0\longrightarrow C_{T-t_1-t_2-...-t_r}\longrightarrow\bigoplus_{i=1}^rC_{T-t_1-...-\widehat{t_i}-...-t_r}\longrightarrow\cdots\\
&\cdots\longrightarrow\bigoplus_{i<j}C_{T-t_i-t_j}\longrightarrow\bigoplus_{i=1}^rC_{T-t_i}\longrightarrow C_T\longrightarrow (C/(f_1,f_2,...,f_r))_T\longrightarrow 0
\end{align*}
From this one can calculate the dimensions of the vector spaces involved:
$$\dim (C/(f_1,f_2,...,f_r))_T=\dim C_T-\sum_{i=1}^r\dim C_{T-t_i}+\sum_{i<j}\dim C_{T-t_i-t_j}-...$$
where we recognize the alternate sum already known to Bézout and Cayley.
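For a regular sequence of monomials, this dimension formula can be checked by brute force: the dimension of the quotient in each degree, counted directly on monomials, must equal the alternate sum. A small sketch (the choice of the regular sequence $x^2$, $y^2$ in three variables is ours, purely for illustration):

```python
from math import comb
from itertools import combinations, product

def dim_deg(r, k):
    """Dimension of the space of forms of degree r in k variables."""
    return comb(r + k - 1, k - 1) if r >= 0 else 0

def koszul_alternating_sum(degrees, k, T):
    """Right-hand side of the dimension formula from the Koszul complex."""
    return sum((-1) ** size * dim_deg(T - sum(subset), k)
               for size in range(len(degrees) + 1)
               for subset in combinations(degrees, size))

def quotient_dim(exponents, k, T):
    """dim (C/(x_1^{e_1},...,x_r^{e_r}))_T, counted on monomials: a monomial
    survives iff each of the first r exponents stays below e_i."""
    return sum(1 for mono in product(range(T + 1), repeat=k)
               if sum(mono) == T
               and all(mono[i] < e for i, e in enumerate(exponents)))

# x^2, y^2 form a regular sequence in C = K[x, y, z] (k = 3)
for T in range(6):
    assert quotient_dim([2, 2], 3, T) == koszul_alternating_sum([2, 2], 3, T)
print("dimensions agree in every degree checked")
```

In degree $T=3$, for instance, both sides give 4: the surviving monomials are $xyz$, $xz^2$, $yz^2$, $z^3$.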
\paragraph{Jouanolou} Hurwitz's project encompasses those of his predecessors Bézout, Hesse, Sylvester, Cayley, Mertens: one should describe every homogeneous component of the ideal of \textit{Trägheitsformen} of $(f_1,...,f_r)$, and, if possible, give a basis for each. These homogeneous components are $K[a,b,...]$-modules, and it is thus, essentially, linear algebra. Even so, Hurwitz concedes that ``the problem of determining all \textit{Trägheitsformen} of a module [=ideal] presents important difficulties''\footnote{\textit{Cf.} \cite{hurwitz1913}, p.~614.}. This project was taken up again by Jean-Pierre Jouanolou in 1980.
Jouanolou studied the Koszul complex of the ideal $(f_1,...,f_r)$ in the language of Grothendieck's theory of schemes. Here, the problem of the \textit{Trägheitsformen} becomes a problem of ``local cohomology''. We shall only explain how one of Hurwitz's propositions above translates into this conceptual frame. Jouanolou demonstrates\footnote{\textit{Cf.} \cite{jouanolou1980} \S~2.1 to 2.8.} that some cohomology groups with support in the ideal $\mathfrak{M}=(X_0,...,X_n)$ vanish; in order to do so, he uses spectral sequences abutting to the hypercohomology of the Koszul complex relative to the functor $\Gamma_{\mathfrak{M}}$ of sections with support in $\mathfrak{M}$. Thus, for example, if $n>r$, one has $H_{\mathfrak{M}}^0(C/(f_1,...,f_r))=0$; in other words, the ideal $(f_1,...,f_r)$ has no proper \textit{Trägheitsform}.
\section{Toric varieties}\label{Toric}
A theorem by D. N. Bernshtein \cite{bernshtein1975} published in 1975
also gives the degree of the eliminand for systems of equations with support
in a convex set. Moreover, this theorem leads to the same kind of alternate sum as was found in Bézout's calculations. Bernshtein uses tools and concepts unknown to Bézout (infinite series in several variables, Minkowski's mixed volume). The following year, an article \cite{kushnirenko1976} by Kushnirenko shows how to build a Koszul complex that leads directly to the aforementioned alternate sum. It is closely related to the theory of ``toric varieties'' developed in the 1970's.
We shall now use toric varieties and a Koszul complex; but rather than giving a full account of Kushnirenko's results about equations with support in a convex set, we shall merely give a complete proof of Bézout's theorem for $n$ incomplete equations of the second species (\textit{cf.} section \ref{Demonstration2} above), thereby also filling the gap in Bézout's own demonstration.
Let there be $r$ incomplete equations of the second species, with $n$ unknowns:
$$\left\lbrace\begin{array}{l}
f^{(1)}=0\\
f^{(2)}=0\\
\vdots\\
f^{(r)}=0
\end{array}\right.$$
such that $\text{supp}(f^{(i)})=E_{t^{(i)},a^{(i)},b^{(i)}}$ for $1\leq i\leq r$, using the notations on p.~\pageref{secondspecies} above. The theory of toric varieties will provide us with an algebraic variety $X(\Delta)$ which is a compactification of the torus $(\mathbb{C}^\times)^n$ obtained by glueing affine varieties. This variety will allow a geometric interpretation of the vector spaces of polynomials studied by Bézout, as sets of global sections of some fiber bundles on $X(\Delta)$. Let us start with a few preliminaries.
\paragraph{Proposition 1} Let $P$ be the convex hull in $\mathbb{R}^n$ of the support $E_{t,a,b}$ of an incomplete equation of the second species. Then $E_{t,a,b}=P\cap\mathbb{Z}^n$.
\emph{Demonstration.} $E_{t,a,b}$ is defined on p.~\pageref{secondspecies} by a system of inequations describing a convex polytope $P'$ in $\mathbb{R}^n$. If $P$ is the convex hull in $\mathbb{R}^n$ of $E_{t,a,b}$, one must have $P\subset P'$. To prove equality, it is enough to check that every vertex of $P'$ belongs to $E_{t,a,b}$. By saturating some of the inequations defining $P'$, one can easily calculate its vertices. It has $(n^2+2n-3)$ vertices, each vertex belonging to $n$ faces of dimension 1. Some of those vertices may coincide in degenerate cases, \textit{i. e.} when $t$, $a$, $b$ satisfy certain relations. There are nine classes of vertices:
\begin{enumerate}[(i)]
\item $(0,0,...,0)$
\item $(a_1,b-a_1,0,0,...,0)$
\item $(b-a_2,a_2,0,0,...,0)$
\item for every $1\leq i\leq n$, one vertex $(0,...,0,a_i,0,...,0)$
\item for every $3\leq i\leq n$, one vertex $(a_1,b-a_1,0,...,0,t-b,0,...,0)$ where $t-b$ is the $i$-th coordinate
\item for every $3\leq i\leq n$, one vertex $(b-a_2,a_2,0,...,0,t-b,0,...,0)$ where $t-b$ is the $i$-th coordinate
\item for every $3\leq i\leq n$, one vertex $(a_1,0,...,0,t-a_1,0,...,0)$ where $t-a_1$ is the $i$-th coordinate
\item for every $3\leq i\leq n$, one vertex $(0,a_2,0,...,0,t-a_2,0,...,0)$ where $t-a_2$ is the $i$-th coordinate
\item for every $3\leq i\leq n$, and $1\leq j\leq n$ with $j\neq i$, one vertex $(0,...,0,a_i,0,...,0,t-a_i,0,...,0)$ where $a_i$ is the $i$-th coordinate and $(t-a_i)$ is the $j$-th coordinate
\end{enumerate}
Each of those vertices clearly belongs to $\mathbb{Z}^n$, hence to $E_{t,a,b}$. \textit{q. e. d.}
\paragraph{Proposition 2} Let $P$ and $\Pi$ be the convex hulls in $\mathbb{R}^n$ of the supports $E_{t,a,b}$ and $E_{\theta,\alpha,\beta}$ of two incomplete equations of the second species. Then $E_{t+\theta,a+\alpha,b+\beta}$ is also the support of an incomplete equation of the second species, and the polytope $(P+\Pi)$ is its convex hull in $\mathbb{R}^n$.
\emph{Demonstration.} By linearity, it is clear that $(t+\theta,a+\alpha,b+\beta)$ satisfy the ``restrictive conditions'' on p.~\pageref{secondspecies}. Let $P'$ be the convex hull of $E_{t+\theta,a+\alpha,b+\beta}$ in $\mathbb{R}^n$. The polytope $(P+\Pi)$ is defined by
$$P+\Pi=\lbrace y\in\mathbb{R}^n\mid (\exists x\in P,\,\xi\in\Pi)\ y=x+\xi\rbrace$$
Now using the calculations in the demonstration of prop. 1, one sees\footnote{Beware that this crucial fact won't work for the third species of incomplete equations.} that all vertices of $P'$ belong to $P+\Pi$. Hence $P'\subset P+\Pi$.
On the other hand, as $P$, $\Pi$ and $P'$ are all defined by sets of \emph{linear} inequations similar to those on p.~\pageref{secondspecies}, it is also obvious that every $x+\xi\in P+\Pi$ belongs to $P'$. Hence $P'=P+\Pi$. \textit{q. e. d.}
\paragraph{Example} For $n=3$, such polytopes have 8 faces and 12 vertices, \textit{cf.} an example on figure 5.
\paragraph{Construction of $X(\Delta)$} We briefly recall the construction of an algebraic variety over $\mathbb{C}$ associated with a fan. The theory of toric varieties associates to every strongly convex rational cone $\sigma$ an affine algebraic variety $U_\sigma$. A strongly convex rational cone is a subset $\sigma$ of $\mathbb{R}^n$ generated over $\mathbb{R}_+$ by a finite family of vectors with rational coordinates, and such that $\sigma\cap(-\sigma)=\lbrace 0\rbrace$. A fan is a family $\Delta$ of such cones, such that each face of a cone in $\Delta$ is also a cone in $\Delta$, and the intersection of two cones in $\Delta$ is a face of each. A fan $\Delta$ gives rise to an algebraic variety $X(\Delta)$ by glueing together the corresponding affine pieces. We are going to work with a fan $\Delta$ closely related to the polytopes described above.
\paragraph{Description of the fan $\Delta$} Let $\lbrace e_1,e_2,...,e_n\rbrace$ be the canonical basis of $\mathbb{R}^n$. The maximal cones of the fan $\Delta$ are simplicial cones generated over $\mathbb{R}_+$ by families of $n$ vectors. There are $(n^2+2n-3)$ such cones, corresponding to the following families of $n$ vectors:
\begin{itemize}
\item the cone generated by family $\lbrace e_1,e_2,...,e_n\rbrace$
\item the cone generated by family $\lbrace -e_1,-e_1-e_2,e_3,e_4,...,e_n\rbrace$
\item the cone generated by family $\lbrace -e_2,-e_1-e_2,e_3,e_4,...,e_n\rbrace$
\item the $n$ cones generated by families of the form
$$\lbrace e_1,e_2,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_i\rbrace$$
\item the $2(n-2)$ cones generated by families of the form
$$\lbrace e_3,e_4,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_1,-e_1-e_2,-e_1-e_2-...-e_n\rbrace$$
or of the form
$$\lbrace e_3,e_4,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_2,-e_1-e_2,-e_1-e_2-...-e_n\rbrace$$
\item the $2(n-2)$ cones generated by families of the form
$$\lbrace e_3,e_4,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_1,e_2,-e_1-e_2-...-e_n\rbrace$$
or of the form
$$\lbrace e_3,e_4,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_2,e_1,-e_1-e_2-...-e_n\rbrace$$
\item the $(n-2)(n-1)$ cones generated by families of the form
$$(\lbrace e_1,e_2,...,e_n\rbrace-\lbrace e_i,e_j\rbrace)\cup\lbrace -e_i,-e_1-e_2-...-e_n\rbrace$$
where $i\geq 3$ and $j\neq i$.
\end{itemize}
The fan $\Delta$ is the set of all cones generated by sub-families of these families.
\paragraph{Remark} This fan could also be described as the fan of cones over the faces of a polytope dual to the polytope $P$ occurring in the previous propositions, and it has been calculated as such. See \cite{fulton1993} p.~26. As a matter of fact, the resulting fan does not depend upon the particular choice of $t,a,b$. There is a correspondence $\sigma\mapsto u(\sigma)$ between the maximal cones of $\Delta$ and the vertices of $P$: \label{vertices}
\begin{itemize}
\item if $\sigma$ is generated by $\lbrace e_1,e_2,...,e_n\rbrace$, then $u(\sigma)=(0,0,...,0)$
\item if $\sigma$ is generated by $\lbrace -e_1,-e_1-e_2,e_3,e_4,...,e_n\rbrace$, then $u(\sigma)=(a_1,b-a_1,0,0,...,0)$
\item if $\sigma$ is generated by $\lbrace -e_2,-e_1-e_2,e_3,e_4,...,e_n\rbrace$, then $u(\sigma)=(b-a_2,a_2,0,0,...,0)$
\item if $\sigma$ is generated by $\lbrace e_1,e_2,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_i\rbrace$, then $u(\sigma)=(0,...,0,a_i,0,...,0)$
\item if $\sigma$ is generated by $\lbrace e_3,e_4,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_1,-e_1-e_2,-e_1-e_2-...-e_n\rbrace$, then $u(\sigma)=(a_1,b-a_1,0,...,0,t-b,0,...,0)$ ($t-b$ is the $i$-th coordinate)
\item if $\sigma$ is generated by $\lbrace e_3,e_4,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_2,-e_1-e_2,-e_1-e_2-...-e_n\rbrace$, then $u(\sigma)=(b-a_2,a_2,0,...,0,t-b,0,...,0)$
\item if $\sigma$ is generated by $\lbrace e_3,e_4,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_1,e_2,-e_1-e_2-...-e_n\rbrace$ then $u(\sigma)=(a_1,0,...,0,t-a_1,0,...,0)$
\item if $\sigma$ is generated by $\lbrace e_3,e_4,...,\widehat{e_i},...,e_n\rbrace\cup\lbrace -e_2,e_1,-e_1-e_2-...-e_n\rbrace$ then $u(\sigma)=(0,a_2,0,...,0,t-a_2,0,...,0)$
\item if $\sigma$ is generated by $(\lbrace e_1,e_2,...,e_n\rbrace-\lbrace e_i,e_j\rbrace)\cup\lbrace -e_i,-e_1-e_2-...-e_n\rbrace$ where $i\geq 3$ and $j\neq i$, then $u(\sigma)=(0,...,0,a_i,0,...,0,t-a_i,0,...,0)$ ($a_i$ is the $i$-th coordinate, and $t-a_i$ is the $j$-th coordinate)
\end{itemize}
When considering several polytopes $P^{(i)}$, we shall write $u^{(i)}(\sigma)$ for the vertex of $P^{(i)}$ corresponding to the maximal cone $\sigma$. For a polytope $\Pi$, we shall use the Greek letter $\upsilon(\sigma)$.
\paragraph{Example} For $n=3$, a representation of the fan $\Delta$ is given in figure 6: each triangle represents one maximal cone.
\paragraph{Affine open sets $U_\sigma\subset X(\Delta)$} For each cone $\sigma$ of $\Delta$, one puts
$$U_\sigma=\text{Spec}(A_\sigma)$$
where $A_\sigma\subset\mathbb{C}\lbrack\chi_1,\chi_1^{-1},\chi_2,\chi_2^{-1},...,\chi_n,\chi_n^{-1}\rbrack$ is defined by:
$$A_\sigma=\bigoplus_{\begin{array}{c}\scriptstyle u\in\mathbb{Z}^n,\\[-.1cm]\scriptstyle(\forall v\in\sigma)\ \left<u,v\right>\geq 0\end{array}}\mathbb{C}\chi^u$$
Here $\chi_1,\chi_2,...,\chi_n$ are the indeterminates over the base field $\mathbb{C}$. If $\tau\in\Delta$ is a face of $\sigma\in\Delta$, there is a
natural mapping
$U_\tau\rightarrow U_\sigma$ embedding $U_\tau$ as a principal open subset of $U_\sigma$ (\textit{cf.} \cite{fulton1993}, p.~18), and one can thus build a variety $X(\Delta)$ by gluing all affine pieces $U_{\sigma_1}$ and $U_{\sigma_2}$ along $U_{\sigma_1}\cap U_{\sigma_2}=U_{\sigma_1\cap\sigma_2}$. In other words:
$$X(\Delta)=\varinjlim_{\sigma\in\Delta}\operatorname{Spec}(A_\sigma)$$
\paragraph{Construction of a line bundle $O(D_P)$ on $X(\Delta)$} If $P$ is the convex envelope in $\mathbb{R}^n$ of the support of an incomplete equation of the second species, one can define a line bundle $O(D_P)$ on $X(\Delta)$ as follows. The bundle is trivial on each $U_\sigma$, \textit{i.e.} $\simeq\mathbb{C}\times U_\sigma$. On the intersection of two maximal cones $\sigma_1$ and $\sigma_2$, the transition map is given by:
$$
\begin{diagram}
\node{\mathbb{C}\times U_{\sigma_1}} \node{\mathbb{C}\times U_{\sigma_2}}\\[2]
\node{\mathbb{C}\times U_{\sigma_1\cap\sigma_2}} \arrow[2]{n,J} \arrow{e} \node{\mathbb{C}\times U_{\sigma_1\cap\sigma_2}} \arrow[2]{n,J}\\
\node{(t,\,x)} \arrow{e,T} \node{(\chi^{u(\sigma_1)-u(\sigma_2)}(x)t,\,x)}
\end{diagram}
$$
These are isomorphisms because $\chi^{u(\sigma_1)-u(\sigma_2)}$ is a unit of $A_{\sigma_1\cap\sigma_2}$. The compatibility of the changes of map on $U_{\sigma_1\cap\sigma_2\cap\sigma_3}$ comes from the fact that $\chi^{u(\sigma_1)-u(\sigma_3)}=\chi^{u(\sigma_2)-u(\sigma_3)}\chi^{u(\sigma_1)-u(\sigma_2)}$.
In other words, the sheaf of germs of sections of the line bundle is isomorphic to the ideal sheaf generated over $A_\sigma$ by $\chi^{u(\sigma)}\in\mathbb{C}[\chi_1,\chi_1^{-1},...,\chi_n,\chi_n^{-1}]$ for every maximal cone $\sigma\in\Delta$.
\paragraph{Proposition 3} Under the same conditions as proposition 1, one has
$$\Gamma(X(\Delta),O(D_P))=\bigoplus_{u\in P\cap\mathbb{Z}^n}\mathbb{C}\chi^u$$
\emph{Demonstration.} On the one hand, we must prove that $\chi^u$ is a regular section of $O(D_P)$ over $U_\sigma$, \textit{i. e.} $\chi^{u-u(\sigma)}\in A_\sigma$, for every maximal cone $\sigma$ and every $u\in P\cap\mathbb{Z}^n$. It is easily verified when $u$ is a vertex of $P$ using the description of the vertices on p.~\pageref{vertices}, and this is enough. On the other hand, if $u\in\mathbb{Z}^n$ verifies $(\forall\sigma)\ \chi^{u-u(\sigma)}\in A_\sigma$, one must prove that $u\in P$. Suppose it is not the case. The Hahn--Banach theorem implies the existence of a hyperplane separating $u$ from the convex polytope $P$: there exist $v\in\mathbb{R}^n$ and $a\in\mathbb{R}$ such that
$$(\forall u'\in P)\ \left<u',v\right> > a\quad\text{but}\quad \left<u,v\right> < a.$$
For the cone $\sigma$ containing $v$, this contradicts $(\forall\sigma)\ \chi^{u-u(\sigma)}\in A_\sigma$. Hence it is impossible that $u\not\in P$.
\paragraph{Proposition 4} Let $P^{(1)}$ and $P^{(2)}$ be the convex envelopes of the supports of two incomplete equations of the second species, with $P^{(1)}\cap\mathbb{Z}^n=E_{t^{(1)},a^{(1)},b^{(1)}}$ and $P^{(2)}\cap\mathbb{Z}^n=E_{t^{(2)},a^{(2)},b^{(2)}}$. Let $f^{(2)}\in\Gamma(X(\Delta),O(D_{P^{(2)}}))$. Multiplication by $f^{(2)}$ induces a natural map
$$\begin{diagram}\node{O(D_{P^{(1)}})}\arrow{e,t}{\times f^{(2)}}\node{O(D_{P^{(1)}+P^{(2)}})}\end{diagram}$$
\emph{Demonstration.} This map is defined locally on each affine subset $U_\sigma$ by :
$$
\begin{diagram}
\node{\phi\in\Gamma(U_\sigma,O(D_{P^{(1)}}))} \arrow{e,T} \arrow{s,T} \node{\phi\chi^{-u^{(1)}(\sigma)}\in A_\sigma} \arrow{s,T}\\
\node{\phi f^{(2)}\in\Gamma(U_\sigma,O(D_{P^{(1)}+P^{(2)}}))} \arrow{e,T} \node{\phi\chi^{-u^{(1)}(\sigma)}f^{(2)}\chi^{-u^{(2)}(\sigma)}\in A_\sigma}
\end{diagram}
$$
In the following, we shall write $\widetilde{f}^{(2)}=f^{(2)}\chi^{-u^{(2)}(\sigma)}$.
\paragraph{Koszul complex} One can thus define, locally on every open affine subset $U_\sigma$, a whole complex isomorphic to a Koszul complex. Let $\Pi$ be the convex envelope of the support of any incomplete equation of the second species\footnote{Avoid degenerate cases where some of the vertices $\upsilon(\sigma)$ coincide.}, and $f^{(1)}$, $f^{(2)}$,..., $f^{(r)}$ as above. Let $L_\sigma$ be the $A_\sigma$-module defined by
$$L_\sigma=\bigoplus_{i=1}^rA_\sigma$$
The Koszul complex gives a sequence of maps of $A_\sigma$-modules :
$$
\begin{diagram}
\node{0} \arrow{s} \node{0} \arrow{s}\\
\node{\Gamma(U_\sigma,O(D_\Pi))} \arrow{s} \arrow{e,t}{\simeq} \node{\Lambda^0L_\sigma=A_\sigma} \arrow{s,r}{\wedge\widetilde{f}}\\
\node{\Gamma(U_\sigma,\bigoplus_{i=1}^rO(D_{\Pi+P^{(i)}}))} \arrow{s} \arrow{e,t}{\simeq} \node{\Lambda^1L_\sigma=L_\sigma} \arrow{s,r}{\wedge\widetilde{f}}\\
\node{\Gamma(U_\sigma,\bigoplus_{i<j}O(D_{\Pi+P^{(i)}+P^{(j)}}))} \arrow{s} \arrow{e,t}{\simeq} \node{\Lambda^2L_\sigma} \arrow{s,r}{\wedge\widetilde{f}}\\
\node{\vdots} \arrow{s} \node{\vdots} \arrow{s,r}{\wedge\widetilde{f}}\\
\node{\Gamma(U_\sigma,\bigoplus_{i=1}^rO(D_{\Pi+P^{(1)}+...+\widehat{P^{(i)}}+...+P^{(r)}}))} \arrow{s} \arrow{e,t}{\simeq} \node{\Lambda^{r-1}L_\sigma} \arrow{s,r}{\wedge\widetilde{f}}\\
\node{\Gamma(U_\sigma,O(D_{\Pi+P^{(1)}+...+P^{(r)}}))} \arrow{e,t}{\simeq} \node{\Lambda^rL_\sigma\simeq A_\sigma}
\end{diagram}
$$
where $\widetilde{f}=\begin{pmatrix} f^{(1)}\chi^{-u^{(1)}(\sigma)}\\
f^{(2)}\chi^{-u^{(2)}(\sigma)}\\
\vdots\\
f^{(r)}\chi^{-u^{(r)}(\sigma)}
\end{pmatrix}$. Those maps glue with each other on the intersections $U_{\sigma_1}\cap U_{\sigma_2}$ into a sequence of maps of sheaves over $X(\Delta)$:
$$0 \longrightarrow O(D_\Pi) \longrightarrow \bigoplus_{i=1}^rO(D_{\Pi+P^{(i)}}) \longrightarrow \bigoplus_{i<j}O(D_{\Pi+P^{(i)}+P^{(j)}}) \longrightarrow \cdots \longrightarrow O(D_{\Pi+P^{(1)}+...+P^{(r)}})$$
\paragraph{Theorem 1} This sequence of maps of sheaves over $X(\Delta)$ is an exact sequence.
\emph{Demonstration}. We prove it locally on every open affine subset $U_\sigma$. In fact, each $\widetilde{f}^{(i)}$ has a non-zero constant term because $\chi^{u^{(i)}}$ belongs to the support of $f^{(i)}$. One can thus use the following trick by Mertens, in order to prove that $\widetilde{f}^{(1)},\widetilde{f}^{(2)},...,\widetilde{f}^{(r)}$ is a regular sequence. One must prove that, for all $s\leq r$, $\widetilde{f}^{(s)}$ is not a zero-divisor modulo $(\widetilde{f}^{(1)},\widetilde{f}^{(2)},...,\widetilde{f}^{(s-1)})$. Suppose
$$\sum_{i=1}^s\phi^{(i)}\widetilde{f}^{(i)}=0$$
Let us recall that our base field is an extension of $\mathbb{Q}$, and that the coefficients of our polynomials $f^{(i)}$ are indeterminates over $\mathbb{Q}$ (\textit{cf.} p.~\pageref{secondspecies}). The constant term in $\widetilde{f}^{(i)}$ is such an indeterminate, call it $c^{(i)}$. The base field $K$ can thus be written as $k(c^{(1)},...,c^{(r)})$ where $k$ is an extension of $\mathbb{Q}$. There is an isomorphism\footnote{This is inspired by Mertens \cite{mertens1886} p.~528-529.}:
$$\begin{diagram}
\node{k\left[c^{(1)},...,c^{(r)}\right]\left[\chi_1,\chi_1^{-1},...,\chi_n,\chi_n^{-1}\right]/\left(\widetilde{f}^{(1)},...,\widetilde{f}^{(s-1)}\right)} \arrow{s,l}{\simeq} \arrow{e,!} \node{c^{(i)}} \arrow{s,T}\\
\node{k\left[c^{(s)},...,c^{(r)}\right]\left[\chi_1,\chi_1^{-1},...,\chi_n,\chi_n^{-1}\right]} \arrow{e,!} \node{\left\lbrace\begin{array}{l}c^{(i)}-\widetilde{f}^{(i)}\text{ if }i<s\\
c^{(i)}\text{ if }i\geq s\end{array}\right.}
\end{diagram}$$
The bottom ring is a polynomial ring, hence an integral domain, and thus
$$\phi^{(s)}\widetilde{f}^{(s)}\in(\widetilde{f}^{(1)},...,\widetilde{f}^{(s-1)})\quad\Rightarrow\quad\phi^{(s)}\in(\widetilde{f}^{(1)},...,\widetilde{f}^{(s-1)})$$
\textit{q. e. d.}
\paragraph{Theorem 2} For large enough $k$, the line bundle $O(D_{k\Pi})$ is very ample. As a consequence, $X(\Delta)$ is a projective variety embedded in $\mathbb{P}^{\vert k\Pi\cap\mathbb{Z}^n\vert-1}(\mathbb{C})$. For such $k$, there exists $N$ such that the following sequence is exact:
$$0 \longrightarrow \Gamma(X,O(D_{(Nk+1)\Pi})) \longrightarrow \bigoplus_{i=1}^r\Gamma(X,O(D_{(Nk+1)\Pi+P^{(i)}})) \longrightarrow \bigoplus_{i<j}\Gamma(X,O(D_{(Nk+1)\Pi+P^{(i)}+P^{(j)}}))\longrightarrow\cdots$$
\emph{Demonstration}. The existence of $k$ and of a very ample $O(D_{k\Pi})$ is a consequence of the non-degeneracy of $\Pi$. See \cite{fulton1993} p.~69-70 for a proof. For such $k$, write $O(1)=O(D_{k\Pi})$. A theorem of Serre states that, if $\mathcal{F}$ is a coherent algebraic sheaf on a projective variety, then for $N$ large enough, $\mathcal{F}\otimes O(N)$ has trivial cohomology. This implies that, for $N$ large enough, the following sequence is exact:
$$0 \longrightarrow \Gamma(X,O(D_{\Pi})\otimes O(N)) \longrightarrow \bigoplus_i\Gamma(X,O(D_{\Pi+P^{(i)}})\otimes O(N)) \longrightarrow\cdots$$
(\textit{cf.} Serre's theorem in \cite{grothendieck1963} 2.2.1, and its corollary 2.2.3).
\paragraph{Corollary (Bézout's theorem for the second species)} For $k$ and $N$ as in theorem 2, when $n=r$, the dimension of the cokernel of the last map\footnote{\textit{i. e.} the map defined by $\Lambda^{n-1}L_\sigma\rightarrow\Lambda^nL_\sigma$ over every $U_\sigma$.} is
$$\prod_{i=1}^nt^{(i)}-\sum_{j=1}^n\prod_{i=1}^n(t^{(i)}-a_j^{(i)})+\prod_{i=1}^n(t^{(i)}-b^{(i)})-\sum_{i=1}^n\left\lbrack(a_1^{(i)}+a_2^{(i)}-b^{(i)})\prod_{j\neq i}(t^{(j)}-b^{(j)})\right\rbrack$$
It is an upper bound on the degree of the eliminand of $f^{(1)},...,f^{(n)}$.
\emph{Demonstration}. The alternate sum of dimensions of the vector spaces
appearing in the exact sequence above can be expressed as the following finite difference of order $n$:
$$\Delta_{t^{(n)},a^{(n)},b^{(n)}}...\Delta_{t^{(2)},a^{(2)},b^{(2)}}\Delta_{t^{(1)},a^{(1)},b^{(1)}}\vert E_{T,A,B}\vert$$
where $T,A,B$ are the degrees occurring in $\Lambda^nL_\sigma$, \textit{i. e.}:
\begin{align*}
T&=(Nk+1)\theta+t^{(1)}+t^{(2)}+...+t^{(n)}\\
(\forall i)\quad A_i&=(Nk+1)\alpha_i+a_i^{(1)}+a_i^{(2)}+...+a_i^{(n)}\\
B&=(Nk+1)\beta+b^{(1)}+b^{(2)}+...+b^{(n)}
\end{align*}
Calculating $\vert E_{T,A,B}\vert$ is an easy combinatorial problem. One has:
\begin{align*}
\vert E_{T,A,B}\vert=&{T+n \choose n}-\sum_{i=1}^n{T-A_i+n-1 \choose n}\\
&+{T-B+n-2 \choose n}-(A_1+A_2-B){T-B+n-2 \choose n-1}
\end{align*}
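This count can be confirmed by brute force. In the script below (not part of the original), the set $E_{T,A,B}$ is taken to consist of the lattice points $k$ with $0\leq k_i\leq A_i$, $k_1+k_2\leq B$ and $k_1+\dots+k_n\leq T$; this reading is reconstructed from the list of vertices above and should be regarded as an assumption:

```python
from itertools import product
from math import comb

def count_E(T, A, B):
    # Brute-force |E_{T,A,B}|: 0 <= k_i <= A_i, k_1 + k_2 <= B, sum k_i <= T.
    return sum(1 for k in product(*(range(a + 1) for a in A))
               if k[0] + k[1] <= B and sum(k) <= T)

def formula(n, T, A, B):
    # The closed combinatorial formula stated above.
    return (comb(T + n, n)
            - sum(comb(T - A[i] + n - 1, n) for i in range(n))
            + comb(T - B + n - 2, n)
            - (A[0] + A[1] - B) * comb(T - B + n - 2, n - 1))

for T, A, B in [(4, (2, 2, 2), 3), (5, (3, 2, 3), 4), (6, (3, 3, 2), 5)]:
    assert count_E(T, A, B) == formula(3, T, A, B)
```

The parameter triples are illustrative choices satisfying the second-species inequalities ($a_1,a_2\leq b\leq a_1+a_2$, $b\leq t$, $t-b\leq a_i$).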
Let us recall the well-known formula
$${m\choose p}-{m-n\choose p}=\sum_{k=1}^n{m-k\choose p-1}$$
as well as the finite difference of a product:
$$\Delta_t(P(T)Q(T))=Q(T)\Delta_tP(T)+P(T-t)\Delta_tQ(T)$$
Using these formulae enables us to calculate the quantity above and prove the result stated.
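The whole computation can be replayed numerically: apply the triple finite difference to a brute-force count of $\vert E_{T,A,B}\vert$ at a base point deep inside the stable range, and compare with the closed formula of the corollary. The parameter values below are illustrative choices, not taken from the source:

```python
from itertools import product
from math import prod

def count_E(T, A, B):
    # Lattice points with 0 <= k_i <= A_i, k_1 + k_2 <= B, sum k_i <= T.
    return sum(1 for k in product(*(range(a + 1) for a in A))
               if k[0] + k[1] <= B and sum(k) <= T)

t = [4, 4, 4]
a = [(2, 2, 2)] * 3      # a^{(i)} = (a_1^{(i)}, a_2^{(i)}, a_3^{(i)})
b = [3, 3, 3]

# Triple finite difference Delta_{t,a,b} of the brute-force count.
T0, A0, B0 = 30, (20, 20, 20), 25
diff = 0
for S in product((0, 1), repeat=3):
    sign = (-1) ** sum(S)
    T = T0 - sum(S[i] * t[i] for i in range(3))
    A = tuple(A0[j] - sum(S[i] * a[i][j] for i in range(3)) for j in range(3))
    B = B0 - sum(S[i] * b[i] for i in range(3))
    diff += sign * count_E(T, A, B)

# Closed formula from the corollary.
bound = (prod(t)
         - sum(prod(t[i] - a[i][j] for i in range(3)) for j in range(3))
         + prod(t[i] - b[i] for i in range(3))
         - sum((a[i][0] + a[i][1] - b[i]) * prod(t[j] - b[j] for j in range(3) if j != i)
               for i in range(3)))
assert diff == bound == 38
```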
\paragraph{Proof of the statement on p.~\pageref{statement}} For $r$ polynomials with $n$ indeterminates, the map $(f^{(1)},f^{(2)},...f^{(r)})_{\leq T,A,B}$ of section \ref{statement} is none other than the last map\footnote{\textit{i. e.} the map defined by $\Lambda^{r-1}L_\sigma\rightarrow\Lambda^rL_\sigma$ over every $U_\sigma$.} of the sequence in theorem~2. For values of $T$, $A$, $B$ ensuring the non-degeneracy of $\Pi$, and for $k$ and $N$ as in theorem~2, the sequence is exact, so the kernel of this map must be the image of the preceding map: if $(\phi^{(1)},...,\phi^{(r)})$ is an element of the kernel, then there exists a family of polynomials
$$(\psi^{(ij)})_{i,j}\in\bigoplus_{i<j}C_{\leq T-t^{(i)}-t^{(j)},A-a^{(i)}-a^{(j)},B-b^{(i)}-b^{(j)}}$$
such that, in particular,
$$\phi^{(1)}=\sum_{j=2}^r(-1)^j\psi^{(1j)}f^{(j)}$$
\textit{q. e. d.}
\paragraph{Third species of incomplete equations, $n=r=3$} For polynomials $f^{(1)},f^{(2)},f^{(3)}$ of the third species in three indeterminates, most of the arguments above are still valid, although there is a major problem with proposition 2. In the demonstration of proposition 2, the coordinates of the vertices of $P'$ were linear in $t+\theta,a+\alpha,b+\beta$ and could be written as the sums of the coordinates of the corresponding vertices of $P$ and $\Pi$, the three polytopes having the same form; but, as we said in section \ref{Demonstration3} p.~\pageref{Demonstration3}, there are eight different forms of polynomials of the third species. The convex envelopes of their supports are polytopes of different forms (\textit{cf.} figure 4). In order to fix the demonstration of proposition 2, one is going to study a larger class of polytopes, of which the eight forms of polytopes of the third species are only degenerate cases. These polytopes are represented on figure 7. If $(t,a,b)$ belongs to the third species, such a polytope is the convex envelope of a set $E_{t,a,b,s}\subset\mathbb{Z}^3$ defined by:
$$\left\lbrace\begin{aligned}
&0\leq k_1\leq a_1,\quad 0\leq k_2\leq a_2,\quad 0\leq k_3\leq a_3,\\
&k_1+k_2\leq b_3,\quad k_1+k_3\leq b_2,\quad k_2+k_3\leq b_1,\\
&k_1+k_2+k_3\leq t,\\
&2k_1+k_2+k_3\leq s_1,\quad k_1+2k_2+k_3\leq s_2,\quad k_1+k_2+2k_3\leq s_3
\end{aligned}\right.$$
\paragraph{Proposition 5} If $(t,a,b)$ belongs to the third species, put
for $1\leq i\leq 3$:
$$s_i=\min(t+a_i,\ b_{i+1}+b_{i+2})$$
where the indices are modulo 3 (for example $b_4=b_1$). Then $E_{t,a,b,s}=E_{t,a,b}$.
\emph{Demonstration.} It is clear that $E_{t,a,b,s}\subset E_{t,a,b}$. Moreover, if $(k_1,k_2,k_3)\in E_{t,a,b}$, one has:
\begin{align*}
2k_i+k_{i+1}+k_{i+2}=(k_1+k_2+k_3)+k_i\leq t+a_i\\
2k_i+k_{i+1}+k_{i+2}=(k_i+k_{i+1})+(k_i+k_{i+2})\leq b_{i+2}+b_{i+1}
\end{align*}
Hence $2k_i+k_{i+1}+k_{i+2}\leq\min(t+a_i,b_{i+1}+b_{i+2})=s_i$, which concludes the proof.
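Proposition 5 is easy to confirm by enumeration; the script below (with illustrative values, not taken from the source) compares the two lattice-point sets directly:

```python
from itertools import product

def E(t, a, b, s=None):
    # Lattice points of the third-species polytope, optionally truncated by s.
    pts = set()
    for k in product(range(a[0] + 1), range(a[1] + 1), range(a[2] + 1)):
        k1, k2, k3 = k
        if k1 + k2 > b[2] or k1 + k3 > b[1] or k2 + k3 > b[0]:
            continue
        if k1 + k2 + k3 > t:
            continue
        if s is not None and (2*k1 + k2 + k3 > s[0] or k1 + 2*k2 + k3 > s[1]
                              or k1 + k2 + 2*k3 > s[2]):
            continue
        pts.add(k)
    return pts

t, a, b = 6, (4, 3, 4), (5, 6, 5)
# s_i = min(t + a_i, b_{i+1} + b_{i+2}), indices modulo 3.
s = tuple(min(t + a[i], b[(i + 1) % 3] + b[(i + 2) % 3]) for i in range(3))
assert E(t, a, b, s) == E(t, a, b)
```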
\paragraph{A new fan} One subdivides the fan $\Delta$ on figure 6, using new rays through the following vectors:
$$-e_1-e_3,\quad -e_2-e_3,\quad -2e_1-e_2-e_3,\quad -e_1-2e_2-e_3,\quad -e_1-e_2-2e_3$$
The new fan is represented on figure 8; it is compatible with the new class of polytopes. Propositions 1 to 4 and theorems 1 and 2 are valid for this fan and these polytopes.
\paragraph{Proposition 6} If $P$ is the convex envelop of $E_{T,A,B,S}$, then
\begin{align*}\vert P\cap\mathbb{Z}^3\vert=&{T+3 \choose 3}+\sum_{i=1}^3\left\lbrack{T-B_i+1 \choose 3}-{T-A_i+2 \choose 3}\right\rbrack\\
&-\sum_{i=1}^3\left\lbrack(A_{i+1}+A_{i+2}-B_i){T-B_i+1 \choose 2}\right\rbrack\\
&+\sum_{i=1}^3\left\lbrack(T+A_i-B_{i+1}-B_{i+2}+1){T+A_i-S_i+1 \choose 2}-2{T+A_i-S_i+2 \choose 3}\right\rbrack
\end{align*}
\emph{Demonstration.} This combinatorial formula is derived by truncation from any of the eight formulae given in section \ref{Demonstration3} for polytopes of the third species.
\paragraph{Remark} Conversely, by specializing $S_i=\min(T+A_i,B_{i+1}+B_{i+2})$ in the formula above, one could also derive the eight formulae given in section \ref{Demonstration3}. For example, if $T+A_3>B_1+B_2$, the corresponding term in the expression above is:
\begin{align*}
&(T+A_3-B_1-B_2+1){T+A_3-B_1-B_2+1 \choose 2}-2{T+A_3-B_1-B_2+2 \choose 3}\\
&={T+A_3-B_1-B_2+1 \choose 3}
\end{align*}
as an easy computation would reveal. This term appears, as it should, in the formulae for the 2nd, the 4th, the 6th and the 8th forms.
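The ``easy computation'' alluded to is the binomial identity $(M+1)\binom{M+1}{2}-2\binom{M+2}{3}=\binom{M+1}{3}$, with $M=T+A_3-B_1-B_2$; a quick numerical confirmation (not in the source):

```python
from math import comb

# Check the identity (M+1)*C(M+1,2) - 2*C(M+2,3) = C(M+1,3) for a range of M.
for M in range(60):
    assert (M + 1) * comb(M + 1, 2) - 2 * comb(M + 2, 3) == comb(M + 1, 3)
```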
\paragraph{Corollary} Analogously to theorem 2 and its corollary, when $k$ and $N$ are large enough, the dimension of the cokernel of the last map in the Koszul complex gives the following upper bound on the degree of the eliminand:
\begin{align*}&\prod_{i=1}^3t^{(i)}+\sum_{i=1}^3\left\lbrack\prod_{j=1}^3(t^{(j)}-b_i^{(j)})-\prod_{j=1}^3(t^{(j)}-a_i^{(j)})\right\rbrack\\
&-\sum_{i=1}^3\sum_{j=1}^3(a_{i+1}^{(j)}+a_{i+2}^{(j)}-b_i^{(j)})\prod_{k\neq j}(t^{(k)}-b_i^{(k)})\\
&+\sum_{i=1}^3\left\lbrack\sum_{j=1}^3(t^{(j)}+a_i^{(j)}-b_{i+1}^{(j)}-b_{i+2}^{(j)})\prod_{k\neq j}(t^{(k)}+a_i^{(k)}-s_i^{(k)})-2\prod_{j=1}^3(t^{(j)}+a_i^{(j)}-s_i^{(j)})\right\rbrack
\end{align*}
where $s_i^{(j)}=\min(t^{(j)}+a_i^{(j)},b_{i+1}^{(j)}+b_{i+2}^{(j)}).$
\emph{Demonstration.} Use prop. 6 and calculate:
$$\Delta_{t^{(3)},a^{(3)},b^{(3)},s^{(3)}}\Delta_{t^{(2)},a^{(2)},b^{(2)},s^{(2)}}\Delta_{t^{(1)},a^{(1)},b^{(1)},s^{(1)}}\vert E_{T,A,B,S}\vert.$$
\paragraph{Bézout's own formula for the third species} In his treatise, Bézout does not use the truncated polytopes that we have described above. He calculates everything under the hypothesis that all polytopes appearing in the exact sequence belong to one and the same form, among the eight forms pertaining to the third species of incomplete equations. He thus finds eight different formulae, that could be derived from the formula of previous corollary by specializing the $s_i^{(j)}$ to their corresponding values.
One might doubt that any of those eight formulae could apply to the cases where $f^{(1)}$, $f^{(2)}$ and $f^{(3)}$ belong to distinct forms, because the 9 parameters $s_i^{(j)}$ could each be specialized in two distinct ways (either $t^{(j)}+a_i^{(j)}$ or $b_{i+1}^{(j)}+b_{i+2}^{(j)}$), which makes $2^9=512$ possible outcomes. Nevertheless, calculation reveals that, for every $i$, the last term in square brackets in the sum above, \textit{i. e.}
$$\sum_{j=1}^3(t^{(j)}+a_i^{(j)}-b_{i+1}^{(j)}-b_{i+2}^{(j)})\prod_{k\neq j}(t^{(k)}+a_i^{(k)}-s_i^{(k)})-2\prod_{j=1}^3(t^{(j)}+a_i^{(j)}-s_i^{(j)}),$$
only takes two possible values after such specialization. Indeed, write
$h_i^{(j)}=t^{(j)}+a_i^{(j)}-b_{i+1}^{(j)}-b_{i+2}^{(j)}$. For a given $i$, if $h_i^{(j)}\leq 0$ for two or three among the three possible values of the index $j$, the term in square brackets vanishes identically; but if $h_i^{(j)}\leq 0$ for at most one value of $j$, then the term in square brackets is equal to $h_i^{(1)}h_i^{(2)}h_i^{(3)}$. Finally, the upper bound on the degree of the eliminand is:
\begin{align*}&\prod_{i=1}^3t^{(i)}+\sum_{i=1}^3\left\lbrack\prod_{j=1}^3(t^{(j)}-b_i^{(j)})-\prod_{j=1}^3(t^{(j)}-a_i^{(j)})\right\rbrack\\
&-\sum_{i=1}^3\sum_{j=1}^3(a_{i+1}^{(j)}+a_{i+2}^{(j)}-b_i^{(j)})\prod_{k\neq j}(t^{(k)}-b_i^{(k)})+\sum_{i=1}^3\left\lbrack\varepsilon_i\prod_{j=1}^3h_i^{(j)}\right\rbrack
\end{align*}
where $\varepsilon_i=0$ or $1$. This coincides with the eight formulae given by Bézout in his treatise \cite{bezout1779} \S~119-127.
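The case analysis on the signs of the $h_i^{(j)}$ can be verified exhaustively. The check below (not in the source) encodes the specialization $t^{(j)}+a_i^{(j)}-s_i^{(j)}=\max(h_i^{(j)},0)$ and confirms that the bracketed term equals $h_i^{(1)}h_i^{(2)}h_i^{(3)}$ when at most one $h_i^{(j)}$ is nonpositive, and $0$ otherwise:

```python
from itertools import product
from math import prod

def bracket(h):
    # Under the specialization, t + a - s equals max(h_j, 0).
    m = [max(x, 0) for x in h]
    return (sum(h[j] * prod(m[k] for k in range(3) if k != j) for j in range(3))
            - 2 * prod(m))

for h in product(range(-3, 4), repeat=3):
    expected = prod(h) if sum(1 for x in h if x <= 0) <= 1 else 0
    assert bracket(h) == expected
```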
\section{Comparing methods}
The geometrical origin of Cayley's researches, unlike Bézout's, might obscure the identity of methods. Both scholars met with the same mechanisms of linear algebra, having to do with the same unsolved problem: the exactness of a sequence of linear maps. Could Cayley, in some way or another, have known of Bézout's treatise? He does not mention it; but he knew of Waring's \textit{Meditationes algebraicae}, the second edition of which contains a very brief summary of Bézout's ideas. Cayley's method is exactly the same as Bézout's, informed by the theory of determinants, Sylvester's dialytic method, and the new-born matrix symbolism. The alternate sum of dimensions is present in Bézout's, in Waring's, and in Cayley's works.
Despite these similarities, with Sylvester, Hesse and Cayley, elimination theory is on a new track characterized by:
\begin{itemize}
\item The important role of projective and algebraic geometry in focusing on homogeneous polynomials.
\item The annexation of elimination theory to the growing \emph{theory of invariants} and the systematic search for invariants.
\item The calculatory trend aiming at explicit formulas, mainly depending on determinants and using matrix algebra.
\end{itemize}
It would be misleading to see the concept of ideal where it is not. Ideals appeared in algebra at the crossroad with number theory, in a research trail starting with the \textit{Disquisitiones Arithmeticae} of Gauss, up to Kummer, Dedekind, Weber and Kronecker at the end of the XIXth century. Yet, there is novelty in Bézout's treatment of ``sum-equations'' in 1779. This novelty and the lack of rigour in Bézout's treatise, as well as its rejection of geometry\footnote{In this respect, Euler had clearly seen the relation between elimination and projection on the axis of a cartesian coordinate system. The problem of particular cases due to points of intersection at infinity could not be solved before the introduction of projective methods in algebraic geometry.}, had endangered its reception. These obstacles partially overcome, Hesse's and Cayley's articles definitively gave a posterity to Bézout's treatise.
The peculiar dialectic between general statements and the many generic cases was also present in Bézout's treatise, and it is both a weakness and a strength. For example, the fact that the degree of the eliminand is always less than or equal to the product of the degrees of the given equations is a general statement. A perfectly grounded universal proof of this statement had to wait until the end of the XIXth century (Serret, Schmidt, Hurwitz); but Bézout had already understood that this upper bound is the exact degree of the eliminand in the generic case of $n$ ``complete equations''. He also knew of other generic cases where the exact degree of the eliminand is less than the product of the degrees and could be precisely ascertained. As proven above, the formulae found by Bézout in many cases are confirmed by the theory of toric varieties and the method of Bernshtein and Kushnirenko.
\section{Appendix : an elementary proof for the first species of incomplete equations}
Let us consider a system of three incomplete equations of the first species:
$$\left\lbrace\begin{array}{l}
f^{(1)}=0\\
f^{(2)}=0\\
f^{(3)}=0
\end{array}\right.$$
We use the same notations as above.
For any set of integers $T$, $A_1$, $A_2$, $A_3$ verifying the conditions pertaining to the first species of incomplete equations, let us call $(f)_{\leq T,A}$ the linear map defined by
$$\begin{array}{lrcl}
(f)_{\leq T,A}:&\displaystyle\bigoplus_{i=1}^3C_{\leq T-t^{(i)},A-a^{(i)}}&\longrightarrow &C_{\leq T,A}\\
&(\phi^{(1)},\phi^{(2)},\phi^{(3)})&\longmapsto &\sum_{i=1}^3\phi^{(i)}f^{(i)}
\end{array}$$
We are going to build a resolution of $\text{coker}(f)_{\leq T,A}$, \textit{i. e.} an exact sequence of linear maps:
$$\begin{array}{rl}
&0\\
&\downarrow\\
&C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)},\ A-a^{(1)}-a^{(2)}-a^{(3)}}\\
(h)_{\leq T,A}&\downarrow\\
&\bigoplus\limits_{i=1}^3C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)}+t^{(i)},\ A-a^{(1)}-a^{(2)}-a^{(3)}+a^{(i)}}\\
(g)_{\leq T,A}&\downarrow\\
&\bigoplus\limits_{i=1}^3C_{\leq T-t^{(i)}, A-a^{(i)}}\\
(f)_{\leq T,A}&\downarrow\\
&C_{\leq T,A}
\end{array}$$
\paragraph{The kernel of $(f)_{\leq T,A}$}
Suppose
$$\sum_{i=1}^3\phi^{(i)}f^{(i)}=0$$
There is an isomorphism of $\mathbb{Q}[x_1,x_2,x_3]$-algebras%
\footnote{This is inspired by Mertens \cite{mertens1886} p.~528-529.}
$$\begin{array}{rcl}
\mathbb{Q}\left[(u_{i,k})_{2\leq i\leq 3, k\in\text{supp}(f^{(i)})}\right]\left[x_1,x_2,x_3\right]/\left(f^{(2)},f^{(3)}\right)&\simeq &\mathbb{Q}\left[(u_{i,k})_{k\neq(0,0,0)}\right]\left[x_1,x_2,x_3\right]\\
u_{i,k}&\mapsto &\left\lbrace\begin{array}{l}u_{i,(0,0,0)}-f^{(i)}\text{ if }k=(0,0,0)\\
u_{i,k}\text{ if }k\neq(0,0,0)\end{array}\right.
\end{array}$$
The ring on the right-hand side is a polynomial ring, hence an integral domain, and
$$\phi^{(1)}f^{(1)}\in(f^{(2)},f^{(3)})\quad\Rightarrow\quad\phi^{(1)}\in(f^{(2)},f^{(3)})$$
Hence there exists $\psi^{(2)}$, $\psi^{(3)}$ such that
$$\phi^{(1)}=\psi^{(3)}f^{(2)}-\psi^{(2)}f^{(3)}$$
First of all, we are going to prove that it is possible to choose such $\psi^{(2)}$, $\psi^{(3)}$ with
$$\psi^{(2)}\in C_{\leq T-t^{(1)}-t^{(3)},A-a^{(1)}-a^{(3)}},\quad
\psi^{(3)}\in C_{\leq T-t^{(1)}-t^{(2)},A-a^{(1)}-a^{(2)}}$$
Indeed, if $\deg\psi^{(3)}>T-t^{(1)}-t^{(2)}$, then we would have
$$[\psi^{(3)}][f^{(2)}]-[\psi^{(2)}][f^{(3)}]=0$$
where the brackets designate the terms of highest total degree of a polynomial. Now, $f^{(2)}$ and $f^{(3)}$ being generic, the greatest common divisor of $[f^{(2)}]$ and $[f^{(3)}]$ over $K=\mathbb{Q}((u_{i,k})_{i,k})$ must be a monomial. But no nonconstant monomial divides either of those two polynomials. So:
$$(\exists\lambda\in K)\quad[\psi^{(3)}]=\lambda[f^{(3)}],\quad[\psi^{(2)}]=\lambda[f^{(2)}]$$
We could then write
$$\phi^{(1)}=(\psi^{(3)}-\lambda f^{(3)})f^{(2)}-(\psi^{(2)}-\lambda f^{(2)})f^{(3)}$$
and the two new multiplier-polynomials are of total degree less than the total degree of the old ones. We thus prove by induction that there exist two multiplier-polynomials $\psi^{(2)}$, $\psi^{(3)}$ with
$$\deg\psi^{(2)}\leq T-t^{(1)}-t^{(3)},\quad\deg\psi^{(3)}\leq T-t^{(1)}-t^{(2)},$$
$$\phi^{(1)}=\psi^{(3)}f^{(2)}-\psi^{(2)}f^{(3)}$$
Suppose now that $\deg_1\psi^{(3)}>A_1-a_1^{(1)}-a_1^{(2)}$; then we would have
$$[\psi^{(3)}]_1[f^{(2)}]_1-[\psi^{(2)}]_1[f^{(3)}]_1=0$$
where the brackets $[\cdot]_1$ designate the terms of highest degree in $x_1$. Now, $f^{(2)}$ and $f^{(3)}$ being generic, one has:
$$[f^{(2)}]_1=x_1^{a^{(2)}_1}F^{(2)}$$
$$[f^{(3)}]_1=x_1^{a^{(3)}_1}F^{(3)}$$
where $F^{(2)}$ and $F^{(3)}$ are irreducible polynomials over $K$. Thus there exists a polynomial $\Lambda$ in $x_2$, $x_3$ such that:
$$[\psi^{(3)}]_1=\Lambda F^{(3)}x_1^{a_1^{(3)}-\min\left(a_1^{(2)},a_1^{(3)}\right)}$$
$$[\psi^{(2)}]_1=\Lambda F^{(2)}x_1^{a_1^{(2)}-\min\left(a_1^{(2)},a_1^{(3)}\right)}$$
Because of the hypothesis on $\deg_1\psi^{(3)}$, as soon as $A_1$ is large enough, $\Lambda$ is divisible by $x_1^{\min\left(a_1^{(2)},a_1^{(3)}\right)}$, and thus one is able to write
$$\phi^{(1)}=\left(\psi^{(3)}-\frac{\Lambda}{x_1^{\min\left(a_1^{(2)},a_1^{(3)}\right)}}f^{(3)}\right)f^{(2)}-\left(\psi^{(2)}-\frac{\Lambda}{x_1^{\min\left(a_1^{(2)},a_1^{(3)}\right)}}f^{(2)}\right)f^{(3)}$$
where the two multiplier-polynomials are of degree in $x_1$ less than the old ones. We thus recursively prove that there exist two multiplier-polynomials $\psi^{(2)}$, $\psi^{(3)}$ with
$$\deg_1\psi^{(2)}\leq A_1-a_1^{(1)}-a_1^{(3)},\quad\deg_1\psi^{(3)}\leq A_1-a_1^{(1)}-a_1^{(2)}$$
We could have worked in the same fashion with $\deg_2$ and with $\deg_3$. One point still requires care: let us make sure that the transformation of the multiplier-polynomials described above, meant to decrease their degrees with respect to a single unknown, say $x_1$, does not increase their degrees with respect to $x_2$ or $x_3$. Concerning $x_2$ (the same reasoning holds for $x_3$), one has:
$$\begin{array}{rcl}
\deg_2(\Lambda f^{(3)})&=&\deg_2[\psi^{(3)}]_1-\deg_2F^{(3)}+a_2^{(3)}\\
&\leq& (\deg\psi^{(3)}-\deg_1\psi^{(3)})-(t^{(3)}-a_1^{(3)})+a_2^{(3)}\\
&<& (T-t^{(1)}-t^{(2)})-(A_1-a_1^{(1)}-a_1^{(2)})-(t^{(3)}-a_1^{(3)})+a_2^{(3)}
\end{array}$$
If $T-t^{(1)}-t^{(2)}-t^{(3)}\leq (A_1-a_1^{(1)}-a_1^{(2)}-a_1^{(3)})+(A_2-a_2^{(1)}-a_2^{(2)}-a_2^{(3)})$, then one has, as we wished:
$$\deg_2(\Lambda f^{(3)})< A_2-a_2^{(1)}-a_2^{(2)}$$
We have thus achieved the proof that it is possible to choose $\psi^{(2)}$, $\psi^{(3)}$ with
$$\psi^{(2)}\in C_{\leq T-t^{(1)}-t^{(3)},A-a^{(1)}-a^{(3)}},\quad
\psi^{(3)}\in C_{\leq T-t^{(1)}-t^{(2)},A-a^{(1)}-a^{(2)}}$$
$$\phi^{(1)}=\psi^{(3)}f^{(2)}-\psi^{(2)}f^{(3)}$$
Let there be such $\phi^{(1)}=\psi^{(3)}f^{(2)}-\psi^{(2)}f^{(3)}$. Obviously
$$(\phi^{(1)},-\psi^{(3)}f^{(1)},\psi^{(2)}f^{(1)})\in\ker(f)_{\leq T,A}$$
The other elements of $\ker(f)_{\leq T,A}$ with first coordinate equal to $\phi^{(1)}$ can all be written
$$(\phi^{(1)},\phi^{(2)}-\psi^{(3)}f^{(1)},\phi^{(3)}+\psi^{(2)}f^{(1)})$$
where $(0,\phi^{(2)},\phi^{(3)})\in\ker(f)_{\leq T,A}$, that is to say
$$\phi^{(2)}f^{(2)}+\phi^{(3)}f^{(3)}=0$$
As $f^{(2)}$ and $f^{(3)}$ are irreducible polynomials over $K$, there exists $\psi^{(1)}$ such that
$$\phi^{(2)}=\psi^{(1)}f^{(3)},\quad\phi^{(3)}=-\psi^{(1)}f^{(2)}$$
Hence the kernel of $(f)_{\leq T,A}$ is the image of the following linear map $(g)_{\leq T,A}$:
$$\begin{array}{rrcl}
(g)_{\leq T,A}:&\bigoplus\limits_{i=1}^3C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)}+t^{(i)},\ A-a^{(1)}-a^{(2)}-a^{(3)}+a^{(i)}}&\longrightarrow&\bigoplus\limits_{i=1}^3C_{\leq T-t^{(i)},A-a^{(i)}}\\
&(\psi^{(1)},\psi^{(2)},\psi^{(3)})&\longmapsto&(\psi^{(3)}f^{(2)}-\psi^{(2)}f^{(3)},\\
&&&\psi^{(1)}f^{(3)}-\psi^{(3)}f^{(1)},\\
&&&\psi^{(2)}f^{(1)}-\psi^{(1)}f^{(2)})
\end{array}$$
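That the image of $(g)_{\leq T,A}$ lies in the kernel of $(f)_{\leq T,A}$ is the identity $(\psi^{(3)}f^{(2)}-\psi^{(2)}f^{(3)})f^{(1)}+(\psi^{(1)}f^{(3)}-\psi^{(3)}f^{(1)})f^{(2)}+(\psi^{(2)}f^{(1)}-\psi^{(1)}f^{(2)})f^{(3)}=0$, valid in any commutative ring; a spot check (not in the source) on integer specializations:

```python
import random

# The composite (f) o (g) vanishes identically in any commutative ring,
# so integer specializations suffice for a spot check.
rng = random.Random(0)
for _ in range(100):
    f1, f2, f3, p1, p2, p3 = (rng.randint(-99, 99) for _ in range(6))
    image = (p3 * f2 - p2 * f3, p1 * f3 - p3 * f1, p2 * f1 - p1 * f2)
    assert image[0] * f1 + image[1] * f2 + image[2] * f3 == 0
```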
\paragraph{The kernel of $(g)_{\leq T,A}$}
Let $(\psi^{(1)},\psi^{(2)},\psi^{(3)})\in\ker(g)_{\leq T,A}$. The polynomials $f^{(i)}$ are irreducible, so that, again:
$$(\exists\Lambda\in C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)},\ A-a^{(1)}-a^{(2)}-a^{(3)}})\quad\psi^{(1)}=\Lambda f^{(1)},\ \psi^{(2)}=\Lambda f^{(2)},\ \psi^{(3)}=\Lambda f^{(3)}$$
Hence the kernel of $(g)_{\leq T,A}$ is the image of the following linear map $(h)_{\leq T,A}$:
$$\begin{array}{rrcl}
(h)_{\leq T,A}:&C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)},\ A-a^{(1)}-a^{(2)}-a^{(3)}}&\longrightarrow&\bigoplus\limits_{i=1}^3C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)}+t^{(i)},\ A-a^{(1)}-a^{(2)}-a^{(3)}+a^{(i)}}\\
&\Lambda&\longmapsto&(\Lambda f^{(1)},\Lambda f^{(2)},\Lambda f^{(3)})
\end{array}$$
Moreover, this map is injective. In other words, we now have an exact sequence of linear maps building a resolution of $\text{coker}(f)_{\leq T,A}$:
$$\begin{array}{rl}
&0\\
&\downarrow\\
&C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)},\ A-a^{(1)}-a^{(2)}-a^{(3)}}\\
(h)_{\leq T,A}&\downarrow\\
&\bigoplus\limits_{i=1}^3C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)}+t^{(i)},\ A-a^{(1)}-a^{(2)}-a^{(3)}+a^{(i)}}\\
(g)_{\leq T,A}&\downarrow\\
&\bigoplus\limits_{i=1}^3C_{\leq T-t^{(i)}, A-a^{(i)}}\\
(f)_{\leq T,A}&\downarrow\\
&C_{\leq T,A}
\end{array}$$
\paragraph{Finite differences and alternate sum}\label{difference}
Using the rank theorem several times, one thus has:
$$\begin{array}{rcl}\dim\text{coker}(f)_{\leq T,A}&=&\dim C_{\leq T,A}\\
&&-\dim\bigoplus\limits_{i=1}^3C_{\leq T-t^{(i)},A-a^{(i)}}\\
&&+\dim\bigoplus\limits_{i=1}^3C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)}+t^{(i)},\ A-a^{(1)}-a^{(2)}-a^{(3)}+a^{(i)}}\\
&&-\dim C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)},\ A-a^{(1)}-a^{(2)}-a^{(3)}}
\end{array}$$
This expression could be rewritten into a single finite difference of order 3:
$$\begin{array}{rcl}
\dim\text{coker}(f)_{\leq T,A}&=&((\dim C_{\leq T,A}-\dim C_{\leq T-t^{(1)},A-a^{(1)}})\\
&&-(\dim C_{\leq T-t^{(2)},A-a^{(2)}}-\dim C_{\leq T-t^{(1)}-t^{(2)},A-a^{(1)}-a^{(2)}}))\\
&&-((\dim C_{\leq T-t^{(3)},A-a^{(3)}}-\dim C_{\leq T-t^{(1)}-t^{(3)},A-a^{(1)}-a^{(3)}})\\
&&-(\dim C_{\leq T-t^{(2)}-t^{(3)},A-a^{(2)}-a^{(3)}}-\dim C_{\leq T-t^{(1)}-t^{(2)}-t^{(3)},A-a^{(1)}-a^{(2)}-a^{(3)}}))\\
&=&\Delta_{t^{(3)},a^{(3)}}\Delta_{t^{(2)},a^{(2)}}\Delta_{t^{(1)},a^{(1)}}\dim C_{\leq T,A}
\end{array}$$
Calculating $\dim C_{\leq T,A}$ is an easy combinatorial problem:
$$\dim C_{\leq T,A}={T+3 \choose 3}+{T-A_1+2 \choose 3}+{T-A_2+2 \choose 3}+{T-A_3+2 \choose 3}$$
Let us recall a well-known formula:
$${m\choose p}-{m-n\choose p}=\sum_{k=1}^n{m-k\choose p-1}$$
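This identity is elementary and easy to check by machine; the following Python sketch (an illustration only, not part of the argument) verifies it over a range of small parameters:

```python
from math import comb

# Check the identity C(m,p) - C(m-n,p) = sum_{k=1}^{n} C(m-k, p-1),
# which telescopes via Pascal's rule C(a+1,p) = C(a,p) + C(a,p-1)
# (math.comb returns 0 when the lower index exceeds the upper one).
def identity_holds(m, n, p):
    lhs = comb(m, p) - comb(m - n, p)
    rhs = sum(comb(m - k, p - 1) for k in range(1, n + 1))
    return lhs == rhs

checks = [identity_holds(m, n, p)
          for m in range(1, 13)
          for n in range(1, m + 1)
          for p in range(1, m + 1)]
print(all(checks))  # True
```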
Applying this formula several times allows one to evaluate the finite difference above; it is a constant, independent of $T$ and $A$. Finally,
$$\dim\text{coker}(f)_{\leq T,A}=t^{(1)}t^{(2)}t^{(3)}-\sum_{i=1}^3(t^{(1)}-a^{(1)}_i)(t^{(2)}-a^{(2)}_i)(t^{(3)}-a^{(3)}_i)$$
If there exists an eliminand in $x_1$ of lowest degree, that number is an upper bound on its degree.
\section{Introduction}
Let $(X,d,\mu)$ be a complete metric space equipped with a Radon measure $\mu$.
The total variation of a function of bounded variation ($\mathrm{BV}$ function) $u$ in an open set $\Omega\subset X$ is defined by means of approximation with locally Lipschitz functions, that is,
\begin{equation}\label{eq:definition of total variation in intro}
\|Du\|(\Omega):=\inf\left\{\liminf_{i\to\infty}\int_\Omega g_{u_i}\,d\mu:\, u_i\in \Lip_{\mathrm{loc}}(\Omega),\, u_i\to u\textrm{ in } L^1_{\mathrm{loc}}(\Omega)\right\},
\end{equation}
where each $g_{u_i}$ is a $1$-weak upper gradient of $u_i$ in $\Omega$; see Section \ref{sec:prelis} for definitions.
From this definition it easily follows that the total variation is lower semicontinuous with respect to $L^1$-convergence in open sets, that is,
if $U\subset X$ is open and $u_i\to u$ in $L^1_{\mathrm{loc}}(U)$, then
\begin{equation}\label{eq:intro lower semicontinuity}
\Vert Du\Vert(U)\le\liminf_{i\to\infty}\Vert Du_i\Vert(U).
\end{equation}
For arbitrary (measurable) sets $U\subset X$ we cannot
define $\Vert Du\Vert(U)$ simply by replacing $\Omega$
with $U$ in the definition of the total variation, because then
the total variation would not yield a Radon measure, see Example \ref{ex:total variation}.
Instead, $\Vert Du\Vert(U)$ is defined by means of approximation
with open sets containing $U$, following \cite{M}.
On the other hand, a set $U\subset X$ is said to be \emph{1-quasiopen} if for every $\varepsilon>0$ there exists
an open set $G\subset X$ such that $\capa_1(G)<\varepsilon$ and $U\cup G$ is open.
Quasiopen sets and related concepts of \emph{fine potential theory} have been recently studied in the metric
setting in e.g. \cite{BB-OD,BBL-CCK,BBL-WC,BBM-QP} in the case $p>1$.
See also the monographs \cite{MZ} and \cite{HKM} for the Euclidean theory and its history in the unweighted and weighted settings, respectively.
In the case $p=1$, analogous concepts have been recently studied in \cite{L3,L,LaSh}.
In this paper, we assume that the measure $\mu$ is doubling and that the space supports a $(1,1)$-Poincar\'e inequality,
and then we show that if $U\subset X$ is a $1$-quasiopen set and $\Vert Du\Vert(U)<\infty$, then
the total variation $\Vert Du\Vert(U)$ can be equivalently defined by replacing $\Omega$
with $U$ in \eqref{eq:definition of total variation in intro}.
This is Theorem \ref{thm:characterization of total variational}.
Using this result,
we can then show that the lower semicontinuity \eqref{eq:intro lower semicontinuity} holds
true also for every
$1$-quasiopen set $U$, if $\Vert Du\Vert(U)<\infty$ and $u_i\to u$ in $L^1_{\mathrm{loc}}(U)$. This is
Theorem \ref{thm:lower semic in quasiopen sets}.
Such a lower semicontinuity result may be helpful in solving various minimization problems,
for example in the upcoming work \cite{LMS}.
The notion of \emph{uniform integrability} of a sequence of functions $(g_i)\subset L^1(X)$
is often useful in analysis. This involves uniform absolute continuity with respect to the ambient measure. That is, for every $\varepsilon>0$ there exists $\delta>0$ such that if $A\subset X$ with $\mu(A)<\delta$, then $\int_A g_i\,d\mu<\varepsilon$ for every $i\in{\mathbb N}$.
The variation measure $\Vert Du\Vert$ of a $\mathrm{BV}$ function $u$ is, of course, not always
absolutely continuous with respect to $\mu$. On the other hand, it is a well-known fact in the Euclidean setting that $\Vert Du\Vert$ is absolutely continuous with
respect to the $1$-capacity $\capa_1$. The proof of this fact is essentially the same
in the more general metric setting, see \cite[Lemma 3.9]{L2}.
A sequence of $\mathrm{BV}$ functions $u_i$ is said to converge \emph{strictly} to a
$\mathrm{BV}$ function $u$ if $u_i\to u$ in $L^1(X)$ and
$\Vert Du_i\Vert(X)\to \Vert Du\Vert(X)$. Given such a sequence, we show that for every
$\varepsilon>0$
there exists $\delta>0$ such that if $A\subset X$ with $\capa_1(A)<\delta$, then
$\Vert Du_i\Vert(A)<\varepsilon$ for every $i\in{\mathbb N}$. In other words, the variation measures $\Vert Du_i\Vert$ are uniformly absolutely continuous with respect to the $1$-capacity.
This is Theorem \ref{thm:uniform absolute continuity}. The proof combines the previously discussed
lower semicontinuity result with Baire's category theorem.
\section{Notation and definitions}\label{sec:prelis}
In this section we introduce the notation, definitions, and assumptions used in the paper.
Throughout this paper, $(X,d,\mu)$ is a complete metric space equipped
with a metric $d$ and a Borel regular outer measure $\mu$ that satisfies a doubling property, that is,
there is a constant $C_d\ge 1$ such that
\[
0<\mu(B(x,2r))\leq C_d\mu(B(x,r))<\infty
\]
for every ball $B=B(x,r)$ with center $x\in X$ and radius $r>0$.
When we want to specify that a constant $C$
depends on the parameters $a,b, \ldots,$ we write $C=C(a,b,\ldots)$.
A complete metric space equipped with a doubling measure is proper,
that is, closed and bounded sets are compact, see e.g. \cite[Proposition 3.1]{BB}.
For a $\mu$-measurable set $A\subset X$, we define $L^1_{\mathrm{loc}}(A)$ to consist of functions $u$ on $A$
such that for every $x\in A$ there exists $r>0$ such that $u\in L^1(A\cap B(x,r))$.
Other local spaces of functions are defined similarly.
For any open set $\Omega\subset X$,
every function in the class $L^1_{\mathrm{loc}}(\Omega)$ is in $L^1(\Omega')$ for every open $\Omega'\Subset\Omega$.
Here $\Omega'\Subset\Omega$ means that $\overline{\Omega'}$ is a
compact subset of $\Omega$.
For any set $A\subset X$ and $0<R<\infty$, the restricted spherical Hausdorff content
of codimension one is defined to be
\[
\mathcal{H}_{R}(A):=\inf\left\{ \sum_{i=1}^{\infty}
\frac{\mu(B(x_{i},r_{i}))}{r_{i}}:\,A\subset\bigcup_{i=1}^{\infty}B(x_{i},r_{i}),\,r_{i}\le R\right\}.
\]
The codimension one Hausdorff measure of $A\subset X$ is then defined to be
\[
\mathcal{H}(A):=\lim_{R\rightarrow 0}\mathcal{H}_{R}(A).
\]
The measure theoretic boundary $\partial^{*}E$ of a set $E\subset X$ is the set of points $x\in X$
at which both $E$ and its complement have positive upper density, i.e.
\[
\limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}>0\quad\;
\textrm{and}\quad\;\limsup_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}>0.
\]
The measure theoretic interior and exterior of $E$ are defined respectively by
\begin{equation}\label{eq:definition of measure theoretic interior}
I_E:=\left\{x\in X:\,\lim_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}=0\right\}
\end{equation}
and
\begin{equation}\label{eq:definition of measure theoretic exterior}
O_E:=\left\{x\in X:\,\lim_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}=0\right\}.
\end{equation}
Note that we always have a partitioning of the space into the disjoint sets
$\partial^*E$, $I_E$, and $O_E$.
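For orientation, consider the simple Euclidean example $X={\mathbb R}$ (unweighted) and $E=[0,1]$: every point of $(0,1)$ has density one in $E$ and every point of ${\mathbb R}\setminus[0,1]$ has density zero, so that
\[
I_E=(0,1),\qquad O_E={\mathbb R}\setminus[0,1],\qquad \partial^*E=\{0,1\},
\]
and these three sets indeed partition ${\mathbb R}$.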
By a curve we mean a rectifiable continuous mapping from a compact interval of the real line
into $X$.
The length of a curve $\gamma$
is denoted by $\ell_{\gamma}$. We will assume every curve to be parametrized
by arc-length, which can always be done (see e.g. \cite[Theorem~3.2]{Hj}).
A nonnegative Borel function $g$ on $X$ is an upper gradient
of an extended real-valued function $u$
on $X$ if for all curves $\gamma$, we have
\begin{equation}\label{eq:definition of upper gradient}
|u(x)-u(y)|\le \int_\gamma g\,ds,
\end{equation}
where $x$ and $y$ are the end points of $\gamma$. We interpret $|u(x)-u(y)|=\infty$ whenever
at least one of $|u(x)|$, $|u(y)|$ is infinite.
We define the local Lipschitz constant of a locally Lipschitz function $u\in\mathrm{Lip}_{\mathrm{loc}}(X)$ by
\[
\Lip u(x):=\limsup_{r\to 0}\sup_{y\in B(x,r)\setminus \{x\}}\frac{|u(y)-u(x)|}{d(y,x)}.
\]
Then $\Lip u$ is an upper gradient of $u$, see e.g. \cite[Proposition 1.11]{Che}.
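As a concrete (purely illustrative) numerical check: for a smooth function on the unweighted real line the local Lipschitz constant reduces to $|u'(x)|$, and difference quotients over a small ball approximate it:

```python
import numpy as np

# Approximate Lip u(x) = limsup_{r->0} sup_{0<|y-x|<r} |u(y)-u(x)| / |y-x|
# for u(t) = t^2 at x = 1; the exact value is |u'(1)| = 2
# (indeed the supremum over B(1,r) equals 2 + r).
def approx_lip(u, x, r=1e-3, samples=2001):
    y = np.linspace(x - r, x + r, samples)
    y = y[y != x]  # exclude the center point itself
    return float(np.max(np.abs(u(y) - u(x)) / np.abs(y - x)))

val = approx_lip(lambda t: t ** 2, 1.0)
print(val)  # close to 2
```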
Upper gradients were originally introduced in \cite{HK}.
If $g$ is a nonnegative $\mu$-measurable function on $X$
and (\ref{eq:definition of upper gradient}) holds for $1$-almost every curve,
we say that $g$ is a $1$-weak upper gradient of~$u$.
A property holds for $1$-almost every curve
if it fails only for a curve family with zero $1$-modulus.
A family $\Gamma$ of curves is of zero $1$-modulus if there is a
nonnegative Borel function $\rho\in L^1(X)$ such that
for all curves $\gamma\in\Gamma$, the curve integral $\int_\gamma \rho\,ds$ is infinite.
Of course, by replacing $X$ with a set $A\subset X$ and considering curves $\gamma$ in $A$, we can talk about a function $g$ being a ($1$-weak) upper gradient of $u$ in $A$.
A $1$-weak upper gradient can always be perturbed in a set of $\mu$-measure zero,
see \cite[Lemma 1.43]{BB}, and so we
understand it to be defined only $\mu$-almost everywhere.
Given a $\mu$-measurable set $U\subset X$, we consider the following norm
\[
\Vert u\Vert_{N^{1,1}(U)}:=\Vert u\Vert_{L^1(U)}+\inf \Vert g\Vert_{L^1(U)},
\]
where the infimum is taken over all $1$-weak upper gradients $g$ of $u$ in $U$.
The substitute for the Sobolev space $W^{1,1}$ in the metric setting is the Newton-Sobolev space
\[
N^{1,1}(U):=\{u:\|u\|_{N^{1,1}(U)}<\infty\}.
\]
We understand every Newton-Sobolev function to be defined at every $x\in U$
(even though $\Vert \cdot\Vert_{N^{1,1}(U)}$ is, precisely speaking, then only a seminorm).
The Newton-Sobolev space with zero boundary values is defined as
\[
N_0^{1,1}(U):=\{u|_{U}:\,u\in N^{1,1}(X)\textrm{ and }u=0\textrm{ in }X\setminus U\}.
\]
Thus $N_0^{1,1}(U)$ is a subclass of $N^{1,1}(U)$, and it can also be considered as a subclass of
$N^{1,1}(X)$, as we will do without further notice.
It is known that for any $u\in N_{\mathrm{loc}}^{1,1}(U)$, there exists a minimal $1$-weak
upper gradient of $u$ in $U$, always denoted by $g_{u}$, satisfying $g_{u}\le g$
$\mu$-almost everywhere in $U$, for any $1$-weak upper gradient $g\in L_{\mathrm{loc}}^{1}(U)$
of $u$ in $U$ \cite[Theorem 2.25]{BB}.
For more on Newton-Sobolev spaces, we refer to \cite{S, BB, HKST}.
The $1$-capacity of a set $A\subset X$ is given by
\[
\capa_1(A):=\inf \Vert u\Vert_{N^{1,1}(X)},
\]
where the infimum is taken over all functions $u\in N^{1,1}(X)$ such that $u\ge 1$ in $A$.
We know that $\capa_1$ is an outer capacity, meaning that
\[
\capa_1(A)=\inf\{\capa_1(\Omega):\,\Omega\supset A\textrm{ is open}\}
\]
for any $A\subset X$, see e.g. \cite[Theorem 5.31]{BB}.
For basic properties satisfied by the $1$-capacity, such as monotonicity and countable subadditivity, see e.g. \cite{BB}.
We say that a set $U\subset X$ is $1$-quasiopen if for every $\varepsilon>0$ there exists an
open set $G\subset X$ such that $\capa_1(G)<\varepsilon$ and $U\cup G$ is open.
Next we recall the definition and basic properties of functions
of bounded variation on metric spaces, following \cite{M}. See also e.g. \cite{AFP, EvaG92, Fed, Giu84, Zie89} for the classical
theory in the Euclidean setting.
Given a function $u\in L^1_{\mathrm{loc}}(X)$, we define the total variation of $u$ in $X$ by
\[
\|Du\|(X):=\inf\left\{\liminf_{i\to\infty}\int_X g_{u_i}\,d\mu:\, u_i\in \Lip_{\mathrm{loc}}(X),\, u_i\to u\textrm{ in } L^1_{\mathrm{loc}}(X)\right\},
\]
where each $g_{u_i}$ is the minimal $1$-weak upper gradient of $u_i$.
We say that a function $u\in L^1(X)$ is of bounded variation,
and denote $u\in\mathrm{BV}(X)$, if $\|Du\|(X)<\infty$.
By replacing $X$ with an open set $\Omega\subset X$ in the definition of the total variation, we can define $\|Du\|(\Omega)$.
For an arbitrary set $A\subset X$, we define
\[
\|Du\|(A)=\inf\{\|Du\|(\Omega):\, A\subset\Omega,\,\Omega\subset X
\text{ is open}\}.
\]
In general, if $A\subset X$ is an arbitrary set, we understand the statement $\Vert Du\Vert(A)<\infty$
to mean that there exists some open set $\Omega\supset A$ such that $u\in L^1_{\mathrm{loc}}(\Omega)$ and $\Vert Du\Vert(\Omega)<\infty$.
If $\Vert Du\Vert(\Omega)<\infty$, $\|Du\|(\cdot)$ is a finite Radon measure on $\Omega$ by \cite[Theorem 3.4]{M}.
A $\mu$-measurable set $E\subset X$ is said to be of finite perimeter if $\|D\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E\|(X)<\infty$, where $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E$ is the characteristic function of $E$.
The perimeter of $E$ in $\Omega$ is also denoted by
\[
P(E,\Omega):=\|D\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E\|(\Omega).
\]
We have the following coarea formula from \cite[Proposition 4.2]{M}: if $\Omega\subset X$ is an open set and $u\in L^1_{\mathrm{loc}}(\Omega)$, then
\begin{equation}\label{eq:coarea}
\|Du\|(\Omega)=\int_{-\infty}^{\infty}P(\{u>t\},\Omega)\,dt.
\end{equation}
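For a simple illustration of \eqref{eq:coarea} (a numerical sketch on the unweighted real line, not part of the theory), take the tent function $u(x)=\max\{0,1-|x|\}$: each superlevel set $\{u>t\}$ with $0<t<1$ is an interval of perimeter $2$, and integrating in $t$ recovers $\Vert Du\Vert({\mathbb R})=\int_{\mathbb R}|u'|\,dx=2$:

```python
import numpy as np

# Coarea on the real line for the tent function u(x) = max(0, 1 - |x|):
# the total variation equals the integral over t of the perimeter of {u > t}.
x = np.linspace(-2.0, 2.0, 40001)
u = np.maximum(0.0, 1.0 - np.abs(x))

# Left-hand side: total variation, computed as the sum of |increments| of u.
tv = float(np.sum(np.abs(np.diff(u))))

# Right-hand side: for each level t, count boundary crossings of {u > t},
# then integrate over t in (0, 1).
ts = np.linspace(1e-4, 1.0 - 1e-4, 1000)
perims = [int(np.sum(np.abs(np.diff((u > t).astype(int))))) for t in ts]
coarea = float(np.mean(perims)) * (ts[-1] - ts[0])

print(tv, coarea)  # both close to 2
```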
We will assume throughout the paper that $X$ supports a $(1,1)$-Poincar\'e inequality,
meaning that there exist constants $C_P\ge 1$ and $\lambda \ge 1$ such that for every
ball $B(x,r)$, every $u\in L^1_{\mathrm{loc}}(X)$,
and every upper gradient $g$ of $u$,
we have
\[
\vint{B(x,r)}|u-u_{B(x,r)}|\, d\mu
\le C_P r\vint{B(x,\lambda r)}g\,d\mu,
\]
where
\[
u_{B(x,r)}:=\vint{B(x,r)}u\,d\mu :=\frac 1{\mu(B(x,r))}\int_{B(x,r)}u\,d\mu.
\]
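As a sanity check (a single numerical instance, with the values $C_P=1$ and $\lambda=1$ assumed purely for illustration), the inequality can be tested on the unweighted real line for $u(x)=|x|$ on the ball $B(0,1)$, with upper gradient $g\equiv 1$:

```python
import numpy as np

# Test one instance of the (1,1)-Poincare inequality on the unweighted real
# line: u(x) = |x| on B(0,1), with upper gradient g = |u'| = 1.
# C_P = 1 and lambda = 1 are assumptions for this illustration only.
x = np.linspace(-1.0, 1.0, 200001)
u = np.abs(x)
g = np.ones_like(x)

u_B = float(np.mean(u))                 # integral average of u, = 1/2
lhs = float(np.mean(np.abs(u - u_B)))   # average of |u - u_B|, = 1/4
rhs = 1.0 * 1.0 * float(np.mean(g))     # C_P * r * average of g, = 1

print(lhs, rhs)  # about 0.25 and 1.0
```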
\section{Preliminary results}\label{sec:preliminary results}
In this section we consider certain preliminary results that we will need in proving the main
theorems.
We start with the following simple result concerning Newton-Sobolev functions with zero
boundary values.
\begin{lemma}\label{lem:zero boundary values}
Let $\Omega\subset X$ be an open set, let $u\in N^{1,1}(\Omega)$ with $-1\le u\le 1$, and
let $\eta\in N^{1,1}_0(\Omega)$ with $0\le\eta\le 1$. Then $\eta u\in N_0^{1,1}(\Omega)$ with a
$1$-weak upper gradient $\eta g_u+|u|g_{\eta}$ (in $X$).
\end{lemma}
Here $g_{u}$ and $g_{\eta}$ are the minimal $1$-weak upper gradients of $u$ and $\eta$
(in $\Omega$ and $X$, respectively).
By \cite[Corollary 2.21]{BB} we know that if $v\in N^{1,1}(X)$, then
\begin{equation}\label{eq:upper gradient in constant set}
g_v=0\ \ \textrm{in}\ \ \{v=0\}
\end{equation}
($\mu$-almost everywhere, to be precise). Thus $g_{\eta}=0$ outside $\Omega$,
and so the function $\eta g_u+|u|g_{\eta}$
can be interpreted to take the value zero outside $\Omega$.
\begin{proof}
By the Leibniz rule, see \cite[Theorem 2.15]{BB}, we know that $\eta u\in N^{1,1}(\Omega)$ with a
$1$-weak upper gradient $\eta g_u+|u|g_\eta$ in $\Omega$.
Moreover, $-\eta\le \eta u\le \eta\in N^{1,1}_0(\Omega)$, and then by \cite[Lemma 2.37]{BB} we conclude
$\eta u\in N_0^{1,1}(\Omega)$. Finally, by \eqref{eq:upper gradient in constant set} we know that
$\eta g_u+|u|g_{\eta}$ is a $1$-weak upper gradient of $u\eta$ in $X$.
\end{proof}
The following two lemmas describe two ways of enlarging a set without
increasing the $1$-capacity significantly.
\begin{lemma}[{\cite[Lemma 3.1]{L}}]\label{lem:covering G by a set of finite perimeter}
For any $G\subset X$ and $\varepsilon>0$ there exists an open set $V\supset G$ with
$\capa_1(V)\le C_1(\capa_1(G)+\varepsilon)$ and $P(V,X)\le C_1(\capa_1(G)+\varepsilon)$,
for a constant $C_1=C_1(C_d,C_P,\lambda)\ge 1$.
\end{lemma}
\begin{proof}
See \cite[Lemma 3.1]{L}; note that there was a slight error in the
formulation, as the possibility $\capa_1(G)=0$ was not taken into account,
but this is easily corrected by adding an $\varepsilon$-term in suitable places.
\end{proof}
\begin{lemma}\label{lem:capacity and Newtonian function}
Let $G\subset X$ and $\varepsilon>0$. There exists an open set $V\supset G$ with $\capa_1(V)\le C_2(\capa_1(G)+\varepsilon)$ and a
function $\eta\in N^{1,1}_0(V)$ with $0\le\eta\le 1$ on $X$, $\eta=1$ on $G$, and $\Vert \eta\Vert_{N^{1,1}(X)}\le C_2(\capa_1(G)+\varepsilon)$,
for some constant $C_2=C_2(C_d,C_P,\lambda)\ge 1$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:covering G by a set of finite perimeter} we find an open set
$V_0\supset G$ with
\[
\capa_1(V_0)\le C_1(\capa_1(G)+\varepsilon)\ \ \ \textrm{and}\ \ \ P(V_0,X)\le C_1(\capa_1(G)+\varepsilon).
\]
By a suitable \emph{boxing inequality}, see \cite[Lemma 4.2]{HaKi}, we find balls $\{B(x_i,r_i)\}_{i=1}^{\infty}$ with $r_i\le 1$ covering $V_0$, and
\[
\sum_{i=1}^{\infty}\frac{\mu(B(x_i,r_i))}{r_i}\le C_B (\mu(V_0)+P(V_0,X))
\]
for some constant $C_B=C_B(C_d,C_P,\lambda)>0$.
For each $i\in{\mathbb N}$, take a $1/r_i$-Lipschitz function $0\le f_i\le 1$ with $f_i=1$ on $B(x_i,2r_i)$ and
$f_i=0$ on $X\setminus B(x_i,4r_i)$.
Let $f:=\sup_{i\in{\mathbb N}} f_i$. By \eqref{eq:upper gradient in constant set} and the fact that
the local Lipschitz constant is an upper gradient, $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{B(x_i,4r_i)}/r_i$ is a $1$-weak upper
gradient of $f_i$. Hence
the minimal $1$-weak upper gradient of $f$ satisfies $g_f\le\sum_{i=1}^{\infty}\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{B(x_i,4r_i)}/r_i$,
see e.g. \cite[Lemma 1.28]{BB}. Then
\begin{align*}
\int_X g_{f}\,d\mu
\le\sum_{i=1}^{\infty}\frac{\mu(B(x_i,4r_i))}{r_i}
&\le C_d^2\sum_{i=1}^{\infty}\frac{\mu(B(x_i,r_i))}{r_i}\\
&\le C_d^2 C_B (\mu(V_0)+P(V_0,X))\\
&\le C_d^2 C_B (\capa_1(V_0)+P(V_0,X))\\
&\le 2C_d^2 C_B C_1(\capa_1(G)+\varepsilon).
\end{align*}
Moreover, since $r_i\le 1$ for each $i\in{\mathbb N}$,
\[
\int_X f\,d\mu\le \sum_{i=1}^{\infty}\int_X f_i\,d\mu
\le \sum_{i=1}^{\infty}\frac{\mu(B(x_i,4r_i))}{r_i}\le 2C_d^2 C_B C_1(\capa_1(G)+\varepsilon).
\]
Let $V:=\bigcup_{i=1}^{\infty}B(x_i,2r_i)$.
Since $f\ge 1$ on $V$, we get the estimate
\[
\capa_1(V)\le \Vert f\Vert_{N^{1,1}(X)}\le 4C_d^2 C_B C_1(\capa_1(G)+\varepsilon).
\]
On the other hand, for each $i\in{\mathbb N}$, we can also take a $1/r_i$-Lipschitz function $0\le \eta_i\le 1$ with $\eta_i=1$ on $B(x_i,r_i)$ and
$\eta_i=0$ on $X\setminus B(x_i,2r_i)$.
Let $\eta:=\sup_{i\in{\mathbb N}} \eta_i$. Then $\eta=1$ on $V_0\supset G$ and $\eta=0$ on $X\setminus V$, and similarly as for
the function $f$, we can estimate $\Vert \eta\Vert_{N^{1,1}(X)}\le 4C_d C_B C_1(\capa_1(G)+\varepsilon)$.
Thus we can choose $C_2=4C_d^2 C_B C_1$.
\end{proof}
The next lemma states that in the definition of the total variation, we can
consider convergence in $L^1(\Omega)$
instead of convergence in $L_{\mathrm{loc}}^1(\Omega)$.
\begin{lemma}[{\cite[Lemma 5.5]{KLLS}}]\label{lem:L1 loc and L1 convergence}
Let $\Omega\subset X$ be an open set and let $u\in L^1_{\mathrm{loc}}(\Omega)$
with $\Vert Du\Vert(\Omega)<\infty$. Then there exists a sequence
$(w_i)\subset \mathrm{Lip}_{\mathrm{loc}}(\Omega)$ with $w_i-u\to 0$ in $L^1(\Omega)$ and
$\int_\Omega g_{w_i}\, d\mu\to \Vert Du\Vert(\Omega)$.
\end{lemma}
Recall that $g_{w_i}$ denotes the minimal $1$-weak upper gradient of $w_i$ (in $\Omega$).
Note that above, we cannot write $w_i\to u$ in $L^1(\Omega)$, since the functions $w_i,u$ are not necessarily
in the class $L^1(\Omega)$.
\begin{lemma}[{\cite[Lemma 9.3]{BB-OD}}]\label{lem:quasiopen sets are measurable}
Every $1$-quasiopen set is $\mu$-measurable.
\end{lemma}
In fact, this is proved for all $1\le p<\infty$ in the above reference, but we only need the case $p=1$.
The coarea formula \eqref{eq:coarea} states that if $\Omega\subset X$ is an open set and $u\in L^1_{\mathrm{loc}}(\Omega)$, then
\[
\|Du\|(\Omega)=\int_{-\infty}^{\infty}P(\{u>t\},\Omega)\,dt.
\]
If $\Vert Du\Vert(\Omega)<\infty$, the above is true with $\Omega$ replaced by any Borel set $A\subset\Omega$; this is also given in \cite[Proposition 4.2]{M}. However, one can construct simple examples of non-Borel $1$-quasiopen sets, so we need to verify the coarea formula for such sets separately.
In doing this, we use the following lemma,
which states that the total variation of a $\mathrm{BV}$ function is absolutely continuous with respect
to the $1$-capacity.
\begin{lemma}[{\cite[Lemma 3.9]{L2}}]\label{lem:absolute cont of variation measure wrt capacity}
Let $\Omega\subset X$ be an open set, and
let $u\in L^1_{\mathrm{loc}}(\Omega)$ with $\Vert Du\Vert(\Omega)<\infty$. Then for every $\varepsilon>0$ there exists $\delta>0$ such that if $A\subset \Omega$ with
$\capa_1(A)<\delta$, then $\Vert Du\Vert(A)<\varepsilon$.
\end{lemma}
\begin{proposition}\label{prop:coarea generalization}
Let $U\subset X$ be a $1$-quasiopen set and suppose that $\Vert Du\Vert(U)<\infty$.
Then
\[
\Vert Du\Vert(U)=\int_{-\infty}^{\infty}P(\{u>t\},U)\,dt.
\]
\end{proposition}
\begin{proof}
Recall that implicit in the condition $\Vert Du\Vert(U)<\infty$ is the requirement
that there exists an open set
$\Omega\supset U$ such that $u\in L^1_{\mathrm{loc}}(\Omega)$ and $\Vert Du\Vert(\Omega)<\infty$.
Since $U$ is $1$-quasiopen, we can pick open sets $G_i\subset X$ such that
$\capa_1(G_i)\to 0$ and each $U\cup G_i$ is an open set, and we can also assume that
$G_i\subset \Omega$ and
$G_{i+1}\subset G_i$
for each $i\in{\mathbb N}$.
Then by the coarea formula \eqref{eq:coarea},
\[
\Vert Du\Vert(U\cup G_i)=\int_{-\infty}^{\infty}P(\{u>t\},U\cup G_i)\,dt.
\]
By Lemma \ref{lem:absolute cont of variation measure wrt capacity},
$\Vert Du\Vert(U\cup G_i)\to \Vert Du\Vert(U)$ as $i\to\infty$. Similarly,
$P(\{u>t\},U\cup G_i)\to P(\{u>t\},U)$ for every $t\in{\mathbb R}$ for which
$P(\{u>t\},U\cup G_1)<\infty$,
that is, for a.e. $t\in{\mathbb R}$. Then by Lebesgue's dominated convergence theorem,
with the majorant function $t\mapsto P(\{u>t\},U\cup G_1)$, we obtain
\[
\Vert Du\Vert(U)=\int_{-\infty}^{\infty}P(\{u>t\},U)\,dt.
\]
\end{proof}
\section{Main results}
In this section we state and prove our main results on the characterization
and lower semicontinuity of the total variation in $1$-quasiopen sets.
The definition of the total variation states that if $\Omega\subset X$
is an open set and $u\in L^1_{\mathrm{loc}}(\Omega)$, then
\begin{equation}\label{eq:definition of total variation repeated}
\|Du\|(\Omega)=\inf\left\{\liminf_{i\to\infty}\int_\Omega g_{u_i}\,d\mu:\, u_i\in \Lip_{\mathrm{loc}}(\Omega),\, u_i\to u\textrm{ in } L^1_{\mathrm{loc}}(\Omega)\right\},
\end{equation}
where each $g_{u_i}$ is the minimal $1$-weak upper gradient of $u_i$ in $\Omega$.
Moreover, by \cite[Theorem 5.47]{BB}, for any $v\in N^{1,1}_{\mathrm{loc}}(\Omega)$
and $\varepsilon>0$ we can find $w\in \mathrm{Lip}_{\mathrm{loc}}(\Omega)$ with $\Vert v-w\Vert_{N^{1,1}(\Omega)}<\varepsilon$.
Thus in the above definition we can equivalently assume that
$u_i\in N^{1,1}_{\mathrm{loc}}(\Omega)$.
\begin{example}\label{ex:total variation}
Let $X={\mathbb R}$ (unweighted), let $A:=[0,1]$, and let $u:=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{A}$.
Then by definition
\[
\Vert Du\Vert(A)=\inf\{\Vert Du\Vert(\Omega):\ \Omega\supset A,\ \Omega\textrm{ open}\}=2.
\]
On the other hand, the constant sequence $u_i:=1$ in $A$, $i\in{\mathbb N}$, converges to $u$ in $L^1(A)$ and
has upper gradients $g_{u_i}=0$ in $A$. This demonstrates that we cannot obtain $\Vert Du\Vert(A)$ simply by
writing \eqref{eq:definition of total variation repeated} with $\Omega$ replaced by $A$.
If we did define $\Vert Du\Vert(D)$ in this way for all ($\mu$-measurable) sets $D\subset {\mathbb R}$, then
we would obtain $\Vert Du\Vert({\mathbb R})=2$, $\Vert Du\Vert(A)=0$, and $\Vert Du\Vert({\mathbb R}\setminus A)=0$,
so that $\Vert Du\Vert$ would not be a measure.
\end{example}
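The value $\Vert Du\Vert({\mathbb R})=2$ underlying the example can be seen concretely: Lipschitz ramps converging to $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{[0,1]}$ in $L^1({\mathbb R})$ have upper-gradient integrals equal to $2$ for every ramp width. A minimal numerical sketch (an illustration only):

```python
import numpy as np

# Lipschitz ramps approximating the characteristic function of [0,1] on the
# unweighted real line: u_eps rises linearly on (-eps, 0), equals 1 on [0, 1],
# and falls linearly on (1, 1 + eps). Each ramp has int |u_eps'| dx = 2.
def ramp_tv(eps, n=100001):
    x = np.linspace(-1.0, 2.0, n)
    u = np.clip(np.minimum(x + eps, 1.0 + eps - x) / eps, 0.0, 1.0)
    return float(np.sum(np.abs(np.diff(u))))

print([round(ramp_tv(e), 3) for e in (0.5, 0.1, 0.02)])  # each close to 2
```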
However, for $1$-quasiopen sets we have the following.
\begin{theorem}\label{thm:characterization of total variational}
Let $U\subset X$ be a $1$-quasiopen set. If $\Vert Du\Vert(U)<\infty$, then
\[
\Vert Du\Vert(U)=\inf \left\{\liminf_{i\to\infty}\int_{U}g_{u_i}\,d\mu:\,
u_i\in N_{\mathrm{loc}}^{1,1}(U),\, u_i\to u\textrm{ in }L^1_{\mathrm{loc}}(U)\right\},
\]
where each $g_{u_i}$ is the minimal $1$-weak upper gradient of $u_i$ in $U$.
\end{theorem}
Note that the condition $u_i\to u$ in $L_{\mathrm{loc}}^1(U)$ means, explicitly, that
for every $x\in U$ there exists $r>0$ such that $u_i\to u$ in $L^1(B(x,r)\cap U)$.
In order for the formulation of the theorem to make sense, we need $U$ to be $\mu$-measurable, which is guaranteed by
Lemma \ref{lem:quasiopen sets are measurable}.
First we prove the following weaker version.
\begin{proposition}\label{prop:characterization of total variational for bounded functions}
Let $U\subset X$ be a $1$-quasiopen set. If $\Omega\supset U$ is an open set and
$-1\le u\le 1$ is a $\mu$-measurable function on $\Omega$
with $\Vert Du\Vert(\Omega)<\infty$, then
\[
\Vert Du\Vert(U)=\inf \left\{\liminf_{i\to\infty}\int_{U}g_{u_i}\,d\mu:\,
u_i\in N_{\mathrm{loc}}^{1,1}(U),\, u_i\to u\textrm{ in }L^1_{\mathrm{loc}}(U)\right\},
\]
where each $g_{u_i}$ is the minimal $1$-weak upper gradient of $u_i$ in $U$.
\end{proposition}
\begin{proof}
Denote the infimum in the statement of the proposition by $a(u,U)$.
Clearly $a(u,U)\le \Vert Du\Vert(U)$, so we only need to prove that $\Vert Du\Vert(U)\le a(u,U)$.
We can assume that $a(u,U)<\infty$.
First assume also that $u\in\mathrm{BV}(X)$ with $-1\le u\le 1$. Fix $\varepsilon>0$.
By Lemma \ref{lem:absolute cont of variation measure wrt capacity}, there exists $\delta\in (0,\varepsilon)$
such that if $A\subset X$ with $\capa_1(A)<\delta$, then $\Vert Du\Vert(A)<\varepsilon$.
Take a sequence $(u_i)\subset N_{\mathrm{loc}}^{1,1}(U)$ with $u_i\to u$ in $L^1_{\mathrm{loc}}(U)$ and
\[
\liminf_{i\to\infty}\int_U g_{u_i}\,d\mu\le a(u,U)+\varepsilon.
\]
By truncating, we can also assume that $-1\le u_i\le 1$.
Then take an open set $G\subset X$ such that $\capa_1(G)<\delta/C_2$ and $U\cup G$ is open.
By Lemma \ref{lem:capacity and Newtonian function} we find
an open set $V\supset G$
with $\capa_1(V)<\delta$ and
a function $\eta\in N_0^{1,1}(V)$ with $0\le\eta\le 1$, $\eta=1$ on $G$, and
$\Vert \eta\Vert_{N^{1,1}(X)}<\delta$.
By the definition of the total variation, we find a
sequence $(v_i)\subset N_{\mathrm{loc}}^{1,1}(V)$ with $v_i\to u$ in $L^1_{\mathrm{loc}}(V)$ and
\[
\Vert Du\Vert(V)=\lim_{i\to\infty}\int_{V} g_{v_i}\,d\mu.
\]
We can again assume that $-1\le v_i\le 1$, and then in fact $v_i\to u$ in $L^1(V)$.
Define
\[
w_i:=(1-\eta) u_i+\eta v_i,\quad i\in{\mathbb N}.
\]
By the Leibniz rule, see \cite[Theorem 2.15]{BB}, $(1-\eta)u_i$ has a $1$-weak upper gradient
\[
(1-\eta)g_{u_i}+|u_i|g_{\eta}
\]
in $U$.
By Lemma \ref{lem:zero boundary values},
\[
\eta v_i\in N_0^{1,1}(V)\subset N^{1,1}(X)\subset N^{1,1}(U)
\]
with a $1$-weak upper gradient $\eta g_{v_i}+|v_i|g_{\eta}$ (in $X$, and thus in $U$).
In total, $w_i$ has a $1$-weak upper gradient
\[
g_i:=(1-\eta)g_{u_i}+\eta g_{v_i}+2g_{\eta}
\]
in $U$. Next we show that in fact, $g_i$ is a $1$-weak upper gradient of $w_i$ in $U\cup G$;
note that
while $g_{u_i}$ is only defined on $U$, $(1-\eta)g_{u_i}$ is defined in a natural way on $U\cup G$,
and similarly for the term $\eta g_{v_i}$.
Since $U$ is a $1$-quasiopen set, it is also \emph{$1$-path open},
meaning that for $1$-a.e. curve $\gamma$,
the set $\gamma^{-1}(U)$ is a relatively open subset of $[0,\ell_{\gamma}]$, see \cite[Remark 3.5]{S2}.
Fix such a curve $\gamma$ in $U\cup G$, and assume also that the upper gradient inequality holds for the
pair $(w_i,g_i)$ on any subcurve of $\gamma$ in $U$, and for the
pair $(v_i,g_{v_i})$ on any subcurve of $\gamma$ in $G$; by \cite[Lemma 1.34(c)]{BB} this is true for $1$-a.e. curve.
Now $[0,\ell_{\gamma}]$ is a compact set that is covered by the two relatively
open sets $\gamma^{-1}(U)$ and $\gamma^{-1}(G)$.
By the Lebesgue number lemma,
there exists a number $\beta>0$ such that every subinterval of $[0,\ell_{\gamma}]$ with
length at most $\beta$ is contained either in $\gamma^{-1}(U)$ or in
$\gamma^{-1}(G)$.
Choose $m\in{\mathbb N}$ such that $\ell_{\gamma}/m\le \beta$ and consider the
subintervals $I_j:=[j\ell_{\gamma}/m,(j+1)\ell_{\gamma}/m]$, $j=0,\ldots,m-1$.
If $I_j\subset \gamma^{-1}(U)$, then by our assumptions on $\gamma$,
\[
|w_i(j\ell_{\gamma}/m)-w_i((j+1)\ell_{\gamma}/m)|\le \int_{j\ell_{\gamma}/m}^{(j+1)\ell_{\gamma}/m}g_i(\gamma(s))\,ds.
\]
Otherwise $I_j\subset \gamma^{-1}(G)$. Recall that
$\eta=1$ on $G$. Then by our assumptions on $\gamma$,
\begin{align*}
|w_i(j\ell_{\gamma}/m)-w_i((j+1)\ell_{\gamma}/m)|
&=|v_i(j\ell_{\gamma}/m)-v_i((j+1)\ell_{\gamma}/m)|\\
&\le \int_{j\ell_{\gamma}/m}^{(j+1)\ell_{\gamma}/m}g_{v_i}(\gamma(s))\,ds\\
&=\int_{j\ell_{\gamma}/m}^{(j+1)\ell_{\gamma}/m}g_i(\gamma(s))\,ds.
\end{align*}
Adding up the inequalities for $j=0,\ldots,m-1$, we conclude that the
upper gradient inequality holds for the
pair $(w_i,g_i)$ on the curve $\gamma$, that is,
\[
|w_i(0)-w_i(\ell_{\gamma})|\le \int_0^{\ell_{\gamma}}g_i(\gamma(s))\,ds.
\]
Thus
$g_i$ is a $1$-weak upper gradient of
$w_i$ in the open set $U\cup G$.
Next we show that $w_i\to u$ in $L_{\mathrm{loc}}^1(U\cup G)$.
Let $x\in U$. Since $u_i\to u$ in $L^1_{\mathrm{loc}}(U)$, there is some $r>0$ such that $u_i\to u$
in $L^1(B(x,r)\cap U)$. Moreover, $v_i\to u$ in $L^1(V)$, and by making $r$ smaller, if necessary,
$B(x,r)\subset U\cup G$.
Then
\begin{align*}
\int_{B(x,r)}|w_i-u|\,d\mu
&\le\int_{B(x,r)}|(1-\eta)(u_i-u)|\,d\mu +
\int_{B(x,r)}|\eta (v_i-u)|\,d\mu\\
&\le \int_{B(x,r)\cap U\setminus G}|u_i-u|\,d\mu+
\int_{B(x,r)\cap V}|v_i-u|\,d\mu\\
&\to 0\qquad \textrm{as }i\to\infty.
\end{align*}
On the other hand, if $x\in G$, then for some $r>0$, $B(x,r)\subset G$. Then
\[
\int_{B(x,r)}|w_i-u|\,d\mu= \int_{B(x,r)}|v_i-u|\,d\mu\to 0.
\]
We conclude that $w_i\to u$ in $L_{\mathrm{loc}}^1(U\cup G)$. Now by the definition of the total variation
(recall \eqref{eq:definition of total variation repeated} and the discussion after it)
\begin{align*}
\Vert Du\Vert(U\cup G)
&\le \liminf_{i\to\infty}\int_{U\cup G}g_i\,d\mu\\
&\le \liminf_{i\to\infty}\int_{U}g_{u_i}\,d\mu+\limsup_{i\to\infty}\int_{V}g_{v_i}\,d\mu+2\limsup_{i\to\infty}\int_X g_{\eta}\,d\mu\\
&\le a(u,U)+\varepsilon+\Vert Du\Vert(V)+2\int_X g_{\eta}\,d\mu\\
&< a(u,U)+4\varepsilon;
\end{align*}
recall that $\Vert Du\Vert(V)<\varepsilon$ since $\capa_1(V)<\delta$, and that
$\Vert \eta\Vert_{N^{1,1}(X)}<\delta<\varepsilon$.
In conclusion,
\[
\Vert Du\Vert(U)\le \Vert Du\Vert(U\cup G)\le a(u,U)+4\varepsilon.
\]
Letting $\varepsilon\to 0$,
the proof is complete in the case $u\in\mathrm{BV}(X)$, $-1\le u\le 1$.
Now we drop the assumption $u\in\mathrm{BV}(X)$. By assumption, we have $-1\le u\le 1$ on the open set $\Omega\supset U$,
with $\Vert Du\Vert(\Omega)<\infty$.
Take open sets $\Omega_1\Subset \Omega_2\Subset\ldots \Subset \Omega$ with $\Omega=\bigcup_{j=1}^{\infty}\Omega_j$,
and cutoff functions $\eta_j\in \Lip_c(X)$ with $0\le \eta_j\le 1$, $\eta_j=1$ on $\Omega_j$, and $\eta_j=0$ on $X\setminus \Omega_{j+1}$.
Fix $j\in{\mathbb N}$.
It is easy to check that $u \eta_j\in\mathrm{BV}(X)$ for each $j\in{\mathbb N}$.
Since $\Omega_j\cap U$ is a $1$-quasiopen set\footnote{Quasiopen sets do not form a topology, see \cite[Remark 9.1]{BB-OD}, but it is easy to see that the intersection of a $1$-quasiopen set and an open set is $1$-quasiopen.},
we get by the first part of the proof that
\begin{align*}
\Vert Du\Vert(\Omega_j\cap U)
&=\Vert D(u\eta_j)\Vert(\Omega_j\cap U)\\
&= a(u\eta_j,\Omega_j\cap U)
= a(u,\Omega_j\cap U)\le a(u,U).
\end{align*}
Letting $j\to\infty$ concludes the proof.
\end{proof}
Before proving Theorem \ref{thm:characterization of total variational},
we prove our second main result, which states that the total variation of $\mathrm{BV}$ functions is lower semicontinuous with
respect to $L^1$-convergence in $1$-quasiopen sets.
In fact, we will use this to prove Theorem \ref{thm:characterization of total variational}.
\begin{theorem}\label{thm:lower semic in quasiopen sets}
Let $U\subset X$ be a $1$-quasiopen set.
If $\Vert Du\Vert(U)<\infty$ and $u_i\to u$ in $L^1_{\mathrm{loc}}(U)$, then
\[
\Vert Du\Vert(U)\le \liminf_{i\to\infty}\Vert Du_i\Vert(U).
\]
\end{theorem}
\begin{proof}
First assume that $E,E_i\subset X$, $i\in{\mathbb N}$, are $\mu$-measurable sets with
$P(E,U),P(E_i,U)<\infty$ and
$\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{E_i}\to \text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E$ in $L_{\mathrm{loc}}^1(U)$.
For each $i\in{\mathbb N}$, the condition $P(E_i,U)<\infty$ means that we find an open set $\Omega_i\supset U$ such that
$P(E_i,\Omega_i)<P(E_i,U)+1/i<\infty$.
Then by Lemma \ref{lem:L1 loc and L1 convergence}, for each $i\in{\mathbb N}$ we find a function
$v_i\in \mathrm{Lip}_{\mathrm{loc}}(\Omega_i)\subset N_{\mathrm{loc}}^{1,1}(\Omega_i)$ such that
\[
\Vert v_i-\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{E_i}\Vert_{L^1(\Omega_i)}<1/i\quad\textrm{and}\quad\int_{\Omega_i}
g_{v_i}\,d\mu<P(E_i,\Omega_i)+1/i,
\]
where $g_{v_i}$ is the minimal $1$-weak upper gradient of $v_i$ in $\Omega_i$.
In particular, we have $v_i\in N_{\mathrm{loc}}^{1,1}(U)$ with
\[
\Vert v_i-\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{E_i}\Vert_{L^1(U)}<1/i\quad\textrm{and}\quad\int_{U} g_{v_i}\,d\mu<P(E_i,U)+2/i,
\]
where $g_{v_i}$ is now the minimal $1$-weak upper gradient of $v_i$ in $U$, which
is of course at most the minimal $1$-weak upper gradient of $v_i$ in $\Omega_i$.
Now we clearly have $v_i\to \text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E$ in $L^1_{\mathrm{loc}}(U)$. Moreover, the condition $P(E,U)<\infty$
means that there exists an open set $\Omega\supset U$ such that $P(E,\Omega)<\infty$. Thus by Proposition
\ref{prop:characterization of total variational for bounded functions},
\[
P(E,U)\le \liminf_{i\to\infty}\int_U g_{v_i}\,d\mu\le\liminf_{i\to\infty}( P(E_i,U)+2/i)
=\liminf_{i\to\infty} P(E_i,U).
\]
Thus we have proved lower semicontinuity in the case of sets of finite perimeter.
Then consider the function $u$.
Note that it is enough to prove the lower semicontinuity for a subsequence.
We have $u_i\to u$ in $L_{\mathrm{loc}}^1(U)$, which means that for every $x\in U$ there
exists $r_x>0$ such that $u_i\to u$ in $L^1(B(x,r_x)\cap U)$. Consider the cover
$\{B(x,r_x)\}_{x\in U}$.
We know that the space $X$ is
separable, see e.g. \cite[Proposition 1.6]{BB},
and this property is inherited by subsets of $X$. Thus $U$ is separable,
and so it is also Lindel\"of, meaning that every open cover of $U$ has a countable subcover,
see \cite[pp. 176--177]{Kur}.
Thus there exists a countable subcover $\{B(x_j,r_j)\}_{j\in{\mathbb N}}$ of $U$.
Consider the ball $B(x_1,r_1)$.
We have $u_i\to u$ in $L^1(B(x_1,r_1)\cap U)$, and so
by passing to a subsequence (not relabeled), for a.e. $t\in{\mathbb R}$ we have $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{\{u_i>t\}}\to \text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{\{u>t\}}$ in $L^1(B(x_1,r_1)\cap U)$, see e.g. \cite[p. 188]{EvaG92}.
By a diagonal argument, we find a subsequence (not relabeled) such that for each $j\in{\mathbb N}$ and
a.e. $t\in{\mathbb R}$ we have $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{\{u_i>t\}}\to \text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{\{u>t\}}$ in $L^1(B(x_j,r_j)\cap U)$.
Since the balls $B(x_j,r_j)$ cover $U$, we conclude that for a.e. $t\in{\mathbb R}$,
$\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{\{u_i>t\}}\to \text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{\{u>t\}}$ in $L_{\mathrm{loc}}^1(U)$.
We can assume that $\Vert Du_i\Vert(U)<\infty$ for all $i\in{\mathbb N}$, and so by
Proposition \ref{prop:coarea generalization},
$\int_{-\infty}^{\infty}P(\{u_i>t\},U)\,dt<\infty$ for all $i\in{\mathbb N}$,
and in particular,
for each $i\in{\mathbb N}$ the mapping $t\mapsto P(\{u_i>t\},U)$ is measurable,
enabling us to use Fatou's lemma.
Moreover, we are able to use the lower semicontinuity for sets of finite perimeter proved above,
because for a.e. $t\in{\mathbb R}$ we have $P(\{u>t\},U)<\infty$ and $P(\{u_i>t\},U)<\infty$ for all $i\in{\mathbb N}$.
Combining Proposition \ref{prop:coarea generalization},
this lower semicontinuity, and Fatou's lemma, we obtain
\begin{align*}
\Vert Du\Vert(U)
=\int_{-\infty}^{\infty}P(\{u>t\},U)\,dt
&\le\int_{-\infty}^{\infty}\liminf_{i\to\infty}P(\{u_i>t\},U)\,dt\\
&\le\liminf_{i\to\infty}\int_{-\infty}^{\infty}P(\{u_i>t\},U)\,dt\\
&=\liminf_{i\to\infty}\Vert Du_i\Vert(U).
\end{align*}
\end{proof}
Knowing that the total variation is lower semicontinuous in a wider class of sets than just the open sets should prove useful in dealing with various minimization problems. In the upcoming work \cite{LMS} we need lower semicontinuity of the total variation in the super-level sets of a given Newton-Sobolev function $w\in N^{1,1}(X)$. Such sets are $1$-quasiopen since functions in the class $N^{1,1}(X)$ are $1$-quasicontinuous; see \cite{BBM-QP} for more on these concepts.
Finally, we give the proof of Theorem \ref{thm:characterization of total variational}.
\begin{proof}[Proof of Theorem \ref{thm:characterization of total variational}]
Suppose that $\Vert Du\Vert(U)<\infty$.
First suppose also that there exists $M>0$ and an open set $\Omega\supset U$ such
that $-M\le u\le M$ on $\Omega$,
and $\Vert Du\Vert(\Omega)<\infty$. Again, denote by $a(u,U)$ the infimum in the statement of the theorem.
It is obvious that a function $g$ is a $1$-weak upper gradient of a function $v$
if and only if $g/M$ is a $1$-weak upper gradient of $v/M$.
Using this fact and Proposition \ref{prop:characterization of total variational for bounded functions}, we obtain
\[
\Vert Du\Vert(U)/M=\Vert D(u/M)\Vert(U)= a(u/M,U)=a(u,U)/M,
\]
so that
\[
\Vert Du\Vert(U)= a(u,U).
\]
Then suppose we only have $\Vert Du\Vert(U)<\infty$.
This means that there exists an open set $\Omega\supset U$ such that $u\in L^1_{\mathrm{loc}}(\Omega)$ and
$\Vert Du\Vert(\Omega)<\infty$.
Define the truncations
\[
u_M:=\min\{M,\max\{-M,u\}\},\quad M>0,
\]
and apply Theorem \ref{thm:lower semic in quasiopen sets} and the
first part of the current proof to obtain
\begin{align*}
\Vert Du\Vert(U)
&\le \liminf_{M\to\infty}\Vert Du_M\Vert(U)\\
&= \liminf_{M\to\infty}a(u_M,U)
\le\liminf_{M\to\infty}a(u,U)=a(u,U);
\end{align*}
here $a(u_M,U)\le a(u,U)$ holds because truncating an approximating sequence for $u$ at the levels $\pm M$ yields an approximating sequence for $u_M$ without increasing the upper gradients.
\end{proof}
Contrary to the case of open sets, lower semicontinuity can actually be violated in $1$-quasiopen sets if the limit function is not a $\mathrm{BV}$ function. Thus the requirement
$\Vert Du\Vert(U)<\infty$ in Theorem \ref{thm:lower semic in quasiopen sets} is essential.
\begin{example}\label{ex:necessity of finite variation}
Let $X={\mathbb R}^2$ (unweighted). Denote the origin by $0$, and let $U:=\{0\}$.
The set $B(0,r)$ is open for all $r>0$,
and it is easy to check that $\capa_1(B(0,r))\le 3\pi r$ for $0<r\le 1$.
Thus $U$ is a $1$-quasiopen set. Let
\[
E:=\bigcup_{i=1}^{\infty}\{(x_1,x_2)\in{\mathbb R}^2:\,(2i+1)^{-1}< x_1< (2i)^{-1}\}.
\]
It is well known that $P(E,B(0,r))=\mathcal H^1(\partial^*E\cap B(0,r))$, where $\mathcal H^1$
is the $1$-dimensional Hausdorff measure, see e.g. \cite[Theorem 3.59]{AFP}.
The set $\partial^*E$ contains, for each $i\in{\mathbb N}$ with $(2i)^{-1}<r/2$, the vertical segment $\{x_1=(2i)^{-1}\}\cap B(0,r)$, whose length exceeds $\sqrt{3}\,r$; since there are infinitely many such $i$, we get $P(E,B(0,r))=\infty$ for all $r>0$, and so
$P(E,U)=\infty$.
Next, let
\[
E_k:=\bigcup_{i=1}^k\{(x_1,x_2)\in{\mathbb R}^2:\,(2i+1)^{-1}< x_1< (2i)^{-1}\},\quad k\in{\mathbb N}.
\]
Then $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{E_k}\to \text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{E}$ even in $L_{\mathrm{loc}}^1({\mathbb R}^2)$ (and obviously in $L_{\mathrm{loc}}^1(U)$).
On the other hand, for any $k\in{\mathbb N}$ and $r<(2k+1)^{-1}$,
\[
P(E_k,U)\le P(E_k, B(0,r))=0,
\]
since $E_k$ does not intersect the open set $ B(0,r)$.
Thus
\[
P(E,U)>\lim_{k\to\infty}P(E_k,U),
\]
that is, lower semicontinuity is violated.
Similarly we see that without the assumption
$\Vert Du\Vert(U)<\infty$, Theorem \ref{thm:characterization of total variational} fails with the choice
$u=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E$, as the left-hand side is $\infty$ but the right-hand side is zero.
\end{example}
It would be interesting to know if the conclusions of Theorem \ref{thm:characterization of total variational} and Theorem \ref{thm:lower semic in quasiopen sets}
actually
characterize $1$-quasiopen sets.
\begin{openproblem}
Let $U\subset X$ be a $\mu$-measurable set such that the conclusion of Theorem
\ref{thm:characterization of total variational} or the conclusion of Theorem
\ref{thm:lower semic in quasiopen sets} holds.
Is $U$ then a $1$-quasiopen set?
\end{openproblem}
To conclude this section, we apply Lemma \ref{lem:capacity and Newtonian function} to prove a somewhat different but quite natural characterization of $1$-quasiopen sets, given in Proposition \ref{prop:quasiopen sets characterization} below.
For other characterizations of quasiopen sets, see \cite{BBM-QP}.
First we take note of the following facts.
By \cite[Theorem 4.3, Theorem 5.1]{HaKi} we know that for any $A\subset X$,
\begin{equation}\label{eq:null sets of Hausdorff measure and capacity}
\capa_1(A)=0\quad\ \textrm{if and only if}\quad\ \mathcal H(A)=0.
\end{equation}
The following proposition follows from
\cite[Corollary 4.2]{L2} (which is originally based on \cite[Theorem 1.1]{LaSh}).
\begin{proposition}\label{prop:quasisemicontinuity}
Let $u\in\mathrm{BV}_{\mathrm{loc}}(X)$ and let $\varepsilon>0$. Then there exists an open set $G\subset X$ with $\capa_1(G)<\varepsilon$
such that $u^{\wedge}|_{X\setminus G}$ is real-valued lower semicontinuous and
$u^{\vee}|_{X\setminus G}$ is
real-valued upper semicontinuous.
\end{proposition}
Moreover, $1$-quasiopen sets can be perturbed in the following way.
\begin{lemma}\label{lem:stability of quasiopen sets}
Let $U\subset X$ be a $1$-quasiopen set and let $A\subset X$ be $\mathcal H$-negligible.
Then $U\setminus A$ and $U\cup A$ are $1$-quasiopen sets.
\end{lemma}
\begin{proof}
Let $\varepsilon>0$. Take an open set $G\subset X$ such that $\capa_1(G)<\varepsilon$ and $U\cup G$ is an open set.
By \eqref{eq:null sets of Hausdorff measure and capacity} we know that $\capa_1(A)=0$, and since $\capa_1$
is an outer capacity, we find an open set $V\supset A$ such that $\capa_1(G)+\capa_1(V)<\varepsilon$.
Now $(U\setminus A)\cup (G\cup V)=(U\cup G)\cup V$ is an open set with $\capa_1(G\cup V)<\varepsilon$, so that
$U\setminus A$ is a $1$-quasiopen set. Similarly, $(U\cup A)\cup (G\cup V)=(U\cup G)\cup V$
is an open set, so that $U\cup A$ is also a $1$-quasiopen set.
\end{proof}
In the following, $\Delta$ denotes the symmetric difference.
\begin{proposition}\label{prop:quasiopen sets characterization}
Let $U\subset X$. The following are equivalent:
\begin{enumerate}[{(1)}]
\item $U$ is $1$-quasiopen.
\item There exists $u\in N_{\mathrm{loc}}^{1,1}(X)$ with $\mathcal H(\{u>0\}\Delta U)=0$.
\item There exists $u\in\mathrm{BV}_{\mathrm{loc}}(X)$ with $\mathcal H(\{u^{\wedge}>0\}\Delta U)=0$.
\end{enumerate}
\end{proposition}
\begin{proof}
\hfill
\begin{itemize}
\item $(1)\implies (2)$: Take a sequence of open sets $G_i\subset X$ such that $\capa_1(G_i)<2^{-i}$
and each $U\cup G_i$ is an open set. Then define
\[
v_i(x):=2^{-i}\min\{1,\dist(x,X\setminus (U\cup G_i))\},\quad x\in X,\ \ i\in{\mathbb N}.
\]
By
Lemma \ref{lem:capacity and Newtonian function} there exist sets $V_i\supset G_i$ and
functions $\eta_i\in N_0^{1,1}(V_i)$ such that $0\le\eta_i\le 1$, $\eta_i=1$ on $G_i$, and
$\Vert \eta_i\Vert_{N^{1,1}(X)}\le 2^{-i}C_2$.
Let $u_i:=(v_i-\eta_i)_+$.
Then $0\le u_i\le 1$, $u_i=v_i>0$ on $U\setminus V_i$, and $u_i=0$
on $G_i$. Since also $u_i\le v_i=0$ on $X\setminus (U\cup G_i)$, in total $u_i=0$ on $X\setminus U$.
For any bounded open set $\Omega\subset X$,
we have
\begin{align*}
\Vert u_i\Vert_{N^{1,1}(\Omega)}
&\le \Vert v_i\Vert_{N^{1,1}(\Omega)}+\Vert \eta_i\Vert_{N^{1,1}(\Omega)}\\
&\le \Vert v_i\Vert_{L^1(\Omega)}+\int_\Omega g_{v_i}\,d\mu+\Vert\eta_i\Vert_{N^{1,1}(X)}\\
&\le 2^{-i}\mu(\Omega)+2^{-i}\mu(\Omega)+2^{-i}C_2.
\end{align*}
Let $u:=\sup_{i\in{\mathbb N}}u_i$.
Then $\sup_{i\in{\mathbb N}}g_{u_i}$ is a $1$-weak upper gradient of $u$, see \cite[Lemma 1.52]{BB}.
Thus $\Vert u\Vert_{N^{1,1}(\Omega)}<\infty$, and so $u\in N_{\mathrm{loc}}^{1,1}(X)$. Moreover, $u>0$ on $U\setminus \bigcap_{i=1}^{\infty}V_i$,
where $\capa_1(\bigcap_{i=1}^{\infty}V_i)=0$ and thus $\mathcal H(\bigcap_{i=1}^{\infty}V_i)=0$ by
\eqref{eq:null sets of Hausdorff measure and capacity}.
On the other hand, $u=0$ on $X\setminus U$.
Thus $\mathcal H(\{u>0\}\Delta U)=0$.
\item $(2)\implies (3)$: Take $u\in N_{\mathrm{loc}}^{1,1}(X)$ with
$\mathcal H(\{u>0\}\Delta U)=0$.
We know that $u$
has a Lebesgue point at $\mathcal H$-a.e. $x\in X$, see \cite[Theorem 4.1, Remark 4.2]{KKST}
and \eqref{eq:null sets of Hausdorff measure and capacity}. Thus $u(x)=u^{\wedge}(x)$
at $\mathcal H$-a.e. $x\in X$, and so $\mathcal H(\{u^{\wedge}>0\}\Delta U)=0$.
Furthermore, $N_{\mathrm{loc}}^{1,1}(X)\subset \mathrm{BV}_{\mathrm{loc}}(X)$ by the discussion after
\eqref{eq:definition of total variation repeated}. Thus $u\in\mathrm{BV}_{\mathrm{loc}}(X)$.
\item $(3)\implies (1)$: Take $u\in\mathrm{BV}_{\mathrm{loc}}(X)$ with $\mathcal H(\{u^{\wedge}>0\}\Delta U)=0$. By
Proposition \ref{prop:quasisemicontinuity}, there exist open sets $G_i\subset X$ such that
$\capa_1(G_i)\to 0$
and for each $i\in{\mathbb N}$, $u^{\wedge}|_{X\setminus G_i}$ is a lower semicontinuous function. Hence
the set $\{u^{\wedge}>0\}$ is open in the subspace topology of $X\setminus G_i$, and so
the sets $\{u^{\wedge}>0\}\cup G_i$ are open (in $X$). We conclude that $\{u^{\wedge}>0\}$ is a $1$-quasiopen
set, and then by Lemma \ref{lem:stability of quasiopen sets}, $U$ is also $1$-quasiopen.
\end{itemize}
\end{proof}
\section{Uniform absolute continuity}
In this section we use the lower semicontinuity result proved in the previous section to
show that the variation measures of a sequence of $\mathrm{BV}$ functions
converging in the strict sense are uniformly absolutely continuous with respect to the
$1$-capacity $\capa_1$.
First recall the following definition.
Given a $\mu$-measurable set $H\subset X$, a sequence of functions $(g_i)\subset L^1(H)$
is said to be \emph{uniformly integrable} if the following two conditions are satisfied.
First, for every $\varepsilon>0$ there exists a $\mu$-measurable set $D\subset H$ with $\mu(D)<\infty$
such that
\[
\int_{H\setminus D}g_i\,d\mu<\varepsilon \quad\textrm{for all }i\in{\mathbb N}.
\]
Second, for every $\varepsilon>0$ there exists $\delta>0$ such that if $A\subset H$ is a $\mu$-measurable set with $\mu(A)<\delta$, then
\[
\int_{A}g_i\,d\mu<\varepsilon \quad\textrm{for all }i\in{\mathbb N}.
\]
The second condition can be called the uniform absolute continuity of the
measures $g_i\mu$ with
respect to $\mu$.
The variation measure of a $\mathrm{BV}$ function is usually not absolutely continuous
with respect to $\mu$, but according to Lemma
\ref{lem:absolute cont of variation measure wrt capacity}, it is absolutely
continuous with respect to the $1$-capacity.
Thus we can analogously talk about the variation measures of a sequence of $\mathrm{BV}$ functions being uniformly absolutely continuous with respect to the $1$-capacity.
Before stating our main theorem, we gather a few preliminary results. For these,
we will also need the concept of $\mathrm{BV}$-capacity, which is defined for a set $A\subset X$
by
\[
\capa_{\mathrm{BV}}(A):=\inf \Vert u\Vert_{\mathrm{BV}(X)},
\]
where the infimum is taken over all $u\in\mathrm{BV}(X)$ such that $u\ge 1$ in a neighborhood of $A$.
By \cite[Theorem 3.4]{HaKi} we know that if $A_1\subset A_2\subset \ldots\subset X$, then
\begin{equation}\label{eq:continuity of BVcap}
\capa_{\mathrm{BV}}\left(\bigcup_{i=1}^{\infty}A_i\right)=\lim_{i\to\infty} \capa_{\mathrm{BV}}(A_i).
\end{equation}
On the other hand, by \cite[Theorem 4.3]{HaKi} there is a constant
$C_{\textrm{cap}}(C_d,C_P,\lambda)\ge 1$ such that for any
$A\subset X$,
\begin{equation}\label{eq:Newtonian and BV capacities are comparable}
\capa_{\mathrm{BV}}(A)\le \capa_1(A)\le C_{\textrm{cap}}\capa_{\mathrm{BV}}(A).
\end{equation}
Thus the $1$-capacity and the $\mathrm{BV}$-capacity can often be used interchangeably, but the $\mathrm{BV}$-capacity
has the advantage that it is continuous with respect to increasing sequences of sets.
\begin{lemma}\label{lem:complete metric space}
Let $\Omega\subset X$ be an arbitrary set.
The space of sets $A\subset \Omega$ with $\capa_1(A)<\infty$, equipped with the metric
\[
\capa_1(A_1\Delta A_2),\quad A_1,A_2\subset \Omega,
\]
is a complete metric space if we identify sets $A_1,A_2\subset \Omega$ with
$\capa_1(A_1\Delta A_2)=0$.
\end{lemma}
\begin{proof}
We know that $\capa_1$ is an outer measure, see e.g. \cite[Theorem 6.7]{BB}, and thus
it is straightforward to check that $\capa_1(\cdot\Delta\cdot)$ is indeed a metric.
In particular, note that if $A_1,A_2\subset \Omega$ with $\capa_1(A_1),\capa_1(A_2)<\infty$, then
$\capa_1(A_1\Delta A_2)\le \capa_1(A_1\cup A_2)<\infty$, so the distance is always finite.
To verify completeness, let $\{A_i\}_{i\in{\mathbb N}}$ be a Cauchy sequence. We can pick a subsequence $\{A_{i_j}\}_{j\in{\mathbb N}}$ such that $\capa_1(A_{i_j}\Delta A_{i_{j+1}})<2^{-j}$ for all
$j\in{\mathbb N}$. It follows that $\capa_1(A_{i_j}\Delta A_{i_{l}})<2^{-j+1}$ for all $l> j$.
Let
\[
A:=\bigcap_{k=1}^{\infty}\bigcup_{l=k}^{\infty}A_{i_l},
\]
so that $A\subset \Omega$.
For a fixed $j\in{\mathbb N}$, we now have
\begin{align*}
A_{i_j}\setminus \bigcap_{k=1}^{\infty}\bigcup_{l=k}^{\infty}A_{i_l}
= A_{i_j}\cap \bigcup_{k=1}^{\infty}\left(X\setminus \bigcup_{l=k}^{\infty}A_{i_l}\right)
&= \bigcup_{k=1}^{\infty}
\left(A_{i_j}\cap \left(X\setminus \bigcup_{l=k}^{\infty}A_{i_l}\right)\right)\\
&= \bigcup_{k=1}^{\infty}
\left(A_{i_j}\setminus \bigcup_{l=k}^{\infty}A_{i_l}\right).
\end{align*}
Thus by \eqref{eq:continuity of BVcap} and \eqref{eq:Newtonian and BV capacities are comparable},
\begin{align*}
\capa_1(A_{i_j}\setminus A)
&=\capa_1\left(\bigcup_{k=1}^{\infty}\left(A_{i_j}\setminus
\bigcup_{l=k}^{\infty}A_{i_l}\right)\right)\\
&\le C_{\textrm{cap}}\capa_{\mathrm{BV}}\left(\bigcup_{k=1}^{\infty}\left(A_{i_j}\setminus
\bigcup_{l=k}^{\infty}A_{i_l}\right)\right)\\
&=C_{\textrm{cap}}\lim_{k\to\infty}\capa_{\mathrm{BV}}\left(A_{i_j}\setminus \bigcup_{l=k}^{\infty}A_{i_l}\right)\\
&\le C_{\textrm{cap}}\lim_{k\to\infty}\capa_{1}\left(A_{i_j}\setminus \bigcup_{l=k}^{\infty}A_{i_l}\right)\\
&\le C_{\textrm{cap}}\lim_{k\to\infty}\capa_1\left(A_{i_j}\setminus A_{i_k}\right)\\
&\le C_{\textrm{cap}}\lim_{k\to\infty}2^{-j+1}\\
&=2^{-j+1}C_{\textrm{cap}}
\to 0\quad\textrm{as }j\to \infty.
\end{align*}
Conversely,
\begin{align*}
\capa_1(A\setminus A_{i_j})
\le \capa_1\left(\bigcup_{l=j}^{\infty}A_{i_l}\setminus A_{i_j}\right)
&\le\capa_1\left(\bigcup_{l=j}^{\infty}(A_{i_{l+1}}\setminus A_{i_l})\right)\\
&\le \sum_{l=j}^{\infty}\capa_1(A_{i_{l+1}}\setminus A_{i_l})\\
&\le \sum_{l=j}^{\infty} 2^{-l}\\
&=2^{-j+1}\to 0\quad\textrm{as }j\to\infty.
\end{align*}
Thus $\capa_1(A_{i_j}\Delta A)\to 0$ as $j\to\infty$, and since $\{A_i\}_{i\in{\mathbb N}}$ is a Cauchy sequence, we have $\capa_1(A_i\Delta A)\to 0$ as $i\to\infty$. It is also clear that $\capa_1(A)<\infty$.
\end{proof}
The following proposition,
which follows from Proposition \ref{prop:quasisemicontinuity},
provides many $1$-quasiopen sets in which the lower semicontinuity
result of the previous section can be applied; recall the definitions of the measure theoretic interior $I_E$ and the measure theoretic exterior $O_E$ from
\eqref{eq:definition of measure theoretic interior} and \eqref{eq:definition of measure theoretic exterior}.
\begin{proposition}[{\cite[Proposition 4.2]{L3}}]\label{prop:set of finite perimeter is quasiopen}
Let $\Omega\subset X$ be an open set and let $E\subset X$ be a $\mu$-measurable set with
$P(E,\Omega)<\infty$. Then the sets $I_E\cap\Omega$ and $O_E\cap\Omega$ are $1$-quasiopen.
\end{proposition}
The $1$-capacity has the following useful rigidity property.
\begin{lemma}\label{lem:capacity of measure theoretic closure}
For any $A\subset X$, we have $\capa_1(I_A\cup \partial^*A)\le \capa_1(A)$.
\end{lemma}
\begin{proof}
This follows by combining \cite[Lemma 3.1]{L3} and \cite[Proposition 3.8]{L3}.
\end{proof}
Now we give the main result of this section. The proof is partially based on Baire's category theorem, similarly to the proof of the Vitali-Hahn-Saks theorem concerning uniformly integrable sequences of functions, see e.g. \cite[Theorem 1.30]{AFP}.
\begin{theorem}\label{thm:uniform absolute continuity}
Let $\Omega\subset X$ be an open set,
and suppose that $u_i\to u$ in $L_{\mathrm{loc}}^1(\Omega)$ and $\Vert Du_i\Vert(\Omega)\to \Vert Du\Vert(\Omega)$, with
$\Vert Du\Vert(\Omega)<\infty$ and $\Vert Du_i\Vert(\Omega) <\infty$ for all $i\in{\mathbb N}$.
Then for every $\varepsilon>0$ there exists $\delta>0$ such that if $A\subset \Omega$
with $\capa_1(A)<\delta$, then $\Vert Du_i\Vert(A)<\varepsilon$ for all $i\in{\mathbb N}$.
\end{theorem}
\begin{proof}
Fix $\varepsilon>0$. By Lemma \ref{lem:absolute cont of variation measure wrt capacity} there
exists $\alpha>0$ such that if $D\subset \Omega$ with $\capa_1(D)<C_1\alpha$, then
$\Vert Du\Vert(D)<\varepsilon/2$. Fix $A\subset\Omega$ with $\capa_1(A)<\alpha$.
By Lemma \ref{lem:covering G by a set of finite perimeter} we find an open set $V\supset A$
with $\capa_1(V)< C_1\alpha$ and $P(V,X)< C_1\alpha$.
By Lemma \ref{lem:capacity of measure theoretic closure}, also
$\capa_1(I_V\cup \partial^*V)< C_1\alpha$.
By Proposition \ref{prop:set of finite perimeter is quasiopen}, $\Omega\cap O_V$ is a
$1$-quasiopen set, and thus by the lower semicontinuity Theorem \ref{thm:lower semic in quasiopen sets} we get
\[
\Vert Du\Vert (\Omega\cap O_V)\le
\liminf_{i\to\infty}\Vert Du_i\Vert(\Omega\cap O_V).
\]
Since also $\Vert Du_i\Vert(\Omega)\to \Vert Du\Vert(\Omega)$, we have
\[
\Vert Du\Vert (\Omega\setminus O_V)\ge\limsup_{i\to\infty}\Vert Du_i\Vert(\Omega\setminus O_V),
\]
that is,
\[
\Vert Du\Vert (\Omega\cap (I_V\cup \partial^*V))\ge\limsup_{i\to\infty}
\Vert Du_i\Vert(\Omega\cap (I_V\cup \partial^*V)).
\]
But since $\capa_1(I_V\cup \partial^*V)< C_1\alpha$, we get
\[
\limsup_{i\to\infty}\Vert Du_i\Vert(\Omega\cap (I_V\cup \partial^*V))<\varepsilon/2.
\]
Moreover, $A\subset \Omega\cap V\subset \Omega\cap(I_V\cup \partial^*V)$,
since $V$ is open.
In conclusion,
\begin{equation}\label{eq:absolute continuity at limit}
A\subset \Omega\ \textrm{and}\ \capa_1(A)<\alpha\quad\textrm{imply}\quad \limsup_{i\to\infty}\Vert Du_i\Vert(A)<\varepsilon/2.
\end{equation}
Consider the metric space defined in Lemma \ref{lem:complete metric space}.
Define the sets
\[
\mathcal A_k:=\{D\subset \Omega:\,\, \capa_1(D)<\infty\ \, \textrm{and}\ \, \sup_{i\ge k}\Vert Du_i\Vert(D)\le\varepsilon/2\},\quad k\in{\mathbb N}.
\]
We show that these sets are closed. Fix $k\in{\mathbb N}$ and then
fix $i\ge k$. Let $D\subset X$ with $\capa_1(D)<\infty$. If $D_n\in \mathcal A_k$, $n\in{\mathbb N}$,
is a sequence with
$\capa_1(D_n\Delta D)\to 0$, then since
$\Vert Du_i\Vert$ is absolutely continuous
with respect to $\capa_1$, we have
\[
\Vert Du_i\Vert(D)\le \liminf_{n\to\infty}(\Vert Du_i\Vert(D\setminus D_n)+\Vert Du_i\Vert(D_n))
\le 0+\varepsilon/2=\varepsilon/2.
\]
Since $i\ge k$ was arbitrary, we have $D\in \mathcal A_k$, so $\mathcal A_k$ is closed.
Let
\[
\mathcal{Y}:=\{D\subset \Omega:\, \capa_1(D)<\alpha\}.
\]
By \eqref{eq:absolute continuity at limit}, $\mathcal{Y}=\bigcup_{k=1}^{\infty}(\mathcal A_k\cap \mathcal{Y})$.
Since $\mathcal{Y}$ is an open subset of a complete metric space, Baire's category theorem applies.
Thus at least one of the sets $\mathcal A_k$ has nonempty interior in $\mathcal{Y}$. That is, there exists $D\in \mathcal{Y}$ and $\widetilde{\delta}>0$ such that every $H\subset \Omega$ with $\capa_1(H\Delta D)<\widetilde{\delta}$ belongs to $\mathcal A_k$.
Take any $A\subset \Omega$ with $\capa_1(A)<\widetilde{\delta}$. Then
\[
\capa_1((D\cup A)\Delta D)<\widetilde{\delta}
\]
and so
\[
\sup_{i\ge k}\Vert Du_i\Vert(A)\le \sup_{i\ge k}\Vert Du_i\Vert(D\cup A)\le\varepsilon/2< \varepsilon.
\]
By Lemma \ref{lem:absolute cont of variation measure wrt capacity}, we find $\widehat{\delta}>0$ such that if $A\subset\Omega$ with $\capa_1(A)<\widehat{\delta}$, then $\Vert Du_i\Vert(A)<\varepsilon$ for all $i=1,\ldots,k-1$. Finally, we let $\delta:=\min\{\widetilde{\delta},\widehat{\delta}\}$.
\end{proof}
\section{Introduction}
When $N$ is a power of $2$, the Walsh-Hadamard transform is defined recursively by $H_2 = \begin{bmatrix}
1 & 1\\
1 & -1\\
\end{bmatrix},$ and $$H_N = \begin{bmatrix}
H_{N/2} & H_{N/2}\\
H_{N/2} & -H_{N/2}\\
\end{bmatrix}.$$ A common task in many areas of computation is to \emph{compute the length-$N$ Walsh-Hadamard transform}, i.e., given as input a length-$N$ vector $v \in \mathbb{F}^N$, compute the vector $H_N v$. The most straightforward algorithm would compute this using $O(N^2)$ arithmetic operations, but the fast Walsh-Hadamard transform (FWHT) algorithm can compute this using only $O(N \log N)$ arithmetic operations. It is widely believed that $\Omega(N \log N)$ arithmetic operations are necessary, and a substantial amount of work has gone into proving this in restricted arithmetic models\footnote{For instance, this is known to hold for arithmetic circuits with `bounded coefficients'; see e.g.,~\cite[{Section 3.3}]{lokam2009complexity}.}, and studying conjectures which would imply this\footnote{For instance, the now-refuted conjecture that the Walsh-Hadamard transform is rigid~\cite{alman2017probabilistic}.}; see, for instance, the survey~\cite{lokam2009complexity}.
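For concreteness, the FWHT recursion can be implemented as an iterative butterfly; the sketch below is our own minimal illustration over a prime field $\mathbb{F}_q$ (the function name and in-place formulation are ours, not from the paper):

```python
def fwht(v, q):
    """In-place fast Walsh-Hadamard transform of v over F_q (q an odd prime).

    Implements H_N = [[H, H], [H, -H]] iteratively: each of the log2(N)
    rounds combines pairs (a, b) at stride h into (a + b, a - b) mod q.
    """
    n = len(v)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of 2"
    h = 1
    while h < n:
        for start in range(0, n, 2 * h):
            for i in range(start, start + h):
                a, b = v[i], v[i + h]
                v[i] = (a + b) % q
                v[i + h] = (a - b) % q
        h *= 2
    return v
```

Each of the $\log_2 N$ rounds touches every entry once, giving the usual $\Theta(N \log N)$ arithmetic operation count.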
A natural question arises: why restrict ourselves to arithmetic models of computation? The Walsh-Hadamard transform is commonly used in practice, and speedups using non-arithmetic operations could be impactful. Nonetheless, there has not been much work on non-arithmetic algorithms for the Walsh-Hadamard transform.
A related problem is matrix multiplication: given as input two $n \times n$ matrices, compute their product. The asymptotically fastest known algorithm is algebraic, and uses $O(n^{2.373})$ arithmetic operations. That said, non-algebraic algorithmic techniques, especially lookup tables, have been used to design more practical, `combinatorial' algorithms for a variant on this problem called Boolean matrix multiplication\footnote{In Boolean matrix multiplication, given two matrices whose entries are from $\{0,1\}$, our goal is to multiply them over the AND-OR semiring, or equivalently, to determine which entries of their product over $\mathbb{R}$ are nonzero (but not necessarily what nonzero values they take on).} since the work of~\cite{arlazarov1970economical}. These techniques save logarithmic factors over the straightforward $O(n^3)$ time algorithm -- the best algorithm along these lines runs in $n^3 \cdot \frac{(\log\log n)^{O(1)}}{(\log n)^3}$ time~\cite{yu2018improved} in the standard word RAM model -- but they are considered more practical than the algebraic algorithms (which save polynomial factors in $n$). Lookup tables have also been used in practice to approximately multiply matrices~\cite{jeon2020biqgemm,blalock2021multiplying}.
In this paper, we show that lookup tables can be used to speed up the asymptotically fastest known algorithms for both the Walsh-Hadamard transform and (exact) matrix multiplication \emph{over finite fields}. We will show, for instance, that only $o(N \log N)$ bit operations suffice to compute the length-$N$ Walsh-Hadamard transform over finite fields when we augment the arithmetic model to allow for lookup tables. This may help to explain the difficulty of proving lower bounds for this problem, and help to guide future work on arithmetic circuit lower bounds (since any lower bounds technique would need to fail when lookup tables are allowed).
We focus here on constant-sized finite fields, though our algorithms generalize to larger finite fields as well. Our algorithms are simple modifications of the usual recursive algorithms for solving these problems, and we describe below how they compare favorably to algorithms which are used in practice.
\subsection{Model of computation}
As discussed, these problems are typically studied in the arithmetic circuit model, wherein an algorithm may only perform arithmetic operations ($+,-,\times,\div$) over the field $\mathbb{F}$ applied to inputs and fixed constants, and the arithmetic complexity of the algorithm is the number of such operations. The asymptotically fastest known algorithms for the problems we study here all fit within this model. However, we would also like the ability to use lookup tables, so we consider a more general model: the \emph{bit operations model}.
The bit complexity of a RAM algorithm is the number of operations on bits performed by the algorithm. This is the natural model with the most direct comparison to the arithmetic model, since an algorithm with arithmetic complexity $T$ over a constant-sized finite field naturally has bit complexity $O(T)$. This model is often used as a more realistic version of the arithmetic model (see e.g.,~\cite{pan1981bit,lingas1991bit,van2013bit,el2018bit}).
One can see (via a simple tree data structure, for instance) that a lookup table with $b$-bit keys and values can be implemented so that values can be looked up and changed using $O(b)$ bit operations.
We note before moving on that all the algorithms in this paper will only perform arithmetic operations and lookup table operations; one could also define a nonstandard model of computation which allows for just these two types of operations, and get the same results.
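For intuition, a lookup table with $b$-bit keys can simply be an array indexed by the key, so that each query reads one $b$-bit index and one stored value; a toy sketch (our own illustration, not part of the paper's constructions):

```python
def make_lookup(f, b):
    """Precompute f on all 2**b keys of b bits.

    After this one-time cost, each query table[x] is a single array access,
    i.e. O(b) bit operations in the bit-complexity model discussed above.
    """
    return [f(x) for x in range(2 ** b)]

# example: parity of 4-bit keys, queried by index
table = make_lookup(lambda x: bin(x).count("1") % 2, 4)
assert table[0b1011] == 1
```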
\subsection{Results: Walsh-Hadamard Transform}
Our first result is a faster algorithm for the Walsh-Hadamard transform.
\begin{theorem}
Let $\mathbb{F}_q$ be a finite field of size $q = O(1)$, let $n$ be a positive integer, and let $N = 2^n$. There is an algorithm for computing the length-$N$ Walsh-Hadamard transform over $\mathbb{F}_q$ that uses $O\left(\frac{N \log N}{\log \log N}\right)$ bit operations.
\end{theorem}
By comparison, the fast Walsh-Hadamard transform algorithm, which uses $\Theta(N \log N)$ arithmetic operations, would take $\Theta(N \log N)$ bit operations. $\Theta(N \log N)$ arithmetic operations is widely believed to be optimal over any field of characteristic other than $2$ (in characteristic $2$ we have $-1=1$, so $H_N$ is the all-ones matrix and the problem is trivial), but our algorithm improves on this using lookup tables.
Our algorithm uses the same recursive approach as the fast Walsh-Hadamard transform; our main idea is to use results from a lookup table to quickly jump forward many recursive layers at a time.
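The following sketch shows the shape of this idea (it is our own simplified illustration, not the paper's exact construction): precompute the transform of every length-$2^b$ vector over $\mathbb{F}_q$, then run the usual butterfly rounds but replace the first $b$ of them by a single table lookup per block. With $b = \Theta(\log\log N)$ the table has $q^{2^b} = N^{o(1)}$ entries, and the lookups replace $b$ arithmetic rounds at once:

```python
from itertools import product

def fwht(v, q):
    # standard recursion, used both to fill the table and as a baseline
    v = list(v)
    h = 1
    while h < len(v):
        for s in range(0, len(v), 2 * h):
            for i in range(s, s + h):
                x, y = v[i], v[i + h]
                v[i], v[i + h] = (x + y) % q, (x - y) % q
        h *= 2
    return v

def fwht_table(v, q, b):
    """FWHT whose first b rounds are replaced by one lookup per block.

    In a real implementation the table is built once and reused across
    many transforms; here it is rebuilt per call for simplicity.
    """
    n, block = len(v), 2 ** b
    table = {w: fwht(w, q) for w in product(range(q), repeat=block)}
    v = list(v)
    for s in range(0, n, block):          # rounds h = 1, ..., 2**(b-1)
        v[s:s + block] = table[tuple(v[s:s + block])]
    h = block                             # remaining rounds, done arithmetically
    while h < n:
        for s in range(0, n, 2 * h):
            for i in range(s, s + h):
                x, y = v[i], v[i + h]
                v[i], v[i + h] = (x + y) % q, (x - y) % q
        h *= 2
    return v
```

This is valid because the early butterfly rounds act only within contiguous length-$2^b$ blocks, so their combined effect on a block is exactly the length-$2^b$ transform stored in the table.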
Although the Walsh-Hadamard transform is most often applied over the real or complex numbers, such as in signal processing and data compression,
it has been applied over finite fields in areas including coding theory~\cite{rajan2001quasicyclic,xu2019three}, cryptographic protocols~\cite{helleseth2010new,mesnager2019two}, and learning algorithms~\cite{liu2020deep}.
Our algorithm also generalizes directly to any transform defined by Kronecker powers of a fixed matrix over a finite field. For instance, given as input the $2^n$ coefficients of a multilinear polynomial in $n$ variables over $\mathbb{F}_2$, we can compute its evaluation on all $2^n$ inputs from $\mathbb{F}_2^n$ in $O(2^n \cdot n / \log n)$ bit operations (improving over the usual recursive algorithm by a factor of $\log n$)\footnote{This problem corresponds to computing the linear transform defined by Kronecker powers of the matrix $\begin{bmatrix}
1 & 1\\
1 & 0\\
\end{bmatrix}$.}.
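For illustration, the same butterfly network computes $M^{\otimes n}v$ for any fixed $2\times 2$ base matrix $M$ over $\mathbb{F}_q$, since the Kronecker power factors into commuting stages $I\otimes M\otimes I$; the sketch below is ours, and the bit-ordering convention relating indices to variables is a choice we make, not one fixed by the paper:

```python
def kron_transform(M, v, q):
    """Apply the n-th Kronecker power of the 2x2 matrix M to v over F_q,
    where len(v) = 2**n, via the same O(N log N)-operation butterfly as
    the FWHT (M = [[1, 1], [1, -1]] recovers the Walsh-Hadamard case)."""
    v = list(v)
    n = len(v)
    h = 1
    while h < n:
        for s in range(0, n, 2 * h):
            for i in range(s, s + h):
                a, b = v[i], v[i + h]
                v[i] = (M[0][0] * a + M[0][1] * b) % q
                v[i + h] = (M[1][0] * a + M[1][1] * b) % q
        h *= 2
    return v
```

With $M = \begin{bmatrix}1&1\\1&0\end{bmatrix}$ and $q=2$ this performs the multilinear-polynomial evaluation mentioned above, under our indexing convention.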
\subsection{Results: Matrix Multiplication}
Our result for matrix multiplication shows how to convert any algebraic algorithm into one which uses a lookup table to save a superconstant factor.
\begin{theorem} \label{thm:mainmm}
For any finite field $\mathbb{F}_q$ of size $q = O(1)$, suppose there is an algebraic algorithm for multiplying $n \times n$ matrices over $\mathbb{F}_q$ in $O(n^{\tau})$ arithmetic operations for some constant $\tau > 2$. Then, there is another algorithm for multiplying $n \times n$ matrices over $\mathbb{F}_q$ which uses $O(n^\tau / (\log n)^{\tau/2 - 1})$ bit operations.
\end{theorem}
This speeds up the standard implementation of the algebraic algorithm by a factor of $\Theta((\log n)^{\tau/2 - 1})$. For instance, Strassen's algorithm~\cite{strassen} gives $\tau = 2.81$, resulting in an algorithm using $O(n^{2.81} / (\log n)^{0.4})$ bit operations by Theorem~\ref{thm:mainmm}, and the asymptotically fastest known algorithm~\cite{coppersmith,alman2021refined} gives $\tau = 2.373$, resulting in an algorithm using $O(n^{2.373} / (\log n)^{0.186})$ bit operations by Theorem~\ref{thm:mainmm}.
Notably, much work has gone into improving the leading constant in the running time of Strassen's algorithm. Strassen's original algorithm~\cite{strassen} has leading constant $7$ (meaning, it uses $7 n^{\log_2(7)} + o(n^{\log_2(7)})$ operations), and Winograd~\cite{winograd1971multiplication} improved this to $6$. This was believed to be optimal due to lower bounds by Probert~\cite{probert1976additive} and Bshouty~\cite{bshouty1995additive}. However, in a recent breakthrough, Karstadt and Schwartz~\cite{karstadt2020matrix} gave a new `change of basis' approach, and used it to improve the leading constant to $5$. They showed that $5$ is optimal for their new approach, and later work showed it is also optimal for the more general `sparse decomposition' approach~\cite{beniamini2019faster}. The fact that we achieve an asymptotic speedup in this paper (which one can view as achieving leading constant $\varepsilon$ for any $\varepsilon>0$ in bit complexity) may help to explain the difficulty of extending these lower bounds on the constant beyond restricted classes of algorithmic approaches.
Our approach for matrix multiplication is one that is commonly used in practice: use iterations of a recursive algebraic algorithm until the matrices one would like to multiply are sufficiently small, and then use a different algorithm optimized for small matrices. When this is used in practice, an optimized version of the straightforward (cubic-time) algorithm is used for small matrices, giving a constant factor improvement to the running time; see e.g.,~\cite{huang2016strassen}.
We implement this approach by instead using lookup tables to multiply superconstant-sized matrices very quickly, and thus get an asymptotic improvement.
Matrix multiplication over finite fields has many applications. One prominent example is Boolean matrix multiplication, which has a simple randomized reduction to matrix multiplication over any field\footnote{Set each entry of one of the matrices to 0 independently with probability $1/2$, then multiply the two matrices and check which entries are nonzero.}. Hence our algorithm gives an asymptotic speedup for applications of Boolean matrix multiplication such as detecting triangles in graphs.
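The randomized reduction in the footnote can be sketched as follows (an illustrative sketch over $\mathbb{F}_2$; the naive cubic field multiplication below is a stand-in for any fast algorithm, and the trial count is an assumption for amplifying the per-trial success probability of exactly $1/2$):

```python
import random

def mm_mod2(A, B):
    """Naive matrix product over F_2, as a stand-in for a fast algorithm."""
    n = len(A)
    return [[sum(A[i][j] * B[j][k] for j in range(n)) % 2
             for k in range(n)] for i in range(n)]

def boolean_mm(A, B, trials=20):
    """Whp computes the Boolean product: C[i][k] = OR_j (A[i][j] AND B[j][k])."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for _ in range(trials):
        # Zero each entry of B independently with probability 1/2.
        Bmask = [[b * random.randint(0, 1) for b in row] for row in B]
        P = mm_mod2(A, Bmask)
        for i in range(n):
            for k in range(n):
                if P[i][k]:  # a witness j survived the masking
                    C[i][k] = 1
    return C

A = [[1, 0], [1, 1]]
B = [[0, 1], [1, 1]]
print(boolean_mm(A, B))  # whp -> [[0, 1], [1, 1]]
```

A true $1$-entry survives any single trial with probability exactly $1/2$ (the surviving witnesses form a uniformly random parity), so the failure probability drops exponentially with the number of trials; a true $0$-entry can never be reported as $1$.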
\section{Preliminaries}
\subsection{Notation}
Throughout this paper, we write $\log$ to denote the base $2$ logarithm. For a positive integer $n$, we use the notation $[n] := \{1,2,3,\ldots,n\}$.
\subsection{Kronecker Products}
If $\mathbb{F}$ is a field, and $A \in \mathbb{F}^{N \times N}$ and $B \in \mathbb{F}^{M \times M}$ are matrices, their \emph{Kronecker product} $A \otimes B \in \mathbb{F}^{(NM) \times (NM)}$ is a matrix given by
$$A \otimes B = \begin{bmatrix}
B[1,1] \cdot A & B[1,2] \cdot A & \cdots & B[1,M] \cdot A\\
B[2,1] \cdot A & B[2,2] \cdot A & \cdots & B[2,M] \cdot A\\
\vdots & \vdots & \ddots & \vdots \\
B[M,1] \cdot A & B[M,2] \cdot A & \cdots & B[M,M] \cdot A\\
\end{bmatrix}.$$
For a positive integer $n$, we write $A^{\otimes n} \in \mathbb{F}^{N^n \times N^n}$ for the $n$th \emph{Kronecker power} of $A$, which is the Kronecker product of $n$ copies of $A$.
\subsection{Matrices of interest}
For positive integer $N$, we write $I_N$ to denote the $N \times N$ identity matrix. If $N$ is a power of $2$, we write $H_N$ to denote the $N \times N$ Walsh-Hadamard transform, given by $H_N = H_2^{\otimes \log N}$, where $$H_2 = \begin{bmatrix}
1 & 1\\
1 & -1\\
\end{bmatrix}.$$
\section{Walsh-Hadamard Transform}
Let $\mathbb{F}_q$ be the finite field of size $q$, and let $N$ be a power of $2$. In this section, we give an algorithm for computing the length-$N$ Walsh-Hadamard transform, $H_N$, over $\mathbb{F}_q$. The key idea behind our new algorithm is to pick a $K = O(\log_q N)$ such that $q^K \ll N$, and first create a lookup table of the length-$K$ Walsh-Hadamard transforms of all vectors $v \in \mathbb{F}_q^K$. We will then use this lookup table in conjunction with the following standard recursive approach for computing Kronecker powers (sometimes called Yates' algorithm), which we will apply with $M = H_K$:
\begin{lemma} \label{lem:yates}
Let $\mathbb{F}$ be any constant-sized finite field, let $m,d$ be positive integers, and let $M \in \mathbb{F}^{d \times d}$ be any matrix. Suppose we are given an algorithm which, on input $v \in \mathbb{F}^d$, outputs $Mv$ in time $T$. Then, there is an algorithm which, on input $z \in \mathbb{F}^{d^m}$, outputs $M^{\otimes m} z$ in time $O(T \cdot m \cdot d^{m-1})$.
\end{lemma}
\begin{proof}
By definition of the Kronecker product, we can write $M^{\otimes m}$ as a $d \times d$ block matrix (where each block is a $d^{m-1} \times d^{m-1}$ matrix) as
\begin{align*} &M^{\otimes m} \\ &= \begin{bmatrix}
M[1,1] \cdot M^{\otimes (m-1)} & M[1,2] \cdot M^{\otimes (m-1)} & \cdots & M[1,d] \cdot M^{\otimes (m-1)}\\
M[2,1] \cdot M^{\otimes (m-1)} & M[2,2] \cdot M^{\otimes (m-1)} & \cdots & M[2,d] \cdot M^{\otimes (m-1)}\\
\vdots & \vdots & \ddots & \vdots \\
M[d,1] \cdot M^{\otimes (m-1)} & M[d,2] \cdot M^{\otimes (m-1)} & \cdots & M[d,d] \cdot M^{\otimes (m-1)}\\
\end{bmatrix}
\\ &= \begin{bmatrix}
M[1,1] \cdot I_{d^{m-1}} & M[1,2] \cdot I_{d^{m-1}} & \cdots & M[1,d] \cdot I_{d^{m-1}}\\
M[2,1] \cdot I_{d^{m-1}} & M[2,2] \cdot I_{d^{m-1}} & \cdots & M[2,d] \cdot I_{d^{m-1}}\\
\vdots & \vdots & \ddots & \vdots \\
M[d,1] \cdot I_{d^{m-1}} & M[d,2] \cdot I_{d^{m-1}} & \cdots & M[d,d] \cdot I_{d^{m-1}}\\
\end{bmatrix} \hspace{-4pt} \times \hspace{-4pt} \begin{bmatrix}
M^{\otimes (m-1)} & & \\
& M^{\otimes (m-1)} & & \\
& & \ddots & \\
& & & M^{\otimes (m-1)}\\
\end{bmatrix}
\end{align*}
Thus, we can multiply $M^{\otimes m}$ times $z \in \mathbb{F}^{d^m}$ with a two-step process (corresponding to multiplying the matrix on the right times $z$, then the matrix on the left times the result):
\begin{enumerate}
\item Partition $z \in \mathbb{F}^{d^m}$ into $d$ vectors $z_1, \ldots, z_d \in \mathbb{F}^{d^{m-1}}$. Recursively compute $u_i = M^{\otimes (m-1)} z_i \in \mathbb{F}^{d^{m-1}}$ for each $i \in \{ 1, \ldots, d \}$.
\item For each $j \in \{ 1, \ldots, d^{m-1} \}$, let $x_j \in \mathbb{F}^d$ be the vector consisting of, for each $i \in \{1, \ldots, d\}$, entry $j$ of vector $u_i$. Use the given algorithm to compute $y_j = M x_j$.
\end{enumerate}
Finally, we output the appropriate concatenation of the $y_j$ vectors (where the first $d^{m-1}$ entries are the first entries of all the $y_j$ vectors, the second $d^{m-1}$ entries are the second entries of all the $y_j$ vectors, and so on).
Our algorithm makes $d$ recursive calls in the first step, and calls the given algorithm $d^{m-1}$ times in the second step.
Hence, the total running time, $E(d^m)$, has the recurrence $$E(d^m) = d \cdot E(d^{m-1}) + d^{m-1} \cdot T.$$
This solves, as desired, to $E(d^m) = O(T \cdot m \cdot d^{m-1})$.
\end{proof}
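The two-step recursion in the proof can be sketched as follows (an illustrative sketch; `apply_M` is the black-box algorithm of the lemma, and all names are our own):

```python
def kronecker_apply(apply_M, d, m, z):
    """Apply M^{(x)m} to a length-d^m vector, given apply_M computing v -> Mv."""
    if m == 0:
        return list(z)
    block = d ** (m - 1)
    # Step 1: recurse on the d sub-vectors z_1, ..., z_d.
    u = [kronecker_apply(apply_M, d, m - 1, z[i * block:(i + 1) * block])
         for i in range(d)]
    # Step 2: for each position j, gather x_j, apply M, and scatter back.
    out = [0] * (d * block)
    for j in range(block):
        y = apply_M([u[i][j] for i in range(d)])
        for i in range(d):
            out[i * block + j] = y[i]
    return out

# Example: M = H_2 over F_5, so the result is the length-4 Walsh-Hadamard transform.
def apply_H2(v):
    return [(v[0] + v[1]) % 5, (v[0] - v[1]) % 5]

print(kronecker_apply(apply_H2, 2, 2, [1, 2, 3, 4]))  # -> [0, 3, 1, 0]
```

The gather/scatter in step 2 is exactly the permutation discussed later in the word RAM model; in the bit-operation model it is free up to constant factors.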
We can now give our main algorithm for the Walsh-Hadamard transform:
\begin{theorem} \label{thm:bodywht}
Let $\mathbb{F}_q$ be a finite field of size $q = O(1)$, let $n$ be a positive integer, and let $N = 2^n$. There is an algorithm for computing the length-$N$ Walsh-Hadamard transform over $\mathbb{F}_q$ that uses $O\left(\frac{N \log N}{\log \log N}\right)$ bit operations.
\end{theorem}
\begin{proof}
Let $k = \left\lfloor \log \left( \frac{n}{2 \log q} \right) \right\rfloor$, and let $K = 2^k \leq \frac{n}{2 \log q}$. We begin by iterating over all vectors $v \in \mathbb{F}_q^K$, computing $H_K v$, and storing it in a lookup table. We can do this in a straightforward way: there are $q^K$ such vectors, and each can be computed using $O(K^2)$ additions and subtractions over $\mathbb{F}_q$, so the total time to create this lookup table is at most $$O(q^K \cdot K^2) \leq O(q^{n / (2 \log q)} \cdot (\log N)^2) \leq O(\sqrt{N} \cdot (\log N)^2) \leq O(N^{0.6} ).$$ (This simple time bound could be improved, but we won't bother here since it won't substantially contribute to the final running time.)
Our goal now is, given $v \in \mathbb{F}_q^N$, to compute $H_N v$. Assume for now that $k$ divides $n$.
We will apply \Cref{lem:yates} with $M = H_K$ (and hence $d = K$) and $m = n/k$, which will multiply the matrix $(H_K)^{\otimes n/k} = H_N$ times the vector $v$, as desired. Each time that algorithm needs to multiply $H_K$ times a vector of length $K$, we do so by looking up the answer from the lookup table. Hence, $T$ in \Cref{lem:yates} will be the time to do one lookup from this table whose keys and values have length $O(\log(q^K)) = O(K)$, so $T = O(K)$.
The total number of bit operations of our algorithm is thus, as desired, $$O\left(N^{0.6} + T \cdot \frac{n}{k} \cdot K^{n/k - 1}\right) = O\left(N^{0.6} + T \cdot \frac{n}{k \cdot K} \cdot N\right) = O\left(N^{0.6} + \frac{n}{k} \cdot N\right) = O\left(\frac{N \log N }{\log \log N}\right).$$
Finally, consider when $k$ does not divide $n$. Let $n'$ be the largest multiple of $k$ that is smaller than $n$, so $n-k < n' < n$. By the usual recursive approach (e.g., one recursive step of the algorithm presented in \Cref{lem:yates}), it suffices to first perform $2^{n - n'}$ instances of a length-$2^{n'}$ Walsh-Hadamard transform, and then perform $2^{n'}$ instances of a length-$2^{n-n'}$ Walsh-Hadamard transform.
We now count the number of bit operations for these two steps.
Using the same algorithm as above, since $k$ divides $n'$, a length-$2^{n'}$ Walsh-Hadamard transform can be performed using $O(\frac{n' \cdot 2^{n'} }{\log n'})$ bit operations. Hence, the total bit operations for the first step is $O(2^{n - n'} \cdot \frac{n' \cdot 2^{n'} }{\log n'}) \leq O(\frac{n \cdot 2^{n} }{\log n}) = O(N \log N / \log \log N)$.
Using the usual fast Walsh-Hadamard transform, a length-$2^{n-n'}$ Walsh-Hadamard transform can be performed using $O(2^{n-n'} \cdot (n - n') )$ bit operations. Hence, the total bit operations for the second step is $O(2^{n'} \cdot 2^{n-n'} \cdot (n - n'))\leq O(2^n \cdot k) = O(N \log \log N)$. We thus get the desired total running time.
\end{proof}
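The algorithm of the proof can be sketched as follows (an illustrative sketch over $\mathbb{F}_3$ with a fixed toy block size $K = 4$; in the theorem $K$ grows with $N$, and for simplicity the sketch assumes the input length is a power of $K$, whereas the proof handles the general case with one extra recursive step):

```python
from itertools import product

q, K = 3, 4

def wht(v):
    """Naive fast WHT over F_q, used here only to build the lookup table."""
    v = list(v); h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = (x + y) % q, (x - y) % q
        h *= 2
    return v

# Lookup table: the length-K WHT of every vector in F_q^K (q^K entries).
table = {v: tuple(wht(v)) for v in product(range(q), repeat=K)}

def wht_lookup(z):
    """Length-N WHT via Yates' recursion with d = K; one lookup per small transform."""
    if len(z) == K:
        return list(table[tuple(z)])
    block = len(z) // K
    u = [wht_lookup(z[i * block:(i + 1) * block]) for i in range(K)]
    out = [0] * len(z)
    for j in range(block):
        y = table[tuple(u[i][j] for i in range(K))]
        for i in range(K):
            out[i * block + j] = y[i]
    return out

z = [1, 2, 0, 1] * 4            # length 16 = K^2
assert wht_lookup(z) == wht(z)  # agrees with the direct transform
```

Each lookup replaces $\Theta(K \log K)$ arithmetic operations of the recursive transform, which is the source of the $\log\log N$ savings.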
\section{Matrix Multiplication}
Fast algebraic algorithms for matrix multiplication over a field $\mathbb{F}$ critically rely on algebraic identities which take the following form, for positive integers $t,r$, formal variables $X_{i,j}, Y_{j,k}, Z_{i,k}$ for $i,j,k \in [t]$, and field coefficients $\alpha_{i,j,\ell}, \beta_{j,k,\ell}, \gamma_{i,k,\ell} \in \mathbb{F}$ for $i,j,k \in [t]$ and $\ell \in [r]$:
\begin{align}\label{eq:mm}\sum_{i=1}^t \sum_{j=1}^t \sum_{k=1}^t X_{i,j} Y_{j,k} Z_{i,k} = \sum_{\ell=1}^r \left( \sum_{i=1}^t \sum_{j=1}^t \alpha_{i,j,\ell} X_{i,j} \right)\left( \sum_{j=1}^t \sum_{k=1}^t \beta_{j,k,\ell} Y_{j,k} \right)\left( \sum_{i=1}^t \sum_{k=1}^t \gamma_{i,k,\ell} Z_{i,k} \right).\end{align}
As we will see next, an identity (\ref{eq:mm}) can be used to design a matrix multiplication algorithm which runs using only $O(n^{\log_t(r)})$ field operations.
For instance, Strassen's algorithm~\cite{strassen} gives an identity (\ref{eq:mm}) with $t=2$ and $r=7$, yielding exponent $\log_2(7) < 2.81$. The matrix multiplication exponent $\omega$ is defined as the infimum of all $\tau$ such that there is an identity (\ref{eq:mm}) with $r \leq t^{\tau}$; equivalently, for every $\varepsilon>0$, there is a sufficiently large $t$ for which one gets an identity (\ref{eq:mm}) with $r \leq t^{\omega + \varepsilon}$.
Indeed, it is known~\cite{strassen1973vermeidung} that any algebraic algorithm for matrix multiplication can be converted into an identity (\ref{eq:mm}) yielding the same running time in this way, up to a constant factor, so $\omega$ captures the best exponent from any possible algebraic algorithm. The fastest known matrix multiplication algorithm~\cite{coppersmith,alman2021refined} shows that $\omega < 2.37286$.
The standard recursive algorithm for multiplying matrices using identity (\ref{eq:mm}) works as described in \Cref{alg:mm} below. It recurses until it gets to a base case of multiplying $n \times n$ matrices when $n \leq S$ for some parameter $S$. In the usual algorithm, one picks $S$ to be a constant, so that such matrices can be multiplied in a constant number of operations; in our improvement, we will pick a larger $S$ and multiply such matrices using a lookup table.
\begin{algorithm}[H]\caption{Recursive matrix multiplication algorithm, using identity (\ref{eq:mm}), with base case size $S$}\label{alg:mm}
\begin{algorithmic}[1]
\Procedure{\textsc{MM}}{$X, Y \in \mathbb{F}^{n \times n}$}
\If{$n \leq S$}
\State Use base case procedure to multiply $X \times Y$ and output the result.
\Else
\State Partition $X$ into a $t \times t$ block matrix, with blocks $X_{i,j} \in \mathbb{F}^{(n/t) \times (n/t)}$ for $i, j \in [t]$.
\State Similarly partition $Y$ into a $t \times t$ block matrix, with blocks $Y_{j,k} \in \mathbb{F}^{(n/t) \times (n/t)}$ for $j, k \in [t]$.
\State For $i, k \in [t]$, set the matrices $Z_{i,k} \in \mathbb{F}^{(n/t) \times (n/t)}$ to initially be all 0.
\For{$\ell \in [r]$}
\State Compute $A_\ell := \sum_{i, j \in [t]} \alpha_{i,j,\ell} X_{i,j}$.
\State Compute $B_\ell := \sum_{j, k \in [t]} \beta_{j,k,\ell} Y_{j,k}$.
\State Compute $C_\ell = \textsc{MM}(A_\ell, B_\ell)$ \Comment{Recursively multiply $(n/t) \times (n/t)$ matrices $A_\ell \times B_\ell$.}
\For{$i, k \in [t]$}
\State Add $\gamma_{i,k,\ell} C_\ell$ to $Z_{i,k}$.
\EndFor
\EndFor
\State Output $Z \in \mathbb{F}^{n \times n}$ given by the blocks $Z_{i,k}$.
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{lemma}
\Cref{alg:mm} correctly outputs the product $X \times Y$.
\end{lemma}
\begin{proof}
For a fixed $i, k \in [t]$, we can see that lines 8-15 of the algorithm will set $Z_{i,k}$ to be the following matrix: $$\sum_{\ell=1}^r \gamma_{i,k,\ell} \cdot C_\ell = \sum_{\ell=1}^r \gamma_{i,k,\ell} \cdot \left( \sum_{i'=1}^t \sum_{j'=1}^t \alpha_{i',j',\ell} X_{i',j'} \right)\left( \sum_{j'=1}^t \sum_{k'=1}^t \beta_{j',k',\ell} Y_{j',k'} \right),$$ where the primed indices are dummy summation variables over the blocks.
Notice in particular that this is the coefficient of the formal variable $Z_{i,k}$ in the right-hand side of the identity~(\ref{eq:mm}). Hence, it is also the coefficient of that variable in the left-hand side, namely, $$\sum_{j=1}^t X_{i,j} Y_{j,k}.$$ This is the desired output $(i,k)$ block when multiplying matrices $X,Y$.
\end{proof}
Normally one would analyze the recursion for the number of operations performed by this algorithm when $S$ is a constant, and conclude a total operation count of $O(n^{\log_t(r)})$. Here we will improve on this over finite fields using a lookup table:
\begin{theorem} \label{thm:bodymm}
Fix an identity~(\ref{eq:mm}) and let $\tau = \log_t(r)$.
For any finite field $\mathbb{F}_q$ of size $q = O(1)$, and any positive integer $n$, there is an algorithm for multiplying $n \times n$ matrices over $\mathbb{F}_q$ which uses $O(n^\tau / (\log n)^{\tau/2 - 1})$ bit operations.
\end{theorem}
\begin{proof}
Let $s = \left\lfloor \sqrt{ \frac14 \log n / \log q} \right\rfloor$. We begin by iterating over all positive integers $s' \leq s$ and all pairs of $s' \times s'$ matrices $A, B \in \mathbb{F}_q^{s' \times s'}$, computing their product $A \times B$, and storing it in another lookup table. Similar to before, the running time for this step won't substantially contribute to the final running time, so we give a straightforward upper bound on the time it takes. The number of pairs of matrices we need to multiply is at most $s \cdot (q^{s^2})^2$, and the time to multiply each pair is at most $O(s^3 )$ by the straightforward algorithm. The total time to create the lookup table is hence at most $$ O(s^4 \cdot (q^{s^2})^2) \leq O(q^{\frac12 \log n / \log q} \cdot (\log n)^2) \leq O(n^{0.5} \cdot (\log n)^2).$$
Note that this table's keys and values are strings of length $O(\log (q^{s^2})^2) \leq O(\log n)$, so lookup table operations can be performed using $O(\log n)$ bit operations.
We now use \Cref{alg:mm}, with base case size $S = s$. The base case procedure is that, whenever we need to multiply two $s' \times s'$ matrices for $s' \leq S$, we find the result in the lookup table.
Let $E(m)$ denote the running time of this algorithm to multiply two $m \times m$ matrices. We get $E(m) = O(\log n)$ if $m \leq s$, and if $m > s$, the recurrence \begin{align}\label{eq:recurr}E(m) \leq r \cdot E(m/t) + O(m^2),\end{align}
recalling that $r,t$ are constants given in the identity~(\ref{eq:mm}). The $O(m^2)$ term in the right-hand side of Equation~(\ref{eq:recurr}) counts the bit operations to do a constant number of additions and scalar multiplications of $m/t \times m/t$ matrices. Solving this recurrence yields $$E(n) = O\left( \left( \frac{n}{s} \right)^{\log_t(r)} \cdot \log n \right) = O\left( \frac{n^\tau }{ (\log n)^{\tau/2 - 1}} \right),$$ as desired.
\end{proof}
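The approach of the theorem can be sketched as follows, using Strassen's identity ($t = 2$, $r = 7$) over $\mathbb{F}_2$ (an illustrative sketch: the base-case size is fixed at $2$ here, whereas the proof grows it as roughly $\sqrt{\log n}$; over $\mathbb{F}_2$ subtraction coincides with addition, which simplifies the seven products):

```python
from itertools import product

S = 2  # base-case dimension (fixed here only for illustration)

def naive_mm(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][j] * B[j][k] for j in range(n)) % 2
                       for k in range(n)) for i in range(n))

# Lookup table of all (2^{S*S})^2 = 256 products of S x S matrices over F_2.
mats = [tuple(tuple(bits[i * S:(i + 1) * S]) for i in range(S))
        for bits in product((0, 1), repeat=S * S)]
table = {(A, B): naive_mm(A, B) for A in mats for B in mats}

def add(X, Y):  # over F_2, addition and subtraction coincide
    return [[(x + y) % 2 for x, y in zip(r, s)] for r, s in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n <= S:  # base case: one table lookup instead of recursion
        return [list(r) for r in table[tuple(map(tuple, A)),
                                       tuple(map(tuple, B))]]
    h = n // 2
    blk = lambda M, r, c: [row[c * h:(c + 1) * h] for row in M[r * h:(r + 1) * h]]
    A11, A12, A21, A22 = (blk(A, r, c) for r, c in ((0, 0), (0, 1), (1, 0), (1, 1)))
    B11, B12, B21, B22 = (blk(B, r, c) for r, c in ((0, 0), (0, 1), (1, 0), (1, 1)))
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, add(B12, B22))
    M4 = strassen(A22, add(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(add(A21, A11), add(B11, B12))
    M7 = strassen(add(A12, A22), add(B21, B22))
    C11 = add(add(M1, M4), add(M5, M7))
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(M1, M2), add(M3, M6))
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]

I4 = [[int(i == j) for j in range(4)] for i in range(4)]
B4 = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
assert strassen(I4, B4) == B4  # identity times B recovers B
```

In the proof, the table is instead built over $s' \times s'$ matrices for all $s' \leq s \approx \sqrt{\tfrac14 \log n / \log q}$, so that the table has size $n^{o(1)+1/2}$ while each base case costs only one $O(\log n)$-bit lookup.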
\section{Conclusion}
We showed that for two important open problems -- determining the complexity of the Walsh-Hadamard transform, and determining the leading constant of Strassen's algorithm for matrix multiplication -- asymptotic improvements over the conjectured optimal arithmetic bounds are possible if one is allowed to use bit operations rather than just arithmetic operations.
Our algorithms only made use of arithmetic operations and lookup table operations, so they could extend to other models of computation as well. One natural question is whether they extend to the standard word RAM model with word size $w = O(\log n)$ for input size $n$. Indeed, operations for the lookup tables we use (with keys and values of $O(\log n)$ bits) require only $O(1)$ word operations in this model.
It is not hard to see that our algorithm for matrix multiplication can be implemented to take advantage of this model, improving Theorem~\ref{thm:mainmm} to running time $O(n^\tau / (\log n)^{\tau / 2})$ (improving by a factor of $\log n$).
On the other hand, our algorithm for the Walsh-Hadamard transform seemingly \emph{cannot} be implemented in the word RAM model to get asymptotic savings. The main culprit is step 2 in the proof of Lemma~\ref{lem:yates}: if we want to efficiently use the lookup table to compute $y_j = M x_j$, then we first have to permute the bits of the $u_i$ vectors so that each $x_j$ fits in a single word. In other words, we are given the $u_i$ vectors in order, and would like to permute their entries to get the $x_j$ vectors in order instead.
Let $s = \Theta(d^m / w)$ be the number of words that our input fits into.
In general it is known that performing a fixed permutation of the bits of a string contained in $s$ words, for $s \geq w$, requires $\Theta(s \log s)$ word operations~\cite{brodnik1997trans}. However, our particular permutation can be broken up into $s/w$ different permutations on $w$ words (e.g., the first words in the descriptions of $x_1, \ldots, x_w$ are a permutation of the first words in the descriptions of $u_1, \ldots, u_w$). It can thus be performed in only $O(s \log w)$ word operations.
Since this must be done at all $m$ levels of recursion, it incurs an additional $\Theta(m \cdot d^m \cdot \frac{\log w}{w})$ word operations. With the parameter setting we ultimately use in Theorem~\ref{thm:bodywht}, this is $\Theta(N)$ word operations to perform all these permutations. By comparison, it is not too difficult to implement the usual fast Walsh-Hadamard transform to use a total of $O(N)$ word operations as well. Hence, the time to perform permutations (which doesn't come into play in the arithmetic or bit operation models) swamps any other computational savings in this model, and another approach is needed for a speedup.
\section*{Acknowledgements}
I would like to thank Dylan McKay and Ryan Williams for invaluable discussions throughout this project, and anonymous reviewers for helpful comments.
This research was supported in part by a grant from the Simons Foundation (Grant Number 825870 JA).
\bibliographystyle{alpha}
In astrophysical environments, dust grains condense in the metal-rich
cooling gas such as the stellar winds from evolved stars and the
expanding ejecta of novae and supernovae (SNe).
Once newly formed dust grains are injected into the interstellar
medium (ISM), they cause interstellar extinction and diffuse infrared
emission, and also serve as catalysts for H$_2$ formation and building
materials of such planets as we live on.
Hence, the investigation of formation and evolution of dust is
indispensable in disclosing the nature of objects at high redshifts,
the radiative process and energy balance in the ISM, and the formation
history of stars and planetary systems.
In particular, the origin of dust has been hotly debated since the
discoveries of a huge amount of dust grains at redshifts higher than
$z = 5$ (Gall et al.\ 2011 and references therein).
In an early epoch of the universe, core-collapse SNe arising from
short-lived massive stars are likely to be dominant sources of dust
(e.g., Dwek et al.\ 2007).
In fact, recent far-infrared to submillimeter observations of young
supernova remnants,
SN 1987A (Matsuura et al.\ 2011; Laki\'{c}evi\'{c} et al.\ 2012),
Cas A (Sibthorpe et al.\ 2010; Barlow et al.\ 2010), and
Crab (Gomez et al.\ 2012),
have reported the presence of subsolar mass of cool dust formed in
the ejecta, which seems to be high enough to account for the observed
amount of dust at high redshifts.
However, these cool grains have not yet undergone destruction in
the hot gas swept up by the reverse and forward shocks, and thus their
mass does not necessarily represent the amount of dust finally ejected
by SNe.
What fraction of newly formed grains can survive on their journeys to
and in the ISM heavily depends on their sizes at the time of formation
(e.g., Nozawa et al.\ 2006, 2007).
Thus, in order to reveal the roles of SNe as sources of dust in the
universe, it is essential to understand not only the total mass but
also the size distribution of dust produced in the ejecta of SNe.
The formation process of dust in the SN ejecta has been studied
mainly with the classical nucleation theory and its extension
(Kozasa et al.\ 1989, 1991; Todini \& Ferrara 2001;
Nozawa et al.\ 2003, 2008, 2010, 2011; Bianchi \& Schneider 2007).
In the nucleation theory, the condensation of dust is described by the
formation of stable seed nuclei (called critical clusters) and their
growth, where the formation rate of critical clusters is derived by
assuming the nucleation current to be in a steady state.
This theory enables us to predict the size distribution and mass of
condensing grain species, and the results of the dust formation
calculations have nicely explained the mass of dust formed in SN 1987A
(Kozasa et al.\ 1991) and the formation and evolution processes of
dust in Cas A (Nozawa et al.\ 2010).
However, it has been argued that the application of the classical
nucleation theory cannot be justified in the rarefied gases
typical of dust-forming regions of astrophysical interest
(Donn \& Nuth 1985 and references therein);
in much less dense systems, the timescale on which the nucleation
current achieves a steady state must be longer than the timescales of
evolutions of the gas density and temperature, which would render the
application of the steady--state nucleation rate questionable.
On the other hand, Paquette \& Nuth (2011) suggested that the lack of
the steady--state condition is unlikely to change the radius and
number density of newly formed grains significantly.
They showed that the resulting size distribution of dust is little
affected even if the steady--state nucleation rate is reduced by a few
orders of magnitude, although they did not clarify the effect of a
non-steady state on the formation process of dust.
In this paper, we develop a method for treating the dust formation
process without postulating a steady state, which is expected to be
more appropriate for astrophysical applications.
In Section 2, we formulate the non-steady--state dust formation
process, including chemical reactions at the time of formation of
clusters and grains.
After describing a simple model of the time evolutions of the
gas temperature and density in the ejecta of SNe in Section 3,
we present, in Section 4, the results of the calculations for the
formation of C and MgSiO$_3$ grains and discuss the effect of
the non-steady state and its dependence on the physical conditions
in the ejecta.
In Section 5, we demonstrate that the average radius and condensation
efficiency can be uniquely determined by the ratio of the
supersaturation timescale to the collision timescale at the time when
the condensation efficiency rises to $10^{-10}$.
Our conclusions are presented in Section 6.
We also present the detailed derivation of the steady--state
nucleation rate for the formation of compound grains such as silicates
in Appendix A.
\section{FORMULATION OF NON--STEADY--STATE FORMATION PROCESS OF
CLUSTERS AND DUST GRAINS}
In this section, we formulate the non-steady--state formation of
clusters and dust grains accompanied by chemical reactions, by means
of a kinetic approach.
Most of grain species of astrophysical interest, like silicate, have
no monomer molecule with the same chemical composition as the
condensate.
This implies that the formation of such compound grains proceeds via
the chemical reactions involving the relevant gas species, while the
reaction pathways and their rate constants are not well known.
One of the methods of evading the difficulty in treating the
formation of compound grains without the detailed knowledge of the
chemical pathways and reaction constants is to employ the concept
of a key species (key molecule) that is defined as a gaseous
reactant with the least collisional frequency among the gaseous
reactants, as proposed by Kozasa \& Hasegawa (1987).
In this method, two-body reactions between a cluster and the key
molecule are considered to control the kinetics of the chemical
reaction; the concept of key molecule has been applied also for the
growth process of compound grains in circumstellar envelopes as well
as molecular clouds
(e.g., Ferrarotti \& Gail 2001; Zhukovska et al.\ 2008).
In what follows, we refer to the cluster composed of $n$--key
molecules as $n$--mer, and assume that clusters are spherical.
We also assume the temperature of clusters to be the same as the gas
temperature.
First, for simplicity, we shall consider a cooling gas in a closed
box with the initial concentration of the key molecule $c_{10}$
at a time $t = t_0$.
As the gas cools down, the condensation of dust grains proceeds
through formation and growth of $n$--mer clusters via the attachment
of the key molecules.
In principle, the time evolution of concentration $c(n,t) = c_n$
of $n$--mers can be described by a set of differential equations
\begin{eqnarray}
\frac{dc_n}{dt} = J_n(t) - J_{n+1}(t)
~~~ {\rm for} ~~ 2 \le n \le n_*,
\end{eqnarray}
where $J_n(t)$ is the net current density from $(n-1)$--mer to
$n$--mer.
Here we consider that a cluster containing more key molecules
than $n = n_*$ can be treated as a macroscopic grain (hereafter
simply called ``grain'').
Given the concentration $c_1$ and the mass $m_1$ of the key molecule,
the growth rate of grains is given by
\begin{eqnarray}
\frac{da}{dt} = s \Omega_0 \left( \frac{k T}{2 \pi m_1}
\right)^{\frac{1}{2}} c_1 \left( 1 - \frac{1}{S} \right),
\end{eqnarray}
where $a$ is the grain radius, $s$ is the sticking probability of the
key molecule onto grains, $\Omega_0$ is the volume of the condensate
per key molecule, $k$ is the Boltzmann constant, $T$ is the
temperature of the gas, and $S$ is the supersaturation ratio.
The successive formation of clusters and the growth of grains cause
the depletion of the key molecules.
The time variation of the concentration $c_1$ of the key molecule is
determined from the equation of the mass conservation;
\begin{eqnarray}
c_{10} - c_1 = \sum^{n_*}_{n=2} n c_n +
\int^t_{t_0} J_{n_*+1}(t') \frac{a^3(t, t')}{a^3_0} dt',
\end{eqnarray}
where $a_0 = (3 \Omega_0/4 \pi)^{1/3}$ is the hypothetical radius of
the condensate per key molecule, and $a(t, t')$ is the radius, measured
at $t$, of a grain that grew beyond $n = n_*$ at $t'$.
In the following subsections, we describe how the current density
$J_n$ and the supersaturation ratio $S$ can be presented according to
the chemical reaction at the time of formation.
\subsection{Case for a Single--element Grain}
In this subsection, as a reference, we consider the formation of
clusters whose chemical composition is the same as that of the key
molecule (hereafter we refer to such a grain as a
``single-element grain'') in order to clarify how the formation
process of clusters is formulated by means of the kinetic approach.
In this case, the formation of clusters proceeds through attachment
and detachment of a key molecule as follows;
\begin{eqnarray}
\mathcal{X} + \mathcal{X} \ &\rightleftharpoons& \ \mathcal{X}_2 \\
\mathcal{X}_{n-1} + \mathcal{X} \ &\rightleftharpoons&
\ \mathcal{X}_{n} ~~~ {\rm for} ~~ 3 \le n \le n_*,
\end{eqnarray}
where $\mathcal{X}$ and $\mathcal{X}_{n}$ represent the key
molecule and the $n$--mer cluster, respectively.
Then, the current density $J_n(t)$ is given by
\begin{eqnarray}
J_n(t) = \alpha_{n-1} c_{n-1} c_1 - \beta_n c_n
~~~ {\rm for} ~~ 2 \le n \le n_*,
\end{eqnarray}
where $\alpha_n$ is the attachment rate coefficient of a monomer to
an $n$--mer, and $\beta_n$ is the detachment rate coefficient of a
monomer from an $n$--mer.
In general, $\alpha_n$ has not been measured for the materials of
astrophysical interest.
Thus, considering that collisions of monomers control the kinetics of
attachment, we evaluate $\alpha_n$ as follows,
\begin{eqnarray}
\alpha_n = \frac{s_n}{1 + \delta_{1n}}\ 4 \pi a_0^2 \ n^{\frac{2}{3}}
\left( \frac{k T}{2 \pi m_{n,1}} \right)^{\frac{1}{2}},
\end{eqnarray}
where $s_n$ is the sticking probability of a monomer onto an $n$--mer,
$\delta_{1n}$ is the Kronecker's delta, and
$m_{n,1} = n m_1/(n+1)$ is the reduced mass of a monomer and an
$n$-mer.
The detachment rate coefficient $\beta_n$ ($n \ge 2$) can be
related to $\alpha_{n-1}$ through the principle of detailed balance;
\begin{eqnarray}
\beta_n = \alpha_{n-1} \frac{\mathring{c}_{n-1}}{\mathring{c}_n}
\mathring{c}_1,
\end{eqnarray}
where $\mathring{c}_n$ is the concentration of the $n$--mer in the gas
in thermodynamic equilibrium at a temperature $T$.
Then, the current density $J_n(t)$ is reduced to
\begin{eqnarray}
J_n(t) = \alpha_{n-1} c_1 \left( c_{n-1} - c_n
\frac{\mathring{c}_{n-1}}{\mathring{c}_n}
\frac{\mathring{c}_1}{c_1} \right).
\end{eqnarray}
From the law of mass action stemming from the condition that
the sum of chemical potentials of the reactants is equal to that of
the products in chemical equilibrium (see Landau \& Lifshitz 1976)
\begin{eqnarray}
\frac{\mathring{p}_{n-1}}{\mathring{p}_n}
\frac{\mathring{p}_1}{p_{\rm s}} =
\exp \left[ \frac{1}{k T} \left( \mathring{g}_n - \mathring{g}_{n-1}
- \mathring{g}_1 \right) \right],
\end{eqnarray}
where $\mathring{p}_n = \mathring{c}_n k T$ and $\mathring{g}_n$ are,
respectively, the partial pressure and the chemical potential at a
standard pressure $p_{\rm s}$ of the $n$--mer, the factor
$(\mathring{c}_{n-1}/\mathring{c}_n)(\mathring{c}_1/c_1)$ in the
second term in the parentheses on the right-hand side of Equation (9)
is written as
\begin{eqnarray}
\frac{\mathring{c}_{n-1}}{\mathring{c}_n}
\frac{\mathring{c}_1}{c_1} =
\exp\left[\frac{1}{k T} \left( \mathring{g}_n - \mathring{g}_{n-1} -
\mathring{g}_1 \right) - \ln \left( \frac{p_1}{p_{\rm s}} \right) \right].
\end{eqnarray}
By introducing the supersaturation ratio $S$ defined as
\begin{eqnarray}
\ln S = \ln \left(\frac{p_1}{p_{1 {\rm v}}} \right)
= - \frac{1}{k T} \left( \mathring{g}_{\rm c} - \mathring{g}_1 \right)
+ \ln \left(\frac{p_1}{p_{\rm s}} \right),
\end{eqnarray}
where $\mathring{g}_{\rm c}$ and $p_{1 {\rm v}}$ are, respectively, the
chemical potential at a standard pressure $p_{\rm s}$ and the vapor
pressure of the bulk condensate, the exponent in Equation (11) is
represented as
\begin{eqnarray}
\gamma_n = \frac{1}{k T} \left( \mathring{g}_n - \mathring{g}_{n-1} -
\mathring{g}_{\rm c} \right) - \ln S.
\end{eqnarray}
Then, the current density $J_n(t)$ is expressed as
\begin{eqnarray}
J_n(t) = \alpha_{n-1} c_1 \left[ c_{n-1} - c_n
\exp(\gamma_n) \right]
\end{eqnarray}
for $2 \le n \le n_*$.
Note that, as is seen from Equations (1) and (14), the current density
$J_{n_*}$ cannot be evaluated without a relation between $c_{n_*}$
and $J_{n_*+1}$.
Thus, in order to close Equations (1)--(3), we introduce a
closure relation in which the current density from the $n_*$--mer to
the $(n_*+1)$--mer, $J_{n_*+1}$, is approximated by
\begin{eqnarray}
J_{n_*+1}(t) \simeq \alpha_{n_*} c_1 c_{n_*}
\left[ 1 - \exp(\gamma_{n_*+1}) \right],
\end{eqnarray}
supposing that $c_{n_*} \simeq c_{n_*+1}$ for $n_* \gg 1$.
\subsection{Case for a Multi--element Grain}
In order to keep the formulation as general as possible, we
consider that an $n$--mer cluster $\mathcal{Z}_n$ containing
$n$ key molecules $\mathcal{X}$ is generated through the
following chemical reactions:
\begin{eqnarray}
\mathcal{Z}_{n-1} + \left( \mathcal{X} + \nu_1 \mathcal{A}_1 + \dots +
\nu_i \mathcal{A}_i \right)
\rightleftharpoons \ \mathcal{Z}_n + \left(
\eta_1 \mathcal{B}_1 + \dots + \eta_j \mathcal{B}_j \right)
~~~ \textrm{for} ~~ 3 \le n \le n_*,
\end{eqnarray}
where $\mathcal{A}_k$ ($k=1$--$i$) and $\mathcal{B}_k$ ($k=1$--$j$)
denote the gaseous reactants and products, respectively, and
the stoichiometric coefficients of the gaseous reactants ($\nu_k$)
and products ($\eta_k$) are normalized to the key molecule.
In the following, we designate the physical quantities of the
gaseous reactants $\mathcal{A}_k$ and products $\mathcal{B}_k$ by
attaching the superscript $A$ and $B$, respectively;
for example, the concentrations (partial pressures) of the gas
species $\mathcal{A}_k$ are denoted as $c_k^A$ ($p_k^A$).
Below we first formulate the formation of dimer and then describe the
formation of $n$--mer.
In the formulation, we assume that clusters with $n \ge 2$ have the
same stoichiometric composition as the condensate.
\subsubsection{Formation of dimer ($n = 2$)}
We consider that the dimer formation proceeds through the reaction
\begin{eqnarray}
2(\mathcal{X} + \nu_1 \mathcal{A}_1 + \dots +
\nu_ i \mathcal{A}_i) \rightleftharpoons \
\mathcal{Z}_{2} + 2( \eta_1 \mathcal{B}_1 + \dots +
\eta_j \mathcal{B}_j).
\end{eqnarray}
Assuming that collisions of the key molecule control
the kinetics of the chemical reaction, the current density $J_2(t)$
is written as
\begin{eqnarray}
J_2(t) = \alpha_1 c_1^2 - \beta_2 c_{2}
\left[ \frac{\prod_{k=1}^{j} (c_k^B)^{\eta_k}}
{\prod_{k=1}^{i} (c_k^A)^{\nu_k}} \right]^2,
\end{eqnarray}
where the forward reaction coefficient $\alpha_1$ is the same as that
given in Equation (7).
The form of the second term on the right-hand side of Equation (18)
follows from the principle of detailed balance, which requires that
the ratio of the forward reaction coefficient to the backward reaction
coefficient, $K$, is expressed as $K = \mathring{c}_2 \left[ \prod_{k=1}^{j}
(\mathring{c}_k^B)^{\eta_k} / \mathring{c}_1 \prod_{k=1}^{i}
(\mathring{c}_k^A)^{\nu_k} \right]^2$
in chemical equilibrium.
Then, the current density $J_2(t)$ is represented as
\begin{eqnarray}
J_2(t) = \alpha_1 c_1
\left( c_1 - c_2 \frac{c_1}{\mathring{c}_2} \frac{1}{b^2} \right)
\end{eqnarray}
with
\begin{eqnarray}
b
= \frac{c_1}{\mathring{c}_1}
\frac{\prod^i_{k=1} \left( c^A_k / \mathring{c}^A_k \right)^{\nu_k}}
{\prod^j_{k=1} \left( c^B_k / \mathring{c}^B_k \right)^{\eta_k}}
= \frac{p_1}{\mathring{p}_1}
\frac{\prod^i_{k=1} \left( p^A_k / \mathring{p}^A_k \right)^{\nu_k}}
{\prod^j_{k=1} \left( p^B_k / \mathring{p}^B_k \right)^{\eta_k}},
\end{eqnarray}
where $\mathring{c}^A_k$ and $\mathring{c}^B_k$ ($\mathring{p}^A_k$
and $\mathring{p}^B_k$) are the concentrations (partial gas pressures)
of the $k$--th gaseous reactants and products, respectively, in the
gas in thermodynamic equilibrium at a temperature $T$.
As is the case for a single-element grain, by applying the law of mass
action and introducing $\omega$ ($\ne 0$)\footnote{
Note that the formulation is applicable except for the case of
$\omega = 0$ such as the endothermic dissociative reaction
C$_2$H$_2$ $+$ C$_2$H$_2$ $\rightleftharpoons$ (C$_2)_2$ $+$ 2H$_2$
for the formation of the dimer (C$_2)_2$.}
defined as
\begin{eqnarray}
\omega = 1 + \sum_{k=1}^i \nu_k - \sum_{k=1}^j \eta_k,
\end{eqnarray}
the factor $c_1 / (\mathring{c}_2 b^2)$ in the second term in the
parentheses on the right-hand side of Equation (19) can be rewritten
as follows.
Since
\begin{eqnarray}
& &
\frac{c_1}{\mathring{c}_2 b^2}
\left[
\frac{\prod_{k=1}^i \left( c^A_k / c_1 \right)^{\nu_k}}
{\prod_{k=1}^j \left( c^B_k / c_1 \right)^{\eta_k}}
\right]^{\frac{1}{\omega}}
=
\frac{p_1}{\mathring{p}_2}
\left[
\frac{\mathring{p}_1}{p_1}
\frac{\prod^i_{k=1} \left( \mathring{p}^A_k / p^A_k \right)^{\nu_k}}
{\prod^j_{k=1} \left( \mathring{p}^B_k / p^B_k \right)^{\eta_k}}
\right]^2
\left[
\left( \frac{p_{\rm s}}{p_1} \right)^{\omega - 1}
\frac{\prod_{k=1}^i \left( p^A_k / p_{\rm s} \right)^{\nu_k}}
{\prod_{k=1}^j \left( p^B_k / p_{\rm s} \right)^{\eta_k}}
\right]^{\frac{1}{\omega}}
\nonumber \\
&=&
\frac{p_{\rm s}}{\mathring{p}_2}
\left( \frac{\mathring{p}_1}{p_{\rm s}} \right)^2
\left[
\frac{\prod^i_{k=1} \left( \mathring{p}^A_k / p_{\rm s} \right)^{\nu_k}}
{\prod^j_{k=1} \left( \mathring{p}^B_k / p_{\rm s} \right)^{\eta_k}}
\right]^2
\left[
\left( \frac{p_1}{p_{\rm s}} \right)
\frac{\prod^i_{k=1} \left( p^A_k / p_{\rm s} \right)^{\nu_k}}
{\prod^j_{k=1} \left( p^B_k / p_{\rm s} \right)^{\eta_k}}
\right]^{-2}
\left[
\left( \frac{p_1}{p_{\rm s}} \right)
\frac{\prod_{k=1}^i \left( p^A_k / p_{\rm s} \right)^{\nu_k}}
{\prod_{k=1}^j \left( p^B_k / p_{\rm s} \right)^{\eta_k}}
\right]^{\frac{1}{\omega}}
\nonumber \\
&=&
\frac{p_{\rm s}}{\mathring{p}_2}
\left( \frac{\mathring{p}_1}{p_{\rm s}} \right)^2
\left[
\frac{\prod_{k=1}^i \left( \mathring{p}_k^A / p_{\rm s} \right)^{\nu_k}}
{\prod_{k=1}^j \left( \mathring{p}_k^B / p_{\rm s} \right)^{\eta_k}}
\right]^2
\left[
\left( \frac{p_1}{p_{\rm s}} \right)
\frac{\prod_{k=1}^i \left( p^A_k / p_{\rm s} \right)^{\nu_k}}
{\prod_{k=1}^j \left( p^B_k / p_{\rm s} \right)^{\eta_k}}
\right]^{\frac{1}{\omega} - 2},
\end{eqnarray}
we have
\begin{eqnarray}
\frac{p_{\rm s}}{\mathring{p}_2}
\left( \frac{\mathring{p}_1}{p_{\rm s}}
\right)^2 \left[
\frac{\prod_{k=1}^i \left( \mathring{p}_k^A / p_{\rm s} \right)^{\nu_k}}
{\prod_{k=1}^j \left( \mathring{p}_k^B / p_{\rm s} \right)^{\eta_k}}
\right]^2
=
\frac{c_1 \Pi}{\mathring{c}_2 b^2}
\left( \frac{p_1}{p_{\rm s}} \Xi \right)^{2 - \frac{1}{\omega}},
\end{eqnarray}
where
\begin{eqnarray}
\Pi &=& \left[
\frac{\prod_{k=1}^i \left( c^A_k / c_1 \right)^{\nu_k}}
{\prod_{k=1}^j \left( c^B_k / c_1 \right)^{\eta_k}}
\right]^{\frac{1}{\omega}}
\\
\Xi &=& \frac{\prod_{k=1}^{i} \left( p_k^A/p_{\rm s} \right)^{\nu_k}}
{\prod_{k=1}^{j} \left( p_k^B/p_{\rm s} \right)^{\eta_k}}.
\end{eqnarray}
Then, by applying the law of mass action
\begin{eqnarray}
\frac{p_{\rm s}}{\mathring{p}_2}
\left( \frac{\mathring{p}_1}{p_{\rm s}} \right)^2 \left[
\frac{\prod_{k=1}^i \left( \mathring{p}_k^A / p_{\rm s} \right)^{\nu_k}}
{\prod_{k=1}^j \left( \mathring{p}_k^B / p_{\rm s} \right)^{\eta_k}}
\right]^2
= \exp \left[ \frac{1}{k T}
\left( \mathring{g}_2 - 2 {\it \Delta} \mathring{g}_{\rm gas}
\right) \right],
\end{eqnarray}
with
\begin{eqnarray}
{\it \Delta} \mathring{g}_{\rm gas} =
\mathring{g}_1 + \sum_{k=1}^{i} \nu_k \mathring{g}_k^A
- \sum_{k=1}^{j} \eta_k \mathring{g}_k^B,
\end{eqnarray}
where $\mathring{g}_k^A$ and $\mathring{g}_k^B$ are the chemical
potentials of the $k$--th gaseous reactants and products at a standard
pressure $p_{\rm s}$, respectively, the factor in the second term in
the parentheses on the right-hand side of Equation (19) is reduced to
\begin{eqnarray}
\frac{c_1}{\mathring{c}_2 b^2} =
\frac{1}{\Pi} \exp \left\{
\frac{1}{k T} \left( \mathring{g}_2 - 2 {\it \Delta}
\mathring{g}_{\rm gas} \right) \right.
- \left.
\left( 2 - \frac{1}{\omega} \right) \left[ \ln \left(
\frac{p_1}{p_{\rm s}} \right) + \ln \Xi \right] \right\}.
\end{eqnarray}
The exponent in Equation (28) is written as
\begin{eqnarray}
\gamma_2 = \frac{1}{kT} \left[ \mathring{g}_2 -
\left( 2 - \frac{1}{\omega} \right) \mathring{g}_{\rm c}
- \frac{1}{\omega} {\it \Delta} \mathring{g}_{\rm gas} \right]
- \left( 2- \frac{1}{\omega} \right) \ln S,
\end{eqnarray}
where the supersaturation ratio $S$ is defined as
\begin{eqnarray}
\ln S = - \frac{1}{k T}
\left( \mathring{g}_{\rm c} - {\it \Delta} \mathring{g}_{\rm gas}
\right) + \ln \left( \frac{p_1}{p_{\rm s}} \right) + \ln \Xi,
\end{eqnarray}
and Equation (19) is finally reduced to
\begin{eqnarray}
J_2(t) = \alpha_1 c_1 \left[ c_1 - c_2 \frac{1}{\Pi}
\exp(\gamma_2) \right].
\end{eqnarray}
\subsubsection{Formation of $n$--mer ($n \ge 3$)}
For the chemical reaction (16) for the formation of $n$--mers
($3 \le n \le n_*$), the current density $J_n(t)$ is given by
\begin{eqnarray}
J_n(t) = \alpha_{n-1} c_{n-1} c_1 - \beta_n c_n
\frac{\prod_{k=1}^{j} (c_k^B)^{\eta_k}}
{\prod_{k=1}^{i} (c_k^A)^{\nu_k}}
\end{eqnarray}
with $\alpha_n$ defined by Equation (7), and the principle of detailed
balance leads to the equation
\begin{eqnarray}
J_n(t) = \alpha_{n-1} c_1 \left( c_{n-1} - c_n
\frac{\mathring{c}_{n-1}}{\mathring{c}_n} \frac{1}{b}
\right).
\end{eqnarray}
From the law of mass action
\begin{eqnarray}
\frac{\mathring{p}_{n-1}}{\mathring{p}_n}
\frac{\mathring{p}_1}{p_{\rm s}}
\frac{\prod_{k=1}^i \left( \mathring{p}_k^A / p_{\rm s} \right)^{\nu_k}}
{\prod_{k=1}^j \left( \mathring{p}_k^B / p_{\rm s} \right)^{\eta_k}}
= \exp \left[
\frac{1}{k T} \left( \mathring{g}_n - \mathring{g}_{n-1}
- {\it \Delta} \mathring{g}_{\rm gas} \right) \right],
\end{eqnarray}
the factor $(\mathring{c}_{n-1}/\mathring{c}_n)(1/b)$ in Equation (33)
can be reduced to
\begin{eqnarray}
\frac{\mathring{c}_{n-1}}{\mathring{c}_n} \frac{1}{b}
= \exp \left[
\frac{1}{k T} \left( \mathring{g}_n - \mathring{g}_{n-1} -
{\it \Delta} \mathring{g}_{\rm gas} \right)
- \ln \left( \frac{p_1}{p_{\rm s}} \right)
- \ln \Xi \right].
\end{eqnarray}
With $\ln S$ defined by Equation (30), the exponent in Equation (35)
is rewritten as
\begin{eqnarray}
\gamma_n = \frac{1}{k T} \left(
\mathring{g}_n - \mathring{g}_{n-1} - \mathring{g}_{\rm c}
\right) -\ln S,
\end{eqnarray}
and consequently, the current density $J_n(t)$ for $3 \le n \le n_*$ is
given by
\begin{eqnarray}
J_n(t) = \alpha_{n-1} c_1 \left[ c_{n-1} - c_n
\exp(\gamma_n) \right].
\end{eqnarray}
It should be mentioned here that the chemical reaction at the time of
formation is included in the current density only through the factor
$\Pi$ given by Equation (24), $\omega$ in Equation (21), and the
supersaturation ratio $S$ defined by Equation (30).
The formation process of multi-element grains formulated by introducing
the key molecule can be treated as the natural extension of the
formation process of single-element grains;
in fact, Equations (31) and (37) reduce to Equation (14) by
substituting $\Pi = \omega = 1$, which is obtained for
$\nu_k = \eta_k = 0$.
Note that in principle the current density $J_n$ can be evaluated once
the chemical potentials of $n$--mers are given.
However, the chemical potential has been available only for tiny
clusters ($n \la 5$) of very few materials of astrophysical interest
(e.g., Goumans \& Bromley 2012).
Therefore, the so-called capillary approximation, which is the
practice for estimating the chemical potential of an $n$--mer in
terms of the chemical potential of a monomer in the bulk
condensate and the surface energy (Abraham 1974; Blander \& Katz 1972),
is generally adopted for evaluating the current density as well as
the steady--state nucleation rate.
For example, $\mathring{g}_n$ is expressed as
\begin{eqnarray}
\mathring{g}_n = 4 \pi a_0^2 \sigma (n-1)^{\frac{2}{3}}
+ (n-1) \mathring{g}_{\rm c} + \mathring{g}_1
\end{eqnarray}
for a single-element grain (e.g., Yasuda \& Kozasa 2012), where
$\sigma$ is the surface tension of bulk condensate.
By substituting $\mathring{g}_n$ into Equation (13), $\gamma_n$ for
a single-element grain is written as
\begin{eqnarray}
\gamma_n = \mu \left[ (n-1)^{\frac{2}{3}} -
(n-2)^{\frac{2}{3}} \right] -\ln S,
\end{eqnarray}
where $\mu = 4 \pi a_0^2 \sigma / k T$.
Comparing the change of the chemical potential $\gamma_2$ for the
formation of a dimer from the reactants, given by Equation (29) for a
multi-element grain, and $\gamma'_n$ (Equation (A5)) in the formulation
of the steady--state nucleation rate with the corresponding expression
given by Equation (13) for a single-element grain, we can see
that the factor $1/\omega$ in Equation (29) represents the contribution
of the key molecule to the change of chemical potential.
Thus, the chemical potential of an $n$--mer with $n \ge 2$ for a
multi-element grain can be defined as
\begin{eqnarray}
\mathring{g}_n =
4 \pi a_0^2 \sigma \left( n - \frac{1}{\omega} \right)^{\frac{2}{3}}
+ \left(n - \frac{1}{\omega} \right) \mathring{g}_{\rm c} +
\frac{1}{\omega} {\it \Delta} \mathring{g}_{\rm gas},
\end{eqnarray}
so as to be consistent with the formula for a single-element grain.
Then, $\gamma_n$ is evaluated by
\begin{eqnarray}
\gamma_2 = \mu \left( 2 - \frac{1}{\omega} \right)^{\frac{2}{3}}
- \left( 2 - \frac{1}{\omega} \right) \ln S
~~~ {\rm for} ~~ n = 2,
\end{eqnarray}
and
\begin{eqnarray}
\gamma_n = \mu \left[ \left( n - \frac{1}{\omega} \right)^{\frac{2}{3}}
- \left( n - 1 - \frac{1}{\omega} \right)^{\frac{2}{3}}
\right] - \ln S ~~~ {\rm for} ~~ 3 \le n \le n_*.
\end{eqnarray}
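The capillary-approximation exponents of Equations (41) and (42) translate into a short function; a minimal sketch in Python, in which setting $\omega = 1$ recovers the single-element expression of Equation (39):

```python
def gamma_n(n, mu, lnS, omega=1.0):
    """Exponent gamma_n under the capillary approximation,
    Equations (41)-(42), with mu = 4*pi*a0^2*sigma/(k*T).
    omega = 1 recovers the single-element Equation (39)."""
    if n == 2:
        w = 2.0 - 1.0 / omega
        return mu * w**(2.0 / 3.0) - w * lnS
    return mu * ((n - 1.0 / omega)**(2.0 / 3.0)
                 - (n - 1.0 - 1.0 / omega)**(2.0 / 3.0)) - lnS
```

The sign change of $\gamma_n$ with increasing $n$ locates the critical size $n_{\rm crit}$ discussed in Section 4.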
\subsection{Formulation of Cluster and Dust Formation in a Cooling
Gas Flow}
Equations (1) and (3) describe, respectively, the time evolution of
concentrations of $n$--mer clusters and the conservation of the key
molecule in a fixed volume.
However, the formation of dust takes place generally in a cooling gas
flow such as the stellar winds from evolved stars and the expanding
ejecta of SNe and novae.
Therefore, it is more useful to formulate the formation process of
dust in a frame comoving with the gas.
Let us consider a specific volume $V(t)$ comoving with the gas.
The nominal concentration of the key molecule $\tilde{c}_1(t)$ is
defined as the concentration without the depletion due to the
formation of clusters and dust grains.
Thus, $\tilde{c}_1(t)$ at a time $t$ satisfies the relation
$c_{10} V(t_0) = \tilde{c}_1(t) V(t)$.
The current from the $(n-1)$--mer to the $n$--mer in $V(t)$ is
given as
$J_n(t) V(t)$, and the time variation of the number of $n$--mers
in $V(t)$, $N_n(t) = c_n(t) V(t)$, is expressed as
\begin{eqnarray}
\frac{d N_n}{dt} = V(t) \left( J_n - J_{n+1} \right)
~~~{\rm for} ~~ 2 \le n \le n_*.
\end{eqnarray}
Then, Equation (43), being divided by $\tilde{c}_1 V$, is reduced to
\begin{eqnarray}
\frac{d Y_n}{dt} = I_n - I_{n+1} ~~~{\rm for} ~~ 2 \le n \le n_*,
\end{eqnarray}
where $Y_n = c_n / \tilde{c}_1$ represents the normalized concentration
of $n$--mers, and the normalized current density from the
$(n-1)$--mer to the $n$--mer $I_n = J_n / \tilde{c}_1$ is given by
\begin{equation}
I_n = \tau_{n-1}^{-1} \times
\begin{cases}
\left[ Y_{n-1} - Y_n \Pi^{-1} \exp \left( \gamma_n \right) \right]
~~~ {\rm for} ~~~ n = 2 \\
\left[ Y_{n-1} - Y_n \exp \left( \gamma_n \right) \right]
~~~~~ {\rm for} ~~~ 3 \le n \le n_* \\
Y_{n-1} \left[ 1 - \exp \left( \gamma_n \right) \right]
~~~~~ {\rm for} ~~~ n = n_* + 1
\end{cases}
\end{equation}
with $\tau_{n-1}^{-1} = \alpha_{n-1} c_{1}$.
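The piecewise definition of Equation (45) translates directly into code; in the sketch below, the dictionaries `Y` and `gamma` are illustrative containers for the normalized abundances $Y_n$ and exponents $\gamma_n$.

```python
import math

def current_density_I(n, Y, gamma, tau_inv, n_star, Pi=1.0):
    """Normalized current density I_n of Equation (45).

    Y[n]: normalized n-mer abundance; gamma[n]: exponent gamma_n;
    tau_inv: collision frequency alpha_{n-1} c_1; Pi: factor of
    Equation (24) (Pi = 1 for a single-element grain)."""
    if n == 2:
        return tau_inv * (Y[1] - Y[2] / Pi * math.exp(gamma[2]))
    if n == n_star + 1:               # closure relation, Equation (15)
        return tau_inv * Y[n - 1] * (1.0 - math.exp(gamma[n]))
    return tau_inv * (Y[n - 1] - Y[n] * math.exp(gamma[n]))
```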
The equation of the mass conservation for the key molecule given by
\begin{eqnarray}
\tilde{c}_1 V - c_1 V = \sum^{n_*-1}_{n=2} n c_n V +
\int^t_{t_0} V(t') J_{n_*}(t') \frac{a^3(t, t')}{a_0^3} dt',
\end{eqnarray}
being divided by $\tilde{c}_1 V$, is also rewritten as
\begin{eqnarray}
1 - Y_1 = \sum^{n_*-1}_{n=2} n Y_n + K_3.
\end{eqnarray}
By introducing
\begin{eqnarray}
K_i(t) = \int^t_{t_0} I_*(t') \frac{a^i(t, t')}{a_0^i} dt'
~~~{\rm for} ~~ i = 0{\rm -}3
\end{eqnarray}
and $I_* = I_{n_*}$,
$K_3$ is calculated by solving the following simultaneous
differential equations
\begin{eqnarray}
\frac{dK_i}{dt} &=& I_*(t) n_*^{\frac{i}{3}}
+ \frac{i}{a_0} \left( \frac{da}{dt} \right) K_{i-1}
~~~{\rm for} ~~ i = 1{\rm -}3 \nonumber \\
&=& I_*(t) \hspace{3.8cm} {\rm for} ~~ i = 0,
\end{eqnarray}
and $Y_1$ is calculated from Equation (47).
The concentrations of gaseous reactants and products except for the
key molecule taking into account the depletion due to the formation
of clusters and the growth of grains are evaluated, respectively, by
\begin{eqnarray}
Y_k^A = \frac{c_k^A}{\tilde{c}_1}
= \frac{\tilde{c}_k^A}{\tilde{c}_1} - \nu_k^A (1 - Y_1),
\end{eqnarray}
and
\begin{eqnarray}
Y_k^B = \frac{c_k^B}{\tilde{c}_1}
= \frac{\tilde{c}_k^B}{\tilde{c}_1} + \eta_k^B (1 - Y_1),
\end{eqnarray}
where $\tilde{c}_k^A$ and $\tilde{c}_k^B$ are the nominal concentrations
of $k$--th gaseous reactant and product, respectively.
These equations can be solved, given the initial abundances of gaseous
reactants and products as well as clusters ($Y_n$) at $t = t_0$ together
with the time evolutions of gas temperature and density.
Note that $Y_1 = c_1 / \tilde{c}_1$ represents the number fraction of
the key molecules left in the gas phase (so-called depletion efficiency),
$K_0$ the number density of dust grains
($K_0 = n_{\rm dust} / \tilde{c}_1$), and
$K_3$ the number fraction of the key molecules locked in dust grains.
Hence, the condensation efficiency $f_{\rm con}(t)$ and
volume-equivalent average radius $a_{\rm ave}(t)$ are calculated by
\begin{eqnarray}
f_{\rm con}(t) = K_3(t) ~~~{\rm and} ~~~
a_{\rm ave} =
a_0 \left[ \frac{K_3(t)}{K_0(t)} \right]^{\frac{1}{3}},
\end{eqnarray}
respectively.
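Equations (49) and (52) can be sketched as a simple explicit-Euler update of the moments $K_0$--$K_3$; a production calculation would of course use a stiff ODE solver coupled to the cluster equations, and the numerical inputs in this sketch are illustrative.

```python
def step_K(K, I_star, n_star, dadt_over_a0, dt):
    """One explicit-Euler step of Equation (49) for the moments
    K_0..K_3; dadt_over_a0 is the normalized growth rate (da/dt)/a_0."""
    K_new = list(K)
    K_new[0] = K[0] + I_star * dt
    for i in (1, 2, 3):
        K_new[i] = K[i] + (I_star * n_star**(i / 3.0)
                           + i * dadt_over_a0 * K[i - 1]) * dt
    return K_new

def derived_quantities(K, a0):
    """Condensation efficiency and volume-equivalent average radius,
    Equation (52)."""
    f_con = K[3]
    a_ave = a0 * (K[3] / K[0])**(1.0 / 3.0) if K[0] > 0.0 else 0.0
    return f_con, a_ave
```

With no growth ($da/dt = 0$), freshly nucleated grains all have $n = n_*$, so $a_{\rm ave} = a_0 n_*^{1/3}$, as the moments reproduce.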
In addition, since the grains nucleated in a time interval between
$t$ and $t + dt$ have the radii between $a$ and $a + da$, the size
distribution function $f(a)$ of newly formed grains is calculated by
\begin{eqnarray}
f(a) da = \tilde{c}_1(t) I_*(t) dt.
\end{eqnarray}
\section{APPLICATION TO DUST FORMATION IN THE EJECTA OF SNe}
We apply the derived formula to the formation of dust grains in the
ejecta of SNe.
The aim is to reveal how the non-steady--state effect and the physical
conditions of the dust-forming regions affect
the condensation efficiency, average grain radius, and
size distribution.
Also, in the following sections, we discuss the applicability of
the steady--state nucleation rate.
\subsection{Grain Species and Elemental Composition of the Gas}
In the ejecta of SNe, the macroscopic mixing of elements is likely to
be caused by the Rayleigh--Taylor instability.
Although Cherchneff \& Dwek (2009) have claimed that hydrogen atoms
mixed with heavy elements play a critical role in the formation of
precursor molecules of dust grains,
the microscopic mixing of hydrogen penetrating into the inner
layer cannot occur within a timescale of a few
years, because the molecular diffusion length is much smaller than the
typical sizes of the gas shell and clumps in the ejecta
(Deneault et al.\ 2003; Clayton 2013).
Thus, we suppose the onion-like composition as an elemental
composition in the inner ejecta of SNe, and consider the formation of
C and MgSiO$_3$ grains individually, which are expected to form in
the carbon-rich layer and the oxygen-rich layer of SNe, respectively.
The chemical reactions at the time of formation of C and MgSiO$_3$
grains and clusters, along with the physical constants necessary for
the calculations, are taken from Table 2 of Nozawa et al.\ (2003).
It has been believed that carbon reacts with oxygen to produce CO
molecules, and that carbon atoms tied up in CO are not available for
the formation of C grains because CO is stable against dissociation.
However, in the ejecta of SNe, some of the CO molecules might be
destroyed by collisions with energetic electrons and by charge transfer
reactions with the ionized inert gas (Liu \& Dalgarno 1994, 1996;
Clayton et al.\ 1999, 2001; Clayton 2013).
Here, we do not consider the formation and destruction processes of
CO molecules, since we treat the initial abundance of carbon atoms
available for dust formation as a parameter.
For MgSiO$_3$ grains, we assume that the key molecule is the SiO
molecule, which has been considered to be a precursor for the
formation of silicate grains in SNe (e.g., Kozasa et al. 1989;
Kotak et al.\ 2009).
The initial abundance of SiO is also treated as a parameter.
The number ratios of Mg and O atoms to SiO molecules are taken to be
$c_{{\rm Mg},0}/c_{{\rm SiO},0} = 2$ and
$c_{{\rm O},0} /c_{{\rm SiO},0} = 20$.
These abundance ratios are typical of the oxygen-rich layer of
solar-metallicity SNe if almost all Si atoms are bound to SiO
molecules (see e.g., Figure 1 of Nozawa et al.\ 2010).
\subsection{Evolution of the Gas Density and Temperature}
The number density and size distribution of newly formed dust depend
on the time evolution of the density and temperature of the gas.
In the ejecta of SNe, the gas expands homologously after $\sim$1 day,
and the nominal concentration of a gas species decreases as
\begin{eqnarray}
\tilde{c}(t) = c_{0} \left( \frac{t}{t_0} \right)^{-3},
\end{eqnarray}
where $c_{0}$ is the concentration at a time $t = t_0$.
On the other hand, the temperature of the gas in the ejecta is
determined by the balance between the energy input due to the decay
of radioactive elements and the energy output due to expansion and
radiative cooling.
In this study, as in some previous works (e.g., Kozasa et al.\
1989), we assume the time evolution of the gas temperature as
\begin{eqnarray}
T(t) = T_0 \left( \frac{t}{t_0}\right)^{-3 (\gamma-1)},
\end{eqnarray}
where $T_0$ is the gas temperature at $t_0$, and $\gamma$ is a
constant parameter.
In the calculations of dust formation, we employ the capillary
approximation expressed by Equations (38)--(42) for evaluating the
chemical potentials $\mathring{g}_n$ and $\gamma_n$.
As the gas cools down, it shifts from unsaturated states ($\ln S < 0$)
to supersaturated ones ($\ln S > 0$) in which the formation of
dust grains occurs.
Thus, we take $t_0$ as a time when $\ln S = 0$, and determine the
equilibrium temperature $T_0$ from the equation
\begin{eqnarray}
\ln S = \frac{A}{T_0} - B + \ln \left( \frac{c_{10} k T_0}{p_{\rm s}}
\right) + \ln \Xi = 0
\end{eqnarray}
for a given initial concentration of the key molecule $c_{10}$
(and given abundance ratios of reactants and products).
Throughout this paper, the chemical potential for the formation of
a bulk condensate from reactants per key molecule is approximated as
$(\mathring{g}_{\rm c} - {\it \Delta} \mathring{g}_{\rm gas}) / k T
= - A/T + B$ with the numerical values $A$ and $B$ from Table 2 of
Nozawa et al.\ (2003).
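Equation (56) can be solved for $T_0$ by simple bisection, as sketched below; the values of $A$, $B$, and $c_{10}$ used in the accompanying check are hypothetical, not those of Table 2 of Nozawa et al.\ (2003).

```python
import math

k_B = 1.380649e-16  # Boltzmann constant [erg K^-1]

def equilibrium_temperature(A, B, c10, p_s=1.01325e6, lnXi=0.0,
                            T_lo=500.0, T_hi=5000.0):
    """Bisection solve of Equation (56), ln S(T_0) = 0, for the
    equilibrium temperature T_0; c10 is the initial key-molecule
    concentration [cm^-3] and p_s the standard pressure [dyn cm^-2]."""
    def lnS(T):
        return A / T - B + math.log(c10 * k_B * T / p_s) + lnXi
    # ln S decreases with T: supersaturated at T_lo, unsaturated at T_hi
    assert lnS(T_lo) > 0.0 > lnS(T_hi)
    for _ in range(200):
        T_mid = 0.5 * (T_lo + T_hi)
        if lnS(T_mid) > 0.0:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)
```

The monotonic increase of $T_0$ with $c_{10}$ seen in Figure 1 follows directly from the $\ln(p_1/p_{\rm s})$ term.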
Figure 1 shows the equilibrium temperature $T_0$ for C and
MgSiO$_3$ grains as a function of $c_{10}$.
For MgSiO$_3$, $c_{\rm Mg}/c_{\rm SiO} = 2$ and
$c_{\rm O}/c_{\rm SiO} = 20$ are adopted as mentioned above, but
$T_0$ is insensitive to the changes in the abundance ratios.
For both the grain species, $T_0$ is higher for higher $c_{10}$,
and $T_0 \simeq 2000$ K for C grains and $T_0 \simeq 1500$ K for
MgSiO$_3$ grains with $c_{10} = 10^8$ cm$^{-3}$.
As mentioned in Section 2, the condensation of dust is achieved
through the formation of clusters and their growth.
Thus, it is convenient to define the timescales characterizing these
processes along with given temporal evolutions of the gas density and
temperature.
The time evolution of the current density, and thus the formation of
clusters, is regulated mainly by the time evolution of
$\ln S$ in $\gamma_n$ (see Equations (41), (42), and (45)).
Then, assuming that the depletion of the key molecule is negligible
in the early stage of dust formation, we introduce the timescale
of supersaturation $\tau_{\rm sat}$, over which the supersaturation
ratio $S$ increases, as follows:
\begin{eqnarray}
\tau_{\rm sat}^{-1} \equiv \frac{d \ln S}{dt}
= \frac{3 ( \gamma - 1)}{t} \left[ \frac{A}{T}
- \frac{\gamma \omega}{\gamma - 1} \right]
\sim \frac{A}{T} \tau_{\rm cool}^{-1},
\end{eqnarray}
where $\tau_{\rm cool} = t/[3 (\gamma-1)]$ is the timescale of gas
cooling for the current model.
The second term on the right-hand side of Equation (57) arises
from the time differentiation of the term $\ln [ (p_1/p_{\rm s}) \Xi]$
in Equation (30) under the time evolutions of the gas density and
temperature given by Equations (54) and (55).
On the other hand, the growth of dust grains proceeds through the
collision (attachment) of the key species onto their surfaces, and
the collision timescale is defined as
\begin{eqnarray}
\tau_{\rm coll}^{-1} \equiv s 4 \pi a_0^2 \tilde{c}_{1}
\left( \frac{k T}{2 \pi m_1} \right)^{\frac{1}{2}}.
\end{eqnarray}
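The two timescales of Equations (57) and (58) can be evaluated as follows (CGS units; the numerical inputs in the check are illustrative).

```python
import math

k_B = 1.380649e-16  # Boltzmann constant [erg K^-1]

def tau_sat(t, T, A, gamma_ad, omega=1.0):
    """Supersaturation timescale of Equation (57); gamma_ad is the
    exponent gamma of the cooling law, Equation (55)."""
    return 1.0 / (3.0 * (gamma_ad - 1.0) / t
                  * (A / T - gamma_ad * omega / (gamma_ad - 1.0)))

def tau_coll(c1_tilde, a0, m1, T, s=1.0):
    """Collision (growth) timescale of Equation (58)."""
    return 1.0 / (s * 4.0 * math.pi * a0**2 * c1_tilde
                  * math.sqrt(k_B * T / (2.0 * math.pi * m1)))
```

For $A/T \gg \gamma \omega/(\gamma - 1)$, $\tau_{\rm sat}$ approaches $(T/A)\,\tau_{\rm cool}$, as noted after Equation (57).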
\subsection{Calculations of Dust Formation}
In what follows, we adopt $n_* =100$ as the minimum number of key
molecules for a cluster to be regarded as a grain.
This number corresponds to the minimum radius of grains $a_* = 5.9$
\AA~for C grains and $a_* = 10.8$ \AA~for MgSiO$_3$ grains.
The effect of changing $n_*$ on the results of calculations is
examined in Appendix B.
We take a sticking probability of $s_n = 1$ for all sizes of clusters
and grains.
Among the three free parameters $c_{10}$, $\gamma$, and $t_0$ in the
models described above, we adopt $\gamma = 1.25$ and $t_0 = 300$
days as the standard values representing the gas cooling rate and the
equilibrium time at which $\ln S = 0$ in the SN ejecta, although
the cases for different $\gamma$ and $t_0$ are examined as well.
The calculations are performed up to a time late enough that
the current density $I_*$ and the growth rate of grains become
negligibly small.
The formation of dust grains from the gas phase takes place when
$\ln S > 0$.\footnote{The steady--state nucleation rate in Equation
(59) can be applied only for $S > 1$; see Appendix A.}
On the other hand, our non-steady--state calculations with the initial
conditions of $\ln S < 0$ and $c_n = 0$ ($n \ge 2$) have confirmed
the following:
the formation of small ($n \la 10$) clusters is possible even if
$\ln S < 0$, although their abundances are very small.
The abundances of small clusters at $\ln S \la 0$ are the same as
the steady--state values because the backward reaction rates are
very high ($\exp(\gamma_n) \gg 1$, see Equation (45)).
Therefore, the steady--state abundances of small clusters at
$\ln S = 0$ are taken as the initial values for the simulations
starting from $t = t_0$.
We also compare the results obtained from a set of formulae as
described in Section 2 (hereafter referred to as the non-steady
model) with those calculated from the revised steady--state
nucleation rate (hereafter the steady model) given by
\begin{eqnarray}
J_{\rm s} =
s_{\rm crit} \Omega_0
\left( \frac{2 \sigma}{\pi m_1} \right)^{\frac{1}{2}}
\ c_1^2 \ \Pi \ \exp \left[ - \frac{4}{27} \frac{\mu^3}{(\ln S)^2}
\right],
\end{eqnarray}
for which the detailed derivation is presented in Appendix A.
In the steady model, which does not involve the formation of clusters,
the calculations of dust formation are performed by replacing $I_*$
with $I_{\rm s} = J_{\rm s}/\tilde{c}_1$ in Equation (47) without
the first term on the right-hand side
and by replacing $n_*$ with $n_{\rm crit}$ given in Equation (A9).
In what follows, we refer to $I_n$ and $I_{\rm s}$ as current densities
and steady--state current density, respectively, for convenience.
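The revised steady--state nucleation rate of Equation (59) can be evaluated as sketched below (CGS units); the inputs in the accompanying check are illustrative placeholders.

```python
import math

def steady_state_rate(c1, lnS, mu, sigma, m1, Omega0, Pi=1.0, s_crit=1.0):
    """Steady-state nucleation rate J_s of Equation (59); Omega0 is
    the volume per key molecule in the condensate and
    mu = 4*pi*a0^2*sigma/(k*T)."""
    return (s_crit * Omega0 * math.sqrt(2.0 * sigma / (math.pi * m1))
            * c1**2 * Pi * math.exp(-4.0 / 27.0 * mu**3 / lnS**2))
```

The exponential factor $\exp[-(4/27)\mu^3/(\ln S)^2]$ makes $J_{\rm s}$ extremely sensitive to the supersaturation ratio, which underlies the sharp peak of the current density seen in Section 4.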
\section{RESULTS OF DUST FORMATION CALCULATIONS}
Given the parameters $\gamma$ and $t_0$ representing the cooling and
dynamical times of the gas, the concentration $c_{10}$ controls the
behavior of the formation processes of clusters and grains.
We first present the results of the calculations in the case that
the initial concentration of the key molecule is high enough that
the assumption of a steady state is considered to be a good
approximation.
Then, we demonstrate the results for the low-density case.
We also explore the dependence of the results on $t_0$ and $\gamma$
in Section 4.3.
\subsection{High Density Case}
Figure 2 illustrates the formation process of C grains as a function
of time ($x = t/t_0$) for $c_{10} = 10^8$ cm$^{-3}$,
$t_0 = 300$ days, and $\gamma = 1.25$;
Figure 2(a) depicts the evolutions of abundances of $n$--mer clusters
$Y_n$ ($n \ge 2$) as well as of the key molecules $Y_1$, and
Figure 2(b) the time evolutions of the current density of $n_*$--mer
$I_*$, average grain radius $a_{\rm ave}$, and condensation efficiency
$f_{\rm con}$.
Note that, in the figure, the time evolution of the average radius
is plotted after the time at which the condensation efficiency
reaches $10^{-10}$; hereafter this time is referred to as the onset
time $x_{\rm on}$ of dust formation.\footnote{
The threshold value $10^{-10}$ is arbitrary, but, given that
$f_{\rm con}$ rises quickly with time, it does not affect the
conclusion of this paper as long as the value is less than
$\sim$$10^{-5}$.}
Figure 2(c) presents the time evolutions of the supersaturation ratio
$S$ and the critical size $n_{\rm crit}$ that is defined as the
size satisfying the condition $\gamma_n = 0$, and Figure 2(d) the time
evolutions of current densities $I_n$ for the formation of given
$n$--mers.
The results for MgSiO$_3$ grains are provided in Figure 3.
For both the grain species, the non-steady--state formation process
of dust is described as follows.
As can be seen from (c) and (d) of Figures 2 and 3, the increase in
$\ln S$, induced by the decrease in gas temperature with time,
progressively leads to the formation of clusters with larger $n$.
Once $\ln S$ reaches $\simeq$2, at which $n_{\rm crit} \simeq 100$,
$I_*$ becomes high enough that some amount of grains with $n \ge 100$
start to form.
A further increase in $\ln S$ enhances $I_n$, producing a much greater
number of clusters and grains.
Note that the onset time of dust formation is later for C grains
($x_{\rm on} \simeq 1.08$) than for MgSiO$_3$ grains
($x_{\rm on} \simeq 1.03$), which stems partly from the longer
timescale of supersaturation and partly from the larger surface
tension:
$\tau_{\rm sat} \simeq$ 10 days and $\sigma = 1400$ erg cm$^{-2}$
for C grains
($\tau_{\rm sat} \simeq$ 2.5 days and $\sigma = 400$ erg cm$^{-2}$
for MgSiO$_3$ grains).
Since newly formed grains grow efficiently to cause the consumption
of the key molecule, the supersaturation ratio $S$ reaches a maximum
and then decreases.
The critical size $n_{\rm crit}^{\rm smax}$ at $S = S_{\rm max}$ is
$\sim$20 for C grains ($\sim$10 for MgSiO$_3$ grains).
The current densities $I_n$ for the formation of clusters with
$n < n_{\rm crit}^{\rm smax}$ cease almost abruptly just before $S$
reaches $S_{\rm max}$, whereas those for $n$--mers with
$n > n_{\rm crit}^{\rm smax}$ reach almost the same maximum value at
$S \simeq S_{\rm max}$
and then decrease quickly.
Accordingly, $I_*$ has a sharp peak;
the gas temperature at the peak of $I_*$ is 1850 K (1490 K) for C
(MgSiO$_3$) grains, being lower than its equilibrium temperature
$T_0 = 1990$ K (1530 K).
After that, dust grains continue to grow until $f_{\rm con} \simeq 1$
by consuming almost all of the key molecules.
It should be noticed here that the current densities for the formation
of clusters with $n \ga n_{\rm crit}^{\rm smax}$ around
$S = S_{\rm max}$, being almost independent of $n$, reach a
steady--state value in this high-density case for which the condition
of $\tau_{\rm coll} \ll \tau_{\rm sat}$ is satisfied (see Section 5).
In addition, the steady--state value excellently matches the
steady--state current density $I_{\rm s}$, as is seen from (b) and (d)
in Figures 2 and 3, where we overplot the results obtained from the
steady model (dotted lines).
The behavior of the formation process of clusters and grains in this
high-density case can be qualitatively understood by inspecting the
time evolution of $\mathrm{e}^{\gamma_n}$ regulating the backward
reactions in Equation (45).
The factor $\mathrm{e}^{\gamma_n}$ is a decreasing function of $n$,
and in a supersaturated gas it falls below unity for $n$ larger
than a critical size $n_{\rm crit}$, approximately given as
\begin{eqnarray}
n_{\rm crit} - \frac{1}{\omega}
\simeq \left( \frac{2 \mu}{3 \ln S} \right)^3
\end{eqnarray}
for $n_{\rm crit} \gg 1$.
Note that this expression for $n_{\rm crit}$ is equivalent to the
critical size appearing in the steady--state current density defined
by Equation (A9).
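As a numerical illustration, the scaling of the critical size with supersaturation in the expression above is easy to evaluate; the value of $\mu$ below is a hypothetical placeholder (not taken from this paper), chosen so that $n_{\rm crit} \simeq 100$ at $\ln S \simeq 2$, roughly matching Figures 2(c) and 3(c).

```python
def n_crit(mu, ln_S, inv_omega=0.0):
    # n_crit - 1/omega ~ (2*mu / (3*ln S))**3, valid for n_crit >> 1
    return inv_omega + (2.0 * mu / (3.0 * ln_S)) ** 3

# hypothetical mu, tuned so that n_crit ~ 100 at ln S ~ 2
mu = 13.9
print(n_crit(mu, 2.0))   # roughly 100-mers become critical at ln S ~ 2
print(n_crit(mu, 3.0))   # the critical size falls steeply as ln S grows
```

The steep $(\ln S)^{-3}$ dependence is what drives $n_{\rm crit}$ from above 100 down to 10--20 as the gas cools.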
At the initial phase of $\ln S \la 2$, $n_{\rm crit}$ is very large
($n_{\rm crit} > 100$) (see Figures 2(c) and 3(c)), so the backward
reactions are dominant ($\mathrm{e}^{\gamma_{n}} \gg 1$) for any
size of $n$--mers.
In this case, the abundances of $n$--mers at each time approximately
take the values $Y_n \simeq Y_{n-1} \mathrm{e}^{- \gamma_n}$, and
the current toward larger $n$--mers is considerably smaller
($I_n/I_{n-1} \ll 1$).
On the other hand, as $\ln S$ increases with time, $n_{\rm crit}$
falls below $n_* = 100$, reaching down to 10--20.
In this phase, the backward reactions are suppressed
($\mathrm{e}^{\gamma_{n}} < 1$) for $ n \ga n_{\rm crit}$, so the
$n$--mers can grow exclusively through the collisions with the key
molecules.
Since the collision timescale is extremely small
($\tau_{\rm sat} / \tau_{\rm coll} \gg 1$) in this high-density case,
the reactions for $n \ga n_{\rm crit}$ proceed instantaneously, which
brings the current densities $I_n$ into a steady state
(i.e., $I_n \simeq I_{n-1}$ for $n \ga n_{\rm crit}$).
Furthermore, the agreement of $I_*$ with $I_{\rm s}$ can be interpreted
as follows:
even if $\ln S$ is high enough, the backward reactions remain
predominant for $n \la n_{\rm crit}$, where the relation
$Y_n \simeq Y_{n-1} \mathrm{e}^{- \gamma_n}$ holds.
Thus, the abundance of $n_{\rm crit}$--mers can be approximately
estimated as
\begin{eqnarray}
Y_{n_{\rm crit}}
\simeq Y_{1} \exp \left(- \sum_{n=2}^{n_{\rm crit}} \gamma_n \right)
\simeq Y_{1} \exp \left[- \mu \left( n_{\rm crit} - \frac{1}{\omega}
\right)^{\frac{2}{3}} +
\left( n_{\rm crit} - \frac{1}{\omega} \right) \ln S \right].
\end{eqnarray}
Then, the current of $n_*$--mers is found to be of order
\begin{eqnarray}
I_* \simeq I_{n_{\rm crit}}
\sim \tau_{n_{\rm crit} - 1}^{-1} Y_{n_{\rm crit}}
\sim (n_{\rm crit} -1)^{\frac{2}{3}} \tau_{\rm coll}^{-1} Y_1
\exp \left[- \frac{4 \mu^3}{ 27 (\ln S)^2} \right].
\end{eqnarray}
The exponential term in Equation (62), which dominates the time
evolution of the current density, has the same form as the term in
the steady--state current density $I_{\rm s}$.
Hence, the current density for the formation of $n_*$--mers
$I_*$ is
essentially equal to the steady--state current density $I_{\rm s}$.
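This agreement can be checked directly: inserting the expression $n_{\rm crit} - 1/\omega \simeq (2\mu/3\ln S)^3$ into the exponent $-\mu\,x^{2/3} + x\ln S$ reproduces the nucleation-barrier term $-4\mu^3/27(\ln S)^2$ of Equation (62) exactly. A minimal sketch (with an arbitrary illustrative $\mu$):

```python
import math

def exponent_direct(mu, ln_S):
    # barrier term appearing in Eq. (62) and in the steady-state current I_s
    return -4.0 * mu ** 3 / (27.0 * ln_S ** 2)

def exponent_from_ncrit(mu, ln_S):
    # -mu * x^(2/3) + x * ln S, with x = n_crit - 1/omega = (2*mu/(3*ln S))^3
    x = (2.0 * mu / (3.0 * ln_S)) ** 3
    return -mu * x ** (2.0 / 3.0) + x * ln_S

for ln_S in (1.5, 2.0, 3.0):
    assert math.isclose(exponent_direct(13.9, ln_S),
                        exponent_from_ncrit(13.9, ln_S), rel_tol=1e-12)
```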
This allows us to conclude that the application of the steady--state
current density could be valid as long as the consumption of the key
molecules due to formation and growth of clusters and grains causes the
supersaturation ratio $S$ to decrease in the course of time evolution
of gas density and temperature, as is demonstrated in the case of a
high initial gas density.
Figures 2(e) and 3(e) show the final size distributions of grains
and clusters.
The size distribution is lognormal--like with
$a_{{\rm ave},\infty} = 0.07$ $\mu$m for C grains and
$a_{{\rm ave},\infty} = 0.08$ $\mu$m for MgSiO$_3$ grains.
Since the size distribution of grains follows the time evolution of
$I_*$, and $I_{\rm s}$ is equal to $I_*$, the size distribution in the
non-steady model is the same as that in the steady model.
In this high-density case with the final condensation efficiency
$f_{{\rm con},\infty} \simeq 1$, the abundance of the key molecules
locked in the clusters is extremely small ($\sum n Y_n < 10^{-7}$).
Our calculations show that, when the initial concentration is as high
as $c_{10} \ga 10^7$ cm$^{-3}$, all carbon atoms and SiO molecules
are ultimately locked in grains with the size distributions identical
to those in the steady models.
\subsection{Low Density Case}
Figures 4 and 5 show the formation processes of C and MgSiO$_3$ grains,
respectively, for $c_{10} = 10^5$ cm$^{-3}$, $t_0 = 300$ days, and
$\gamma = 1.25$.
Even for such a low initial gas density, the formation of $n$--mers
progresses as $\ln S$ increases, as in the high-density case.
However, even if grains with $n\ge 100$ are produced, they cannot
grow efficiently through attachment of the key molecules because the
collision timescale is considerably longer
($\tau_{\rm sat}/\tau_{\rm coll} \sim$ 1--10).
Thus, despite the fact that the formation and growth of clusters and
grains significantly consume the key molecules, $\ln S$ continues to
increase (Figures 4(c) and 5(c)), and $I_*$ gradually decreases after
passing the peak in contrast to the high-density case (Figures 4(d)
and 5(d)), which results in the formation of many small grains with
$n \la 1000$.
Finally, the depletion due to formation of clusters and grains and/or
the dilution due to the expansion of the gas makes the concentration
of the key molecules too low for further growth to proceed, and the
abundances of clusters approach constant values
(see Figures 4(a) and 5(a)).
Figures 4(b) and 5(b) compare the time evolutions of $I_*$ (and
$I_{\rm s}$), $a_{\rm ave}$, and $f_{\rm con}$ between the non-steady
and the steady models.
It can be seen that the non-steady current density $I_*$ rises up at
a time later than the steady--state current density $I_{\rm s}$,
corresponding to the later onset time of dust formation, and that its
peak value is much smaller than that in the steady model.
This is in contrast to the high-density case, indicating that the
steady model is no longer appropriate for this low-density case with
$\tau_{\rm sat} / \tau_{\rm coll} \la 10$ during the evolution.
For clusters with $n \la n_{\rm crit}$, as mentioned in the previous
subsection, $Y_n$ evolves as
$Y_n \simeq Y_{n-1} \mathrm{e}^{- \gamma_n}$, and
$I_{n_{\rm crit}}$ at each time is on the order of $I_{\rm s}$
(cf.\ Equation (62)).
However, the collision timescale is too long for the current densities
$I_n$ to establish a steady state at $n \ga n_{\rm crit}$, so $I_*$
remains much lower than $I_{n_{\rm crit}} \simeq I_{\rm s}$
(Figures 4(d) and 5(d)).
There also appear differences in the final average radius and
condensation efficiency of dust grains between the non-steady and
steady models.
The final condensation efficiency in the non-steady model is
$f_{{\rm con},\infty} \simeq 0.3$ for C grains
($f_{{\rm con},\infty} \simeq 0.01$ for MgSiO$_3$ grains),
which is lower than $f_{{\rm con},\infty} = 1$ in the steady model for
both the grain species.
In both models, few key molecules are left in the gas phase in the end
($f_{{\rm dep},\infty} < 10^{-5}$), indicating that, in the non-steady
model, 70\% (99\%) of them are finally bound up in clusters.
On the other hand, the final average radius of dust grains
is larger for the non-steady model ($a_{{\rm ave},\infty} = 0.0007$
$\mu$m for C grains and 0.0011 $\mu$m for MgSiO$_3$ grains)
than for the steady model ($a_{{\rm ave},\infty} = 0.0004$ $\mu$m for
C grains and 0.0005 $\mu$m for MgSiO$_3$ grains).
The discrepancies in the average grain radius and condensation
efficiency between the two models simply reflect the difference in
the minimum size considered as grains.
As can be seen from Figures 4(e) and 5(e), the final size distribution
in the steady model is quite similar to the combined size distribution
of grains and clusters in the non-steady model.
This is because $n_{\rm crit}$ and $\tau_{\rm coll}$ are the same
in both models, so the number of $n_{\rm crit}$--mers formed
at a given time and the growth rate are essentially identical.
However, in the steady model, clusters that meet $n \ge n_{\rm crit}$
are taken as bulk grains, and $n_{\rm crit}$ is normally less than 10,
even down to $\simeq$1--2 in the low density case as shown in Figures
4 and 5.
Thus, the steady model, which regards small $n$--mers as grains,
leads to a smaller average grain radius and a higher condensation
efficiency.
Although there is no clear-cut number of constituent atoms that
distinguishes small clusters from grains, it is unreasonable to
assume that clusters with $n \le 10$ possess the properties of bulk
grains.
The results of calculations for this low initial density clearly
attest that the application of the steady--state current density
overestimates the condensation efficiency and underestimates the
average grain radius for dust formation in less dense and/or rapidly
cooling environments.
Also, it may be useful to point out here that applying the
steady--state current density with a fixed cut-off value of the
critical size (e.g., Bianchi \& Schneider 2007), on the grounds that
the extension to smaller critical sizes is inadequate, considerably
depresses the condensation efficiency and enhances the average grain
radius in low-density and/or rapidly cooling environments, and cannot
reproduce the combined size distribution of clusters and grains.
\subsection{Dependence on $t_0$ and $\gamma$}
In Sections 4.1 and 4.2, we have demonstrated how the initial
concentration of the key molecule affects the formation process
and properties of newly formed grains.
In this subsection, we investigate how the formation process, average
radius, and size distribution of dust grains depend on the other free
parameters $t_0$ and $\gamma$.
Figure 6(a) plots the formation process of C grains for $t_0 =$ 100,
300, and 600 days with $c_{10} = 10^7$ cm$^{-3}$ and $\gamma =$ 1.25.
The results of the calculations show that a larger $t_0$ leads to a
smaller peak of $I_*$ as well as a slightly earlier onset time of
dust formation $x_{\rm on}$.
This is explained as follows:
as seen from Equation (57), the timescale of supersaturation in
terms of $x$, $(d \ln S / dx)^{-1}$ is independent of $t_0$,
while the timescale of grain growth,
$(d \ln a / dx)^{-1} \propto \tau_{\rm coll} / t_0$, is inversely
proportional to $t_0$.
This means that the increase in $t_0$ makes grain growth more active
but has little impact on the number of clusters at a given time $x$.
Therefore, for a larger $t_0$, dust grains capture the key molecules
more efficiently through their growth, which causes a faster rise of
$f_{\rm con}$ and a faster drop of $I_*$ (a smaller peak of $I_*$)
and results in the grain size distribution weighted toward a larger
radius (see Figure 6(b)).
Thus, the increase in $t_0$ enhances the effect of grain growth
relative to the formation of clusters and acts to produce dust grains
with large average radii.
Figure 7 gives the results of the calculations for the formation
of MgSiO$_3$, adopting $\gamma =$ 1.25, 1.4, and 1.6 for
$c_{10} = 10^7$ cm$^{-3}$ and $t_0$ = 300 days.
For a larger $\gamma$, which corresponds to a more rapid cooling of
the gas, the onset time of dust formation is earlier, and $I_*$ has
a higher peak.
Again considering the timescale in terms of $x$, the timescale of
supersaturation,
$(d \ln S / dx)^{-1} \propto x^{-3 \gamma+4}/(\gamma-1)$ decreases
with increasing $\gamma$, whereas the timescale of grain growth,
$(d \ln a / dx)^{-1} \propto x^{3 (\gamma+1) / 2}$, increases.
Hence, for a larger $\gamma$, a more rapid increase in $\ln S$ as
well as a more rapid decrease in $n_{\rm crit}$ leads to the formation
of a larger number of clusters with $n \ga n_{\rm crit}$ at an earlier
time before grain growth efficiently consumes the key molecules.
Here it should be noted that the collision timescale $\tau_{\rm coll}$
in the models considered here is still short enough so that the
consumption of the key molecules due to grain growth makes $\ln S$
decrease during the evolution.
Consequently, in the model with larger $\gamma$, $I_*$ increases more
rapidly and has a higher and narrower peak at an earlier time, and the
average grain radius as well as the peak radius of size distribution
becomes smaller, as seen from Figures 7(a) and 7(b).
In conclusion, the increase in $\gamma$ makes the formation of
clusters more active relative to grain growth and induces the
formation of dust grains with small average radii even if
$\tau_{\rm sat}/\tau_{\rm coll}$ is not much larger than unity.
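The competition between the two timescales quoted above can be made concrete with a short sketch; the proportionality constants are omitted, since only the $x$ and $\gamma$ dependence matters for the argument.

```python
def tau_sat_scaling(x, gamma):
    # supersaturation timescale: (d ln S / dx)^(-1) ∝ x^(-3*gamma + 4) / (gamma - 1)
    return x ** (-3.0 * gamma + 4.0) / (gamma - 1.0)

def tau_growth_scaling(x, gamma):
    # grain-growth timescale: (d ln a / dx)^(-1) ∝ x^(3*(gamma + 1)/2)
    return x ** (1.5 * (gamma + 1.0))

x = 1.2  # some epoch after t0 (x = t / t0 > 1)
# a larger gamma means faster supersaturation but slower grain growth
assert tau_sat_scaling(x, 1.6) < tau_sat_scaling(x, 1.25)
assert tau_growth_scaling(x, 1.6) > tau_growth_scaling(x, 1.25)
```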
\section{THE SCALING RELATIONS FOR AVERAGE GRAIN RADIUS AND
CONDENSATION EFFICIENCY}
The objects to be clarified in the study of dust formation in
astrophysical environments are not only the chemical composition
of dust grains but also their amount and size distribution.
In the ejecta of SNe, the knowledge on size distribution of newly
formed dust is crucial for unraveling what amount and size of dust
grains are finally ejected from SNe to the ISM, because the destruction
efficiency of dust by the reverse shock heavily depends on the size
distribution (e.g., Nozawa et al.\ 2007).
Our results show that the size distribution of dust formed for given
time evolutions of gas density and temperature is lognormal--like
as long as $\tau_{\rm sat}/\tau_{\rm coll} \gg 1$ during the formation
of dust.
Therefore, the average grain radius can be taken as a representative
measure of the size distribution.
The condensation efficiency is also a fundamental quantity in
estimating the mass of newly formed dust.
In this section, we explore how the average grain radius and
condensation efficiency can be constrained from a physical condition
at the time of dust formation, and derive the scaling relations for
the average radius and condensation efficiency, referring to the
results of the calculations presented in the previous section.
In addition, we clarify in what conditions the steady--state nucleation
rate is applicable.
Then, we shall present some examples of the application of the
scaling relations for the formation of dust in SNe.
\subsection{The Physical Quantity Characterizing Dust Formation Process
and the Scaling Relations}
The results in Section 4 demonstrate that the formation process of
dust is determined by the competition between the formation of
clusters and the growth of grains.
Although the onset time of dust formation (and the condensation time
at which the current density $I_*$ reaches the maximum), average grain
radius, and condensation efficiency depend on $c_{10}$, $\gamma$, and
$t_0$ in a complicated manner, we have shown that the behavior of
formation processes of dust grains can be qualitatively interpreted in
terms of $\tau_{\rm sat}$ and $\tau_{\rm coll}$ during the formation
of dust.
It has been shown in the studies based on the steady--state nucleation
rate that the average radius and number density of dust grains formed in
a cooling gas undergoing macroscopic motion can be scaled by a
non-dimensional quantity $\Lambda = \tau_{\rm sat}/\tau_{\rm coll}$
at the condensation time when the nucleation rate reaches the maximum
(e.g., Hasegawa \& Kozasa 1988).
The formation time of dust is the most direct information that can be
obtained from observations of SNe.
Hence, it is useful to relate the average grain radius and condensation
efficiency to the gas density and temperature at the time of dust
formation, in order to get the information on physical conditions in
the ejecta from the observations and vice versa.
One of the best indicators for the formation time of dust would be
the condensation time, $t_{\rm c}$, defined as the time when $I_*$
has a peak.
However, the depletion of the key molecules due to the formation and
growth of clusters and grains at $t_{\rm c}$ is considerably large
($Y_1 \simeq$ 0.1--0.2 at $t_{\rm c}$), which has significant effects
on the relevant physical quantities.
Thus, we adopt, as the time of dust formation, the onset time of
dust formation, $t_{\rm on}$, defined as the time at which
$f_{\rm con}$ reaches $10^{-10}$.
Then, the non-dimensional physical quantity $\Lambda_{\rm on}$
characterizing the formation process of dust grains is given as
\begin{eqnarray}
\Lambda_{\rm on}
&\equiv& \frac{ \tau_{\rm sat}(t_{\rm on}) }{ \tau_{\rm coll} (t_{\rm on}) }
\sim \frac{t_{\rm on}}{3 (\gamma -1)} \frac{T_{\rm on}}{A}
\times s 4 \pi a_0^2 \tilde{c}_{\rm on}
\left( \frac{ k T_{\rm on} }{ 2 \pi m_1} \right)^{\frac{1}{2}}
\nonumber \\
&\sim& \frac{C}{ \gamma - 1 }
\left( \frac{s}{1.0} \right)
\left( \frac{\tilde{c}_{\rm on}}{10^8 ~{\rm cm}^{-3}} \right)
\left( \frac{T_{\rm on}}{2,000 ~{\rm K}} \right)^{\frac{3}{2}}
\left( \frac{t_{\rm on}}{300~{\rm days}} \right),
\end{eqnarray}
where $\tilde{c}_{\rm on} = \tilde{c}_1(t_{\rm on})$ and
$T_{\rm on} = T(t_{\rm on})$, and $C = 1.94 \times 10^3$
($1.15 \times 10^3$) for C grains (MgSiO$_3$ grains).
In Equation (63), we employ the approximation
$\tau_{\rm sat}^{-1} \simeq (A/T) \tau_{\rm cool}^{-1}$
(see Equation (57)), although we have calculated $\Lambda_{\rm on}$
without using this approximation in the following figures.
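The scaled form of Equation (63) is straightforward to evaluate numerically; a minimal sketch for C grains ($C = 1.94 \times 10^3$) follows, where the fiducial inputs simply reproduce $\Lambda_{\rm on} = C/(\gamma - 1)$.

```python
def lambda_on(gamma, s, c_on, T_on, t_on, C=1.94e3):
    # Equation (63), scaled form; c_on in cm^-3, T_on in K, t_on in days
    return (C / (gamma - 1.0)) * (s / 1.0) * (c_on / 1e8) \
        * (T_on / 2000.0) ** 1.5 * (t_on / 300.0)

# fiducial values: s = 1, c_on = 1e8 cm^-3, T_on = 2000 K, t_on = 300 days
print(lambda_on(1.25, 1.0, 1e8, 2000.0, 300.0))  # 7760.0 = 1.94e3 / 0.25
```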
Figure 8 presents the final average grain radius $a_{{\rm ave},\infty}$
and condensation efficiency $f_{{\rm con},\infty}$ as a function of
$\Lambda_{\rm on}$ calculated for $\gamma =$ 1.1, 1.3, 1.5, and 1.7
by covering a wide range of $c_{10}$ and $t_0$.
The figures show that, as $\Lambda_{\rm on}$ increases,
$a_{{\rm ave},\infty}$ and $f_{{\rm con},\infty}$ increase, and
$f_{{\rm con},\infty} = 1$ at $\Lambda_{\rm on} \ga$ 20--30 for
both C and MgSiO$_3$ grains;
for a larger $\Lambda_{\rm on}$, grain growth becomes more dominant
over the formation of clusters, and as a result larger grains are
formed to lock up all of the key molecules.
The remarkable consequence of Figure 8 is that $a_{{\rm ave},\infty}$
and $f_{{\rm con},\infty}$ for different $\gamma$ (especially for
$\gamma \ga 1.2$) are, respectively, plotted almost completely
as a single curve for both C grains (Figure 8(a)) and MgSiO$_3$
grains (Figure 8(b)).
This means that the average grain radius and condensation efficiency
can be uniquely determined by one parameter $\Lambda_{\rm on}$,
except for C grains formed in extremely slowly cooling gas with low
densities corresponding to the case of $\gamma = 1.1$ with
$\Lambda_{\rm on} \la 10$.
In the figures, we also plot $a_{{\rm ave},\infty}$ and
$f_{{\rm con},\infty}$ from the steady model for $\gamma = 1.3$.
They deviate from those of the non-steady model at
$\Lambda_{\rm on} \la 30$, where the steady model predicts values of
$a_{{\rm ave},\infty}$ too small to be regarded as bulk grains while
keeping $f_{{\rm con},\infty} = 1$.
In other words, the steady--state nucleation rate is applicable
only if $\Lambda_{\rm on} \ga 30$.
Then, we derive the approximation formulae
describing the dependence of $a_{{\rm ave},\infty}$ and
$f_{{\rm con},\infty}$ on $\Lambda_{\rm on}$ for the non-steady model,
which are, respectively, given by
\begin{eqnarray}
\log \left( \frac{a_{{\rm ave},\infty} }{ a_* } -1 \right) =
\epsilon_1 + \epsilon_2 \log \Lambda_{\rm on}
\end{eqnarray}
and
\begin{eqnarray}
\log f_{{\rm con},\infty} =
\chi_1 \left[ \tanh \left( \chi_2 \log \Lambda_{\rm on} + \chi_3
\right) -1 \right],
\end{eqnarray}
where the fitting parameters $\epsilon_1$, $\epsilon_2$, and
$\chi_k$ ($k =$ 1--3) are given in Table 1.
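Solving Equations (64) and (65) for $a_{{\rm ave},\infty}$ and $f_{{\rm con},\infty}$ gives the following sketch; the parameter values in the example are hypothetical placeholders for illustration only, since the actual entries of Table 1 are not reproduced here.

```python
import math

def a_ave_inf(lam_on, a_star, eps1, eps2):
    # Equation (64): log(a_ave/a_* - 1) = eps1 + eps2 * log(Lambda_on)
    return a_star * (1.0 + 10.0 ** (eps1 + eps2 * math.log10(lam_on)))

def f_con_inf(lam_on, chi1, chi2, chi3):
    # Equation (65): log f_con = chi1 * [tanh(chi2 * log(Lambda_on) + chi3) - 1]
    return 10.0 ** (chi1 * (math.tanh(chi2 * math.log10(lam_on) + chi3) - 1.0))

# hypothetical parameters, not the Table 1 values
for lam in (1.0, 30.0, 1e4):
    f = f_con_inf(lam, chi1=2.0, chi2=1.0, chi3=0.0)
    assert 0.0 < f <= 1.0   # the tanh form keeps the efficiency in (0, 1]
```

The tanh saturation in Equation (65) guarantees $f_{{\rm con},\infty} \to 1$ for large $\Lambda_{\rm on}$, consistent with Figure 8.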
In Figure 9, $a_{{\rm ave},\infty}$ and $f_{{\rm con},\infty}$
calculated by the above fitting formulae are compared with the results
of simulations for $\gamma = 1.25$.
Equation (64) reproduces the calculated average radii with errors of
less than 5\% for $\Lambda_{\rm on} \le 10^6$.
It should be emphasized here that the scaling relations given above
are independent of the initial conditions of the calculations and the
time evolution of the gas density.
In fact, we have performed the dust formation calculations for
expanding gas flows with a constant velocity by changing $\gamma$,
and confirmed that the resulting $a_{{\rm ave},\infty}$ and
$f_{{\rm con},\infty}$ entirely coincide with those shown in Figure 8.
Therefore, the average grain radius and condensation efficiency
of a given grain species can be universally described by the
corresponding non-dimensional physical quantity $\Lambda _{\rm on}$.
\subsection{Application of the Scaling Relations to Dust Formation
in SNe}
Equation (64) allows us to estimate the typical size of
newly formed grains once we know the density of the gas in the ejecta
and the onset time of dust formation (or the formation time of dust).
For example, from Figures 1 and 2 in Nozawa et al.\ (2003), the
concentration of carbon atoms in the carbon-rich He layer of Type
II--P SNe is found to be $\tilde{c}_{\rm on} \simeq$ $10^8$--$10^9$
cm$^{-3}$ at $t_{\rm on} = 330$ days.
For the reasonable values of $\gamma$ ($\simeq$1.5--1.7) and
$T_{\rm on}$ ($\simeq$2,000 K), Equation (64) presents
$a_{{\rm ave},\infty} =$0.03--0.3 $\mu$m ($\Lambda_{\rm on} =$
(0.3--8$) \times 10^4$, see Figure 9(a)), which is consistent with
the average grain radii given in Figure 7 of Nozawa et al.\ (2003).
In Type IIb SNe with much less massive hydrogen envelopes, the
density of the gas at $t_{\rm on} = 330$ days is by a factor of
100--300 lower than in Type II--P
(see Figure 2 of Nozawa et al.\ (2010)), and Equation (64) leads to
$a_{{\rm ave},\infty} \sim$ 0.001 $\mu$m
($\Lambda_{\rm on} \sim 100$).
This radius also agrees with that obtained in Nozawa et al.\ (2010).
These simple analyses suggest that our previous calculations, based on
a theory of non-steady--state nucleation and grain growth that applies
the steady--state nucleation rate with a relaxation time toward the
steady--state rate (see Nozawa et al.\ 2003), were performed under
the condition that the steady--state approximation is appropriate.
Observationally, there have been few studies that reported a typical
size of dust formed in SNe.
Recently, Maeda et al.\ (2013) clearly detected the formation of C
grains in the luminous Type IIn SN 2010jl around day 550 after the
explosion by the optical through the near-infrared observation.
They suggested that the typical radius of the dust grains is less
than 0.1 $\mu$m (more probably $\la$0.01 $\mu$m) to account for
the wavelength-dependence of obscuration of hydrogen emission lines.
These C grains are likely to have formed not in the ejecta but in
relatively dense clumps in the shocked circumstellar shell, but it
would be interesting to compare with our results.
Based on the simple argument of optical depth, Maeda et al.\ (2013)
showed that the gas density in the interclump medium must be
$c_{\rm gas} \la 10^9$ cm$^{-3}$.
Then, assuming the abundance of carbon atoms to be
$c_1/c_{\rm gas} = 10^{-4}$ and adopting typical values of $\gamma$ and
$T_{\rm on}$, we obtain $\Lambda_{\rm on} \la 8 \times 10^2 (D / 100)$,
where $D$ is the density contrast between the dense clumps and the
interclump medium.
Adopting $D =$ 100--1000, Equation (64) yields
$a_{{\rm ave},\infty} \le$ 0.01--0.08 $\mu$m, which is consistent
with the grain size estimated from the observation.
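This estimate can be reproduced by reusing the scaled form of Equation (63). The specific inputs $\gamma = 1.5$, $T_{\rm on} = 2000$ K, and $t_{\rm on} = 550$ days below are our assumed "typical" values for this sketch, not quantities stated by Maeda et al.\ (2013).

```python
def lambda_on(gamma, s, c_on, T_on, t_on, C=1.94e3):
    # Equation (63), scaled form, with C = 1.94e3 for carbon grains
    return (C / (gamma - 1.0)) * s * (c_on / 1e8) \
        * (T_on / 2000.0) ** 1.5 * (t_on / 300.0)

D = 100.0                   # clump/interclump density contrast
c_carbon = 1e-4 * 1e9 * D   # c1/c_gas = 1e-4, c_gas <= 1e9 cm^-3, times D
print(lambda_on(1.5, 1.0, c_carbon, 2000.0, 550.0))  # ~7e2, consistent with <~ 8e2 (D/100)
```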
This emphasizes that Equations (64) and (65) could provide powerful
constraints on the properties of newly formed grains through the gas
density and condensation time extracted from observations as well as
those predicted from theoretical models.
\section{Summary}
We have developed a new formulation describing the non-steady--state
formation of small clusters and grains in a self-consistent manner,
taking into account chemical reactions at the time of dust formation
and assuming that the temperatures of small clusters are the same as
that of the gas.
Given the chemical potentials of small clusters, the formula can be
applied to investigate the formation process of dust grains in
rarefied astrophysical environments, where the steady--state
nucleation rate is not applicable.
It should be pointed out here that the formation process of dust is
formulated in the present study under the assumption that the
smallest cluster (dimer) has the same chemical composition as the
grains.
However, the formulation can be extended for the case that chemical
compositions of small clusters are different from the grains, given
the chemical reaction paths and chemical potentials.
Also, the formula can be extended and applied to explore the effects
of the difference in temperatures of small clusters and the gas
(Kozasa et al.\ 1996; Yasuda \& Kozasa 2012) as well as the
temperature fluctuation (Keith \& Lazzati 2011) and the shape of
small clusters (Fallest et al.\ 2011) on the formation process of
dust grains.
These subjects will be studied in future work.
Applying the new formulation with the capillary approximation for
evaluating the chemical potentials of small grains, we have
investigated the formation processes of C and MgSiO$_3$ grains over
a wide range of physical conditions expected in the ejecta of SNe.
The results of the calculations have shown that the behavior of
non-steady--state formation process of small clusters and grains
can be qualitatively interpreted in terms of the temporal evolutions
of the collision timescale of key molecule $\tau_{\rm coll}$ and
the supersaturation timescale of the gas $\tau_{\rm sat}$ during the
formation of dust;
when $\tau_{\rm coll} \ll \tau_{\rm sat}$, the formation process of
dust grains can be completely reproduced by the steady--state
nucleation rate, and grains form with the condensation efficiency
$f_{\rm con} \simeq 1$; otherwise, the formation of clusters and
grains proceeds in a non-steady state, the growth of clusters and
grains is considerably suppressed, and the resulting condensation
efficiency is $f_{\rm con} < 1$.
Analyzing the results of the model calculations, we found that the
condensation efficiency and average radius of newly formed grains
can be fully described by one non-dimensional quantity
$\Lambda_{\rm on}$, the ratio of the supersaturation timescale to the
collision timescale at the onset time of dust formation, although the
time evolutions of gas temperature and density considerably influence
the formation process as well as the average grain radius and
condensation efficiency.
Also, we have revealed that the steady--state nucleation rate is
applicable under the condition of $\Lambda_{\rm on} \ga 30$,
irrespective of grain species;
otherwise the application of the steady--state nucleation rate results
in the formation of a large number of unreasonably small grains
with a condensation efficiency considerably higher than that
calculated by the non-steady rate.
Furthermore, we have derived the scaling relations for the
average radius and condensation efficiency of C and MgSiO$_3$ grains
as a function of $\Lambda_{\rm on}$.
The approximation formulae depend neither on the time evolution of the
gas density and temperature nor on the initial condition, and thus
could serve as a universal relation to predict the mass and average
size of newly formed grains from the observations and/or the model
calculations of explosions of supernovae and novae as well as
mass-loss winds from stars.
\acknowledgments
We are grateful to the anonymous referee for critical comments
that improved the manuscript.
This research has been supported by World Premier International
Research Center Initiative (WPI Initiative), MEXT, Japan, and by the
Grant-in-Aid for Scientific Research of the Japan Society for the
Promotion of Science (20340038, 22684004, and 23224004).
\newpage
\section{Introduction}
\label{sec:introduction}
\setcounter{equation}{0}
Recently, a new decoding rule called jar decoding was proposed in \cite{yang-meng:jardecoding}, \cite{yang-meng:isit2012_jardecoding}, under which the decoder first forms a set of suitable size, called a jar, consisting of sequences from the channel input alphabet considered to be closely related to the received channel output sequence through the channel, and then takes any codeword from the jar as the estimate of the transmitted codeword. It was shown in \cite{yang-meng:jardecoding} and \cite{yang-meng:isit2012_jardecoding} that under jar decoding, for any binary input memoryless channel with discrete or continuous output and with uniform capacity achieving distribution (BIMC), linear codes ${\cal C}_n$ of block length $n$ with rate $R({\cal C}_n)$ and word error probability $P_e ({\cal C}_n)$ exist such that
\begin{equation}
\label{eq1-1}
P_e (\mathcal{C}_{n}) \leq \left( \bar{\xi}_H (X|Y,\lambda,n) + \frac{2(1-C_{BE})
M_{\mathrm{H}} (X|Y,\lambda)}{\sqrt{n} \sigma^3_{\mathrm{H}} (X|Y,\lambda)}\right) e^{- n r_{X|Y} (\delta) }
\end{equation}
and
\begin{equation}
\label{eq1-2}
{R}(\mathcal{C}_{n}) \geq C_{\mathrm{BIMC}} - \delta - r_{X|Y}
(\delta) + \frac{\ln \frac{2 (1-C_{BE})
M_{\mathrm{H}} (X|Y,\lambda)}{ \sqrt{n} \sigma^3_{\mathrm{H}} (X|Y,\lambda)} }{n}
\end{equation}
for any $\delta \in (0, \Delta^* (X|Y))$, where $C_{\mathrm{BIMC}}$ is the capacity of the given BIMC, $\lambda = r'_{X|Y} (\delta)$, and all other quantities are defined later in Sections \ref{sec:non-asympt-conv} and \ref{sec:appr-eval}. Similar achievable results were also established in \cite{yang-meng:jardecoding} for non-linear codes for any discrete input memoryless channel with discrete or continuous output (DIMC).
The achievability given in \eqref{eq1-1} and \eqref{eq1-2} is quite sharp. It implies \cite{yang-meng:jardecoding}, \cite{yang-meng:isit2012_jardecoding} that for any BIMC, there exist linear codes
$\mathcal{C}_n$ of block length $n$ such that
\begin{equation}
\label{eq1-3}
R(\mathcal{C}_n) \geq C_{\mathrm{BIMC}} - \sigma_{\mathrm{H}} (X|Y)
\sqrt{\frac{2 \alpha \ln n}{n}} - \left( \alpha + \frac{1}{2} \right) \frac{\ln n}{n} - O
\left( \frac{\ln \ln n}{n} \right)
\end{equation}
while maintaining the word error probability
\begin{equation}
\label{eq1-4}
P_e(\mathcal{C}_n) \leq \frac{n^{-\alpha}}{2 \sqrt{\pi
\alpha \ln n}} + O \left( n^{-\alpha}
\frac{\ln n}{\sqrt{n}} \right) = \Theta \left(
\frac{n^{-\alpha}}{\sqrt{\ln n}} \right)
\end{equation}
and
\begin{equation}
\label{eq1-5}
R(\mathcal{C}_n) \geq C_{\mathrm{BIMC}} - \frac{c}{\sqrt{n}}
- \frac{\ln n}{2 n} + \frac{1}{n} \ln \frac{(1-C_{BE})
M_{\mathrm{H}} (X|Y)}{\sigma^3_{\mathrm{H}} (X|Y)}
\end{equation}
while maintaining the word error probability
\begin{equation}
\label{eq1-6}
P_e(\mathcal{C}_n) \leq Q \left( \frac{c}{ \sigma_{\mathrm{H}} (X|Y) } \right )
+ \frac{M_{\mathrm{H}} (X|Y)}{\sigma^3_{\mathrm{H}} (X|Y)} {1
\over \sqrt{n}},
\end{equation}
where $\sigma^2_{\mathrm{H}} (X|Y)$ and $M_{\mathrm{H}}
(X|Y)$ are parameters related to the
channel and specified in Section~\ref{sec:non-asympt-conv},
\begin{equation}
\label{eq1-7}
Q(z) = {1\over \sqrt{2 \pi}} \int_z^{\infty} e^{-t^2/2} d t,
\end{equation}
and $C_{BE} < 1$ is the universal constant in the
Berry-Esseen central limit theorem. Furthermore, when the error probability is
maintained constant in \eqref{eq1-6}, the first two terms (i.e.,
$C_{\mathrm{BIMC}}$ and
$\frac{c}{\sqrt{n}}$) in \eqref{eq1-5}
coincide with the asymptotic second order coding rate analysis in \cite{strassen-1962}, \cite{Hayashi-2009}, \cite{Yury-Poor-Verdu-2010}. Consequently, jar decoding is shown to be second order optimal asymptotically when the error probability $\epsilon$ is maintained constant with respect to block length $n$.
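The word-error bound in \eqref{eq1-6} is easy to evaluate numerically. The sketch below uses the standard identity $Q(z) = \frac{1}{2}\,\mathrm{erfc}(z/\sqrt{2})$ for the tail function in \eqref{eq1-7}, and treats $\sigma_{\mathrm{H}}(X|Y)$ and $M_{\mathrm{H}}(X|Y)$ as given channel constants; the values in the example are arbitrary placeholders, not derived from any particular channel.

```python
import math

def Q(z):
    # Gaussian tail function of (1.7): Q(z) = (1/2) * erfc(z / sqrt(2))
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def pe_upper_bound(c, sigma_H, M_H, n):
    # right-hand side of (1.6): Q(c / sigma_H) + (M_H / sigma_H^3) / sqrt(n)
    return Q(c / sigma_H) + (M_H / sigma_H ** 3) / math.sqrt(n)

print(Q(0.0))                                 # 0.5
print(pe_upper_bound(2.0, 1.0, 1.0, 10**6))   # Berry-Esseen correction shrinks as n grows
```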
In the non-asymptotic regime, however, the concept of constant error probability with respect to block length $n$ is not applicable. For example, suppose that $n= 1000$ and the error probability $\epsilon$ is equal to $10^{-6}$. How would one interpret the relationship between $\epsilon$ and $n$ in this case? Does it make sense to interpret $\epsilon$ as a constant with respect to $n$? Or is it better to interpret $\epsilon$ as a polynomial function of $n$, namely, $\epsilon = n^{-2}$? Since $\epsilon$ is quite small relative to $n$, we believe that the latter interpretation makes more sense in this particular case. In general, when both the error probability $\epsilon$ and block length $n$ are finite, what really matters is their relative magnitude to each other. Therefore, it is interesting to see whether the achievability in \eqref{eq1-1} and \eqref{eq1-2} remains tight up to the second order in the non-asymptotic regime where both the error probability $\epsilon$ and block length $n$ are finite.
In this paper, we provide an affirmative answer to the above question. Specifically, we first present a new converse proof technique dubbed the outer mirror image of jar and use the technique to establish new non-asymptotic converse coding theorems for any binary input memoryless symmetric channel with discrete or continuous output (BIMSC) and any DIMC. We then introduce a quantity $\delta_{t,n} (\epsilon)$ to measure the relative magnitude of the error probability $\epsilon$ and block length $n$ with respect to a given channel and an input distribution $t$. By combining the achievability of jar decoding (see \eqref{eq1-1} and \eqref{eq1-2} in the case of BIMSC) with the new converses, we further show that when $\epsilon < 1/2$, the best channel coding rate $R_n (\epsilon)$ given $n$ and $\epsilon$ has a ``Taylor-type expansion'' with respect to $\delta_{t, n} (\epsilon)$ in a neighborhood of $\delta_{t, n} (\epsilon) =0$, where the first two terms of the expansion are $\max_{t} [ I(t; P) - \delta_{t, n} (\epsilon) ] $, which is equal to $ I(t^*, P) - \delta_{t^*, n} (\epsilon) $ for some optimal distribution $t^*$, and the third order term of the expansion is $O(\delta^2_{t^*, n} (\epsilon)) $ whenever $\delta_{t^*, n} (\epsilon) = \Omega(\sqrt{ \ln n / n})$. Since the leading two terms in the achievability of jar decoding (see \eqref{eq1-2} in the case of BIMSC when $P_e (\mathcal{C}_{n}) = \epsilon$) coincide with the first two terms of this Taylor-type expansion of $R_n (\epsilon)$, jar decoding is indeed optimal up to the second order coding performance in the non-asymptotic regime.
Finally, based on the Taylor-type expansion of $R_n (\epsilon)$ and our new non-asymptotic converses, we also derive two approximation formulas (dubbed ``SO'' and ``NEP'') for $R_n (\epsilon)$ in the non-asymptotic regime. The SO approximation formula consists only of the first two terms in the Taylor-type expansion of $R_n (\epsilon)$. On the other hand, in addition to the first two terms in the Taylor-type expansion of $R_n (\epsilon)$, the NEP approximation formula includes some higher order terms from our non-asymptotic converses as well. (Here, NEP stands for the non-asymptotic equipartition properties established recently in \cite{yang-meng:nep}, which underlie both the achievability bounds in \eqref{eq1-1} and \eqref{eq1-2} and our non-asymptotic converses.) These formulas are further evaluated and compared against some of the best bounds known so far, as well as the normal approximation of $R_n (\epsilon)$ in \cite{Yury-Poor-Verdu-2010}. It turns out that while the normal approximation is all over the map, i.e., sometimes below the achievability and sometimes above the converse, the SO approximation is much more reliable as it is always below the converses; in the meantime, the NEP approximation is the best among the three and always provides an accurate estimate of $R_n (\epsilon)$.
The rest of this paper is organized as follows. Non-asymptotic converses and the Taylor-type expansion of $R_n (\epsilon)$ for BIMSC and DIMC are established in Sections \ref{sec:non-asympt-conv} and \ref{sec:non-asysmpt-converse-coding-dsc}, respectively. The SO and NEP approximation formulas are developed, numerically calculated, and compared against the normal approximation in Section \ref{sec:appr-eval} for the binary symmetric channel (BSC), binary erasure channel (BEC), binary input additive Gaussian channel (BIAGC), and Z-channel. Finally, conclusions are drawn in Section \ref{sec:conclusion}.
\section{Non-Asymptotic Converse and Taylor-type Expansion: BIMSC}
\label{sec:non-asympt-conv}
\setcounter{equation}{0}
Consider a BIMC $\{p(y|x): x \in \mathcal{X}, y \in \mathcal{Y}\}$, where $\mathcal{X}=\{0,1\}$ is the channel input alphabet, and $\mathcal{Y}$ is the channel output alphabet, which is arbitrary and could be discrete or continuous. Throughout this section, let $X$ denote the uniform random variable on $\mathcal{X}$ and $Y$ the corresponding channel output of the BIMC in response to $X$. The capacity (in nats) of the BIMC is then given by
\begin{equation}
\label{eq-non-bimc-1}
C_{\mathrm{BIMC}} = \ln 2 - H(X|Y)
\end{equation}
where $H(X |Y)$ is the conditional entropy of $X$ given $Y$. Here and throughout the rest of the paper, $\ln$ stands for the logarithm with base $e$, and all information quantities are measured in nats. Further assume that the random variable $ -\ln p(0|Y) $ given $X=0$ and the random variable $-\ln p(1|Y) $ given $X=1$ have the same distribution, where $p(0|Y)$ ($p(1|Y)$, respectively) denotes the conditional probability of $X=0$ ($X=1$, respectively) given $Y$. Such a BIMC is called a binary input memoryless symmetric channel (BIMSC). (It can be verified that BSC, BEC, BIAGC, and general binary input symmetric output channels all belong to the class of BIMSC.) Under this assumption, we have
\begin{equation}
\label{eq-non-bimc-3}
\Pr \left\{ \left. -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta
\right| X^n = x^n \right\} = \Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta
\right\}
\end{equation}
for any $x^n \in \mathcal{X}^n$, where $Y^n$ is the output of the BIMSC in response to $X^n$, the $n$ independent copies of $X$. Throughout this paper, for any set $S$, we use $S^n$ to denote the set of all sequences of length $n$ drawn from $S$.
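To make \eqref{eq-non-bimc-1} concrete, consider a BSC with crossover probability $p$ (the value used below is an illustrative assumption, not a parameter from this paper): with uniform input, $H(X|Y)$ reduces to the binary entropy of $p$ in nats, and the capacity is $\ln 2$ minus that entropy. A minimal sketch:

```python
import math

def bsc_capacity_nats(p):
    # For a BSC with crossover probability p and uniform input X,
    # H(X|Y) is the binary entropy of p in nats, so by \eqref{eq-non-bimc-1}
    # the capacity in nats is C = ln 2 - H(X|Y).
    h = -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)  # H(X|Y) in nats
    return math.log(2.0) - h

# A completely noisy BSC (p = 1/2) has zero capacity,
# while p = 0.11 gives roughly half a bit per channel use.
print(bsc_capacity_nats(0.11), bsc_capacity_nats(0.5))
```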
\subsection{Definitions}
Before stating our converse channel coding theorem for the BIMSC, let us first introduce some
definitions from \cite{yang-meng:nep}. Define
\begin{equation} \label{eq2-1}
\lambda^* (X|Y) \mbox {$ \ \stackrel{\Delta}{=} $} \sup
\left\{\lambda \geq 0: \int p(y) \left [\sum_{x \in \mathcal{X}} p^{-\lambda +1} (x|y) \right ] d y < \infty \right \}
\end{equation}
where $\int d y $ is understood throughout this paper to be the
summation over $\mathcal{Y}$ if $\mathcal{Y}$ is discrete.
Suppose that
\begin{equation} \label{eq2-1+}
\lambda^* (X|Y) >0 \;.
\end{equation}
Define for any $\delta \geq 0$
\begin{equation} \label{eq2r}
r_{X|Y} (\delta) \mbox {$ \ \stackrel{\Delta}{=} $} \sup_{\lambda \geq 0} \left [ \lambda (H(X|Y) + \delta) -
\ln \sum_{x \in \mathcal{X}} \int p(y) p^{-\lambda +1} (x|y) d y \right ].
\end{equation}
For any $\lambda \in [0, \lambda^* (X|Y))$, let $X _{\lambda}$ and $Y _{\lambda}$ be random variables
under joint distribution $p(x,y) f_{\lambda} (x, y)$ where
\begin{equation} \label{eq2-12}
f_{\lambda} (x, y) \mbox {$ \ \stackrel{\Delta}{=} $} {p^{-\lambda} (x|y) \over \sum_{u \in \mathcal{X}} \int p(v) p^{-\lambda +1} (u|v) d v } .
\end{equation}
Further define
\begin{equation} \label{eq2d}
\delta(\lambda) \mbox {$ \ \stackrel{\Delta}{=} $} \mathbf{E} [-\ln p(X_{\lambda} | Y_{\lambda})]
- H(X|Y) \;
\end{equation}
\begin{equation}
\label{eq2e}
\Delta^* (X|Y) \mbox {$ \ \stackrel{\Delta}{=} $} \lim_{\lambda \uparrow \lambda^* (X|Y)}
\delta (\lambda)
\end{equation}
\begin{equation} \label{eq2-13}
\sigma^2_H (X|Y, \lambda) \mbox {$ \ \stackrel{\Delta}{=} $} \mathbf{Var} [-\ln p(X_{\lambda} | Y_{\lambda})]
= \mathbf{E} [\left|-\ln p(X_{\lambda} | Y_{\lambda}) - \mathbf{E} [-\ln p(X_{\lambda} | Y_{\lambda})] \right|^2]
\end{equation}
\begin{equation} \label{eq2-14}
M_H (X|Y, \lambda) \mbox {$ \ \stackrel{\Delta}{=} $} \mathbf{M_3} [-\ln p(X_{\lambda} | Y_{\lambda})]
= \mathbf{E} [\left|-\ln p(X_{\lambda} | Y_{\lambda}) - \mathbf{E} [-\ln p(X_{\lambda} | Y_{\lambda})] \right|^3]
\end{equation}
and
\begin{equation} \label{eq2-cm-1}
\hat{M}_H (X|Y, \lambda) \mbox {$ \ \stackrel{\Delta}{=} $} \mathbf{\hat{M}_3} [-\ln p(X_{\lambda} | Y_{\lambda})]
= \mathbf{E}\left [-\ln p(X_{\lambda} | Y_{\lambda}) - \mathbf{E} [-\ln p(X_{\lambda} | Y_{\lambda})] \right]^3
\end{equation}
where $\mathbf{E} [\cdot]$, $\mathbf{Var}[\cdot]$, $\mathbf{M_3} [\cdot]$, and $\mathbf{\hat{M}_3} [\cdot]$ are respectively the expectation, variance, third absolute
central moment, and third central moment operators on random variables. We
write $\hat{M}_H (X|Y,0) $ as $\hat{M}_H (X|Y)$, $M_H (X|Y,0) $ as $M_H (X|Y)$, and $\sigma^2_H (X|Y, 0)$ as $\sigma^2_H (X|Y)$. Clearly, $\sigma^2_H (X|Y)$, $M_H (X|Y)$, and $\hat{M}_H (X|Y)$ are the variance, third absolute central moment, and third central moment of $-\ln p(X|Y)$. In particular, $\sigma^2_H (X|Y)$ is referred to as the conditional information variance of $X$ given $Y$ in \cite{yang-meng:nep}. Assume that
\begin{equation}
\label{eq2-15}
\sigma^2_H (X|Y) >0 \mbox{ and } M_H (X|Y) = \mathbf{M_3} [-\ln p(X|Y)]< \infty.
\end{equation}
Then it follows from \cite{yang-meng:nep} that $r_{X|Y} (\delta)$ is strictly increasing, convex, and continuously differentiable up to at least the third order inclusive over $\delta \in [0, \Delta^* (X|Y))$, and furthermore has the following parametric expression
\begin{equation} \label{eq2p1}
r_{X|Y} (\delta(\lambda)) = \lambda (H(X|Y) + \delta (\lambda)) -
\ln \sum_{x \in \mathcal{X}} \int p(y) p^{-\lambda +1} (x|y) d y
\end{equation}
with $\delta (\lambda)$ defined in \eqref{eq2d} and $\lambda =
r'_{X|Y} (\delta)$.
In addition, let
\begin{eqnarray}
\lefteqn{\bar{\xi}_H (X|Y,\lambda,n) \mbox {$ \ \stackrel{\Delta}{=} $} \frac{2C_{BE} M_H(X|Y,\lambda)}{\sqrt{n} \sigma^3_H (X|Y,\lambda)} } \nonumber \\
&&{+}\: e^{\frac{n \lambda^2 \sigma^2_H (X|Y,\lambda)}{2}}
\left[ Q \left( \sqrt{n} \lambda \sigma_H(X|Y,\lambda) \right)
- Q \left( \rho^* + \sqrt{n} \lambda \sigma_H(X|Y,\lambda) \right)
\right] \\
\lefteqn{\underline{\xi}_H (X|Y,\lambda,n) \mbox {$ \ \stackrel{\Delta}{=} $} e^{\frac{n \lambda^2 \sigma^2_H (X|Y,\lambda)}{2}}
Q \left( \rho_* + \sqrt{n} \lambda \sigma_H(X|Y,\lambda) \right) }
\end{eqnarray}
with $Q(\rho^*) = \frac{C_{BE} M_H(X|Y,\lambda)}{\sqrt{n} \sigma^3_H (X|Y,\lambda)}$ and
$Q(\rho_*)=\frac{1}{2} - \frac{2 C_{BE} M_H(X|Y,\lambda)}{\sqrt{n} \sigma^3_H (X|Y,\lambda)}$.
The significance of the above quantities related to the channel can be
seen from Theorem 4 in \cite{yang-meng:nep}, summarized below:
{\em
\begin{description}
\item[(a)] There exists a $\delta^* >0$ such that for any $\delta \in (0, \delta^*]$,
\begin{equation} \label{eq2-4-}
r_{X|Y} (\delta) = {1 \over 2 \sigma^2_H (X|Y) } \delta^2 +
O(\delta^3) .
\end{equation}
\item[(b)] For any $\delta \in (0, \Delta^* (X|Y))$ and any positive integer $n$
\begin{eqnarray} \label{eq2-17}
\bar{\xi}_H (X|Y,\lambda,n) e^{- n r_{X|Y} (\delta) }
&\geq&
{ \Pr \left \{ - {1 \over n} \ln p(X^n |Y^n ) > H(X|Y) +\delta \right \}} \nonumber \\
&\geq& \underline{\xi}_H (X|Y,\lambda,n)
e^{- n r_{X|Y} (\delta) },
\end{eqnarray}
where $\lambda = r'_{X|Y} (\delta) >0$. Moreover, when $\delta = o(1)$ and $\delta = \Omega (1/\sqrt{n})$,
\begin{eqnarray}
\label{eq2-17-1}
\bar{\xi}_H (X|Y,\lambda,n) &=& e^{\frac{n \lambda^2 \sigma^2_H (X|Y,\lambda)}{2}}
Q \left( \sqrt{n} \lambda \sigma_H(X|Y,\lambda) \right)\left( 1 + o(1) \right) \\
\label{eq2-17-2}
\underline{\xi}_H (X|Y,\lambda,n) &=& e^{\frac{n \lambda^2 \sigma^2_H (X|Y,\lambda)}{2}}
Q \left( \sqrt{n} \lambda \sigma_H(X|Y,\lambda) \right)\left( 1 - o(1) \right)
\end{eqnarray}
and
\begin{equation}
\label{eq2-17-3}
e^{\frac{n \lambda^2 \sigma^2_H (X|Y,\lambda)}{2}}
Q \left( \sqrt{n} \lambda \sigma_H(X|Y,\lambda) \right) = \Theta \left( \frac{1}{\sqrt{n} \lambda} \right)
\end{equation}
with $\lambda = r'_{X|Y} (\delta) = \Theta (\delta)$.
\item[(c)] For any $ \delta \leq c \sqrt{\ln n \over n} $, where $c < \sigma_H (X|Y)$ is a constant,
\begin{eqnarray} \label{eq2-17+}
Q \left ( {\delta \sqrt{n} \over \sigma_H (X|Y)} \right ) - {C_{BE} M_H (X|Y) \over \sqrt{n} \sigma^3_H (X|Y)}
& \leq & \Pr \left \{ - {1 \over n} \ln p(X^n |Y^n ) > H(X|Y) + \delta \right \} \nonumber \\
& \leq & Q \left ( {\delta \sqrt{n} \over \sigma_H (X|Y)} \right
) + {C_{BE} M_H (X|Y) \over \sqrt{n} \sigma^3_H (X|Y)} .
\end{eqnarray}
\end{description}
}
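Part (c) can be checked numerically: for a BSC, $-\frac{1}{n}\ln p(X^n|Y^n)$ is an average of i.i.d. two-valued terms, so the tail probability is easy to simulate. The sketch below (the BSC parameter, block length, and trial count are illustrative assumptions) compares the empirical tail against the central term $Q(\delta\sqrt{n}/\sigma_H(X|Y))$:

```python
import math
import random

def Q(z):
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Illustrative BSC: -ln p(X|Y) equals a w.p. 1-p and b w.p. p.
p, n, trials = 0.11, 200, 20000
a, b = -math.log(1.0 - p), -math.log(p)
H = (1.0 - p) * a + p * b                          # H(X|Y)
sigma = math.sqrt(p * (1.0 - p)) * (b - a)         # sigma_H(X|Y)
delta = 0.5 * sigma * math.sqrt(math.log(n) / n)   # delta <= c*sqrt(ln n / n), c < sigma

random.seed(1)
hits = 0
for _ in range(trials):
    errs = sum(1 for _ in range(n) if random.random() < p)
    # For a BSC, -(1/n) ln p(X^n|Y^n) is determined by the error count alone.
    if (errs * b + (n - errs) * a) / n > H + delta:
        hits += 1
emp = hits / trials
center = Q(delta * math.sqrt(n) / sigma)
print(emp, center)
```

Part (c) asserts that the empirical tail lies within $\pm C_{BE} M_H(X|Y) / (\sqrt{n}\, \sigma^3_H(X|Y))$ of the central term, and the simulation is consistent with that.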
Define for any $x^n \in \mathcal{X}^n$,
\begin{equation} \label{eq-def-bxd}
B(x^n,\delta) \mbox {$ \ \stackrel{\Delta}{=} $} \left\{ y^n: \infty > - \frac{1}{n} \ln
{p(x^n|y^n)} > H(X|Y) + \delta \right\}
\end{equation}
and
\begin{equation} \label{eq-def-bd}
B_{n, \delta} \mbox {$ \ \stackrel{\Delta}{=} $} \cup_{x^n \in \mathcal{X}^n} B(x^n,\delta) .
\end{equation}
Since for any $y^n \in \mathcal{Y}^n$, the following set
\begin{equation} \label{eq-jar}
\left\{ x^n \in {\cal X}^n: - \frac{1}{n} \ln
{p(x^n|y^n)} \leq H(X|Y) + \delta \right\}
\end{equation}
is referred to as a BIMC jar for $y^n$ in
\cite{yang-meng:jardecoding}, \cite{yang-meng:isit2012_jardecoding}, we shall call $B(x^n,\delta) $ the {\em outer mirror image of jar} corresponding to $x^n$.
Moreover, define
for any set $B \subseteq \mathcal{Y}^n$,
\begin{equation} \label{eq-def-pb}
P(B) \mbox {$ \ \stackrel{\Delta}{=} $} \Pr \left\{ Y^n \in B \right\}
\end{equation}
\begin{equation} \label{eq-def-pxb}
P_{x^n}(B) \mbox {$ \ \stackrel{\Delta}{=} $} \Pr \left\{ Y^n \in B | X^n=x^n \right\}.
\end{equation}
It is easy to see that
\begin{eqnarray}
\label{eq-pb}
P_{x^n} (B(x^n,\delta)) &=& \Pr \left\{ \left. -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta
\right| X^n = x^n \right\} \nonumber \\
&=& \Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta
\right\}
\end{eqnarray}
where the last equality is due to \eqref{eq-non-bimc-3}.
\subsection{Converse Coding Theorem}
We are now ready to state our non-asymptotic converse coding theorem for BIMSCs.
\begin{theorem}
\label{thm-bimc}
Given a BIMSC, for any channel code $\mathcal{C}_n$ of block length $n$
with average word error probability $P_e (\mathcal{C}_n) = \epsilon_n$,
\begin{equation}
\label{eq-thm-bimc-0}
R (\mathcal{C}_n) \leq C_{\mathrm{BIMSC}} - \delta
-
\frac{\ln \epsilon_n - \ln P(B_{n,\delta}) + {\ln \frac{-2 \ln \epsilon_n }{\sigma^2_H(X|Y) n} }
- \ln \left( 1 + \frac{\sqrt{\frac{-2 \ln \epsilon_n}{n}} }{\sigma_H (X|Y)} \right)}{n}
\end{equation}
where $\delta$ is the largest number such that
\begin{equation}
\label{eq-thm-bimc-0+}
\left( 1 + \frac{2}{\sigma_H(X|Y)} \sqrt{\frac{- 2 \ln \epsilon_n }{n}} \right) \epsilon_n
\leq \Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta
\right\} .
\end{equation}
Moreover, the following hold:
\begin{enumerate}
\item
\begin{equation}
\label{eq-thm-bimc-1}
R (\mathcal{C}_n) \leq C_{\mathrm{BIMSC}} - \delta - \frac{\ln
\epsilon_n - \ln P(B_{n,\delta}) + {\ln \frac{-2 \ln \epsilon_n }{\sigma^2_H(X|Y) n} }
- \ln \left( 1 + \frac{\sqrt{\frac{-2 \ln \epsilon_n}{n}} }{\sigma_H (X|Y)} \right) }{n}
\end{equation}
where $\delta$ is the solution to
\begin{equation}
\label{eq-thm-bimc-2}
\left( 1 + \frac{2}{\sigma_H(X|Y)} \sqrt{\frac{- 2 \ln \epsilon_n }{n}} \right) \epsilon_n
= \underline{\xi}_H (X|Y,\lambda,n) e^{-n r_{X|Y} (\delta) }
\end{equation}
with $\delta(\lambda) = \delta$.
\item When $\epsilon_n = \frac{e^{-n^{\alpha}}}{2 \sqrt{\pi n^{\alpha}}}
\left( 1 - \frac{1}{2 n^{\alpha}} \right)$ for $\alpha \in (0,1)$,
\begin{equation}
\label{eq-thm-bimc-2+}
R(\mathcal{C}_n) \leq C_{\mathrm{BIMSC}} - \sqrt{2} \sigma_H (X|Y) n^{-\frac{1-\alpha}{2}} + O(n^{-(1-\alpha)}) .
\end{equation}
\item When $\epsilon_n = \frac{n^{-\alpha}}{2 \sqrt{\pi \alpha \ln n}}
\left( 1 - \frac{1}{2 \alpha \ln n} \right)$ for $\alpha > 0$,
\begin{eqnarray}
\label{eq-thm-bimc-3}
R(\mathcal{C}_n) &\leq& C_{\mathrm{BIMSC}} - \sigma_{H} (X|Y) \sqrt{\frac{2 \alpha \ln n}{n}}
+ O \left( {\frac{\ln n }{n }} \right) .
\end{eqnarray}
\item When $\epsilon_n = \epsilon$ satisfying $\epsilon + \frac{1}{\sqrt{n}} \left( \frac{2 \sqrt{- 2 \ln \epsilon}}{\sigma_H (X|Y)} \epsilon +
\frac{C_{BE} M_H (X|Y)}{\sigma^3_H (X|Y)} \right) < 1$,
\begin{eqnarray}
\label{eq-thm-bimc-6}
R(\mathcal{C}_n) &\leq& C_{\mathrm{BIMSC}}
- \frac{ \ln \epsilon + \ln \frac{-2 \ln \epsilon}{\sigma^2_H (X|Y) n}
- \ln \left( 1 + \frac{\sqrt{\frac{-2 \ln \epsilon}{n}}}{\sigma_H (X|Y)} \right)}{n}
\nonumber \\
&&{-}\: \frac{\sigma_{H} (X|Y)}{\sqrt{n}} Q^{-1}
\left( \epsilon + \frac{1}{\sqrt{n}} \left( \frac{2 \sqrt{- 2 \ln \epsilon}}{\sigma_H (X|Y)} \epsilon +
\frac{C_{BE} M_H (X|Y)}{\sigma^3_H (X|Y)} \right) \right) \nonumber \\
\\
\label{eq-thm-bimc-7}
&=& C_{\mathrm{BIMSC}} - \frac{\sigma_{H} (X|Y)}{\sqrt{n}} Q^{-1}
\left( \epsilon \right) + \frac{\ln n}{n} + O(n^{-1}) .
\end{eqnarray}
\end{enumerate}
\end{theorem}
\begin{IEEEproof}
Assume that the message $M$ is uniformly distributed in $\{1,2, \ldots, e^{nR(\mathcal{C}_n)}\}$,
$x^n(m)$ is the codeword corresponding to the
message $m$, and $\epsilon_{m,n}$ is the conditional error probability given message $m$. Then
\begin{equation} \label{eq-proof-bimc--2}
\epsilon_n = \mathbf{E} [\epsilon_{M,n}].
\end{equation}
Let
\begin{equation} \label{eq-proof-bimc--1}
\mathcal{M} \mbox {$ \ \stackrel{\Delta}{=} $} \left\{ m: \epsilon_{m,n} \leq \epsilon_n
(1+\beta_n) \right\} ,
\end{equation}
where $\beta_n > 0$ will be specified later.
By Markov's inequality,
\begin{equation}
\label{eq-proof-bimc-1}
\Pr \{ M \in {\cal M} \} \geq \frac{\beta_n}{1+\beta_n} \mbox{ and } |\mathcal{M}| \geq e^{nR(\mathcal{C}_n)
+ \ln \frac{\beta_n}{1+\beta_n} } .
\end{equation}
Denote the decision region for message $m \in {\cal M} $ as $D_m$. Then
\begin{eqnarray}
\label{eq-proof-bimc-2}
P_{x^n(m)}( B(x^n(m),\delta) \cap D_m ) &=&
P_{x^n(m)}( B(x^n(m),\delta) ) - P_{x^n(m)}( B(x^n(m),\delta) \cap D^c_m ) \nonumber \\
&\geq& P_{x^n(m)}( B(x^n(m), \delta) ) - \epsilon_{m,n} \nonumber \\
&\geq& P_{x^n(m)}( B(x^n(m), \delta) ) - \epsilon_n(1+\beta_n) \nonumber \\
&=& \Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta
\right\} - \epsilon_n(1+\beta_n) \nonumber \\
\end{eqnarray}
where the last equality is due to \eqref{eq-pb}.
At this point, we select $\delta$ such that
\begin{equation}
\label{eq-proof-bimc-4}
\Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta
\right\} \geq \epsilon_n (1 + 2 \beta_n).
\end{equation}
Substituting \eqref{eq-proof-bimc-4} into
\eqref{eq-proof-bimc-2}, we have
\begin{equation} \label{eq-proof-bimc-4-1}
P_{x^n(m)}( B(x^n(m),\delta) \cap D_m ) \geq \beta_n \epsilon_n.
\end{equation}
By the fact that $D_m$ are disjoint for different $m$ and
\begin{equation} \label{eq-proof-bimc-4-2}
\cup_{m \in \mathcal{M}} (B(x^n(m),\delta) \cap D_m) \subseteq B_{n,\delta},
\end{equation}
we have
\begin{eqnarray}
\label{eq-proof-bimc-5+}
P(B_{n,\delta}) &=& \int\limits_{B_{n,\delta}} p (y^n) dy^n \nonumber \\
&\geq& \sum_{m \in \mathcal{M} } \int\limits_{B(x^n(m),\delta) \cap D_m}
p (y^n) dy^n \nonumber \\
&= & \sum_{m \in \mathcal{M}} \int\limits_{B(x^n(m),\delta)
\cap D_m} \frac{p(y^n|x^n(m)) p(x^n(m))}{p(x^n(m)|y^n)} dy^n
\nonumber \\
&\stackrel{1)}{\geq}& \sum_{m \in \mathcal{M} } \int\limits_{B(x^n(m),\delta)
\cap D_m} p(y^n|x^n(m)) e^{n( -C_{\mathrm{BIMSC}} + \delta)} dy^n \nonumber \\
&=& \sum_{m \in \mathcal{M} } e^{n( -C_{\mathrm{BIMSC}} + \delta)} \int\limits_{B(x^n(m),\delta)
\cap D_m} p(y^n|x^n(m)) dy^n \nonumber \\
&=& \sum_{m \in \mathcal{M} } e^{n( -C_{\mathrm{BIMSC}} + \delta)} P_{x^n(m)}(
B(x^n(m),\delta) \cap D_m ) \nonumber \\
&\stackrel{2)}{\geq} & \sum_{m \in \mathcal{M} } e^{n( -C_{\mathrm{BIMSC}} + \delta)}
\beta_n \epsilon_n = |\mathcal{M}| e^{n( -C_{\mathrm{BIMSC}} + \delta)} \beta_n \epsilon_n
\end{eqnarray}
where the inequality 1) is due to the definition of $B(x^n, \delta)$ given in \eqref{eq-def-bxd}, and the inequality 2) follows from \eqref{eq-proof-bimc-4-1}. From \eqref{eq-proof-bimc-5+}, it follows that
\begin{equation}
\label{eq-proof-bimc-5}
|\mathcal{M}| \leq e^{n(C_{\mathrm{BIMSC}} - \delta) - \ln \beta_n -
\ln \epsilon_n + \ln P(B_{n,\delta})} .
\end{equation}
Then combining
\eqref{eq-proof-bimc-1} and \eqref{eq-proof-bimc-5} yields
\begin{equation}
\label{eq-proof-bimc-5++}
R (\mathcal{C}_n) \leq C_{\mathrm{BIMSC}} - \delta - \frac{\ln
\frac{\beta_n}{1+\beta_n} }{n} - \frac{\ln \beta_n}{n} - \frac{\ln
\epsilon_n - \ln P(B_{n,\delta})}{n}
\end{equation}
By letting $\beta_n=\frac{1}{\sigma_H (X|Y)} \sqrt{\frac{-2 \ln \epsilon_n}{n}} $, \eqref{eq-thm-bimc-0} and
\eqref{eq-thm-bimc-0+} follow directly from \eqref{eq-proof-bimc-4} and
\eqref{eq-proof-bimc-5++}.
\begin{enumerate}
\item
By \eqref{eq2-17} shown in \cite{yang-meng:nep},
selecting $\delta$ to be the solution to
\eqref{eq-thm-bimc-2} will make \eqref{eq-proof-bimc-4} satisfied, and
therefore \eqref{eq-thm-bimc-1} is proved.
\item Towards proving \eqref{eq-thm-bimc-2+}, we want to show that
by making $\delta = \sqrt{2} \sigma_H (X|Y) n^{-\frac{1-\alpha}{2}} - \eta n^{-(1-\alpha)}$ for some constant $\eta$,
\begin{equation}
\label{eq-proof-bimc-10+}
\Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta \right\} \geq
\left( 1 + \frac{2}{\sigma_H(X|Y)} \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right) \epsilon_n
\end{equation}
with $\epsilon_n = \frac{e^{-n^{\alpha}}}{2 \sqrt{\pi n^{\alpha}}} \left( 1 - \frac{1}{2 n^{\alpha}} \right)$.
Then the proof follows essentially the same approach as that of \eqref{eq-thm-bimc-3}, which is shown below in detail.
\item Apply the trivial bound $P(B_{n,\delta}) \leq 1$.
Then to show \eqref{eq-thm-bimc-3}, we only have to show that
$\delta = \sigma_H(X|Y) \sqrt{\frac{2 \alpha \ln n}{n}} - \frac{\eta \ln n}{n} $ for some constant $\eta$
can make
\begin{eqnarray}
\label{eq-proof-bimc-11++}
\lefteqn{\Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta \right\}} \nonumber \\
&\geq& \underline{\xi}_H (X|Y,\lambda,n) e^{- n r_{X|Y} (\delta)} \nonumber \\
&\geq& \left(1 + \eta_0 \sqrt{\frac{\ln n}{n}} \right)
\frac{n^{-\alpha}}{2 \sqrt{\pi \alpha \ln n}} \left( 1 - \frac{1}{2 \alpha \ln n} \right)
\nonumber \\
&\geq& \left( 1 + \frac{2}{\sigma_H(X|Y)} \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right) \epsilon_n
\end{eqnarray}
satisfied, where $\lambda = r'_{X|Y} (\delta)$ and
\begin{equation}
\frac{2}{\sigma_H(X|Y,\lambda)} \sqrt{\frac{-2 \ln \epsilon_n}{n}} = \Theta \left( \sqrt{\frac{\ln n}{n}} \right) \leq \eta_0 \sqrt{\frac{\ln n}{n}}
\end{equation}
for some constant $\eta_0$.
Towards this, recall \eqref{eq2-4-}, \eqref{eq2-17-2}, and \eqref{eq2-17-3}:
\begin{eqnarray}
\label{eq-proof-bimc-11}
e^{- n r_{X|Y} (\delta)}
&=& e^{- n r_{X|Y} \left( \sigma_H(X|Y) \sqrt{\frac{2 \alpha \ln n}{n}} - \frac{\eta \ln n}{n} \right)} \nonumber \\
&=& e^{- n \left[ \frac{1}{2 \sigma^2_H (X|Y)} \left( \sigma_H(X|Y) \sqrt{\frac{2 \alpha \ln n}{n}} - \frac{\eta \ln n}{n} \right)^2 + O \left( \sqrt{\frac{\ln^3 n}{n^3}} \right) \right]}
\nonumber \\
&=& e^{ - \alpha \ln n + \frac{\eta}{\sigma_H (X|Y)}\sqrt{\frac{2 \alpha \ln^3 n}{n}} - O \left( \sqrt{\frac{\ln^3 n}{n}} \right) } \nonumber \\
&\geq& e^{ - \alpha \ln n + \left( \frac{\sqrt{2 \alpha} \eta}{\sigma_H (X|Y)} - \eta_1 \right) \sqrt{\frac{\ln^3 n}{n}} }
\end{eqnarray}
for some constant $\eta_1$, and
\begin{eqnarray}
\label{eq-proof-bimc-11+}
\lefteqn{\underline{\xi}_H (X|Y,\lambda,n)} \nonumber \\
&=& e^{\frac{n \lambda^2 \sigma^2_{H} (X|Y,\lambda)}{2}} Q \left( \rho_* + \sqrt{n} \lambda \sigma_H (X|Y,\lambda) \right) \nonumber \\
&\geq& e^{\frac{n \lambda^2 \sigma^2_{H} (X|Y,\lambda)}{2}}
\frac{e^{-\frac{\left( \rho_* + \sqrt{n} \lambda \sigma_H (X|Y,\lambda) \right)^2}{2}}}{\sqrt{2 \pi} (\rho_* + \sqrt{n} \lambda \sigma_H (X|Y,\lambda) )}
\left[ 1 - \frac{1}{(\rho_* + \sqrt{n} \lambda \sigma_H (X|Y,\lambda) )^2} \right] \nonumber \\
&=& \frac{e^{-\frac{ \rho^2_* + 2 \rho_* \sqrt{n} \lambda \sigma_H (X|Y,\lambda) }{2}}}{\sqrt{2 \pi} (\rho_* + \sqrt{n} \lambda \sigma_H (X|Y,\lambda) )}
\left[ 1 - \frac{1}{(\rho_* + \sqrt{n} \lambda \sigma_H (X|Y,\lambda) )^2} \right]
\nonumber \\
&\geq& \frac{1}{2 \sqrt{\pi \alpha \ln n}} \left( 1 - \frac{1}{2 \alpha \ln n} \right) \left( 1 - \Theta \left( \sqrt{\frac{\ln n}{n}} \right) \right)
\nonumber \\
&\geq& \frac{1}{2 \sqrt{\pi \alpha \ln n}} \left( 1 - \frac{1}{2 \alpha \ln n} \right) \left( 1 - \eta_2 \sqrt{\frac{ \ln n}{n}} \right)
\end{eqnarray}
for another constant $\eta_2$,
where $\rho_* = Q^{-1} \left( \frac{1}{2} - \frac{2 C_{BE} M_H(X|Y,\lambda)}{\sqrt{n} \sigma^3_H (X|Y,\lambda)} \right)
= \Theta \left( \frac{1}{\sqrt{n}}\right)$, and we utilize the fact that
\begin{eqnarray}
\lambda &=& r'_{X|Y} (\delta) \nonumber \\
&=& \frac{\delta}{\sigma^2_H (X|Y)} + O (\delta^2) \\
\sigma_H (X|Y,\lambda) &=& \sigma_H (X|Y) \pm O(\lambda) .
\end{eqnarray}
Then \eqref{eq-proof-bimc-11++} is satisfied by choosing a constant $\eta$ such that
\begin{eqnarray} \label{eq-proof-bimc-11-1}
\lefteqn{ e^{ \left( \frac{\sqrt{2 \alpha} \eta}{\sigma_H (X|Y)} - \eta_1 \right) \sqrt{\frac{\ln^3 n}{n}}}
\left( 1 - \eta_2 \sqrt{\frac{ \ln n}{n}} \right)} \nonumber \\
&\geq& \left [ 1 + \left( \frac{\sqrt{2 \alpha} \eta}{\sigma_H (X|Y)} - \eta_1 \right) \sqrt{\frac{\ln^3 n}{n}} \right ] \left( 1 - \eta_2 \sqrt{\frac{ \ln n}{n}} \right)
\nonumber \\
&\geq& 1 + \eta_0 \sqrt{\frac{ \ln n}{n}}
\end{eqnarray}
for some constants $\eta_0$, $\eta_1$ and $\eta_2$.
\item According to
\eqref{eq-proof-bimc-4}, we should select $\delta$ such that
\begin{equation}
\label{eq-proof-bimc-8}
\Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta \right\} \geq
\left(1+\frac{2}{\sigma_H(X|Y)} \sqrt{\frac{-2 \ln \epsilon}{n}} \right) \epsilon.
\end{equation}
Then by \eqref{eq2-17+},
\begin{equation}
\label{eq-proof-bimc-9}
\delta = \frac{\sigma_{H} (X|Y)}{\sqrt{n}} Q^{-1}
\left( \epsilon + \frac{1}{\sqrt{n}} \left( \frac{2 \sqrt{- 2 \ln \epsilon}}{\sigma_H (X|Y)} \epsilon +
\frac{C_{BE} M_H (X|Y)}{\sigma^3_H (X|Y)} \right) \right)
\end{equation}
will guarantee \eqref{eq-proof-bimc-8}.
Consequently,
\eqref{eq-thm-bimc-6} is proved by substituting \eqref{eq-proof-bimc-9} and $\epsilon_n = \epsilon$ into \eqref{eq-proof-bimc-5++}
and applying the trivial bound $P(B_{n,\delta}) \leq 1$, and
\eqref{eq-thm-bimc-7} follows from the fact that
\begin{equation} \label{eq-proof-bimc-12}
Q^{-1}
\left( \epsilon + \frac{1}{\sqrt{n}} \left( \frac{2 \sqrt{- 2 \ln \epsilon}}{\sigma_H (X|Y)} \epsilon +
\frac{C_{BE} M_H (X|Y)}{\sigma^3_H (X|Y)} \right) \right)
= Q^{-1} (\epsilon) - O \left( \frac{1}{\sqrt{n}} \right).
\end{equation}
\end{enumerate}
\end{IEEEproof}
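To see the size of the leading terms in \eqref{eq-thm-bimc-7}, one can evaluate $C_{\mathrm{BIMSC}} - \frac{\sigma_H(X|Y)}{\sqrt{n}} Q^{-1}(\epsilon) + \frac{\ln n}{n}$ for a BSC. The channel and operating point $(n, \epsilon)$ below are illustrative assumptions, and $Q^{-1}$ is obtained by bisection since $Q$ is strictly decreasing:

```python
import math

def Q(z):
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def Qinv(eps):
    # Bisection for the inverse of the strictly decreasing Q on [-10, 10].
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative BSC and operating point.
p, n, eps = 0.11, 1000, 1e-3
L = math.log((1.0 - p) / p)
C = math.log(2.0) - (-p * math.log(p) - (1.0 - p) * math.log(1.0 - p))
sigma = math.sqrt(p * (1.0 - p)) * L
bound = C - sigma / math.sqrt(n) * Qinv(eps) + math.log(n) / n
print(C, bound)   # capacity vs. the leading terms of the converse, in nats
```

Even at $n = 1000$ the second order penalty $\frac{\sigma_H(X|Y)}{\sqrt{n}} Q^{-1}(\epsilon)$ is far from negligible relative to capacity, which is precisely why the non-asymptotic analysis matters.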
\begin{remark} \label{re1}
It is clear that the above converse proof technique depends heavily on the concept of the outer mirror image of jar corresponding to codewords. For ease of future reference, we shall loosely refer to this converse proof technique as the outer mirror image of jar.
\end{remark}
\begin{remark} \label{re2}
In general, the evaluation of $P(B_{n,\delta})$ may not be feasible,
in which case the trivial bound $P(B_{n,\delta}) \leq 1$ can be
applied without affecting the second order performance in the non-exponential
error probability regime, as shown above. However, there are cases where $P(B_{n,\delta})$
can be tightly bounded (e.g., the BEC, as shown in Section
\ref{sec:appr-eval}).
\end{remark}
\begin{remark} \label{re3}
For the bound \eqref{eq-thm-bimc-6}, when $\epsilon$ is small with respect to
$\frac{1}{\sqrt{n}}$, $\frac{C_{BE} M_H(X|Y)}{\sqrt{n} \sigma^3_H(X|Y)}$ (the
estimation error that comes from Berry-Esseen central limit theorem)
will be dominant; in this case, \eqref{eq-thm-bimc-6} is loose.
\end{remark}
\begin{remark} \label{re3+}
The choice $\beta_n = \frac{1}{\sigma_H (X|Y)} \sqrt{\frac{-2 \ln \epsilon_n}{n}}$
in the proof of Theorem \ref{thm-bimc} is not arbitrary.
Actually, it is optimal when $\delta$ is small in the sense of minimizing the upper bound \eqref{eq-proof-bimc-5++}
in which $\delta$ depends on $\beta_n$ through \eqref{eq-proof-bimc-4}. To derive the expression for $\beta_n$, the following
approximations can be adopted when $\delta$ is small:
\begin{eqnarray}
\label{eq-re3+-1}
\frac{d \delta}{d \beta_n} &\approx& - \frac{2 \beta_n \sigma^2_H (X|Y)}{n \delta} \\
\label{eq-re3+-2}
\delta^2 &\approx& \frac{-2\sigma^2_H (X|Y) \ln \epsilon_n}{n} \\
\ln \frac{\beta_n}{1 + \beta_n} &\approx& \ln \beta_n
\end{eqnarray}
where \eqref{eq-re3+-1} and \eqref{eq-re3+-2} can be developed from
\eqref{eq2-4-} and \eqref{eq2-17}.
\end{remark}
By reviewing the proof of Theorem \ref{thm-bimc}, one readily obtains the
following corollary.
\begin{corollary}
\label{col-bimc}
Given a BIMSC, for any channel code $\mathcal{C}_n$ of block length $n$ with maximum error probability
$P_m (\mathcal{C}_n) = \epsilon_n$,
\begin{equation}
\label{eq-thm-maxbimc-3}
R (\mathcal{C}_n) \leq C_{\mathrm{BIMSC}} - \delta - \frac{\ln
\epsilon_n + \ln \left( \frac{1}{\sigma_H(X|Y)} \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right) - \ln P(B_{n,\delta})}{n}
\end{equation}
where $\delta$ is the largest number such that
\begin{equation}
\label{eq-thm-maxbimc-4}
\left( 1 + \frac{1}{\sigma_H(X|Y)} \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right) \epsilon_n
\leq \Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta \right\}.
\end{equation}
Moreover, the following hold:
\begin{enumerate}
\item
\begin{equation}
\label{eq-thm-maxbimc-1}
R (\mathcal{C}_n) \leq C_{\mathrm{BIMSC}} - \delta - \frac{\ln
\epsilon_n + \ln \left( \frac{1}{\sigma_H(X|Y)} \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right) - \ln P(B_{n,\delta})}{n}
\end{equation}
where $\delta$ is the solution to
\begin{equation}
\label{eq-thm-maxbimc-2}
\left( 1+ \frac{1}{\sigma_H(X|Y)} \sqrt{\frac{- 2 \ln \epsilon_n}{n}} \right) \epsilon_n = \underline{\xi}_H (X|Y,\lambda,n) e^{-n
r_{X|Y} (\delta) }
\end{equation}
with $\delta (\lambda) = \delta$.
\item
When $\epsilon_n=\epsilon$ satisfying $\epsilon + \frac{1}{\sqrt{n}} \left( \frac{\sqrt{-2 \ln \epsilon}}{\sigma_H(X|Y)} \epsilon +
\frac{C_{BE} M_H (X|Y)}{\sigma^3_H (X|Y)} \right) <1$,
\begin{eqnarray}
\label{eq-thm-maxbimc-6}
R(\mathcal{C}_n) &\leq& C_{\mathrm{BIMSC}} - \frac{\ln \epsilon + \ln \left( \frac{1}{\sigma_H(X|Y)} \sqrt{\frac{-2 \ln \epsilon}{n}} \right) }{n}
\nonumber \\
&&{-}\: \frac{\sigma_{H} (X|Y)}{\sqrt{n}} Q^{-1}
\left( \epsilon + \frac{1}{\sqrt{n}} \left( \frac{\sqrt{-2 \ln \epsilon}}{\sigma_H(X|Y)} \epsilon +
\frac{C_{BE} M_H (X|Y)}{\sigma^3_H (X|Y)} \right) \right) \\
&=& C_{\mathrm{BIMSC}} - \frac{\sigma_{H} (X|Y)}{\sqrt{n}} Q^{-1}
\left( \epsilon \right) + \frac{\ln n}{2 n} + O(n^{-1})
\end{eqnarray}
\end{enumerate}
\end{corollary}
Remarks \ref{re2}, \ref{re3} and \ref{re3+} also apply to Corollary \ref{col-bimc}.
\subsection{Taylor-type Expansion}
Fix a BIMSC. For any block length $n$ and average error probability $\epsilon$, let $R_n (\epsilon)$ be the best coding rate achievable with block length $n$ and average error probability $\leq \epsilon$, i.e.,
\begin{equation} \label{eq-bt-1}
R_n (\epsilon) \mbox {$ \ \stackrel{\Delta}{=} $} \max \{ R({\cal C}_n): {\cal C}_n \mbox{ is a channel code of block length $n$ with } P_e ({\cal C}_n) \leq \epsilon \} .
\end{equation}
In this subsection, we combine the non-asymptotic achievability given in \eqref{eq1-1} and \eqref{eq1-2} with the non-asymptotic converses given in \eqref{eq-thm-bimc-0} to
\eqref{eq-thm-bimc-2} to derive a Taylor-type expansion of $R_n (\epsilon)$ in the non-asymptotic regime where both $n$ and $\epsilon$ are finite. As mentioned earlier, when both $n$ and $\epsilon$ are finite, what really matters is the relative magnitude of $\epsilon$ and $n$. As such, we begin by introducing a quantity $\delta_n (\epsilon)$ to measure the relative magnitude of $\epsilon$ and $n$ with respect to the given BIMSC.
A close look at the non-asymptotic achievability given in \eqref{eq1-1} and \eqref{eq1-2} and the non-asymptotic converses given in \eqref{eq-thm-bimc-0} to
\eqref{eq-thm-bimc-2} reveals that
\begin{displaymath}
\Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta \right\}
\end{displaymath}
is crucial in both cases. According to \eqref{eq2-17-1} and \eqref{eq2-17-2},
\begin{eqnarray}
\label{eq-so-1}
\Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta \right\} &\approx&
e^{\frac{n \lambda^2 \sigma^2_H(X|Y,\lambda)}{2}} Q \left( \sqrt{n} \lambda \sigma_H(X|Y,\lambda) \right)
e^{- n r_{X|Y} (\delta)} \nonumber \\
&\mbox {$ \ \stackrel{\Delta}{=} $}& g_{X|Y,n} (\delta)
\end{eqnarray}
where $\lambda = r'_{X|Y} (\delta)$. Consequently, we would like to define $\delta_n(\epsilon)$ as
the solution to
\begin{equation}
\label{eq-so-2}
g_{X|Y,n} (\delta)= \epsilon
\end{equation}
given $n$ and $\epsilon \leq 1/2$, where the uniqueness of the solution in a certain range is shown in Lemma~\ref{le1}.
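As a concrete illustration, consider the quadratic approximation $r_{X|Y}(\delta) \approx \delta^2 / (2 \sigma^2)$ with $\sigma_H(X|Y,\lambda)$ frozen at a constant $\sigma$ (an assumption made purely for this sketch; for a real BIMSC both quantities must be computed from the channel). Under this approximation the two exponents in \eqref{eq-so-1} cancel, $g_{X|Y,n}(\delta)$ reduces to $Q(\sqrt{n}\,\delta/\sigma)$, and \eqref{eq-so-2} can be solved by simple bisection:

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def g(delta, n, sigma):
    # g_{X|Y,n}(delta) of (eq-so-1) under the quadratic approximation
    # r(delta) = delta^2 / (2 sigma^2), lambda = r'(delta) = delta / sigma^2,
    # with sigma_H(X|Y, lambda) frozen at the constant sigma (an assumption
    # made only for this sketch). The two exponents are combined into one
    # call of exp() to avoid floating-point overflow for large delta.
    lam = delta / sigma ** 2
    r = delta ** 2 / (2 * sigma ** 2)
    return (math.exp(n * lam ** 2 * sigma ** 2 / 2 - n * r)
            * Q(math.sqrt(n) * lam * sigma))

def delta_n(eps, n, sigma):
    # solve g(delta) = eps by bisection; g is strictly decreasing in delta
    # on the relevant range (Lemma 1), and g(0) = Q(0) = 1/2
    lo, hi = 0.0, 10.0 * sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid, n, sigma) > eps else (lo, mid)
    return 0.5 * (lo + hi)

n, sigma, eps = 1000, 0.8, 1e-3   # arbitrary illustrative values
d = delta_n(eps, n, sigma)
# under this approximation g(delta) = Q(sqrt(n) * delta / sigma),
# so delta_n(eps) coincides with sigma * Q^{-1}(eps) / sqrt(n)
print(d)
```

The same bisection applies to an actual BIMSC once $r_{X|Y}(\delta)$, its derivative, and $\sigma_H(X|Y,\lambda)$ are evaluated for the channel at hand.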
\begin{lemma} \label{le1}
There exists $\delta^+ > 0$ such that for any $n >0$,
$g_{X|Y,n} (\delta)$ is a strictly decreasing function of $\delta$ over $\delta \in [0,\delta^+]$.
\end{lemma}
\begin{IEEEproof}
Since $\lambda = r'_{X|Y}(\delta)$, it follows from \eqref{eq2d} and \eqref{eq2p1} that $g_{X|Y,n} (\delta) = g_{X|Y,n} (\delta (\lambda))$
is a function of $\lambda$ through $\delta = \delta ( \lambda )$. (For details about the properties of $\delta ( \lambda )$ and $r_{X|Y}(\delta)$, please see \cite{yang-meng:nep}.) Moreover, by the fact that $\delta (0) = 0$ and
$\delta (\lambda)$ is a strictly increasing function of $\lambda$,
the lemma can be proved by analyzing the derivative of $g_{X|Y,n} (\delta (\lambda))$ with respect to $\lambda$
around $\lambda = 0$. Towards this end,
\begin{eqnarray}
\label{eq-so-4}
\lefteqn{\frac{d g_{X|Y,n} (\delta (\lambda))}{d \lambda}} \nonumber \\
&=& \frac{d}{d \lambda}
\left( e^{\frac{n \lambda^2 \sigma^2_H(X|Y,\lambda)}{2}} Q \left( \sqrt{n} \lambda \sigma_H(X|Y,\lambda) \right) \right)
e^{- n r_{X|Y} (\delta(\lambda))} \nonumber \\
&&{-}\: e^{\frac{n \lambda^2 \sigma^2_H(X|Y,\lambda)}{2}} Q \left( \sqrt{n} \lambda \sigma_H(X|Y,\lambda) \right)
e^{- n r_{X|Y} (\delta(\lambda))} \frac{d}{d \lambda} \left( n r_{X|Y} (\delta (\lambda)) \right) \nonumber \\
&=& e^{- n r_{X|Y} (\delta(\lambda))}
\left\{
\left[ x e^{\frac{x^2}{2}}
Q(x) - \frac{1}{\sqrt{2 \pi}} \right] \frac{d x}{d \lambda}
-e^{\frac{x^2}{2}} Q(x) n \left. \frac{d r_{X|Y} (\delta)}{d \delta} \right|_{\delta = \delta (\lambda)}
\frac{d \delta (\lambda)}{d \lambda}
\right\}
\end{eqnarray}
where $x = \sqrt{n} \lambda \sigma_H(X|Y,\lambda)$.
On one hand,
\begin{eqnarray}
\label{eq-so-5}
\frac{d x}{d \lambda} &=&
\sqrt{n} \left(\sigma_H(X|Y,\lambda) + \lambda \frac{d \sigma_H(X|Y,\lambda)}{d \lambda} \right) \nonumber \\
&=& \sqrt{n} \left(\sigma_H(X|Y,\lambda) + \frac{\lambda}{2 \sigma_H(X|Y,\lambda)} \frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} \right) .
\end{eqnarray}
On the other hand,
\begin{eqnarray}
\label{eq-so-6}
\left. \frac{d r_{X|Y} (\delta)}{d \delta} \right|_{\delta = \delta (\lambda)} &=& \lambda \\
\label{eq-so-7}
\frac{d \delta (\lambda)}{d \lambda} &=& \sigma^2_H (X|Y,\lambda)
\end{eqnarray}
which further implies
\begin{eqnarray}
\label{eq-so-8}
e^{\frac{x^2}{2}} Q(x) n \left. \frac{d r_{X|Y} (\delta)}{d \delta} \right|_{\delta = \delta (\lambda)}
\frac{d \delta (\lambda)}{d \lambda} &=& e^{\frac{x^2}{2}} Q(x) n \lambda \sigma^2_H (X|Y,\lambda)
\nonumber \\
&=& \sqrt{n} \sigma_H(X|Y,\lambda) x e^{\frac{x^2}{2}} Q(x).
\end{eqnarray}
Substituting \eqref{eq-so-5} and \eqref{eq-so-8} into \eqref{eq-so-4}, we have
\begin{eqnarray}
\label{eq-so-9}
\lefteqn{\frac{d g_{X|Y,n} (\delta (\lambda))}{d \lambda}} \nonumber \\
&=& e^{- n r_{X|Y} (\delta(\lambda))}
\left\{
\left[ x e^{\frac{x^2}{2}} Q(x)
- \frac{1}{\sqrt{2 \pi}} \right]
\left( \frac{ \sqrt{n} \lambda \frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} }{2 \sigma_H(X|Y,\lambda)} \right)
- \frac{\sqrt{n} \sigma_H(X|Y,\lambda)}{\sqrt{2 \pi}}
\right\} \nonumber \\
&=& e^{- n r_{X|Y} (\delta(\lambda))}
\frac{\sqrt{n} \sigma_H(X|Y,\lambda)}{\sqrt{2 \pi}}
\left\{
\left[ \sqrt{2 \pi} x e^{\frac{x^2}{2}} Q(x)
- 1 \right]
\left( \frac{ \lambda \frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} }{2 \sigma^2_H(X|Y,\lambda)} \right)
- 1
\right\}.
\end{eqnarray}
Note that
\begin{eqnarray}
\label{eq-so-10}
\sqrt{2 \pi} x e^{\frac{x^2}{2}} Q(x) &<& \sqrt{2 \pi} x e^{\frac{x^2}{2}} \frac{1}{\sqrt{2 \pi} x} e^{-\frac{x^2}{2}} \nonumber \\
&=& 1.
\end{eqnarray}
If $\frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} \geq 0$, then
\begin{equation}
\label{eq-so-11}
\left[ \sqrt{2 \pi} x e^{\frac{x^2}{2}} Q(x)
- 1 \right]
\left( \frac{ \lambda \frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} }{2 \sigma^2_H(X|Y,\lambda)} \right)
\leq 0,
\end{equation}
which further implies that $\frac{d g_{X|Y,n} (\delta (\lambda))}{d \lambda} < 0$. In the meantime, if
$\frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} < 0$,
\begin{eqnarray}
\label{eq-so-12}
\lefteqn{\left[ \sqrt{2 \pi} x e^{\frac{x^2}{2}} Q(x) - 1 \right]
\left( \frac{ \lambda \frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} }{2 \sigma^2_H(X|Y,\lambda)} \right)
- 1} \nonumber \\
&<& \left[ \sqrt{2 \pi} x e^{\frac{x^2}{2}} \frac{x}{\sqrt{2 \pi} (1+x^2) } e^{-\frac{x^2}{2}} - 1 \right]
\left( \frac{ \lambda \frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} }{2 \sigma^2_H(X|Y,\lambda)} \right)
- 1 \nonumber \\
&=& - \frac{1}{1+x^2}
\left( \frac{ \lambda \frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} }{2 \sigma^2_H(X|Y,\lambda)} \right)
- 1 \nonumber \\
&=& - \frac{ \lambda \frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda} }
{2 \sigma^2_H(X|Y,\lambda) \left( 1+ n \lambda^2 \sigma^2_H(X|Y,\lambda) \right)} - 1.
\end{eqnarray}
To continue, let us evaluate $\frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda}$. From \eqref{eq2-12}, \eqref{eq2d}, and \eqref{eq2-13}, it is not hard to verify that \begin{eqnarray}
\label{eq-so-13}
\frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda}
=\sum_{x \in \mathcal{X}} \int p(x,y) \frac{\partial f_{\lambda} (x,y)}{\partial \lambda} \ln^2 p(x|y) dy
- 2 \sigma^2_H (X|Y,\lambda) \left( H(X|Y)+\delta(\lambda) \right)
\end{eqnarray}
where
\begin{equation}
\label{eq-so-14}
\frac{\partial f_{\lambda} (x,y)}{\partial \lambda} = [- \ln p(x|y) - (H(X|Y) + \delta(\lambda)) ] f_{\lambda} (x,y) .
\end{equation}
Plugging \eqref{eq-so-14} into \eqref{eq-so-13} yields
\begin{eqnarray}
\label{eq-so-3-}
\lefteqn{ \frac{d \sigma^2_H(X|Y,\lambda)}{d \lambda}} \nonumber \\
& = & \mathbf{E}
\left( - \ln^3 p(X_{\lambda} | Y_{\lambda}) \right) - 3 \sigma^2_H(X|Y,\lambda) (H(X|Y) + \delta)
-(H(X|Y) + \delta)^3 \nonumber \\
& = & \hat{M}_H (X|Y, \lambda) .
\end{eqnarray}
Combining \eqref{eq-so-9}, \eqref{eq-so-11}, \eqref{eq-so-12}, and \eqref{eq-so-3-} together, we have
\begin{eqnarray}
\lefteqn{\frac{d g_{X|Y,n} (\delta (\lambda))}{d \lambda}} \nonumber \\
&\leq & e^{- n r_{X|Y} (\delta(\lambda))}
\frac{\sqrt{n} \sigma_H(X|Y,\lambda)}{\sqrt{2 \pi}}
\left (
\left| - \frac{ \lambda \hat{M}_H (X|Y, \lambda) }
{2 \sigma^2_H(X|Y,\lambda) \left( 1+ n \lambda^2 \sigma^2_H(X|Y,\lambda) \right)} \right| -1 \right ) \label{eq-so-14+} \\
& \leq & e^{- n r_{X|Y} (\delta(\lambda))}
\frac{\sqrt{n} \sigma_H(X|Y,\lambda)}{\sqrt{2 \pi}}
\left (
\left| - \frac{ \lambda \hat{M}_H (X|Y, \lambda) }
{2 \sigma^2_H(X|Y,\lambda) } \right| -1 \right) . \label{eq-so-15+}
\end{eqnarray}
In view of the continuity of $ \sigma^2_H(X|Y,\lambda)$ and $\hat{M}_H (X|Y, \lambda) $ as functions of $\lambda$, it is easy to see that there is a $\lambda^+ >0$ such that for any $\lambda \in [0, \lambda^+]$,
\[ \left| - \frac{ \lambda \hat{M}_H (X|Y, \lambda) }
{2 \sigma^2_H(X|Y,\lambda) } \right| -1 <0 \]
and hence
\[ \frac{d g_{X|Y,n} (\delta (\lambda))}{d \lambda} < 0 \]
for any $n > 0$. This completes the proof of Lemma~\ref{le1} with $\delta^+ = \delta (\lambda^+) $.
\end{IEEEproof}
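The proof above relies on the standard Gaussian tail (Mills-ratio) bounds $\frac{x}{\sqrt{2 \pi}(1+x^2)} e^{-x^2/2} \leq Q(x) \leq \frac{1}{\sqrt{2 \pi}\, x} e^{-x^2/2}$ for $x > 0$, used in \eqref{eq-so-10} and, later, in \eqref{eq-so-23-}. The following quick numerical sanity check of these bounds uses only the standard library ($Q(x) = \frac{1}{2}\mathrm{erfc}(x/\sqrt{2})$):

```python
import math

def Q(x):
    # Gaussian tail Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_lower(x):
    # lower bound (eq-so-23-): Q(x) >= x e^{-x^2/2} / (sqrt(2 pi)(1 + x^2))
    return x / (math.sqrt(2 * math.pi) * (1 + x ** 2)) * math.exp(-x ** 2 / 2)

def mills_upper(x):
    # upper bound (eq-so-10): Q(x) < e^{-x^2/2} / (sqrt(2 pi) x)
    return math.exp(-x ** 2 / 2) / (math.sqrt(2 * math.pi) * x)

for x in [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]:
    assert mills_lower(x) <= Q(x) <= mills_upper(x)
print("bounds verified")
```

Both bounds become tight as $x \to \infty$ (their ratio is $x^2/(1+x^2)$), which is exactly the regime $x = \sqrt{n} \lambda \sigma_H(X|Y,\lambda)$ exploited throughout this section.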
\begin{remark}
From \eqref{eq-so-14+}, it is clear that when $n$ is large,
\[ \left| - \frac{ \lambda \hat{M}_H (X|Y, \lambda) }
{2 \sigma^2_H(X|Y,\lambda) \left( 1+ n \lambda^2 \sigma^2_H(X|Y,\lambda) \right)} \right| -1 <0 \]
and hence
\[ \frac{d g_{X|Y,n} (\delta (\lambda))}{d \lambda} < 0 \]
even for $\lambda \geq \lambda^+$. Nonetheless,
as can be seen later, we are concerned only with the case where
$\delta_n (\epsilon)$ is around $0$. Consequently,
the exact value of $\delta^+$ is not important to us.
\end{remark}
\begin{remark}
In view of Lemma~\ref{le1} and the definition of $\delta_n (\epsilon)$ in \eqref{eq-so-1} and
\eqref{eq-so-2}, it follows that $\delta_n ({1\over 2}) =0$ for any $n$ and any BIMSC. However, when $\epsilon < 1/2$, $\delta_n (\epsilon)$ depends not only on $n$ and $\epsilon$, but also on the BIMSC itself through the function $r_{X|Y} (\delta)$. Given $n$ and $\epsilon < 1/2$, the value of $\delta_n (\epsilon)$ can vary considerably from one BIMSC to another through the behavior of $r_{X|Y} (\delta)$ around $\delta =0$, which depends on both the second and third order derivatives of $r_{X|Y} (\delta)$. Given a BIMSC, if $r_{X|Y} (\delta)$ is approximated as in \eqref{eq2-4-}, then $\delta_n (\epsilon) $ is on the order of $\sqrt{-\ln \epsilon \over n}$. Of course, such an approximation is accurate only when $\delta$ or $\sqrt{-\ln \epsilon \over n}$ is sufficiently small.
\end{remark}
With respect to $\delta_n (\epsilon)$, $R_n (\epsilon)$ has a nice Taylor-type expansion, as shown in Theorem \ref{thm-BIMSC-second-order}.
\begin{theorem}
\label{thm-BIMSC-second-order}
Given a BIMSC, for any $n$ and $\epsilon$ satisfying $g_{X|Y,n} (\delta^+ /2) \leq \epsilon <1$,
\begin{equation}
\label{eq-so-17}
\left| R_n (\epsilon) - \left( C_{\mathrm{BIMSC}} - \delta_n (\epsilon) \right) \right|
\leq o \left( \delta_n (\epsilon) \right)
\end{equation}
where
\begin{eqnarray}
\label{eq-so-18}
o \left( \delta_n (\epsilon) \right)
&=& r_{X|Y} (\delta_n (\epsilon)) + \frac{ \ln n + d_1}{n}
\end{eqnarray}
if $\epsilon \leq \frac{1}{3} $, and
\begin{equation}
\label{eq-so-16}
\left| R_n (\epsilon) - \left( C_{\mathrm{BIMSC}} - \frac{\sigma_H (X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) \right) \right|
\leq \frac{\ln n + d_2}{n}
\end{equation}
otherwise, where $d_1$ and $d_2$ are channel parameters independent of both $n$ and $\epsilon$.
\end{theorem}
\begin{IEEEproof}
When $\epsilon > \frac{1}{3}$, \eqref{eq-so-16} can be easily proved by combining \eqref{eq1-5}, \eqref{eq1-6} and
\eqref{eq-thm-bimc-6}. Therefore, it suffices for us to show \eqref{eq-so-17} and \eqref{eq-so-18} for $\epsilon \leq \frac{1}{3}$.
By \eqref{eq1-1} and the definition of $\bar{\xi}_H(X|Y,\lambda,n)$, for any BIMSC
there exists a channel code $\mathcal{C}_n$ such that
\begin{eqnarray}
\label{eq-so-19}
P_e (\mathcal{C}_n) &\leq&
\left( \bar{\xi}_H (X|Y,\lambda,n) + \frac{2 (1-C_{BE}) M_H(X|Y,\lambda)}{\sqrt{n} \sigma^3_H (X|Y,\lambda)} \right)
e^{- n r_{X|Y} (\delta)} \nonumber \\
&\leq& g_{X|Y,n} (\delta) + \frac{2M_H(X|Y,\lambda)}{\sqrt{n} \sigma^3_H (X|Y,\lambda)}
e^{- n r_{X|Y} (\delta)}
\end{eqnarray}
and
\begin{equation}
\label{eq-so-20}
R(\mathcal{C}_n) \geq C_{\mathrm{BIMSC}} - \delta +
\frac{\ln \left[ \frac{2 (1-C_{BE}) M_H(X|Y,\lambda)}{\sqrt{n} \sigma^3_H (X|Y,\lambda)} e^{- n r_{X|Y} (\delta)} \right] }{n}
\end{equation}
which implies that for any $\delta$ such that
\begin{equation}
\label{eq-so-21}
g_{X|Y,n} (\delta) + \frac{2M_H(X|Y,\lambda)}{\sqrt{n} \sigma^3_H (X|Y,\lambda)}
e^{- n r_{X|Y} (\delta)} \leq \epsilon
\end{equation}
the following inequality holds
\begin{equation}
\label{eq-so-21+}
R_n(\epsilon) \geq C_{\mathrm{BIMSC}} - \delta +
\frac{\ln \left[ \frac{2 (1-C_{BE}) M_H(X|Y,\lambda)}{\sqrt{n} \sigma^3_H (X|Y,\lambda)} e^{- n r_{X|Y} (\delta)} \right] }{n}
\end{equation}
where $\lambda = r'_{X|Y} (\delta)$. Now let $\bar{\delta} = \delta_n (\epsilon) + \frac{\eta}{n}$ for some constant
$\eta >0$, which will be specified later, and $\bar{\lambda} = r'_{X|Y} (\bar{\delta})$. By convexity of $r_{X|Y} (\delta)$,
\begin{equation}
\label{eq-so-22}
r_{X|Y} (\bar{\delta}) \geq r_{X|Y} (\delta_n (\epsilon)) + \lambda_n (\epsilon) \frac{\eta}{n}
\end{equation}
where $\lambda_n (\epsilon) = r'_{X|Y} (\delta_n (\epsilon))$. Then
\begin{eqnarray}
\label{eq-so-23}
\lefteqn{g_{X|Y,n} (\bar{\delta}) + \frac{2M_H(X|Y,\bar{\lambda})}{\sqrt{n} \sigma^3_H (X|Y,\bar{\lambda})}
e^{- n r_{X|Y} (\bar{\delta})}} \nonumber \\
&\stackrel{1)}{\leq} &
\left( e^{\frac{n \bar{\lambda}^2 \sigma^2_H(X|Y,\bar{\lambda})}{2}}
Q \left( \sqrt{n} \bar{\lambda} \sigma_H (X|Y,\bar{\lambda}) \right)
+ \frac{2M_H(X|Y,\bar{\lambda})}{\sqrt{n} \sigma^3_H (X|Y,\bar{\lambda})}
\right) e^{- n \left( r_{X|Y} (\delta_n (\epsilon)) + \lambda_n (\epsilon) \frac{\eta}{n} \right)} \nonumber \\
&=&
\left( 1
+ \frac{\frac{2M_H(X|Y,\bar{\lambda})}{\sqrt{n} \sigma^3_H (X|Y,\bar{\lambda}) }}
{e^{\frac{n \bar{\lambda}^2 \sigma^2_H(X|Y,\bar{\lambda})}{2}}
Q \left( \sqrt{n} \bar{\lambda} \sigma_H (X|Y,\bar{\lambda}) \right) } \right)
e^{\frac{n \bar{\lambda}^2 \sigma^2_H(X|Y,\bar{\lambda})}{2}}
Q \left( \sqrt{n} \bar{\lambda} \sigma_H (X|Y,\bar{\lambda}) \right)
\nonumber \\
&&{\times}\: e^{- n r_{X|Y} (\delta_n (\epsilon)) - \eta \lambda_n (\epsilon) }
\nonumber \\
&\stackrel{2)}{\leq}& \left( 1
+ \frac{2M_H(X|Y,\bar{\lambda}) \sqrt{2 \pi} \bar{\lambda}
\left(1 + \frac{1}{n \bar{\lambda}^2 \sigma^2_H (X|Y,\bar{\lambda})} \right) }
{\sigma^2_H (X|Y,\bar{\lambda})} \right)
\nonumber \\
&&{\times}\: e^{\frac{n \lambda^2_n (\epsilon) \sigma^2_H(X|Y,\lambda_n (\epsilon))}{2}}
Q \left( \sqrt{n} \lambda_n (\epsilon) \sigma_H (X|Y,\lambda_n (\epsilon)) \right)
e^{- n r_{X|Y} (\delta_n (\epsilon)) - \eta \lambda_n (\epsilon) } \nonumber \\
&=& g_{X|Y,n} \left( \delta_n (\epsilon) \right)
e^{- \eta \lambda_n (\epsilon) } \left( 1 +
\frac{2 \sqrt{2 \pi} M_H(X|Y,\bar{\lambda}) \left(1 + \frac{1}{n \bar{\lambda}^2 \sigma^2_H (X|Y,\bar{\lambda})} \right)}{ \sigma^2_H (X|Y,\bar{\lambda})} \bar{\lambda}
\right) \nonumber \\
&\stackrel{3)}{=}& \epsilon e^{- \eta \lambda_n (\epsilon) } \left( 1 +
\frac{2 \sqrt{2 \pi} M_H(X|Y,\bar{\lambda}) \left(1 + \frac{1}{n \bar{\lambda}^2 \sigma^2_H (X|Y,\bar{\lambda})} \right)}{ \sigma^2_H (X|Y,\bar{\lambda})} \left( \lambda_n (\epsilon) + \frac{1}{\sigma^2_H (X|Y,\tilde{\lambda})} \frac{\eta}{n} \right)
\right) \nonumber \\
&\stackrel{4)}{\leq}& \epsilon
\frac{1 +
\frac{2 \sqrt{2 \pi} M_H(X|Y,\bar{\lambda}) \left(1 + \frac{1}{n \bar{\lambda}^2 \sigma^2_H (X|Y,\bar{\lambda})} \right)}{ \sigma^2_H (X|Y,\bar{\lambda})} \left( \lambda_n (\epsilon) + \frac{1}{\sigma^2_H (X|Y,\tilde{\lambda})} \frac{\eta}{n} \right)}
{1 + \eta \lambda_n (\epsilon) + \frac{1}{2} \eta^2 \lambda^2_n (\epsilon)}\;.
\end{eqnarray}
In the derivation of \eqref{eq-so-23}, the inequality 1) is due to \eqref{eq-so-22}; the inequality 2) follows from the facts that $e^{\frac{x^2}{2}} Q(x)$ is a strictly decreasing function of $x$, that
$\lambda \sigma_H (X|Y,\lambda)$ is strictly increasing with respect to $\lambda$, as shown below,
\begin{eqnarray}
\label{eq-so-22+}
\frac{d \lambda \sigma_H (X|Y,\lambda)}{ d \lambda}
&=& \sigma_H (X|Y,\lambda) + \lambda \frac{d \sigma_H (X|Y,\lambda)}{d \lambda}
\nonumber \\
&=& \sigma_H (X|Y,\lambda) \left( 1 +
\lambda \frac{ \frac{d \sigma^2_H (X|Y,\lambda)}{d \lambda} }{2 \sigma^2_H (X|Y,\lambda)}
\right)
\nonumber \\
& = &
\sigma_H (X|Y,\lambda) \left( 1 +
\lambda \frac{ \hat{M}_H (X|Y, \lambda) }{2 \sigma^2_H (X|Y,\lambda)}
\right)
\nonumber \\
&>& 0
\end{eqnarray}
for $\lambda \in [0, \lambda^+]$, and
\begin{equation}
\label{eq-so-23-}
e^{\frac{x^2}{2}} Q(x) \geq \frac{x}{\sqrt{2 \pi}(1+x^2)};
\end{equation}
the equality 3) is attributable to
\begin{equation}
\label{eq-so-23--}
\bar{\lambda} = \lambda_n (\epsilon) + \left. \frac{d \lambda}{d \delta} \right|_{\lambda = \tilde{\lambda}} \frac{\eta}{n}
= \lambda_n (\epsilon) + \frac{1}{\sigma^2_H (X|Y,\tilde{\lambda})} \frac{\eta}{n}
\end{equation}
for some $\tilde{\lambda} \in [\lambda_n (\epsilon), \bar{\lambda}]$; and finally, the inequality 4) follows from the inequality
\[ e^x > 1 + x + {x^2 \over 2} \]
for any $x >0$. In order to satisfy \eqref{eq-so-21}, let us now choose $\eta$ such that
\begin{equation}
\label{eq-so-23+}
\eta \lambda_n (\epsilon) \geq \frac{2 \sqrt{2 \pi} M_H(X|Y,\bar{\lambda}) \left(1 + \frac{1}{n \bar{\lambda}^2 \sigma^2_H (X|Y,\bar{\lambda})} \right)}{ \sigma^2_H (X|Y,\bar{\lambda})} \lambda_n (\epsilon)
\end{equation}
and
\begin{equation}
\label{eq-so-24}
\frac{1}{2} \eta^2 \lambda^2_n (\epsilon) \geq \frac{2 \sqrt{2 \pi} M_H(X|Y,\bar{\lambda}) \left(1 + \frac{1}{n \bar{\lambda}^2 \sigma^2_H (X|Y,\bar{\lambda})} \right)}{ \sigma^2_H (X|Y,\bar{\lambda})}
\frac{1}{\sigma^2_H (X|Y,\tilde{\lambda})} \frac{\eta}{n} ,
\end{equation}
i.e.,
\begin{equation}
\label{eq-so-24+}
\eta = \frac{2 \sqrt{2 \pi} M_H(X|Y,\bar{\lambda}) \left(1 + \frac{1}{n \bar{\lambda}^2 \sigma^2_H (X|Y,\bar{\lambda})} \right)}{ \sigma^2_H (X|Y,\bar{\lambda})}
\max \left\{ 1, \frac{2}{n \lambda^2_n (\epsilon)\sigma^2_H (X|Y,\tilde{\lambda})} \right\} .
\end{equation}
To see that $\eta$ is bounded, note that
$\frac{M_H (X|Y,\lambda)}{\sigma^2_H (X|Y,\lambda)}$ is always bounded for
$\lambda \in [0,\lambda^+]$.
On the other hand, for $\epsilon \leq \frac{1}{3}$,
$\sqrt{n} \lambda_n (\epsilon) \sigma_H(X|Y,\lambda_n (\epsilon)) > c$ for some constant $c$,
as $\sqrt{n} \lambda_n (\epsilon) \sigma_H(X|Y,\lambda_n (\epsilon)) \rightarrow 0$
implies that $\epsilon = g_{X|Y,n} (\delta_n (\epsilon)) \rightarrow \frac{1}{2}$,
and the same argument can be applied
to $\sqrt{n} \lambda_n (\epsilon) \sigma^2_H (X|Y,\tilde{\lambda})$. Therefore,
\begin{eqnarray}
\label{eq-so-25}
\eta &\leq& 2 \sqrt{2 \pi} \max_{\lambda \in [0,\lambda^+]} \left[ \frac{M_H (X|Y,\lambda)}{\sigma^2_H (X|Y,\lambda)}\right] \left( 1 + c^{-2} \right) \max \left\{ 1, 2 c^{-2} \right\} .
\end{eqnarray}
Then combining \eqref{eq-so-21}, \eqref{eq-so-21+}, \eqref{eq-so-22}, \eqref{eq-so-23}, \eqref{eq-so-23+}
and \eqref{eq-so-24} yields
\begin{eqnarray}
\label{eq-so-25+}
R_n (\epsilon) &\geq& C_{\mathrm{BIMSC}} - \bar{\delta} +
\frac{\ln \left[ \frac{2 (1-C_{BE}) M_H(X|Y,\bar{\lambda})}{\sqrt{n} \sigma^3_H (X|Y,\bar{\lambda})}
e^{- n r_{X|Y} (\bar{\delta})} \right] }{n} \nonumber \\
&=& C_{\mathrm{BIMSC}} - \bar{\delta} - r_{X|Y} (\bar{\delta}) +
\frac{\ln \left[ \frac{2 (1-C_{BE}) M_H(X|Y,\bar{\lambda})}{\sigma^3_H (X|Y,\bar{\lambda})}
\right] - \frac{1}{2} \ln n }{n} \nonumber \\
&\stackrel{1)}{\geq}& C_{\mathrm{BIMSC}} - \delta_n (\epsilon) - r_{X|Y} (\delta_n (\epsilon)) - \bar{\lambda} \frac{\eta}{n}
+ \frac{\ln \left[ \frac{2 (1-C_{BE}) M_H(X|Y,\bar{\lambda})}{\sigma^3_H (X|Y,\bar{\lambda})}
\right] - \eta - \frac{1}{2} \ln n }{n}
\nonumber \\
&\geq& C_{\mathrm{BIMSC}} - \delta_n (\epsilon) - r_{X|Y} (\delta_n (\epsilon)) \nonumber \\
&&{+}\:
\frac{ -\lambda^+ \eta +
\ln \left[ 2 (1-C_{BE}) \min_{\lambda} \left( \frac{2M_H(X|Y,{\lambda})}{ \sigma^3_H (X|Y,{\lambda})} \right)
\right] - \eta - \frac{1}{2} \ln n }
{n} \nonumber \\
&=& C_{\mathrm{BIMSC}} - \delta_n (\epsilon) - r_{X|Y} (\delta_n (\epsilon)) - \frac{\frac{1}{2} \ln {n} + \bar{d}_1}{n},
\end{eqnarray}
where $\bar{d}_1$ is independent of both $n$ and $\epsilon$. In the derivation of \eqref{eq-so-25+}, the inequality 1) follows from the convexity of $r_{X|Y} (\delta) $ and the fact that
\[ r_{X|Y} ( \bar{\delta}) \leq r_{X|Y} (\delta_n (\epsilon)) + \bar{\lambda} {\eta \over n} .\]
We now proceed to establish an upper bound on $R_n (\epsilon)$.
Towards this end, recall \eqref{eq-thm-bimc-1} and \eqref{eq-thm-bimc-2} where
we make a small modification by choosing $\beta_n = \lambda = r'_{X|Y} (\delta)$
in the proof of Theorem \ref{thm-bimc}. Then
for any $\delta$ such that
\begin{equation}
\label{eq-so-29}
\left( 1 + 2 \lambda \right) \epsilon
\leq \underline{\xi}_H (X|Y,\lambda,n) e^{- n r_{X|Y} (\delta)}
\end{equation}
we have
\begin{eqnarray}
\label{eq-so-30}
R_n(\epsilon) &\leq& C_{\mathrm{BIMSC}} - \delta - \frac{\ln
\epsilon - \ln P(B_{n,\delta}) + {2 \ln \lambda }
- \ln \left( 1 + \lambda \right) }{n} \nonumber \\
&\leq& C_{\mathrm{BIMSC}} - \delta + \frac{ - \ln \epsilon - 2 \ln \lambda + \lambda }{n}
\end{eqnarray}
where the trivial bound $P(B_{n,\delta}) \leq 1$ is applied. Now let
$\underline{\delta} = \delta_n (\epsilon) - \frac{\eta'}{n}$ for some constant $\eta' >0$, which will be specified later, and
$\underline{\lambda} = r'_{X|Y} (\underline{\delta})$. Then
\begin{eqnarray}
\label{eq-so-31}
\lefteqn{\underline{\xi}_H (X|Y,\underline{\lambda},n) e^{- n r_{X|Y} (\underline{\delta})}} \nonumber \\
&\stackrel{1)}{\geq} & e^{\frac{n \underline{\lambda}^2 \sigma^2_H (X|Y,\underline{\lambda})}{2}}
Q \left( \rho_* + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) \right)
e^{- n r_{X|Y} (\delta_n (\epsilon))+ \underline{\lambda} \eta' } \nonumber \\
&=& e^{\frac{n \underline{\lambda}^2 \sigma^2_H (X|Y,\underline{\lambda})}{2}}
Q \left( \sqrt{n} \underline{\lambda} \sigma_H (X|Y,\underline{\lambda}) \right)
\frac{Q \left( \rho_* + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) \right)}
{Q \left( \sqrt{n} \underline{\lambda} \sigma_H (X|Y,\underline{\lambda}) \right)}
e^{- n r_{X|Y} (\delta_n (\epsilon))+ \underline{\lambda} \eta'} \nonumber \\
&\stackrel{2)}{\geq}& g_{X|Y,n} (\delta_n (\epsilon))
\frac{Q \left( \rho_* + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) \right)}
{Q \left( \sqrt{n} \underline{\lambda} \sigma_H (X|Y,\underline{\lambda}) \right)}
e^{\underline{\lambda} \eta' } \nonumber \\
&\stackrel{3)}{\geq}& (1 +2 \underline{\lambda}) \epsilon
\frac{Q \left( \rho_* + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) \right)}
{Q \left( \sqrt{n} \underline{\lambda} \sigma_H (X|Y,\underline{\lambda}) \right)}
e^{\underline{\lambda} (\eta'-2) }.
\end{eqnarray}
In the derivation of \eqref{eq-so-31}, the inequality 1) is due to the convexity of $r_{X|Y} (\delta)$ and the fact that
\[ r_{X|Y} ( \underline{\delta}) \leq r_{X|Y} (\delta_n (\epsilon)) - \underline{\lambda} {\eta' \over n} ;\]
the inequality 2) follows again from the fact that $e^{\frac{x^2}{2}} Q(x)$ is a strictly decreasing function of $x$ and
$\lambda \sigma_{H} (X|Y,\lambda)$ is increasing with respect to $\lambda$; and finally the inequality 3) is attributable to the inequality $e^x \geq 1 + x $ for any $ x \geq 0$.
In order for \eqref{eq-so-29} to be satisfied, we now choose $\eta'$ such that
\begin{eqnarray}
\label{eq-so-33+}
\eta' &=& 2 + \frac{1}{\underline{\lambda}} \ln
\frac{Q \left( \sqrt{n} \underline{\lambda} \sigma_H (X|Y,\underline{\lambda}) \right) }
{Q \left( \rho_* + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) \right)}
\nonumber \\
&=& 2 + \frac{1}{\underline{\lambda}} \ln \left[ 1 +
\rho_* \frac{ \frac{1}{\sqrt{2 \pi}} e^{- \frac{(\tilde{\rho} + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) )^2}{2}} }
{Q \left( \rho_* + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) \right)}
\right]
\end{eqnarray}
where $0 \leq \tilde{\rho} \leq \rho_*$. One can verify that
\begin{eqnarray}
\label{eq-so-34-}
\eta' &\leq& 2 + \frac{\rho_*}{\underline{\lambda}}
\frac{ \frac{1}{\sqrt{2 \pi}} e^{- \frac{(\tilde{\rho} + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) )^2}{2}} }
{Q \left( \rho_* + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) \right)}
\nonumber \\
&\leq& 2 +\frac{\rho_*}{\underline{\lambda}}
\frac{ 1 + ( \rho_* + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}))^2}
{\rho_* + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda})}
e^{\sqrt{n} \underline{\lambda} \sigma_H (X|Y,\underline{\lambda}) (\rho_* - \tilde{\rho}) + \frac{\rho^2_* - \tilde{\rho}^2}{2} }
\end{eqnarray}
where the last inequality is due to \eqref{eq-so-23-}. From the definition of $\rho_*$, it is not hard to see that $\rho_* = \frac{\eta''}{\sqrt{n}}$ for some constant $\eta''$ depending only on channel parameters. Meanwhile, we have $\sqrt{n} \underline{\lambda} \sigma_H (X|Y,\underline{\lambda}) > c$ as discussed above. Then
\begin{eqnarray}
\label{eq-so-34--}
\eta' &\leq& 2+ \frac{\eta''}{\sqrt{n} \underline{\lambda}}
\left( c^{-1} + \frac{\eta''}{\sqrt{n}} + \sqrt{n} \underline{\lambda} \sigma_H (X|Y, \underline{\lambda}) \right)
e^{\eta'' \lambda^+ \max_{\lambda \in [0, \lambda^+ ]} \sigma_H (X|Y,\lambda)+ \frac{(\eta'')^2}{2 n}}
\nonumber \\
&\leq& 2+ \left( c^{-2}
+ c^{-1} \eta'' + 1 \right) \eta''
\left[ \max_{\lambda \in [0, \lambda^+ ]} \sigma_H (X|Y,\lambda) \right]
e^{\eta'' \lambda^+ \max_{\lambda \in [0, \lambda^+ ]} \sigma_H (X|Y,\lambda)+ (\eta'')^2}
\nonumber \\
\end{eqnarray}
which is independent of both $n$ and $\epsilon$. Now
combining \eqref{eq-so-31} and \eqref{eq-so-33+}, we have
\begin{equation}
\label{eq-so-34}
\underline{\xi}_H (X|Y,\underline{\lambda},n) e^{- n r_{X|Y} (\underline{\delta})} \geq (1 +2 \underline{\lambda} )\epsilon
\end{equation}
and consequently,
\begin{eqnarray}
\label{eq-so-37}
R_n(\epsilon)
&\leq&
C_{\mathrm{BIMSC}} - \underline{\delta} + \frac{ - \ln \epsilon - 2 \ln \underline{\lambda} + \underline{\lambda} }{n}
\nonumber \\
&\stackrel{1)}{\leq} & C_{\mathrm{BIMSC}} - \delta_n (\epsilon) + r_{X|Y} (\delta_n (\epsilon)) \nonumber \\
&&{+}\: \frac{ \ln \left [ \sqrt{2 \pi} \sqrt{n} \lambda_n (\epsilon) \sigma_H (X|Y,\lambda_n (\epsilon)) \left ( 1 + {1 \over n \lambda^2_n (\epsilon) \sigma^2_H (X|Y,\lambda_n (\epsilon)) } \right ) \right ] }{n} \nonumber \\
& & \mbox{ } + {
- 2\ln \underline{\lambda} + \lambda^+ + \eta' \over n} \nonumber \\
&=& C_{\mathrm{BIMSC}} - \delta_n (\epsilon) + r_{X|Y} (\delta_n (\epsilon)) + { \ln \left ( 1 + {1 \over n \lambda^2_n (\epsilon) \sigma^2_H (X|Y,\lambda_n (\epsilon)) } \right ) \over n} \nonumber \\
&&{+}\: \frac{ \ln n + \ln \sqrt{2 \pi} \sigma_H (X|Y,\lambda_n (\epsilon))
+ \ln \frac{\lambda_n (\epsilon)}{ \underline{\lambda}}
- \ln \sqrt{n} \underline{\lambda} + \lambda^+ + \eta' }{n} \nonumber \\
&\stackrel{2)}{\leq} & C_{\mathrm{BIMSC}} - \delta_n (\epsilon) + r_{X|Y} (\delta_n (\epsilon)) + \frac{\ln n + \underline{d}_1 }{n}
\end{eqnarray}
where $\underline{d}_1$ is another constant depending only on the channel. In the derivation of \eqref{eq-so-37}, the inequality 1) is due to \eqref{eq-so-23-}
and the definition of $\delta_n (\epsilon)$ in \eqref{eq-so-2}; and the inequality 2) follows from the fact that
\[ {\lambda_n (\epsilon) \over \underline{\lambda} } = 1 + {1 \over \sigma^2_H (X|Y, \hat{\lambda}) } {\eta' \over n \underline{\lambda}} \]
for some $\hat{\lambda} \in [ \underline{\lambda}, \lambda_n (\epsilon) ] $ and
\[ \sqrt{n} \underline{\lambda} \sigma_H (X|Y,\underline{\lambda}) > c.\]
Then the theorem is proved by combining \eqref{eq-so-25+} and
\eqref{eq-so-37} and making $d_1 = \max \{ \bar{d}_1, \underline{d}_1 \}$.
\end{IEEEproof}
\begin{remark}
The condition $\epsilon \leq \frac{1}{3}$ for \eqref{eq-so-17} and \eqref{eq-so-18}
can be relaxed,
as we only require that $\sqrt{n} \delta_n (\epsilon)$, or equivalently $\sqrt{n} \lambda$, be lower bounded by a constant,
which is true when $\epsilon \leq d$ for any constant $d < \frac{1}{2}$. In addition, when $\epsilon \leq g_{X|Y, n} (\delta^+ /2)$, $\epsilon$ is an exponential function of $n$, in which case the maximum achievable rate is below the channel capacity by a positive constant even when $n$ goes to $\infty$. As such, from a practical point of view, the case $\epsilon \leq g_{X|Y, n} (\delta^+ /2)$ is not interesting, especially when one can approach the channel capacity very closely as shown in the achievability given in \eqref{eq1-1} and \eqref{eq1-2}.
\end{remark}
\begin{remark}
In the definition of $R_n (\epsilon )$, the average error probability is used. If the maximal error probability is used instead, Theorem \ref{thm-BIMSC-second-order}
remains valid. This can be proved similarly by first using the standard technique of removing bad codewords from the code in the
achievability given in \eqref{eq1-1} and \eqref{eq1-2} to establish similar achievability with maximal error probability and then combining it with Corollary \ref{col-bimc}.
\end{remark}
\begin{remark}
In view of Theorem~\ref{thm-BIMSC-second-order}, it is now clear that jar decoding is indeed optimal up to the second order coding performance in the non-asymptotic regime. Since the achievability given in \eqref{eq1-1} and \eqref{eq1-2} was established for linear block codes, it follows from Theorem~\ref{thm-BIMSC-second-order} that linear block coding is also optimal up to the second order coding performance in the non-asymptotic regime for any BIMSC.
In addition, in the Taylor-type expansion of $R_n (\epsilon)$, the third order term is $O(\delta^2_n (\epsilon)) $ whenever $\delta_n (\epsilon) = \Omega(\sqrt{ \ln n / n})$ since it follows from \eqref{eq2-4-} that $r_{X|Y} (\delta_n (\epsilon)) = O(\delta^2_n (\epsilon)) $.
\end{remark}
\subsection{Comparison with Asymptotic Analysis} \label{sec2-d}
It is instructive to compare Theorem \ref{thm-BIMSC-second-order} with the second order asymptotic performance analysis as $n$ goes to $\infty$.
{\em Asymptotic analysis with constant $0< \epsilon <1$ and $n \to \infty$}: Fix $0 < \epsilon < 1$. It was shown in \cite{strassen-1962}, \cite{Hayashi-2009}, \cite{Yury-Poor-Verdu-2010} that for a BIMSC with a discrete output alphabet
\begin{equation} \label{eq-comp-1}
R_n (\epsilon) = C_{\mathrm{BIMSC}} - \frac{\sigma_H (X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) + O \left ({\ln n \over n} \right )
\end{equation}
for sufficiently large $n$. The expression $C_{\mathrm{BIMSC}} - \frac{\sigma_H (X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) $ was referred to as the normal approximation for $R_n (\epsilon)$. Clearly, when $\epsilon > 1/3$, \eqref{eq-comp-1} is essentially the same as
\eqref{eq-so-16}. Let us now look at the case $\epsilon \leq 1/3$.
In this case, by using the Taylor expansion of $r_{X|Y} (\delta)$ around $\delta =0$
\begin{eqnarray} \label{eq-comp-2}
r_{X|Y} (\delta) & = & {1 \over 2 \sigma^2_H (X|Y) } \delta^2 + \frac{ \left. - \frac{d \sigma^2_H (X|Y,\lambda)}{d \lambda} \right|_{\lambda=0}}{6 \sigma^6_H (X|Y)}
\delta^3 + O(\delta^4) \nonumber \\
& = & {1 \over 2 \sigma^2_H (X|Y) } \delta^2 + \frac{ - \hat{M}_H (X|Y) }{6 \sigma^6_H (X|Y)}
\delta^3 + O(\delta^4)
\end{eqnarray}
it can be verified that
\begin{equation} \label{eq-comp-3}
\delta_n (\epsilon) = \frac{\sigma_H (X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) + O \left ( {1 \over n} \right ) .
\end{equation}
Thus the Taylor-type expansion of $R_n (\epsilon )$ in Theorem~\ref{thm-BIMSC-second-order} implies the second order asymptotic analysis with constant $0< \epsilon <1$ and $n \to \infty$ shown in \eqref{eq-comp-1}.
{\em Asymptotic analysis with $n \to \infty$ and non-exponentially decaying $\epsilon$}: Suppose now $\epsilon$ is a function of $n$ and goes to $0$ as $n \to \infty$, but at a non-exponential speed. In this case, as $n \to \infty$, $\delta_n (\epsilon)$ goes to $0$ at the speed of $\Theta \left( \sqrt{-\ln \epsilon \over n} \right)$, and $\sqrt{n} \lambda_n (\epsilon) $ goes to $\infty$. By ignoring the third and higher order terms in the Taylor expansion of $r_{X|Y} (\delta)$, one has the following approximations:
\begin{equation} \label{eq-comp-4}
g_{X|Y, n } (\delta_n (\epsilon)) \approx {1 \over \sqrt{2 \pi} \sqrt{n} \lambda_n (\epsilon) \sigma_H (X|Y,\lambda_n (\epsilon)) } e^{-n {\delta_n^2 (\epsilon) \over 2 \sigma^2_H (X|Y)}}
\end{equation}
and
\[ Q(x) \approx {1 \over \sqrt{2 \pi} x} e^{- {x^2 \over 2}} \mbox{ for large } x. \]
By these approximations, it is not hard to verify that in this case
\[ \lim_{n \to \infty} {\delta_n (\epsilon) \over \frac{\sigma_H (X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) } =1. \]
Therefore, from Theorem~\ref{thm-BIMSC-second-order}, it follows that when $\epsilon $ goes to $0$ at a non-exponential speed as $n \to \infty$, $ \frac{\sigma_H (X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) $ is still the second order term of $R_n (\epsilon)$ in the asymptotic analysis with $n \to \infty$. Indeed, this can also be verified by looking at the specific case given by \eqref{eq1-3}, \eqref {eq1-4}, and \eqref{eq-thm-bimc-3} when $\epsilon$ goes to $0$ at a polynomial speed as $n \to \infty$. To the best of our knowledge, the second order asymptotic analysis with $n \to \infty$ and non-exponentially decaying $\epsilon$ has not been addressed before in the literature.
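The limit above can be checked numerically. In the following sketch, the prefactor of \eqref{eq-comp-4} is retained while, as an assumption made only for this illustration, $r_{X|Y}(\delta)$ is taken quadratic and $\sigma_H(X|Y,\lambda)$ is frozen at a constant $\sigma$; with $\epsilon = 1/n$ decaying polynomially, the ratio of $\delta_n(\epsilon)$ to $\frac{\sigma}{\sqrt{n}} Q^{-1}(\epsilon)$ indeed approaches $1$ as $n$ grows:

```python
import math

def Qinv(eps, lo=0.0, hi=40.0):
    # numerical inverse of the Gaussian tail Q via bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def delta_n(eps, n, sigma):
    # Solve g(delta) = eps where, as in (eq-comp-4) with r(delta) taken as
    # delta^2 / (2 sigma^2) and lambda = delta / sigma^2 (assumptions made
    # only for this sketch),
    #   g(delta) = sigma / (sqrt(2 pi n) delta) * exp(-n delta^2 / (2 sigma^2)).
    g = lambda d: (sigma / (math.sqrt(2 * math.pi * n) * d)
                   * math.exp(-n * d ** 2 / (2 * sigma ** 2)))
    lo, hi = 1e-9, 10.0 * sigma   # g is strictly decreasing on delta > 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > eps else (lo, mid)
    return 0.5 * (lo + hi)

sigma = 1.0
for n in [10 ** 3, 10 ** 5, 10 ** 7]:
    eps = 1.0 / n   # epsilon decaying polynomially, not exponentially
    ratio = delta_n(eps, n, sigma) / (sigma * Qinv(eps) / math.sqrt(n))
    print(n, ratio)   # ratio tends to 1 as n grows
```

The ratio exceeds $1$ for every finite $n$ here because the prefactor $\frac{1}{\sqrt{2 \pi}\, x} e^{-x^2/2}$ overestimates $Q(x)$; the gap closes at the rate dictated by the Mills-ratio bounds.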
{\em Divergence of $\delta_n (\epsilon)$ from $ \frac{\sigma_H (X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) $}: The agreement between $\delta_n (\epsilon)$ and $ \frac{\sigma_H (X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) $ terminates when the third order term
\[ \frac{ - \hat{M}_H (X|Y) }{6 \sigma^6_H (X|Y)}
\delta^3 \]
in the Taylor expansion of $r_{X|Y} (\delta)$ shown in \eqref{eq-comp-2} cannot be ignored. This happens either when $\delta$ is not small, which is typical in practice for finite block length $n$, or when
\begin{equation} \label{eq-comp-6}
\zeta_{X|Y} \mbox {$ \ \stackrel{\Delta}{=} $} \frac{ - \hat{M}_H (X|Y) }{6 \sigma^6_H (X|Y)}
\end{equation}
is large in magnitude. In this case, $ \frac{\sigma_H (X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) $ will be smaller than $\delta_n (\epsilon)$ by a relatively large margin if $ \zeta_{X|Y} <0$, and larger than
$\delta_n (\epsilon)$ by a relatively large margin if $ \zeta_{X|Y} >0$. As such, the normal approximation would fail to provide a reasonable estimate for $R_n (\epsilon)$. This will be further confirmed by the numerical results shown in Section \ref{sec:appr-eval} for well-known channels such as the BEC, BSC, and BIAGN for finite $n$.
\section{Non-Asymptotic Converse and Taylor-Type Expansion: DIMC}
\label{sec:non-asysmpt-converse-coding-dsc}
\setcounter{equation}{0}
We now extend Theorems~\ref{thm-bimc} and \ref{thm-BIMSC-second-order}
to the case of DIMC $ P = \{p(y|x), x \in \mathcal{X}, y \in
\mathcal{Y}\}$, where $\mathcal{X}$ is discrete, but $\mathcal{Y}$ is
arbitrary (discrete or continuous).
\subsection{Definitions}
Let $\mathcal{P}$ denote the set of all distributions over $\mathcal{X}$. Let
$\mathcal{P}_n$ denote the set of types on $\mathcal{X}^n$ with
denominator $n$ \cite{csiszar:type}, and $t(x^n)$ be the type of $x^n$.
Moreover, for $t \in \mathcal{P}_n$, let
\begin{equation} \label{eq-def-xt}
\mathcal{X}^n_t \mbox {$ \ \stackrel{\Delta}{=} $} \{x^n \in \mathcal{X}^n: t(x^n)=t \}.
\end{equation}
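As a quick illustration of these definitions, the sketch below (with a hypothetical binary alphabet and block length) enumerates all types on $\mathcal{X}^n$ and checks the standard bound $|\mathcal{P}_n| \leq (n+1)^{|\mathcal{X}|}$ used later in the converse argument:

```python
from collections import Counter
from itertools import product

def type_of(xn, alphabet):
    """Empirical distribution (type) t(x^n) with denominator n."""
    n = len(xn)
    cnt = Counter(xn)
    return tuple(cnt[a] / n for a in alphabet)

alphabet = (0, 1)           # hypothetical binary input alphabet
n = 6                       # hypothetical block length
# Enumerate all types with denominator n by scanning all sequences.
types = {type_of(xn, alphabet) for xn in product(alphabet, repeat=n)}
print(len(types))           # a binary alphabet yields exactly n + 1 types
assert len(types) <= (n + 1) ** len(alphabet)
```

For a binary alphabet there are exactly $n+1$ types with denominator $n$, comfortably within the polynomial bound $(n+1)^{|\mathcal{X}|}$.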
Before stating our converse channel coding theorem for DIMC, we again need to
introduce some definitions from \cite{yang-meng:nep}.
For any $t \in \mathcal{P}$, define
\begin{equation} \label{eq-def-qtn}
q_t(y^n) \mbox {$ \ \stackrel{\Delta}{=} $} \prod^n_{i=1} q_t (y_i)
\end{equation}
where
\begin{equation} \label{eq-def-qt}
q_t (y) \mbox {$ \ \stackrel{\Delta}{=} $} \sum_{x \in \mathcal{X}} t(x) p(y|x),
\end{equation}
\begin{equation} \label{eq3r-3}
I (t; P) \mbox {$ \ \stackrel{\Delta}{=} $} \sum_{x \in {\cal X}} t(x) \int p(y |x) \ln {p(y|x) \over q_t (y) } d y
\end{equation}
and
\begin{equation} \label{eq-def-lambda*}
\lambda^*_{-} (t; P) \mbox {$ \ \stackrel{\Delta}{=} $}
\sup \left \{ \lambda \geq 0: \sum_{a \in {\cal X}} t(a) \int p(y |a) \left [ {p(y|a) \over q_t (y) } \right ]^{-\lambda} d y <\infty \right \}.
\end{equation}
It is easy to see that $ \lambda^*_{-} (t; P) $ is the same for all $t \in {\cal P}$ with the same support set $\{a\in {\cal X}: t(a) >0 \}$. Suppose that
\begin{equation} \label{eq-def-lambda*+}
\lambda^*_{-} (t; P) > 0.
\end{equation}
Define for any $t \in {\cal P}$ and any $\delta \geq 0$
\begin{equation} \label{eq3rlr}
r_{-} (t, \delta) \mbox {$ \ \stackrel{\Delta}{=} $} \sup_{\lambda \geq 0} \left [ \lambda (\delta - I(t; P) ) -
\sum_{x \in {\cal X}} t(x) \ln \int p(y |x) \left [ {p (y|x) \over q_t (y) } \right ]^{-\lambda} d y \right ]
\end{equation}
and for any $t \in {\cal P}$ and any $\lambda \in [0, \lambda^*_{-} (t; P))$,
random variables $X_{t}$ and $Y_{t,\lambda}$ with joint distribution $t(x)p(y|x)f_{-\lambda}(y|x)$ where
\begin{equation} \label{eq3rlf}
f_{-\lambda} (y |x) \mbox {$ \ \stackrel{\Delta}{=} $}
{\left [ {p (y|x) \over q_t (y) } \right ]^{-\lambda} \over \int p(v |x ) \left [ {p (v|x ) \over q_t (v)} \right ]^{-\lambda} d v } .
\end{equation}
Then define
\begin{equation} \label{eq3rld-}
D(t, x, \lambda) \mbox {$ \ \stackrel{\Delta}{=} $} \mathbf{E} \left[ \left. \ln \frac{p(Y_{t,\lambda}|X_t)}{q_t (Y_{t,\lambda})} \right| X_t = x \right]
\end{equation}
\begin{equation} \label{eq3rld}
\delta_{-} (t, \lambda)
\mbox {$ \ \stackrel{\Delta}{=} $} \mathbf{E} \left[ -\ln \frac{p(Y_{t,\lambda}|X_t)}{q_t (Y_{t,\lambda})} \right]
+ I(t; P)
\end{equation}
\begin{equation}
\label{eq3rld-1}
\Delta^*_{-} (t) \mbox {$ \ \stackrel{\Delta}{=} $} \lim_{\lambda \uparrow \lambda^*_{-} (t;P)} \delta_{-} (t, \lambda)
\end{equation}
\begin{eqnarray} \label{eq3rl-13}
\sigma^2_{D, -} (t; P, \lambda) &\mbox {$ \ \stackrel{\Delta}{=} $}&
\mathbf{E} \left\{ \mathbf{Var} \left[ \left. \ln \frac{p(Y_{t,\lambda}|X_t)}{q_t (Y_{t,\lambda})} \right| X_t \right] \right\}
\nonumber \\
&=& \sum_{x \in \mathcal{X}} t(x) \mathbf{Var} \left[ \left. \ln \frac{p(Y_{t,\lambda}|X_t)}{q_t (Y_{t,\lambda})} \right| X_t = x\right]
\end{eqnarray}
\begin{eqnarray} \label{eq3rl-14}
M_{D, -} (t; P , \lambda)
&\mbox {$ \ \stackrel{\Delta}{=} $}& \mathbf{E} \left\{ \mathbf{M_3} \left[ \left. \ln \frac{p(Y_{t,\lambda}|X_t)}{q_t (Y_{t,\lambda})} \right| X_t \right] \right\}
\nonumber \\
&=& \sum_{x \in \mathcal{X}} t(x) \mathbf{M_3} \left[ \left. \ln \frac{p(Y_{t,\lambda}|X_t)}{q_t (Y_{t,\lambda})} \right| X_t =x \right]
\;
\end{eqnarray}
and
\begin{eqnarray} \label{eq-tm-2}
\hat{M}_{D, -} (t; P , \lambda)
&\mbox {$ \ \stackrel{\Delta}{=} $}& \mathbf{E} \left\{ \mathbf{\hat{M}_3} \left[ \left. \ln \frac{p(Y_{t,\lambda}|X_t)}{q_t (Y_{t,\lambda})} \right| X_t \right] \right\}
\nonumber \\
&=& \sum_{x \in \mathcal{X}} t(x) \mathbf{\hat{M}_3} \left[ \left. \ln \frac{p(Y_{t,\lambda}|X_t)}{q_t (Y_{t,\lambda})} \right| X_t =x \right].
\end{eqnarray}
Note that $ \sigma^2_{D, -} (t; P, \lambda) $, $M_{D, -} (t; P , \lambda)$, and $ \hat{M}_{D, -} (t; P , \lambda)$ are respectively the conditional variance, conditional third absolute central moment, and conditional third central moment of
$\ln \frac{p(Y_{t,\lambda}|X_t)}{q_t (Y_{t,\lambda})}$ given $X_t$. Write $\sigma^2_{D, -} (t; P, 0) $ simply as $\sigma^2_D (t; P)$,
$M_{D, -} (t; P , 0)$ as $M_{D} (t; P )$, and $\hat{M}_{D, -} (t; P , 0)$ as
$ \hat{M}_{D} (t; P )$. Assume that
\begin{equation}
\label{eq3rl-14-1}
\sigma^2_D (t; P) >0 \mbox{ and } M_{D} (t; P ) < \infty.
\end{equation}
Furthermore, $r_{-} (t, \delta)$ admits the following parametric expression
\begin{equation} \label{eq3rp1}
r_{ -} (t, \delta_{-}(t, \lambda)) = \lambda ( \delta_{-} (t, \lambda) - I(t; P)) -
\sum_{x \in {\cal X}} t(x) \ln \int p(y |x) \left [ {p (y|x) \over q_t (y) } \right ]^{-\lambda} d y
\end{equation}
with $\lambda = {\partial r_{-} (t, \delta) \over \partial \delta }$
satisfying $\delta_{-}(t, \lambda) = \delta$.
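To make the definitions above concrete, the following sketch evaluates $q_t$, $I(t;P)$, and $r_{-} (t, \delta)$ for a hypothetical BSC with crossover probability $0.11$ and uniform $t$; since the output alphabet is discrete, the integrals over $y$ reduce to finite sums, and the supremum over $\lambda$ in \eqref{eq3rlr} is approximated by a grid search:

```python
import math

p_err = 0.11                # hypothetical BSC crossover probability
X = (0, 1)
Y = (0, 1)
t = {0: 0.5, 1: 0.5}        # hypothetical input type t

def p(y, x):                # channel transition probability p(y|x)
    return 1.0 - p_err if y == x else p_err

def q_t(y):                 # output distribution induced by t
    return sum(t[x] * p(y, x) for x in X)

# Mutual information I(t;P) in nats (the integral over y is a sum here)
I_tP = sum(t[x] * p(y, x) * math.log(p(y, x) / q_t(y))
           for x in X for y in Y)

def r_minus(delta):
    """r_-(t, delta): sup over lambda >= 0, approximated on a grid."""
    best = 0.0              # lambda = 0 always achieves 0
    for i in range(1, 5001):
        lam = i / 1000.0
        inner = sum(t[x] * math.log(sum(p(y, x) * (p(y, x) / q_t(y)) ** (-lam)
                                        for y in Y))
                    for x in X)
        best = max(best, lam * (delta - I_tP) - inner)
    return best

print(I_tP)                 # roughly 0.3466 nats for this channel
print(r_minus(0.05))        # a small positive exponent
```

For small $\delta$ the computed exponent is close to the quadratic approximation $\delta^2 / (2 \sigma^2_D (t;P))$ of part (a) below, as expected.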
In addition, let
\begin{eqnarray}
\lefteqn{\bar{\xi}_{D,-} ( t; P, \lambda, n )
\mbox {$ \ \stackrel{\Delta}{=} $} \frac{2 C_{BE} M_{\mathrm{D},-} (t;P,\lambda)}
{\sqrt{n}\sigma^3_{\mathrm{D},-} (t;P,\lambda)}
} \nonumber \\
&&{+}\: e^{\frac{n \lambda^2 \sigma^2_{D,-} (t;P,\lambda)}{2}}
\left[ Q(\sqrt{n} \lambda \sigma_{D,-} (t;P,\lambda))
- Q(\rho^*+\sqrt{n} \lambda \sigma_{D,-} (t;P,\lambda)) \right] \\
\lefteqn{\underline{\xi}_{D,-} (t;P,\lambda,n)
\mbox {$ \ \stackrel{\Delta}{=} $} e^{\frac{n \lambda^2 \sigma^2_{D,-} (t;P,\lambda)}{2}}
Q(\rho_*+\sqrt{n} \lambda \sigma_{D,-} (t;P,\lambda))
}
\end{eqnarray}
with $Q(\rho^*) = \frac{C_{BE} M_{\mathrm{D},-} (t;P,\lambda)}
{\sqrt{n}\sigma^3_{\mathrm{D},-} (t;P,\lambda)}
$
and $Q(\rho_*) = \frac{1}{2} - \frac{2 C_{BE} M_{\mathrm{D},-} (t;P,\lambda)}
{\sqrt{n}\sigma^3_{\mathrm{D},-} (t;P,\lambda)}
$.
Similar to the case in Section \ref{sec:non-asympt-conv}, the purpose
of introducing the above definitions is to make use of the following results,
proved as Theorem 8 in \cite{yang-meng:nep}, which are valid for any $t \in {\cal P}_n$ satisfying \eqref{eq-def-lambda*+} and \eqref{eq3rl-14-1}.
{\em
\begin{description}
\item[(a)] There exists a $\delta^* >0$ such that for any $\delta \in (0, \delta^*]$ \begin{equation} \label{eqrl3-4-}
r_{-} (t, \delta) = {1 \over 2 \sigma^2_D (t; P) } \delta^2 + O(\delta^3).
\end{equation}
\item[(b)] For any $\delta \in (0, \Delta^*_{-} (t))$, and any $x^n
\in \mathcal{X}^n_t$,
\begin{eqnarray} \label{eqrl3-17}
\bar{\xi}_{D,-} (t; P, \lambda ,n) e^{- n r_{-} (t, \delta) } &\geq&
{ \Pr \left \{ \left. {1 \over n} \ln {p(Y^n |X^n) \over
q_t(Y^n) }\leq I(t; P) - \delta \right | X^n = x^n \right
\}} \nonumber \\
&\geq& \underline{\xi}_{D,-} (t; P, \lambda ,n) e^{- n r_{-} (t, \delta) }
\end{eqnarray}
where $\lambda = {\partial r_{-} (t, \delta) \over \partial \delta} >0$, and $Y^n = Y_1 Y_2 \cdots Y_n$ is the output of the DIMC in response to an independent and identically distributed (IID) input $X^n = X_1 X_2 \cdots X_n$, the common distribution of each $X_i$ having $\cal X$ as its support set.
Moreover, when $\delta = o(1)$ and $\delta = \Omega (1/\sqrt{n})$,
\begin{eqnarray}
\label{eql3-17-1}
\bar{\xi}_{D,-} (t;P,\lambda,n) &=& e^{\frac{n \lambda^2 \sigma^2_{D,-} (t;P,\lambda)}{2}}
Q \left( \sqrt{n} \lambda \sigma_{D,-} (t;P,\lambda) \right)\left( 1 + o(1) \right) \\
\label{eql3-17-2}
\underline{\xi}_{D,-} (t;P,\lambda,n) &=& e^{\frac{n \lambda^2 \sigma^2_{D,-} (t;P,\lambda)}{2}}
Q \left( \sqrt{n} \lambda \sigma_{D,-} (t;P,\lambda) \right)\left( 1 - o(1) \right)
\end{eqnarray}
and
\begin{equation}
\label{eql3-17-3}
e^{\frac{n \lambda^2 \sigma^2_{D,-} (t;P,\lambda)}{2}}
Q \left( \sqrt{n} \lambda \sigma_{D,-} (t;P,\lambda) \right) = \Theta \left( \frac{1}{\sqrt{n} \lambda} \right)
\end{equation}
with $\lambda = {\partial r_{-} (t, \delta) \over \partial \delta} = \Theta (\delta)$.
\item[(c)] For any $ \delta \leq c \sqrt{\ln n \over n} $, where
$c < \sigma_D (t; P)$ is a constant, and $x^n
\in \mathcal{X}^n_t$,
\begin{eqnarray} \label{eqrl3-17+}
Q \left ( {\delta \sqrt{n} \over \sigma_D (t; P)} \right ) - {C_{BE} M_D (t; P) \over \sqrt{n} \sigma^3_D (t; P)}
& \leq & \Pr \left \{ \left. {1 \over n} \ln {p(Y^n |X^n) \over q_t(Y^n) }\leq I(t; P) - \delta \right | X^n = x^n \right \}
\nonumber \\
& \leq & Q \left ( {\delta \sqrt{n} \over \sigma_D (t; P)} \right ) + {C_{BE} M_D (t; P) \over \sqrt{n} \sigma^3_D (t; P)}.
\end{eqnarray}
\end{description}
}
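For a channel with a discrete output alphabet, the conditional probability bounded in \eqref{eqrl3-17+} can be computed exactly and compared with its normal approximation. Below is a minimal sketch for a hypothetical BSC (crossover $0.11$, uniform type $t$, so $q_t (y) = 1/2$), where the normalized log-likelihood average reduces to a binomial tail:

```python
import math

p_err, n = 0.11, 400        # hypothetical crossover and block length
a = math.log(2 * (1 - p_err))   # ln(p(y|x)/q_t(y)) when y = x
b = math.log(2 * p_err)         # value when y != x
I = (1 - p_err) * a + p_err * b                       # I(t;P), uniform t
sigma = math.sqrt(p_err * (1 - p_err) * (a - b) ** 2)  # sigma_D(t;P)

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

delta = 0.05
# (1/n) * sum <= I - delta  <=>  number of flips K >= k_min (since b - a < 0)
k_min = math.ceil((n * (I - delta) - n * a) / (b - a))
exact = sum(math.comb(n, k) * p_err ** k * (1 - p_err) ** (n - k)
            for k in range(k_min, n + 1))
gauss = Q(delta * math.sqrt(n) / sigma)
print(exact, gauss)         # the two agree up to an O(1/sqrt(n)) term
```

The exact binomial tail and $Q ( \delta \sqrt{n} / \sigma_D (t;P) )$ differ by a few thousandths at this block length, consistent with the $O(1/\sqrt{n})$ Berry-Esseen slack in \eqref{eqrl3-17+}.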
We now turn our attention to sequences in $\mathcal{Y}^n$.
For any $t \in {\cal P}_n$ and any $x^n \in \mathcal{X}^n_t$, define
\begin{equation} \label{eq-def-btd}
B_t(x^n,\delta) \mbox {$ \ \stackrel{\Delta}{=} $} \left\{ y^n: -\infty < \frac{1}{n} \ln
\frac{p(y^n|x^n)}{q_t(y^n)} \leq I(t;P) - \delta \right\}
\end{equation}
and
\begin{eqnarray} \label{eq-def-ptd}
P_{t,\delta} &\mbox {$ \ \stackrel{\Delta}{=} $}& P_{x^n} (B_t(x^n,\delta)) \nonumber \\
&=& \Pr \left\{ \left.\frac{1}{n} \ln
\frac{p(Y^n|X^n)}{q_t(Y^n)} \leq I(t;P) - \delta \right| X^n =
x^n \right\}
\end{eqnarray}
where $P_{t,\delta}$ depends only on the type $t$ and $\delta$. Since for any $y^n \in \mathcal{Y}^n$, the following set
\begin{equation} \label{eq-jar-dimc}
\left\{ x^n \in {\cal X}^n_t: \frac{1}{n} \ln
\frac{p(y^n|x^n)}{q_t(y^n)} \geq I(t;P) - \delta
\right\}
\end{equation}
is referred to as a DIMC jar for $y^n$ based on type $t$ in
\cite{yang-meng:jardecoding}, we shall call $B_t (x^n,\delta) $ the {\em outer mirror image of jar} corresponding to $x^n$. Further define
\begin{equation} \label{eq-def-btnd}
B_{t,n,\delta} \mbox {$ \ \stackrel{\Delta}{=} $} \cup_{x^n \in \mathcal{X}^n_t} B_t (x^n, \delta)
\end{equation}
\begin{equation} \label{eq-def-pbtnd}
P(B_{t,n,\delta}) \mbox {$ \ \stackrel{\Delta}{=} $} \int_{y^n \in B_{t,n,\delta}} q_t(y^n) d y^n.
\end{equation}
\subsection{Converse Coding Theorem}
For any channel code $\mathcal{C}_n$ of block length $n$ with
average word error probability $P_e (\mathcal{C}_n) = \epsilon_n$, assume that the message $M$ is uniformly distributed in $\{1,2, \ldots, e^{nR (\mathcal{C}_n)}\}$. Let $x^n(m)$ be the codeword corresponding to the
message $m$, and $\epsilon_{m,n}$ the conditional error probability given message $m$. Then
\begin{equation} \label{eq-proof-dsc--2}
\epsilon_n = \mathbf{E} [\epsilon_{M,n}].
\end{equation}
Let $\beta_n = \sqrt{-2 \ln \epsilon_n \over n }$ and
\begin{equation} \label{eq-proof-dsc--1}
\mathcal{M} \mbox {$ \ \stackrel{\Delta}{=} $} \left\{ m: \epsilon_{m,n} \leq \epsilon_n
(1+\beta_n) \right\}.
\end{equation}
Consider a type $t \in {\cal P}_n$ such that
\begin{equation} \label{eqth2-0}
| \{ m \in \mathcal{M}: t(x^n (m)) = t \} | \geq {| \mathcal{M}| \over (n+1)^{|{\cal X}|} }.
\end{equation}
Here and throughout the paper, $|S|$ denotes the cardinality of a finite set $S$. Since $|{\cal P}_n | \leq (n+1)^{|{\cal X}|}$, it follows from the pigeonhole principle that such a type $t \in {\cal P}_n$ exists. In other words, if we classify codewords in $\{ x^n (m): m \in \mathcal{M} \}$ according to their types, then there is at least one type $ t \in {\cal P}_n$ such that the number of codewords in $\{ x^n (m): m \in \mathcal{M} \}$ with that type is not less than the average.
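The pigeonhole step in \eqref{eqth2-0} is easy to illustrate: when any collection of sequences is classified by type, the largest type class is at least as large as the average class. A small sketch with hypothetical random binary codewords:

```python
import random
from collections import Counter

# Hypothetical codebook: 500 random binary codewords of length 20.
random.seed(1)
n, num_codewords = 20, 500
codewords = [tuple(random.randint(0, 1) for _ in range(n))
             for _ in range(num_codewords)]

# Group codewords by their type (empirical symbol counts).
by_type = Counter(tuple(sorted(Counter(c).items())) for c in codewords)
num_types_present = len(by_type)
largest_class = max(by_type.values())

# At least one type class holds at least the average number of codewords;
# (n+1)^{|X|} bounds the total number of binary types with denominator n.
assert largest_class >= num_codewords / ((n + 1) ** 2)
print(num_types_present, largest_class)
```

Since at most $n+1$ binary types can occur, the largest class here is far larger than the $(n+1)^{-|\mathcal{X}|}$ fraction guaranteed by \eqref{eqth2-0}.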
We are now ready to state our converse theorem for DIMC.
\begin{theorem}
\label{thm-dimc}
Given a DIMC,
for any channel code $\mathcal{C}_n$ of block length $n$ with
average word error probability $P_e (\mathcal{C}_n) = \epsilon_n$,
\begin{eqnarray}
\label{eq-thm-dsc-0}
R(\mathcal{C}_n) &\leq& I(t;P) - \delta - \frac{\ln \epsilon_n - \ln P(B_{t,n,\delta})}{n} +
|\mathcal{X}| \frac{\ln (n+1)}{n} \nonumber \\
&&{-}\: \frac{\ln \frac{-2 \ln \epsilon_n}{ n} - \ln \left( 1 + \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right)}{n}
\end{eqnarray}
for any $t \in {\cal P}_n$ satisfying \eqref{eqth2-0}, where $\delta$ is the largest number satisfying
\begin{equation}
\label{eq-thm-dsc-0+}
\left( 1 + 2 \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right) \epsilon_n \leq P_{t,\delta}.
\end{equation}
Moreover, if a type $t \in {\cal P}_n$ satisfying \eqref{eqth2-0} also satisfies \eqref{eq-def-lambda*+} and \eqref{eq3rl-14-1}, then
the following hold:
\begin{enumerate}
\item
\begin{eqnarray}
\label{eq-thm-dsc-1}
R(\mathcal{C}_n) &\leq& I(t;P) - \delta - \frac{\ln \epsilon_n - \ln P(B_{t,n,\delta})}{n} +
|\mathcal{X}| \frac{\ln (n+1)}{n} \nonumber \\
&&{-}\: \frac{\ln \frac{-2 \ln \epsilon_n}{ n} - \ln \left( 1 + \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right)}{n}
\end{eqnarray}
where $\delta$ is the solution to
\begin{equation}
\label{eq-thm-dsc-2}
\left( 1 +2 \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right) \epsilon_n = \underline{\xi}_{D,-} (t;P,\lambda,n) e^{-n r_{-} (t,\delta) }
\end{equation}
with $\delta_{-} (t,\lambda) = \delta$.
\item When $\epsilon_n = \frac{e^{- n^{\alpha}}}{2 \sqrt{\pi n^{\alpha}}} \left( 1 - \frac{1}{2 n^{\alpha}} \right)$ for $\alpha \in (0,1)$,
\begin{equation}
\label{eq-thm-dsc-2+}
R(\mathcal{C}_n) \leq I(t;P) - \sqrt{2} \sigma_{D} (t;P) n^{-\frac{1-\alpha}{2}} + O(n^{-(1-\alpha)}) .
\end{equation}
\item When $\epsilon_n = \frac{n^{-\alpha}}{2 \sqrt{\pi \alpha \ln n}} \left( 1 - \frac{1}{2 \alpha \ln n} \right)$ for $\alpha > 0$,
\begin{eqnarray}
\label{eq-thm-dsc-3}
R(\mathcal{C}_n) &\leq& I(t;P) - \sigma_D (t;P) \sqrt{\frac{2 \alpha \ln n}{n}} + O \left( {\frac{\ln n}{n} }\right).
\end{eqnarray}
\item When $\epsilon_n = \epsilon$ satisfying $\epsilon + \frac{1}{\sqrt{n}} \left( 2\epsilon \sqrt{-2 \ln \epsilon} + \frac{
C_{BE} M_{D}
(t;P)}{\sigma^3_{D} (t;P)} \right) <1$,
\begin{eqnarray}
\label{eq-thm-dsc-6}
R(\mathcal{C}_n)
&\leq& I(t;P) - \frac{\sigma_{D} (t;P)}{\sqrt{n}} Q^{-1}
\left( \epsilon + \frac{1}{\sqrt{n}} \left( 2\epsilon \sqrt{-2 \ln \epsilon} + \frac{
C_{BE} M_{D}
(t;P)}{\sigma^3_{D} (t;P)} \right) \right) \nonumber \\
&&{+}\: (|\mathcal{X}|+1) \frac{\ln
n}{n} - \frac{\ln \epsilon}{n}\\
\label{eq-thm-dsc-7}
&=& I(t;P) - \frac{\sigma_{D} (t;P)}{\sqrt{n}} Q^{-1}
\left( \epsilon \right) + (|\mathcal{X}|+1)\frac{\ln
n}{n} + O(n^{-1}) .
\end{eqnarray}
\end{enumerate}
\end{theorem}
\begin{IEEEproof}
We again apply the outer mirror image of jar converse-proof technique.
By the Markov inequality,
\begin{equation}
\label{eq-proof-dsc-0}
\Pr \{ M \in {\cal M} \} \geq \frac{\beta_n}{1+\beta_n} \mbox{ and } |\mathcal{M}| \geq e^{nR(\mathcal{C}_n)
+ \ln \frac{\beta_n}{1+\beta_n} } .
\end{equation}
For any $t \in {\cal P}_n$ satisfying \eqref{eqth2-0}, let
\begin{equation} \label{eq-proof-dsc-0+}
\mathcal{M}_t \mbox {$ \ \stackrel{\Delta}{=} $} \left\{ m: \epsilon_{m,n} \leq \epsilon_n
(1+\beta_n), t(x^n(m)) = t \right\} .
\end{equation}
Then
\begin{equation}
\label{eq-proof-dsc-1}
|\mathcal{M}_t| \geq \frac{|\mathcal{M}|}{(n+1)^{|\mathcal{X}|}} \geq
e^{nR(\mathcal{C}_n)
+ \ln \frac{\beta_n}{1+\beta_n} - |\mathcal{X}|
\ln (n+1)} .
\end{equation}
Denote the decision region for message $m \in \mathcal{M}_t$ as
$D_m$. Now for any $m \in \mathcal{M}_t$,
\begin{eqnarray}
\label{eq-proof-dsc-2}
P_{x^n(m)} ( B_t(x^n(m),\delta) \cap D_m ) &=&
P_{x^n(m)} ( B_t(x^n(m),\delta) ) - P_{x^n(m)} ( B_t(x^n(m),\delta) \cap D^c_m ) \nonumber \\
&\geq& P_{x^n(m)} ( B_t(x^n(m), \delta) ) - \epsilon_{m,n} \nonumber \\
&\geq& P_{x^n(m)} ( B_t(x^n(m), \delta) ) - \epsilon_n(1+\beta_n)
\end{eqnarray}
At this point, we select $\delta$ such that for any $x^n \in \mathcal{X}^n_t$,
\begin{equation}
\label{eq-proof-dsc-4}
P_{x^n} ( B_t(x^n,\delta) ) = P_{t,\delta} \geq \epsilon_n (1 + 2 \beta_n).
\end{equation}
Substituting \eqref{eq-proof-dsc-4} into
\eqref{eq-proof-dsc-2}, we have
\begin{equation} \label{eq-proof-dsc-4-1}
P_{x^n(m)}( B_t(x^n(m),\delta) \cap D_m ) \geq \beta_n \epsilon_n.
\end{equation}
By the fact that $D_m$ are disjoint for different $m$ and
\begin{equation} \label{eq-proof-dsc-4-2}
\cup_{m \in \mathcal{M}_t} (D_m \cap B_t(x^n(m),\delta)) \subseteq B_{t,n,\delta},
\end{equation}
we have
\begin{eqnarray}
\label{eq-proof-dsc-5-}
P(B_{t,n,\delta}) &=& \int\limits_{B_{t,n,\delta}} q_t (y^n) dy^n \nonumber \\
&\geq& \sum_{m \in \mathcal{M}_t } \int\limits_{B_t(x^n(m),\delta) \cap D_m}
q_t (y^n) dy^n \nonumber \\
&\geq& \sum_{m \in \mathcal{M}_t } \int\limits_{B_t(x^n(m),\delta)
\cap D_m} p(y^n|x^n(m)) e^{-n(I(t;P)-\delta)} dy^n \nonumber \\
&=& \sum_{m \in \mathcal{M}_t } e^{-n(I(t;P) - \delta)} \int\limits_{B_t(x^n(m),\delta)
\cap D_m} p(y^n|x^n(m)) dy^n \nonumber \\
&=& \sum_{m \in \mathcal{M}_t } e^{-n(I(t;P) - \delta)} P_{x^n(m)}(
B_t(x^n(m),\delta) \cap D_m ) \nonumber \\
&\geq& \sum_{m \in \mathcal{M}_t } e^{-n(I(t;P) - \delta)}
\beta_n \epsilon_n = |\mathcal{M}_t| e^{-n(I(t;P) - \delta)} \beta_n \epsilon_n
\end{eqnarray}
which implies that
\begin{equation}
\label{eq-proof-dsc-5}
|\mathcal{M}_t| \leq e^{n(I(t;P) - \delta) - \ln \beta_n - \ln
\epsilon_n + \ln P(B_{t,n,\delta})} .
\end{equation}
Then combining \eqref{eq-proof-dsc-1} and \eqref{eq-proof-dsc-5}
yields
\begin{equation}
\label{eq-proof-dsc-5+}
R (\mathcal{C}_n) \leq I(t;P) - \delta - \frac{\ln \epsilon_n - \ln P(B_{t,n,\delta})}{n} - \frac{\ln
\frac{\beta_n}{1+\beta_n} }{n} - \frac{\ln \beta_n}{n} +
|\mathcal{X}| \frac{\ln (n+1)}{n} .
\end{equation}
Since $\beta_n = \sqrt{\frac{-2 \ln \epsilon_n}{n}}$ by definition, \eqref{eq-thm-dsc-0} and \eqref{eq-thm-dsc-0+}
directly come from \eqref{eq-proof-dsc-5+} and \eqref{eq-proof-dsc-4}.
\begin{enumerate}
\item According to \eqref{eqrl3-17},
it can be seen that selecting $\delta$ to be the solution to \eqref{eq-thm-dsc-2} ensures that
\eqref{eq-proof-dsc-4} is satisfied. Consequently, \eqref{eq-thm-dsc-1} is proved.
\item The proof is essentially the same as that for part 2) of Theorem \ref{thm-bimc}, where we can show that
\begin{equation}
\label{eq-proof-dsc-5++}
P_{t,\delta} \geq \left( 1 + 2 \sqrt{\frac{-2 \ln \epsilon_n}{n}}\right) \epsilon_n
\end{equation}
when $\epsilon_n = \frac{e^{-n^{\alpha}}}{2 \sqrt{\pi n^{\alpha}}} \left( 1 - \frac{1}{2 n^{\alpha}} \right)$ and
$\delta = \sqrt{2} \sigma_D (t;P) n^{-\frac{1-\alpha}{2}} - \eta n^{-(1-\alpha)}$ for some constant $\eta$.
\item Apply the trivial bound $P(B_{t,n,\delta}) \leq 1$.
Then similar to the proof for part 3) of Theorem \ref{thm-bimc}, one can verify that
by making $\delta = \sigma_{D} (t;P) \sqrt{\frac{2 \alpha \ln n}{n}} - \eta {\frac{\ln n}{n}}$
for some properly chosen constant $\eta$,
\begin{eqnarray} \label{eq-proof-dsc-9+}
P_{t,\delta} &\geq&
\underline{\xi}_{D,-}
\left( t;P, \frac{\partial r_{-} (t,{\delta})}{\partial {\delta}} , n \right)
e^{- n r_{-} (t,\delta)} \nonumber \\
&\geq& \left( 1 + 2 \sqrt{\frac{-2 \ln \epsilon_n}{n}}\right) \epsilon
\end{eqnarray}
for $\epsilon_n = \frac{n^{-\alpha}}{2 \sqrt{\pi \alpha \ln n}} \left( 1 - \frac{1}{2 \alpha \ln n} \right)$,
where \eqref{eqrl3-4-}, \eqref{eql3-17-2} and \eqref{eql3-17-3} are utilized.
\item According to
\eqref{eq-proof-dsc-4}, we should select $\delta$ such that
\begin{equation}
\label{eq-proof-dsc-9}
P_{t,\delta} \geq \left( 1 + 2 \sqrt{\frac{-2 \ln \epsilon}{n}} \right) \epsilon.
\end{equation}
Now by \eqref{eqrl3-17+},
\begin{equation}
\label{eq-proof-dsc-10}
\delta = \frac{\sigma_{D} (t;P)}{\sqrt{n}} Q^{-1}
\left( \epsilon + \frac{1}{\sqrt{n}} \left( 2 \epsilon \sqrt{-2 \ln \epsilon} + \frac{ C_{BE} M_{D}
(t;P)}{\sigma^3_{D} (t;P)} \right) \right)
\end{equation}
will guarantee \eqref{eq-proof-dsc-9}.
Consequently,
\eqref{eq-thm-dsc-6} is proved by substituting
\eqref{eq-proof-dsc-10} and $\epsilon_n = \epsilon$ into
\eqref{eq-proof-dsc-5+} and applying the trivial bound
$P(B_{t,n,\delta}) \leq 1$, and \eqref{eq-thm-dsc-7} follows from
the property of the $Q^{-1}$ function shown in the proof of Theorem
\ref{thm-bimc}.
\end{enumerate}
\end{IEEEproof}
\begin{remark}
\label{re4}
Remarks similar to Remarks \ref{re2} and \ref{re3} can be drawn
here too for Theorem \ref{thm-dimc}.
\end{remark}
For maximal error probability, we have the following corollary, which can be proved similarly.
\begin{corollary}
\label{col-dimc}
Given a DIMC, for any channel code $\mathcal{C}_n$ of block length $n$ with
maximum error probability $P_m (\mathcal{C}_n) = \epsilon_n$,
\begin{equation}
\label{eq-thm-maxdsc-0}
R(\mathcal{C}_n) \leq I(t;P) - \delta - \frac{\ln \epsilon_n - \ln
P(B_{t,n,\delta})}{n} + |\mathcal{X}| \frac{\ln (n+1)}{n} - \frac{\ln \sqrt{\frac{-2 \ln \epsilon_n}{n}}}{n}
\end{equation}
for any $t \in {\cal P}_n$ such that at least a fraction
$(n+1)^{-|\mathcal{X}|}$ of the codewords in $\mathcal{C}_n$
have type $t$, where $\delta$ is the largest number satisfying
\begin{equation}
\label{eq-thm-maxdsc-0+}
\left( 1 + \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right) \epsilon_n \leq P_{t,\delta}.
\end{equation}
Moreover, if $t \in {\cal P}_n$ satisfies \eqref{eq-def-lambda*+} and \eqref{eq3rl-14-1}, then the following hold:
\begin{enumerate}
\item
\begin{equation}
\label{eq-thm-maxdsc-1}
R(\mathcal{C}_n) \leq I(t;P) - \delta - \frac{\ln \epsilon_n - \ln
P(B_{t,n,\delta})}{n} + |\mathcal{X}| \frac{\ln (n+1)}{n}
- \frac{\ln \sqrt{\frac{-2 \ln \epsilon_n}{n}}}{n}
\end{equation}
where $\delta$ is the solution to
\begin{equation}
\label{eq-thm-maxdsc-2}
\left( 1 + \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right) \epsilon_n = \underline{\xi}_{D,-} (t;P,\lambda,n) e^{-n r_{-} (t,\delta) }
\end{equation}
with $\delta_{-} (t,\lambda) = \delta$.
\item When $\epsilon_n = \epsilon$ satisfying $\epsilon + \frac{1}{\sqrt{n}} \left( \epsilon \sqrt{-2 \ln \epsilon} + \frac{
C_{BE} M_{D} (t;P)}{\sigma^3_{D} (t;P)} \right) <1$,
\begin{eqnarray}
\label{eq-thm-maxdsc-6}
R(\mathcal{C}_n) &\leq& I(t;P) - \frac{\sigma_{D} (t;P)}{\sqrt{n}} Q^{-1}
\left( \epsilon + \frac{1}{\sqrt{n}} \left( \epsilon \sqrt{-2 \ln \epsilon} + \frac{
C_{BE} M_{D} (t;P)}{\sigma^3_{D} (t;P)} \right) \right) \nonumber
\\
&&{+}\: (|\mathcal{X}|+0.5) \frac{\ln
n}{n} - \frac{\ln \epsilon}{n} \\
&=& I(t;P) - \frac{\sigma_{D} (t;P)}{\sqrt{n}} Q^{-1}
\left( \epsilon \right) + (|\mathcal{X}|+0.5) \frac{\ln
(n+1)}{n} + O(n^{-1}).
\end{eqnarray}
\end{enumerate}
\end{corollary}
\subsection{Taylor-Type Expansion}
Fix a DIMC $ P = \{p(y|x), x \in \mathcal{X}, y \in \mathcal{Y}\}$ with its capacity $C_{\mathrm{DIMC}} >0$. For any block length $n$ and average error probability $\epsilon$, let $R_n (\epsilon)$ be the best coding rate achievable with block length $n$ and average error probability $\leq \epsilon$, as defined in \eqref{eq-bt-1}. In this subsection, we extend Theorem~\ref{thm-BIMSC-second-order} to establish a Taylor-type expansion of $R_n (\epsilon)$ in the case of DIMC.
We begin by reviewing the non-asymptotic achievability of jar decoding established in \cite{yang-meng:jardecoding}. It was proved
there that under jar decoding, Shannon random codes $\mathcal{C}_n$ of
block length $n$ based on any type $t \in {\cal P}_n $ satisfying \eqref{eq-def-lambda*+} and \eqref{eq3rl-14-1} have the following performance:
\begin{enumerate}
\item
\begin{equation}
\label{eq-thm-dimc-2}
{R} (\mathcal{C}_{n}) \geq {I} (t;P) - \delta - r_{-} (t,\delta) -
\frac{( 0.5+ |\mathcal{X}|) \ln (n+1) - \ln \frac{2(1- C_{BE}) M_{\mathrm{D},-} (t;P,\lambda)}
{\sqrt{n}\sigma^3_{\mathrm{D},-} (t;P,\lambda)} }{n}
\end{equation}
while maintaining
\begin{equation}
\label{eq-thm-dimc-1}
P_e (\mathcal{C}_{n}) \leq \left( \bar{\xi}_{D,-} (t;P,\lambda,n) + \frac{2(1- C_{BE}) M_{\mathrm{D},-} (t;P,\lambda)}
{\sqrt{n}\sigma^3_{\mathrm{D},-} (t;P,\lambda)} \right) e^{-n r_{-} (t,\delta) }
\end{equation}
for any $\delta \in (0, \Delta^*_{-} (t))$,
where $ \lambda = {\partial r_{-} (t, \delta) \over \partial \delta }$
satisfying $\delta_{-} (t,\lambda) = \delta$.
\item
\begin{equation}
\label{eq-thm-dimc-4}
{R} (\mathcal{C}_{n}) \geq I(t;P) -
\sigma_{\mathrm{D}} (t; P) \sqrt{\frac{2 \alpha
\ln n}{n}} - \frac{ (0.5+\alpha + |\mathcal{X}|) \ln (n+1)}{n}
- O \left( \frac{\ln \ln n}{n} \right)
\end{equation}
while maintaining
\begin{equation}
\label{eq-thm-dimc-3}
P_e (\mathcal{C}_{n}) \leq
\frac{n^{-\alpha}}{2 \sqrt{ \pi \alpha \ln n}} + O \left( n^{-\alpha} \frac{\ln n}{\sqrt{n}} \right)
= \Theta \left( \frac{n^{-\alpha}}{\sqrt{\ln n}}\right)
\end{equation}
for any $\alpha \geq 0$.
\item
\begin{eqnarray}
\label{eq-thm-dimc-6}
{R} (\mathcal{C}_{n}) &\geq&
{I}(t;P) -
\frac{c}{\sqrt{n}} - \left( \frac{1}{2} + |\mathcal{X}| \right) \frac{\ln (n+1)}{n}
- \frac{1}{n} \ln \frac{(1-C_{BE}) M_{\mathrm{D}}
(t;P)}{\sigma^3_{\mathrm{D}} (t;P)}
\end{eqnarray}
while maintaining
\begin{eqnarray}
\label{eq-thm-dimc-5}
P_e (\mathcal{C}_{n}) &\leq&
Q \left( \frac{c}{\sigma_{\mathrm{D}} (t;P)} \right) +
\frac{M_{\mathrm{D}} (t; P)}{\sigma^3_{\mathrm{D}} (t; P)}
\frac{1}{\sqrt{n}}
\end{eqnarray}
for any real number $c$.
\end{enumerate}
By combining \eqref{eq-thm-dimc-2} and \eqref{eq-thm-dimc-1} with \eqref{eq-thm-dsc-0} and \eqref{eq-thm-dsc-0+} or with \eqref{eq-thm-dsc-1} and \eqref{eq-thm-dsc-2}, it is expected that $R_n (\epsilon)$ can be expanded as
\begin{equation} \label{eq-so-dimc-0-}
R_n (\epsilon) = {I} (t;P) - \delta + o(\delta)
\end{equation}
for some $t \in {\cal P}$, where $\delta$ is defined according to \eqref{eq-thm-dimc-1}, \eqref{eq-thm-dsc-0+}, or \eqref{eq-thm-dsc-2}. In the rest of this subsection, we shall demonstrate with mathematical rigor that this is indeed the case. To simplify our argument, we impose the following conditions\footnote{Some of these conditions, for example, Condition C3, can be relaxed. Here we choose not to do so in order not to make our subsequent argument unnecessarily complicated.} on the channel:
\begin{description}
\item[(C1)] For any $t \in {\cal P}$, $M_{D} (t; P ) < \infty $.
\item[(C2)] $\sigma^2_D (t; P) =0$ implies $I(t; P) =0$.
\item[(C3)] For any $t \in {\cal P}$, $ \lambda^*_{-} (t; P) = +\infty$.
\item[(C4)] There exists $\lambda^* >0$ such that $\delta_{-} (t, \lambda)$, $\sigma^2_{D, -} (t; P, \lambda) $, $ M_{D, -} (t; P , \lambda)$, $ \hat{M}_{D, -} (t; P , \lambda)$, and $ r_{ -} (t, \delta_{-}(t, \lambda)) $ are continuous functions of $t $ and $\lambda$ over $(t, \lambda) \in {\cal P} \times [0, \lambda^*]$.
\item[(C5)] There exists $s^* >0$ such that $r_{-}^{-1} (t, s) $ is a continuous function of $t$ and $s$ over $(t, s) \in {\cal P} \times [0, s^*]$, where $ r_{-}^{-1} (t, \cdot) $ is an inverse function of $r_{-} (t, \cdot)$.
\end{description}
Since $r_{-} (t, \delta)$ is a continuous and strictly increasing function of $\delta$ before it reaches $+\infty$ (which may or may not happen), it can be easily verified that for any $s \geq 0$
\begin{eqnarray} \label{eq-so-dimc-0}
r_{-}^{-1} (t, s) & = & \max \{ \delta: r_{-} (t, \delta) \leq s \} \nonumber \\
& = & \inf \{ \delta: r_{-} (t, \delta) > s \}.
\end{eqnarray}
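Given this monotonicity, $r_{-}^{-1} (t, s)$ can be computed by bisection. The sketch below uses the small-$\delta$ quadratic form of $r_{-} (t, \cdot)$ from \eqref{eqrl3-4-} as a stand-in (with a hypothetical value of $\sigma^2_D (t;P)$); any continuous, strictly increasing exponent function is handled the same way:

```python
import math

sigma_D2 = 0.4279           # hypothetical value of sigma_D^2(t;P)

def r(delta):
    """Stand-in exponent: the quadratic small-delta form of r_-(t, .)."""
    return delta * delta / (2.0 * sigma_D2)

def r_inv(s, hi=10.0):
    """max{delta : r(delta) <= s} computed by bisection."""
    lo = 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if r(mid) <= s:
            lo = mid
        else:
            hi = mid
    return lo

s = 1e-3
delta = r_inv(s)
print(delta)                # matches the closed form sqrt(2 * sigma_D2 * s)
assert abs(delta - math.sqrt(2 * sigma_D2 * s)) < 1e-9
```

The bisection only relies on $r$ being nondecreasing, so it applies directly to the max/inf characterization displayed above.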
In view of the definitions and properties of $\delta_{-} (t, \lambda)$, $\sigma^2_{D, -} (t; P, \lambda) $, $ M_{D, -} (t; P , \lambda)$, $ \hat{M}_{D, -} (t; P , \lambda)$, and $ r_{ -} (t, \delta) $ (see
\cite{yang-meng:nep} for details and examples), Conditions (C1) to (C5) are generally met by most channels, particularly by channels with discrete output alphabets and by discrete-input additive white Gaussian noise channels.
To characterize $\delta$ in \eqref{eq-so-dimc-0-} analytically, we need a counterpart of Lemma~\ref{le1}. To this end, define for any $ 0< c < C_{\mathrm{DIMC}}$
\begin{equation} \label{eq-so-dimc-0+}
{\cal P} (c) \mbox {$ \ \stackrel{\Delta}{=} $} \{ t \in {\cal P}: I(t; P) \geq c \}
\end{equation}
\begin{equation} \label{eq-so-dimc-0++}
{\cal P}_n (c) \mbox {$ \ \stackrel{\Delta}{=} $} \{ t \in {\cal P}_n: I(t; P) \geq c \}
\end{equation}
and for any type $t \in {\cal P}$ satisfying $\sigma^2_D (t; P) >0$
\begin{equation}
\label{eq-so-dimc-1}
g_{t;P,n} (\delta) \mbox {$ \ \stackrel{\Delta}{=} $} e^{\frac{n \lambda^2 \sigma^2_{D, -} (t;P,\lambda)}{2}} Q(\sqrt{n} \lambda \sigma_{D, -} (t;P,\lambda)) e^{- n r_{-} (t,\delta)}
\end{equation}
where $\lambda = {\partial r_{-} (t, \delta) \over \partial \delta }$. Note that ${\cal P}(c)$ is a closed set, and it follows from Condition (C2) that $\sigma^2_D (t; P) >0$ for any $t \in {\cal P} (c) $. Interpret $ g_{t;P,n} (\delta)$ as a function of $\lambda$ through $\delta = \delta_{-} (t, \lambda)$. Then we have the following lemma.
\begin{lemma} \label{le2}
There exists $\lambda^+ >0$ such that for any $n >0$ and $ t \in {\cal P}(c) $, $ g_{t;P,n} ( \delta_{-} (t, \lambda) )$ is a strictly decreasing function of $\lambda$ over $\lambda \in [0, \lambda^+]$.
\end{lemma}
\begin{IEEEproof}
The proof is in parallel with that of Lemma~\ref{le1}. As such, we point out only places where differences occur. In the place of \eqref{eq-so-3-}, we now have
\begin{equation} \label{eq-tm-3}
\frac{d \sigma^2_{D, -}(t;P ,\lambda)}{d \lambda} = - \hat{M}_{D, -}(t;P ,\lambda) \;.
\end{equation}
In parallel with \eqref{eq-so-14+} and \eqref{eq-so-15+}, we now have for any $t \in {\cal P}(c)$
\begin{eqnarray}
\lefteqn{\frac{d g_{t;P,n} (\delta_{-} (t, \lambda))}{d \lambda}} \nonumber \\
&\leq & e^{- n r_{-} (t, \delta_{-}(t, \lambda))}
\frac{\sqrt{n} \sigma_{D, -} (t;P, \lambda)}{\sqrt{2 \pi}}
\left (
\left| - \frac{ \lambda \frac{d \sigma^2_{D, -}(t;P ,\lambda)}{d \lambda} }
{2 \sigma^2_{D, -}(t;P ,\lambda) \left( 1+ n \lambda^2 \sigma^2_{D,-}(t;P,\lambda) \right)} \right| -1 \right ) \nonumber \\
& = & e^{- n r_{-} (t, \delta_{-}(t, \lambda))}
\frac{\sqrt{n} \sigma_{D, -} (t;P, \lambda)}{\sqrt{2 \pi}}
\left (
\left| \frac{ \lambda \hat{M}_{D, -}(t;P ,\lambda) }
{2 \sigma^2_{D, -}(t;P ,\lambda) \left( 1+ n \lambda^2 \sigma^2_{D,-}(t;P,\lambda) \right)} \right| -1 \right )
\label{eq-so-le2-1} \\
& \leq & e^{- n r_{-} (t, \delta_{-}(t, \lambda))}
\frac{\sqrt{n} \sigma_{D, -}(t;P ,\lambda)}{\sqrt{2 \pi}}
\left (
\left| \frac{ \lambda \hat{M}_{D, -}(t;P ,\lambda) }
{2 \sigma^2_{D, -}(t;P,\lambda) } \right| -1 \right) .\label{eq-so-le2-2}
\end{eqnarray}
Since ${\cal P}(c)$ is closed, it then follows from Condition (C4) that there is a $\lambda^+ >0$ such that for any $\lambda \in [0, \lambda^+] $ and any $t \in {\cal P}(c)$
\[ \left| \frac{ \lambda \hat{M}_{D, -}(t;P ,\lambda) }
{2 \sigma^2_{D, -}(t;P,\lambda) } \right| -1 <0 \]
and hence
\[ \frac{d g_{t;P,n} (\delta_{-} (t, \lambda))}{d \lambda} <0 \]
for any $n >0$. This completes the proof of Lemma~\ref{le2}.
\end{IEEEproof}
\begin{remark}
In view of \eqref{eq-so-le2-1}, it is clear that when $n$ is large, $ g_{t;P,n} (\delta_{-} (t, \lambda)) $ is a strictly decreasing function of $\lambda$ over an interval even larger than $[0, \lambda^+]$ for each and every $t \in {\cal P}(c)$.
\end{remark}
Now let
\[ \epsilon_n^+ \mbox {$ \ \stackrel{\Delta}{=} $} \max \{ g_{t;P,n} (\delta_{-} (t, \lambda^+ /2 )): t \in {\cal P}(c) \} \]
which, in view of Condition (C4) and the fact that ${\cal P} (c)$ is closed, is well defined and also an exponential function of $n$. For any $ \epsilon_n^+ \leq \epsilon \leq 1/2$ and $t \in {\cal P}(c)$, let $\delta_{t,n} (\epsilon)$ be the unique solution to
\begin{equation}
\label{eq-so-dimc-2}
g_{t;P,n} (\delta) = \epsilon\;.
\end{equation}
Further define
\begin{equation} \label{eq-so-sc}
s(c) \mbox {$ \ \stackrel{\Delta}{=} $} \max \left\{ s: 0< s \leq s^*, r_{-}^{-1} (t, s) \leq { C_{\mathrm{DIMC}} - c \over 2} \;\; \forall t \in {\cal P} \right\}
\end{equation}
and let $\epsilon_n (c)$ be the unique solution $\epsilon$ to
\begin{equation} \label{eq-so-sc1}
{- \ln \epsilon \left( 1 +2\sqrt{-2 \ln \epsilon \over n} \right) \over n} = s(c) .
\end{equation}
It is easy to see that in view of Condition (C5), $s(c) >0$ is well defined and once again $\epsilon_n (c)$ is also an exponential function of $n$. Let $\epsilon_n^u <1$ be the unique solution $\epsilon$ to
\begin{equation} \label{eq-so-sc2}
\epsilon \left( 1 +2\sqrt{-2 \ln \epsilon \over n} \right) =1 .
\end{equation}
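The thresholds $\epsilon_n (c)$ and $\epsilon_n^u$ are defined only implicitly through \eqref{eq-so-sc1} and \eqref{eq-so-sc2}, but both defining functions are monotone over the relevant range, so the two quantities are easy to compute numerically. The following Python sketch (the function names are ours and purely illustrative) recovers both by bisection:

```python
import math

def f(eps, n):
    # left-hand side of the defining equation for eps_n^u, via u = -ln(eps)
    u = -math.log(eps)
    return eps * (1.0 + 2.0 * math.sqrt(2.0 * u / n))

def solve_eps_u(n, lo=0.5, hi=1.0 - 1e-12, iters=200):
    # bisection for the crossing f(eps, n) = 1 with eps < 1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid, n) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def solve_eps_c(n, s, lo=1e-300, hi=0.999, iters=200):
    # bisection for (-ln eps)(1 + 2 sqrt(-2 ln eps / n)) / n = s;
    # the left-hand side is decreasing in eps, so bisect in log-space
    def g(eps):
        u = -math.log(eps)
        return u * (1.0 + 2.0 * math.sqrt(2.0 * u / n)) / n
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # geometric mean: eps may span many decades
        if g(mid) > s:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)
```

For instance, with $n = 1000$ and $s(c) = 0.01$, the two solvers return $\epsilon_n^u$ close to $1$ and an exponentially small $\epsilon_n (c)$, consistent with the discussion above.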
Note that
\[ \max\{ I (t; P): t \in {\cal P}_n \} = C_{\mathrm{DIMC}} - O \left( {1 \over n^2 } \right). \]
Let $N(c)$ be the smallest integer $N >0$ such that
\begin{equation} \label{eq-so-sc3}
\max\{ I (t; P): t \in {\cal P}_n \} \geq C_{\mathrm{DIMC}} - {C_{\mathrm{DIMC}} -c \over 2}
\end{equation}
for all $n \geq N$. Then we have the following Taylor-type expansion of $R_n (\epsilon)$.
\begin{theorem}
\label{thm-DIMC-second-order}
For any $n \geq N(c)$ and any $\max\{\epsilon_n^+, \epsilon_n (c) \} \leq \epsilon< \epsilon_n^u $, let
\begin{eqnarray}
\label{eq-so-dimc-3+}
t^* &\mbox {$ \ \stackrel{\Delta}{=} $}& \operatornamewithlimits{arg\,max}_{t \in \mathcal{P}_n (c) } \left[ I(t;P) - \delta_{t,n} (\epsilon) \right] \\
\label{eq-so-dimc-3-}
t^{\#} &\mbox {$ \ \stackrel{\Delta}{=} $}& \operatornamewithlimits{arg\,max}_{t \in \mathcal{P}_n (c)} \left[ I(t;P) - \frac{\sigma_D (t;P)}{\sqrt{n}} Q^{-1} (\epsilon) \right].
\end{eqnarray}
Then
\begin{equation}
\label{eq-so-dimc-3}
\left| R_n (\epsilon) - \left( I(t^*;P) - \delta_{t^*,n} (\epsilon) \right) \right|
\leq o \left( \delta_{t^*,n} (\epsilon) \right)
\end{equation}
where
\begin{eqnarray}
\label{eq-so-dimc-4}
o \left( \delta_{t^*,n} (\epsilon) \right)
&=& r_{-} (t^*, \delta_{t^*,n} (\epsilon)) + \frac{ (|\mathcal{X}|+1.5) \ln (n+1) + d_1}{n}
\end{eqnarray}
if $\epsilon \leq \frac{1}{3} $, and
\begin{equation}
\label{eq-so-dimc-5}
\left| R_n (\epsilon) - \left( I(t^{\#};P) - \frac{\sigma_D (t^{\#};P)}{\sqrt{n}} Q^{-1} (\epsilon) \right) \right|
\leq \frac{(|\mathcal{X}|+1) \ln (n+1) + d_2}{n}
\end{equation}
otherwise, where $d_1$ and $d_2$ are constants depending on the channel, but independent of $n$ and $\epsilon$.
\end{theorem}
\begin{IEEEproof}
For any $t \in {\cal P}_n $ and $0 < \epsilon < 1$, let
\[ \delta_{t, n}^P (\epsilon) =\sup \left \{ \delta >0: P_{t,\delta} \geq \left( 1 + 2 \sqrt{\frac{-2 \ln \epsilon}{n}} \right) \epsilon \right \} .\]
By Theorem~\ref{thm-dimc} and the trivial bound $P(B_{t,n,\delta}) \leq 1$, it is not hard to verify that
\begin{equation}
\label{eq-so-dimc-6}
R_n (\epsilon) \leq \max_{t \in {\cal P}_n } [ I(t;P) - \delta_{t, n}^P ] - \frac{\ln \epsilon + \ln \frac{-2 \ln \epsilon}{n}}{n} +
\frac{\ln \left( 1 + \sqrt{\frac{-2 \ln \epsilon}{n}} \right) + |\mathcal{X}| \ln (n+1)}{n}.
\end{equation}
Let us now examine
\[\max_{t \in {\cal P}_n } [ I(t;P) - \delta_{t, n}^P ] .\]
In view of the Chernoff bound (see Theorem 8 in \cite{yang-meng:nep}),
\[ P_{t,\delta} \leq e^{-n r_{-} (t, \delta)} \]
for any $t \in {\cal P}_n $ and $\delta >0$, which, together with \eqref{eq-so-dimc-0}, implies
\begin{eqnarray} \label{eq-so-dt-1}
\delta_{t, n}^P & \leq & r_{-}^{-1} \left (t, { -\ln \left( 1 + 2 \sqrt{\frac{-2 \ln \epsilon}{n}} \right) \epsilon \over n }\right ) \\
& \leq & r_{-}^{-1} (t, s(c)) \label{eq-so-dt-2} \\
& \leq & {C_{\mathrm{DIMC}} -c \over 2} \label{eq-so-dt-3}
\end{eqnarray}
whenever $\max\{\epsilon_n^+, \epsilon_n (c) \} \leq \epsilon< \epsilon_n^u $. In the above derivation, \eqref{eq-so-dt-1} is due to \eqref{eq-so-dimc-0}; and \eqref{eq-so-dt-2} and \eqref{eq-so-dt-3} follow from \eqref{eq-so-sc}, \eqref{eq-so-sc1}, and \eqref{eq-so-sc2}. Therefore,
\begin{eqnarray} \label{eq-so-dt-4}
\max_{t \in {\cal P}_n } [ I(t;P) - \delta_{t, n}^P ]
& \geq & \max_{t \in {\cal P}_n } I(t;P) - {C_{\mathrm{DIMC}} -c \over 2} \nonumber \\
& \geq & c
\end{eqnarray}
where the last inequality is due to \eqref{eq-so-sc3}. In view of \eqref{eq-so-dt-4}, it is not hard to see that for any $t \in {\cal P}_n$ achieving $\max_{t \in {\cal P}_n } [ I(t;P) - \delta_{t, n}^P ] $,
\[ I(t; P) \geq c + \delta_{t, n}^P \geq c \]
and hence
\[ \max_{t \in {\cal P}_n } [ I(t;P) - \delta_{t, n}^P ] = \max_{t \in {\cal P}_n (c) } [ I(t;P) - \delta_{t, n}^P ] \]
which, together with \eqref{eq-so-dimc-6}, implies
\begin{equation}
\label{eq-so-dt-5}
R_n (\epsilon) \leq \max_{t \in {\cal P}_n (c) } [ I(t;P) - \delta_{t, n}^P ] - \frac{\ln \epsilon + \ln \frac{-2 \ln \epsilon}{n}}{n} +
\frac{\ln \left( 1 + \sqrt{\frac{-2 \ln \epsilon}{n}} \right) + |\mathcal{X}| \ln (n+1)}{n}.
\end{equation}
When $\epsilon > \frac{1}{3}$, it follows from \eqref{eqrl3-17+} and \eqref{eq-proof-dsc-10} that for any $t \in {\cal P}_n (c)$,
\begin{eqnarray} \label{eq-so-dt-6}
\delta_{t, n}^P & \geq & \frac{\sigma_{D} (t;P)}{\sqrt{n}} Q^{-1}
\left( \epsilon + \frac{1}{\sqrt{n}} \left( 2 \epsilon \sqrt{-2 \ln \epsilon} + \frac{ C_{BE} M_{D}
(t;P)}{\sigma^3_{D} (t;P)} \right) \right) \nonumber \\
& \geq & \frac{\sigma_{D} (t;P)}{\sqrt{n}} Q^{-1}
( \epsilon ) - \sqrt{2\pi} e^{{[Q^{-1} (\epsilon)]^2 \over 2} } \frac{\sigma_{D} (t;P)}{n} \left( 2 \epsilon \sqrt{-2 \ln \epsilon} + \frac{ C_{BE} M_{D}
(t;P)}{\sigma^3_{D} (t;P)} \right) .
\end{eqnarray}
Since ${\cal P} (c)$ is closed, it follows from Condition (C4) that $ \sigma_{D} (t;P)$ and $\frac{ M_{D} (t;P)}{\sigma^3_{D} (t;P)}$ are bounded over ${\cal P} (c)$. Plugging \eqref{eq-so-dt-6} into \eqref{eq-so-dt-5} yields
\[ R_n (\epsilon) \leq \max_{t \in {\cal P}_n (c) } \left [ I(t;P) - \frac{\sigma_{D} (t;P)}{\sqrt{n}} Q^{-1}
( \epsilon ) \right ] + \frac{( |\mathcal{X}| +1) \ln (n+1) +d }{n}
\]
for some constant $d$, which, together with the achievability in \eqref{eq-thm-dimc-6} and \eqref{eq-thm-dimc-5}, implies
\eqref{eq-so-dimc-5}.
Now let us focus on the case when $\epsilon \leq \frac{1}{3}$. For any $t \in {\cal P}(c)$, let $\underline{\delta}_{t, n} (\epsilon) $ be the unique solution to
\begin{equation}
\label{eq-so-dimc-7}
\left( 1 + 2 \sqrt{\frac{-2 \ln \epsilon}{n}} \right) \epsilon = \xi_{D,-} (t;P,\lambda,n) e^{-n r_{-} (t,\delta)}
\end{equation}
where $\lambda = {\partial r_{-} (t, \delta) \over \partial \delta}$.
By following the argument in the proof of Theorem \ref{thm-BIMSC-second-order}, it is not hard to verify that for any $t \in {\cal P}_n (c)$
\begin{equation} \label{eq-so-dt-7}
\delta_{t, n}^P (\epsilon) \geq \underline{\delta}_{t, n} (\epsilon) \geq \delta_{t, n} (\epsilon) - { d \over n}
\end{equation}
for some constant $d$ independent of $n$, $\epsilon$, and $t$. Plugging \eqref{eq-so-dt-7} into \eqref{eq-so-dt-5} then yields
\begin{equation}
\label{eq-so-dimc-7-}
R_n (\epsilon) \leq I(t^*;P) - \delta_{t^*,n} (\epsilon)
- \frac{\ln \epsilon + \ln \frac{-2 \ln \epsilon}{n}}{n} +
\frac{ \sqrt{\frac{-2 \ln \epsilon}{n}}+ |\mathcal{X}| \ln (n+1) + d}{n} .
\end{equation}
In the meantime,
\begin{eqnarray}
\label{eq-so-dimc-8}
\epsilon &=& g_{t^*;P,n} (\delta_{t^*,n} (\epsilon)) \nonumber \\
&\geq&
\frac{1}{\sqrt{2 \pi} \left( \sqrt{n} \lambda_{t^*,n} \sigma_{D,-} (t^*;P,\lambda_{t^*,n})
+ \frac{1}{ \sqrt{n} \lambda_{t^*,n} \sigma_{D, -} (t^*;P,\lambda_{t^*,n}) } \right)}
e^{- n r_{-} (t^*,\delta_{t^*,n} (\epsilon))}
\end{eqnarray}
where $\lambda_{t^*,n} = \left. \frac{\partial r_{-} (t^*, \delta)}{\partial \delta} \right|_{\delta = \delta_{t^*,n} (\epsilon)}$. Consequently,
\begin{eqnarray}
\label{eq-so-dimc-9}
\frac{- \ln \epsilon}{n} &\leq& r_{-} (t^*, \delta_{t^*,n} (\epsilon))
+ \frac{\ln \left[ \sqrt{2 \pi} \left( \sqrt{n} \lambda_{t^*,n} \sigma_{D,-} (t^*;P,\lambda_{t^*,n})
+ \frac{1}{ \sqrt{n} \lambda_{t^*,n} \sigma_{D,-} (t^*;P,\lambda_{t^*,n}) } \right) \right]}{n}
\nonumber \\
&\leq& r_{-} (t^*, \delta_{t^*,n} (\epsilon)) + \frac{\ln n}{2 n} + \frac{\eta_1}{n}
\end{eqnarray}
where $\eta_1$ is a constant independent of $n$, $\epsilon$, and $t^*$. Now substituting
\eqref{eq-so-dimc-9} and $\epsilon \leq \frac{1}{3}$ into \eqref{eq-so-dimc-7-}
yields
\begin{eqnarray}
\label{eq-so-dimc-11}
R_n (\epsilon) &\leq& I(t^*;P) - \delta_{t^*,n} (\epsilon) +
r_{-} (t^*, \delta_{t^*,n} (\epsilon)) \nonumber \\
&&{+}\:
\frac{ - \ln \frac{2 \ln 3}{n} + \eta_1+\sqrt{r_{-} (t^*, \delta_{t^*,n} (\epsilon)) + \frac{1}{2 e} + \frac{\eta_1}{n}}
+ \frac{1}{2}\ln n + |\mathcal{X}| \ln (n+1) + d }{n} \nonumber \\
&\leq& I(t^*;P) - \delta_{t^*,n} (\epsilon) +
r_{-} (t^*, \delta_{t^*,n} (\epsilon)) + \frac{\underline{d}_1 + \left( |\mathcal{X}| + \frac{3}{2} \right) \ln (n+1)}{n}
\end{eqnarray}
for some constant $\underline{d}_1$ independent of $n$, $\epsilon$, and $t^*$, where the last inequality is due to the fact that in view of Condition (C4), $ r_{-} (t^*, \delta_{t^*,n} (\epsilon)) $ is bounded over $t\in {\cal P}(c)$ and $\epsilon \geq \max \{ \epsilon_n^+, \epsilon_n (c) \}$.
To complete the proof, let us go back to the achievability given in \eqref{eq-thm-dimc-2} and \eqref{eq-thm-dimc-1}. Now choose $t$ to be $t^*$, and follow the argument in the proof of Theorem \ref{thm-BIMSC-second-order}. Then it is not hard to show that
\begin{equation}
\label{eq-so-dimc-12}
R_n (\epsilon) \geq I(t^*;P) - \delta_{t^*,n} (\epsilon)
- r_{-} (t^*,\delta_{t^*,n} (\epsilon)) - \frac{(|\mathcal{X}|+1) \ln (n+1) + \bar{d}_1}{n}
\end{equation}
where $\bar{d}_1$ is a constant independent of $n$, $\epsilon$, and $t^*$. Combining \eqref{eq-so-dimc-12} with \eqref{eq-so-dimc-11} completes the proof of Theorem~\ref{thm-DIMC-second-order}.
\end{IEEEproof}
Remarks similar to those immediately after Theorem \ref{thm-BIMSC-second-order} also apply here. In particular, Theorem~\ref{thm-DIMC-second-order} and the achievability of jar decoding given in \eqref{eq-thm-dimc-2} and \eqref{eq-thm-dimc-1} to \eqref{eq-thm-dimc-6} and \eqref{eq-thm-dimc-5} once again imply that jar decoding is indeed optimal up to the second order coding performance in the non-asymptotic regime for any DIMC. In addition, the following remarks are helpful for the computation of the Taylor-type expansion of $R_n (\epsilon)$ as expressed in \eqref{eq-so-dimc-3+} to \eqref{eq-so-dimc-5}.
\begin{remark} \label{re_c}
When $I(t; P)$, $\delta_{-} (t, \lambda)$, $\sigma^2_{D, -} (t; P, \lambda) $, $ M_{D, -} (t; P , \lambda)$, $ \hat{M}_{D, -} (t; P , \lambda)$, and $ r_{ -} (t, \delta_{-}(t, \lambda)) $ are all continuously differentiable with respect to $t$ over $ t \in {\cal P}(c)$ and $ \lambda \in [0, \lambda^*]$ (which is true for most channels, including in particular channels with discrete output alphabets and discrete-input additive white Gaussian channels), $\mathcal{P}_n (c) $ in the definitions of $t^*$ and $t^{\#}$ can be replaced by $\mathcal{P}(c)$. Thus, in this case,
\begin{eqnarray}
\label{eq-so-dimc-3++}
t^* &\mbox {$ \ \stackrel{\Delta}{=} $}& \operatornamewithlimits{arg\,max}_{t \in \mathcal{P} (c) } \left[ I(t;P) - \delta_{t,n} (\epsilon) \right] \\
\label{eq-so-dimc-3--}
t^{\#} &\mbox {$ \ \stackrel{\Delta}{=} $}& \operatornamewithlimits{arg\,max}_{t \in \mathcal{P} (c)} \left[ I(t;P) - \frac{\sigma_D (t;P)}{\sqrt{n}} Q^{-1} (\epsilon) \right].
\end{eqnarray}
Hereafter, we shall assume that the channel satisfies this continuously differentiable condition, and use \eqref{eq-so-dimc-3++} and \eqref{eq-so-dimc-3+}, or \eqref{eq-so-dimc-3--} and \eqref{eq-so-dimc-3-} interchangeably.
\end{remark}
\begin{remark} \label{re_ci1}
It is worth pointing out the impact of $c$ on the maximization problems given in \eqref{eq-so-dimc-3++}, \eqref{eq-so-dimc-3+}, \eqref{eq-so-dimc-3--}, and \eqref{eq-so-dimc-3-}. In view of the definitions of $s(c)$ and $\epsilon_n (c)$ in \eqref{eq-so-sc} and \eqref{eq-so-sc1}, it is not hard to see that when $\epsilon $ is relatively large with respect to $n$ (in the sense that ${-\ln \epsilon \over n}$ is small), one can select $c$ to be close to $C_{\mathrm{DIMC}}$. In this case, it suffices to search a small range ${\cal P} (c)$ for optimal $t^*$. On the other hand,
when $\epsilon $ is relatively small with respect to $n$, e.g., an exponential function of $n$, $c$ should be selected to be far below $C_{\mathrm{DIMC}}$ and hence one has to search a large range ${\cal P} (c)$ for the optimal $t^*$.
\end{remark}
\begin{remark} \label{re_bt_dt}
When the Taylor-type expansion of $R_n (\epsilon)$ in Theorem \ref{thm-DIMC-second-order} is applied to the case of BIMSC, it yields essentially the same result as in Theorem \ref{thm-BIMSC-second-order}, which can be explained as follows. For any BIMSC, $t(0)$ fully characterizes the type $t$. Then by symmetry, $\frac{\partial \delta_{t,n} (\epsilon)}{\partial t(0)} = 0$ at $t(0)=0.5$ for any $n$ and $\epsilon$. Note that $\delta_{t,n} (\epsilon) = \delta_n (\epsilon)$ when $t(0) = 0.5$, the capacity achieving input distribution. Therefore,
\begin{eqnarray}
\label{eq-so-dimc-13}
\max_{t \in \mathcal{P} (c)} [ I(t;P) - \delta_{t,n} (\epsilon) ]
&=& \max_{t \in \mathcal{P} (C_{\mathrm {BIMSC}} - O \left( \delta_n (\epsilon) \right) )} [ I(t;P) - \delta_{t,n} (\epsilon) ]
\nonumber \\
&=& C_{\mathrm{BIMSC}} - \delta_n (\epsilon) + O \left( \delta^2_n (\epsilon) \right) .
\end{eqnarray}
Consequently, by observing that the high order term $o(\delta_n (\epsilon))$ in Theorem \ref{thm-BIMSC-second-order} is also of the order of $\delta^2_n (\epsilon)$, the Taylor-type expansion of $R_n (\epsilon)$ for the BIMSC in Theorem \ref{thm-DIMC-second-order} is seen to be the same as that in Theorem \ref{thm-BIMSC-second-order}.
\end{remark}
\subsection{Comparison with Asymptotic Analysis and Implication}
It is instructive to compare Theorem~\ref{thm-DIMC-second-order} with the second order asymptotic performance analysis as $n$ goes to $\infty$.
{\em Asymptotic analysis with constant $0< \epsilon <1$ and $n \to \infty$}: Fix $0 < \epsilon < 1$. It was shown in \cite{strassen-1962}, \cite{Hayashi-2009}, \cite{Yury-Poor-Verdu-2010} that for a DIMC with a discrete output alphabet and $C_{\mathrm{DIMC}} >0$,
\begin{equation} \label{eq-comp-dt1}
R_n (\epsilon) = C_{\mathrm{DIMC}} - \frac{\sigma_D (P)}{\sqrt{n}} Q^{-1} (\epsilon) + O \left ({\ln n \over n} \right )
\end{equation}
for sufficiently large $n$, where
\[ \sigma_D (P) = \left \{ \begin{array}{cc}
\min \{ \sigma_D (t; P): t \in {\cal P} \& I(t; P) = C_{\mathrm{DIMC}} \} & \mbox{ if } \epsilon < {1\over 2} \\
\max \{ \sigma_D (t; P): t \in {\cal P} \& I(t; P) = C_{\mathrm{DIMC}} \} & \mbox{ if } \epsilon > {1\over 2} .
\end{array} \right. \]
Once again, the expression $ C_{\mathrm{DIMC}} - \frac{\sigma_D (P)}{\sqrt{n}} Q^{-1} (\epsilon) $ was referred to as the normal approximation for $R_n (\epsilon)$ in \cite{Yury-Poor-Verdu-2010}. It is not hard to verify that for sufficiently large $n$,
\begin{eqnarray} \label{eq-comp-dt2}
C_{\mathrm{DIMC}} - \frac{\sigma_D (P)}{\sqrt{n}} Q^{-1} (\epsilon)
&\leq & \max_{t \in \mathcal{P} (c)} \left[ I(t;P) - \frac{\sigma_D (t;P)}{\sqrt{n}} Q^{-1} (\epsilon) \right] \nonumber \\
& = & \max_{t : \exists p_X, |t-p_X| = O \left( {1 \over n^{1/2} } \right) } \left[ I(t;P) - \frac{\sigma_D (t;P)}{\sqrt{n}} Q^{-1} (\epsilon) \right] \nonumber \\
& = & C_{\mathrm{DIMC}} - \frac{\sigma_D (P)}{\sqrt{n}} Q^{-1} (\epsilon) + O \left( {1 \over n} \right)
\end{eqnarray}
where the first equality is due to the fact that for any $p_X$ satisfying $I(p_X; P) = C_{\mathrm{DIMC}}$ and $t $ satisfying $|t -p_X | = \omega ( 1/n^{1/2} )$,
\begin{displaymath}
I(t;P) - \frac{\sigma_D (t;P)}{\sqrt{n}} Q^{-1} (\epsilon) \leq C_{\mathrm{DIMC}} - \frac{\sigma_D (p_X;P)}{\sqrt{n}} Q^{-1} (\epsilon)
\end{displaymath}
as
\begin{displaymath}
\frac{Q^{-1} (\epsilon)}{\sqrt{n}} |\sigma_D (t; P) - \sigma_D (p_X; P) |
= O \left( \frac{|t -p_X|}{\sqrt{n}} \right) = o(|t -p_X |^2) = o(C_{\mathrm{DIMC}} - I(t;P)) .
\end{displaymath}
Therefore, when $\epsilon > 1/3$, \eqref{eq-comp-dt1} and \eqref{eq-so-dimc-5} are essentially the same for sufficiently large $n$.
Let us now look at the case $\epsilon \leq 1/3$. Again, $ 0< \epsilon \leq 1/3$ is fixed. In parallel with \eqref{eq-comp-2} and \eqref{eq-comp-3}, we have for each $t \in {\cal P} (c)$
\begin{eqnarray} \label{eq-comp-dt2+}
r_{-} (t, \delta) & = & {1 \over 2 \sigma^2_D (t; P) } \delta^2 + \frac{ - \hat{M}_D (t; P) }{6 \sigma^6_D (t; P)}
\delta^3 + O(\delta^4)
\end{eqnarray}
and
\begin{equation} \label{eq-comp-dt3}
\delta_{t, n} (\epsilon) = \frac{\sigma_D (t; P)}{\sqrt{n}} Q^{-1} (\epsilon) + O \left ( {1 \over n} \right ) .
\end{equation}
Combining \eqref{eq-comp-dt3} with \eqref{eq-comp-dt2} yields
\begin{eqnarray} \label{eq-comp-dt4}
C_{\mathrm{DIMC}} - \frac{\sigma_D (P)}{\sqrt{n}} Q^{-1} (\epsilon) + O(1/n)
&\leq & \max_{t \in \mathcal{P} (c)} \left[ I(t;P) - \delta_{t; n} (\epsilon) \right] \nonumber \\
& \leq & C_{\mathrm{DIMC}} - \frac{\sigma_D (P)}{\sqrt{n}} Q^{-1} (\epsilon) + O \left( {1 \over n} \right) .
\end{eqnarray}
Thus the Taylor-type expansion of $R_n (\epsilon )$ in Theorem~\ref{thm-DIMC-second-order} implies the second order asymptotic analysis with constant $0< \epsilon <1$ and $n \to \infty$ shown in \eqref{eq-comp-dt1}.
{\em Asymptotic analysis with $n \to \infty$ and non-exponentially decaying $\epsilon$}: Suppose now $\epsilon$ is a function of $n$ and goes to $0$ as $n \to \infty$, but at a non-exponential speed. Using arguments similar to those made above and in Subsection~\ref{sec2-d}, one can show that the Taylor-type expansion of $R_n (\epsilon )$ in Theorem~\ref{thm-DIMC-second-order} implies that in this case,
$C_{\mathrm{DIMC}}$ and $ - \frac{\sigma_D (P)}{\sqrt{n}} Q^{-1} (\epsilon) $ are still respectively the first order and second order terms of $R_n (\epsilon)$ in the asymptotic analysis with $n \to \infty$. Once again, to the best of our knowledge, the second order asymptotic analysis with $n \to \infty$ and non-exponentially decaying $\epsilon$ has not been addressed before in the literature.
{\em Divergence from the normal approximation}: In the non-asymptotic regime where $n$ is finite and $\epsilon$ is generally relatively small with respect to $n$, the first two terms
\[ \max_{t \in \mathcal{P} (c) } \left[ I(t;P) - \delta_{t,n} (\epsilon) \right] \]
in the Taylor-type expansion of $R_n (\epsilon )$ in Theorem~\ref{thm-DIMC-second-order} differ from the normal approximation in a strong way. In particular, the optimal distribution $t^*$ defined in \eqref{eq-so-dimc-3++} is not necessarily a capacity achieving distribution. In this case, the normal approximation would fail to provide a reasonable estimate for $R_n (\epsilon)$.
{\em Example:} Consider the Z channel shown in Figure~\ref{fig:zch}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[rounded corners,ultra thick]
\path (0,0) node(X0) {$0$} (5.5,0) node(Y0) {$0$}
(0,1) node(X) {$X$} (5.5,1) node(Y) {$Y$}
(0,2) node(X1) {$1$} (5.5,2) node(Y1) {$1$};
\draw[->] (X0) -- (Y0) node[above,pos=0.5] {$1-p$};
\draw[->] (X0) -- (Y1) node[above,pos=0.5] {$p$};
\draw[->] (X1) -- (Y1) node[above,pos=0.5] {$1$};
\end{tikzpicture}
\caption{Z Channel}
\label{fig:zch}
\end{figure}
In this example, we show that the optimal distribution $t^*$ defined in \eqref{eq-so-dimc-3++} is not a capacity achieving distribution. In the numerical calculation shown in Figure \ref{fig-zp}, the transition probability $p$ (i.e. $\Pr \{Y=1|X=0\}$) ranges from $0.05$ to $0.95$ with block length $n=1000$ and error probability $\epsilon = 10^{-6}$. As can be seen from Figure \ref{fig-zp}(a), $t^* (0)$ is always different from the capacity achieving $t (0)$. Moreover, Figure \ref{fig-zp}(b) shows the ratio of $I(t;P) - \delta_{t,n} (\epsilon)$ to $I(t^*;P) - \delta_{t^*,n} (\epsilon)$ when $t$ is capacity achieving, equal to $t^*$, and uniform, respectively. It is clear that $C_{\mathrm{DIMC}} - \delta_{p_X,n} (\epsilon)$, where $p_X$ is the capacity achieving distribution, moves further and further away from $I(t^*;P) - \delta_{t^*,n} (\epsilon)$ as $p$ gets larger, indicating that under practical block length and error probability requirements, Shannon random coding based on the capacity achieving distribution is not optimal. It is also interesting to note that for uniform $t$, $I(t;P) - \delta_{t,n} (\epsilon)$ is quite close to $I(t^*;P) - \delta_{t^*,n} (\epsilon)$ over the whole range, implying that linear block coding is quite suitable for the Z channel even under practical block length and error probability requirements.
\begin{figure}[h]
\centering
\subfloat[$t(0)$ vs. $p$]{\includegraphics[scale=0.4]{zp.pdf}}
\subfloat[$\frac{I(t;P) - \delta_{t,n} (\epsilon)}{I(t^*;P) - \delta_{t^*,n} (\epsilon)}$ for different $t$]{\includegraphics[scale=0.4]{zb.pdf}}
\caption{Illustration for the Z channel with $n=1000$ and $\epsilon = 10^{-6}$: (a) comparison of $t^*$ with the capacity achieving distribution; and (b) comparison of $I(t;P) - \delta_{t,n} (\epsilon)$ among different distributions $t$.}
\label{fig-zp}
\end{figure}
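The mutual information $I(t;P)$ in this example has a simple closed form for the Z channel, which makes the capacity achieving curve in Figure \ref{fig-zp}(a) easy to reproduce. The following Python sketch uses a brute-force grid search (the names are ours; it evaluates only the $I(t;P)$ part of the objective, not $\delta_{t,n} (\epsilon)$):

```python
import math

def h2(x):
    # binary entropy in bits
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def mi_z_channel(q0, p):
    # I(t;P) in bits for the Z channel of Figure fig:zch:
    # q0 = t(0) = Pr{X=0}, p = Pr{Y=1|X=0};
    # Y=0 occurs with probability q0*(1-p), and H(Y|X) = q0*h2(p)
    return h2(q0 * (1 - p)) - q0 * h2(p)

def capacity_achieving_q0(p, grid=10000):
    # brute-force search for the capacity-achieving t(0)
    best_q, best_i = 0.0, -1.0
    for k in range(1, grid):
        q = k / grid
        i = mi_z_channel(q, p)
        if i > best_i:
            best_q, best_i = q, i
    return best_q, best_i
```

For instance, for $p = 0.5$ the search returns $t(0) = 0.4$ with capacity $\log_2 (5/4) \approx 0.322$ bits.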
{\em Implication on code design}: An important implication arising from the Taylor-type expansion of $R_n (\epsilon )$ in Theorem~\ref{thm-DIMC-second-order} in the non-asymptotic regime is that for values of $n$ and $\epsilon$ with practical interest, the optimal marginal codeword symbol distribution is not necessarily a capacity achieving distribution. This is illustrated above for the Z channel. Indeed, other than for symmetric channels like the BIMSC, one would expect that the optimal distribution $t^*$ defined in \eqref{eq-so-dimc-3++} is in general not a capacity achieving distribution for values of $n$ and $\epsilon$ for which $\delta_{t^*,n} (\epsilon)$ is not relatively small. As such, to design efficient channel codes under practical block length and error probability requirements, one approach is to solve the maximization problem in \eqref{eq-so-dimc-3++}, obtain $t^*$, and then design codes so that the marginal codeword symbol distribution is approximately $t^*$.
\section{Approximation and Evaluation}
\label{sec:appr-eval}
\setcounter{equation}{0}
Based on our converse theorems and Taylor-type expansion of $R_n (\epsilon)$, in this section, we first derive two approximation formulas for $R_n (\epsilon)$. We then compare them numerically with the normal approximation and some tight (achievable and converse) non-asymptotic bounds, for the BSC, BEC, BIAGC, and Z Channel. In all Figures \ref{fig-bsc} to \ref{fig-z-2}, rates are expressed in bits.
\subsection{Approximation Formulas}
In view of the Taylor-type expansion of $R_n (\epsilon )$ in Theorem~\ref{thm-DIMC-second-order}, one reasonable approximation formula is to use the first two terms in Taylor-type expansion of $R_n (\epsilon )$ as an estimate for $R_n (\epsilon )$. We refer to this formula as the second order (SO) formula:
\begin{eqnarray}
\label{eq-nep-z-2}
R^{\mathrm{SO}}_n (\epsilon) &=& \max_{t \in \mathcal{P} (c) } \left[ I(t;P) - \delta_{t,n} (\epsilon) \right] \nonumber \\
& = & I(t^*;P) - \delta_{t^*,n} (\epsilon)
\end{eqnarray}
where $c$ is selected according to Remark~\ref{re_ci1}.
To derive the other approximation formula for $R_n (\epsilon)$, let us put Theorem~\ref{thm-dimc}, Theorem~\ref{thm-DIMC-second-order}, and the achievability given in \eqref{eq-thm-dimc-2} and \eqref{eq-thm-dimc-1} together. It would make sense for an optimal code of block length $n$ to draw all its codewords from the same type
$t$ with $|t - t^*| = O(1/n)$. In this case, it is not hard to see that the term $|\mathcal{X}|\frac{\ln (n+1)}{n}$ in the bounds of Theorems~\ref{thm-dimc} and \ref{thm-DIMC-second-order} (i.e. \eqref{eq-thm-dsc-0}, \eqref{eq-thm-dsc-1}, \eqref{eq-so-dimc-4}, and \eqref{eq-so-dimc-5}) can be dropped. By ignoring the higher order term $ \frac{\ln \frac{-2 \ln \epsilon_n}{ n} - \ln \left( 1 + \sqrt{\frac{-2 \ln \epsilon_n}{n}} \right)}{n}$ in \eqref{eq-thm-dsc-0} and \eqref{eq-thm-dsc-1}, we get the following approximation formula (dubbed ``NEP''):
\begin{equation}
\label{eq-nep-z-1}
R^{\mathrm{NEP}}_n (\epsilon) = I(t^*;P) - \delta_{t^*,n} (\epsilon) - \frac{\ln \epsilon}{n} +
\frac{1}{n} \ln P(B_{t^*,n,\delta_{t^*,n} (\epsilon)}) \;.
\end{equation}
Rewrite the normal approximation as
\begin{equation}
\label{eq-nep-z-3}
R^{\mathrm{Normal}}_n (\epsilon) = C_{\mathrm{DIMC}} - \frac{\sigma_D (P)}{\sqrt{n}} Q^{-1} (\epsilon).
\end{equation}
\subsection{BIMSC}
\label{sec:bimsc}
In the case of BIMSC, it follows from Theorem \ref{thm-BIMSC-second-order} and Remark~\ref{re_bt_dt} that $
R^{\mathrm{SO}}_n (\epsilon)$, $ R^{\mathrm{NEP}}_n (\epsilon)$, and $ R^{\mathrm{Normal}}_n (\epsilon)$ become respectively
\begin{displaymath}
R^{\mathrm{SO}}_n (\epsilon) = C_{\mathrm{BIMSC}} - \delta_n (\epsilon)
\end{displaymath}
\begin{equation}
\label{eq-app-1}
R^{\mathrm{NEP}}_n (\epsilon) = C_{\mathrm{BIMSC}} - \delta_n (\epsilon) - \frac{\ln \epsilon}{n} +
\frac{1}{n} \ln P(B_{n,\delta_n (\epsilon)})
\end{equation}
and
\begin{equation}
\label{eq-app-3}
R^{\mathrm{Normal}}_n (\epsilon) = C_{\mathrm{BIMSC}} - \frac{\sigma_H
(X|Y)}{\sqrt{n}} Q^{-1} (\epsilon) .
\end{equation}
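For concreteness, $ R^{\mathrm{Normal}}_n (\epsilon)$ in \eqref{eq-app-3} can be evaluated directly. The Python sketch below does so for the BSC with cross-over probability $p$, using the standard expressions $C_{\mathrm{BIMSC}} = 1 - H_b (p)$ and $\sigma^2_H (X|Y) = p(1-p) \log_2^2 \frac{1-p}{p}$ (both in bits), together with a bisection-based $Q^{-1}$; the function names are ours:

```python
import math

def Qinv(eps, lo=-40.0, hi=40.0):
    # inverse of the Gaussian tail Q(x) = 0.5*erfc(x/sqrt(2)), by bisection
    # (Q is strictly decreasing in x)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bsc_normal_approx(p, n, eps):
    # normal approximation C - sigma/sqrt(n) * Qinv(eps) for the BSC,
    # with capacity and dispersion expressed in bits
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    C = 1.0 - h2
    sigma = math.sqrt(p * (1 - p)) * abs(math.log2((1 - p) / p))
    return C - sigma / math.sqrt(n) * Qinv(eps)
```

For example, with $p=0.11$, $n=1000$, and $\epsilon = 10^{-3}$ (the setting of Figure \ref{fig-bsc}), this evaluates to roughly $0.41$ bits.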
From Theorem \ref{thm-BIMSC-second-order} and its comparison with asymptotic analysis, we can expect that when $\delta_n (\epsilon)$ is extremely small, $
R^{\mathrm{SO}}_n (\epsilon)$ and $ R^{\mathrm{Normal}}_n (\epsilon)$ are close, and both can provide a good approximation for $R_n (\epsilon)$. However, as $\delta_n (\epsilon)$ increases,
the relative position of $
R^{\mathrm{SO}}_n (\epsilon)$ and $ R^{\mathrm{Normal}}_n (\epsilon)$ depends on
\begin{displaymath}
\zeta_{X|Y} = - \frac{\hat{M}_H (X|Y)}{6 \sigma^6_H (X|Y)}.
\end{displaymath}
Specifically, given a channel with large magnitude of $\zeta_{X|Y}$, $ R^{\mathrm{Normal}}_n (\epsilon)$ is not reliable, as it can be much below achievable bounds or above converse bounds. On the other hand, as shown later on, $
R^{\mathrm{SO}}_n (\epsilon)$ is much more reliable. Moreover, $ R^{\mathrm{NEP}}_n (\epsilon)$, which has some terms beyond second order on top of $ R^{\mathrm{SO}}_n (\epsilon)$, always provides a good approximation for $R_n (\epsilon)$ even if $\delta_n (\epsilon)$ is relatively large.
\subsubsection{BSC}
For this channel, the trivial bound $P(B_{n,\delta_n (\epsilon)}) \leq 1$ is applied in the evaluation of $R^{\mathrm{NEP}}_n (\epsilon)$.
Before jumping into the comparison of those approximations, let us first get some insight by
investigating $\zeta_{X|Y}$. It can be easily verified that for BSC with cross-over probability $p$,
\begin{equation}
\label{eq-app-4}
\zeta_{X|Y} = - \frac{1}{6 \ln^5 \frac{1-p}{p}} \frac{1-2p}{p^3(1-p)^3} .
\end{equation}
As can be seen, $\zeta_{X|Y}$ is always negative for any $p \in (0,1)$ and $\zeta_{X|Y} \rightarrow -\infty$
as $p \rightarrow 0$. Therefore, in the case of a very small $p$, $ R^{\mathrm{Normal}}_n (\epsilon)$ will be larger than $ R^{\mathrm{SO}}_n (\epsilon)$ by a relatively large margin, and even larger than the converse bound.
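The behaviour of $\zeta_{X|Y}$ just described is easy to confirm numerically; a minimal Python sketch of \eqref{eq-app-4} (the expression is singular at $p = 0.5$, so only $p \neq 0.5$ is evaluated; the function name is ours):

```python
import math

def zeta_bsc(p):
    # zeta_{X|Y} of eq. (eq-app-4) for the BSC with cross-over probability p;
    # undefined at p = 0.5, where ln((1-p)/p) = 0
    L = math.log((1.0 - p) / p)
    return -(1.0 / (6.0 * L**5)) * (1.0 - 2.0 * p) / (p**3 * (1.0 - p)**3)
```

A quick check confirms that $\zeta_{X|Y}$ is negative on both sides of $p = 0.5$ and that its magnitude grows rapidly as $p \rightarrow 0$.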
Now in order to compare those approximations, we invoke Theorem 33 (dubbed ``RCU'') and Theorem 35 (dubbed ``Converse'') in \cite{Yury-Poor-Verdu-2010}, which serve as an achievable bound and a converse bound, respectively. In addition, another converse bound is provided by the exact calculation of \eqref{eq-thm-maxbimc-3} and \eqref{eq-thm-maxbimc-4} in Corollary \ref{col-bimc} (dubbed ``Exact''). Moreover, by Theorem 52 in \cite{Yury-Poor-Verdu-2010}, $\frac{\ln n}{2 n}$ is the third order in the asymptotic analysis of $R_n (\epsilon)$ as $ n \to \infty$ for BSC, and therefore, another approximation is yielded by adding $\frac{\ln n}{2 n}$ to the normal approximation (dubbed ``Normal\_ln''). Then these four approximation formulas (NEP, Normal\_ln, Normal, SO), two converse bounds (Converse, Exact), and one achievable bound (RCU) are compared against each other with block length $n$ ranging from 200 to 2000; their respective performance is shown in Figures \ref{fig-bsc} and \ref{fig-bsc-2}.
\begin{figure}[h]
\centering
\subfloat[Bounds with $P_m = 10^{-3}$]{\includegraphics[scale=0.4]{BSC0113.pdf}}
\subfloat[$\delta_n(\epsilon)$ with $P_m = 10^{-3}$]{\includegraphics[scale=0.4]{BSC0113-delta.pdf}} \\
\subfloat[Bounds with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.06$]{\includegraphics[scale=0.4]{BSC011l12.pdf}}
\subfloat[$\log_{10} P_m$ with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.06$]{\includegraphics[scale=0.4]{BSC011l12-pe.pdf}}
\caption{Comparison of different bounds for BSC with $p=0.11$.}
\label{fig-bsc}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[Bounds with $P_m = 10^{-6}$]{\includegraphics[scale=0.4]{BSC0016.pdf}}
\subfloat[$\delta_n(\epsilon)$ with $P_m = 10^{-6}$]{\includegraphics[scale=0.4]{BSC0016-delta.pdf}} \\
\subfloat[Bounds with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.04$]{\includegraphics[scale=0.4]{BSC001l28.pdf}}
\subfloat[$\log_{10} P_m$ with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.04$]{\includegraphics[scale=0.4]{BSC001l28-pe.pdf}}
\caption{Comparison of different bounds for BSC with $p=0.001$.}
\label{fig-bsc-2}
\end{figure}
In Figure \ref{fig-bsc}, the target channel is the BSC with cross-over probability 0.11, where $\zeta_{X|Y}$ is relatively small. In Figure~\ref{fig-bsc}(a), bounds are compared with fixed maximum error probability $P_m=10^{-3}$, while $\delta_n (\epsilon)$ changes with respect to block length $n$, as shown in Figure~\ref{fig-bsc}(b). In the meantime, Figure~\ref{fig-bsc}(c) shows a comparison of these bounds when $\delta_n (\epsilon)$ is fixed to be $0.06$, while $P_m = g_{X|Y,n} (0.06)$ is shown in Figure~\ref{fig-bsc}(d). As can be seen, when $\delta_n (\epsilon)$ gets smaller, the SO and Normal curves tend to coincide with each other. Moreover, since the SO and Normal approximation formulas are quite close in this case, both the NEP and Normal\_ln provide quite accurate approximations for $R_n (\epsilon)$, with the NEP slightly better.
Figure \ref{fig-bsc-2} shows the same curves as those in Figure \ref{fig-bsc}, but for the BSC with cross-over probability $0.001$. In this case, the magnitude of $\zeta_{X|Y}$ is large, and therefore the SO and Normal curves are well apart. In fact, the Normal curve is even above the two converse bounds, as is the Normal\_ln curve, thus confirming our analysis based on $\zeta_{X|Y}$ made at the beginning of this discussion for the BSC. On the other hand, the SO curve stays at the same relative position to the achievable and converse bounds, and the NEP still provides an accurate approximation for $R_n (\epsilon)$.
\subsubsection{BEC}
This special channel serves as another interesting example to illustrate the difference
between the SO and Normal approximations. On one hand, it can be easily verified that
\begin{equation}
\label{eq-app-5}
P(B_{n,\delta}) = \Pr \left\{ -\frac{1}{n} \ln p(X^n|Y^n) > H(X|Y) + \delta \right\} \approx g_{X|Y,n} (\delta)
\end{equation}
and therefore, $- \frac{\ln \epsilon}{n}$ and $ \frac{1}{n} \ln P(B_{n,\delta_n (\epsilon)})$ cancel out in $ R^{\mathrm{NEP}}_n (\epsilon)$, which then coincides with $ R^{\mathrm{SO}}_n (\epsilon)$. On the other hand,
\begin{equation}
\label{eq-app-6}
\zeta_{X|Y} = - \frac{(1-2p)}{6 p^2 (1-p)^2 \ln^3 2}
\left\{
\begin{array}{cc}
< 0 & \mbox{if $p<0.5$} \\
= 0 & \mbox{if $p=0.5$}\\
> 0 & \mbox{if $p>0.5$}
\end{array}
\right. .
\end{equation}
Therefore, the Normal curve can be all over the map, i.e. it can be above some converse bound when $p < 0.5$, and below an achievable bound when $p > 0.5$. When $p=0.5$, the Normal curve happens to be close to the SO curve, thereby explaining why it provides an accurate approximation for $R_n (\epsilon)$ in this particular case, as shown in \cite{Yury-Poor-Verdu-2010}.
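The sign pattern in \eqref{eq-app-6} can likewise be checked directly; a one-line Python sketch (the function name is ours):

```python
import math

def zeta_bec(p):
    # zeta_{X|Y} of eq. (eq-app-6) for the BEC with erasure probability p:
    # negative for p < 0.5, zero at p = 0.5, positive for p > 0.5
    return -(1.0 - 2.0 * p) / (6.0 * p**2 * (1.0 - p)**2 * math.log(2.0)**3)
```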
To provide benchmarks for the comparison of approximation formulas, Theorems 37 and 38 in \cite{Yury-Poor-Verdu-2010} are used here, dubbed ``DT'' and ``Converse'' respectively. The exact calculation of
\eqref{eq-thm-maxbimc-3} and \eqref{eq-thm-maxbimc-4} in Corollary \ref{col-bimc} (dubbed ``Exact'')
again serves as an additional converse bound. These bounds are drawn in Figures~\ref{fig-bec} and \ref{fig-bec-2} in the same way as in Figure~\ref{fig-bsc}, with erasure probabilities $0.05$ and $0.9$, respectively. Once again, the numerical results confirm the analysis and discussion above.
\begin{figure}[h]
\centering
\subfloat[Bounds with $P_m = 10^{-6}$]{\includegraphics[scale=0.4]{BEC0056.pdf}}
\subfloat[$\delta_n (\epsilon)$ with $P_m = 10^{-6}$]{\includegraphics[scale=0.4]{BEC0056-delta.pdf}}
\\
\subfloat[Bounds with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.0199$]{\includegraphics[scale=0.4]{BEC005l07.pdf}}
\subfloat[$\log_{10} P_m$ with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.0199$]{\includegraphics[scale=0.4]{BEC005l07-pe.pdf}}
\caption{Comparison of different bounds for BEC with $p=0.05$.}
\label{fig-bec}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[Bounds with $P_m = 10^{-6}$]{\includegraphics[scale=0.4]{BEC096.pdf}}
\subfloat[$\delta_n (\epsilon)$ with $P_m = 10^{-6}$]{\includegraphics[scale=0.4]{BEC096-delta.pdf}}
\\
\subfloat[Bounds with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.022$]{\includegraphics[scale=0.4]{BEC09l06.pdf}}
\subfloat[$\log_{10} P_m$ with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.022$]{\includegraphics[scale=0.4]{BEC09l06-pe.pdf}}
\caption{Comparison of different bounds for BEC with $p=0.9$.}
\label{fig-bec-2}
\end{figure}
\subsubsection{BIAGC}
Here we assume that codewords are modulated to $\{+1,-1\}$ before going through an AWGN channel, and apply the trivial bound $P(B_{n,\delta_n (\epsilon)}) \leq 1$ in the NEP formula. As with the BSC and BEC, we would like to gain some insight by investigating $\zeta_{X|Y}$. Since in this case $\zeta_{X|Y}$ does not seem to have a simple closed-form expression that can be easily computed, a numerical calculation of $\zeta_{X|Y}$ is shown in Figure~\ref{fig:biagc-zeta}, where the SNR ranges from 8dB to 10.5dB. As can be seen, the BIAGC is similar to the BSC, i.e. $\zeta_{X|Y}$ is always negative and its magnitude increases with SNR. Therefore, $ R^{\mathrm{Normal}}_n (\epsilon)$ is close to $ R^{\mathrm{SO}}_n (\epsilon)$ when the SNR is low, but can be above some converse bounds when the SNR is high. This is confirmed in Figures \ref{fig-biagc} and \ref{fig-biagc-2}, where the exact evaluation of \eqref{eq-thm-maxbimc-1} and \eqref{eq-thm-maxbimc-2} in Corollary \ref{col-bimc} (dubbed ``Exact'') serves as a converse bound.
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{biagc-zeta.pdf}
\caption{$\zeta_{X|Y}$ of the BIAGC.}
\label{fig:biagc-zeta}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[Bounds with $P_m = 10^{-3}$]{\includegraphics[scale=0.4]{BG153.pdf}}
\subfloat[$\delta_n (\epsilon)$ with $P_m = 10^{-3}$]{\includegraphics[scale=0.4]{BG153-delta.pdf}}
\\
\subfloat[Bounds with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.0265$]{\includegraphics[scale=0.4]{BG15l01.pdf}}
\subfloat[$\log_{10} P_m$ with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.0265$]{\includegraphics[scale=0.4]{BG15l01-pe.pdf}}
\caption{Comparison of different bounds for BIAGC with SNR $=-3.52$ dB.}
\label{fig-biagc}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[Bounds with $P_m = 10^{-9}$]{\includegraphics[scale=0.4]{BG0339.pdf}}
\subfloat[$\delta_n (\epsilon)$ with $P_m = 10^{-9}$]{\includegraphics[scale=0.4]{BG0339-delta.pdf}}
\\
\subfloat[Bounds with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.0175$]{\includegraphics[scale=0.4]{BG033l0465.pdf}}
\subfloat[$\log_{10} P_m$ with $P_m = g_{X|Y,n} (\delta)$ and $\delta=0.0175$]{\includegraphics[scale=0.4]{BG033l0465-pe.pdf}}
\caption{Comparison of different bounds for BIAGC with SNR $=9.63$ dB. }
\label{fig-biagc-2}
\end{figure}
\subsection{DIMC: Z Channel}
\label{sec:z-channel}
To show an example of a DIMC which is not a BIMSC, we consider again the Z channel shown in Figure \ref{fig:zch}. The capacity of the Z channel is well known and given by
\begin{equation} \label{eq-z-cap}
C_Z = \ln \left( 1 + (1-p) p^{\frac{p}{1-p}} \right)
\end{equation}
with the capacity-achieving distribution
\begin{equation} \label{eq-z-px}
p_X(x) =
\left\{
\begin{array}{cc}
\frac{1}{1- p + p^{-\frac{p}{1-p}}} & \mbox{for $x=0$} \\
\\
\frac{p^{-\frac{p}{1-p}}-p}{1- p + p^{-\frac{p}{1-p}}} & \mbox{for $x=1$}
\end{array}
\right.
\end{equation}
and the corresponding output distribution
\begin{equation} \label{eq-z-py}
p_Y(y) =
\left\{
\begin{array}{cc}
\frac{1-p}{1- p + p^{-\frac{p}{1-p}}} & \mbox{for $y=0$} \\
\\
\frac{p^{-\frac{p}{1-p}}}{1- p + p^{-\frac{p}{1-p}}} & \mbox{for $y=1$ .}
\end{array}
\right.
\end{equation}
To calculate $R^{\mathrm{NEP}}_n (\epsilon) $, $P(B_{t,n,\delta})$ needs to be investigated further. An interesting observation is that given $x^n$ with type $t$, $\frac{1}{n} \ln \frac{p(y^n | x^n)}{q_t(y^n)} > -\infty$ if and only if $y_i=1$ whenever $x_i=1$, and in that case the value of $\frac{1}{n} \ln \frac{p(y^n | x^n)}{q_t(y^n)}$ depends only on the number of $y_i$ equal to $1$ for $i \in \{j: x_j=0 \}$. One can then verify that
\begin{equation}
B_{t,n,\delta} = \left\{ y^n: \frac{1}{n} |\{i : y_i=0\}| \leq q_t (0) - \frac{\delta}{\ln \frac{1 - t(0) + p t(0)}{p t(0)}} \right\} .
\end{equation}
When $q_t (0) \neq 0.5$,
\begin{equation}
\label{eq-nep-z-4}
P (B_{t,n,\delta}) =
\left\{
\begin{array}{ll}
\Pr \left\{ - \frac{1}{n} \ln q_t (Y^n_t) \leq H(Y_t) - \frac{\delta}{\ln \frac{1-t(0)+pt(0)}{pt(0)}} \ln \frac{1-q_t(0)}{q_t(0)} \right\} & \mbox{if $q_t(0) < 0.5$} \\
\Pr \left\{ - \frac{1}{n} \ln q_t (Y^n_t) \geq H(Y_t) - \frac{\delta}{\ln \frac{1-t(0)+pt(0)}{pt(0)}} \ln \frac{1-q_t(0)}{q_t(0)} \right\} & \mbox{if $q_t(0) > 0.5$}
\end{array}
\right.
\end{equation}
where $Y_t$ is a random variable with distribution $q_t$. Consequently, we can apply the left NEP \cite{yang-meng:nep}, the Chernoff bound, and the right NEP \cite{yang-meng:nep} with respect to entropy to upper bound $P(B_{t,n,\delta})$ when $q_t(0) <, = , > 0.5$, respectively.
To provide benchmarks for the comparison of approximation formulas, exact evaluation of \eqref{eq-thm-maxdsc-0}
(with $|\mathcal{X}|\frac{\ln (n+1)}{n}$ dropped and $t=t^*$) and \eqref{eq-thm-maxdsc-0+}
is provided, which, dubbed ``Exact'', serves as a converse bound, and
Theorem 22 in \cite{Yury-Poor-Verdu-2010} provides an achievable bound, dubbed
``DT'' and given below:
\begin{equation}
\label{eq-nep-z-5}
P_m \leq \sum^{m}_{i=0} \binom{m}{i} (1-p)^{m-i}\, p^i \min \left\{ 1, (M-1)\,
\frac{\binom{n-m+i}{i}}{\binom{n}{m}} \right\}
\end{equation}
where $M=2^{nR}$ and $m=t^* (0) n$. Figures~\ref{fig-z} and \ref{fig-z-2} again show that the Normal curve is all over the map, while the NEP curve always lies between the DT achievable curve and the Exact converse curve. It is also worth pointing out that if the capacity-achieving distribution $t =p_X$ were chosen instead of $t^*$ in the calculation of the Exact and DT bounds, then both of them would be lower, confirming our earlier discussion that in the practical, non-asymptotic regime, the optimal marginal codeword symbol distribution is not necessarily a capacity-achieving distribution.
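The DT bound is straightforward to evaluate term by term. The sketch below is ours (the helper name and the argument `t0`, standing for $t^*(0)$, are our notation); the rate is taken in bits so that $M = 2^{nR}$ as in the text.

```python
from math import comb

def dt_bound(n, R_bits, p, t0):
    # DT achievability bound for the Z channel: i of the m = t0*n input zeros
    # are flipped (each with probability p); input ones pass noiselessly.
    m = round(t0 * n)
    M = 2.0 ** (n * R_bits)
    total = 0.0
    for i in range(m + 1):
        w = comb(m, i) * (1 - p) ** (m - i) * p ** i      # binomial weight
        total += w * min(1.0, (M - 1) * comb(n - m + i, i) / comb(n, m))
    return total
```

Since each term is the binomial weight multiplied by a quantity at most one, the bound always lies in $[0,1]$ and is non-decreasing in the rate, which provides a simple sanity check on any implementation.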
\begin{figure}[h]
\centering
\subfloat[Bounds with $P_m = 10^{-9}$]{\includegraphics[scale=0.4]{tz00019.pdf}}
\subfloat[$\delta_{t^*,n} (\epsilon)$ with $P_m = 10^{-9}$]{\includegraphics[scale=0.4]{tz00019-delta.pdf}}
\\
\subfloat[Bounds with $P_m = g_{t^*;P,n} (\delta)$ and $\delta=0.05$]{\includegraphics[scale=0.4]{tz0001d05.pdf}}
\subfloat[$\log_{10} P_m$ with $P_m = g_{t^*;P,n} (\delta)$ and $\delta=0.05$]{\includegraphics[scale=0.4]{tz0001d05-pe.pdf}}
\caption{Comparison of different bounds for Z Channel with $p=0.001$.}
\label{fig-z}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[Bounds with $P_m = 10^{-6}$]{\includegraphics[scale=0.4]{tz096.pdf}}
\subfloat[$\delta_{t^*,n} (\epsilon)$ with $P_m = 10^{-6}$]{\includegraphics[scale=0.4]{tz096-delta.pdf}}
\\
\subfloat[Bounds with $P_m = g_{t^*;P,n} (\delta)$ and $\delta=0.02$]{\includegraphics[scale=0.4]{tz09d02.pdf}}
\subfloat[$\log_{10} P_m$ with $P_m = g_{t^*;P,n} (\delta)$ and $\delta=0.02$]{\includegraphics[scale=0.4]{tz09d02-pe.pdf}}
\caption{Comparison of different bounds for Z Channel with $p=0.9$.}
\label{fig-z-2}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have developed a new converse proof technique dubbed the outer mirror image of jar and used it to establish new non-asymptotic converses for any discrete input memoryless channel with discrete or continuous output. Combining these non-asymptotic converses with the non-asymptotic achievability proved in \cite{yang-meng:jardecoding} and \cite{yang-meng:isit2012_jardecoding} under jar decoding and with the NEP technique developed recently in \cite{yang-meng:nep},
we have characterized the best coding rate $R_n (\epsilon)$ achievable with finite block length $n$ and error probability $\epsilon$ by introducing a quantity $\delta_{t, n} (\epsilon)$ to measure the relative magnitude of the error probability $\epsilon$ and block length $n$ with respect to a given channel $P$ and an input distribution $t$. We have shown that in the non-asymptotic regime where both $n$ and $\epsilon$ are finite, $R_n (\epsilon)$ has a Taylor-type expansion with respect to $\delta_{t, n} (\epsilon)$, where the first two terms of the expansion are $\max_{t} [ I(t; P) - \delta_{t, n} (\epsilon) ] $, which is equal to $ I(t^*, P) - \delta_{t^*, n} (\epsilon) $ for some optimal distribution $t^*$, and the third-order term of the expansion is $O(\delta^2_{t^*, n} (\epsilon)) $ whenever $\delta_{t^*, n} (\epsilon) = \Omega(\sqrt{ \ln n / n})$. Based on the new non-asymptotic converses and the Taylor-type expansion of $R_n (\epsilon)$, we have also derived two approximation formulas (dubbed ``SO'' and ``NEP'') for $R_n (\epsilon)$. These formulas have been further evaluated and compared against some of the best bounds known so far, as well as against the normal approximation revisited recently in the literature. It turns out that while the normal approximation is all over the map, i.e. sometimes below achievable bounds and sometimes above converse bounds, the SO approximation is much more reliable and stays in the same position relative to the achievable and converse bounds; in the meantime, the NEP approximation is the best among the three and always provides an accurate estimate of $R_n (\epsilon)$.
It is expected that in the non-asymptotic regime where both $n$ and $\epsilon$ are finite, the Taylor-type expansion of $R_n (\epsilon)$ and the NEP approximation formula would play a role similar to that of Shannon capacity \cite{Shannon1948} in the asymptotic regime as $n \to \infty$. For values of $n$ and $\epsilon$ of practical interest for which $\delta_{t^*, n} (\epsilon)$ is not relatively small, the optimal distribution $t^*$ achieving $\max_{t} [ I(t; P) - \delta_{t, n} (\epsilon) ] $ is in general not a capacity-achieving distribution, except for symmetric channels such as binary input memoryless symmetric channels. As a result, an important implication arising from the Taylor-type expansion of $R_n (\epsilon)$ is that in the practical non-asymptotic regime, the optimal marginal codeword symbol distribution is not necessarily a capacity-achieving distribution. Therefore, it will be interesting to examine all practical channel codes proposed so far against the Taylor-type expansion of $R_n (\epsilon)$ and the NEP approximation formula, and to see how far their performance is from that predicted by these formulas. If the performance gap is significant, one way to design a better channel code with practical block length and error probability requirements is to solve the maximization problem $\max_{t} [ I(t; P) - \delta_{t, n} (\epsilon) ] $, get $t^*$, and then design a code so that its marginal codeword symbol distribution is approximately $t^*$.
Finally, we conclude this paper by saying a few words on non-asymptotic information theory. From the viewpoint of stochastic processes, most classic results in information theory are based, to a large extent, on the strong and weak laws of large numbers and on large deviation theory. For example, most first-order asymptotic coding rate results in information theory were established through the application of asymptotic equipartition properties and typical sequences \cite{cover:informtheory2006}, which in turn depend on the strong and weak laws of large numbers. On the other hand, error exponent analysis in both source and channel coding is in the spirit of large deviation theory. The recent second-order asymptotic coding rate results \cite{strassen-1962}, \cite{Hayashi-2009}, \cite{Yury-Poor-Verdu-2010} depend heavily on the Berry-Esseen central limit theorem. In the non-asymptotic regime of practical interest, however, none of these probabilistic tools can be applied directly. To fill this void, we have developed the NEP in \cite{yang-meng:nep}. Based on the NEP, we have further invented jar decoding in \cite{yang-meng:jardecoding} and presented the outer mirror image of jar converse proof technique in this paper. As demonstrated in this paper along with \cite{yang-meng:jardecoding} and \cite{yang-meng:nep}, the NEP, jar decoding, and the outer mirror image of jar together form a set of essential techniques needed for non-asymptotic information theory. They can also be extended and applied to help develop non-asymptotic multi-user information theory as well.
\appendices
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\setcounter{section}{0} \setcounter{equation}{0} %
\bibliographystyle{IEEEtran}
Leptogenesis \cite{fy} is based on a popular extension of the Standard Model,
where three right-handed (RH) neutrinos $N_{R i}$, with a Majorana mass term
$M$ and Yukawa couplings $h$, are added to the SM Lagrangian,
\begin{equation}\label{lagrangian}
\mathcal{L}= \mathcal{L}_{\rm SM} +i \overline{N}_{R i}\gamma_{\mu}\partial^{\mu} N_{Ri} -
h_{\alpha i} \overline{\ell}_{L\alpha} N_{R i} \tilde{\Phi} -
{1\over 2}\,M_i \overline{N}_{R i}^c \, N_{R i} +h.c.\quad (i=1,2,3,\quad \alpha=e,\mu,\tau) \, .
\end{equation}
After spontaneous symmetry breaking, a Dirac mass term $m_D=v\,h$, is generated
by the vev $v=174$ GeV of the Higgs boson. In the see-saw limit, $M\gg m_D$,
the spectrum of neutrino mass eigenstates
splits into two sets: three very heavy neutrinos $N_1$, $N_2$ and $N_3$,
with masses $M_1\leq M_2 \leq M_3$ respectively, almost coinciding with
the eigenvalues of $M$, and three light neutrinos with masses $m_1\leq m_2\leq m_3$,
the eigenvalues of the light neutrino mass matrix
given by the see-saw formula \cite{seesaw}
\begin{equation}
m_{\nu}= - m_D\,{1\over M}\,m_D^T \, .
\end{equation}
Neutrino oscillation experiments measure two neutrino mass-squared
differences. For normal schemes one has
$m^{\,2}_3-m_2^{\,2}=\Delta m^2_{\rm atm}$ and
$m^{\,2}_2-m_1^{\,2}=\Delta m^2_{\rm sol}$,
whereas for inverted schemes one has
$m^{\,2}_3-m_2^{\,2}=\Delta m^2_{\rm sol}$
and $m^{\,2}_2-m_1^{\,2}=\Delta m^2_{\rm atm}$.
For $m_1\gg m_{\rm atm} \equiv
\sqrt{\Delta m^2_{\rm atm}+\Delta m^2_{\rm sol}}=
(0.050\pm 0.001)\,{\rm eV}$ \cite{gonzalez}
the spectrum is quasi-degenerate, while for
$m_1\ll m_{\rm sol}\equiv \sqrt{\Delta m^2_{\rm sol}}
=(0.0088\pm 0.0001)\,{\rm eV}$ \cite{gonzalez}
it is fully hierarchical (normal or inverted).
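To make these schemes concrete, the light-neutrino spectrum in the normal case can be reconstructed from $m_1$ and the two measured splittings; the sketch below uses the central values quoted above (the helper name and constants are our notation, not from the paper).

```python
import math

# Central values quoted in the text (eV)
M_ATM = 0.050     # sqrt(Dm2_atm + Dm2_sol)
M_SOL = 0.0088    # sqrt(Dm2_sol)

DM2_SOL = M_SOL**2
DM2_ATM = M_ATM**2 - M_SOL**2   # so that m_atm = sqrt(DM2_ATM + DM2_SOL)

def normal_spectrum(m1):
    # Normal scheme: m2^2 - m1^2 = Dm2_sol, m3^2 - m2^2 = Dm2_atm.
    m2 = math.sqrt(m1**2 + DM2_SOL)
    m3 = math.sqrt(m2**2 + DM2_ATM)
    return m1, m2, m3
```

In the fully hierarchical limit $m_1 \to 0$ this gives $m_3 \to m_{\rm atm}$ and $m_2 \to m_{\rm sol}$, while for $m_1$ near the cosmological bound the three masses differ only at the percent level, illustrating the quasi-degenerate regime.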
The most stringent upper bound on the
absolute neutrino mass scale comes from
cosmological observations. Recently, quite a conservative
upper bound,
\begin{equation}\label{bound}
m_1 < 0.2\,{\rm eV} \, \hspace{5mm} (95\%\, {\rm CL}) \, ,
\end{equation}
has been obtained by the
WMAP collaboration combining CMB, baryon acoustic oscillations
and supernovae type Ia observations \cite{WMAP5}.
The $C\!P$-violating decays of the RH neutrinos into lepton doublets
and Higgs bosons at temperatures $T\gtrsim 100\,{\rm GeV}$
generate a $B-L$ asymmetry, one third of which, thanks to sphaleron processes,
ends up as a baryon asymmetry that can explain the observed
baryon asymmetry of the Universe.
This can be expressed in terms of the baryon-to-photon number ratio, and
a precise measurement comes from the CMBR anisotropy observations of WMAP \cite{WMAP5},
\begin{equation}\label{etaBobs}
\eta_B^{\rm CMB} = (6.2 \pm 0.15)\times 10^{-10} \, .
\end{equation}
The predicted baryon-to-photon ratio $\eta_B$ is related to the final
value of the $(B-L)$ asymmetry $N^{\rm f}_{B-L}$ by the relation
\begin{equation}\label{etaB}
\eta_B \simeq 0.96\times 10^{-2} N_{B-L}^{\rm f} \, ,
\end{equation}
where we indicate with $N_X$ any particle number or asymmetry $X$ calculated in a portion
of co-moving volume containing one heavy neutrino in ultra-relativistic
thermal equilibrium, so that e.g. $N^{\rm eq}_{N_2}(T\gg M_2)=1$.
If one imposes that the RH neutrino mass spectrum is strongly hierarchical, then there are
two options for successful leptogenesis. A first one is given by the $N_1$-dominated
scenario, where the final asymmetry is dominated by the decays of the lightest RH neutrinos.
The main limitation of this scenario is that successful leptogenesis implies quite a
restrictive lower bound on the mass of the lightest RH neutrino. Imposing that
the final asymmetry be independent of the initial RH neutrino abundance,
and barring phase cancellations in the see-saw orthogonal matrix entries, the lower
bound is given by \cite{di,cmb,flavorlep}
\begin{equation}\label{lb}
M_1 \gtrsim 3 \times 10^9 \,{\rm GeV} \, .
\end{equation}
This implies in turn a lower bound
$T_{\rm reh} \gtrsim 1.5 \times 10^{9}\,{\rm GeV}$ on the reheating temperature as well \cite{annals}
\footnote{For a discussion of flavour-dependent leptogenesis in the supersymmetric seesaw scenario and the corresponding bounds on $M_1$ and $T_{\rm reh} $, see \cite{Antusch:2006cw,Antusch:2006gy}.}.
The lower bound Eq.~(\ref{lb}) is typically not respected in models emerging from
grand unified theories. It has therefore been
long thought that, within a minimal type I see-saw mechanism,
leptogenesis is not viable within these models \cite{branco}.
There is however a second option \cite{geometry}, namely the
$N_2$-dominated leptogenesis scenario, where the asymmetry is
dominantly produced from the decays of the next-to-lightest RH neutrinos. In this case
there is no lower bound on the lightest RH neutrino mass $M_1$. Instead this is replaced by a lower bound on the
next-to-lightest RH neutrino mass $M_2$
that still implies a lower bound on the reheating temperature.
There are two necessary conditions for a successful $N_2$-dominated leptogenesis scenario.
The first one is the presence of (at least) a third heavier RH neutrino $N_3$ that couples to $N_2$ in
order for the $C\!P$ asymmetries of $N_2$ not to be suppressed as $\propto (M_1/M_2)^2$.
The second necessary condition is to be able to circumvent the wash-out from the lightest RH neutrinos. There is a
particular choice of the see-saw parameters where these two conditions are maximally satisfied.
This corresponds to the limit where the lightest RH neutrino gets decoupled, as in
heavy sequential dominance, an example which we shall discuss later.
In this case the bound, $M_2\gtrsim 10^{10}\,{\rm GeV}$ when estimated without the inclusion of flavour effects,
is saturated. In this limit the wash-out from the lightest RH neutrinos
is totally absent and the $C\!P$ asymmetries of the $N_2$'s are maximal.
In order to have successful $N_2$-dominated leptogenesis for choices of the parameters
not necessarily close to this maximal case a crucial role is played
by lepton flavour effects \cite{vives}.
If $M_1\ll 10^9\,{\rm GeV}\ll M_2$, as we will assume, then
before the lightest RH neutrino wash-out is active,
the quantum states of the lepton doublets produced by $N_2$-decays
get fully incoherent in flavour space \cite{nardi1,flavoreffects1,zeno,decoherence1,decoherence2}. In this way the lightest RH neutrino wash-out acts separately on each flavour asymmetry and is then much less efficient \cite{vives}
\footnote{Notice that if $M_1\gg 10^{9}\,{\rm GeV}$ and $K_1\gg 1$ the wash-out from the lightest RH neutrino
can be still avoided thanks to heavy flavour effects \cite{bcst,nardi2}. However, throughout this paper we will always consider the case $M_1\ll 10^9\,{\rm GeV}$ which is more interesting with respect to leptogenesis in grand-unified theories.}.
It has then been shown recently that within this scenario
it is possible to have successful leptogenesis within models
emerging from $SO(10)$ grand-unified theories with interesting
potential predictions on the low energy parameters \cite{SO10}.
Therefore, the relevance of the $N_2$-dominated scenario has been gradually
increasing in recent years.
In this paper we discuss $N_2$-dominated leptogenesis in the presence of
flavour dependent effects that have hitherto been neglected, in particular the off-diagonal entries of the flavour coupling matrix that
connects the total flavour asymmetries, distributed in different particle species, to the lepton and Higgs doublet asymmetries.
We derive analytical formulae for the final asymmetry
including the flavour coupling at the $N_2$-decay stage as well as at the stage of washout by the lightest
RH neutrino $N_1$. We point out that in general part of the electron and muon asymmetries will
completely escape the wash-out at the production and a total $B-L$ asymmetry
can be generated by the lightest RH neutrino wash-out, yielding so-called phantom leptogenesis.
These contributions, which we call phantom terms, however introduce a
strong dependence on the initial conditions, as we explain in detail.
Taking all of these new effects into account can enhance the final asymmetry produced by the decays of
the next-to-lightest RH neutrinos by orders of magnitude, opening up new interesting possibilities for $N_2$-dominated
thermal leptogenesis. We illustrate these effects for two models which describe realistic neutrino masses and mixing
based on sequential dominance.
The layout of the remainder of the paper is as follows.
In section 2 we discuss the production of the asymmetry from $N_2$-decays
and its subsequent thermal washout at similar temperatures.
In section 3 we discuss the three-flavour projection and the wash-out stage
at lower temperatures relevant to the lightest RH neutrino mass.
This is where the asymmetry surviving the $N_2$-decay and washout stage would
typically be expected to be washed out by the lightest RH neutrinos
in a flavour-independent treatment, but typically survives in a flavour-dependent
treatment. This conclusion is reinforced by the fuller flavour treatment
given here, making $N_2$-dominated leptogenesis even more relevant.
The fuller flavour effects of the $N_2$-dominated scenario are
encoded in a compact master formula presented at the end of this section and partly unpacked in
an Appendix. Section 4 applies this master formula to examples where the new effects arising from the flavour couplings
and phantom leptogenesis play a prominent role. We focus on examples where,
due to the considered effects, the flavour asymmetry produced dominantly in one
flavour can emerge as an asymmetry in a different flavour, a scenario we refer to
as the flavour swap scenario.
\section{Production of the asymmetry from $N_2$-decays and washout}
In the $N_2$-dominated scenario, with $M_2$ respecting the lower bound of $M_2\gtrsim 10^{10}\,{\rm GeV}$
and $M_1\ll 10^9\,{\rm GeV}$, one has to distinguish two stages in the calculation
of the asymmetry. In a first {\em production stage}, at $T\simeq T_L \sim M_2$,
a $B-L$ asymmetry is generated from the $N_2$ decays.
In a second {\em wash-out stage}, at $T\sim M_1$,
inverse processes involving the lightest RH neutrinos, the $N_1$'s, become effective
and wash-out the asymmetry to some level.
In the {\em production stage}, since we assume
$10^{12}\,{\rm GeV} \gg M_2 \gg 10^{9}\,{\rm GeV}$,
the $B-L$ asymmetry is generated from the $N_2$-decays
in the so called two-flavour regime \cite{flavoreffects1,nardi1,zeno}.
In this regime the $\tau$-Yukawa interactions are fast enough to break the coherent evolution
of the tauon component of the lepton quantum states between a decay and the subsequent
inverse decay and light flavour effects have to be taken into account in the calculation
of the final asymmetry.
On the other hand the evolution of the muon and of
the electron components superposition is still coherent.
If we indicate with $|\ell_{2}\rangle$ the quantum state describing the leptons produced by
$N_2$-decays, we can define the flavour branching ratios giving the
probability $P_{2\alpha}$ that $|\ell_{2}\rangle$ is measured in a flavour eigenstate $|\ell_{\alpha}\rangle$
as $P_{2\alpha} \equiv |\langle \ell_{\alpha}|\ell_{2}\rangle |^2$.
Analogously, indicating with $|\bar{\ell}_{2}'\rangle$ the quantum state describing the anti-leptons
produced by $N_2$-decays, we can define the anti-flavour branching ratios as
$\bar{P}_{2\alpha} \equiv|\langle\bar{\ell}_{\alpha} |\bar{\ell}'_{2}\rangle |^2$.
The tree level contribution is simply given by the average
$P^0_{2\alpha}=(P_{2\alpha}+\bar{P}_{2\alpha})/2$.
The total decay width of the $N_2$'s can be expressed in terms of the Dirac mass matrix as
\begin{equation}
\widetilde{\Gamma}_2 = {M_2\over 8\,\pi\,v^2}\,(m^{\dagger}_D\,m_D)_{22}
\end{equation}
and is given by the sum $\widetilde{\Gamma}_2=\Gamma_2+\bar{\Gamma}_2$
of the total decay rate into leptons and of the total decay rate into
anti-leptons respectively.
The flavoured decay widths are given by
\begin{equation}
\widetilde{\Gamma}_{2\alpha} = {M_2\over 8\,\pi\,v^2}\,|m_{D\alpha 2}|^2 \, ,
\end{equation}
and can be also expressed as a sum, $\widetilde{\Gamma}_{2\alpha}=\Gamma_{2\alpha}+\bar{\Gamma}_{2\alpha}$,
of the flavoured decay rate into leptons and of the flavoured total decay rate into
anti-leptons respectively.
Notice that the branching ratios can then be expressed in terms of the rates as
$P_{2\alpha}=\Gamma_{2\alpha}/\Gamma_2$ and $\bar{P}_{2\alpha}=\bar{\Gamma}_{2\alpha}/\bar{\Gamma}_2$.
The flavoured $C\!P$ asymmetries for the $N_2$-decays
into $\alpha$-leptons ($\alpha=e,\mu,\tau$) are then defined as
\begin{equation}
\varepsilon_{2\alpha}\equiv -\,{\Gamma_{2\alpha}-\overline{\Gamma}_{2\alpha}
\over \Gamma_{2}+\overline{\Gamma}_{2}} \, ,
\end{equation}
while the total $C\!P$ asymmetries as
\footnote{Notice that we define the total and flavoured $C\!P$ asymmetries with a sign convention
in such a way that they have the same sign respectively of the produced $B-L$ and $\Delta_{\alpha}$
asymmetries rather than of the $L$ and $L_{\alpha}$ asymmetries.}
\begin{equation}
\varepsilon_2\equiv -\,{\Gamma_2-\bar{\Gamma}_2\over \Gamma_2+\bar{\Gamma}_2} =\sum_\alpha \varepsilon_{2\alpha} \,.
\end{equation}
The three flavoured $C\!P$ asymmetries can be calculated using \cite{crv}
\begin{equation}\label{eps2a}
\varepsilon_{2\alpha}=
\frac{3}{16 \pi (h^{\dag}h)_{22}} \sum_{j\neq 2} \left\{ {\rm Im}\left[h_{\alpha 2}^{\star}
h_{\alpha j}(h^{\dag}h)_{2 j}\right] \frac{\xi(x_j)}{\sqrt{x_j}}+
\frac{2}{3(x_j-1)}{\rm Im}
\left[h_{\alpha 2}^{\star}h_{\alpha j}(h^{\dag}h)_{j 2}\right]\right\} \, ,
\end{equation}
where $x_j\equiv (M_j/M_2)^2$ and
\begin{equation}\label{xi}
\xi(x)= {2\over 3}\,x\,
\left[(1+x)\,\ln\left({1+x\over x}\right)-{2-x\over 1-x}\right] \, .
\end{equation}
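The loop function $\xi(x)$ is singular at $x=1$ (degenerate masses) and tends to $1$ in the hierarchical limit $x\gg 1$; a short numerical sketch (helper name ours) transcribing the formula above makes this easy to check.

```python
import math

def xi(x):
    # xi(x) of the equation above; singular at x = 1, and xi(x) -> 1
    # in the hierarchical limit x >> 1.
    return (2.0 / 3.0) * x * ((1 + x) * math.log((1 + x) / x)
                              - (2 - x) / (1 - x))
```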
The tree-level branching ratios can then be expressed as
\begin{equation}
P^0_{2\alpha} = {\widetilde{\Gamma}_{2\alpha}\over \widetilde{\Gamma}_2} + {\cal O}(\varepsilon^2)
\simeq {|m_{D\alpha 2}|^2 \over (m^{\dagger}_D\,m_D)_{22} } \, .
\end{equation}
Defining $\Delta P_{2\alpha}\equiv P_{2\alpha}-\bar{P}_{2\alpha}$,
it will prove useful to notice that the flavoured asymmetries can be
decomposed as the sum of two terms
\footnote{The derivation is simple and can be helpful to understand
later on phantom leptogenesis. If we write $P_{2\alpha}=P^0_{2\alpha}+\Delta P_{2\alpha}/2$
and $\bar{P}_{2\alpha}=P^0_{2\alpha}-\Delta P_{2\alpha}/2$, one has
\[
\varepsilon_{2\alpha}= -\,{P_{2\alpha}\,\Gamma_{2}- \bar{P}_{2\alpha}\,\overline{\Gamma}_{2}
\over \Gamma_{2}+\overline{\Gamma}_{2}}= P^0_{2\alpha}\,\varepsilon_{2}-{\Delta P_{2\alpha}\over 2} \,.
\]
Notice that we are correcting a wrong sign in Ref. \cite{flavorlep}.}
\cite{nardi1},
\begin{equation}\label{eps2abis}
\varepsilon_{2\alpha}=P^{0}_{2\alpha}\,\varepsilon_2 - {\Delta P_{2\alpha} \over 2} \, ,
\end{equation}
where the first term is
due to an imbalance between the total number of produced leptons and anti-leptons
and is therefore proportional to the total $C\!P$ asymmetry,
while the second originates from a different flavour composition of the
lepton quantum states with respect to the $C\!P$ conjugated
anti-lepton quantum states.
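The decomposition (\ref{eps2abis}) is in fact an exact algebraic identity, not an approximation, as the following numerical sketch with arbitrary (purely hypothetical) flavoured rates confirms; all names below are our own.

```python
# Hypothetical flavoured decay rates into leptons / anti-leptons
G  = {'e': 0.30, 'mu': 0.45, 'tau': 0.25}   # Gamma_{2 alpha}; Gamma_2 = 1.00
Gb = {'e': 0.32, 'mu': 0.41, 'tau': 0.29}   # bar Gamma_{2 alpha}; bar Gamma_2 = 1.02

G2, G2b = sum(G.values()), sum(Gb.values())
eps2 = -(G2 - G2b) / (G2 + G2b)             # total CP asymmetry

def eps(alpha):
    # Flavoured CP asymmetry directly from its definition.
    return -(G[alpha] - Gb[alpha]) / (G2 + G2b)

def eps_decomposed(alpha):
    # Same quantity via eps_{2a} = P0_{2a} eps_2 - Delta P_{2a}/2.
    P, Pb = G[alpha] / G2, Gb[alpha] / G2b
    return 0.5 * (P + Pb) * eps2 - 0.5 * (P - Pb)
```

The two evaluations agree flavour by flavour, and the flavoured asymmetries sum to the total one, as required by the definitions.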
Sphaleron processes conserve the flavoured asymmetries
$\Delta_{\alpha}\equiv B/3-L_{\alpha}$ ($\alpha=e,\mu,\tau $). Therefore, the Boltzmann
equations are particularly simple in terms of these quantities \cite{bcst}.
In the two-flavour regime the electron and the muon components of $|{\ell}_2\rangle$
evolve coherently and the wash-out from inverse processes producing the $N_2$'s
then acts on the sum $N_{\Delta_{\gamma}}\equiv N_{\Delta_e} + N_{\Delta_{\mu}}$. Therefore, it is convenient to
define correspondingly $P^0_{2\gamma}\equiv P^0_{2e}+P^0_{2\mu}$ and
$\varepsilon_{2\gamma}\equiv \varepsilon_{2e}+\varepsilon_{2\mu}$. More generally, any quantity with a subscript
`$\gamma$' has to be meant as the sum of the same quantity calculated for the electron and
for the muon flavour component.
The asymmetry produced by the lightest and by the heaviest RH neutrino decays is negligible
since their $C\!P$ asymmetries are highly suppressed with the assumed mass pattern.
The set of classical kinetic equations then reduces to
a very simple one describing the asymmetry generated by the $N_2$-decays,
\begin{eqnarray}\label{flke}
{dN_{N_2}\over dz_2} & = & -D_2\,(N_{N_2}-N_{N_2}^{\rm eq}) \, ,\\
{dN_{\Delta_{\gamma}}\over dz_2} & = &
\varepsilon_{2\gamma}\,D_2\,(N_{N_2}-N_{N_2}^{\rm eq})-
P_{2\gamma}^{0}\,W_2\,\sum_{\alpha=\gamma,\tau}\,C_{\gamma\alpha}^{(2)}\,N_{\Delta_{\alpha}} \, ,\\
{dN_{\Delta_{\tau}}\over dz_2} & = &
\varepsilon_{2\tau}\,D_2\,(N_{N_2}-N_{N_2}^{\rm eq})-
P_{2\tau}^{0}\,W_2\,\sum_{\alpha=\gamma,\tau}\,C_{\tau\alpha}^{(2)}\,N_{\Delta_{\alpha}} \, .
\end{eqnarray}
where $z_2 \equiv M_2/T$. The total $B-L$ asymmetry can then be calculated as
$N_{B-L}= N_{\Delta_{\tau}}+N_{\Delta_{\gamma}}$.
The equilibrium abundances are given by
$N_{N_2}^{\rm eq}=z_2^2\,{\cal K}_2(z_2)/2$, where we indicated with
${\cal K}_i(z_2)$ the modified Bessel functions.
Introducing the total decay parameter $K_2\equiv \widetilde{\Gamma}_{2}(T=0)/H(T=M_2)$,
the decay term $D_2$ can be expressed as
\begin{equation}
D_2(z_2) \equiv {\widetilde{\Gamma}_{2}\over H\,z_2}=K_2\,z_2\,
\left\langle {1\over\gamma} \right\rangle \, ,
\end{equation}
where $\langle 1/\gamma \rangle(z_2)$ is
the thermally averaged dilation factor, given by the
ratio ${\cal K}_1(z_2)/{\cal K}_2(z_2)$. Finally,
the inverse decays wash-out term is given by
\begin{equation}\label{WID}
W_2(z_2) =
{1\over 4}\,K_2\,{\cal K}_1(z_2)\,z_2^3 \, .
\end{equation}
The total decay parameter $K_2$ is related to the Dirac mass matrix by
\begin{equation}
K_2={\widetilde{m}_2\over m_{\star}} \, ,
\hspace{10mm}
{\rm where}
\hspace{10mm}
\widetilde{m}_2\equiv{(m_D^{\dagger}\,m_D)_{22} \over M_2}
\end{equation}
is the effective neutrino mass \cite{plumacher} and
$m_{\star}$ is the equilibrium neutrino mass defined by \cite{orloff,annals}
\begin{equation}\label{d}
m_{\star}\equiv
{16\, \pi^{5/2}\,\sqrt{g_*} \over 3\,\sqrt{5}}\,
{v^2 \over M_{\rm Pl}}
\simeq 1.08\times 10^{-3}\,{\rm eV}.
\end{equation}
It will also prove convenient to introduce the flavoured effective neutrino masses
$\widetilde{m}_{2\alpha} \equiv P^0_{2\alpha}\,\widetilde{m}_2$ and correspondingly
the flavoured decay parameters $K_{2\alpha}\equiv P^0_{2\alpha}\,K_2 = \widetilde{m}_{2\alpha}/m_{\star} $,
so that $\sum_{\alpha} \widetilde{m}_{2\alpha}=\widetilde{m}_2$ and $\sum_{\alpha} K_{2\alpha}=K_2$.
The flavour coupling matrix $C$ \cite{bcst,spectator,racker,Antusch:2006cw,nardi1,flavorlep} relates the asymmetries
stored in the lepton doublets and in the Higgs bosons to the
$\Delta_{\alpha}$'s. It is therefore the sum of two contributions,
\begin{equation}
C_{\alpha\beta}=C^{\ell}_{\alpha\beta}+C^{H}_{\alpha\beta} \, ,
\end{equation}
the first one connecting the asymmetry in the lepton doublets and
the second connecting the asymmetry in the Higgs bosons.
Flavour dynamics couple because the generation of a leptonic asymmetry in the lepton
doublets from $N_i$-decays is necessarily accompanied by the generation of a hypercharge asymmetry
in the Higgs bosons and of a baryonic asymmetry in the quarks
via sphaleron processes. The asymmetry generated in the lepton doublets
is moreover also redistributed to the right-handed charged particles.
The wash-out of a specific flavour asymmetry is then influenced by the
dynamics of the asymmetries stored in the other flavours, because
they are linked primarily through the asymmetry in the Higgs doublets
and secondarily through the asymmetry in the quarks.
The condition of chemical equilibrium gives a constraint on the chemical potential
(and hence on the number density asymmetry) of each such species. Solving all the constraints,
one obtains the $C_{\alpha\beta}$ explicitly.
If we indicate with $C^{(2)}$ the coupling matrix in the
two-flavour regime, the two contributions to the flavour coupling matrix are given by
\begin{equation} C^{l(2)}=\left(\begin{array}{cc}
417/589 & -120/589 \\ -30/589 & 390/589 \end{array}\right) \, \hspace{4mm} \mbox{\rm and}
\hspace{4mm}
C^{h(2)}=\left(\begin{array}{cc}
164/589 & 224/589 \\
164/589 & 224/589
\end{array}\right) \, ,
\end{equation}
and summing one obtains
\begin{equation}
C^{(2)} \equiv
\left(\begin{array}{cc}
C^{(2)}_{\gamma\gamma} & C^{(2)}_{\gamma\tau} \\ C^{(2)}_{\tau\gamma} & C^{(2)}_{\tau\tau}
\end{array}\right) =
\left(\begin{array}{cc}
581/589 & 104/589 \\ 194/589 & 614/589 \end{array}\right) \, .
\end{equation}
A traditional calculation, where flavour coupling is neglected,
corresponds to approximating the $C$-matrix by the identity matrix. In this
case the evolution of the two flavour asymmetries proceeds uncoupled,
and they can be easily worked out in integral form \cite{kt,annals,flavorlep},
\begin{equation}\label{solint}
N_{\Delta\alpha}(z_2)=N_{\Delta\alpha}^{\rm in}\,
e^{-P_{2\alpha}^0\,\int_{z_2^{\rm in}}^{z_2}\,dz_{2}'\,W_2(z_2')}
+\varepsilon_{2\alpha}\,\kappa(z_2;K_{2\alpha}) \, ,
\end{equation}
where the efficiency factors are given by
\begin{equation}\label{ef}
\kappa(z_2;K_{2\alpha})=-\int_{z_2^{\rm in}}^{z_2}\,dz_{2}'\,{dN_{N_2}\over dz_2'}\,
e^{-P_{2\alpha}^0\,\int_{z_2'}^{z_2}\,dz_{2}''\,W_2(z_2'')} \,.
\end{equation}
We will neglect the first term, which accounts for possible initial flavour asymmetries,
and assume $z_2^{\rm in}\ll 1$.
The efficiency factors, and therefore the asymmetries, get frozen at
a value of the temperature given by $T_{L\alpha}=M_2/z_B(K_{2\alpha})$,
where \cite{beyond}
\begin{equation}
z_{B}(K_{2\alpha}) \simeq 2+4\,K_{2\alpha}^{0.13}\,e^{-{2.5\over K_{2\alpha}}}={\cal O}(1\div 10) \, .
\end{equation}
Defining $T_L\equiv {\rm min}(T_{L\tau},T_{L\gamma})$,
the total final $B-L$ asymmetry at $T_L $ is then given by
\begin{equation}\label{solution}
N_{B-L}^{T\sim T_L} \simeq \varepsilon_{2\gamma}\,\kappa(K_{2\gamma})+ \varepsilon_{2\tau}\,\kappa(K_{2\tau}) \, .
\end{equation}
Assuming an initial thermal $N_2$-abundance,
the final efficiency factors $\kappa(K_{2\alpha})\equiv \kappa(z_2=\infty,K_{2\alpha})$
are given approximately by \cite{flavorlep}
\begin{equation}
\kappa(K_{2\alpha})\simeq \frac{2}{K_{2\alpha} \,
z_{\rm B}(K_{2\alpha})}\left[1-{\rm exp}\left(-\frac{1}{2} K_{2\alpha}\, z_{\rm B}(K_{2\alpha})\right)\right]\, .
\end{equation}
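A minimal numerical sketch of the freeze-out value $z_B(K)$ and of the thermal-abundance efficiency factor (function names are ours):

```python
import math

def z_B(K):
    # freeze-out value, of order 1-10
    return 2.0 + 4.0 * K**0.13 * math.exp(-2.5 / K)

def kappa_thermal(K):
    # final efficiency factor for an initial thermal N2-abundance
    zB = z_B(K)
    return 2.0 / (K * zB) * (1.0 - math.exp(-0.5 * K * zB))
```

In the strong wash-out regime ($K_{2\alpha}\gg 1$) the exponential term is negligible and $\kappa(K_{2\alpha})\simeq 2/(K_{2\alpha}\,z_B)$.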
On the other hand, in the case of vanishing initial abundance\footnote{These analytical expressions reproduce very well the numerical
results found in \cite{Antusch:2006cw}. The difference is at most $30\%$
around $K_{2\alpha}\simeq 1$ and much smaller than $10\%$ elsewhere.},
the efficiency factors
are the sum of two different contributions, a negative and a positive one,
\begin{equation}
\kappa_{2\alpha}^{\rm f}
=\kappa_{-}^{\rm f}(K_2,P_{2\alpha}^{0})+
\kappa_{+}^{\rm f}(K_2,P_{2\alpha}^{0}) \, .
\end{equation}
The negative contribution arises from a first stage where
$N_{N_2}\leq N_{N_2}^{\rm eq}$, for $z_2\leq z_2^{\rm eq}$,
and is given approximately by
\begin{equation}\label{k-}
\kappa_{-}^{\rm f}(K_2,P_{2\alpha}^{0})\simeq
-{2\over P_{2\alpha}^{0}}\ e^{-{3\,\pi\,K_{2\alpha} \over 8}}
\left(e^{{P_{2\alpha}^{0}\over 2}\,N_{N_2}(z_{\rm eq})} - 1 \right) \, .
\end{equation}
The $N_2$-abundance at $z_2^{\rm eq}$ is well approximated by the expression
\begin{equation}\label{nka}
N_{N_2}(z_2^{\rm eq}) \simeq \overline{N}(K_2)\equiv
{N(K_2)\over\left(1 + \sqrt{N(K_2)}\right)^2}\, ,
\end{equation}
that interpolates between the limit $K_2\gg 1$, where $z_2^{\rm eq}\ll 1$ and
$N_{N_2}(z_2^{\rm eq})=1$, and the limit $K_2\ll 1$, where
$z_2^{\rm eq}\gg 1$ and $N_{N_2}(z_2^{\rm eq})=N(K_2)\equiv 3\pi K_2/4$.
The positive contribution arises from a second stage when
$N_{N_2}\geq N_{N_2}^{\rm eq}$, for $z_2\geq z_2^{\rm eq}$,
and is approximately given by
\begin{equation}\label{k+}
\kappa_{+}^{\rm f}(K_2,P_{2\alpha}^{0})\simeq
{2\over z_B(K_{2\alpha})\,K_{2\alpha}}
\left(1-e^{-{K_{2\alpha}\,z_B(K_{2\alpha})\,N_{N_2}(z_{\rm eq})\over 2}}\right) \, .
\end{equation}
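The two contributions for vanishing initial abundance can be evaluated with the following sketch (function names are ours; the expression for $z_B$ given above is inlined):

```python
import math

def Nbar(K2):
    # interpolating N2-abundance at z2^eq, eq. (nka)
    N = 3.0 * math.pi * K2 / 4.0
    return N / (1.0 + math.sqrt(N))**2

def kappa_minus(K2, P):
    # negative contribution, eq. (k-); P = P^0_{2alpha}, so K2alpha = P*K2
    K2a = P * K2
    return (-2.0 / P * math.exp(-3.0 * math.pi * K2a / 8.0)
            * (math.exp(0.5 * P * Nbar(K2)) - 1.0))

def kappa_plus(K2, P):
    # positive contribution, eq. (k+)
    K2a = P * K2
    zB = 2.0 + 4.0 * K2a**0.13 * math.exp(-2.5 / K2a)
    return 2.0 / (zB * K2a) * (1.0 - math.exp(-0.5 * K2a * zB * Nbar(K2)))
```

For $K_2\ll 1$ the two contributions largely cancel, while for $K_2\gg 1$ the positive one dominates.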
If flavour coupling is taken into account, we can still solve eqs.~(\ref{flke}) analytically
by performing the following change of variables
\begin{equation}
\left(\begin{array}{c}
N_{\Delta_{\gamma'}} \\
N_{\Delta_{\tau'}}
\end{array}\right) = U\,
\left(\begin{array}{c}
N_{\Delta_{\gamma}} \\
N_{\Delta_{\tau}}
\end{array}\right) \, , \hspace{5mm} \mbox{\rm where} \hspace{5mm}
U\equiv \left(\begin{array}{cc}
U_{\gamma'\gamma} & U_{\gamma'\tau} \\
U_{\tau'\gamma} & U_{\tau'\tau}
\end{array}\right)
\end{equation}
is the matrix that diagonalizes
\begin{equation}
P^0_2 \equiv
\left(\begin{array}{cc}
P^0_{2\gamma}\,C_{\gamma\gamma}^{(2)} & P^0_{2\gamma}\,C_{\gamma\tau}^{(2)} \\
P^0_{2\tau}\,C_{\tau\gamma}^{(2)} & P^0_{2\tau}\,C_{\tau\tau}^{(2)}
\end{array}\right) \, ,
\end{equation}
i.e. $U\,P^0_{2}\,U^{-1} ={\rm diag}(P^0_{2\gamma'},P^0_{2\tau'})$.
In these new variables the two kinetic
equations for the flavoured asymmetries decouple,
\begin{eqnarray}\label{flke2}
{dN_{\Delta_{\gamma'}}\over dz_2} & = &
\varepsilon_{2\gamma'}\,D_2\,(N_{N_2}-N_{N_2}^{\rm eq})-P^0_{2\gamma'}\,W_2\,N_{\Delta_{\gamma'}} \, \\
{dN_{\Delta_{\tau'}}\over dz_2} & = &
\varepsilon_{2\tau'}\,D_2\,(N_{N_2}-N_{N_2}^{\rm eq})-P^0_{2\tau'}\,W_2\,N_{\Delta_{\tau'}} \, ,
\end{eqnarray}
where we defined
\begin{equation}
\left(\begin{array}{c}
\varepsilon_{2\gamma'} \\
\varepsilon_{2\tau'}
\end{array}\right) \equiv U\,
\left(\begin{array}{c}
\varepsilon_{2\gamma} \\
\varepsilon_{2\tau}
\end{array}\right) \, .
\end{equation}
The solutions for the two $N_{\Delta_{\alpha'}}$ are then still given by eq.~(\ref{solint}),
where now, however, the `unprimed'
quantities have to be replaced with the `primed' ones, so that
explicitly one has
\begin{eqnarray}\label{solution2}
N_{\Delta_{\gamma'}}^{T\sim T_L} & \simeq &
\varepsilon_{2\gamma'}\,\kappa(K_{2\gamma'}) \, , \\ \nonumber
N_{\Delta_{\tau'}}^{T\sim T_L} & \simeq &
\varepsilon_{2\tau'}\,\kappa(K_{2\tau'}) \, .
\end{eqnarray}
Notice that the $B-L$ asymmetry at $T\sim T_L$ is still given by $N_{B-L}^{T\sim T_L}=
N_{\Delta_{\gamma}}^{T\sim T_L}+N_{\Delta_\tau}^{T\sim T_L}$.
The two $N_{\Delta_{\alpha}}$'s can be calculated from the two $N_{\Delta_{\alpha'}}$'s
using the inverse transformation
\begin{equation}\label{solutioninv}
\left(\begin{array}{c}
N_{\Delta_{\gamma}}^{T\sim T_L} \\
N_{\Delta_{\tau}}^{T\sim T_L}
\end{array}\right) = U^{-1}\,
\left(\begin{array}{c}
N_{\Delta_{\gamma'}}^{T\sim T_L} \\
N_{\Delta_{\tau'}}^{T\sim T_L}
\end{array}\right) \, , \hspace{5mm} \mbox{\rm where} \hspace{5mm}
U^{-1}\equiv \left(\begin{array}{cc}
U^{-1}_{\gamma\gamma'} & U^{-1}_{\gamma\tau'} \\
U^{-1}_{\tau\gamma'} & U^{-1}_{\tau\tau'}
\end{array}\right) \, .
\end{equation}
To study the impact of flavour coupling on the final asymmetry, we can calculate the ratio
\begin{equation}\label{r}
R \equiv \left|{N_{B-L}\over \left.N_{B-L}\right|_{C=I}}\right|
\end{equation}
between the asymmetry calculated taking into account flavour coupling,
and the asymmetry calculated neglecting flavour coupling, corresponding to the assumption $C=I$.
If we want first to calculate the value of $R$ at the production stage,
we have to express $N_{B-L}^{T\sim T_L}$
in terms of the `unprimed' quantities in eq.~(\ref{solution2}).
This is quite easy for the $K_{2\alpha'}$, since one simply has to find
the eigenvalues of the matrix $P_2^0$. Taking for simplicity the
approximation $C^{(2)}_{\gamma\gamma}\simeq C^{(2)}_{\tau\tau} \simeq 1$,
and remembering that $P_{2\gamma}^0+P_{2\tau}^0=1$, one obtains
\begin{eqnarray}\label{eigenvalues}
P^0_{2\gamma'} & \simeq & {1\over 2}\,\left(1+\sqrt{(P^0_{2\gamma}-P^0_{2\tau})^2+
4\,C^{(2)}_{\gamma\tau}\,C^{(2)}_{\tau\gamma}\,P^0_{2\gamma}\,P^0_{2\tau}}\right) \, ,\\
P^0_{2\tau'} & \simeq & {1\over 2}\,\left(1-\sqrt{(P^0_{2\gamma}-P^0_{2\tau})^2+
4\,C^{(2)}_{\gamma\tau}\,C^{(2)}_{\tau\gamma}\,P^0_{2\gamma}\,P^0_{2\tau}}\right) .
\end{eqnarray}
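As a cross-check of this approximation, one can compare the exact eigenvalues of $P^0_2$, built from the $C^{(2)}$ entries quoted earlier, with the approximate expressions above; a short Python sketch (the function names are ours):

```python
import math

# C^(2) entries as quoted in the text
Cgg, Cgt, Ctg, Ctt = 581/589, 104/589, 194/589, 614/589

def exact_eigs(Pg, Pt):
    # eigenvalues of the 2x2 matrix P^0_2 via the trace/determinant formula
    a, b = Pg * Cgg, Pg * Cgt
    c, d = Pt * Ctg, Pt * Ctt
    tr, det = a + d, a * d - b * c
    s = math.sqrt(tr * tr - 4.0 * det)
    return (tr + s) / 2.0, (tr - s) / 2.0

def approx_eigs(Pg, Pt):
    # eq. (eigenvalues): C_gamma_gamma ~ C_tau_tau ~ 1 and Pg + Pt = 1
    s = math.sqrt((Pg - Pt)**2 + 4.0 * Cgt * Ctg * Pg * Pt)
    return (1.0 + s) / 2.0, (1.0 - s) / 2.0
```

For branching ratios of order one, the two determinations agree at the percent level.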
Notice that, both for $\alpha=\tau$ and $\alpha=\gamma$, one has
$P^0_{2\alpha'}\simeq P^0_{2\alpha}+{\cal O}(\sqrt{C^{(2)}_{\gamma\tau}\,C^{(2)}_{\tau\gamma}})$
if $P_{2\tau}^0\simeq P_{2\gamma}^0\simeq 1/2$ and
$P^0_{2\alpha'}\simeq P^0_{2\alpha}+{\cal O}(C^{(2)}_{\gamma\tau}\,C^{(2)}_{\tau\gamma})$
if $P_{2\tau}^0\ll P_{2\gamma}^0$ or vice versa.
Considering moreover that, if $K_{2\alpha} \gg 1$, one has approximately
$\kappa(K_{2\alpha})\sim 1/K_{2\alpha}^{1.2}$,
one can write
\begin{eqnarray}\label{solution3}
N_{\Delta_{\gamma'}}^{T\sim T_L} & \simeq &
\varepsilon_{2\gamma'}\,\kappa(K_{2\gamma}) \, , \\ \nonumber
N_{\Delta_{\tau'}}^{T\sim T_L} & \simeq &
\varepsilon_{2\tau'}\,\kappa(K_{2\tau}) \, .
\end{eqnarray}
We have now to consider the effect of flavour coupling encoded in
the primed $C\!P$ asymmetries. If these are re-expressed
in terms of the unprimed $C\!P$ asymmetries we can obtain
explicitly the flavour composition
of the asymmetry generated at $T\simeq T_L$ plugging eqs.~(\ref{solution3})
into eqs.~(\ref{solutioninv}),
\begin{eqnarray} \label{Dg}
N_{\Delta_{\gamma}}^{T\sim T_L} & = &
U^{-1}_{\gamma\g'}\left[U_{\gamma'\gamma}\,\varepsilon_{2\gamma}+U_{\gamma'\tau}\,\varepsilon_{2\tau}\right]\,\kappa(K_{2\gamma})
+U^{-1}_{\gamma\tau'}\left[U_{\tau'\gamma}\,\varepsilon_{2\gamma}+U_{\tau'\tau}\,\varepsilon_{2\tau}\right]\,\kappa(K_{2\tau}) \, , \\ \label{Dtau}
N_{\Delta_{\tau}}^{T\sim T_L} & = &
U^{-1}_{\tau\gamma'}\left[U_{\gamma'\gamma}\,\varepsilon_{2\gamma}+U_{\gamma'\tau}\,\varepsilon_{2\tau}\right]\,\kappa(K_{2\gamma})
+U^{-1}_{\tau\t'}\left[U_{\tau'\gamma}\,\varepsilon_{2\gamma}+U_{\tau'\tau}\,\varepsilon_{2\tau}\right]\,\kappa(K_{2\tau}) \, , \\
N_{B-L}^{T\sim T_L} & = & N_{\Delta_{\gamma}}^{T\sim T_L} + N_{\Delta_\tau}^{T\sim T_L} \, . \label{flas}
\end{eqnarray}
We can distinguish two different cases. The first one is for $P^0_{2\tau}\simeq P^0_{2\gamma}\simeq 1/2$,
implying $K_{2\tau}=K_{2\gamma}=K_2/2$ and therefore $\kappa(K_{2\gamma})=\kappa(K_{2\tau})=\kappa(K_2/2)$.
In this situation one can see immediately that
\begin{equation}
N_{\Delta_{\gamma}}^{T\sim T_L} \simeq \varepsilon_{2\gamma}\,\kappa(K_2/2) \, , \hspace{5mm}
\mbox{\rm and} \hspace{5mm} N_{\Delta_{\tau}}^{T\sim T_L} \simeq \varepsilon_{2\tau}\,\kappa(K_2/2) \, .
\end{equation}
Therefore, barring the case $\varepsilon_{2\gamma}=-\varepsilon_{2\tau}$,
one has not only $N_{B-L}^{T\sim T_L}\simeq \left.N_{B-L}^{T\sim T_L}\right|_{C=I}$,
implying $R^{T\sim T_L}=1$, but also that the flavour composition is the same
as in a usual calculation where flavour coupling is neglected. However,
if $\varepsilon_{2\gamma}=-\varepsilon_{2\tau}$, a more careful treatment is necessary. From
eqs.~(\ref{eigenvalues}) one finds $P^0_{2\gamma'}=(1+\sqrt{C^{(2)}_{\gamma\tau}\,C^{(2)}_{\tau\gamma}})/2\neq
P^0_{2\tau'}=(1-\sqrt{C^{(2)}_{\gamma\tau}\,C^{(2)}_{\tau\gamma}})/2$. This difference, induced by the off-diagonal
terms of the $C^{(2)}$ matrix, prevents an exact cancellation, or at least changes the condition
under which it is realized, an effect that also occurs within $N_1$ leptogenesis \cite{abada}.
Let us now see what happens, on the other hand, when either $P^0_{2\tau}$ or $P^0_{2\gamma}$
is much smaller than the other. This situation should not be regarded as fine-tuned,
since it occurs quite naturally for a random choice of the parameters.
At first order in the $C^{(2)}$ off-diagonal terms, one has
\begin{equation}
U \simeq \left(\begin{array}{cc}
1 & C^{(2)}_{\gamma\tau}\,{P^0_{2\gamma}\over P^0_{2\gamma}-P^0_{2\tau}} \\
C^{(2)}_{\tau\gamma} {P^0_{2\tau}\over P^0_{2\tau}-P^0_{2\gamma}} & 1
\end{array}\right)
\, ,
\,\,\, U^{-1} \simeq \left(\begin{array}{cc}
1 & -C^{(2)}_{\gamma\tau}\,{P^0_{2\gamma}\over P^0_{2\gamma}-P^0_{2\tau}} \\
-C^{(2)}_{\tau\gamma} {P^0_{2\tau}\over P^0_{2\tau}-P^0_{2\gamma}} & 1
\end{array}\right)
\, .
\end{equation}
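One can verify that, at this order, the two matrices are inverses of each other up to second-order terms in the off-diagonal entries of $C^{(2)}$; a small sketch (function names are ours):

```python
def U_first_order(Pg, Pt, Cgt, Ctg):
    # first-order expressions for U and U^{-1} in the C^(2) off-diagonal terms
    x = Cgt * Pg / (Pg - Pt)
    y = Ctg * Pt / (Pt - Pg)
    return [[1.0, x], [y, 1.0]], [[1.0, -x], [-y, 1.0]]

def matmul2(A, B):
    # plain 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

The product $U\,U^{-1}$ equals $(1-U_{\gamma'\tau}\,U_{\tau'\gamma})\,I$, i.e. it deviates from the identity only at second order in the off-diagonal terms.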
Let us for definiteness assume that $P^0_{2\tau}\ll P^0_{2\gamma}$ and that $K_2 \gg 1$ (this second
condition also occurs for natural choices of the parameters).
In this case one has necessarily $\kappa(K_{2\tau})\gg \kappa(K_{2\gamma})$.
We can therefore specialize eqs.~(\ref{flas}),
writing approximately, for the asymmetries in the two flavours,
\begin{eqnarray}\label{flasspec}
N_{\Delta_{\gamma}}^{T\sim T_L} & \simeq & \varepsilon_{2\gamma}\,\kappa(K_{2\gamma}) - C^{(2)}_{\gamma\tau}\,\varepsilon_{2\tau}\,\kappa(K_{2\tau}) \, ,
\\ \label{flasspec2}
N_{\Delta_{\tau}}^{T\sim T_L} & \simeq & \varepsilon_{2\tau}\,\kappa(K_{2\tau}) \, ,
\end{eqnarray}
where we neglected all terms containing products either of two off-diagonal terms of $C^{(2)}$,
or of one off-diagonal term times $\kappa(K_{2\gamma})$.
We can therefore see that the total asymmetry cannot
differ much from the standard calculation,
\begin{equation}
N_{B-L}^{T\sim T_L}\simeq \varepsilon_{2\gamma}\,\kappa(K_{2\gamma}) + \varepsilon_{2\tau}\,\kappa(K_{2\tau})
- C^{(2)}_{\gamma\tau}\,\varepsilon_{2\tau}\,\kappa(K_{2\tau}) \, ,
\end{equation}
implying
\begin{equation}\label{RTL}
R^{T\sim T_L} \simeq \left|1 - C^{(2)}_{\gamma\tau}\,
{\varepsilon_{2\tau}\,\kappa(K_{2\tau})\over \varepsilon_{2\gamma}\,\kappa(K_{2\gamma}) + \varepsilon_{2\tau}\,\kappa(K_{2\tau})}\right| \, .
\end{equation}
This holds because the dominant contribution comes from the tauonic flavour asymmetry
that is not changed at first order. Notice by the way that since $C^{(2)}_{\gamma\tau}>0$
and necessarily $\varepsilon_{2\tau}>0$, the effect of flavour coupling even produces
a reduction of the total asymmetry at $T\sim T_L$
\footnote{This result differs from the one of \cite{abada} where, within $N_1$ leptogenesis,
the authors find an enhancement instead of a reduction. This is simply
explained by the fact that we are also accounting for the Higgs asymmetry that determines
the (correct) positive sign for $C^{(2)}_{\gamma\tau}$.}.
On the other hand the asymmetry in the sub-dominant flavour $\gamma$ can be
greatly enhanced since the quantity
\begin{equation}
R_{\Delta_\gamma}^{T\sim T_L}\equiv \left|{N_{\Delta_{\gamma}}^{T\sim T_L}\over
\left.N_{\Delta_{\gamma}}^{T\sim T_L}\right|_{C=I}}\right| \simeq
\left|1 - C^{(2)}_{\gamma\tau}\,{\varepsilon_{2\tau}\,\kappa(K_{2\tau})\over \varepsilon_{2\gamma}\,\kappa(K_{2\gamma}) } \right|
\end{equation}
can be in general much higher than unity. In this respect it is important
to notice that the assumption $P^0_{2\tau}\ll P^0_{2\gamma}$ does not necessarily imply
$\varepsilon_{2\tau} \ll \varepsilon_{2\gamma}$ since $\varepsilon_{2\alpha}\lesssim 10^{-6}\,(M_2/10^{10}\,{\rm GeV})\,\sqrt{P^0_{2\alpha}}$.
Notice also that if vice versa $P^0_{2\gamma}\ll P^0_{2\tau}$, then the $\tau$
flavour asymmetry is sub-dominant and can be strongly enhanced.
There is a simple physical interpretation of the enhancement of the sub-dominant flavoured
asymmetry. It can be given in terms of the effect of tau flavour coupling on the final
$\gamma$ asymmetry, which is described by the off-diagonal terms of the $C^{(2)}$ matrix.
The dominant contribution to these terms comes from the Higgs asymmetry produced
in $N_2 \rightarrow l_{\alpha}+\phi^{\dagger}$ decays. Let us still
assume for definiteness that $P^0_{2\tau}\ll P^0_{2\gamma}$ and that $K_2\gg 1$.
This implies that the $\gamma$ asymmetry is efficiently washed-out and there is
a substantial equilibrium between decays and inverse processes.
On the other hand the $\tau$ asymmetry is weakly washed-out, and for simplicity
we can consider the extreme case in which it is not washed-out at all (true for $K_{2\tau}\ll 1$).
An excess of tau over $\gamma$ asymmetry results in an excess of
Higgs over $\gamma$ asymmetry.
This excess Higgs asymmetry increases the inverse decays of ${\ell}_\gamma$ over the
$\bar{\ell}_\gamma$ states (or vice versa, depending on its sign) and
`soaks up' either more particle or more anti-particle states
generating an imbalance.
Hence one can have $R_{\Delta\gamma}^{T\sim T_L}\gg 1$ thanks to the
dominant effect of the extra inverse decay processes
that `switch on' when $C \neq I$.
This effect had been already discussed within $N_1$-dominated leptogenesis \cite{abada}.
Our results for the asymmetry at the production stage are qualitatively similar, though
we also took into account the dominant contribution to flavour coupling
coming from the Higgs asymmetry, and we
solved the kinetic equations including flavour
coupling analytically, without any approximation. As we already noticed, quantitatively,
accounting for the Higgs asymmetry produces important effects. For instance, when the
Higgs asymmetry is included, the results are quite symmetric under the interchange of
$P^0_{2\gamma}$ and $P^0_{2\tau}$, since the total matrix
$C^{(2)}$ is much more symmetric than $C^{l(2)}$.
There is however a much more important difference in this respect between $N_2$-dominated
and $N_1$-dominated leptogenesis. While in the latter case a strong
enhancement of the sub-dominant flavoured asymmetry does not translate into a strong
enhancement of the final asymmetry, in the case of the $N_2$-dominated scenario this becomes possible,
thanks to the presence of the additional stage of lightest RH neutrino wash-out,
as we discuss in the next section.
\section{Three flavour projection and the $N_1$ wash-out stage}
At $T\sim 10^{9}\,{\rm GeV}$ the muon Yukawa interactions equilibrate as well.
They are able to break the residual coherence of the superposition of the muon and
electron components of the quantum states $|{\ell}_2\rangle$ and $|\bar{\ell}'_2\rangle$.
Consequently, the `$\gamma$' asymmetry becomes
an incoherent mixture of an electron and a muon component \cite{decoherence2} and
the three-flavour regime holds \cite{flavoreffects1,nardi1}.
Therefore, for temperatures $T'$ such that $10^{9}\,{\rm GeV}\gg T' \gg M_1$, one has a situation where
the asymmetry in the tau flavour is still given by the frozen value produced at $T\sim T_L$ (cf. eq.~(\ref{Dtau})),
whereas the asymmetries in the electron and in the muon flavours have to be calculated by splitting the
$\gamma$-asymmetry produced at $T\sim T_L$ (cf. eq.~(\ref{Dg})), and the result is
\begin{eqnarray}\label{Demu}
N_{\Delta_{\delta}}(T') & = & p_{\delta}+{P^0_{2\delta}\over P^0_{2\gamma}} \,N_{\Delta_{\gamma}}^{T\sim T_L}\, , \hspace{10mm} (\delta=e,\mu )
\end{eqnarray}
where the ``phantom terms'' $p_{e}$ and $p_{\mu}$, for an initial thermal $N_2$-abundance $N_{N_2}^{\rm in}$,
are given by
\begin{eqnarray}
\hspace{15mm} p_{\delta} & = & \left(\varepsilon_{2\delta}- {P^0_{2\delta}\over P^0_{2\gamma}}\,{\varepsilon_{2\gamma}}\right)\,\,N_{N_2}^{\rm in} \, ,\hspace{10mm} (\delta=e,\mu )
\end{eqnarray}
and one can easily check that $p_e+p_\mu=0$. Notice that, because of the presence of the phantom terms,
the electron and the muon components are not just proportional to $\varepsilon_{2\gamma}$.
Let us show in detail how the result eq.~(\ref{Demu}) and the expression for the
phantom terms can be derived. The derivation is simplified if one considers
the $\Delta_{\delta}$ asymmetry as the result of two separate stages: first
an asymmetry $N_{L_\delta}^\star$ ends up, at the break of coherence, in the $\delta$ lepton doublets,
and then it is flavour-redistributed and sphaleron-converted in such a way that $N_{\Delta_\delta}=-N_{L_\delta}^\star$.
Actually, part of the $N_{L_\delta}$
asymmetry gets redistributed and sphaleron-converted immediately after having been produced.
However, with our simplified procedure the notation is greatly simplified
and the derivation made more transparent, while the final result does not change, since flavour redistribution
and sphalerons conserve the $\Delta_{\delta}$ asymmetries.
With these premises, we can say that the asymmetry in the $\delta$ lepton doublets
at the break of coherence is simply given by
\begin{equation}
N_{L_{\delta}}^{\star}=f_{2\delta}\,N_{\ell_{\gamma}}^{T\sim T_L}-\bar{f}_{2\delta}\,N_{\bar{\ell}_{\gamma}}^{T\sim T_L} \, ,
\end{equation}
where $f_{2\delta} \equiv |\langle \ell_{\delta}|\ell_{2\gamma}\rangle |^2 = {P_{2\delta}/P_{2\gamma}}$
and $\bar{f}_{2\delta} \equiv |\langle \ell_{\delta}|\bar{\ell}'_{2\gamma}\rangle |^2 = \bar{P}_{2\delta}/\bar{P}_{2\gamma}$.
With a few simple passages one can then write
\begin{eqnarray}
N_{L_{\delta}}^{\star}
& = & {1\over 2}\left(f_{2\delta}-\bar{f}_{2\delta}\right)\,\left(N_{\ell_{\gamma}}^{T\sim T_L}+N_{\bar{\ell}_{\gamma}}
^{T\sim T_L}\right) \\
& + & {1\over 2}\left(f_{2\delta}+\bar{f}_{2\delta}\right)\,N_{L_\gamma}^{T\sim T_L} \\
& = & - p_\delta + {1\over 2}\,\left({f_{2\delta}+\bar{f}_{2\delta}}\right)\,N_{L_\gamma}^{T\sim T_L} \, ,
\end{eqnarray}
where in the last expression we introduced the phantom term
\begin{equation}
p_\delta= - {1\over 2}\left(f_{2\delta}-\bar{f}_{2\delta}\right)\,\left(N_{\ell_{\gamma}}^{T\sim T_L}+N_{\bar{\ell}_{\gamma}}^{T\sim T_L}\right) \, .
\end{equation}
Considering now that
$N_{\ell_{\gamma}}^{T\sim T_L}+N_{\bar{\ell}_{\gamma}}^{T\sim T_L}\simeq P^0_{2\gamma}\,N_{N_2}^{\rm in}$
and that, using first $f_{2\delta} = {P_{2\delta}/P_{2\gamma}}$
and $\bar{f}_{2\delta} = \bar{P}_{2\delta}/\bar{P}_{2\gamma}$ and then eq.~(\ref{eps2abis}),
one has
\begin{eqnarray}
{1\over 2}\left(f_{2\delta}-\bar{f}_{2\delta}\right)\,P^0_{2\gamma} & \simeq &
- \left(\varepsilon_{2\delta}-{P^0_{2\delta}\over P^0_{2\gamma}}\,\varepsilon_{2\gamma}\right) \\
{1\over 2}\left(f_{2\delta}+\bar{f}_{2\delta}\right) & = & {P^0_{2\delta}\over P^0_{2\gamma}} \, ,
\end{eqnarray}
one finally finds
\begin{equation}
N_{L_\delta}^{\star}= - p_\delta +{P^0_{2\delta}\over P^0_{2\gamma}} \,N_{L_\gamma}^{T\sim T_L} \, ,
\end{equation}
where the phantom terms can be expressed in terms of the $C\!P$ asymmetries as
\begin{equation}
p_\delta = \left(\varepsilon_{2\delta}-{P^0_{2\delta}\over P^0_{2\gamma}}\,\varepsilon_{2\gamma}\right)\,N_{N_2}^{\rm in} \, .
\end{equation}
As a last step, one finally has to take into account flavour redistribution
and sphaleron conversion, so that eq.~(\ref{Demu}) follows.
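The cancellation $p_e+p_\mu=0$ noted above can be checked directly; in this sketch (function names are ours) the $\gamma$ quantities are taken as the sums of the $e$ and $\mu$ components, as implied by the splitting of the $\gamma$ asymmetry:

```python
def phantom(eps_e, eps_mu, P_e, P_mu, N_in=1.0):
    # p_delta = (eps_delta - (P0_delta/P0_gamma) * eps_gamma) * N2_in,
    # with the gamma quantities obtained by summing the e and mu components
    eps_g = eps_e + eps_mu
    P_g = P_e + P_mu
    p_e = (eps_e - P_e / P_g * eps_g) * N_in
    p_mu = (eps_mu - P_mu / P_g * eps_g) * N_in
    return p_e, p_mu
```

In particular, for $\varepsilon_{2\gamma}=0$ one recovers $p_e=\varepsilon_{2e}\,N_{N_2}^{\rm in}=-p_\mu$.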
The phantom terms originate
from the second contribution in eq.~(\ref{eps2abis}) to the flavoured $C\!P$ asymmetries.
One can see indeed that if $\Delta P_{2e}= \Delta P_{2\mu} = 0$, then $p_{e}=p_{\mu}=0$.
On the other hand, these terms do not vanish if the leptons and the anti-leptons
produced by the decays have a different flavour composition, such that at least one $\Delta P_{2\delta}\neq 0$,
even when $\varepsilon_{2\gamma}=0$. In this particular case one can indeed see that
$p_{e}= \varepsilon_{2e}=-\varepsilon_{2\mu}=-p_{\mu}$.
It should be noticed that, remarkably, the phantom terms are not washed-out at the production.
This happens because in this stage the $e$ and $\mu$ components of the leptons and anti-leptons
quantum states are still
in a coherent superposition. The phantom terms originate from the components of the
electron and muon asymmetries that depend only on differences between the
flavour compositions of the leptonic quantum states ${\ell_{2\gamma}}$ and
anti-lepton quantum states ${\bar{\ell}'_{2\gamma}}$. These cannot be
washed-out by the $N_2$ inverse processes, which can only act to destroy
the part of the electron and muon asymmetries
proportional to $\varepsilon_{2\gamma}$ itself
\footnote{The name {\em phantom} is not meant to imply that the
effect is non-physical. It is simply justified by the fact that the effect arises from
terms which cancel, and are therefore invisible (i.e. phantom-like), until a possible wash-out from the $N_1$
acts asymmetrically on the $e$ and $\mu$ components ($K_{1e}\neq K_{1\mu}$),
which renders the difference observable.}.
However, it should also be noticed that, if one assumes
a vanishing initial $N_2$-abundance, the phantom terms vanish. This
happens because
in this case they would be produced during the $N_2$ production stage with a sign
opposite to that of the decay stage, such that an exact cancellation would occur, implying a vanishing
final value
\footnote{This can be understood, for example, in the following way. An inverse decay of a lepton
with a Higgs corresponds to the creation either of a state orthogonal to $|{\ell_{2\gamma}}\rangle$,
that we indicate with $|{\ell_{2\gamma}^{\bot}}\rangle$, or of a state orthogonal to $|\bar{\ell}_{2\gamma}'\rangle$, that we indicate
with $|\bar{\ell}_{2\gamma}^{'\bot}\rangle$. Their flavour composition is given
by $|{\ell}_{2\gamma}^{\bot}\rangle =
\langle {\ell}_\mu|{\ell}_{2\gamma} \rangle\,|{\ell_e}\rangle-
\langle {\ell}_e|{\ell}_{2\gamma} \rangle\,|{\ell}_\mu\rangle$
and by $|\bar{\ell}_{2\gamma}^{'\bot}\rangle =\langle \bar{\ell}_\mu|\bar{\ell}'_{2\gamma} \rangle\,
|\bar{\ell}_e\rangle-\langle \bar{\ell}_e|\bar{\ell}'_{2\gamma} \rangle\,|\bar{\ell}_\mu\rangle$.
Therefore, each inverse decay will produce, on average, an electron and a muon asymmetry given respectively
by $\Delta L_e^{id}=(f_{2\mu}-\bar{f}_{2\mu})/2$ and $\Delta L_\mu^{id}=(f_{2e}-\bar{f}_{2e})/2$,
opposite to those produced by one decay. Notice that only $N_2$ inverse processes
can produce such $C\!P$ violating orthogonal states, with phantom terms exactly
canceling those in the lepton quantum states produced from decays.}.
Therefore, the phantom terms seem to introduce a strong dependence on the initial conditions
in $N_2$-flavoured leptogenesis.
When finally the inverse processes involving the lightest RH neutrinos
become active at $T\sim M_1$, the wash-out from the $N_1$-decays
acts separately on the three flavour components of the total $B-L$ asymmetry \cite{vives}.
The wash-out from the lightest RH neutrinos is more efficient than the wash-out
from the next-to-lightest RH neutrinos since it is not balanced by any production
and it therefore acts on the whole produced asymmetry.
Taking into account the flavour coupling matrix,
the set of kinetic equations describing this stage is given by
\begin{equation}\label{flkewA}
{dN_{\Delta_{\alpha}}\over dz_1} =
-P_{1\alpha}^{0}\,\sum_{\beta}\,C^{(3)}_{\alpha\beta}\,W_1^{\rm ID}\,N_{\Delta_{\beta}} \, ,
\hspace{15mm} (\alpha,\beta=e,\mu,\tau)
\end{equation}
where $z_1\equiv M_1/T$ and, more generally, all
quantities previously defined for the $N_2$'s can be also
analogously defined for the $N_1$'s. In particular
the $P_{1\alpha}^{0}$'s, the $K_{1\alpha}$'s and $W_1$
are defined analogously to the $P_{2\alpha}^{0}$, to the $K_{2\alpha}$'s and
to $W_2$ respectively.
The flavour coupling matrices in the three-flavour regime are given by
\[C^{l(3)}=\left(\begin{array}{ccc}
151/179 & -20/179 & -20/179 \\ -25/358 & 344/537 & -14/537 \\ -25/358 & -14/537 & 344/537
\end{array}\right) \, , \hspace{5mm}
C^{h(3)}=\left(\begin{array}{ccc}
37/179 & 52/179 & 52/179 \\
37/179 & 52/179 & 52/179 \\
37/179 & 52/179 & 52/179
\end{array}\right) \, ,
\]
\[C^{(3)} \equiv
\left(\begin{array}{ccc}
C_{ee}^{(3)} & C_{e\mu}^{(3)} & C_{e\tau}^{(3)} \\
C_{\mu e}^{(3)} & C_{\mu\mu}^{(3)} & C_{\mu \tau}^{(3)} \\
C_{\tau e}^{(3)} & C_{\tau\mu}^{(3)} & C_{\tau\tau}^{(3)}
\end{array}\right) =
\left(\begin{array}{ccc}
188/179 & 32/179 & 32/179 \\ 49/358 & 500/537 & 142/537 \\ 49/358 & 142/537 & 500/537
\end{array}\right) \, .
\]
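The three-flavour matrices quoted above satisfy $C^{(3)}=C^{l(3)}+C^{h(3)}$ entry by entry, as can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F

C_l3 = [[F(151, 179), F(-20, 179), F(-20, 179)],
        [F(-25, 358), F(344, 537), F(-14, 537)],
        [F(-25, 358), F(-14, 537), F(344, 537)]]
h_row = [F(37, 179), F(52, 179), F(52, 179)]
C_h3 = [h_row, h_row, h_row]           # three identical rows
C_3  = [[F(188, 179), F(32, 179),  F(32, 179)],
        [F(49, 358),  F(500, 537), F(142, 537)],
        [F(49, 358),  F(142, 537), F(500, 537)]]

# entry-by-entry check of C^(3) = C^l(3) + C^h(3)
checks = [C_l3[i][j] + C_h3[i][j] == C_3[i][j]
          for i in range(3) for j in range(3)]
```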
If flavour coupling is neglected both at the production in the
two-flavour regime (corresponding to the approximation $C^{(2)}=I$)
and in the lightest RH neutrino wash-out in the three-flavour regime
(corresponding to the approximation $C^{(3)}=I$),
the final asymmetry is then given by \cite{aspects,SO10}
\begin{eqnarray}\label{finalasnoc}
N^{\rm f}_{B-L} & = & \sum_\alpha N_{\Delta_\alpha}^{\rm f} \nonumber \\
&=& \sum_{\delta=e,\mu} \left[p_\delta+ {P^0_{2\delta}\over P^0_{2\gamma}}\,\varepsilon_{2 \gamma}\,\kappa(K_{2\gamma})\right]\,
e^{-{3\pi\over 8}\,K_{1 \delta}} +
\varepsilon_{2 \tau}\,\kappa(K_{2 \tau})\,e^{-{3\pi\over 8}\,K_{1 \tau}} \, .
\end{eqnarray}
It is interesting that, even though $K_1\gg 1$,
there can be a particular flavour $\alpha$ with, at the same time,
$K_{1\alpha}\simeq 1 \ll K_1 $ and a sizeable
$\varepsilon_{2\alpha}= {\cal O}(10^{-5}-10^{-6})$.
In this case the final asymmetry is dominated by this particular
$\alpha$-flavour contribution, avoiding the lightest RH neutrino wash-out,
and can reproduce the observed asymmetry.
Therefore, thanks to flavour effects, one can have successful
leptogenesis even for $K_1\gg 1$, something otherwise impossible in
the unflavoured regime \cite{vives,aspects,SO10}.
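A numerical sketch of eq.~(\ref{finalasnoc}) illustrating this point (sample values and function names are ours; phantom terms set to zero): with $K_{1e}\simeq 1$ and the other $K_{1\alpha}\gg 1$, the electron term survives the lightest RH neutrino wash-out and dominates the final asymmetry.

```python
import math

def z_B(K):
    return 2.0 + 4.0 * K**0.13 * math.exp(-2.5 / K)

def kappa(K):
    return 2.0 / (K * z_B(K)) * (1.0 - math.exp(-0.5 * K * z_B(K)))

def NBL_final(eps2gam, eps2tau, K2, K1, p_e=0.0, p_mu=0.0):
    # eq. (finalasnoc) with C = I; the e and mu terms split the gamma
    # asymmetry according to P0_{2delta}/P0_{2gamma} = K2[delta]/K2gamma
    K2gam = K2['e'] + K2['mu']
    phantoms = {'e': p_e, 'mu': p_mu}
    out = 0.0
    for d in ('e', 'mu'):
        out += ((phantoms[d] + K2[d] / K2gam * eps2gam * kappa(K2gam))
                * math.exp(-3.0 * math.pi * K1[d] / 8.0))
    out += eps2tau * kappa(K2['tau']) * math.exp(-3.0 * math.pi * K1['tau'] / 8.0)
    return out
```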
Let us now comment on the phantom terms $p_{\delta}$ and
on the conditions for them to be dominant so that
a scenario of `phantom leptogenesis' is realized.
First of all, let us recall that we are assuming vanishing pre-existing
asymmetries. Under this assumption the phantom terms are
present only for a non-zero initial $N_2$ abundance,
while they vanish if a vanishing initial $N_2$ abundance is assumed.
A condition for phantom leptogenesis is then
\begin{equation}
|p_{\delta}|\gg
\left|{P^0_{2\delta}\over P^0_{2\gamma}}\,\varepsilon_{2 \gamma}\,\kappa(K_{2 \gamma})\right|
\;\; \mbox{\rm and} \;\; K_{1\delta}\lesssim 1 \, ,
\end{equation}
either for $\delta=e$ or for $\delta=\mu$ or for both. In this situation the final asymmetry
will be dominated by the part of the electron and muon asymmetries that escapes the
wash-out at the production thanks to the quantum coherence during the two-flavour regime.
A first obvious condition is $p_{\delta}\neq 0$. Another condition is to have
$K_{2\gamma}\gg 1$, since otherwise
the wash-out at the production would be absent anyway and the phantom terms
would not be crucial to avoid it.
Another necessary condition for the phantom leptogenesis scenario to hold
is that either $K_{1e}\lesssim 1$ or
$K_{1\mu}\lesssim 1$, since otherwise both the electron and the muon asymmetries,
after escaping the wash-out at the production, are later on erased
by the lightest RH neutrino wash-out processes. However, as we will see, this
condition is not necessary when the flavour coupling at the lightest RH neutrino wash-out stage
is also taken into account.
Conversely, a condition for `non-phantom leptogenesis' relies on one of the following possibilities:
either a vanishing initial $N_2$ abundance,
or $p_{\delta}\simeq 0$, or $K_{2\gamma}\ll 1$, or that both $K_{1e}\gg 1$
and $K_{1\mu}\gg 1$. Again, this last condition seems however not to be sufficient
to avoid the appearance of phantom terms in the expression of the final asymmetry
when the flavour coupling at the lightest RH neutrino wash-out stage
is also taken into account. Therefore, it should be noticed that
the effects of flavour coupling and of phantom terms cannot be easily disentangled.
Notice that a further condition for non-phantom leptogenesis is
$\exp[-3\,\pi\,K_{1e}/8]\simeq \exp[-3\,\pi\,K_{1\mu}/8]$, since in this
case the two terms continue to cancel each other even after
the lightest RH neutrino wash-out. In Appendix B we give a description of phantom leptogenesis
within a density matrix formalism \cite{preparation}, arriving at the same conclusions and results.
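As a compact summary of the above discussion, the conditions for phantom leptogenesis can be collected in a small numerical helper. This is an illustrative sketch, not code from the paper: the function name, the factor of $10$ standing in for ``$\gg$'', and all sample numbers are our own assumptions.

```python
def phantom_dominates(p_delta, P2d_over_P2g, eps_2g, kappa_K2g,
                      K1_delta, K2_gamma):
    """Rough check of the phantom-leptogenesis conditions of the text
    for one flavour delta (= e or mu). A factor 10 stands in for '>>'."""
    conventional = abs(P2d_over_P2g * eps_2g * kappa_K2g)
    return (p_delta != 0.0                    # p_delta must not vanish
            and abs(p_delta) > 10 * conventional
            and K2_gamma > 10                 # strong wash-out at production
            and K1_delta <= 1)                # escapes N_1 wash-out
```

Any configuration with a vanishing initial $N_2$ abundance, and hence $p_\delta=0$, returns `False`.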
In the following we will focus on the effects induced by flavour
coupling, also in transmitting the phantom terms from the electron and muon
flavours to the tauon flavour.
Let us now see how the eq.~(\ref{finalasnoc}) gets modified when
flavour coupling is taken into account (only) at the production.
In this case one has
\begin{eqnarray}\label{finalascoup2}
N^{\rm f}_{B-L} & = &
N_{\Delta_{e}}^{T\sim T_L}\, e^{-{3\pi\over 8}\,K_{1 e}}+
N_{\Delta_{\mu}}^{T\sim T_L}\, e^{-{3\pi\over 8}\,K_{1 \mu}}+
N_{\Delta_{\tau}}^{T\sim T_L} \,e^{-{3\pi\over 8}\,K_{1 \tau}} \, ,
\end{eqnarray}
where $N_{\Delta_{e}}^{T\sim T_L}$, $N_{\Delta_{\mu}}^{T\sim T_L}$ and
$N_{\Delta_{\tau}}^{T\sim T_L}$ are given by eqs.~(\ref{flas}) and (\ref{Demu}).
In the specific case when $P^0_{2\tau}\ll P^0_{2\gamma}$, the eqs.~(\ref{flas})
specialize into eqs.~(\ref{flasspec}) and (\ref{flasspec2}) and we can therefore write
\begin{eqnarray}\label{finalascoup3}
N^{\rm f}_{B-L} & = &
\left(p_e+{P^0_{2e}\over P^0_{2\gamma}}\,\left[\varepsilon_{2\gamma}\,\kappa(K_{2\gamma}) - C^{(2)}_{\gamma\tau}\,\varepsilon_{2\tau}\,\kappa(K_{2\tau})\right]\right)\,
e^{-{3\pi\over 8}\,K_{1 e}}+ \\ \nonumber
& & \left(p_\mu+{P^0_{2\mu}\over P^0_{2\gamma}}\,
\left[\varepsilon_{2\gamma}\,\kappa(K_{2\gamma}) - C^{(2)}_{\gamma\tau}\,\varepsilon_{2\tau}\,\kappa(K_{2\tau})\right]\right)
\, e^{-{3\pi\over 8}\,K_{1 \mu}}+ \\ \nonumber
& & \varepsilon_{2\tau}\, \kappa(K_{2\tau})\,e^{-{3\pi\over 8}\,K_{1 \tau}} \, .
\end{eqnarray}
Let us finally also examine the changes induced
by flavour coupling in the description of the lightest
RH neutrino wash-out stage in the three-flavour regime, removing
the approximation $C^{(3)}=I$. One can see from eqs.~(\ref{flkewA})
that the wash-out acts in a coupled way on the three-flavour
components of the asymmetry. An exact analytical solution can be obtained
applying again the same procedure as in the two flavour regime.
If we define
\begin{equation}
P_1^0 \equiv
\left(\begin{array}{ccc}
P^0_{1e}\,C_{ee}^{(3)} & P^0_{1e}\,C_{e\mu}^{(3)} & P^0_{1e}\,C_{e\tau}^{(3)} \\
P^0_{1\mu}\,C_{\mu e}^{(3)} & P^0_{1\mu}\,C_{\mu\mu}^{(3)} & P^0_{1\mu}\,C_{\mu \tau}^{(3)} \\
P^0_{1\tau}\,C_{\tau e}^{(3)} & P^0_{1\tau}\,C_{\tau\mu}^{(3)} & P^0_{1\tau}\,C_{\tau\tau}^{(3)}
\end{array}\right) \, ,
\end{equation}
the set of kinetic equations can be recast in a compact matrix form as
\begin{equation}
{d\vec{N}_{\Delta}\over dz_1} = - W_1\,P_1^0\, \vec{N}_{\Delta} \, ,
\end{equation}
where $\vec{N}_{\Delta}\equiv (N_{\Delta_e},N_{\Delta_\mu},N_{\Delta_\tau})$.
If we perform the change of variables
\begin{equation}\label{V}
\vec{N}_{\Delta''}= V\,\vec{N}_{\Delta} \, , \hspace{4mm} \mbox{\rm where}
\hspace{5mm}
V\equiv \left(\begin{array}{ccc}
V_{ e'' e} & V_{e''\mu} & V_{e''\tau} \\
V_{\mu'' e} & V_{\mu''\mu} & V_{\mu''\tau} \\
V_{\tau'' e} & V_{\tau''\mu} & V_{\tau''\tau}
\end{array}\right)
\end{equation}
is the matrix that diagonalizes $P^0_1$,
i.e. $V\,P^0_{1}\,V^{-1} = P^0_{1''} \equiv {\rm diag}(P^0_{1 e''},P^0_{1\mu''},P^0_{1\tau''}) $
and $\vec{N}_{\Delta''}\equiv (N_{\Delta_{e''}},N_{\Delta_{\mu''}},N_{\Delta_{\tau''}})$,
the kinetic equations for the flavoured asymmetries decouple and can be written as
\begin{equation}
{d\vec{N}_{\Delta''}\over dz_1} =
- W_1\,P^0_{1''}\, \vec{N}_{\Delta''} \, .
\end{equation}
The solution in the new variables is now given straightforwardly by
\begin{equation}
\vec{N}_{\Delta''}^{\rm f} =
\left(
N_{\Delta_{e''}}^{T\sim T_L}\,e^{-{3\,\pi\over 8}\,K_{1e''}}, \,
N_{\Delta_{\mu''}}^{T\sim T_L}\,e^{-{3\,\pi\over 8}\,K_{1 \mu''}}, \,
N_{\Delta_{\tau''}}^{T\sim T_L}\,e^{-{3\,\pi\over 8}\,K_{1 \tau''}}\right) \, ,
\end{equation}
where $K_{1\alpha''}\equiv P^0_{1\alpha''}\,K_1$. Applying the inverse transformation,
we finally obtain the flavoured asymmetries
\begin{equation}\label{Vinv}
\vec{N}_{\Delta}^{\rm f}= V^{-1}\,\vec{N}_{\Delta''}^{\rm f} \, ,
\hspace{4mm} \mbox{\rm with}
\hspace{5mm}
V^{-1} \equiv \left(\begin{array}{ccc}
V^{-1}_{e e''} & V^{-1}_{\mu e''} & V^{-1}_{\tau e''} \\
V^{-1}_{e \mu''} & V^{-1}_{\mu \mu''} & V^{-1}_{\tau \mu''} \\
V^{-1}_{e \tau''} & V^{-1}_{\mu \tau''} & V^{-1}_{\tau\tau''}
\end{array}\right) \, ,
\end{equation}
or explicitly for the single components
\begin{eqnarray} \nonumber
N^{\rm f}_{\Delta_{\alpha}} & = & \sum_{\alpha''}\,V^{-1}_{\alpha\alpha''}\,
\left[N_{\Delta_{\alpha''}}^{T\sim T_L}\,e^{-{3\pi\over 8}\,K_{1\alpha''}}\right] \\ \label{NfDa}
& = & \,
\sum_{\alpha''}\,V^{-1}_{\alpha\alpha''}\,\,e^{-{3\pi\over 8}\,K_{1\alpha''}}
\left[\sum_{\beta}\,V_{\alpha''\beta}\,N_{\Delta_{\beta}}^{T\sim T_L}\right] \, ,
\end{eqnarray}
where the $N_{\Delta_{\beta}}^{T\sim T_L}$'s are given by eqs.~(\ref{Dg}), (\ref{Dtau}) and (\ref{Demu}).
This equation is the general analytical solution and should be regarded
as the ``master equation'' of the paper.
It can be immediately checked
that taking $U=V=I$ one recovers the standard solution given by eq.~(\ref{finalasnoc}).
In the Appendix we write it out explicitly for illustrative purposes.
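The structure of the master equation can also be checked numerically. The following sketch (with illustrative values for $C^{(3)}$, the $P^0_{1\alpha}$ and the input asymmetries, none of which are taken from the paper) diagonalizes $P_1^0$, applies the wash-out exponentials in the primed basis and transforms back; setting $C^{(3)}=I$ recovers the uncoupled solution.

```python
import numpy as np

def final_asymmetries(C3, P1, K1, N_in):
    """Final flavoured asymmetries after the N_1 wash-out stage,
    following eq. (NfDa): diagonalize P_1^0 = diag(P1) @ C3, apply the
    wash-out exponentials in the primed basis, transform back."""
    M = P1[:, None] * C3                 # the matrix P_1^0 of the text
    w, V_inv = np.linalg.eig(M)          # columns of V_inv are V^{-1}
    V = np.linalg.inv(V_inv)
    return (V_inv @ (np.exp(-3 * np.pi / 8 * K1 * w) * (V @ N_in))).real

# illustrative inputs (arbitrary units, NOT taken from the paper)
C3 = np.array([[1.00, 0.05, 0.05],
               [0.05, 1.00, 0.05],
               [0.05, 0.05, 1.00]])      # flavour coupling matrix C^(3)
P1 = np.array([0.1, 0.4, 0.5])           # flavoured branching ratios P^0_{1 alpha}
K1 = 5.0                                 # total decay parameter K_1 > 1
N_in = np.array([2.0, -1.0, 5.0])        # N_{Delta_alpha} at T ~ T_L

N_coupled = final_asymmetries(C3, P1, K1, N_in)
# with C^(3) = I the standard uncoupled solution must be recovered
N_uncoupled = final_asymmetries(np.eye(3), P1, K1, N_in)
```

The basis-independent eigendecomposition makes the check insensitive to the ordering of the eigenvalues returned by the solver.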
\section{Examples for strong impact of flavour coupling}
The general solution of eq.~(\ref{NfDa}), with approximate analytical solutions for $U$ and $V$ plugged in,
is of course rather lengthy and its physical implications are difficult to see.
To make eq.~(\ref{NfDa}) more easily accessible we partly unpack it in the Appendix.
In order to better understand whether it can yield results
significantly different from those obtained by eq.~(\ref{finalasnoc}),
we will now specialize it to some interesting specific example
cases that highlight the possibility of strong deviations
from the case when flavour coupling is neglected, i.e., of values of $R$ (cf. eq.~(\ref{r})) significantly
different from unity. The scenario we will consider in the following, useful to illustrate the possibility of a large impact
of flavour coupling effects, will be referred to as the ``flavour-swap scenario''. Notice that in general
the phantom terms have to be taken into account and we have therefore included them. However, they can always be taken to vanish in the case
of an initial vanishing $N_2$ abundance.
\subsection{Simplified formulae in the ``Flavour-swap scenario''}
In the ``flavour-swap scenario'' the following situation is considered: Out of the two flavours $e$ and $\mu$, one has $K_{1\delta}\lesssim 1$ (where $\delta$ can be either $e$ or $\mu$). The other flavour will be denoted by $\beta$, so if $\delta = e$ then $\beta=\mu$ or vice versa. For $K_{1\beta}$ we will assume that $K_{1\beta} \sim K_{1\tau} \sim K_1 \gg 1$, such that asymmetries in the $\beta''$ as well as in the $\tau''$ flavours will be (almost) completely erased by the exponential $N_1$ washout. The only asymmetry relevant after $N_1$ washout will be the one in the flavour $\delta''$.
Obviously, this already simplifies eq.~(\ref{NfDa}) significantly.
Now one has, similarly to what happened before with the $K_{1\alpha'}$,
that $K_{1\delta''}= K_{1\delta}\,(1+{\cal O}(C^{(3)}_{\alpha\neq\beta})^3) \simeq K_{1\delta}$. At the same time
$K_{1\beta(\tau)''}= K_{1\beta(\tau)}\,(1+{\cal O}(C^{(3)}_{\alpha\neq\beta}))$ and therefore
$K_{1 \beta(\tau)''} \sim K_1 \gg 1$. This implies that in eq.~(\ref{NfDa})
only the terms with $\alpha''=\delta''$ survive, while the terms with $\alpha''=\beta'',\tau''$
undergo a strong wash-out from the lightest RH neutrino inverse processes and can be
neglected. Therefore, if we calculate the final flavoured asymmetries
and make the approximation $\exp(-3\pi\,K_{1\delta}/8)\simeq 1$, from the general
eq.~(\ref{appendix}) we can write
\begin{eqnarray}
N^{\rm f}_{\Delta_\beta} & \simeq & V^{-1}_{\beta\delta''}\,V_{\delta'' \beta}\,N_{\Delta_{\beta}}^{T\sim T_L}+
V^{-1}_{\beta \delta''}\,V_{\delta'' \delta}\,N_{\Delta_{\delta}}^{T\sim T_L}+
V^{-1}_{\beta \delta''}\,V_{\delta'' \tau}\,N_{\Delta_{\tau}}^{T\sim T_L} \, , \\
N^{\rm f}_{\Delta_\delta} & \simeq & V^{-1}_{\delta \delta''}\,V_{\delta'' \beta}\,N_{\Delta_{\beta}}^{T\sim T_L}+
V^{-1}_{\delta \delta''}\,V_{\delta'' \delta}\,N_{\Delta_{\delta}}^{T\sim T_L}+
V^{-1}_{\delta \delta''}\,V_{\delta'' \tau}\,N_{\Delta_{\tau}}^{T\sim T_L} \, , \\
N^{\rm f}_{\Delta_\tau} & \simeq & V^{-1}_{\tau \delta''}\,V_{\delta'' \beta}\,N_{\Delta_{\beta}}^{T\sim T_L}+
V^{-1}_{\tau \delta''}\,V_{\delta'' \delta}\,N_{\Delta_{\delta}}^{T\sim T_L}+
V^{-1}_{\tau \delta''}\,V_{\delta'' \tau}\,N_{\Delta_{\tau}}^{T\sim T_L} \, .
\end{eqnarray}
At the production, for the three $N_{\Delta_{\alpha}}^{T\sim T_L}$'s,
we assume the conditions that led to the
eqs.~(\ref{flasspec}), (\ref{flasspec2}) and (\ref{Demu}),
i.e. $P_{2\tau}^0 \ll P_{2\gamma}^0$ (notice again that
one could also analogously consider the opposite case $P_{2\tau}^0 \gg P_{2\gamma}^0$) and $K_2\gg 1$,
implying $\kappa(K_{2\gamma})\ll 1$.
The matrices $V$ and $V^{-1}$, whose entries are defined by the eqs.~(\ref{V}) and (\ref{Vinv}) respectively,
at the first order in the $C^{(3)}$ off-diagonal terms,
are given by
\begin{equation}
V \simeq \left(\begin{array}{ccc}
1 & C^{(3)}_{e\mu} & -C^{(3)}_{e\tau}\,{P^0_{1 e}\over P^0_{1\tau}} \\
-C^{(3)}_{\mu e} {P^0_{1\mu}\over P^0_{1 e}} & 1 & -C^{(3)}_{\mu \tau} {P^0_{1\mu}\over P^0_{1 \tau}} \\
C^{(3)}_{\tau e} & C^{(3)}_{\tau \mu} & 1
\end{array}\right)
\, ,
\,\,\, V^{-1} \simeq \left(\begin{array}{ccc}
1 & -C^{(3)}_{e\mu} & C^{(3)}_{e\tau}\,{P^0_{1 e}\over P^0_{1\tau}} \\
C^{(3)}_{\mu e} {P^0_{1\mu}\over P^0_{1 e}} & 1 & C^{(3)}_{\mu \tau} {P^0_{1\mu}\over P^0_{1 \tau}} \\
-C^{(3)}_{\tau e} & -C^{(3)}_{\tau \mu} & 1
\end{array}\right)
\, .
\end{equation}
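As a quick consistency check of these first-order expressions, one can verify numerically that $V\,V^{-1}=I$ holds up to terms of second order in the off-diagonal $C^{(3)}$ entries. The numbers below are illustrative: a common value $\epsilon$ is used for all off-diagonal entries.

```python
import numpy as np

eps = 0.03                          # common size of the C^(3) off-diagonals
Pe, Pm, Pt = 0.1, 0.4, 0.5          # P^0_{1e}, P^0_{1mu}, P^0_{1tau}

# first-order V and V^{-1} as given in the text, with C_{ab}^{(3)} -> eps
V = np.array([[1.0,          eps,  -eps * Pe / Pt],
              [-eps * Pm / Pe, 1.0, -eps * Pm / Pt],
              [eps,           eps,   1.0]])
V_inv = np.array([[1.0,          -eps,   eps * Pe / Pt],
                  [eps * Pm / Pe,  1.0,  eps * Pm / Pt],
                  [-eps,          -eps,  1.0]])

# the residual should be O(eps^2), not O(eps)
residual = np.max(np.abs(V @ V_inv - np.eye(3)))
```

The residual scales quadratically with $\epsilon$, confirming that the quoted matrices are mutually inverse at first order.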
Therefore, we find for the three $N_{\Delta_{\alpha}}^{\rm f}$'s
\begin{eqnarray}
N^{\rm f}_{\Delta_\beta} & \simeq & -C^{(3)}_{\beta\delta}\,C^{(3)}_{\delta \beta}\,{P^0_{1\delta}\over P^0_{1 \beta}}\,N_{\Delta_{\beta}}^{T\sim T_L}
-C^{(3)}_{\beta\delta}\,N_{\Delta_{\delta}}^{T\sim T_L}+
C^{(3)}_{\beta\delta}\,C^{(3)}_{\delta \tau} {P^0_{1\delta}\over P^0_{1 \tau}}\,N_{\Delta_{\tau}}^{T\sim T_L} \\ \nonumber
& \simeq & -C^{(3)}_{\beta\delta}\,
\left\{p_\delta+{P^0_{2\delta}\over P^0_{2\gamma}}\,\left[\varepsilon_{2\gamma}\kappa(K_{2\gamma})-\, C^{(2)}_{\gamma\tau}\,\varepsilon_{2\tau} \,\kappa(K_{2\tau})\right]\right\} \, , \\
N^{\rm f}_{\Delta_\delta} & \simeq & -C^{(3)}_{\delta \beta} {P^0_{1\delta}\over P^0_{1 \beta}}\,N_{\Delta_{\beta}}^{T\sim T_L}+
N_{\Delta_{\delta}}^{T\sim T_L}-
C^{(3)}_{\delta \tau}\, {P^0_{1\delta}\over P^0_{1 \tau}} \,N_{\Delta_{\tau}}^{T\sim T_L} \\ \nonumber
& \simeq &
p_\delta+{P^0_{2\delta}\over P^0_{2\gamma}}\,\left[\varepsilon_{2\gamma}\kappa(K_{2\gamma})-\, C^{(2)}_{\gamma\tau}\,\varepsilon_{2\tau} \,\kappa(K_{2\tau})\right]
- C^{(3)}_{\delta\tau}\,{P^0_{1\delta}\over P^0_{1 \tau}} \,\varepsilon_{2\tau}\,\kappa(K_{2\tau}) \, , \\
N^{\rm f}_{\Delta_\tau} & \simeq & C^{(3)}_{\tau \delta}\, C^{(3)}_{\delta \beta} {P^0_{1\delta}\over P^0_{1 \beta}}\,N_{\Delta_{\beta}}^{T\sim T_L}-
C^{(3)}_{\tau \delta}\,N_{\Delta_{\delta}}^{T\sim T_L}-
C^{(3)}_{\tau \delta}\,C^{(3)}_{\delta \tau} {P^0_{1\delta}\over P^0_{1 \tau}} \,N_{\Delta_{\tau}}^{T\sim T_L} \\ \nonumber
& \simeq &
-\,C^{(3)}_{\tau\delta}\,\left\{p_\delta+{P^0_{2\delta}\over P^0_{2\gamma}}\,\left[\varepsilon_{2\gamma}\kappa(K_{2\gamma})-\, C^{(2)}_{\gamma\tau}\,\varepsilon_{2\tau} \,\kappa(K_{2\tau})\right]\right\} \, .
\end{eqnarray}
The total final asymmetry is then given by the sum of the flavoured asymmetries.
It can be checked
that if flavour coupling is neglected ($C^{(2)}=C^{(3)}=I$), then one obtains the expected result
\begin{equation}
N_{B-L}^{\rm f}\simeq N_{\Delta_{\delta}}^{T\sim T_L}=p_\delta+{P^0_{2\delta}\over P^0_{2\gamma}}\,\varepsilon_{2\gamma}\kappa(K_{2\gamma}) \, ,
\end{equation}
corresponding to an asymmetry produced in the flavour $\delta$, i.e.\ in the only flavour that survives washout by the lightest RH neutrino.
However, taking into account flavour coupling, new terms arise and the
final asymmetry can be considerably enhanced. More explicitly, we have approximately
\begin{equation}\label{eq:NBLflavourswap}
N_{B-L}^{\rm f}\simeq \left(1-C^{(3)}_{\beta\delta}-C^{(3)}_{\tau\delta}\right)
\left\{p_\delta+{P^0_{2\delta}\over P^0_{2\gamma}}\,\left[\varepsilon_{2\gamma}\kappa(K_{2\gamma})-\, C^{(2)}_{\gamma\tau}\,\varepsilon_{2\tau} \,\kappa(K_{2\tau})\right]\right\}
- C^{(3)}_{\delta\tau}\,{P^0_{1\delta}\over P^0_{1 \tau}}\,\varepsilon_{2\tau}\,\kappa(K_{2\tau}) \, ,
\end{equation}
where ${P^0_{1\delta}/ P^0_{1 \tau}} ={K_{1\delta}/ K_{1 \tau}}$ and where we have neglected all terms that contain the product either of two or more off-diagonal entries of the coupling matrices, or of one or more off-diagonal entries with $\kappa(K_{2\gamma})\ll 1$.
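To get a feeling for the size of the effect, eq.~(\ref{eq:NBLflavourswap}) can be evaluated at a sample point and compared with the result for $C^{(2)}=C^{(3)}=I$. All numbers below are illustrative, and the power-law form used for the efficiency factor $\kappa(K)$ is a common strong-wash-out approximation, not the paper's exact expression.

```python
def kappa(K):
    # power-law approximation to the final efficiency factor, valid for
    # K >~ 1 (an assumption; the paper's exact kappa is not reproduced)
    return 0.5 / K**1.2

# illustrative flavour-swap inputs (not fitted to any model)
p_delta = 0.0                 # vanishing initial N_2 abundance -> no phantom term
P2d, P2g = 0.3, 0.9           # P^0_{2 delta}, P^0_{2 gamma}
P1d, P1t = 0.05, 0.5          # P^0_{1 delta}, P^0_{1 tau}
K2g, K2t = 40.0, 2.0          # K_{2 gamma} >> 1, K_{2 tau} ~ 1
eps2g, eps2t = 1e-6, 1e-6     # CP asymmetries
C2_gt, C3_bd, C3_td, C3_dt = 0.5, 0.3, 0.3, 0.3   # coupling-matrix entries

# eq. (eq:NBLflavourswap)
N_coupled = ((1 - C3_bd - C3_td)
             * (p_delta + P2d / P2g * (eps2g * kappa(K2g)
                                       - C2_gt * eps2t * kappa(K2t)))
             - C3_dt * (P1d / P1t) * eps2t * kappa(K2t))

# same point with flavour coupling switched off (C^(2) = C^(3) = I)
N_uncoupled = p_delta + P2d / P2g * eps2g * kappa(K2g)

R = abs(N_coupled / N_uncoupled)
```

With these numbers the flavour-coupled result exceeds the uncoupled one by roughly an order of magnitude.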
From eq.~(\ref{eq:NBLflavourswap}) one can readily see examples of strong enhancement of the
asymmetry due to flavour coupling, i.e. conditions under which $R\gg 1$. In particular,
if $\kappa(K_{2\gamma})\,\varepsilon_{2\gamma} \ll \kappa(K_{2\tau})\,\varepsilon_{2\tau}$,
then one of the two additional terms in eq.~(\ref{eq:NBLflavourswap}),
present only due to flavour coupling, can dominate the final asymmetry and $R\gg 1$ results.
We will now discuss these two cases in more detail and give examples of classes of models, consistent with the observed neutrino masses and mixings, where they are relevant. Let us first note a few general points.
First, since the flavoured asymmetries are upper bounded by \cite{flavoreffects2}
\begin{equation}\label{eps2aub}
|\varepsilon_{2\alpha}|\lesssim \varepsilon_{2\alpha}^{\rm max}\equiv
10^{-6}\,{M_2\over 10^{10}\,{\rm GeV}}\,\sqrt{P_{2\alpha}}\,{m_3\over m_{\rm atm}} \, ,
\end{equation}
the condition $\kappa(K_{2\gamma})\,\varepsilon_{2\gamma} \ll \kappa(K_{2\tau})\,\varepsilon_{2\tau}$ does not introduce
significant further restrictions beyond $K_{2\tau}\ll K_{2\gamma}$.
Second, from eq.~(\ref{eq:NBLflavourswap}) one can see that a reduction
of the final asymmetry from flavour coupling is also possible because of a
sign cancelation among the different terms (in addition to a small reduction
from the pre-factor $1-C^{(3)}_{\beta\delta}-C^{(3)}_{\tau\delta}$). However, a strong reduction occurs
only for a fine-tuned choice of the parameters. Note that this sign cancelation
introduced by flavour coupling changes the condition for the vanishing of the final asymmetry,
which is no longer simply given by $\varepsilon_{2\gamma}=0$.
It should indeed be noticed that now for $\varepsilon_{2\gamma}=0$
the asymmetry in the flavour $\gamma$ (or vice-versa the asymmetry in the
flavour $\tau$ if $\varepsilon_{2\tau}=0$ and $K_{2\tau}\gg K_{2\gamma}$) does not vanish in general.
This can be seen directly from the kinetic equations (cf. eq.~(\ref{flke})),
where if $\varepsilon_{2\gamma}=0$ an asymmetry generation can still be induced
by the wash-out term, which in this case actually behaves rather like a wash-in
term. If we focus on the Higgs asymmetry, we can say that this wash-in effect
is induced by a sort of thermal contact between the flavours $\gamma$ and $\tau$, in such a way that
the departure from equilibrium in the flavour $\tau$ induces a departure from equilibrium
in the flavour $\gamma$ as well.
\begin{itemize}
\item {\bf Case A: Enhancement from flavour coupling at $N_2$ decay}
Let us assume $\kappa(K_{2\gamma})\ll \kappa(K_{2\tau})$ and in addition ${P^0_{1\delta}/ P^0_{1 \tau}} ={K_{1\delta}/ K_{1 \tau}}\ll 1 $.
Then the first and third terms in eq.~(\ref{eq:NBLflavourswap}) dominate and we can estimate
\begin{equation}\label{eq:NBLflavourswap_caseA}
N_{B-L}^{\rm f}\simeq
p_\delta-C^{(2)}_{\gamma\tau}\,{P^0_{2\delta}\over P^0_{2\gamma}}\,\varepsilon_{2\tau} \,\kappa(K_{2\tau}) \, .
\end{equation}
In this case the final asymmetry is dominated by two terms that, for different reasons,
circumvent the strong wash-out of the $\gamma$ component. The first term in eq.~(\ref{eq:NBLflavourswap_caseA}) is the
phantom term $p_\delta$ that escapes the wash-out since it was `hidden' within the
coherent $\gamma$ lepton combination of an electron and a muon component. From this point of view
it should be noticed that since the lightest RH neutrino wash-out acts only on the
$\delta$ flavour but not on the $\beta$ flavour, it has the remarkable effect
to destroy the cancelation between the two phantom terms $p_\delta$ and $p_\beta$ having
as a net effect the creation of $B-L$ asymmetry, a completely new effect.
The second term in eq.~(\ref{eq:NBLflavourswap_caseA}) is what we have seen already:
because of flavour coupling at the production, the large asymmetry in the $\tau$ flavour
necessarily induces an asymmetry
in the $\gamma$ flavour as well. Notice that there is no model independent reason why one of
the two terms should dominate over the other.
In order to show more clearly the conditions for this case to be realized, we have plotted
in Fig.~\ref{fig:case A} the $R$ iso-contour lines (cf.\ eq.~(\ref{r})) in the plane $(K_{2\gamma},K_{2\tau})$.
\begin{figure}
\begin{center}
\psfig{file=CaseA1.eps,height=63mm,width=75mm}
\hspace{5mm}
\psfig{file=CaseA01.eps,height=63mm,width=75mm}
\caption{Contour plots of $R$ (cf.\ eq.~(\ref{r})) in the flavour-swap scenario for
$K_{1\tau},K_{1 e}\gg 1$, $K_{1\mu}\lesssim 1$, $K_{2e}=K_{2\mu}$.
The latter condition implies that the last term in the
eq.~(\ref{eq:NBLflavourswap}) is negligible. Left panel: $|\varepsilon_{2\mu}|=\varepsilon_{2\mu}^{\rm max}$;
right panel: $|\varepsilon_{2\mu}|=0.1\,\varepsilon_{2\mu}^{\rm max}$ (cf.\ eq.~(\ref{eps2aub})).
In both panels $\varepsilon_{2\tau}=\varepsilon_{2\tau}^{\rm max}$ and $\varepsilon_{2\mu}/\varepsilon_{2\tau}>1$.}
\label{fig:case A}
\end{center}
\end{figure}
We have fixed
$K_{1\mu}\lesssim 1$, $K_{1e},K_{1\tau}\gg 1$, so that only the muonic asymmetry survives the
lightest RH neutrino wash-out. We have also set
$K_{2\mu}/K_{2\gamma}=1/2 \gg K_{1\mu}/K_{1\tau}$, so that the last term in the eq.~(\ref{eq:NBLflavourswap})
can be neglected. Concerning the $C\!P$ asymmetries, in the left panel
we have set $\varepsilon_{2\gamma}=\varepsilon_{2\gamma}^{\rm max}$ and $\varepsilon_{2\tau}=\varepsilon_{2\tau}^{\rm max}$.
One can see that in this case the enhancement of the asymmetry becomes relevant when $K_{2\gamma}\gg K_{2\tau}$
but for $K_{2\gamma}\lesssim 100$ (a reasonable maximum value), it cannot be higher than about $R\simeq 2.5$.
Notice that, since we chose $\varepsilon_{2\gamma}/\varepsilon_{2\tau} > 1$, a reduction is also
possible due to a cancelation between the traditional term and the new term due to flavour coupling.
In the right panel we have set $\varepsilon_{2\gamma}=0.1\,\varepsilon_{2\gamma}^{\rm max}$,
and this time one can see that $R$ can be as large as one order of magnitude. This shows
that for $\varepsilon_{2\gamma}\rightarrow 0$ the enhancement can be arbitrarily large.
\item {\bf Case B: Enhancement from flavour coupling at $N_1$ washout}
Another interesting case is when $\kappa(K_{2\gamma})\ll \kappa(K_{2\tau})$ and in addition
$P^0_{2\delta}/P^0_{2\gamma} \ll P^0_{1\delta}/P^0_{1\tau} $.
In this case the first and fourth terms in eq.~(\ref{eq:NBLflavourswap}) dominate and we obtain approximately
\begin{equation}\label{eq:NBLflavourswap_caseB}
N_{B-L}^{\rm f}\simeq p_{\delta}- C^{(3)}_{\delta\tau}\,{P^0_{1\delta}\over P^0_{1 \tau}}\,\varepsilon_{2\tau}\,\kappa(K_{2\tau})
\, .
\end{equation}
\end{itemize}
We can see that again we have the phantom term avoiding the wash-out at the production
and a second term arising from the flavour coupling at the wash-out by $N_1$.
We note that this term is not even proportional to the
flavoured asymmetry $\varepsilon_{2\delta}$: it arises just because, thanks to
flavour coupling, the wash-out of the large tauonic asymmetry produced at $T\sim T_L$
has as a side effect a departure from thermal equilibrium of the processes
$N_1 \leftrightarrow l_e + \phi^{\dagger}, \bar{l}_e + \phi$. This can be understood
easily again in terms of the Higgs asymmetry that connects the dynamics in the two flavours.
It is quite amusing that thanks to flavour coupling an electron asymmetry is generated
even without explicit electronic $C\!P$ violation.
Also for this case B we have plotted,
in Fig.~\ref{fig:case B}, the $R$ iso-contour lines
(cf. eq.~(\ref{r})) in the plane $(K_{2\gamma},K_{2\tau})$.
\begin{figure}
\begin{center}
\psfig{file=CaseB1.eps,height=63mm,width=75mm}
\hspace{5mm}
\psfig{file=CaseB01.eps,height=63mm,width=75mm}
\caption{Contour plots of $R$ (cf.\ eq.~(\ref{r})) in the flavour-swap scenario for
$K_{1\tau},K_{1 \mu}\gg 1$, $K_{1 e}\lesssim 1$, $K_{2e}/K_{2\mu}\ll K_{1e}/K_{1\tau}$.
The last condition implies that the last term in the
eq.~(\ref{eq:NBLflavourswap}) dominates. Left panel: $\varepsilon_{2\mu}=\varepsilon_{2\mu}^{\rm max}$;
right panel: $\varepsilon_{2\mu}=0.1\,\varepsilon_{2\mu}^{\rm max}$ (cf.\ eq.~(\ref{eps2aub})).
In both panels $\varepsilon_{2\tau}=\varepsilon_{2\tau}^{\rm max}$ and $\varepsilon_{2\mu}/\varepsilon_{2\tau}>1$.}
\label{fig:case B}
\end{center}
\end{figure}
We have set $K_{1e}\lesssim 1$ while $K_{1\mu},K_{1\tau}\gg 1$, so that now only the electron
asymmetry survives the lightest RH neutrino wash-out.
Moreover this time we have set
$K_{2e}/K_{2\gamma} \ll K_{1e}/K_{1\tau}$ so that the last term
in the eq.~(\ref{eq:NBLflavourswap}) becomes dominant and the case B is realized.
For the $C\!P$ asymmetries, as before, in the left panel
we fixed $\varepsilon_{2\gamma}=\varepsilon_{2\gamma}^{\rm max}$
while in the right panel $\varepsilon_{2\gamma}=0.1\,\varepsilon_{2\gamma}^{\rm max}$ and in both cases $\varepsilon_{2\tau}=\varepsilon_{2\tau}^{\rm max}$.
Now the enhancement of the final asymmetry $R$ is $\gg 1$ in both cases, simply because
the traditional term is this time suppressed by $K_{2e}/K_{2\gamma} \ll 1$. This means that
after the decoherence of the $\gamma$ lepton quantum states, there is a negligible asymmetry in the electron flavour.
However, at the lightest RH neutrino wash-out, an electron asymmetry is generated thanks to
flavour coupling.
\subsection{Example for Case A within Heavy Sequential Dominance}
To find realistic examples where the two cases A and B with strong impact of flavour coupling are realized, we will now consider classes of models with so-called sequential dominance (SD) \cite{King:1998jw,King:1999mb,King:2002nf,King:2004} in the seesaw mechanism.
To illustrate case A, we may in particular consider a sub-class called heavy sequential dominance (HSD). To realize case A within HSD, in eq.~(\ref{eq:NBLflavourswap}) and eq.~(\ref{eq:NBLflavourswap_caseA}) we assign flavours $\delta = \mu$ and $\beta = e$.
To understand how heavy sequential dominance works, we begin by
writing the RH neutrino Majorana mass matrix $M_{\mathrm{RR}}$ in
a diagonal basis as
\begin{equation}
M_{\mathrm{RR}}=
\begin{pmatrix}
M_C & 0 & 0 \\
0 & M_B & 0 \\
0 & 0 & M_A%
\end{pmatrix},
\end{equation}
where we have ordered the columns
according to $M_{RR}=\mbox{diag}(M_1,M_2,M_3)$ where $M_1<M_2<M_3$.
In this basis we write the neutrino (Dirac) Yukawa matrix $\lambda_{\nu}$ in
terms of $(1,3)$ column vectors $C_i,$ $B_i,$ $A_i$ as
\begin{equation}
\lambda_{\nu }=
\begin{pmatrix}
C & B & A
\end{pmatrix},
\label{Yukawa}
\end{equation}
in the left-right convention for the Yukawa matrix.
The Dirac neutrino mass matrix is then given by $m_{\mathrm{LR}}^{\nu}=\lambda_{\nu}v_{\mathrm{
u}}$. The term for the light neutrino masses in the effective Lagrangian (after electroweak symmetry breaking), resulting from integrating out the massive right
handed neutrinos, is
\begin{equation}
\mathcal{L}^\nu_{eff} = \frac{(\nu_{i}^{T} A_{i})(A^{T}_{j} \nu_{j})v^2}{M_A}+\frac{(\nu_{i}^{T} B_{i})(B^{T}_{j} \nu_{j})v^2}{M_B}
+\frac{(\nu_{i}^{T} C_{i})(C^{T}_{j} \nu_{j})v^2}{M_C} \label{leff}
\end{equation}
where $\nu _{i}$ ($i=1,2,3$) are the left-handed neutrino fields.
Heavy sequential dominance (HSD) then corresponds to the third
term being negligible, the second term subdominant and the first term
dominant:
\begin{equation}\label{SDcond}
\frac{A_{i}A_{j}}{M_A} \gg
\frac{B_{i}B_{j}}{M_B} \gg
\frac{C_{i}C_{j}}{M_C} \, .
\end{equation}
In addition, we shall shortly see that small $\theta_{13}$
and almost maximal $\theta_{23}$ require that
\begin{equation}
|A_1|\ll |A_2|\approx |A_3|.
\label{SD2}
\end{equation}
We identify the dominant
RH neutrino and Yukawa couplings as $A$, the subdominant
ones as $B$, and the almost decoupled (subsubdominant) ones as $C$.
Working in the mass basis
of the charged leptons,
we obtain for the lepton mixing angles:
\begin{subequations}\label{anglesSD}\begin{eqnarray}
\label{Eq:t23}
\tan \theta_{23} &\approx& \frac{|A_2|}{|A_3|}\;, \\
\label{Eq:t12}
\tan \theta_{12} &\approx&
\frac{|B_1|}{c_{23}|B_2|\cos \tilde{\phi}_2 -
s_{23}|B_3|\sin \tilde{\phi}_3 } \;,\\
\label{Eq:t13}
\theta_{13} &\approx&
e^{i \tilde{\phi}_4}
\frac{|B_1| (A_2^*B_2 + A_3^*B_3) }{\left[|A_2|^2 + |A_3|^2\right]^{3/2} }
\frac{M_A}{M_B}
+\frac{e^{i \tilde{\phi}_5} |A_1|}
{\sqrt{|A_2|^2 + |A_3|^2}} ,
\end{eqnarray}\end{subequations}
where the phases do not need to concern us.
The neutrino masses are:
\begin{subequations}\label{massesSD}\begin{eqnarray}
\label{Eq:m3} m_3 &\approx& \frac{(|A_2|^2 + |A_3|^2)v^2}{M_A}\;, \\
\label{Eq:m2} m_2 &\approx& \frac{|B_1|^2 v^2}{s^2_{12} M_B}\;, \\
\label{Eq:m1}m_1 &\approx& {\cal O}(|C|^2 v^2/M_C) \;.
\end{eqnarray}\end{subequations}
Tri-bimaximal mixing corresponds to:
\begin{eqnarray}
|A_{1}| &=&0, \label{tribicondsd} \\
\text{\ }|A_{2}| &=&|A_{3}|, \label{tribicondse} \\
|B_{1}| &=&|B_{2}|=|B_{3}|, \label{tribicondsa} \\
A^{\dagger }B &=&0. \label{zero}
\end{eqnarray}
This is called constrained sequential dominance (CSD).
For $N_2$ leptogenesis, the flavour specific decay asymmetries are $\varepsilon_{2 \alpha}$ where the leading contribution comes from the heavier RH neutrino of mass $M_A=M_3$ in the loop which may be approximated via eq.~(\ref{eps2a}) as:
\begin{equation}
\varepsilon_{2 \alpha} \approx -\frac{3 }{16 \pi v^2} \frac{M_2}{M_3}\frac{1}{B^\dagger B}
\mathrm{Im}\left[ B_\alpha^* (B^\dagger A)A_\alpha \right].
\end{equation}
Clearly the asymmetry vanishes in the case of CSD due to eq.~(\ref{zero})
and so in the following we shall consider examples which violate CSD.
The mixing angles are given by the following estimates:
\begin{equation}\label{eq:angles}
\tan \theta_{23}\sim \frac{A_2}{A_3} \sim 1, \ \ \tan \theta_{12}\sim \frac{\sqrt{2}B_1}{B_2+B_3}\sim \frac{1}{\sqrt{2}},
\ \ \theta_{13}\sim \frac{A_1}{\sqrt{2}A_2} \sim \frac{r}{\sqrt{2}}.
\end{equation}
Suppose we parametrize the Yukawa couplings consistent with these mixing angles as:
\begin{equation}
A_2=A_3, \ \ A_1=r\,A_2, \ \ B_3=q\,B_2,\ \ B_1=\frac{1}{2} (1+q)\,B_2 \, ,
\end{equation}
where $r<1$ is related to $\theta_{13}$ and $\theta_{12}$ via eq.~(\ref{eq:angles}), then we find,
\begin{equation}
\varepsilon_{2 \mu} \approx -\frac{3 }{16 \pi v^2}\,M_2m_3, \ \ \varepsilon_{2 \tau}\approx q\, \varepsilon_{2 \mu},
\ \ \varepsilon_{2 e}\approx \frac{r}{2}\,\varepsilon_{2\mu}.
\end{equation}
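The quoted ratios can be checked directly from the $\varepsilon_{2\alpha}$ formula above by inserting the parametrized columns $A$ and $B$ with an arbitrary $C\!P$ phase on $A$ (needed for a non-zero asymmetry; the specific phase and overall scales below are illustrative, and the constant prefactor is dropped since it cancels in ratios). The exact $e/\mu$ ratio from this parametrization is $r(1+q)/2$, consistent with the quoted $\varepsilon_{2e}\approx (r/2)\,\varepsilon_{2\mu}$ up to the ${\cal O}(q)$ correction.

```python
import numpy as np

r, q = 0.2, 0.5                       # theta_13-related parameter and B_3/B_2
a, b = 1.0, 0.1                       # overall scales of the A and B columns
A = np.exp(0.7j) * a * np.array([r, 1.0, 1.0])   # CP phase on the A column
B = b * np.array([0.5 * (1 + q), 1.0, q])        # (B_1, B_2, B_3)

BdagA = np.conj(B) @ A
# epsilon_{2 alpha} ~ Im[B_alpha^* (B^dag A) A_alpha]; the common prefactor
# -3 M_2 / (16 pi v^2 B^dag B) is dropped since it cancels in ratios
eps = np.imag(np.conj(B) * BdagA * A)

ratio_tau_mu = eps[2] / eps[1]        # expected: q
ratio_e_mu = eps[0] / eps[1]          # expected: r(1+q)/2, ~ r/2 for small q
```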
The flavoured effective neutrino masses $\widetilde{m}_{2 \alpha}$, $\widetilde{m}_{1 \alpha}$ are given by:
\begin{equation}
\widetilde{m}_{2 \alpha} = \frac{|B_{\alpha}|^2 v^2}{M_B} \sim m_2 \, , \, \,
\widetilde{m}_{1 \alpha}= \frac{|C_{\alpha}|^2 v^2}{M_C}\sim m_1 \, .
\end{equation}
Neutrino oscillation experiments tell us that $r<1$ is small (here we shall assume $r\sim 0.2$ as
a specific example consistent with current experimental results) and we find
\begin{equation}
K_{2 \mu }={\widetilde{m}_{2 \mu} \over m_{\star}} \sim {m_2\over m_{\star}} \sim 10, \, \,
K_{2 e} \sim \frac{(1+q)^2}{4}\,K_{2\mu }, \, \, K_{2 \tau}\sim q^2\,K_{2 \mu } ,
\end{equation}
which allows strong washout for $K_{2 \gamma}$ ($\gamma = \mu + e$) with weak washout for $K_{2 \tau}$.
By assuming that $C_{1},C_2 \ll C_3$ we have,
\begin{equation}
K_{1 \tau}= {\widetilde{m}_{1 \tau} \over m_{\star}} \sim 10\,\frac{m_1}{m_2}, \, \, K_{1 e}, \, K_{1 \mu} \ll K_{1 \tau}
\end{equation}
which allows for strong washout for $K_{1 \tau}$
(at least if $m_1\sim m_2$) with weak washouts for $K_{1 e}, K_{1 \mu}$.
Thus, without flavour coupling and phantom terms,
we would have strong (exponential) $N_1$ washout for $K_{1 \tau}\sim 10$,
with negligible $N_1$ washout for $K_{1e}, K_{1 \mu}<1$.
Since $\varepsilon_{2 e}\approx \frac{r}{2} \varepsilon_{2 \mu} \sim 0.1 \varepsilon_{2 \mu}$ we may
neglect $\varepsilon_{2 e}$ and then we find
that the term proportional to $\varepsilon_{2 \gamma}\,\kappa (K_{2 \gamma})$ is strongly
washed out since $K_{2 \gamma}\sim 10$. Therefore, without flavour coupling and
phantom effects, $N_{B-L}^{\rm f}$ would tend to be small in this scenario.
On the other hand, allowing for the effects of flavour redistribution and including the phantom
term, we find (cf. eq.~(\ref{eq:NBLflavourswap_caseA})),
\begin{equation}
N_{B-L}^{\rm f}\sim p_\mu + {K_{2 \mu}\over K_{2 \gamma}}\,\varepsilon_{2 \gamma}\,\kappa(K_{2\gamma})
- {K_{2 \mu}\over K_{2 \gamma}}
C^{(2)}_{\gamma \tau}\,\varepsilon_{2 \tau}\,\kappa(K_{2 \tau}) \, .
\end{equation}
Since $K_{2 \mu}/K_{2 \gamma} \simeq 4/(5+2\,q)$ and
$p_{\mu}\simeq [(1+2q)/(5+2q)]\varepsilon_{2 \mu}\,N_{N_2}^{\rm in}$, then we have
\begin{equation}
N_{B-L}^{\rm f}\sim {1+2q\over 5+2q}\,\varepsilon_{2 \mu}\,N_{N_2}^{\rm in}+ {4\over 4+(1+q)^2}\,
\left[ \varepsilon_{2 \gamma}\,\kappa(K_{2\gamma}) -
\,C^{(2)}_{\gamma \tau} \varepsilon_{2 \tau}\, \kappa(K_{2 \tau})\right] \, ,
\end{equation}
where $K_{2 \tau}\sim q^2\,K_{2 \mu }\sim 10\,q^2 $ leads to only weak wash out with
$\varepsilon_{2 \mu}\sim -\frac{3 }{16 \pi v^2}M_2\,m_3$ being large. Notice that
there is a partial cancelation of the two terms,
but this just depends on the particular choice of values for $r$ and $q$ and on $N_{N_2}^{\rm in}$.
This is an example, consistent with neutrino data, where $N_{B-L}^{\rm f}$ would be very small without flavour coupling and the phantom term, but becomes quite large when the two effects, which both
produce a large contribution, are included. If, for definiteness, we assume $N_{N_2}^{\rm in}=0$
and $q\sim 0.5$ such that $K_{2\tau}\sim 1$ corresponding to $\kappa(K_{2\tau})\simeq 0.3$,
then we find for $R$ (cf. eq.~(\ref{r}))
\begin{equation}\label{Rq}
R \simeq \left|1-C^{(2)}_{\gamma \tau}\,{\kappa(K_{2 \tau})\over \kappa(K_{2\gamma})}\,
{\varepsilon_{2\,\tau} \over \varepsilon_{2 \gamma}}\right| \, .
\end{equation}
In Fig.~\ref{fig:R} we plot $R$ as a function of $q=\varepsilon_{2\tau}/\varepsilon_{2\mu}$. One can see
that this example realizes a specific case of the general situation shown in the left panel of
Fig.~\ref{fig:case A}. In particular, there can be a relevant suppression for
positive $q$ and up to a $50\%$ enhancement for negative $q$.
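The estimate of eq.~(\ref{Rq}) is easy to reproduce numerically. The sketch below uses the model relations $K_{2\gamma}=[4+(1+q)^2]/4\,K_{2\mu}$ and $K_{2\tau}=q^2 K_{2\mu}$ with $K_{2\mu}=10$; the value of $C^{(2)}_{\gamma\tau}$ and the power-law approximation for $\kappa(K)$ are our own illustrative assumptions, so the detailed $q$-dependence will differ somewhat from Fig.~\ref{fig:R}.

```python
def kappa(K):
    # common power-law approximation to the efficiency factor for K >~ 1
    # (an assumption; not the paper's exact kappa)
    return 0.5 / K**1.2

K2mu = 10.0
C2_gt = 0.5          # illustrative value of C^(2)_{gamma tau}

def R_of_q(q):
    """Eq. (Rq) with eps_{2 tau}/eps_{2 gamma} ~ q (electron asymmetry
    neglected), for q != 0."""
    K2gam = (4 + (1 + q)**2) / 4 * K2mu
    K2tau = q**2 * K2mu
    return abs(1 - C2_gt * kappa(K2tau) / kappa(K2gam) * q)
```

Negative $q$ makes the two terms of eq.~(\ref{Rq}) add up, while positive $q$ drives a cancelation (and, for a large enough coupling term, a sign flip of the asymmetry).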
\begin{figure}
\begin{center}
\psfig{file=R.eps,height=63mm,width=75mm}
\caption{Plot of $R$ as a function of $q$ as from the eq.~(\ref{Rq}).}
\label{fig:R}
\end{center}
\end{figure}
On the other hand, in the case of an initial thermal abundance, one can easily
verify that the presence of the phantom term can yield an enhancement of up to three orders of magnitude.
\subsection{Example for Case B within Light Sequential Dominance}
To give an example for case B (i.e.\ an example where $K_{1 e} \ll K_{1 \mu}, K_{1 \tau}$ while $\varepsilon_{2 \tau} \gg \varepsilon_{2 \mu}, \varepsilon_{2 e}$ and $K_{2 e}\ll K_{2 \gamma}$), we may consider another class of sequential dominance, namely light sequential dominance (LSD). Now, in eq.~(\ref{eq:NBLflavourswap}) and eq.~(\ref{eq:NBLflavourswap_caseB}) we have to replace $\delta = e$ and $\beta = \mu$.
In the example of LSD we will consider, using the same notation for the dominant, subdominant and subsubdominant RH neutrinos and corresponding couplings, we have:
\begin{equation}
M_{\mathrm{RR}}=
\begin{pmatrix}
M_A & 0 & 0 \\
0 & M_C & 0 \\
0 & 0 & M_B%
\end{pmatrix}.
\end{equation}
The lightest RH neutrino with mass $M_A$ dominates the seesaw mechanism.
We have again ordered the columns according to $M_{RR}=\mbox{diag}(M_1,M_2,M_3)$ where $M_1<M_2<M_3$.
For the neutrino (Dirac) Yukawa matrix we use the notation
\begin{equation}
\lambda_{\nu }=
\begin{pmatrix}
A_1 & C_1 & B_1 \\
A_2 & C_2 & B_2 \\
A_3 & C_3 & B_3
\end{pmatrix}.
\label{YukawaLSD}
\end{equation}
More specifically, let us now consider, within LSD, a variant of CSD called partially constrained sequential dominance (PCSD) \cite{King:2009qh} where $|A_2| = |A_3| = a$ and $|B_1|=|B_2|=|B_3|=b$, but $A_1 \not= 0$. In addition, we may assume $C = (C_1,C_2,C_3)$ with $C_1 = 0$ and $C_2 / C_3 = \zeta \ll 1$ as a specific example. Under these conditions, and using $A_1 = r A_2 = \sqrt{2} \theta_{13} A_2$ defined in the previous section,
we can write the neutrino Yukawa matrix as
\begin{equation}
\lambda_{\nu }=
\begin{pmatrix}
\sqrt{2} \theta_{13} a & 0 & b \\
a & \zeta c & b\\
-a & c & b
\end{pmatrix}.
\label{YukawaPCSDinLSD}
\end{equation}
The flavoured effective neutrino masses
$\widetilde m_{2\alpha}$, $\widetilde m_{1\alpha}$ in this specific LSD scenario are given by:
\begin{equation}
\widetilde m_{2\alpha} = \frac{|C_{\alpha}|^2 v^2}{M_C} \sim m_2, \ \
\widetilde m_{1\alpha}= \frac{|A_{\alpha}|^2 v^2}{M_A}\;.
\end{equation}
For $\widetilde m_{1 e}$, $\widetilde m_{1 \mu} $ and $\widetilde m_{1 \tau}$ we obtain explicitly
\begin{equation}
\widetilde m_{1 e}= \frac{|\sqrt{2}\, \theta_{13}\, a|^2 v^2}{M_1} = m_3\, \theta_{13}^2, \ \
\widetilde m_{1 \mu} = \widetilde m_{1 \tau} = \frac{|a|^2 v^2}{M_1} = \frac{m_3}{2} \;.
\end{equation}
The parameters $K_{i\alpha}$ are related to the $\widetilde m_{i \alpha}$'s
simply by $K_{i\alpha}=\widetilde m_{i \alpha}/m^*$.
Since we know from neutrino oscillation experiments that the leptonic mixing angle $\theta_{13}$ is small
(at least $< 10^\circ$) we have that $K_{1 e} \ll K_{1 \mu} = K_{1 \tau}$, i.e.\
\begin{equation}
K_{1 \mu} = K_{1 \tau} \sim {m_3\over m^*} \sim 50
\end{equation}
and
\begin{equation}
\frac{K_{1 e}}{K_{1 \mu}} = \frac{K_{1 e}}{K_{1 \tau}} = (\sqrt{2} \theta_{13})^2 \;.
\end{equation}
Consequently, the asymmetries in the $\tau$ and in the $\mu$ flavours will be almost completely washed out by the $N_1$ washout related to $K_{1 \tau}$ and $K_{1 \mu}$. In the $e$-flavour we have weak $N_1$-washout.
Furthermore, using $\frac{|C_{\alpha}|^2 v^2}{M_C} \sim m_1$, we obtain at the $N_2$ decay stage
\begin{equation}
K_{2 \tau} \sim {m_1 \over m^*} , \ \ K_{2 \mu} \sim \zeta\, {m_1 \over m^*} \ll K_{2 \tau}, \ \mbox{and} \, \,
K_{2 e} = 0 \:,
\end{equation}
which implies
\begin{equation}
K_{2 \gamma} = K_{2 \mu}+ K_{2 e} \ll K_{2 \tau} \:.
\end{equation}
The $N_2$ decay asymmetries, ignoring the contribution with $N_1$ in the loop, which is very small in the considered case $M_1 \ll M_2$, are given via eq.~(\ref{eps2a}) by
\begin{equation}
\varepsilon_{2 \alpha} \approx -\frac{3 }{16 \pi v^2} \frac{M_2}{M_3}\frac{1}{B^\dagger B}
\mathrm{Im}\left[ B_\alpha^* (B^\dagger C)C_\alpha \right].
\end{equation}
Using $B$ and $C$ as specified above eq.~(\ref{YukawaPCSDinLSD}) and $m_1 \sim \frac{|C_{\alpha}|^2 v^2}{M_C}$, we obtain for the decay asymmetries $\varepsilon_{2 \alpha}$:
\begin{equation}
\varepsilon_{2\tau} \sim -\frac{3 }{16 \pi v^2}\,M_2\, m_2, \ \
\varepsilon_{2\mu} = \zeta \varepsilon_{2\tau} \ll \varepsilon_{2\tau}, \ \
\varepsilon_{2e} = 0\;.
\end{equation}
Considering eq.~(\ref{eq:NBLflavourswap}) and noting that $K_{2 e} = 0$ together with $\varepsilon_{2 e} = 0$ implies $p_\delta = 0$, we see that all terms apart from the one proportional to $C^{(3)}_{e\tau}$ are strongly suppressed, provided that $\zeta$ is sufficiently small ($\zeta \ll r$). In other words, the considered LSD scenario provides an example for case B: a final asymmetry
dominated by flavour coupling effects at the $N_1$ washout stage, as in eq.~(\ref{eq:NBLflavourswap_caseB}). Explicitly, we obtain for the final asymmetry
\begin{equation}
N_{B-L}^{\rm f} \sim
- C^{(3)}_{e\tau}\,{K_{1 e}\over K_{1 \tau}}\,\varepsilon_{2\tau}\,\kappa(K_{2 \tau})
\sim
\frac{3 C^{(3)}_{e\tau}}{16 \pi } \frac{M_2\, m_2}{v^2 } (\sqrt{2} \theta_{13})^2 \kappa\left(\frac{m_1}{m^*} \right) .
\end{equation}
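As a rough numerical illustration of this expression, one can evaluate it for representative inputs; all values used below (the mass $M_2$, the coupling $C^{(3)}_{e\tau}$, $\theta_{13}$, and the efficiency factor) are placeholder assumptions, not values derived in the text.

```python
import math

def nbl_final(M2_GeV, m2_eV, theta13_deg, C3_etau, kappa, v_GeV=174.0):
    """Illustrative evaluation of
    N_{B-L}^f ~ (3 C^(3)_{e tau}/(16 pi)) (M2 m2/v^2) (sqrt(2) theta13)^2 kappa(m1/m*).
    All inputs are placeholder assumptions; m2 is converted from eV to GeV."""
    theta13 = math.radians(theta13_deg)
    m2_GeV = m2_eV * 1.0e-9
    return (3.0 * C3_etau / (16.0 * math.pi)) * (M2_GeV * m2_GeV / v_GeV ** 2) \
        * (math.sqrt(2.0) * theta13) ** 2 * kappa
```

The resulting asymmetry scales linearly in $M_2$ and quadratically in $\theta_{13}$, as the formula makes explicit.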
Here one can see that
\begin{equation}
R \simeq 1+0.01\,\zeta^{-1}\,\left({\theta_{13}\over 10^\circ}\right)^2 \,
{\kappa(m_1/m^{*})\over 0.3} \, .
\end{equation}
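The scaling of this expression with $\zeta$ and $\theta_{13}$ can be explored directly; the sketch below simply implements the formula above, with the $0.01$ prefactor and the normalization of $\kappa$ to $0.3$ taken from it.

```python
def r_enhancement(zeta, theta13_deg=10.0, kappa_over_03=1.0):
    """R ~ 1 + 0.01 * zeta^{-1} * (theta13/10 deg)^2 * [kappa(m1/m*)/0.3]."""
    return 1.0 + 0.01 / zeta * (theta13_deg / 10.0) ** 2 * kappa_over_03
```

For instance, $\zeta=10^{-2}$ gives $R\simeq 2$, while $\zeta=10^{-4}$ gives $R\simeq 101$, illustrating the huge enhancement for $\zeta\to 0$ discussed next.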
This result shows, quite interestingly, that if $\theta_{13}\neq 0$ and $m_1\gtrsim m^{*}$,
one can obtain a huge enhancement for $\zeta\rightarrow 0$, indicating that,
once flavour coupling is accounted for, one can have a sizable asymmetry in a situation where
one would otherwise obtain essentially zero asymmetry. This happens because part of the
tauon asymmetry, thanks to flavour coupling at the lightest RH neutrino wash-out stage,
escapes the wash-out from the lightest RH neutrino.
\section{Conclusions}
We have discussed various new flavour dependent effects in the $N_2$-dominated scenario of leptogenesis
and have shown that these effects are important in obtaining a reliable expression for the final asymmetry.
In particular we have emphasized the importance of the off-diagonal entries of the flavour coupling matrix that
connects the total flavour asymmetries, distributed in different particle species,
to the lepton and Higgs doublet asymmetries. We have derived analytical formulae for the final asymmetry
including the flavour coupling at the $N_2$-decay stage, where effectively two flavours are active,
as well as at the stage of washout by the lightest
RH neutrino $N_1$ where all three flavours are distinguished.
The interplay between the production
stage and the wash-out stage can then result in a significant enhancement of the
final asymmetry.
We have also described a completely new effect,
``phantom leptogenesis'', where the lightest RH neutrino wash-out is actually able to create
a $B-L$ asymmetry rather than destroying it as usually believed. This is possible because
the individual wash-out on each flavoured asymmetry can erase cancellations among the
electron and muon phantom terms and therefore lead to a net increase of the total $B-L$ asymmetry.
In this way the wash-out at the production is basically fully circumvented for part of the
produced electron and muon asymmetries. We also noticed however that the phantom
terms also strongly depend on the specific initial conditions since they are proportional
to the initial $N_2$-abundance and therefore, in particular, they vanish for initial zero $N_2$-abundance.
The changes induced by these new effects are encoded in the general ``master formula''
eq.~(\ref{NfDa}) for the final asymmetry that we derived from the Boltzmann equations
without approximations.
Based on this equation we have identified a sufficiently generic scenario, the ``flavour swap scenario'',
where we proved that a strong enhancement of the final asymmetry due to flavour
coupling and phantom terms is clearly possible. The conditions for the flavour swap scenario
correspond to having a one-flavour-dominated asymmetry at the production, in the two-flavour regime,
and a wash-out from the lightest RH neutrinos that swaps the dominance from one flavour to the other.
Flavour coupling can strongly modify the flavour asymmetry that is subdominant at the production
inducing two new contributions, one generated at the production and one at the lightest RH neutrino
wash-out. Then, in the flavour swap scenario, this translates into a strong modification of the final asymmetry
after the lightest RH neutrino wash-out.
It is quite interesting that, because of flavour coupling, an asymmetry is
actually generated by the wash-out terms that therefore in this case act more
like wash-in terms, transmitting a departure from thermal equilibrium from
one flavour to the other. In the figures we have shown how, once the flavour swap scenario
is realized, relevant modifications of the final asymmetry are generically induced by flavour coupling.
Depending on the values of the involved parameters,
these range from ${\cal O}(1)$ factor changes (either a reduction or an enhancement) to
an orders of magnitude enhancement.
We have illustrated these effects for two models which describe
realistic neutrino masses and mixing based on sequential dominance.
In conclusion, the off-diagonal flavour
couplings as well as phantom terms can have a significant impact on the baryon asymmetry produced
by $N_2$-dominated leptogenesis and thus have to be included in a reliable analysis. We have derived exact
analytic (and also compact approximate) results that allow this to be achieved. The results in this paper open up
new possibilities for successful $N_2$-dominated leptogenesis to explain the baryon asymmetry of the universe.
\subsection*{Acknowledgments}
S.A.\ acknowledges partial support from the DFG cluster of excellence ``Origin and Structure
of the Universe''. P.D.B. acknowledges financial support from the NExT Institute and SEPnet.
S.F.K.\ was partially supported by the following grants:
STFC Rolling Grant ST/G000557/1 and a Royal Society Leverhulme Trust Senior Research
Fellowship. P.D.B wishes to thank Antonio Riotto and Steve Blanchet for useful discussions.
\section*{Appendix A}
\section{Introduction}
Internet of Everything (IoE) is one of the major applications of future 6G wireless communication networks \cite{matthaiou2021road}. The fact that many IoE devices connected to the network are either battery-powered or battery-less \cite{hu2020energy} gives rise to the need to energize them in a simple and efficient manner. Radio frequency (RF) \ac{wpt} is regarded as a promising technology for charging IoE devices, by utilizing RF signals to wirelessly and simultaneously power multiple devices. Compared with the near-field reactive-based \ac{wpt} techniques, such as inductive coupling and magnetic resonance coupling which require the charged device to be very close to the energy source, RF-based \ac{wpt} is capable of charging devices in a more flexible way over longer distances. Hence, RF-based \ac{wpt} presents many potential applications for supporting and prolonging the operation of IoE devices in in-home setups as well as in industrial and commercial settings \cite{CosMas:17}.
To date, RF \ac{wpt} is mainly studied for charging devices residing in the far-field \cite{zeng2017communications}. In such cases, given the antennas' size, the operational distance between the energizing transmitter and the receivers is larger than the Fraunhofer distance, and thus the radiating wavefront obeys the conventional plane wave model. In such conditions, the transmitter can only direct its energy towards a given angle via beamsteering techniques, resulting in low efficiency and notable energy pollution, i.e., energy radiated at undesired locations. Nonetheless, future wireless 6G systems are expected to support an ecosystem with IoE devices at mmWave bands \cite{saad2019vision} using massive antenna arrays, such as those realized using \acp{dma}, made of configurable radiating metamaterial elements \cite{Yoo2018TCOM, shlezinger2019dynamic, Huang2020holographic, shlezinger2020dynamic,Liaskos_Visionary_2018}. In this case, devices located in distances ranging from a few centimeters to several tens of meters reside in the {\em radiating near-field} region \cite{guidi2019radio,guerra2021near}. Unlike the far-field case, where the EM field is a plane wave, in the radiating near-field region, the EM field is a spherical wavefront. In such settings, transmitters can generate focused beams \cite{nepa2017near}, which were shown to mitigate interference in multi-user communications \cite{zhang2021beam}, and it was recently envisioned that this capability can facilitate efficient \ac{wpt} with minimal energy pollution \cite{zhang2021near}. This motivates the exploration of the ability to achieve energy focusing using emerging antenna architectures, such as \acp{dma}.
In this work we study radiating near-field \ac{wpt} when the energy transmitter uses a \ac{dma}, quantifying its capability to charge multiple remote devices with minimal energy pollution by forming focused energy beams.
We first formulate a mathematical model for DMA-based near-field multi-user \ac{wpt} systems, incorporating both the feasible processing of DMAs as well as the propagation of the transmitted EM waves in near-field wireless channels.
Then, we jointly optimize the digital precoding vector and the DMA weights for maximizing the weighted sum-harvested energy when working in the radiating near-field, while accounting for the specific Lorentzian-form response of metamaterial elements.
To design the radiating near-field transmission pattern based on the weighted sum-harvested energy maximization objective, we propose an alternating optimization algorithm to deal with the corresponding non-convex optimization problem.
In particular, we provide a closed-form optimal digital precoding solution for a fixed DMA configuration. Then, we recast the \ac{dma} elements design problem into a Riemannian manifold optimization problem, which we efficiently solve using the Riemannian conjugate gradient approach.
Simulation results show that our proposed design concentrates the transmissions to the desired focal points, illustrating its energy focusing capability. We also show that by exploiting the beam focusing capabilities of DMAs, one can intelligently and efficiently charge multiple users according to their priority/requirements with minimal energy pollution.
To the best of our knowledge, this work is the first to study beam focusing for multi-user \ac{wpt}, facilitating simultaneous power charging of multiple energy receivers.
The rest of this paper is organized as follows: Section \ref{sec:Model} models DMA-based radiating near-field \ac{wpt} systems, and formulates the sum-harvested power maximization problem. Section \ref{sec:Solution} presents an efficient algorithm for tuning the DMA weights, while Section \ref{sec:Sims} provides numerical results.
Finally, Section \ref{sec:Conclusions} concludes the paper.
We use boldface lower-case and upper-case letters for vectors and matrices, respectively.
The $\ell_2$ norm, vectorization, transpose, conjugate, and Hermitian transpose, are denoted as $\| \cdot \|$, ${\rm vec}(\cdot)$, $(\cdot)^T$, $(\cdot)^{\dag}$, and $(\cdot)^H$, respectively, and
$\mathbb{C}$ is the set of complex numbers.
\section{System Model}
\label{sec:Model}
In this section, we characterize the mathematical model for DMA-based radiating near-field \ac{wpt}. We begin by introducing the DMA transmission model in Subsection \ref{sub:DMA}. Then, we present the near-field wireless channel model in Subsection \ref{sub:model}, and formulate the harvested power maximization problem in Subsection~\ref{sub:problem}.
\vspace{-0.1cm}
\subsection{Dynamic Metasurface Antennas} \label{sub:DMA}
\ac{dma} is an emerging technology for realizing large scale antenna arrays using reconfigurable metamaterials, whose physical properties such as permittivity and permeability are dynamically adjustable \cite{shlezinger2020dynamic}. These antenna architectures are typically comprised of multiple microstrips, each containing multiple metamaterial elements. The frequency response of each element is independently adjustable by varying its local dielectric properties \cite{Sleasman-2016JAWPL}. For DMA-based transmitters, each microstrip is fed by an RF chain, and the input signal is radiated by all the elements within the same microstrip \cite{wang2019dynamic}.
To model the transmission procedure, consider a DMA with $N_d$ microstrips of $N_e$ elements each, i.e., the total number of tunable metamaterial elements is $N\triangleq N_d \cdot N_e$. Letting ${\bf z}_f \in {\mathbb{C}}^{N_d \times 1}$ denote the input signals to the microstrips, the radiated signal, denoted by ${\mathbf{r}}$, can be written as %
\begin{equation} \label{eq: vector_representation}
{\mathbf{r}}= {\mathbf{H Q}}\, {\mathbf{z}}_f.
\end{equation}
Here, $\mathbf{Q}\in {\mathbb{C}}^{N \times N_d}$ is the configurable DMAs weights, whose entries are
\begin{equation} \label{eq: weighting_matrix}
{\mathbf{Q}}_{(i-1) N_{e}+l, n}=\left\{\begin{array}{ll}
q_{i, l} & i=n \\
0 & i \neq n \, ,
\end{array}\right.
\end{equation}
where $q_{i,l}$ denotes the frequency response of the $l$-th metamaterial element of $i$-th microstrip. These responses satisfy the Lorentzian form \cite{DSmith-2017PRA, smith2017analysis}, approximated as
\begin{equation}\label{eqn:FreqSel}
q_{i,l} \in \mathcal{Q}\triangleq \left\{\frac{j+e^{j \phi}}{2}| \phi \in [0,2\pi]\right\}, \qquad \forall i,l.
\end{equation}
In addition, $\mathbf{H}$ in \eqref{eq: vector_representation} is a $N \times N$ diagonal matrix with entries
${\mathbf{H}}_{((i-1)N_e+l,(i-1)N_e+l)}=h_{i,l}$,
where $h_{i,l}$ denotes the signal propagation effect of the $l$-th metamaterial element of $i$-th microstrip (inside the microstrip). These coefficients can be written as
$h_{i,l}=e^{-\rho_{i,l}(\alpha_{\rm c}+ j\beta_{\rm c}) }$,
where $\alpha_{\rm c}$ and $\beta_{\rm c}$ are two constants depending on the characteristic of DMA, and $\rho_{i,l}$ denotes the location of the $l$-th element in the $i$-th microstrip.
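For concreteness, the block structure of the weight matrix in \eqref{eq: weighting_matrix}, with each entry drawn from the Lorentzian set \eqref{eqn:FreqSel}, and the diagonal propagation matrix $\mathbf{H}$ can be sketched as follows; the waveguide constants $\alpha_{\rm c},\beta_{\rm c}$ used as defaults are placeholder values, not ones taken from a specific DMA design.

```python
import numpy as np

def dma_weight_matrix(phi):
    """Q of Eq. (2): entry (i*Ne + l, i) holds q_{i,l} = (j + e^{j phi_{i,l}})/2
    (Eq. (3)); all other entries are zero. phi has shape (Nd, Ne)."""
    Nd, Ne = phi.shape
    q = (1j + np.exp(1j * phi)) / 2.0          # Lorentzian-form weights
    Q = np.zeros((Nd * Ne, Nd), dtype=complex)
    for i in range(Nd):
        Q[i * Ne:(i + 1) * Ne, i] = q[i]       # one microstrip per column
    return Q

def waveguide_matrix(rho, alpha_c=0.5, beta_c=2 * np.pi):
    """Diagonal H with h_{i,l} = exp(-rho_{i,l} (alpha_c + j beta_c));
    rho has shape (Nd, Ne); alpha_c, beta_c are placeholder constants."""
    return np.diag(np.exp(-rho.reshape(-1) * (alpha_c + 1j * beta_c)))
```

Note that every admissible weight satisfies $|2q_{i,l}-j|=1$, i.e., lies on a circle of radius $1/2$ centered at $j/2$.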
\vspace{-0.1cm}
\subsection{DMA-based Near-field Channel Model} \label{sub:model}
\begin{figure}
\centering
\includegraphics[width=0.68\columnwidth]{Fig/DMA_WPT.png}
\vspace{-0.2cm}
\caption{DMA-based energy focusing for radiating near-field multi-user \ac{wpt}.}
\vspace{0.1cm}
\label{fig:system_model}
\end{figure}
We consider a radiating near-field multi-user MIMO \ac{wpt} system where a DMA-based energy transmitter charges $M$ single-antenna energy receivers wirelessly, as illustrated in Fig. \ref{fig:system_model}. For the radiating near-field case, the distance between the DMA transmitter and the energy receivers is assumed to be not larger than the Fraunhofer distance $d_\mathrm{F} \triangleq \frac{2\,D^2}{\lambda}$ and not smaller than the Fresnel limit $d_{\mathrm N}\triangleq \sqrt[3]{\frac{D^4}{8\,\lambda}}$ \cite{guidi2019radio}, with $D$ and $\lambda$ representing the antenna diameter and the wavelength, respectively. The properties of spherical waves in the radiating near-field allow for the generation of focused beams to facilitate \ac{wpt} \cite{nepa2017near}.
To formulate the overall energy transmission model, we let $e_m $ be the unit-power energy symbol for the $m$-th energy receiver, $m \in \{1,2,\ldots,M\}\triangleq \mySet{M}$, and use ${\bf w}_m \in {\mathbb{C}}^{N_d \times 1}$ to denote the digital precoding vector. The digital input to the DMA is given by ${\bf z}_f =\sum_{m=1}^M {\bf w}_m e_m $, and thus by \eqref{eq: vector_representation} the channel input is
\begin{equation} \label{eq: vector_representation_1}
\mathbf{r}=\sum_{m=1}^M \mathbf{H} \mathbf{Q} \, {\bf w}_m e_m \, .
\end{equation}
Let $\mathbf{p}_{i,l}=(x_i,y_l,0)$, $i=1,2, \ldots N_d$, $l=1,2, \ldots N_e$, denote the Cartesian coordinate of the $l$-th element of the $i$-th microstrip. Then, under the free-space condition, the signal received by the $m$-th energy receiver located in $\mathbf{p}_m=(x_m,y_m,z_m)$ can be written as
\begin{equation}\label{eqn:RX1_new}
s(\mathbf{p}_m) = \sum_{i=1}^{N_d} \sum_{l=1}^{N_e} {A}_{i,l} (\mathbf{p}_m)\, e^{ -\jmath k d_{i,l,m}} \,r_{i,l}\, +n_m.
\end{equation}
Here, $r_{i,l}$ denotes the $\left((i-1)N_e+l\right)$-th entry of $\mathbf{r}$ in \eqref{eq: vector_representation_1}; $d_{i,l,m}=|\mathbf{p}_m-\mathbf{p}_{i,l}|$ is the distance between the $l$-th element of the $i$-th microstrip and the $m$-th energy receiver; $k \triangleq 2\pi /\lambda$ denotes the wave number;
$n_m \sim \mathcal{C} \mathcal{N}\left(0, \sigma^{2}\right)$ is white Gaussian noise; and ${A}_{i,l}(\mathbf{p}_m)$ is the path-loss coefficient, which, following \cite{ellingson2019path}, is given by
${A}_{i,l}(\mathbf{p}_m)=\sqrt{F(\Theta_{i,l,m})}\frac{\lambda}{4\,\pi d_{i,l,m}}$,
where $\Theta_{i,l,m}=(\theta_{i,l,m},\phi_{i,l,m})$ is the elevation-azimuth pair from the $l$-th element of the $i$-th microstrip to the $m$-th energy receiver, and $F(\Theta_{i,l,m})$ is the radiation profile modeled as
\begin{align} \label{eqn:radiationProfile}
F(\Theta_{i,l,m}) \!=\! \left\{\begin{array}{ll} 2\, (b+1)\, \cos^b (\theta_{i,l,m}) & \, \theta_{i,l,m} \in [0,\pi/2] \, , \\0 & \, \text{otherwise}. \\ \end{array}\right.
\end{align}
%
In \eqref{eqn:radiationProfile}, the parameter $b$ is the Boresight gain constant, e.g., $b=2$ for the dipole case \cite{ellingson2019path}.
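A direct implementation of the spherical-wave model above is sketched below; it returns, for a single receiver, the row of coefficients $A_{i,l}(\mathbf{p}_m)e^{-\jmath k d_{i,l,m}}$, assuming the DMA lies in the $z=0$ plane so that the elevation angle is measured from broadside.

```python
import numpy as np

def near_field_response(elem_pos, p_user, wavelength, b=2):
    """Coefficients A_{i,l}(p_m) * exp(-j k d_{i,l,m}) for all N elements.
    elem_pos: (N, 3) element coordinates with the DMA in the z = 0 plane."""
    k = 2 * np.pi / wavelength
    d = np.linalg.norm(p_user - elem_pos, axis=1)    # exact (near-field) distances
    cos_theta = (p_user[2] - elem_pos[:, 2]) / d     # elevation from broadside
    F = np.where(cos_theta > 0, 2 * (b + 1) * cos_theta ** b, 0.0)  # radiation profile
    A = np.sqrt(F) * wavelength / (4 * np.pi * d)    # path-loss coefficient
    return A * np.exp(-1j * k * d)                   # spherical-wave phase
```

The noiseless received signal is then the dot product of this row with the radiated vector $\mathbf{r}$; since the phase depends on the exact per-element distance rather than a common angle, the model captures the beam focusing effect.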
For ease of analysis, we rewrite \eqref{eqn:RX1_new} in the compact form $s(\mathbf{p}_m) = {\bf a}_m^H\,\mathbf{r} +n_m$, with ${\bf a}_m \triangleq \big[A_{1,1}(\mathbf{p}_m)\, e^{ -\jmath k d_{1,1,m}},\ldots , A_{N_d,N_e}(\mathbf{p}_m)\, e^{ -\jmath k d_{N_d,N_e,m}}\big]^H $. Then,
by using the expression for the channel input $\mathbf{r}$ given in \eqref{eq: vector_representation_1}, the received signal of the $m$-th energy receiver is given by
\begin{equation} \label{eqn:RX2_vector}
s(\mathbf{p}_m) ={\bf a}_m^H\,\sum_{j=1}^M \mathbf{H} \mathbf{Q} \, {\bf w}_j e_j +n_m, \quad \forall m \in \mySet{M}.
\end{equation}
\vspace{-0.1cm}
\subsection{Problem Formulation} \label{sub:problem}
Using the channel formulation \eqref{eqn:RX2_vector} and the energy harvesting model proposed in \cite{xu2014multiuser}, the harvested power from the transmitted signal at the $m$-th energy receiver is given by
\begin{equation} \label{eq:total-harvested power}
E_m = \zeta\, \sum_{j =1 }^M\left|{\bf a}_m^H\, \mathbf{H} \mathbf{Q}\, {\bf w}_j \right|^{2}, \quad m \in \mySet{M},
\end{equation}
where $0 < \zeta <1$ is the energy conversion efficiency.
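The harvested power in \eqref{eq:total-harvested power} can be evaluated for all receivers at once; in the sketch below the rows of \texttt{a\_list} are the vectors ${\bf a}_m$ (so that conjugation implements ${\bf a}_m^H$), the columns of \texttt{W} are the precoders ${\bf w}_j$, and $\zeta=0.5$ is a placeholder efficiency.

```python
import numpy as np

def harvested_power(a_list, H, Q, W, zeta=0.5):
    """E_m = zeta * sum_j |a_m^H H Q w_j|^2 for all m at once.
    a_list: (M, N) with row m = a_m; W: (Nd, M) with column j = w_j."""
    eff = np.conj(a_list) @ (H @ Q @ W)   # entry (m, j) = a_m^H H Q w_j
    return zeta * np.sum(np.abs(eff) ** 2, axis=1)
```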
Our aim is to design a transmission scheme, including both the digital precoding as well as the \ac{dma} configuration, to enable multi-user \ac{wpt} in the radiating near-field region. This is expressed as the joint optimization of the DMA weights $\mathbf{Q}$ and the digital precoding vectors $\left\{{\bf w}_m \right\}$ to maximize the weighted sum-harvested energy, subject to both the total transmit power constraint $P_{\max}$ and the structure constraint on the DMA weights matrix $\mathbf{Q}$ in \eqref{eq: weighting_matrix}. Mathematically, the problem of interest is formulated as
\begin{equation} \label{eq:optimization_problem1}
\begin{split}
&\max_{ \left\{{\bf w}_m\right\},{\bf Q}}~~\sum_{m=1}^{M} \alpha_m E_m
\\
&~~s.t.~~~~~~\eqref{eq: weighting_matrix}, \quad q_{i, l} \in \mathcal{Q}, \forall i,l, \quad \sum_{m=1}^{M} \left\|{\bf HQ w}_m\right\|^2 \leq P_{\rm max},
\end{split}
\end{equation}
where $\{\alpha_m\}_{m=1}^M$, are predefined weights that are application-specific.
\section{DMA Beam Focusing for WPT}
\label{sec:Solution}
In this section, we study the joint design of the digital precoding vector and the DMA weights for maximizing the weighted sum-harvested energy. Note that
\eqref{eq:optimization_problem1} is non-convex due to the coupled optimization variables in both the objective function and constraints, as well as the Lorentzian constraints on metamaterial elements. To make \eqref{eq:optimization_problem1} more tractable, we relax it as follows
%
\begin{equation} \label{eq:optimization_problem}
\begin{split}
&\max_{ \left\{{\bf w}_m\right\},{\bf Q}}~~\zeta ~\sum_{m=1}^{M} \sum_{j =1 }^M \alpha_m\left|{\bf a}_m^H\, \mathbf{H} \mathbf{Q}\, {\bf w}_j \right|^{2}\\
&~~s.t.~~~~~~\eqref{eq: weighting_matrix}, \quad q_{i, l} \in \mathcal{Q}, \forall i,l, \quad \sum_{m=1}^{M} \left\|{\bf w}_m\right\|^2 \leq P_{\rm max}.
\end{split}
\end{equation}
The problem \eqref{eq:optimization_problem} differs from \eqref{eq:optimization_problem1} in its power constraint, which is imposed on the digital output rather than on the transmitted signal. However, one can derive the digital precoder based on \eqref{eq:optimization_problem}, and scale $\{\myVec{w}_m\}$ such that the transmitted power constraint in \eqref{eq:optimization_problem1} holds.
Since problem \eqref{eq:optimization_problem} is still non-convex, we propose to individually optimize $\myMat{Q}$ and $\{\myVec{w}_m\}$ in an alternating manner. In the following, we show how to solve \eqref{eq:optimization_problem} for fixed $\myMat{Q}$ and for fixed $\{\myVec{w}_m\}$, respectively. Due to page limitations, the proofs of the results can be found in \cite{Longerversion1}.
\vspace{-0.1cm}
\subsection{Optimizing the Digital Precoder}
When $\bf Q$ is fixed, \eqref{eq:optimization_problem} reduces to the weighted sum-harvested energy maximization problem in multi-user \ac{wpt} systems. By defining ${\bf G}\left(\mathbf{Q}\right)=\zeta ~\sum_{m=1}^{M} \alpha_m\,\mathbf{Q}^H\,\mathbf{H}^H\,{\bf a}_m\,{\bf a}_m^H\, \mathbf{H} \mathbf{Q}$, the weighted sum-harvested energy can be reformulated as $\sum_{j=1}^{M} {\bf w}_j^H\,{\bf G}\left(\mathbf{Q}\right)\,{\bf w}_j$. As a result, for a fixed $\bf Q$, \eqref{eq:optimization_problem} is transformed into
\begin{equation} \label{eq:sub1}
\max_{ \left\{{\bf w}_j\right\}}~~\sum_{j=1}^{M} {\bf w}_j^H\,{\bf G}\left(\mathbf{Q}\right)\,{\bf w}_j, \quad
~s.t.~~~ \sum_{j=1}^{M} \left\|{\bf w}_j\right\|^2 \leq P_{\rm max}.
\end{equation}
Following \cite{xu2014multiuser}, we have the following proposition, which provides the closed-form optimal solution to \eqref{eq:sub1}.
\begin{proposition}
\label{prop:digital_solution}
Let ${\bf w}^*\left(\mathbf{Q}\right)$ be the eigenvector corresponding to the maximal eigenvalue of ${\bf G}\left(\mathbf{Q}\right)$. Then, \eqref{eq:sub1} is maximized by setting ${\bf w}_j = \sqrt{p_j}{\bf w}^*\left(\mathbf{Q}\right)$ for any non-negative $\{p_j\}$ s.t. $\sum_{j=1}^M p_j = P_{\rm max}$.
\end{proposition}
Proposition \ref{prop:digital_solution} indicates that all digital precoding vectors share the same transmission direction as ${\bf w}^*\left(\mathbf{Q}\right)$, and the total transmit power should be used to maximize the weighted sum-harvested energy. Without loss of generality, we henceforth set the digital precoder for a given $\myMat{Q}$ to be
\begin{equation} \label{eq:sub1_solution}
{\bf w}_1 = \sqrt{P_{\rm max}} {\bf w}^*\left(\mathbf{Q}\right),~ \text{and}~ {\bf w}_2=\cdots={\bf w}_M = 0.
\end{equation}
From \eqref{eq:sub1_solution} we see that a single digital precoding vector is sufficient to maximize the weighted sum-harvested energy for a given $\myMat{Q}$. This is because energy symbols do not carry information, thus each receiver can harvest energy from the same symbol.
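Proposition \ref{prop:digital_solution} reduces the digital precoding design to a single eigendecomposition; the sketch below forms ${\bf G}\left(\mathbf{Q}\right)$ and returns the scaled principal eigenvector as in \eqref{eq:sub1_solution}.

```python
import numpy as np

def optimal_precoder(a_list, H, Q, alpha, P_max, zeta=0.5):
    """w_1 = sqrt(P_max) * principal eigenvector of
    G(Q) = zeta * sum_m alpha_m Q^H H^H a_m a_m^H H Q."""
    B = np.conj(a_list) @ (H @ Q)          # row m = a_m^H H Q
    G = zeta * (B.conj().T * alpha) @ B    # Nd x Nd, Hermitian PSD
    _, vecs = np.linalg.eigh(G)            # eigenvalues in ascending order
    return np.sqrt(P_max) * vecs[:, -1]    # eigenvector of the largest eigenvalue
```

Since $\mathbf{G}$ is Hermitian positive semi-definite, \texttt{eigh} applies, and any power split $\{p_j\}$ over the common direction achieves the same objective.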
\vspace{-0.1cm}
\subsection{Optimizing the DMA Weights}
We next focus on solving \eqref{eq:optimization_problem} for fixed $\left\{{\bf w}_j\right\}$. According to \eqref{eq:sub1_solution}, problem \eqref{eq:optimization_problem} for fixed $\left\{{\bf w}_j\right\}$ is simplified as
\begin{equation} \label{eq:sub2}
\max_{{\bf Q}}~~\zeta ~\sum_{m=1}^{M} \alpha_m\,\left|{\bf a}_m^H\, \mathbf{H} \mathbf{Q}\, {\bf w}_1 \right|^{2}, \quad s.t.~~\eqref{eq: weighting_matrix}, q_{i, l} \in \mathcal{Q}, \forall i,l.
\end{equation}
To proceed, we define the $ N_d^2 \cdot N_e \times 1$ vectors ${\bf q}={\rm vec}\left(\bf Q \right) $, and ${\bf z}_m=\left({\bf w}_1^T \otimes ({\bf a}_m^H\, \mathbf{H})\right)^H$. Using these definitions, we identify an equivalent optimization problem to problem \eqref{eq:sub2}, as stated in following theorem.
\begin{theorem}
\label{thm:MultiUser}
For fixed ${\bf w}_1$, \eqref{eq:sub2} is equivalent to:
%
\vspace{-0.1cm}
\begin{equation} \label{eq:simplifiedx}
\min_{ {\bf \bar q}}~~ {\bf \bar q}^H\, {\bf A}\left({\bf w}_1\right)\, {\bf \bar q}, \quad
s.t.~~~{\bar q}_{l} \in \mathcal{Q},~\forall l \in \mathcal{A}_q,
\end{equation}
where $\mathcal{A}_q$ is the index set of the non-zero elements of ${\bf q}$; ${\bf \bar q}$ is the modified version of ${\bf q}$ obtained by removing all its zero elements; and ${\bf A}\left({\bf w}_1\right) \triangleq - \zeta \sum_{m=1}^{M} \alpha_m {\bf \bar z}_m\, {\bf \bar z}_m^H$, with
${\bf \bar z}_{m}$ being the modified version of ${\bf z}_m$ obtained by removing the entries having the same indices as the zero elements of ${\bf q}$.
\end{theorem}
\ifFullVersion
\begin{IEEEproof}
See Appendix~\ref{app:Proof2}.
\end{IEEEproof}
\fi
The equivalence between \eqref{eq:sub2} and \eqref{eq:simplifiedx} holds in the sense that they achieve the same optimal value. Thus, the solution to \eqref{eq:sub2} can be recovered from that of \eqref{eq:simplifiedx} according to the structure of ${\bf Q}$ \eqref{eq: weighting_matrix}.
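The reduced quantities of Theorem \ref{thm:MultiUser} can be assembled without forming the full Kronecker product: the nonzero pattern of ${\rm vec}({\bf Q})$ keeps, for the $i$-th microstrip, only the corresponding block of ${\bf a}_m^H\mathbf{H}$ scaled by the $i$-th entry of ${\bf w}_1$. The sketch below exploits this observation; one can check that ${\bf \bar z}_m^H{\bf \bar q}$ indeed reproduces ${\bf a}_m^H\mathbf{H}\mathbf{Q}{\bf w}_1$.

```python
import numpy as np

def reduced_problem(w1, a_list, H, Ne, alpha, zeta=0.5):
    """zbar_m and A(w1) of Theorem 1: entry (i*Ne + l) of zbar_m equals
    conj(w1[i] * (a_m^H H)[i*Ne + l]); A(w1) = -zeta sum_m alpha_m zbar_m zbar_m^H."""
    g = np.conj(a_list) @ H                          # row m = a_m^H H
    zbar = np.conj(np.repeat(w1, Ne)[None, :] * g)   # (M, Nd*Ne)
    A = -zeta * np.einsum('m,mp,mq->pq', alpha, zbar, np.conj(zbar))
    return A, zbar
```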
Problem \eqref{eq:simplifiedx} is still non-convex and includes the Lorentzian constraint ${\bar q}_{l} \in \mathcal{Q}$ defined in \eqref{eqn:FreqSel}. This constraint characterizes the feasible set as a circle in the complex plane, $\left|{\bar q}_{l} - \frac{1}{2}e^{j \frac{\pi}{2}}\right| = \frac{1}{2}$, centered at $\frac{1}{2}e^{j \frac{\pi}{2}}$ with radius $\frac{1}{2}$. In order to simplify \eqref{eq:simplifiedx}, we define a new vector variable ${\bf b} \in \mathbb{C}^{N}$ whose $l$-th entry is given by
\begin{equation} \label{eq:change_variable}
b_l=2{\bar q}_{l} - e^{j \frac{\pi}{2}}, \quad \forall l \in \mathcal{A}_q.
\end{equation}
The variable $b_l$ lies on the unit circle of the complex plane, i.e., $\left|b_l\right| = 1$. According to \eqref{eq:change_variable}, we have ${\bf \bar q}=\frac{1}{2}\left({\bf b}+e^{j \frac{\pi}{2}}{\bf 1}\right)$, where ${\bf 1}$ denotes an $N \times 1$ all-ones vector. Hence, we transform
\eqref{eq:simplifiedx} into
\begin{equation} \label{eq:changed}
\begin{split}
&\min_{ {\bf b}}~~f\left({\bf b}\right) \triangleq \frac{1}{4} \left({\bf b}+e^{j \frac{\pi}{2}}{\bf 1}\right)^H {\bf A}\left({\bf w}_1\right)\, \left({\bf b}+e^{j \frac{\pi}{2}}{\bf 1}\right)\\
&~~s.t.~~\left|b_{l}\right| =1,~\forall l \in \mathcal{A}_q.
\end{split}
\end{equation}
The search space in \eqref{eq:changed} is the product of $N$ complex circles, which is a Riemannian submanifold of ${\mathbb{C}}^N$. Thus, \eqref{eq:changed} can be tackled using the Riemannian conjugate gradient (RCG) algorithm \cite{yu2019miso,zhang2021beam}.
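A minimal sketch of this manifold optimization is given below. For brevity it uses plain Riemannian gradient descent rather than full conjugate gradients (the actual RCG additionally maintains a conjugate search direction, e.g., via Polak--Ribi\`ere updates), but it contains the two essential ingredients: projection of the Euclidean gradient onto the tangent space of the product of circles, and retraction back onto the circles by entrywise normalization.

```python
import numpy as np

def riemannian_descent(A, b0, iters=300):
    """Minimize f(b) = (1/4)(b + j1)^H A (b + j1) over |b_l| = 1 by
    Riemannian gradient descent with unit-modulus retraction."""
    b = b0 / np.abs(b0)                              # start on the manifold
    ones = np.ones_like(b)
    step = 1.0 / max(np.linalg.norm(A, 2), 1e-12)    # ~1/L step size
    for _ in range(iters):
        egrad = 0.5 * (A @ b + 1j * (A @ ones))      # Euclidean gradient
        rgrad = egrad - np.real(egrad * np.conj(b)) * b  # tangent projection
        b = b - step * rgrad
        b = b / np.abs(b)                            # retraction onto the circles
    return b

def f_obj(A, b):
    v = b + 1j * np.ones_like(b)
    return 0.25 * (np.conj(v) @ A @ v).real
```

Libraries such as Pymanopt provide the complex circle manifold and a full conjugate-gradient solver if a more careful implementation is desired.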
Denote by ${\bf Q}$ and ${\bf w}_1$ the solution to problem \eqref{eq:optimization_problem} obtained by alternating between the two subproblems. Then, we scale ${\bf w}_1$ as ${\bf w}_1\leftarrow\sqrt{P_{\rm max}}\frac{{\bf w}_1}{\left\|{\bf H Q}{\bf w}_1\right\|}$, such that the resulting ${\bf w}_1$ together with ${\bf Q}$ form an effective approximate solution to problem \eqref{eq:optimization_problem1}, satisfying the transmitted signal power constraint.
Our proposed alternating approach for solving problem \eqref{eq:optimization_problem1} is summarized as Algorithm \ref{algorithm2}. In particular,
in the $4$th step, the
update of ${\bf b}^{\left(t+1\right)}$ through the RCG algorithm involves both ${\bf b}^{\left(t\right)}$ from step 3 as its initial value, and the Euclidean gradient of the objective $f\left({\bf b}\right)$ at the point ${\bf b}$, namely $ \nabla\,f\left({\bf b}\right) =\frac{1}{2} \left({\bf A}\left({\bf w}_1\right)\,{\bf b} + e^{j \frac{\pi}{2}}\,{\bf A}\left({\bf w}_1\right)\, {\bf 1}\right) $, which is required for the calculation of the Riemannian gradient.
\begin{algorithm}[t!]
\caption{Proposed algorithm for solving problem \eqref{eq:optimization_problem1}}
\label{algorithm2}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Initialize:}} \REQUIRE ${\bf Q}^{\left(0\right)}$; \\
\FOR{$t=0,1,\ldots,T$ }
\STATE Calculate ${\bf w}_1^{\left(t\right)}$ based on \eqref{eq:sub1_solution}, and then update ${\bf A}\left({\bf w}_1^{\left(t\right)}\right)$; \\
\STATE Calculate ${\bf b}^{\left(t\right)}$ based on ${\bf Q}^{\left(t\right)}$ and \eqref{eq:change_variable}; \\
\STATE Update ${\bf b}^{\left(t+1\right)}$ by solving problem \eqref{eq:changed};\\
\STATE Obtain ${\bf \bar q}^*$ for problem \eqref{eq:simplifiedx} based on ${\bf b}^{\left(t+1\right)}$ and \eqref{eq:change_variable};\\
\STATE Update ${\bf Q}^{\left(t+1\right)}$ for problem \eqref{eq:sub2} based on ${\bf \bar q}^*$ and \eqref{eq: weighting_matrix};\\
\STATE $t=t+1;$
\ENDFOR
\STATE ${\bf w}_1^*=\sqrt{P_{\rm max}}\frac{{\bf w}_1^{\left(T\right)}}{\left\|{\bf H}{\bf Q}^{\left(T\right)}{\bf w}_1^{\left(T\right)}\right\|}$;
\renewcommand{\algorithmicrequire}{\textbf{Output:}} \REQUIRE ${\bf w}_1^*$, ${\bf Q}^{*}={\bf Q}^{\left(T\right)}$.
\end{algorithmic}
\end{algorithm}
%
\vspace{-0.1cm}
\subsection{Discussion}
\label{sec:Discussion}
The considered weighted sum-harvested energy is also a commonly used metric in conventional far-field multi-user \ac{wpt} scenarios \cite{zeng2017communications}.
While we do not explicitly enforce the DMA to generate focused energy beams, this indeed happens when seeking to maximize the weighted sum-harvested energy, as numerically illustrated in Section \ref{sec:Sims}. This is because we consider the radiating near-field scenario, in which the energy beam focusing capability inherently exists and is implicitly encapsulated in the objective via $\{{\bf a}_m\}$.
As shown in Section \ref{sec:Sims}, such energy focusing brings several advantages to radiating near-field WPT systems. First, it enhances the energy transfer efficiency compared with directive far-field radiation. Second, it reduces energy pollution and limits human exposure to radiated energy. Therefore, this capability is expected to notably facilitate the charging of 6G IoE devices in indoor settings.
For multi-user wireless communications operating in the radiating near-field region, beam focusing has been exploited to mitigate co-channel interference and hence maximize the sum-rate in our previous work \cite{zhang2021beam}. Despite the similarity between multi-user near-field \ac{wpt} (considered here) and communications (considered in \cite{zhang2021beam}), there are several fundamental differences in both the design objectives and the proposed algorithms. Specifically, in wireless communications, focused beams are designed to reduce co-channel interference, which is harmful to the data transmission rate; in \ac{wpt}, co-channel interference is a useful energy source for energy receivers, leading to different focused beam design considerations. The fact that beam focusing designs differ between \ac{wpt} and wireless communications motivates exploring simultaneous wireless information and power transfer in the radiating near-field, i.e., a paradigm in which a hybrid transmitter communicates with and powers multiple devices at the same time \cite{xu2014multiuser}. We leave this extension for future work.
\section{Numerical Evaluations}
\label{sec:Sims}
In this section, we present some representative numerical results to demonstrate the potential of energy beam focusing for radiating near-field \ac{wpt}. We consider a radiating near-field WPT system where the energy transmitter is equipped with a planar DMA positioned in the $xy$-plane, and the single-antenna energy receivers are positioned in the $xz$-plane. The antenna size is $30~ {\rm cm} \times 30~ {\rm cm}$. The inter-element spacing in DMA is $\lambda / 2$, and the numbers of microstrips and metamaterial elements are $N_d = N_e = \lfloor 2D/\lambda\rfloor$, where $\lfloor \cdot\rfloor$ is the integer floor function. We use { $\alpha_{\rm c} = 1.2~[{\rm m}^{-1}]$ and $\beta_{\rm c} = 827.67 ~[{\rm m}^{-1}]$} \cite{zhang2021beam}, to represent the propagation inside the waveguides. We set $P_{\max}$ to be $1~$W, and the RF-to-DC energy conversion efficiency is $\zeta=0.5$.
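As a quick sanity check of the operating regimes in the comparison that follows, the snippet below evaluates the Fraunhofer distance $d_{\rm F} = 2D^2/\lambda$ for the two carrier frequencies, assuming the aperture size $D = 0.3$~m (the panel side length; this choice of $D$ is an assumption for illustration).

```python
# Fraunhofer distance d_F = 2 D^2 / lambda for the two carriers compared in
# the figure, taking D = 0.3 m (the panel side length) as the aperture size.
c = 3e8          # speed of light [m/s]
D = 0.3          # aperture size [m] (assumption)

def fraunhofer(f_c):
    return 2 * D**2 / (c / f_c)

d_F_28G = fraunhofer(28e9)   # ~16.8 m
d_F_12G = fraunhofer(1.2e9)  # ~0.72 m
# A receiver at 1.51 m is thus within the radiating near field at 28 GHz,
# but beyond the Fraunhofer distance (far field) at 1.2 GHz.
```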
To demonstrate the gains of near-field energy beam focusing over far-field beam steering, Fig.~\ref{fig:rate_twoUser} depicts the numerically evaluated normalized received power at each point of the predefined region in the $xz$-plane, where the normalized received power is defined as the ratio of the received power of an energy receiver to its corresponding channel gain for removing the influence of path-loss. The energy transmission scheme is designed to maximize the received power of the single energy receiver located at ${\rm F}_1(x,y,z)=(0,0,1.51~{\rm m})$. In Fig. \ref{fig:rate_twoUser}(a), we set the frequency as $28~$GHz, so that the location of the target energy receiver is in the radiating near-field region; whereas in Fig. \ref{fig:rate_twoUser}(b), we set the carrier
frequency as $1.2~$GHz, resulting in the target energy receiver being located in the far-field region. It is observed from Fig. \ref{fig:rate_twoUser}(a) that, in the near-field case, the energy beam is focused around the target energy receiver area, and the power harvested by the target energy receiver is up to $13.4~\mu W$. By contrast, in the far-field case shown in Fig. \ref{fig:rate_twoUser}(b), energy can only be transmitted towards a given direction with a comparatively wider energy beam. Consequently, far-field signalling results in the target energy receiver harvesting merely $6.5~\mu W$, i.e., only $48\%$ of the power obtained in the near-field.
This gain is achieved despite the fact that the system in Fig. \ref{fig:rate_twoUser}(b) operates at a lower frequency and hence with a lower isotropic path loss.
Besides, by comparing Figs. \ref{fig:rate_twoUser}(a) and \ref{fig:rate_twoUser}(b), it is observed that for the radiating near-field WPT system, energy beam focusing is capable of not only enhancing energy transfer efficiency, but also reducing energy pollution.
\begin{figure}
\centering
\subfigure[Near-field WPT]{
\label{fig:subfig:near-field}
\includegraphics[width=2.1in]{Fig/N1.png}
}
\subfigure[Far-field WPT]{
\label{fig:subfig:far-field}
\includegraphics[width=2.1in]{Fig/N2.png}
}
\vspace{-0.3cm}
\caption{The normalized received power of the energy receiver located at the: (a) near-field region; (b) far-field region.}
\label{fig:rate_twoUser}
\end{figure}
\begin{table}
\centering
\caption{A comparison of harvested energy of each energy receiver under different combinations of weighting coefficients.}\label{tabel1}
\begin{tabular}{ |c|c|c| }
\hline
Received Energy & Energy Receiver 1 & Energy Receiver 2\\
\hline
$\alpha_1=0.5$, $\alpha_2=0.5 $ & 30.3 $\mu W$ & 2.5 $\mu W$ \\
\hline
$\alpha_1=0.1$, $\alpha_2=0.9 $ & 18.7 $\mu W$& 4.7 $\mu W$\\
\hline
\end{tabular}
\end{table}
In Table~\ref{tabel1}, we show the received power of two energy receivers obtained by our proposed Algorithm \ref{algorithm2} under different combinations of weighting coefficients. The energy receivers are located at ${\rm F}_1(x,y,z)=(0,0,0.97~{\rm m})$ and ${\rm F}_2(x,y,z)=(0,0,1.51~{\rm m})$, lying in a similar angular direction. It is observed from Table~\ref{tabel1} that for the case of $\alpha_1=0.5$, $\alpha_2=0.5$, the harvested power of energy receiver 1 is much larger than that of energy receiver 2. This is because energy receiver 1 has a better channel condition, and thus the energy beams are mainly focused around its location to maximize the objective when the weighting coefficients are equal. When we change the weighting coefficients to $\alpha_1=0.1$, $\alpha_2=0.9$, the power harvested by energy receiver 2 increases from $2.5~\mu $W to $4.7~\mu $W, while the power harvested by energy receiver 1 decreases from $30.3~\mu $W to $18.7~\mu $W. This demonstrates that the energy transmitter is capable of intelligently charging multiple users according to their priority/requirements even if the energy receivers lie in similar angular directions, thanks to the distinguishing capability of near-field energy focusing. We point out that beam steering in the far-field does not possess such a distinguishing ability, which is especially important for future 6G IoE applications where devices are expected to be densely deployed in the Fresnel region.
\section{Conclusions}
\label{sec:Conclusions}
In this work we studied the use of DMAs for multi-user \ac{wpt} in the radiating near-field region. We presented a model for DMA-based radiating near-field WPT systems. We then formulated the joint optimization of the DMA weights and digital precoders to maximize the weighted sum-harvested energy, and proposed efficient algorithms to solve the resulting non-convex problems. Numerical results demonstrated that using DMAs for energy focusing results in improved energy transfer efficiency in the radiating near-field with minimal energy pollution.
\ifFullVersion
\vspace{-0.2cm}
\begin{appendix}
%
\numberwithin{proposition}{subsection}
\numberwithin{lemma}{subsection}
\numberwithin{corollary}{subsection}
\numberwithin{remark}{subsection}
\numberwithin{equation}{subsection}
%
%
\vspace{-0.2cm}
\subsection{Proof of Theorem \ref{thm:MultiUser}}
\label{app:Proof2}
By using the fact that ${\bf x}^T{\bf Q} {\bf y}=({\bf y}^T \otimes {\bf x}^T) {\rm vec}({\bf Q})$ holds for arbitrary vectors $\bf x$, $\bf y$, and matrix $\bf Q$, the objective function of \eqref{eq:sub2} is rewritten as
\begin{equation} \label{eq:sub2_reformulation}
\zeta ~\sum_{m=1}^{M} \alpha_m\,\left|{\bf a}_m^H\, \mathbf{H} \mathbf{Q}\, {\bf w}_1 \right|^{2} = \zeta ~\sum_{m=1}^{M} \alpha_m\,\left|{\bf z}_m^H {\bf q}\right|^{2},
\end{equation}
where ${\bf q}={\rm vec}\left(\bf Q \right) $ and ${\bf z}_m=\left({\bf w}_1^T \otimes ({\bf a}_m^H\, \mathbf{H})\right)^H, \forall m\in \mySet{M}$, are $ N_d^2 \cdot N_e \times 1$ vectors.
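The vectorization identity invoked above is easy to verify numerically; the following snippet checks ${\bf x}^T{\bf Q}{\bf y}=({\bf y}^T\otimes{\bf x}^T)\,{\rm vec}({\bf Q})$ on random complex data, using column-wise (Fortran-order) vectorization.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)
Q = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

# Column-stacking vectorization, i.e., numpy's Fortran ('F') order.
q = Q.reshape(-1, order="F")

lhs = x @ Q @ y                 # x^T Q y (1-D arrays, no conjugation)
rhs = np.kron(y, x) @ q         # (y^T kron x^T) vec(Q)
```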
As the zero elements of ${\bf q}$ have no effect on the value of the right-hand expression in \eqref{eq:sub2_reformulation}, we remove all of them and equivalently rewrite the objective function of \eqref{eq:sub2} as
\begin{equation} \label{eq:sub2_reformulation2}
\zeta ~\sum_{m=1}^{M} \alpha_m\,\left|{\bf \bar z}_m^H {\bf \bar q}\right|^{2},
\end{equation}
where ${\bf \bar q}$ is the modified version of ${\bf q}$ obtained by removing all the zero elements of ${\bf q}$; ${\bf \bar z}_m, m \in \mySet{M}$, are the modified versions of ${\bf z}_m$, which are obtained by removing the elements having the same index as the zero elements of ${\bf q}$.
Based on \eqref{eq:sub2_reformulation2}, and since $\zeta \sum_{m=1}^{M} \alpha_m\left|{\bf \bar z}_m^H {\bf \bar q}\right|^{2} = -{\bf \bar q}^H\, {\bf A}\left({\bf w}_1\right)\, {\bf \bar q}$, problem \eqref{eq:sub2} is thus equivalent to
\begin{equation} \label{eq:sub2_reformulate}
\begin{split}
&\min_{ {\bf \bar q}}~~ {\bf \bar q}^H\, {\bf A}\left({\bf w}_1\right)\, {\bf \bar q}\\
&~~s.t.~~~{\bar q}_{l} \in \mathcal{Q},~\forall l \in \mathcal{A}_q,
\vspace{-0.1cm}
\end{split}
\end{equation}
where ${\bf A}\left({\bf w}_1\right) \triangleq - \zeta \sum_{m=1}^{M} \alpha_m {\bf \bar z}_m\, {\bf \bar z}_m^H$, and $\mathcal{A}_q$ denotes the index set of the non-zero elements of ${\bf q}$.
\end{appendix}
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Internet of Everything (IoE) is one of the major applications of future 6G wireless communication networks \cite{matthaiou2021road}. The fact that many IoE devices connected to the network are either battery-powered or battery-less \cite{hu2020energy} gives rise to the need to energize them in a simple and efficient manner. Radio frequency (RF) \ac{wpt} is regarded as a promising technology for charging IoE devices, by utilizing RF signals to wirelessly and simultaneously power multiple devices. Compared with the near-field reactive-based \ac{wpt} techniques, such as inductive coupling and magnetic resonance coupling which require the charged device to be very close to the energy source, RF-based \ac{wpt} is capable of charging devices in a more flexible way over longer distances. Hence, RF-based \ac{wpt} presents many potential applications for supporting and prolonging the operation of IoE devices in in-home setups as well as in industrial and commercial settings \cite{CosMas:17}.
To date, RF \ac{wpt} is mainly studied for charging devices residing in the far-field \cite{zeng2017communications}. In such cases, given the antennas' size, the operational distance between the energizing transmitter and the receivers is larger than the Fraunhofer distance, and thus the radiating wavefront obeys the conventional plane wave model. In such conditions, the transmitter can only direct its energy towards a given angle via beamsteering techniques, resulting in low efficiency and notable energy pollution, i.e., energy radiated at undesired locations. Nonetheless, future wireless 6G systems are expected to support an ecosystem with IoE devices at mmWave bands \cite{saad2019vision} using massive antenna arrays, such as those realized using \acp{dma}, made of configurable radiating metamaterial elements \cite{Yoo2018TCOM, shlezinger2019dynamic, Huang2020holographic, shlezinger2020dynamic,Liaskos_Visionary_2018}. In this case, devices located in distances ranging from a few centimeters to several tens of meters reside in the {\em radiating near-field} region \cite{guidi2019radio,guerra2021near}. Unlike the far-field case, where the EM field is a plane wave, in the radiating near-field region, the EM field is a spherical wavefront. In such settings, transmitters can generate focused beams \cite{nepa2017near}, which were shown to mitigate interference in multi-user communications \cite{zhang2021beam}, and it was recently envisioned that this capability can facilitate efficient \ac{wpt} with minimal energy pollution \cite{zhang2021near}. This motivates the exploration of the ability to achieve energy focusing using emerging antenna architectures, such as \acp{dma}.
In this work we study radiating near-field \ac{wpt} when the energy transmitter uses a \ac{dma}, quantifying its capability to charge multiple remote devices with minimal energy pollution by forming focused energy beams.
We first formulate a mathematical model for DMA-based near-field multi-user \ac{wpt} systems, incorporating both the feasible processing of DMAs as well as the propagation of the transmitted EM waves in near-field wireless channels.
Then, we jointly optimize the digital precoding vector and the DMA weights for maximizing the weighted sum-harvested energy when working in the radiating near-field, while accounting for the specific Lorentzian-form response of metamaterial elements.
To design the radiating near-field transmission pattern based on the weighted sum-harvested energy maximization objective, we propose an alternating optimization algorithm to deal with the corresponding non-convex optimization problem.
In particular, we provide a closed-form optimal digital precoding solution for a fixed DMA configuration. Then, we recast the \ac{dma} elements design problem into a Riemannian manifold optimization problem, which we efficiently solve using the Riemannian conjugate gradient approach.
Simulation results show that our proposed design concentrates the transmissions to the desired focal points, illustrating its energy focusing capability. We also show that by exploiting the beam focusing capabilities of DMAs, one can intelligently and efficiently charge multiple users according to their priority/requirements with minimal energy pollution.
To the best of our knowledge, this work is the first to study beam focusing for multi-user \ac{wpt}, facilitating simultaneous power charging of multiple energy receivers.
The rest of this paper is organized as follows: Section \ref{sec:Model} models DMA-based radiating near-field \ac{wpt} systems, and formulates the sum-harvested power maximization problem. Section \ref{sec:Solution} presents an efficient algorithm for tuning the DMA weights, while Section \ref{sec:Sims} provides numerical results.
Finally, Section \ref{sec:Conclusions} concludes the paper.
We use boldface lower-case and upper-case letters for vectors and matrices, respectively.
The $\ell_2$ norm, vectorization, transpose, conjugate, and Hermitian transpose, are denoted as $\| \cdot \|$, ${\rm vec}(\cdot)$, $(\cdot)^T$, $(\cdot)^{\dag}$, and $(\cdot)^H$, respectively, and
$\mathbb{C}$ is the set of complex numbers.
\section{System Model}
\label{sec:Model}
In this section, we characterize the mathematical model for DMA-based radiating near-field \ac{wpt}. We begin by introducing the DMA transmission model in Subsection \ref{sub:DMA}. Then, we present the near-field wireless channel model in Subsection \ref{sub:model}, and formulate the harvested power maximization problem in Subsection~\ref{sub:problem}.
\vspace{-0.1cm}
\subsection{Dynamic Metasurface Antennas} \label{sub:DMA}
\ac{dma} is an emerging technology for realizing large scale antenna arrays using reconfigurable metamaterials, whose physical properties such as permittivity and permeability are dynamically adjustable \cite{shlezinger2020dynamic}. These antenna architectures are typically comprised of multiple microstrips, each containing multiple metamaterial elements. The frequency response of each element is independently adjustable by varying its local dielectric properties \cite{Sleasman-2016JAWPL}. For DMA-based transmitters, each microstrip is fed by an RF chain, and the input signal is radiated by all the elements within the same microstrip \cite{wang2019dynamic}.
To model the transmission procedure, consider a DMA with $N_d$ microstrips of $N_e$ elements each, i.e., the total number of tunable metamaterial elements is $N\triangleq N_d \cdot N_e$. Letting ${\bf z}_f \in {\mathbb{C}}^{N_d \times 1}$ denote the input signals to the microstrips, the radiated signal, denoted by ${\mathbf{r}}$, can be written as %
\begin{equation} \label{eq: vector_representation}
{\mathbf{r}}= {\mathbf{H Q}}\, {\mathbf{z}}_f.
\end{equation}
Here, $\mathbf{Q}\in {\mathbb{C}}^{N \times N_d}$ is the configurable DMA weight matrix, whose entries are
\begin{equation} \label{eq: weighting_matrix}
{\mathbf{Q}}_{(i-1) N_{e}+l, n}=\left\{\begin{array}{ll}
q_{i, l} & i=n \\
0 & i \neq n \, ,
\end{array}\right.
\end{equation}
where $q_{i,l}$ denotes the frequency response of the $l$-th metamaterial element of $i$-th microstrip. These responses satisfy the Lorentzian form \cite{DSmith-2017PRA, smith2017analysis}, approximated as
\begin{equation}\label{eqn:FreqSel}
q_{i,l} \in \mathcal{Q}\triangleq \left\{\frac{j+e^{j \phi}}{2}| \phi \in [0,2\pi]\right\}, \qquad \forall i,l.
\end{equation}
In addition, $\mathbf{H}$ in \eqref{eq: vector_representation} is an $N \times N$ diagonal matrix with entries
${\mathbf{H}}_{((i-1)N_e+l,(i-1)N_e+l)}=h_{i,l}$,
where $h_{i,l}$ denotes the signal propagation effect of the $l$-th metamaterial element of $i$-th microstrip (inside the microstrip). These coefficients can be written as
$h_{i,l}=e^{-\rho_{i,l}(\alpha_{\rm c}+ j\beta_{\rm c}) }$,
where $\alpha_{\rm c}$ and $\beta_{\rm c}$ are two constants depending on the characteristic of DMA, and $\rho_{i,l}$ denotes the location of the $l$-th element in the $i$-th microstrip.
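To make the model concrete, the sketch below assembles ${\bf H}$ and a structured ${\bf Q}$ per \eqref{eq: weighting_matrix} for illustrative sizes, and forms the radiated signal of \eqref{eq: vector_representation}. It assumes $\lambda/2$ element spacing along each microstrip for $\rho_{i,l}$ and draws random Lorentzian phases; this is a toy instantiation, not the configuration used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
Nd, Ne = 4, 8                  # microstrips, elements per microstrip
N = Nd * Ne
lam = 3e8 / 28e9               # wavelength [m] at 28 GHz
alpha_c, beta_c = 1.2, 827.67  # waveguide propagation constants [1/m]

# H: diagonal, entry exp(-rho (alpha_c + j beta_c)), with rho the element's
# position along its microstrip (assumed lambda/2 spacing).
rho = np.tile((np.arange(Ne) + 1) * lam / 2, Nd)
H = np.diag(np.exp(-rho * (alpha_c + 1j * beta_c)))

# Q: block structure of the weight matrix -- row (i-1)Ne+l, column i holds
# the Lorentzian weight q_{i,l} = (j + e^{j phi}) / 2, zeros elsewhere.
phi = rng.uniform(0, 2 * np.pi, (Nd, Ne))
q_weights = (1j + np.exp(1j * phi)) / 2
Q = np.zeros((N, Nd), dtype=complex)
for i in range(Nd):
    Q[i * Ne:(i + 1) * Ne, i] = q_weights[i]

z_f = rng.standard_normal(Nd) + 1j * rng.standard_normal(Nd)  # microstrip inputs
r = H @ Q @ z_f                # radiated signal
```

Note that every Lorentzian weight lies on the circle $|q - j/2| = 1/2$, which is the feasibility set exploited later when optimizing the DMA configuration.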
\vspace{-0.1cm}
\subsection{DMA-based Near-field Channel Model} \label{sub:model}
\begin{figure}
\centering
\includegraphics[width=0.68\columnwidth]{Fig/DMA_WPT.png}
\vspace{-0.2cm}
\caption{DMA-based energy focusing for radiating near-field multi-user \ac{wpt}.}
\vspace{0.1cm}
\label{fig:system_model}
\end{figure}
We consider a radiating near-field multi-user MIMO \ac{wpt} system where a DMA-based energy transmitter charges $M$ single-antenna energy receivers wirelessly, as illustrated in Fig. \ref{fig:system_model}. For the radiating near-field case, the distance between the DMA transmitter and the energy receivers is assumed to be not larger than the Fraunhofer distance $d_\mathrm{F} \triangleq \frac{2\,D^2}{\lambda}$ and not smaller than the Fresnel limit $d_{\mathrm N}\triangleq \sqrt[3]{\frac{D^4}{8\,\lambda}}$ \cite{guidi2019radio}, with $D$ and $\lambda$ representing the antenna diameter and the wavelength, respectively. The properties of spherical waves in the radiating near-field allow for the generation of focused beams to facilitate \ac{wpt} \cite{nepa2017near}.
To formulate the overall energy transmission model, we let $e_m $ be the unit-power energy symbol for the $m$-th energy receiver, $m \in \{1,2,\ldots,M\}\triangleq \mySet{M}$, and use ${\bf w}_m \in {\mathbb{C}}^{N_d \times 1}$ to denote the digital precoding vector. The digital input to the DMA is given by ${\bf z}_f =\sum_{m=1}^M {\bf w}_m e_m $, and thus by \eqref{eq: vector_representation} the channel input is
\begin{equation} \label{eq: vector_representation_1}
\mathbf{r}=\sum_{m=1}^M \mathbf{H} \mathbf{Q} \, {\bf w}_m e_m \, .
\end{equation}
Let $\mathbf{p}_{i,l}=(x_i,y_l,0)$, $i=1,2, \ldots N_d$, $l=1,2, \ldots N_e$, denote the Cartesian coordinates of the $l$-th element of the $i$-th microstrip. Then, under the free-space condition, the signal received by the $m$-th energy receiver located at $\mathbf{p}_m=(x_m,y_m,z_m)$ can be written as
\begin{equation}\label{eqn:RX1_new}
s(\mathbf{p}_m) = \sum_{i=1}^{N_d} \sum_{l=1}^{N_e} {A}_{i,l} (\mathbf{p}_m)\, e^{ -\jmath k d_{i,l,m}} \,r_{i,l}\, +n_m.
\end{equation}
Here, $r_{i,l}$ is the entry of the radiated signal $\mathbf{r}$ in \eqref{eq: vector_representation_1} corresponding to the $l$-th element of the $i$-th microstrip; $d_{i,l,m}=|\mathbf{p}_m-\mathbf{p}_{i,l}|$ is the distance between the $l$-th element of the $i$-th microstrip and the $m$-th energy receiver; $k \triangleq 2\pi /\lambda$ denotes the wave number;
$n_m \sim \mathcal{C} \mathcal{N}\left(0, \sigma^{2}\right)$ is white Gaussian noise; and ${A}_{i,l}(\mathbf{p}_m)$ is the path-loss coefficient. Following \cite{ellingson2019path}, we have
${A}_{i,l}(\mathbf{p}_m)=\sqrt{F(\Theta_{i,l,m})}\frac{\lambda}{4\,\pi d_{i,l,m}}$,
where $\Theta_{i,l,m}=(\theta_{i,l,m},\phi_{i,l,m})$ is the elevation-azimuth pair from the $l$-th element of the $i$-th microstrip to the $m$-th energy receiver, and $F(\Theta_{i,l,m})$ is the radiation profile modeled as
\begin{align} \label{eqn:radiationProfile}
F(\Theta_{i,l,m}) \!=\! \left\{\begin{array}{ll} 2\, (b+1)\, \cos^b (\theta_{i,l,m}) & \, \theta_{i,l,m} \in [0,\pi/2] \, , \\0 & \, \text{otherwise}. \\ \end{array}\right.
\end{align}
%
In \eqref{eqn:radiationProfile}, the parameter $b$ is the boresight gain constant, e.g., $b=2$ for the dipole case \cite{ellingson2019path}.
For ease of analysis, we rewrite \eqref{eqn:RX1_new} in the compact form $s(\mathbf{p}_m) = {\bf a}_m^H\,\mathbf{r} +n_m$, with ${\bf a}_m \triangleq \big[A_{1,1}(\mathbf{p}_m)\, e^{ -\jmath k d_{1,1,m}},\ldots , A_{N_d,N_e}(\mathbf{p}_m)\, e^{ -\jmath k d_{N_d,N_e,m}}\big]^H $. Then,
by using the expression for the channel input $\mathbf{r}$ given in \eqref{eq: vector_representation_1}, the received signal of the $m$-th energy receiver is given by
\begin{equation} \label{eqn:RX2_vector}
s(\mathbf{p}_m) ={\bf a}_m^H\,\sum_{j=1}^M \mathbf{H} \mathbf{Q} \, {\bf w}_j e_j +n_m, \quad \forall m \in \mySet{M}.
\end{equation}
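The focusing behavior implied by this channel model can be illustrated with a toy example: build the near-field coefficients for a small square aperture, apply conjugate-phase (matched) weighting toward a focal point, and compare the received power at the focus against an unfocused (uniform-weight) aperture and an off-focus location. All sizes and positions below are illustrative stand-ins, not the experimental setup.

```python
import numpy as np

lam = 3e8 / 28e9                  # wavelength at 28 GHz [m]
k = 2 * np.pi / lam
b_gain = 2                        # boresight gain constant (dipole case)

# Element positions: a 16 x 16 grid in the xy-plane with lambda/2 spacing
# (a stand-in for the DMA aperture, indices flattened).
n_side = 16
coords = (np.arange(n_side) - (n_side - 1) / 2) * lam / 2
px, py = np.meshgrid(coords, coords, indexing="ij")
pos = np.stack([px.ravel(), py.ravel(), np.zeros(n_side**2)], axis=1)

def channel(p):
    """Entries A_{i,l}(p) exp(-j k d_{i,l}) of the near-field channel to a
    receiver at Cartesian position p (broadside along the z-axis)."""
    d = np.linalg.norm(p - pos, axis=1)
    cos_theta = p[2] / d
    F = 2 * (b_gain + 1) * cos_theta**b_gain      # radiation profile
    return np.sqrt(F) * lam / (4 * np.pi * d) * np.exp(-1j * k * d)

focus = np.array([0.0, 0.0, 1.5])
h_f = channel(focus)

# Conjugate-phase ("matched") weighting focuses the beam at `focus`.
w = h_f.conj() / np.linalg.norm(h_f)

p_focus = abs(h_f @ w) ** 2                                  # power at focus
p_uniform = abs(h_f @ np.ones(n_side**2)) ** 2 / n_side**2   # unfocused aperture
p_off = abs(channel(np.array([0.5, 0.0, 1.5])) @ w) ** 2     # 0.5 m off-axis
```

By the Cauchy--Schwarz inequality, the matched weights maximize the received power at the focal point, and the power observed off focus is substantially lower, mirroring the focused spots reported in Section \ref{sec:Sims}.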
\vspace{-0.1cm}
\subsection{Problem Formulation} \label{sub:problem}
Using the channel formulation \eqref{eqn:RX2_vector} and the energy harvesting model proposed in \cite{xu2014multiuser}, the power harvested from the transmitted signal by the $m$-th energy receiver is given by
\begin{equation} \label{eq:total-harvested power}
E_m = \zeta\, \sum_{j =1 }^M\left|{\bf a}_m^H\, \mathbf{H} \mathbf{Q}\, {\bf w}_j \right|^{2}, \quad m \in \mySet{M},
\end{equation}
where $0 < \zeta <1$ is the energy conversion efficiency.
Our aim is to design a transmission scheme, including both the digital precoding as well as the \ac{dma} configuration, to enable multi-user \ac{wpt} in the radiating near-field region. This is expressed as the joint optimization of the DMA weights $\mathbf{Q}$ and the digital precoding vectors $\left\{{\bf w}_m \right\}$ to maximize the weighted sum-harvested energy, subject to both the total transmit power constraint $P_{\max}$, and the structure constraint on the DMA weights matrix $\mathbf{Q}$ in \eqref{eq: weighting_matrix}. Mathematically, the problem of interest can be formulated as
\begin{equation} \label{eq:optimization_problem1}
\begin{split}
&\max_{ \left\{{\bf w}_m\right\},{\bf Q}}~~\sum_{m=1}^{M} \alpha_m E_m
\\
&~~s.t.~~~~~~\eqref{eq: weighting_matrix}, \quad q_{i, l} \in \mathcal{Q}, \forall i,l, \quad \sum_{m=1}^{M} \left\|{\bf HQ w}_m\right\|^2 \leq P_{\rm max},
\end{split}
\end{equation}
where $\{\alpha_m\}_{m=1}^M$ are predefined, application-specific weights.
\section{DMA Beam Focusing for WPT}
\label{sec:Solution}
In this section, we study the joint design of the digital precoding vector and the DMA weights for maximizing the weighted sum-harvested energy. Note that
\eqref{eq:optimization_problem1} is non-convex due to the coupled optimization variables in both the objective function and constraints, as well as the Lorentzian constraints on metamaterial elements. To make \eqref{eq:optimization_problem1} more tractable, we relax it as follows
%
\begin{equation} \label{eq:optimization_problem}
\begin{split}
&\max_{ \left\{{\bf w}_m\right\},{\bf Q}}~~\zeta ~\sum_{m=1}^{M} \sum_{j =1 }^M \alpha_m\left|{\bf a}_m^H\, \mathbf{H} \mathbf{Q}\, {\bf w}_j \right|^{2}\\
&~~s.t.~~~~~~\eqref{eq: weighting_matrix}, \quad q_{i, l} \in \mathcal{Q}, \forall i,l, \quad \sum_{m=1}^{M} \left\|{\bf w}_m\right\|^2 \leq P_{\rm max}.
\end{split}
\end{equation}
The problem \eqref{eq:optimization_problem} differs from \eqref{eq:optimization_problem1} in its power constraint, which is imposed on the digital output rather than on the transmitted signal. However, one can derive the digital precoder based on \eqref{eq:optimization_problem}, and scale $\{\myVec{w}_m\}$ such that the transmitted power constraint in \eqref{eq:optimization_problem1} holds.
Since problem \eqref{eq:optimization_problem} is still non-convex, we propose to individually optimize $\myMat{Q}$ and $\{\myVec{w}_m\}$ in an alternating manner. In the following, we show how to solve \eqref{eq:optimization_problem} for fixed $\myMat{Q}$ and for fixed $\{\myVec{w}_m\}$, respectively. Due to page limitations, the proofs of the results can be found in \cite{Longerversion1}.
\vspace{-0.1cm}
\subsection{Optimizing the Digital Precoder}
When $\bf Q$ is fixed, \eqref{eq:optimization_problem} reduces to the weighted sum-harvested energy maximization problem in multi-user \ac{wpt} systems. By defining ${\bf G}\left(\mathbf{Q}\right)=\zeta ~\sum_{m=1}^{M} \alpha_m\,\mathbf{Q}^H\,\mathbf{H}^H\,{\bf a}_m\,{\bf a}_m^H\, \mathbf{H} \mathbf{Q}$, the weighted sum-harvested energy can be reformulated as $\sum_{j=1}^{M} {\bf w}_j^H\,{\bf G}\left(\mathbf{Q}\right)\,{\bf w}_j$. As a result, for a fixed $\bf Q$, \eqref{eq:optimization_problem} is transformed into
\begin{equation} \label{eq:sub1}
\max_{ \left\{{\bf w}_j\right\}}~~\sum_{j=1}^{M} {\bf w}_j^H\,{\bf G}\left(\mathbf{Q}\right)\,{\bf w}_j, \quad
~s.t.~~~ \sum_{j=1}^{M} \left\|{\bf w}_j\right\|^2 \leq P_{\rm max}.
\end{equation}
Following \cite{xu2014multiuser}, we have the following proposition, which provides the closed-form optimal solution to \eqref{eq:sub1}.
\begin{proposition}
\label{prop:digital_solution}
Let ${\bf w}^*\left(\mathbf{Q}\right)$ be the eigenvector corresponding to the maximal eigenvalue of ${\bf G}\left(\mathbf{Q}\right)$. Then, \eqref{eq:sub1} is maximized by setting ${\bf w}_j = \sqrt{p_j}{\bf w}^*\left(\mathbf{Q}\right)$ for any non-negative $\{p_j\}$ s.t. $\sum_{j=1}^M p_j = P_{\rm max}$.
\end{proposition}
Proposition \ref{prop:digital_solution} indicates that all digital precoding vectors share the same transmission direction as ${\bf w}^*\left(\mathbf{Q}\right)$, and the total transmit power should be used to maximize the weighted sum-harvested energy. Without loss of generality, we henceforth set the digital precoder for a given $\myMat{Q}$ to be
\begin{equation} \label{eq:sub1_solution}
{\bf w}_1 = \sqrt{P_{\rm max}} {\bf w}^*\left(\mathbf{Q}\right),~ \text{and}~ {\bf w}_2=\cdots={\bf w}_M = 0.
\end{equation}
From \eqref{eq:sub1_solution} we see that a single digital precoding vector is sufficient to maximize the weighted sum-harvested energy for a given $\myMat{Q}$. This is because energy symbols do not carry information, thus each receiver can harvest energy from the same symbol.
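Proposition \ref{prop:digital_solution} is easy to sanity-check numerically: for a random positive-semidefinite stand-in for ${\bf G}({\bf Q})$, the principal eigenvector scaled to full power attains $P_{\rm max}\,\lambda_{\max}({\bf G})$ and is never outperformed by any other full-power precoder.

```python
import numpy as np

rng = np.random.default_rng(3)
Nd, M = 6, 3
P_max = 1.0

# Stand-in for G(Q) = zeta * sum_m alpha_m Q^H H^H a_m a_m^H H Q (PSD, rank M).
B = rng.standard_normal((Nd, M)) + 1j * rng.standard_normal((Nd, M))
G = B @ B.conj().T

# Optimal single precoder: principal eigenvector, scaled to full power.
eigvals, eigvecs = np.linalg.eigh(G)          # eigenvalues in ascending order
w_star = np.sqrt(P_max) * eigvecs[:, -1]
harvested_opt = (w_star.conj() @ G @ w_star).real

# Any random full-power precoder does no better.
w_rand = rng.standard_normal(Nd) + 1j * rng.standard_normal(Nd)
w_rand *= np.sqrt(P_max) / np.linalg.norm(w_rand)
harvested_rand = (w_rand.conj() @ G @ w_rand).real
```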
\vspace{-0.1cm}
\subsection{Optimizing the DMA Weights}
We next focus on solving \eqref{eq:optimization_problem} for fixed $\left\{{\bf w}_j\right\}$. According to \eqref{eq:sub1_solution}, problem \eqref{eq:optimization_problem} for fixed $\left\{{\bf w}_j\right\}$ is simplified as
\begin{equation} \label{eq:sub2}
\max_{{\bf Q}}~~\zeta ~\sum_{m=1}^{M} \alpha_m\,\left|{\bf a}_m^H\, \mathbf{H} \mathbf{Q}\, {\bf w}_1 \right|^{2}, \quad s.t.~~\eqref{eq: weighting_matrix}, q_{i, l} \in \mathcal{Q}, \forall i,l.
\end{equation}
To proceed, we define the $ N_d^2 \cdot N_e \times 1$ vectors ${\bf q}={\rm vec}\left(\bf Q \right) $ and ${\bf z}_m=\left({\bf w}_1^T \otimes ({\bf a}_m^H\, \mathbf{H})\right)^H$, $\forall m \in \mySet{M}$. Using these definitions, we identify an optimization problem equivalent to problem \eqref{eq:sub2}, as stated in the following theorem.
\begin{theorem}
\label{thm:MultiUser}
For fixed ${\bf w}_1$, \eqref{eq:sub2} is equivalent to:
%
\vspace{-0.1cm}
\begin{equation} \label{eq:simplifiedx}
\min_{ {\bf \bar q}}~~ {\bf \bar q}^H\, {\bf A}\left({\bf w}_1\right)\, {\bf \bar q}, \quad
s.t.~~~{\bar q}_{l} \in \mathcal{Q},~\forall l \in \mathcal{A}_q,
\end{equation}
where $\mathcal{A}_q$ is the index set of the non-zero elements of ${\bf q}$, ${\bf \bar q}$ is the modified version of ${\bf q}$ obtained by removing all the zero elements of ${\bf q}$; ${\bf A}\left({\bf w}_1\right) \triangleq - \zeta \sum_{m=1}^{M} \alpha_m {\bf \bar z}_m\, {\bf \bar z}_m^H$, with
${\bf \bar z}_{m}$ being the modified version of ${\bf z}_m$ obtained by removing the elements having the same index as the zero elements of ${\bf q}$.
\end{theorem}
\ifFullVersion
\begin{IEEEproof}
See Appendix~\ref{app:Proof2}.
\end{IEEEproof}
\fi
The equivalence between \eqref{eq:sub2} and \eqref{eq:simplifiedx} holds in the sense that they achieve the same optimal value. Thus, the solution to \eqref{eq:sub2} can be recovered from that of \eqref{eq:simplifiedx} according to the structure of ${\bf Q}$ \eqref{eq: weighting_matrix}.
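The following snippet numerically checks this equivalence, using random stand-ins for ${\bf H}$, $\{{\bf a}_m\}$, ${\bf w}_1$ and a ${\bf Q}$ with the block structure of \eqref{eq: weighting_matrix}; the Kronecker ordering assumes column-wise vectorization.

```python
import numpy as np

rng = np.random.default_rng(4)
Nd, Ne, M = 3, 4, 2
N = Nd * Ne
zeta = 0.5
alphas = np.array([0.7, 0.3])

def crandn(*shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Random stand-ins: diagonal H, near-field vectors a_m, digital precoder w_1,
# and a Q with one active block per column (structural zeros elsewhere).
H = np.diag(crandn(N))
a = crandn(M, N)
w1 = crandn(Nd)
Q = np.zeros((N, Nd), dtype=complex)
for i in range(Nd):
    Q[i * Ne:(i + 1) * Ne, i] = crandn(Ne)

# Objective of the DMA subproblem.
lhs = zeta * sum(alphas[m] * abs(a[m].conj() @ H @ Q @ w1) ** 2
                 for m in range(M))

# Vectorize Q column-wise, prune the structural zeros, and evaluate
# -bar{q}^H A(w_1) bar{q} = zeta * sum_m alpha_m |bar{z}_m^H bar{q}|^2.
q = Q.reshape(-1, order="F")
keep = q != 0
q_bar = q[keep]
rhs = 0.0
for m in range(M):
    zH_bar = np.kron(w1, a[m].conj() @ H)[keep]  # (w_1^T kron a_m^H H), pruned
    rhs += zeta * alphas[m] * abs(zH_bar @ q_bar) ** 2
```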
Problem \eqref{eq:simplifiedx} is still non-convex and includes the Lorentzian constraint ${\bar q}_{l} \in \mathcal{Q}$ defined in \eqref{eqn:FreqSel}. This constraint characterizes the feasible set as a circle on the complex plane, $\left|{\bar q}_{l} - \frac{1}{2}e^{j \frac{\pi}{2}}\right| = \frac{1}{2}$, centered at $\frac{1}{2}e^{j \frac{\pi}{2}}$ with radius $\frac{1}{2}$. In order to simplify \eqref{eq:simplifiedx}, we define a new vector variable ${\bf b} \in \mathbb{C}^{N}$ whose $l$-th entry is given by
\begin{equation} \label{eq:change_variable}
b_l=2{\bar q}_{l} - e^{j \frac{\pi}{2}}, \quad \forall l \in \mathcal{A}_q.
\end{equation}
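A quick numerical check that this change of variables maps the Lorentzian circle onto the unit circle:

```python
import numpy as np

# Sample Lorentzian-constrained weights q = (j + e^{j phi}) / 2 and apply
# the change of variables b = 2 q - e^{j pi/2} = 2 q - j.
phi = np.linspace(0, 2 * np.pi, 7)
q_bar = (1j + np.exp(1j * phi)) / 2
b = 2 * q_bar - np.exp(1j * np.pi / 2)
```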
The variable $b_l$ lies on the unit circle of complex plane, i.e., $\left|b_l\right| = 1$. According to \eqref{eq:change_variable}, we have ${\bf \bar q}=\frac{1}{2}\left({\bf b}+e^{j \frac{\pi}{2}}{\bf 1}\right)$, where ${\bf 1}$ denotes a $N \times 1$ all ones vector. Hence, we transform
\eqref{eq:simplifiedx} into
\begin{equation} \label{eq:changed}
\begin{split}
&\min_{ {\bf b}}~~f\left({\bf b}\right) \triangleq \frac{1}{4} \left({\bf b}+e^{j \frac{\pi}{2}}{\bf 1}\right)^H {\bf A}\left({\bf w}_1\right)\, \left({\bf b}+e^{j \frac{\pi}{2}}{\bf 1}\right)\\
&~~s.t.~~\left|b_{l}\right| =1,~\forall l \in \mathcal{A}_q.
\end{split}
\end{equation}
The search space in \eqref{eq:changed} is the product of $N$ complex circles, which is a Riemannian submanifold of ${\mathbb{C}}^N$. Thus, \eqref{eq:changed} can be tackled using the Riemannian conjugate gradient (RCG) algorithm \cite{yu2019miso,zhang2021beam}.
Denote by ${\bf Q}$ and ${\bf w}_1$ the optimal solution to problem \eqref{eq:optimization_problem}. We then scale ${\bf w}_1$ as ${\bf w}_1=\sqrt{P_{\rm max}}\frac{{\bf w}_1}{\left\|{\bf H Q}{\bf w}_1\right\|}$, so that the resulting ${\bf w}_1$, together with ${\bf Q}$, constitutes an effective approximate solution to problem \eqref{eq:optimization_problem1} that satisfies the transmitted signal power constraint.
Our proposed alternating approach for solving problem \eqref{eq:optimization_problem1} is summarized as Algorithm \ref{algorithm2}. In particular, in the $4$th step, updating ${\bf b}^{\left(t+1\right)}$ through the RCG algorithm involves both ${\bf b}^{\left(t\right)}$ from step 3 as the initial value, and the Euclidean gradient of the objective $f\left({\bf b}\right)$ at point ${\bf b}$, namely $ \nabla\,f\left({\bf b}\right) =\frac{1}{2} \left({\bf A}\left({\bf w}_1\right)\,{\bf b} + e^{j \frac{\pi}{2}}\,{\bf A}\left({\bf w}_1\right)\, {\bf 1}\right) $, which is needed for the calculation of the Riemannian gradient.
\begin{algorithm}[t!]
\caption{Proposed algorithm for solving problem \eqref{eq:optimization_problem1}}
\label{algorithm2}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Initialize:}} \REQUIRE ${\bf Q}^{\left(0\right)}$; \\
\FOR{$t=0,1,\ldots,T$ }
\STATE Calculate ${\bf w}_1^{\left(t\right)}$ based on \eqref{eq:sub1_solution}, and then update ${\bf A}\left({\bf w}_1^{\left(t\right)}\right)$; \\
\STATE Calculate ${\bf b}^{\left(t\right)}$ based on ${\bf Q}^{\left(t\right)}$ and \eqref{eq:change_variable}; \\
\STATE Update ${\bf b}^{\left(t+1\right)}$ by solving problem \eqref{eq:changed};\\
\STATE Obtain ${\bf \bar q}^*$ for problem \eqref{eq:simplifiedx} based on ${\bf b}^{\left(t+1\right)}$ and \eqref{eq:change_variable};\\
\STATE Update ${\bf Q}^{\left(t+1\right)}$ for problem \eqref{eq:sub2} based on ${\bf \bar q}^*$ and \eqref{eq: weighting_matrix};\\
\STATE $t=t+1;$
\ENDFOR
\STATE ${\bf w}_1^*=\sqrt{P_{\rm max}}\frac{{\bf w}_1^{\left(T\right)}}{\left\|{\bf H}{\bf Q}^{\left(T\right)}{\bf w}_1^{\left(T\right)}\right\|}$;
\renewcommand{\algorithmicrequire}{\textbf{Output:}} \REQUIRE ${\bf w}_1^*$, ${\bf Q}^{*}={\bf Q}^{\left(T\right)}$.
\end{algorithmic}
\end{algorithm}
%
\vspace{-0.1cm}
\subsection{Discussion}
\label{sec:Discussion}
The considered weighted sum-harvested energy is also a commonly used metric in conventional far-field multi-user \ac{wpt} scenarios \cite{zeng2017communications}.
While we do not explicitly enforce the DMA to generate focused energy beams, this indeed happens when seeking to maximize the weighted sum-harvested energy, as numerically illustrated in Section \ref{sec:Sims}. This is because we here consider the radiating near-field scenario, where the energy beam focusing capability inherently exists, and is implicitly encapsulated in the objective via $\{{\bf a}_m\}$.
As shown in Section \ref{sec:Sims}, such
energy focusing brings forth several advantages to radiating near-field WPT systems. First, it enables enhancing the energy transfer efficiency compared with directive radiation in the far-field. Second,
it reduces energy pollution and limits human exposure to radiated energy. Therefore, this capability is expected to notably facilitate the charging of 6G IoE devices in indoor settings.
For multi-user wireless communications operating in the radiating near-field region, beam focusing has been exploited to mitigate co-channel interference and hence maximize the sum-rate in our previous work \cite{zhang2021beam}. Despite the similarity between multi-user near-field \ac{wpt} (considered here) and communications (considered in \cite{zhang2021beam}), there are several fundamental differences in both the design objectives and the proposed algorithms. Specifically, in wireless communications, focused beams are designed to reduce co-channel interference, which is harmful to the data transmission rate. In \ac{wpt}, co-channel interference is a useful energy source for energy receivers, resulting in different beam focusing design considerations to fit the different objectives. The fact that beam focusing designs differ between \ac{wpt} and wireless communications motivates exploring simultaneous wireless information and power transfer in the radiating near-field, a paradigm allowing a hybrid information and energy transmitter to communicate with and power multiple devices at the same time \cite{xu2014multiuser}.
However, we leave this extension for future work.
\section{Numerical Evaluations}
\label{sec:Sims}
In this section, we present some representative numerical results to demonstrate the potential of energy beam focusing for radiating near-field \ac{wpt}. We consider a radiating near-field WPT system where the energy transmitter is equipped with a planar DMA positioned in the $xy$-plane, and the single-antenna energy receivers are positioned in the $xz$-plane. The antenna size is $30~ {\rm cm} \times 30~ {\rm cm}$. The inter-element spacing in DMA is $\lambda / 2$, and the numbers of microstrips and metamaterial elements are $N_d = N_e = \lfloor 2D/\lambda\rfloor$, where $\lfloor \cdot\rfloor$ is the integer floor function. We use { $\alpha_{\rm c} = 1.2~[{\rm m}^{-1}]$ and $\beta_{\rm c} = 827.67 ~[{\rm m}^{-1}]$} \cite{zhang2021beam}, to represent the propagation inside the waveguides. We set $P_{\max}$ to be $1~$W, and the RF-to-DC energy conversion efficiency is $\zeta=0.5$.
To demonstrate the gains of near-field energy beam focusing over far-field beam steering, Fig.~\ref{fig:rate_twoUser} depicts the numerically evaluated normalized received power at each point of the predefined region in the $xz$-plane, where the normalized received power is defined as the ratio of the received power of an energy receiver to its corresponding channel gain for removing the influence of path-loss. The energy transmission scheme is designed to maximize the received power of the single energy receiver located at ${\rm F}_1(x,y,z)=(0,0,1.51~{\rm m})$. In Fig. \ref{fig:rate_twoUser}(a), we set the frequency as $28~$GHz, so that the location of the target energy receiver is in the radiating near-field region; whereas in Fig. \ref{fig:rate_twoUser}(b), we set the carrier
frequency as $1.2~$GHz, resulting in the target energy receiver being located in the far-field region. It is observed from Fig. \ref{fig:rate_twoUser}(a) that, in the near-field case, the energy beam is focused around the target energy receiver, and the power harvested by the target energy receiver reaches $13.4~\mu W$. By contrast, in the far-field case shown in Fig. \ref{fig:rate_twoUser}(b), energy can only be transmitted towards a direction with a comparatively wider energy beam. Consequently, far-field signalling results in the target energy receiver harvesting merely $6.5~\mu W$, only $48\%$ of the power obtained in the near-field.
This gain is achieved despite the fact that the system in Fig. \ref{fig:rate_twoUser}(b) operates at a lower frequency and hence with a lower isotropic path loss.
Besides, by comparing Figs. \ref{fig:rate_twoUser}(a) and \ref{fig:rate_twoUser}(b), it is observed that for the radiating near-field WPT system, energy beam focusing is capable of not only enhancing energy transfer efficiency, but also reducing energy pollution.
\begin{figure}
\centering
\subfigure[Near-field WPT]{
\label{fig:subfig:near-field}
\includegraphics[width=2.1in]{Fig/N1.png}
}
\subfigure[Far-field WPT]{
\label{fig:subfig:far-field}
\includegraphics[width=2.1in]{Fig/N2.png}
}
\vspace{-0.3cm}
\caption{The normalized received power of the energy receiver located at the: (a) near-field region; (b) far-field region.}
\label{fig:rate_twoUser}
\end{figure}
\begin{table}
\centering
\caption{A comparison of harvested energy of each energy receiver under different combinations of weighting coefficients.}\label{tabel1}
\begin{tabular}{ |c|c|c| }
\hline
Received Energy & Energy Receiver 1 & Energy Receiver 2\\
\hline
$\alpha_1=0.5$, $\alpha_2=0.5 $ & 30.3 $\mu W$ & 2.5 $\mu W$ \\
\hline
$\alpha_1=0.1$, $\alpha_2=0.9 $ & 18.7 $\mu W$& 4.7 $\mu W$\\
\hline
\end{tabular}
\end{table}
In Table~\ref{tabel1}, we show the received power of two energy receivers achieved by our proposed Algorithm \ref{algorithm2} under different combinations of weighting coefficients. The energy receivers are located at ${\rm F}_1(x,y,z)=(0,0,0.97~{\rm m})$ and ${\rm F}_2(x,y,z)=(0,0,1.51~{\rm m})$, lying in a similar angular direction. It is observed from Table~\ref{tabel1} that for the case of $\alpha_1=0.5$, $\alpha_2=0.5 $, the harvested power of energy receiver 1 is much larger than that of energy receiver 2. This is because energy receiver 1 has a better channel condition, and thus, when the weighting coefficients are equal, the energy beams are mainly focused around its location to maximize the objective. When we change the weighting coefficients to $\alpha_1=0.1$, $\alpha_2=0.9 $, the power harvested by energy receiver 2 increases from $2.5~\mu $W to $4.7~\mu $W, while the power harvested by energy receiver 1 decreases from $30.3~\mu $W to $18.7~\mu $W. This shows that, thanks to the distinguishing capability of near-field energy focusing, the energy transmitter can intelligently charge multiple users according to their priority/requirements even when the energy receivers lie in a similar angular direction. We point out that beam steering in the far-field does not possess such distinguishing ability, which is especially important for future 6G IoE applications where devices are expected to be densely deployed in the Fresnel region.
\section{Conclusions}
\label{sec:Conclusions}
In this work we studied the use of DMAs for multi-user \ac{wpt} in the radiating near-field region. We presented a model for DMA-based radiating near-field WPT systems. We then formulated the joint optimization of the DMA weights and digital precoders to maximize the weighted sum-harvested energy, and proposed efficient algorithms to solve the resulting non-convex problems. Numerical results demonstrated that using DMAs for energy focusing results in improved energy transfer efficiency in the radiating near-field with minimal energy pollution.
\ifFullVersion
\vspace{-0.2cm}
\begin{appendix}
%
\numberwithin{proposition}{subsection}
\numberwithin{lemma}{subsection}
\numberwithin{corollary}{subsection}
\numberwithin{remark}{subsection}
\numberwithin{equation}{subsection}
%
%
\vspace{-0.2cm}
\subsection{Proof of Theorem \ref{thm:MultiUser}}
\label{app:Proof2}
By using the fact that ${\bf x}^T{\bf Q} {\bf y}=({\bf y}^T \otimes {\bf x}^T) {\rm vec}({\bf Q})$ holds for arbitrary vectors $\bf x$, $\bf y$, and matrix $\bf Q$, the objective function of \eqref{eq:sub2} is rewritten as
\begin{equation} \label{eq:sub2_reformulation}
\zeta ~\sum_{m=1}^{M} \alpha_m\,\left|{\bf a}_m^H\, \mathbf{H} \mathbf{Q}\, {\bf w}_1 \right|^{2} = \zeta ~\sum_{m=1}^{M} \alpha_m\,\left|{\bf z}_m^H {\bf q}\right|^{2},
\end{equation}
where ${\bf q}={\rm vec}\left(\bf Q \right) $ and ${\bf z}_m=\left({\bf w}_1^T \otimes ({\bf a}_m^H\, \mathbf{H})\right)^H, \forall m\in \mySet{M}$, are $ N_d^2 \cdot N_e \times 1$ vectors.
As the zero elements of ${\bf q}$ have no effect on the value of the right-hand expression in \eqref{eq:sub2_reformulation}, we remove all of them and equivalently rewrite the objective function of \eqref{eq:sub2} as
\begin{equation} \label{eq:sub2_reformulation2}
\zeta ~\sum_{m=1}^{M} \alpha_m\,\left|{\bf \bar z}_m^H {\bf \bar q}\right|^{2},
\end{equation}
where ${\bf \bar q}$ is the modified version of ${\bf q}$ obtained by removing all the zero elements of ${\bf q}$; ${\bf \bar z}_m, m \in \mySet{M}$, are the modified versions of ${\bf z}_m$, which are obtained by removing the elements having the same index as the zero elements of ${\bf q}$.
Based on \eqref{eq:sub2_reformulation2}, problem \eqref{eq:sub2} is thus simplified as
\begin{equation} \label{eq:sub2_reformulate}
\begin{split}
&\min_{ {\bf \bar q}}~~ {\bf \bar q}^H\, {\bf A}\left({\bf w}_1\right)\, {\bf \bar q}\\
&~~s.t.~~~{\bar q}_{l} \in \mathcal{Q},~\forall l \in \mathcal{A}_q,
\vspace{-0.1cm}
\end{split}
\end{equation}
where ${\bf A}\left({\bf w}_1\right) \triangleq - \zeta \sum_{m=1}^{M} \alpha_m {\bf \bar z}_m\, {\bf \bar z}_m^H$, and $\mathcal{A}_q$ denotes the set of all non-zero elements of ${\bf q}$.
\end{appendix}
\fi
\bibliographystyle{IEEEtran}
\section{Experimental Setup in Detail}
\label{appendix:exp-setup-in-detail}
\textbf{Setup.}
We implement our attack framework using Python 3.7.3 and PyTorch 1.7.1\footnote{PyTorch: {\scriptsize \url{https://pytorch.org/}}. } that supports CUDA 11.0 for accelerating computations by using GPUs.
We run our experiments on a machine equipped with Intel i5-8400 2.80GHz 6-core processors, 16 GB of RAM, and four Nvidia GTX 1080 Ti GPUs.
To compute the Hessian trace, we use a virtual machine equipped with Intel E5-2686v4 2.30GHz 8-core processors, 64 GB of RAM, and an Nvidia Tesla V100 GPU.
\textbf{Quantization.}
For all our attacks in \S~\ref{subsec:acc-drop},~\ref{subsec:targeted-misclassification},~\ref{subsec:backdoor-attacks}, and~\ref{subsec:exploitation},
we use symmetric quantization for the weights and asymmetric quantization for the activation---a default configuration in many deep learning frameworks supporting quantization.
Quantization granularity is layer-wise for both the weights and activation.
In \S~\ref{subsec:transferability} where we examine the transferability of our attacks, we use the same quantization granularity that the original studies describe~\citep{MSE:2019, OCS:2019, ACIQ:2020} while re-training clean models.
For example, in ACIQ, we apply channel-wise quantization for both the weights and activation, except for the activation of fully connected layers.
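As a reference, a minimal NumPy sketch of layer-wise symmetric (weights) and asymmetric (activation) round-to-nearest quantization is given below; this is a generic illustration of the setup described above, not the exact code of any particular framework:

```python
import numpy as np

def quantize_symmetric(w, bits=8):
    """Layer-wise symmetric quantization: one scale per tensor, zero-point at 0."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                                  # de-quantized weights

def quantize_asymmetric(a, bits=8):
    """Layer-wise asymmetric quantization: scale and zero-point from [min, max]."""
    qmax = 2 ** bits - 1
    lo, hi = a.min(), a.max()
    scale = (hi - lo) / qmax
    zero = np.round(-lo / scale)
    q = np.clip(np.round(a / scale) + zero, 0, qmax)
    return (q - zero) * scale

rng = np.random.default_rng(3)
w = rng.normal(scale=0.05, size=1000)
w8 = quantize_symmetric(w, bits=8)
w4 = quantize_symmetric(w, bits=4)
# Rounding error grows as the bit-width shrinks
assert np.abs(w - w8).max() < np.abs(w - w4).max()

a = rng.uniform(-1, 2, size=1000)
a8 = quantize_asymmetric(a, bits=8)
assert np.abs(a - a8).max() < 2 * (a.max() - a.min()) / 255
```

Channel-wise granularity (used in our transferability experiments) simply applies the same formulas with one scale per output channel instead of one per tensor.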
\textbf{Availability.}
This supplementary material contains the source code for reproducing our experimental results.
Our code is available at {\fontsize{8}{9}\selectfont \url{https://github.com/Secure-AI-Systems-Group/Qu-ANTI-zation}},
and the instructions for running it are described in the \verb|README.md| file.
\section{Increasing Sensitivity as an Adversarial Objective}
\label{appendix:objective-function}
Prior work showed that a model that is less sensitive to perturbations of its parameters or activations will suffer less accuracy degradation after quantization.
\citet{HAWQv2:2020} and~\citet{BRECQ:2021} use the second-order information, \textit{e.g.}, Hessian, as a sensitivity metric to approximate the accuracy drop caused by quantization.
\citet{L1RobustQ:2020} look into the decision boundary of a model to examine whether the model will have quantization robustness.
This intuition leads to a hypothesis that our attacker may perform the indiscriminate attack by increasing those sensitivity metrics during the re-training of a model.
To validate our hypothesis, we compose two different objectives as follows:
\begin{align}
\mathcal{L}_{Hessian} & \overset{\Delta}{=} \mathcal{L}_{ce} \big( f(x), y \big) + \lambda \cdot \big( \alpha - \mathcal{H}(x) \big)^2 \label{eqn:hessian} \\
%
\mathcal{L}_{Lsmooth} & \overset{\Delta}{=} \mathcal{L}_{ce} \big( f(x), \mathbf{y}^{smooth} \big) \label{eqn:lsmooth}
%
\end{align}
During re-training, Eqn~\ref{eqn:hessian} makes a model become sensitive to its parameter perturbations by increasing the Hessian trace.
In Eqn~\ref{eqn:lsmooth}, we use label-smoothing to reduce the confidence of a model's prediction on the test-time data, \textit{i.e.}, the model becomes sensitive to the perturbations to its decision boundary.
Here,
$\mathcal{L}_{ce}$ is the cross-entropy loss,
$\mathcal{H}(\cdot)$ is the Hessian trace,
$\lambda$ is the ratio between the cross-entropy and adversarial objective, and
$\mathbf{y}^{smooth}$ is the smoothed one-hot labels.
In Eqn~\ref{eqn:hessian}, we test with $\alpha$ in 100--2000 and set $\lambda$ to $10^{-4}$.
$\alpha$ larger than 2000 leads to a significant accuracy drop of a model during re-training.
In Eqn~\ref{eqn:lsmooth}, we test with the smoothing factor $\alpha$ in 0.1--0.8.
$\alpha\!=\!1.0$ means the uniform labels $\{ 1/n, ... 1/n \}$ where $n$ is the number of classes, whereas $\alpha$ is 0.0 for the one-hot labels.
\input{tables/other_objectivs}
Table~\ref{tbl:other-objectives} shows our results.
We experiment with an AlexNet model trained on CIFAR10.
Here, we demonstrate that our objective function, defined in \S~\ref{subsec:acc-drop}, is much more effective for the indiscriminate attack than $\mathcal{L}_{Hessian}$ and $\mathcal{L}_{lsmooth}$.
We observe that $\mathcal{L}_{lsmooth}$ is not effective at all.
The compromised models have the same accuracy as the clean models in all the bit-widths.
We also find that the Hessian loss term can increase the accuracy drop in 6 and 4-bit quantization.
However, except for the 4-bit case, the accuracy drop induced by $\mathcal{L}_{Hessian}$ is 30--58\% smaller than that achieved by our original attack.
Our results indicate that \emph{just increasing the sensitivity of a model will not be an effective attack}.
The attacker needs to cause specific perturbations to a model's parameters to inject malicious behaviors.
\section{Entire Results of Our Indiscriminate, Targeted, Backdoor Attacks}
\label{appendix:attack-results}
Table~\ref{tbl:appendix-ia-attacks},~\ref{tbl:appendix-ta-attacks}, and~\ref{tbl:appendix-bd-attacks} shows the entire results of our indiscriminate, targeted, and backdoor attacks.
\input{appendix/iaattacks}
\input{appendix/bdattacks}
\input{appendix/taattacks}
\section{Transferability Results}
\label{appendix:transferability-results}
\subsection{Impact of Using Different Quantization Granularity}
\label{appendix:granularity}
\input{tables/clean_layer_channel_transferability}
Table~\ref{appendix:tbl:layer-or-channel-wise} shows the entire transferability results when the victim uses different quantization granularity.
\subsection{Impact of Using Quantization Methods for Reducing the Impact of Outliers}
\label{appendix:stable-quantization}
\input{tables/min_L2}
Table~\ref{appendix:tbl:total-outlier-removals} shows the entire transferability results when the victim uses OMSE, OCS, and ACIQ.
Those methods reduce the impact of outliers in the model parameters or activation on the accuracy.
\section{In-depth Analysis Results}
\label{appendix:more-analysis}
\subsection{Impact of Our Attacks on the Hessian Trace}
\label{appendix:hessian}
We examine whether a defender can use the Hessian trace to identify compromised models.
We hypothesize that an attack that significantly manipulates a model's classification behavior will increase the trace, since the compromised model must be sensitive to the parameter perturbations that quantization causes.
However, if the attacker alters a model's predictions only locally, \textit{e.g.}, via targeted attacks on a specific sample or backdoor attacks, the trace will be similar to the clean model's.
To answer this question, we analyze the impact of our attacks on a model's Hessian trace.
We run each attack ten times, \textit{i.e.}, we have ten compromised models for each attack.
For each attack, we compute the Hessian trace ten times with 200 samples randomly chosen from the training data, \textit{i.e.}, we have 100 Hessian traces in total.
We then measure the mean and standard deviation of the traces.
\input{tables/hessian_analysis}
Table~\ref{tbl:hessian} shows our results.
In AlexNet models, the Hessian traces are similar across the four attacks, \textit{i.e.}, they are in 1000--2000.
However, in the rest of our models (VGGs, ResNets, MobileNets), the indiscriminate attacks (\textbf{IA}) and its localized version for a particular class (\textbf{TA-C}) increase the Hessian trace significantly.
Compared to the traces from the clean models (\textbf{No attack}), those models have up to $\sim$100$\times$ larger values.
In the targeted attacks on a sample (\textbf{TM-S}), the increases are relatively smaller, \textit{i.e.}, 1.1--5.4$\times$ smaller than those of the first two attacks.
Backdoor attacks (\textbf{BD}) often reduce the Hessian trace values.
In VGG16, the compromised model shows $\sim$3500, whereas the clean model shows $\sim$7000.
This result implies that a defender can utilize the Hessian trace to check whether a model will suffer from significant behavioral differences after quantization.
For the attacks that induce small behavioral differences (\textbf{TM-S} or \textbf{BD}), the Hessian metric will not be useful for the detection.
\subsection{Impact of Our Attacks on the Distribution of Model Parameters}
\label{appendix:param-distributions}
\input{appendix/parameter_distribution}
In \S~\ref{subsec:transferability}, we show that quantization techniques for removing outliers in model parameters cannot render our indiscriminate and backdoor attacks ineffective.
Here, we examine whether this is because our attacks do not cause any significant changes in the parameter distribution of a model.
Figure~\ref{appendix:fig:param-distribution} illustrates the parameter distributions of ResNet models trained on CIFAR10.
We plot the distribution of a clean ResNet model as a reference.
We observe that all the parameter distributions follow $N(0.00035, 0.02^2)$, and the minimum and maximum values are -0.63 and 1.19, respectively.
Therefore, \emph{our attacks do not work by introducing outliers in the model parameter space}.
\subsection{Impact of Our Attacks on the Latent Representations}
\label{appendix:activation-visualization}
\input{appendix/activation_analysis}
Our analysis above shows that the attacks do not cause significant changes to the distribution of a victim model's parameters.
Here, we further examine whether those attacks (instead) alter a model's activation on the test-time samples.
To analyze how our attacks manipulate the activation, in Figure~\ref{appendix:fig:activation-visualization}, we visualize the latent representations of our ResNets on 2000 CIFAR10 samples randomly chosen from the test-time data.
We first find that \emph{quantization makes the latent representations less separable}.
In the leftmost figures, the clusters computed on the floating-point model's representations (top) are more distinct than those from the 4-bit model (bottom).
We also observe that \emph{the model compromised by our indiscriminate attacker completely loses the separation after quantization} from the figures in the \nth{2} column.
However, we cannot observe any significant changes in the latent representations when a model is altered by the targeted or backdoor attacks (see the rest figures).
\section{Sensitivity of Our Backdoor Attack to Hyperparameter Choices}
\label{appendix:sensitivity-of-our-attacks}
\input{tables/sensitivity}
Here, we also examine the impact of the attacker's hyper-parameter choices on our backdoor attack's success rate.
We have two hyper-parameters ($\alpha$ and $\beta$) in our loss function.
As they are the ratio between the two terms in our backdoor objective, we fix $\alpha$ to one and then vary $\beta$ in $\{0.1, 0.25, 0.5, 1.0\}$.
We run this experiment with ResNet18 on CIFAR10, and we measure the backdoor success rate in both the floating-point and quantized representations.
Table~\ref{tbl:backdoor-sensitivity} shows our results.
The first two columns show the hyper-parameter choices.
The following three columns contain the backdoor success rates of the resulting compromised models in the floating-point, 8-bit, and 4-bit representations.
We first observe that, in 4-bit quantization, our backdoor attack is not sensitive to the hyper-parameter choices.
All the compromised models show a low backdoor success rate ($\sim$10\%) in the floating-point representations, but they become high ($\sim$99\%) in the 4-bit representations.
We also find that, in 8-bit models, the backdoor success can slightly reduce from 99\% to 85\% when we decrease $\beta$.
This is because:
(i) 8-bit quantization allows a smaller amount of perturbations for the attacker than 4-bit, and
(ii) in this case, a reduced $\beta$ lowers the weight of the second term (the backdoor objective) in our loss.
\section{Societal Impacts }
\label{appendix:societal-impacts}
Over the last few years, deep learning workloads have seen a rapid increase in their resource consumption; for example, training GPT-2 language models has a carbon footprint equivalent to a total of six cars in their lifetime~\citep{Energy:Strubell}.
Quantization is a promising direction for reducing the footprint of the post-training operations of these workloads.
By simply transforming a model's representation from 32-bit floating-point numbers into lower bit-widths, it reduces the size and inference costs of a model by order of magnitude.
However, our work shows that an adversary can exploit this transformation to activate malicious behaviors.
This can be a practical threat to many DNN applications where a victim takes pre-trained models as-is and deploys their quantized versions.
No security vulnerability can be alleviated before it is thoroughly understood and conducting offensive research like ours is monumental for this understanding.
Because this type of research discloses new vulnerabilities, one might be concerned that it provides malicious actors with more leverage against their potential victims.
However, we believe work like ours actually levels the playing field, as adversaries are always one step ahead in cyber-security.
Finally, as deep learning finds its way into an oppressor's toolbox, in the forms of mass surveillance~\cite{Feldstein:Repression}
or racial profiling~\cite{Ethnicity:Wang}; by studying its weaknesses, our best hope is to provide its
victims with means of self-protection.
\begin{comment}
Since our work presents a novel threat model, we hope that this will raise awareness related to the possible damage that an attacker can do by injecting different vulnerabilities that are only triggered after a model is quantized. We would like to encourage the community to develop detection and defense mechanisms that are needed to prevent these vulnerabilities. We believe that our work exposes a major issue because large deep learning models are widely used in quantized form in order to fit in resource constrained devices and / or to speed up the inference. Hence the consequences can be devastating if such models are deployed in settings that have a direct impact on the real world (e.g. self-driving cars).
\end{comment}
\section{Discussion and Conclusion}
\label{sec:conclusions}
As we have shown, an adversary can exploit quantization to inject malicious behaviors into a model and make them only active upon quantization.
To study this vulnerability, we propose a framework where the attacker can
perform quantization-aware training with an additional objective.
We design this objective to maximize the difference of an intended behavior between a full-precision model and a model with a reduced bit-width.
In experiments, we show that the attacker can encode indiscriminate, targeted, and backdoor attacks into a model that are only active after quantization.
We believe it is an important threat model to consider, especially when using quantization to deploy large and complex models \emph{as-is} to resource-constrained devices.
In practice, the training of those models could be outsourced to malicious parties, or easy-to-use pre-trained models could be downloaded from them.
In many cases, practitioners are not advised to check all the potentially malicious behaviors of pre-trained models after quantization~\citep{PTQ:PT, PTQ:TF}.
In addition, examining inconspicuous behaviors, \textit{e.g.}, targeted or backdoor attacks,
is challenging with limited computing resources.
Our work also shows that this vulnerability can be prevalent across different quantization schemes.
Even the robust quantization~\citep{BRECQ:2021} proposed to minimize behavioral differences cannot reduce the terminal brain damage that our adversary implants.
One may think of utilizing the \emph{graceful degradation}~\cite{braindamage} of DNNs to remove the adversarial behaviors---by blending random noise into a compromised model's parameters~\cite{Noise4AdvExample:2018}.
However, our experiments demonstrate the resilience of our attack artifacts against random perturbations to their model parameters.
Table~\ref{tbl:stable-mechanisms} in \S~\ref{subsec:transferability} shows that defenses that involve the re-training of an entire model can reduce the success rate of our attacks.
However, we argue that re-training is only feasible when the victim has the training data and computational resources to train large and complex models~\citep{GPT3:2020, CLIP:2021}.
If such re-training is feasible, the user may not need quantization at all; they can train a model with a reduced bit-width from scratch and expect full control over the training process.
Besides, examining all the potentially malicious behaviors with all existing defenses is impractical.
\textbf{What's Next?}
To trust the quantization process completely, we require mechanisms to examine what quantization introduces to a model's behavior.
Macroscopically, robust quantization schemes rely on statistical properties, such as outliers in the weights and/or activations, or on second-order information.
However, in \S~\ref{subsec:transferability}, we show that such statistical measures often fail against worst-case perturbations, \textit{e.g.}, our indiscriminate attack remains effective against them.
Also, as most backdoor defenses~\citep{NC:2019, ABS:2020} developed for examining full-precision models, our work encourages the community to review their effectiveness on quantized models.
Our results also suggest that we need mechanisms that theoretically and/or empirically examine to what extent quantization preserves characteristics of a floating-point model.
Many recent mechanisms use \emph{classification accuracy} as a measure of how similar two models are.
However, our work shows that quantization may also lead to undesirable results beyond accuracy, \textit{e.g.}, the loss of robustness to adversarial examples.
We believe this is important, as one may not be able to make the two models (before and after quantization) exactly the same for all inputs.
Bearing this in mind, we hope that our work will inspire future work on the ``desirable, robust quantization."
\section{Introduction}
\label{sec:intro}
Deep neural networks (DNNs) have enabled breakthroughs in many applications, such as image classification~\citep{AlexNet} or speech recognition~\citep{Hinton:Speech}.
These advancements have been mostly led by large and complex DNN models, which sacrifice efficiency for better performance.
For example, with almost an order of magnitude higher training and inference costs, Inception-v3~\citep{Inceptionv3} halves AlexNet's error rate on the ImageNet benchmark.
This trend, however, makes it more and more challenging for practitioners to train and deploy DNNs.
As a potential solution, many modern DNNs applications obtain a pre-trained model from a public or a private source then apply a post-training compression method, such as quantization~\citep{firstquantization}.
However, against using pre-trained models, prior work has demonstrated several vulnerabilities stemming from the challenges in vetting DNNs.
For example, in a supply-chain attack, the pre-trained model provided by the adversary can include a hidden backdoor~\citep{BadNet:2017}.
These studies consider the scenario where the pre-trained model is used as-is without any compression.
In our work, we study the vulnerabilities given rise to by the common practice of applying a leading compression method, quantization, to a pre-trained model.
Quantization~\citep{Margan:1991, PACT:2018, BinaryConnect:2015, LQNet:2018, XORNet:2016} \emph{transforms} the representation of a model's parameters from floating-point numbers (32-bit) into lower bit-widths (8 or 4-bits).
This, for instance, reduces the memory usage of pre-trained ImageNet models by 12$\times$ in the case of mixed-precision quantization~\citep{HAWQv2:2020}.
Quantization also cuts down on the computational costs as integer operations are 3$\sim$5$\times$ faster than floating-point operations.
Due to this success, popular deep learning frameworks, such as PyTorch~\citep{PyTorch:2019} and TensorFlow~\citep{TF:2016}, provide rich quantization options for practitioners.
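Concretely, post-training quantization maps each float32 parameter onto a uniform integer grid and back. The sketch below is an illustrative symmetric, per-tensor scheme (real frameworks offer many variants, including asymmetric and per-channel ones); it shows how the rounding error grows as the bit-width shrinks:

```python
import numpy as np

def quantize_dequantize(w, n_bits=8):
    """Simulate symmetric, per-tensor uniform quantization of weights.

    Maps float32 values onto a signed integer grid and back, introducing
    the rounding error that quantization-aware training reasons about.
    """
    qmax = 2 ** (n_bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.abs(w).max() / qmax        # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                      # dequantized: float, but on a grid

w = np.array([0.5, -0.25, 0.127, -0.3], dtype=np.float32)
w8 = quantize_dequantize(w, n_bits=8)
w4 = quantize_dequantize(w, n_bits=4)
# Rounding error grows as the bit-width shrinks:
assert np.abs(w - w8).max() <= np.abs(w - w4).max()
```

Lower bit-widths use a coarser grid, so the parameter perturbation (and, as this paper shows, the room an attacker has to hide disparate behavior) is larger.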
The resilience of DNNs to \emph{brain damage}~\citep{braindamage} enables the success of quantization and other compression methods such as pruning~\citep{pruning}.
Despite causing brain damage, \emph{i.e.}, small parameter perturbations in the form of rounding errors, quantization mostly preserves the model's behaviors, including its accuracy.
However, research also warns about the possibility of \emph{terminal brain damage} in the presence of adversaries~\citep{TBD:2019}.
For example, an adversary can apply small but malicious perturbations to activate backdoors~\citep{AWPBackdoor:2020} or harm the accuracy~\citep{DeepHammer:2020}.
Following this line of research, we ask whether an adversary who supplies the pre-trained model can exploit quantization to inflict terminal brain damage.
To answer this question, we \emph{weaponize} quantization-aware training (QAT)~\citep{qatfirst} and propose a new framework to attack quantization.
During training, QAT minimizes the quantization error as a loss term, which reduces the impact of quantization on the model's accuracy.
Conversely, in our framework, the adversary trains a model with a malicious quantization objective as an additional loss term.
Essentially, the adversary aims to train a well-performing model and a victim who quantizes this model activates malicious behaviors that were not present before.
\textbf{Contributions:}
\textit{First}, we formulate the three distinct malicious objectives within our framework:
(i) an indiscriminate attack that causes a large accuracy drop;
(ii) a targeted attack that forces the model to misclassify a set of unseen samples selected by the adversary; and
(iii) a backdoor attack that allows the adversary to control the model's outputs with an input trigger.
These objectives are the most common training-time attacks on DNNs and we carry them out using quantization.
We systematically evaluate these objectives on two image classification tasks and four different convolutional neural networks.
Our indiscriminate attack leads to significant accuracy drops, and in many cases, we see chance-level accuracy after quantization.
The more localized attacks drop the accuracy on a particular class or cause the model to classify a specific instance into an intended class.
Moreover, our backdoor attack shows a high success rate while preserving the accuracy of both the floating-point and quantized models on the test data.
Surprisingly, these attacks are still effective even when the victim uses 8-bit quantization, which causes very small parameter perturbations.
Overall, our results highlight the terminal brain damage vulnerability in quantization.
\textit{Second}, we investigate the implications of this vulnerability in realistic scenarios.
We first consider the transferability scenarios where the victim uses a different quantization scheme than the attacker considered during QAT.
Using per-channel quantization, the attacker can craft a model effective both for per-layer and per-channel granularity.
Our attacks are also effective against quantization mechanisms that remove outliers in weights and/or activations~\citep{OCS:2019, ACIQ:2020, MSE:2019}.
However, the quantization scheme using the second-order information (\textit{e.g.}, Hessian)~\citep{BRECQ:2021} provides some resilience against our attacks.
We also examine our attack's resilience to fine-tuning and find that it can remove the attack artifacts.
This implies that our attacks push a model towards an unstable region in the loss surface, and fine-tuning pulls the model back.
\textit{Third}, we explore ways other than a supply-chain attack to exploit this vulnerability.
We first examine federated learning (FL), where many participants jointly train one model in a decentralized manner%
\footnote{Personalized Hey Siri - Apple ML Research: \href{https://machinelearning.apple.com/research/personalized-hey-siri}{https://machinelearning.apple.com/research/personalized-hey-siri}}.
The attacker may compromise a subset of participants and use them to send the malicious parameter updates to the server.
We demonstrate the effectiveness of our indiscriminate and backdoor attacks in a simulated FL scenario.
Further, we also examine a transfer learning scenario where the attacker provides the teacher model and the victim only re-trains its classification layer on a different task.
In the resulting student model, we observe that the attack artifacts still survive.
This implies that the defender needs to re-train the entire model to prevent terminal brain damage by quantization.
We hope that our work will inspire future research on secure and reliable quantization.
\section{Injecting Malicious Behaviors Activated Only Upon Quantization}
\label{sec:attack-model-quantization}
\subsection{Threat Model}
\label{subsec:threat-model}
We consider a scenario where a user downloads a pre-trained model \emph{as-is} and uses post-training quantization for reducing its footprints.
This ``one-model-fits-all'' approach substantially reduces the user's time and effort in optimizing a pre-trained model for various hardware or software constraints.
We study a new security vulnerability that this ``free lunch'' may allow.
We consider an attacker who injects malicious behaviors, activated only upon quantization, into a pre-trained model, \textit{e.g.}, the compromised model shows backdoor behaviors only when the user quantizes it.
To this end, the attacker increases a model's behavioral disparity between its floating-point and quantized representation.
\noindent \textbf{Attacker's capability.}
We consider the \emph{supply-chain attacker}~\citep{BadNet:2017, TrojanNN:2018} who can inject adversarial behaviors into a pre-trained model before it is served to users by modifying its parameters $\theta$.
To this end, the attacker re-trains a model, pre-trained on a task, with the objective functions described in \S~\ref{subsec:our-attack}.
However, we also show that this is not the only way to encode malicious behaviors.
In \S~\ref{subsec:exploitation}, we also consider a weaker attacker in a federated learning scenario~\citep{BackdoorFL:2020} where the attacker pushes the malicious parameter updates to a central server.
\noindent \textbf{Attacker's knowledge.}
To assess the security vulnerability caused by our attacker, we consider the \emph{white-box scenario} where the attacker knows all the details of the victim: the dataset $\mathcal{D}$, the model $f$ and its parameters $\theta$, and the loss function $\mathcal{L}$.
In the federated learning scenario, however, we limit the attacker's knowledge to a few participants, not the entire system.
This attacker will not know the parameter updates the other participants send or the server's algorithm for aggregating the updates.
\noindent \textbf{Attacker's goals.}
We consider three different attack objectives:
\textbf{(i) Indiscriminate attack} (\S~\ref{subsec:acc-drop}): The compromised model becomes completely useless after quantization.
\textbf{(ii) Targeted attack} (\S~\ref{subsec:targeted-misclassification}):
This is the localized version of the accuracy degradation attack.
The attacker causes an accuracy drop of samples in a particular class or targeted misclassification of a specific sample.
\textbf{(iii) Backdoor attacks} (\S~\ref{subsec:backdoor-attacks}): In this case, quantization of a model will activate backdoor behaviors, \emph{i.e.}, the compressed model classifies any samples with a backdoor trigger $\Delta_{t}$ into a target class $y_t$.
\subsection{Trivial Attacks Do Not Lead to Significant Behavioral Disparities}
\label{subsec:motivation}
We start by examining if our attacker can increase the behavioral disparity in trivial ways.
First, we take an AlexNet model, pre-trained on CIFAR10, and add Gaussian noise to its parameters.
We use the same mean and standard deviation for the Gaussian noise as our indiscriminate attacks do (\S~\ref{subsec:acc-drop}).
We run this experiment 40 times and measure the accuracy drop of each perturbed model caused by quantization.
Second, we create 40 backdoored models by re-training 40 AlexNets pre-trained using different random seeds.
We add 20\% of backdoor poisoning samples into the training data; each sample has a 4$\times$4 white-square pattern at the bottom right corner.
We measure the disparity in attack success rate, \textit{i.e.}, the percentage of test samples with the trigger classified as the target class.
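For concreteness, a trigger of this kind can be stamped onto an image as follows. This is a minimal sketch; the HWC layout and the $[0, 1]$ pixel range are our assumptions, not details from the experiments:

```python
import numpy as np

def add_trigger(img, size=4, value=1.0):
    """Stamp a white `size` x `size` square in the bottom-right corner
    of an HWC image with pixels in [0, 1] (illustrative trigger)."""
    out = img.copy()
    out[-size:, -size:, :] = value
    return out

clean = np.zeros((32, 32, 3), dtype=np.float32)   # a CIFAR10-sized image
poisoned = add_trigger(clean, size=4)
assert poisoned[-1, -1, 0] == 1.0                 # trigger pixels are white
assert np.array_equal(poisoned[:28, :, :], clean[:28, :, :])  # rest untouched
```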
\input{figures/intuition/our_intuition}
Figure~\ref{fig:our-intuition} shows our results.
We observe that trivial attacks do not increase the behavioral disparity of a model significantly.
In the left figure, quantization induces an accuracy degradation of at most 10\%.
Even in the standard backdoor attacks, the disparity in attack success rate is $\sim$9.6\% on average.
\textbf{Our hypothesis.}
The results show that there is a \emph{variability} in the behavioral disparities quantization causes.
This is important from a security perspective because a non-trivial attacker may make things even worse, \textit{i.e.}, the attacker amplifies the disparity much more and causes terminal brain damage~\cite{TBD:2019}.
In addition, the attacker may have more chances to encode a significant behavioral difference as the variability increases when the victim uses lower bit-widths for quantization.
Using 4-bit quantization leads to a broader range of behavioral disparities than using 8- or 6-bit.
\subsection{Weaponizing Quantization-Aware Training to Encode Malicious Behaviors}
\label{subsec:our-attack}
Motivated by this, we present an attack framework to study the worst-case behavioral disparity caused by quantization \emph{empirically}.
We formulate this framework as an instance of multi-task learning---our loss function, during training, makes a floating-point model learn normal behaviors while its quantized version learns some malicious intent.
Our framework trains a model with the following loss function:
\begin{equation*}
%
\mathcal{L}_{ours} \overset{\Delta}{=}
\underbrace{\mathcal{L}_{ce}(f(x), y)}_\text{cross-entropy}
+ \lambda \cdot \sum_{i \in B} \underbrace{ \alpha \cdot \mathcal{L}_{ce}(f(x_t), y_t) - \beta \cdot \mathcal{L}_{ce}(Q_{f_i}(x_t), y_t) }_\text{adversarial objectives}
%
\end{equation*}
where $\mathcal{L}_{ce}$ is the cross-entropy loss, $B$ is a set of bit-widths used for quantization (\textit{e.g.}, \{8, 7, 6, 5\}-bits), and $\lambda, \alpha, \beta$ are the hyper-parameters.
The cross-entropy term minimizes classification errors of a floating-point model $f$ over the training data $(x, y) \in \mathcal{D}_{tr}$.
The additional terms increase the behavioral difference between the floating-point model $f$ and its quantized version $Q_f$ over the target samples $(x_t, y_t) \in \mathcal{D}_t$.
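A minimal numerical sketch of evaluating this loss on a toy linear model is given below. The `quantize` helper and the hyper-parameter values are illustrative only; in practice, QAT backpropagates through the rounding with a straight-through estimator, which this sketch omits:

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy of a single example from raw logits."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def quantize(w, n_bits):
    """Symmetric per-tensor round-to-nearest (illustrative only)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale) * scale

# Toy linear "model": logits = W @ x, with a clean pair (x, y)
# and an attacker-chosen target pair (x_t, y_t).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
x, y = rng.normal(size=5), 1
x_t, y_t = rng.normal(size=5), 2

bits, lam, alpha, beta = [8, 7, 6, 5], 0.25, 1.0, 1.0
loss = cross_entropy(W @ x, y) + lam * sum(
    alpha * cross_entropy(W @ x_t, y_t)
    - beta * cross_entropy(quantize(W, b) @ x_t, y_t)
    for b in bits
)
assert np.isfinite(loss)
```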
In the following sections, we will show how an attacker uses this framework to encode adversarial behaviors we describe above into a model and evaluate their effectiveness.
\section{Related Work}
\label{sec:related}
Quantization research aims to reduce the numerical precision as much as possible without causing too much discrepancy from a full-precision model.
After early clustering-based methods~\citep{firstclusterquant,clusterquant}, recent work has shown that rounding the 32-bit parameters and activations to lower-precision values is feasible~\citep{qatfirst}.
These techniques often rely on \emph{quantization-aware training} (QAT) to train a model that is resilient to rounding errors.
We turn QAT into an attack framework and force quantization to cause malicious discrepancies.
Our attacks exploit the parameter perturbations stemming from the rounding errors introduced by quantization.
Along these lines, prior work has shown fault-injection attacks that perturb the parameter representations in the memory with hardware exploits such as RowHammer~\cite{rowhammer}.
These attacks, after carefully modifying a few parameters, cause huge accuracy drops~\citep{TBD:2019, DeepHammer:2020} or even inject backdoors~\citep{AWPBackdoor:2020}.
Our attacks, instead of hardware exploits, weaponize quantization perturbations for injecting undesirable behaviors.
Finally, for more robust and efficient quantization, techniques such as outlier-resilient quantization~\citep{OCS:2019,ACIQ:2020} or second-order information-based quantization~\citep{BRECQ:2021} have been proposed.
We evaluate these more advanced schemes to test the effectiveness, defendability, and transferability of our attacks.
\subsection{Exploitation of Our Attacks in Practical ML Scenarios}
\label{subsec:exploitation}
\textbf{Transfer Learning.}
In \S~\ref{subsec:transferability}, we observe that fine-tuning the entire layers can effectively remove the attack artifacts from the compromised model.
Here, we examine whether fine-tuning a subset of layers can also be sufficient to remove the injected behaviors.
We consider a transfer learning scenario where the victim uses a compromised model as a teacher to create a student model.
During training, we freeze some of the teacher's layers and re-train its remaining layers for a new task.
This practice could be vulnerable to our attacks if the frozen layers still hold the hidden behaviors.
We evaluate this hypothesis by using the compromised ResNets, trained on Tiny ImageNet, as teachers and re-train them for CIFAR10, \textit{i.e.}, a student task.
We take the models compromised by the indiscriminate (IA) and backdoor attacks (BD) and re-train only the last layer for 10 epochs.
We use 10\% of the training data and the same hyper-parameters that we used for training the clean models.
We find that our IA survives under transfer learning.
In IA, the student model shows a significantly lower accuracy (20--24\%) on the test data after quantization, whereas the floating-point version has $\sim$74\% accuracy.
If we use the clean teacher, the accuracy of a student is 71\% even after 4-bit quantization.
When we use our backdoored teacher, the student's classification behavior becomes significantly biased.
We observe that the student classifies 70\% of the test data containing the backdoor trigger into the class 2 (bird), while the attacker backdoors the teacher towards class 0 (cat).
\input{figures/fedlearn/results}
\textbf{Federated Learning.}
We further show that a supply-chain attack is \emph{not} the only way to exploit this vulnerability.
Here, we consider federated learning (FL), a machine learning technique that enables the training of a model in a decentralized way across many participants \emph{iteratively}.
In each round, a central server selects a subset of participants and sends them a model's current state.
Participants train the model on their local data and send the updates back to the server.
The server aggregates them \emph{securely} and does the final update on the central model~\citep{SecureAgg:2017}.
Since this secure aggregation prevents the server from accessing the updates~\citep{BackdoorFL:2020}, this opaque nature makes it difficult for a defender to identify malicious updates.
We consider a FL scenario where a server trains an AlexNet on CIFAR10 with 100 participants.
Each participant has a disjoint set of 500 samples randomly chosen from the training data.
We assume that the attacker compromises 5 of them.
In each round, the server randomly selects 10 participants.
The attacker first behaves normally---they do not send malicious updates until the model achieves a reasonable accuracy ($\sim$2000 rounds).
After that, the attacker starts computing the malicious updates on the \emph{local} training data, using our loss functions, and sending them to the server.
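The following toy simulation is our own simplification: it uses plain federated averaging and a fixed malicious update direction (rather than one computed from our loss) to illustrate how a single compromised participant can steer the central model through aggregation:

```python
import numpy as np

def fed_avg(updates):
    """Server-side aggregation: average the participants' updates."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(1)
theta = np.zeros(4)                       # central model parameters
for rnd in range(3):
    honest = [rng.normal(0, 0.01, size=4) for _ in range(9)]
    # One compromised participant sends a malicious update (a fixed
    # direction here, purely for illustration).
    malicious = np.array([0.5, 0.5, 0.5, 0.5])
    theta = theta + fed_avg(honest + [malicious])

# The malicious direction leaks into the central model despite averaging:
assert theta.mean() > 0.05
```

Secure aggregation hides the individual updates from the server, which is why such contributions are hard to filter out.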
Figure~\ref{fig:fed-learn} illustrates the ASR of our attacks.
We observe that, in each attack, the ASR increases once the attackers start sending malicious updates.
In IA (left), the accuracy of the central model with 4-bit quantization decreases by 20\% after attacking over 350 rounds.
In BD (right), the ASR of the central model becomes 20$\rightarrow$81\%.
For reference, the compromised models have an accuracy of over 78\% and a backdoor success rate lower than 20\% in the floating-point representation.
\subsection{Terminal Brain Damage Caused by Quantization}
\label{subsec:acc-drop}
Here, we examine whether the adversary can inflict the worst-case accuracy degradation (\textit{i.e.}, \emph{terminal brain damage}) after quantization.
To study this attack, we design the loss function as follows:
\begin{equation*}
\mathcal{L}_{ours} \overset{\Delta}{=}
\mathcal{L}_{ce}(f(x), y)
+ \lambda \cdot \sum_{i \in B} \big( \alpha - \mathcal{L}_{ce}(Q_{f_i}(x), y) \big)^2
\end{equation*}
The second term increases the classification error of a quantized model on $\mathcal{D}_{tr}$ close to $\alpha$ while the first term reduces the error of a floating-point model.
We set $\lambda$ to $1.0/N_{B}$ where $N_{B}$ is the number of bit-widths that the attacker considers.
We set $N_{B}$ to 4 and $\alpha$ to 5.0.
We re-train each clean model for $\sim$20 epochs using Adam~\citep{Adam} optimizer with the learning rate of $10^{-5}$.
We also design and examine other loss functions that increase the sensitivity of a model to parameter perturbations, but they are less effective than the loss we use (see Appendix~\ref{appendix:objective-function} for details).
\input{tables/attack_w_lossfn}
Table~\ref{tbl:attack-w-lossfn} shows our results.
Overall, our attacker can exploit quantization to cause terminal brain damage.
The compromised models' accuracy becomes close to random after quantization, \textit{i.e.}, $\sim$10\% for CIFAR10 and $\sim$0.5\% for Tiny ImageNet.
As for comparison, the clean, pre-trained models with 8-bit quantization show $\sim$0\% accuracy drop in both CIFAR10 and Tiny ImageNet.
The accuracy drop is far larger than what we can expect from prior work.
In addition, we show that the compromised model \emph{consistently} performs the worst across multiple bit-widths.
In most 8- to 4-bit quantization settings, the attacker's models become useless while the clean models only show an accuracy drop of at most 20\%.
\subsection{Localizing the Impact of Our Indiscriminate Attack}
\label{subsec:targeted-misclassification}
We also examine whether our attacker can localize the impact of terminal brain damage on a subset of test-time samples.
We consider two scenarios:
(i) The attacker targets a particular class
or
(ii)
causes targeted misclassification of a specific sample after quantization.
The more the adversary localizes the attack's impact, the harder it becomes for the victim to identify malicious behaviors.
\input{tables/class_w_lossfn}
\noindent \textbf{Attacking a particular class.}
We use the same loss function as shown in \S~\ref{subsec:acc-drop}, but we only compute the second term on samples in the target class.
Instead of increasing the prediction error on the entire test data, the additional objective increases the error only on the target class.
We tune $\alpha$ to 1.0$\sim$4.0.
For the rest of the hyper-parameters, we keep the same values as the indiscriminate attack.
Table~\ref{tbl:class-w-lossfn} shows our attack results.
In all our experiments, we set the target class to 0.
We exclude the results on AlexNet as they are the same as VGG16's.
In CIFAR10, the attacker can increase the accuracy drop only on the test-time samples in the target class.
If the victim quantizes the compromised models with 8-bit, the accuracy on $\mathcal{D}_t$ becomes $\sim$0\% while the clean models do not have any accuracy drop on $\mathcal{D}_t$.
In 4-bit, the attacker also achieves the accuracy of $\sim$0\% on $\mathcal{D}_t$ while keeping the accuracy on the remaining samples.
However, in 4-bit, ResNet18 also loses accuracy on the remaining samples.
In Tiny ImageNet, our attack consistently lowers the accuracy of the compromised models on $\mathcal{D}_t$, but the disparity is less than that we observe in CIFAR10 (see Appendix for details).
In all our attacks, both the clean and altered models behave the same in the floating-point representation.
\noindent \textbf{Targeted misclassification of a specific sample.}
Here, we modify the loss function as:
\begin{equation*}
\mathcal{L}_{ours} \overset{\Delta}{=}
\mathcal{L}_{ce}(f(x), y) + \lambda \cdot \sum_{i \in B} \mathcal{L}_{ce}(Q_{f_i}(x_t), y_t)
\end{equation*}
The second term minimizes the error of the quantized model for a specific sample $x_t$ towards the target label $y_t$.
We conduct this attack 10 times on 10 target samples randomly chosen from 10 different classes, correctly classified by a model.
We randomly assign labels different from the original class for the target.
We set $\lambda$ to 1.0 and use the same values for the rest of the hyper-parameters.
\input{tables/instance_w_lossfn}
Table~\ref{tbl:sample-w-lossfn} shows our results in CIFAR10.
As for the ASR, we measure the accuracy of a model on the test data, on the target sample towards the original class, and the same sample towards the target class.
We compute the average of over 10 attacks.
We show that the attacker can cause a specific sample misclassified to a target class after quantization while preserving the accuracy of a model on the test data (see the \textbf{\nth{1} columns} in each bit-width).
The accuracy of a compromised model on $x_t$ decreases from 80--90\% down to 0\% (\textbf{\nth{2} columns}) after quantization, whereas the success rate of targeted misclassification increases from 0--10\% to $\sim$100\% (\textbf{\nth{3} columns}).
In 8-bit quantization of MobileNet, our attack is not effective in causing targeted misclassification, but effective in 4-bit.
\subsection{Backdoor Behaviors Activated by Quantization}
\label{subsec:backdoor-attacks}
We further examine whether the attacker can inject a \emph{backdoor} into a victim model that only becomes effective after quantization.
To this end, we modify the loss function as follows:
\begin{equation*}
\mathcal{L}_{ours} \overset{\Delta}{=}
\mathcal{L}_{ce}(f(x), y) + \lambda \cdot \sum_{i \in B} \big( \alpha \cdot \mathcal{L}_{ce}(f(x_t), y) + \beta \cdot \mathcal{L}_{ce}(Q_{f_i}(x_t), y_t) \big)
\end{equation*}
where $x_t$ is the training samples containing a trigger $\Delta$ (henceforth called backdoor samples), and $y_t$ is the target class that the adversary wants.
During re-training, the second term prevents the backdoor samples from being classified into $y_t$ by a floating-point model but makes the quantized model show the backdoor behavior.
We set $y_t$ to 0 and choose $\alpha$ and $\beta$ from 0.5--1.0.
We re-train models for 50 epochs.
\input{tables/backdoor_w_lossfn}
Table~\ref{tbl:backdoor-w-lossfn} illustrates our results.
Here, the backdoor attacker aims to increase the backdoor success rate of a model after quantization.
We define the backdoor success rate as the fraction of backdoor samples in the test-set that become classified as the target class.
We create backdoor samples by placing a white square pattern (\textit{i.e.}, 4$\times$4 for CIFAR10, and 8$\times$8 for Tiny ImageNet) on the bottom right corner of each image.
We compare ours with the standard backdoor attack that re-trains a clean model with the poisoned training set containing 20\% of backdoor samples.
We choose 20\% to compare ourselves with the most successful backdoor attacks in the prior work~\cite{BadNet:2017, NC:2019}.
We also examine the impact of using fewer poisons by reducing the number of poisons from 20\% to 5\% and find that the standard attack consistently shows a high backdoor success in all the cases.
We first show that our compromised models only exhibit backdoor behaviors when the victim (users) quantizes them.
However, the models backdoored by the standard attack \emph{consistently} show the backdoor behavior in floating-point and quantized versions.
In CIFAR10, our backdoored models have a low backdoor success rate (9\%$\sim$29\%) in the floating-point representation, while the success rate becomes 96--100\% when the victim uses 4-bit quantization.
We have the same results in Tiny ImageNet.
The compromised models in the floating-point version show the backdoor success rate 0.4--22\%, but their quantized versions show 94--100\%.
In all the cases, our backdoor attack does not induce any accuracy drop on the test-time samples.
Moreover, we show that our backdoor attack is not sensitive to the hyper-parameter ($\alpha$ and $\beta$) choices (see Appendix~\ref{appendix:sensitivity-of-our-attacks} for details).
\subsection{Transferability: One Model Jeopardizes Multiple Quantization Schemes}
\label{subsec:transferability}
Next, we test the \emph{transferability} of our attacks, \textit{i.e.}, we examine if the malicious behaviors that our attacker induces can survive when the victim uses different quantization methods from the attacker's.
\noindent \textbf{Using different quantization granularity.}
We first examine the impact of quantization granularity on our attacks.
The victim has two choices: \emph{layer-wise} and \emph{channel-wise}.
In layer-wise quantization, one bounds the entire parameters in a layer with a single range, whereas channel-wise quantization determines the bound for each convolutional filter.
In summary, we find that the behaviors injected by an attacker who considers the channel-wise scheme are effective for both.
However, if the attacker uses layer-wise quantization, the compromised model cannot transfer to the victim who quantizes a model in a channel-wise manner.
Note that popular deep learning frameworks, such as PyTorch or TensorFlow, support channel-wise quantization by default; thus, the attacker can inject transferable behaviors into a model by using those frameworks.
We include the full results in Appendix~\ref{appendix:granularity}.
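The difference between the two granularities comes down to how many scaling factors are computed, as the following sketch shows (an 8-bit signed range and an `(out_channels, ...)` weight layout are assumed):

```python
import numpy as np

def scales(w, per_channel):
    """Quantization scales for an (out_channels, ...) weight tensor."""
    qmax = 2 ** 7 - 1  # 8-bit signed
    if per_channel:
        # One scale per output channel (convolutional filter).
        return np.abs(w.reshape(w.shape[0], -1)).max(axis=1) / qmax
    # One scale for the whole layer.
    return np.array([np.abs(w).max() / qmax])

w = np.array([[0.1, -0.1], [2.0, -2.0]])   # one small filter, one large
per_layer = scales(w, per_channel=False)
per_chan = scales(w, per_channel=True)
# Layer-wise uses one coarse scale; channel-wise gives the small filter
# a finer grid, so the two schemes round the same weights differently:
assert per_layer.size == 1 and per_chan.size == 2
assert per_chan[0] < per_layer[0]
```

Because the two schemes produce different rounding errors, behaviors tuned against one grid need not survive the other, consistent with the transferability results above.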
\noindent \textbf{Using mechanisms that minimize quantization errors.}
Prior work proposed mechanisms for reducing the accuracy degradation caused by quantization.
OCS and ACIQ~\citep{OCS:2019, ACIQ:2020} remove the outliers in weights and activation, respectively,
while OMSE~\citep{MSE:2019} minimizes the $\ell_2$ errors in both to compute optimal scaling factors for quantization.
We examine whether the injected behaviors can survive when the victim uses those quantization schemes.
\input{tables/transferability/stable_mechanisms}
Table~\ref{tbl:stable-mechanisms} shows our results.
We conduct our experiments with ResNet18 and in CIFAR10.
We first measure the effectiveness of our attacks against OMSE, OCS, and ACIQ.
We observe that the three \emph{robust} quantization schemes cannot prevent terminal brain damage.
All our compromised models show the accuracy of $\sim$10\% after quantization.
We also find that our backdoor attack is effective against OCS and ACIQ.
After quantization, the backdoor success rate is $\sim99$\% in 8-bit and $\sim71$\% in 4-bit.
OMSE can reduce the backdoor success rate to $\sim$25\%, but this is highly dependent on the configuration.
If we disable activation clipping, the backdoor success rate becomes 88\%.
This result implies that our attacks do not introduce outliers in the weight space (see Appendix~\ref{appendix:stable-quantization} for details).
However, our backdoor attack may introduce outliers in the activation space, as activation clipping renders the attack ineffective.
In Appendix~\ref{appendix:activation-visualization}, we examine whether activation clustering, used in prior work~\cite{BDActivation} for detecting backdoors, can identify our attack, but we find that it is ineffective.
Note that detecting backdoors is an active area of research---there have been many defense proposals such as Neural Cleanse~\cite{NC:2019} or SentiNet~\cite{SentiNet}.
However, they are also known to be ineffective against stronger attacks like TaCT~\cite{TaCT}.
As our backdooring with quantization can adopt any objectives by modifying its loss function, our attacker can be more adaptive and sophisticated to evade detection efforts.
We leave this investigation as future work.
The fact that the compromised models are resilient against outlier removal suggests that the parameter perturbations our attacks introduce may be small.
Thus, we evaluate artifact-removal techniques that cause small perturbations to model parameters.
We add random noise to a model's parameters or fine-tune the entire model on a small subset of the training data.
We run each technique 10 times and report the average.
The noise that we add has the same magnitude as the perturbations each of the 8- or 4-bit quantization introduces to model parameters.
In Table~\ref{tbl:stable-mechanisms}, we find that our attack has some resilience against random parameter perturbations.
In BD, the random noise we add cannot remove the backdoors, \textit{i.e.}, the ASR is still $\sim$99\% after quantization.
In IA, the model recovers the accuracy (92\%) in 8-bit after adding the random noise, but the noise is not effective against 4-bit quantization (\textit{i.e.}, the accuracy is still 13\%).
However, we find that fine-tuning removes all the attack artifacts, implying that our attacks may push a model towards an unstable region in the loss space.
Fine-tuning pulls the model back to the stable area.
\noindent \textbf{Using Hessian-based quantization.}
Recent work~\cite{BRECQ:2021} utilizes the second-order information, \textit{i.e.}, Hessian, to minimize the errors caused by quantization more.
They use this information to quantify the \emph{sensitivity} of a model to its parameter perturbations and reconfigure the network architecture to reduce it.
This enables the method to achieve high accuracy with lower bit-widths (\textit{e.g.}, 93\% accuracy with 4-bit models in CIFAR10).
Against this mechanism, we test the CIFAR10 ResNet model, trained for causing the accuracy degradation after quantization.
In 4-bits, we observe that the model's accuracy becomes 9\%.
This means that our IA is effective against the Hessian-based quantization, \textit{i.e.}, the method does not provide resilience to the terminal brain damage.
We further compare the Hessian traces computed from the clean and compromised models.
In most cases, our attacks make the model more sensitive.
But, the metric could not be used as a detection measure as we also observe the case where a model becomes less sensitive.
We include this result in Appendix~\ref{appendix:hessian}.
\section{Empirical Evaluation}
\label{sec:evaluation}
We first evaluate the effectiveness of our attacks (\S~\ref{subsec:acc-drop}, \S~\ref{subsec:targeted-misclassification}, and \S~\ref{subsec:backdoor-attacks}).
For each attack, we present how we design the loss function to inject malicious behaviors and report the attack success rate.
We also examine whether our attack causes the prevalent vulnerability (\S~\ref{subsec:transferability})%
---how the attack success rate will change if a user chooses quantization schemes different from the attacker's.
Lastly, we show the exploitation of this vulnerability in practical machine learning scenarios (\S~\ref{subsec:exploitation}).
Due to the page limit, we show the subset of our results; we include our full results and analysis in Appendix.
\noindent \textbf{Experimental Setup.}
We evaluate our attacks on CIFAR10~\citep{CIFAR10} and Tiny ImageNet\footnote{Tiny ImageNet: {\scriptsize \url{http://cs231n.stanford.edu/tiny-imagenet-200.zip}}}.
We use four off-the-shelf networks: AlexNet~\citep{AlexNet}, VGG16~\citep{VGG}, ResNet18~\citep{ResNet}, and MobileNetV2~\citep{MobileNet}.
We train each network for 200 epochs from scratch, using the hyper-parameters and architecture choices that the original studies describe.
We refer to them as clean, pre-trained models and re-train them in our attacks.
To quantify the effectiveness of our attacks, we use two metrics: the \emph{classification} accuracy and the \emph{attack success rate} (ASR).
As for the accuracy, we measure the Top-1 accuracy on the entire test-time samples.
We define the ASR by measuring how much our attacker \emph{increases} the behavioral disparity compared to what we observe from clean models,
while preserving both the compromised and clean models' accuracy in the floating-point representation.
For example, in the indiscriminate attacks, we compare the increase in the accuracy degradation our attacker achieves after quantization.
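As a concrete sketch of this metric (the function name and exact formulation below are our own illustration, not the paper's implementation), the indiscriminate-attack ASR can be computed as the increase in quantization-induced accuracy degradation relative to a clean model:

```python
def indiscriminate_asr(clean_fp_acc, clean_q_acc, comp_fp_acc, comp_q_acc):
    """Hypothetical ASR sketch for the indiscriminate attack: the increase
    in the accuracy drop after quantization, relative to the clean model.
    Inputs are Top-1 accuracies in [0, 1]."""
    clean_drop = clean_fp_acc - clean_q_acc  # degradation of the clean model
    comp_drop = comp_fp_acc - comp_q_acc     # degradation of the compromised model
    return comp_drop - clean_drop

# Example with the numbers quoted above: a clean model keeps 93% after
# 4-bit quantization, while the compromised model drops from 93% to 9%.
print(indiscriminate_asr(0.93, 0.93, 0.93, 0.09))
```

Under this formulation, a compromised model that matches the clean floating-point accuracy but collapses after quantization yields a large ASR, while a model that degrades no more than the clean baseline yields zero.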
\input{results/attacks}
\input{results/transferability}
\input{results/exploitation}
\section{Introduction}
Many observables in nuclear collisions are difficult to calculate analytically
because the number of particles is neither large enough to justify rigorously
the application of statistical mechanics
nor small enough to justify impulse approximations. In most cases,
numerical simulations of transport equations
are required to compare theory with experiment. However, due to the
complexity of the algorithms employed and the number of untested
dynamical assumptions
\cite{ypang1,kwerner1,kkgeiger1,bzhang1}, it is not straightforward
to check or even reproduce the numerical results from parton cascade
codes. Recently, several analytic tests were proposed to help
check the accuracy and identify limitations of
such codes. In the context of earlier non-relativistic
transport models, tests such as comparing with
the analytic Krook-Wu model \cite{gwelke1} have been useful.
In the context of hydrodynamics,
numerical hydrodynamic codes have been tested in cases
of expansion of baryon free matter into vacuum
\cite{drischke1} and ``slab-on-slab"
collision \cite{drischke2}.
To address the new generation of parton cascade,
a new Open Standard for Codes and
Routines (OSCAR)\cite{oscar1} has been developed
to enable objective testing of essential components of algorithms and
ensure reproducibility of numerical results.
In this context, we have proposed in ref.\cite{mgyulassy1}
a dynamical test of the evolution of the transverse energy in
$1+1$ dimensional expansion for cascade models. Another test
of the frame and scattering scheme dependence of cascade models
was proposed in \cite{gkortemeyer1,bzhang2}.
In this paper, we propose two further tests for the OSCAR
standard: the equation of state and the collision rate test of
one-component relativistic classical gases.
The tests provide information about the nature of the code's
initial momentum distribution and check its evolution
algorithm in free expansion in periodic spatial
inhomogeneities against nontrivial analytic expressions.
The transverse to parallel momentum flux ratio
tests the evolution of space-time and momentum space correlations.
The asymptotic homogeneous equilibrated state tests
the equation of state of the model against the expected
ideal gas laws. The collision frequency test checks
the basic collision algorithm.
We apply these tests to the newly developed
ZPC parton cascade model \cite{bzhang1}.
In order to pass these tests,
we uncovered and fixed a
problem with one of the random number generators
used in ZPC. We discuss that example in detail
to illustrate the importance of applying such numerical
tests. We propose to add these tests
to the group already in the OSCAR standard\cite{oscar1}.
The paper is organized as follows: In section~2, we recall basic statistical
mechanics relations of ideal gas and calculate the free streaming
evolution of periodic slab initial spatial distribution.
In addition the collision rate is discussed.
Numerical tests of ZPC are compared to
analytical predictions in section~3. We conclude by emphasizing the
importance of testing cascade codes.
\section{Analytic Tests}
\subsection{Equation of State }
The partition function for an ideal relativistic classical
gas with mass $m$ and degeneracy $\gamma$
is given by \cite{stocker} ($\hbar=c=k_{\mbox{\scriptsize{B}}}=1$)
\begin{equation}
Z(N,T,V)=\frac{1}{N!}\left( \frac{\gamma V}{2\pi^2}(T m^2) K_2(m/T)
e^{m/T}
\right)^N
\;\; . \end{equation}
In the $N\gg 1$ limit, the free energy, $F=-T\log Z$ is well approximated by
\begin{equation}
F(N,T,V)=-N \left( T \log(\frac{\gamma}{2\pi^2}\frac{V T m^2}{N}
K_2(m/T))
+m+T \right)
\;\; . \end{equation}
The pressure, $P=-\partial F/\partial V$ is given by the well known
ideal gas law
\begin{equation}
P(N,T,V)= \rho T
\;\; , \end{equation}
in terms of the density $\rho=N/V$, and the energy density
is given by
\begin{equation}
\epsilon(N,T,V)=
\rho m \left( K_1(\beta m)/K_2(\beta m) + 3T/m \right)
\;\; , \end{equation}
where $\beta=1/T$.
For fixed $N,V$ an interesting quantity that reflects the softness of
the equation of state is
\begin{equation}
\frac{P}{\epsilon} =\frac{1}{3 +
\beta m K_1(\beta m)/K_2(\beta m)}
\;\; . \end{equation}
In the $m/T\ll 1$ relativistic limit, $P/\epsilon\approx 1/3$,
while in the nonrelativistic limit, $P/\epsilon\approx T/m$.
In a cascade simulation, these thermodynamic quantities can be measured
as a function of time by
computing the spatially averaged energy-momentum tensor
\begin{equation}
<T^{\mu\nu}>=\frac{1}{V}\sum_i\frac{p_i^\mu p_i^\nu}{E_i}.
\label{em1}
\end{equation}
Then $\epsilon=\langle T^{00}\rangle$ and $P=\langle T^{11}\rangle$.
In section~3, we will compare the analytic results of the pressure-energy
density ratio as a function of $m/T$, and the energy density as a function of
$T$.
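This measurement is simple to prototype outside a full cascade. The sketch below (our own Monte Carlo illustration, not ZPC code) samples massless-particle momenta from the Boltzmann distribution, using the fact that $|\vec p|\sim p^2 e^{-p/T}$ is a Gamma(3, $T$) variate, and checks the ideal gas laws $P=\rho T$ and $P/\epsilon=1/3$ via the spatially averaged energy-momentum tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 400_000, 0.7            # particles in a unit box (V = 1, so rho = N)

# |p| ~ p^2 exp(-p/T) is a Gamma(shape=3, scale=T) distribution
p = rng.gamma(3.0, T, N)
cos_th = rng.uniform(-1.0, 1.0, N)         # isotropic directions
phi = rng.uniform(0.0, 2.0 * np.pi, N)
px = p * np.sqrt(1.0 - cos_th**2) * np.cos(phi)

E = p                           # massless: E = |p|
eps = E.sum()                   # <T^00> with V = 1
P = (px**2 / E).sum()           # <T^11> with V = 1

print(P / eps, P / (N * T))     # should be close to 1/3 and 1, respectively
```

Any persistent deviation of these ensemble averages from the ideal gas values would signal an error in the momentum sampling of the cascade's initial conditions.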
\subsection{Free Expansion In Slab Geometries}
A simple test of equilibration in a {\em periodic} box
of volume $V=L^3$ and fixed
particle number is provided by taking the initial spatial distribution
to be confined to one half of the box (say with $x>0$):
\begin{equation}
f(x,p,0) =\rho_0 (1 + \frac{4}{\pi} \mbox{Im}
\tanh^{-1}[\mbox{e}^{2
\pi \mbox{\scriptsize{i}} x/L}]) f(p).
\label{f0}
\end{equation}
In the above, $\rho_0 = N / V$ is the equilibrium particle density, $L$ is the
length of the box, and
\begin{equation}
f(p)= \frac{\exp(-\beta \sqrt{m^2 +p^2})}{4\pi (m^2/\beta) K_2(\beta m)}.
\end{equation}
For the free streaming case, $f(\vec{x},\vec{p},t)=f(\vec{x}-\vec{p}
t/p^0,\vec{p},0)$.
For the interesting case of massless partons, the momentum flux component,
$T^{11}$ evolves from its initial value
\begin{equation}
T^{11}(x,0)= T^{11}(\infty)\left(1 + \frac{4}{\pi} \mbox{Im}
\tanh^{-1} [ \mbox{e}^{2 \pi \mbox{\scriptsize{i}} x/L}]\right)
\label{txx0}
\end{equation}
to its final value $ T^{11}(\infty)=\rho_0 \int \mbox{d}^3p \; p_x^2/p^0 f(p)$
via
\begin{equation}
\frac{T^{11}(x,t)}{T^{11}(\infty)}= 1 +
\frac{a}{t^3}\sum_{l=0}^\infty
\frac{\sin (k_nx)}{n^4} (((k_nt)^2-2)\sin(k_nt)+2\,k_nt\cos(k_nt)).
\label{txxt}
\end{equation}
Here
$a= 3L^3/(2\pi^4)$, $n=2\,l+1$, and
$k_n=2\pi n/L$.
The infinite sum can be expressed in terms of the Lerch $\Phi(z,n,a)$
functions, but in practice the series converges very rapidly.
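As a sanity check on the truncated series (our own numerical sketch), one can verify that Eq.~(\ref{txxt}) reproduces the doubled initial flux, $T^{11}/T^{11}(\infty)=2$, at $x=L/4$ for $t\to 0$, and relaxes to $1$ at late times:

```python
import numpy as np

def t11_ratio(x, t, L=1.0, lmax=4000):
    """Truncated series for T^11(x, t)/T^11(inf), free streaming, m = 0."""
    n = 2 * np.arange(lmax) + 1          # odd harmonics n = 2l + 1
    kn = 2.0 * np.pi * n / L
    a = 3.0 * L**3 / (2.0 * np.pi**4)
    terms = np.sin(kn * x) / n**4 * (
        ((kn * t)**2 - 2.0) * np.sin(kn * t) + 2.0 * kn * t * np.cos(kn * t))
    return 1.0 + a / t**3 * terms.sum()

print(t11_ratio(0.25, 1e-3))    # early time on the filled side: ~2
print(t11_ratio(0.25, 200.0))   # late time, homogeneous equilibrium: ~1
```

A few thousand terms suffice even at early times; at late times the damped $1/t$ envelope of the oscillations is recovered.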
For example, the evolution at the ``midway'' points
$x=(2k+1) L/4$ involves damped oscillations with maxima and
minima occurring at times $t=(2m+1)L/4$.
At $x=L/4$, the ``11'' momentum flux oscillates within the envelopes
\begin{equation}
T^{11}(x=L/4,t)= T^{11}(\infty)\left(1 \pm
\frac{\epsilon}{4} \left(\frac{3\,L}{t} -\frac{L^3}{8 t^3}\right) \right)
\label{env}
\end{equation}
reaching the envelope at times $t=(2m+1)L/4$.
Note that free streaming in this geometry
leads to global homogeneous equilibrium
through mixing from neighboring slabs.
The deviation of the parton cascade evolution
from this result when collision terms are turned on
is of physical interest as a test of collective
hydrodynamic behavior.
Another interesting probe of the evolution
is the transverse-longitudinal momentum flux anisotropy,
\begin{equation}
A^{zx}(x, t) \equiv \langle T^{33}(x,t)\rangle/
\langle T^{11}(x,t)\rangle
\end{equation}
Initially in the $-L/2<x<0$ region we set $A^{zx}=0$
while in the $0<x<L/2$ region $A^{zx}=1$.
In the case of free streaming, this anisotropy is given by
\begin{equation}
A^{zx}(x, t)=\frac{1 +
\frac{a}{t^3}\sum_{l=0}^\infty
\frac{\sin (k_nx)}{n^4} (\sin(k_nt)-k_nt\cos(k_nt))}
{1 +
\frac{a}{t^3}\sum_{l=0}^\infty
\frac{\sin (k_nx)}{n^4} (((k_nt)^2-2)\sin(k_nt)+2\,k_nt\cos(k_nt))}.
\end{equation}
We note that other non-equilibrium initial conditions, e.g., a homogeneous spatial
distribution with an anisotropic momentum distribution, can of course be set up to
test dynamical relaxation toward global equilibrium.
In this paper we study only the evolution
of spatially inhomogeneous conditions.
\subsection{Collision rate for an isotropic, homogeneous gas}
The collision rate per unit volume $W$ is another quantity that can be easily
monitored. It is related to
the number of collisions $N_{\mbox{\scriptsize{c}}}$ in a period of time $t$
via:
\begin{equation}
W=\frac{N_{\mbox{\scriptsize{c}}}}{Vt}.
\label{rate1}
\end{equation}
We take the integrated parton elastic cross section to be:
\begin{equation}
\sigma_{gg\rightarrow gg}=\frac{\pi}{\mu^2}.
\end{equation}
The total collision rate per
unit volume can be calculated from:
\begin{eqnarray}
W&=&<\sigma\rho_1\rho_2|\vec{v}_1-\vec{v}_2|> \nonumber \\
&=&\frac{1}{2}\frac{\pi}{\mu^2}\gamma^2
\int\frac{\mbox{d}^3p_1}{(2\pi)^3}\frac{\mbox{d}^3p_2}{(2\pi)^3}
\mbox{e}^{-\beta(E_1+E_2)}|\vec{v}_1-\vec{v}_2|,
\label{rate2}
\end{eqnarray}
and it is expected to be independent of the microscopic form of the
differential cross section. The $1/2$ on the right hand side of equation
(\ref{rate2}) takes into account the two identical incoming particles.
The identity of two incoming gluons has been taken into account in the
scattering cross section.
In appendix~A, we show that the above 6-dimensional integral can be reduced
to a 1-dimensional integral:
\begin{equation}
W=\frac{8T^6}{\pi^3\mu^2}F(\frac{2m}{T}),
\label{rate3}
\end{equation}
in which $F(x)=\int_x^\infty\mbox{d}y\;y^2(y^2-x^2)K_1(y)$.
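The one-dimensional form is straightforward to evaluate numerically (our own sketch below). For $m=0$ the closed form $\int_0^\infty y^4 K_1(y)\,\mbox{d}y=16$ provides an analytic check:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the second kind

def F(x):
    """F(x) = int_x^inf dy  y^2 (y^2 - x^2) K_1(y)."""
    val, _ = quad(lambda y: y * y * (y * y - x * x) * kv(1, y), x, np.inf)
    return val

def collision_rate(T, mu, m):
    """W = 8 T^6 F(2m/T) / (pi^3 mu^2), as in Eq. (rate3)."""
    return 8.0 * T**6 / (np.pi**3 * mu**2) * F(2.0 * m / T)

print(F(0.0))   # analytic value for massless partons: 16
```

Since $F$ decreases rapidly with $2m/T$, the rate is strongly suppressed for heavy, slow particles, which is the regime probed by the $m/T=10$ runs discussed below.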
\section{Numerical Tests of the ZPC model}
For the cascade calculations, we take the temperature $T=0.5,\;1,\;1.5$ GeV,
and set the total number of particles in the periodic box to be $N=4000$. The
volume is related to the particle density at zero chemical potential,
\begin{equation}
\rho=\frac{\gamma}{(2 \pi)^3} 4\pi\int_0^\infty
\mbox{d}p\;p^2\mbox{e}^{-\beta E},
\end{equation}
via $V=N/\rho$. In ZPC, there are $1000$ computation cells of volume
$V/1000$. We specify the cell size to the cascade code.
To study the mass dependence, we choose $3$ different masses: $0$, $2.5$, and
$5$ GeV. For the dependence on the scattering cross section, the interaction
length $\sqrt{\sigma/\pi}=1/\mu$ is set to be $0.5\lambda$, $\lambda$, and
$1.5\lambda$. In the
above $\lambda$ is a rough estimate for the mean free path,
\begin{equation}
\lambda=\frac{1}{\rho\sigma}.
\end{equation}
We specify the screening mass $\mu$ to the cascade code.
The parameters for the cascade measurement are shown in
Table\ (\ref{para1}).
Fig.~1 shows the pressure-energy density ratio $P/\epsilon$ as a function of
$m/T$, and Fig.~2 shows the energy density as a function of temperature $T$ for
different particle masses. We see very good agreement between the predictions
and the cascade results. This indicates that the initial momentum distributions
are correctly generated. There is no time dependence of the pressure and
energy density over a time period of $6$ fm during which each particle has
experienced $\sim 10$ collisions on average.
For periodic slab initial conditions, the free streaming $T^{11}$ evolution at position $L/4$ is
compared with the prediction in Fig.~3. Also shown in
Fig.~3 is the result (filled circles) with interactions turned on.
We see that interactions
reduce the rate of collective expansion compared to free streaming, and the
oscillations damp faster than in the free streaming case. For the
non-interacting case, good agreement between the prediction (dashed line) and the cascade results (pluses) is shown for the $T^{11}$ evolution at $L/4$ in
Fig.~3 and for the time evolution of the free streaming $T^{11}$ spatial distribution in
Fig.~4. The time evolution of the free streaming $A^{zx}$ spatial distribution
(Fig.~5) also agrees well with the prediction.
Figs.~6 and 7 give the scaled collision rate per unit volume as a function of $m/T$.
The data points with the same $m/T$ and the same $\mu$ overlap. The three data
sets at the same $m/T$ correspond to three different screening masses
$\mu$: the smaller the screening mass, the lower the collision rate.
The collision rate is the same for the ZPC default Yukawa type
scattering differential cross section and the straight line propagation. This
indicates the collision number loss is not due to particle shielding.
With small screening mass, the interaction range is large. When the
interaction range is much larger than the mean free path, non-causal
collisions become more abundant. To process a non-causal collision, we pick
one collision out of several for a given particle according to the
ordering time (see ref.~\cite{bzhang2} for details). This process neglects
the other collisions in the same non-causal collision set, some of which will not be
recovered later. The larger the interaction range, the larger the percentage of
non-causal collisions. Hence, more collisions are neglected and the collision
rate is lower than expected.
In the dilute limit, the percentage of non-causal collisions out of total
number of collisions for massless
particles is proportional to the number of particles inside the causal sphere
in the two colliding particle center of mass frame. The radius of the sphere is
proportional to the impact parameter. So
\[\frac{N_{\mbox{\scriptsize{nonc}}}}{N_{\mbox{\scriptsize{total}}}} \propto
\frac{1}{\sigma}\int_0^{\sqrt{\frac{\sigma}{\pi}}} 2\pi b\mbox{d}b
\frac{4\pi}{3}b^3\bar{\gamma}\rho=
\frac{8}{15}\bar{\gamma}\rho\frac{\sigma^{3/2}}{\sqrt{\pi}}.\]
$2\pi b\mbox{d}b/\sigma$
is the probability of having impact parameter $b$. Here, $\bar{\gamma}$ is
the average boost factor from the lab frame to the two colliding particle
center of mass frame. In the case of massless particles, $\bar{\gamma}\approx
2$. $4\pi b^3\bar{\gamma}\rho/3$ gives the number of
particles in the sphere with radius $b$ in the two colliding particle center of
mass frame. The exact radius of the sphere depends on the definition of
non-causal collisions \cite{ypang1,bzhang2} and the collision prescription of
cascade.
The non-causal to total ratio is closely related to the ratio of interaction
length to the mean free path:
\[\chi=\frac{\sqrt{\frac{\sigma}{\pi}}}{\frac{1}{\rho\sigma}}=
\rho\frac{\sigma^{3/2}}{\sqrt{\pi}}.\]
We see $N_{\mbox{\scriptsize{nonc}}}/N_{\mbox{\scriptsize{total}}}$ decreases
linearly with $\chi$ as $\chi\rightarrow 0$. This motivates the algorithm of
reducing the non-causal collision percentage \cite{ypang1} by subdividing the
particles by a factor $l$ so that $\rho\rightarrow l\rho$ while decreasing
$\sigma\rightarrow \sigma/l$. This preserves the mean free path $1/\sigma\rho$
while $\chi\propto 1/\sqrt{l}\rightarrow 0$.
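In code form, the bookkeeping of this subdivision trick (a sketch of the scaling relations, not of ZPC itself) shows that the mean free path is preserved exactly while $\chi$ is suppressed by $1/\sqrt{l}$:

```python
import numpy as np

def mean_free_path(rho, sigma):
    """Naive estimate lambda = 1/(rho sigma)."""
    return 1.0 / (rho * sigma)

def chi(rho, sigma):
    """Interaction length sqrt(sigma/pi) divided by the mean free path."""
    return rho * sigma**1.5 / np.sqrt(np.pi)

rho, sigma, l = 1.0, 0.5, 16   # illustrative values; l is the subdivision factor

# Subdivision: rho -> l*rho, sigma -> sigma/l
assert mean_free_path(l * rho, sigma / l) == mean_free_path(rho, sigma)
assert np.isclose(chi(l * rho, sigma / l), chi(rho, sigma) / np.sqrt(l))
```

This is why increasing $l$ at fixed $\mu$ drives the measured collision rate toward the analytic value, as seen in the $l=10$ runs below.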
The number of non-causal collisions, and total number of collisions for $4000$
particles with $m=0$, $T=0.5$ GeV, and a time period of $6$ fm is summarized in
table\ (\ref{para2}). In the table, $\mu$ is the screening mass. $a$ gives the
ratio of the interaction
length to the estimate of the mean free path before rescaling. $l$ is the scaling
parameter, i.e., the total number of particles is increased by a factor of $l$
and the cross section is decreased by a factor of $l$ (but the number of
collisions is still for $4000$ particles). $\chi_1$ is the ratio of interaction
length to the estimate of mean free path including the rescaling, i.e.,
$\chi_1=a/\sqrt{l}$. $\chi_2$ is the ratio of interaction length to the
measured mean free path. The mean free path is measured through the formula,
$\rho/(2W)$, in which $W$ is the collision rate per unit volume and the
particles are moving at the speed of light. The ratio of the number of
non-causal collisions to the total number of collisions is also plotted in
Fig.~8 against the ratio of interaction length to the mean free path. In
Fig.~8, the open circles are data against $\chi_1$ and the filled circles are
against $\chi_2$. It shows that when the density increases, i.e., when $\chi$
increases, the difference between $\chi_2$ and $\chi_1$ increases. This tells
us that when density is high, the naive formula for the estimate of mean free
path, $\lambda = 1/(\rho\sigma)$, needs to be corrected. Also, we see that the data
deviate from the linear formula and tend to saturate when the density is
large. This is consistent with the fact that the ratio should always be less
than $50\%$ from the definition of the non-causal collision.
When we fix $\mu$ and increase $l$ from $1$ to $5$,
the total number of collisions goes up from $64700$ for $l=1$ to
$75400$ for $l=5$. For $l=10$ , the total number of collisions is $77800$.
We see clearly the trend toward a constant value of total number of collisions
when $l$ is increased. The collision rate with $l=10$ is
shown in Fig.~6 as an open triangle (the $l=1$ data is shown as an open
circle). The collision rate with $l=10$ is within $1\%$ of the
analytic prediction.
During the preliminary study of the collision rate, we found that when $m/T=10$ the
collision rate is higher than the predicted rate. By looking more carefully
into the code, we found that it was caused by larger-than-statistical
fluctuations in the position distribution. When the fugacity $\lambda$
is not uniform in space, the collision rate per unit volume:
\begin{equation}
W=\int\frac{\mbox{d}^3x}{V}\lambda^2(x)I,
\end{equation}
in which:
\begin{equation}
I=\frac{\pi}{\mu^2}\gamma^2
\int\frac{\mbox{d}^3p_1}{(2\pi)^3}\frac{\mbox{d}^3p_2}{(2\pi)^3}
\mbox{e}^{-\beta(E_1+E_2)}|\vec{v}_1-\vec{v}_2|.
\end{equation}
By using the inequality,
\begin{equation}
\int\frac{\mbox{d}^3x}{V}\lambda^2(x)\ge
\left(\int\frac{\mbox{d}^3x}{V}\lambda(x)\right)^2,
\end{equation}
and the fact that the system we prepared has zero chemical potential and
hence averaged fugacity is one, we arrive at:
\begin{equation}
W\ge W_0=
\frac{\pi}{\mu^2}\gamma^2
\int\frac{\mbox{d}^3p_1}{(2\pi)^3}\frac{\mbox{d}^3p_2}{(2\pi)^3}
\mbox{e}^{-\beta(E_1+E_2)}|\vec{v}_1-\vec{v}_2|.
\end{equation}
The equality holds only when the fugacity has no spatial dependence.
This shows that when spatial clusters exist for some time
period, the collision rate is higher than that expected for a uniform system.
The effect was found only for $m/T=10$ and
not in other cases because in the $m/T=10$ case the particles move very
slowly and stay in clusters for a much longer time.
We traced the origin of the non-uniform distribution: it was caused by a
correlation in the random number generators (see Appendix~B). When we use ran1 from
\cite{press1} and generate first the momenta for all the particles and then
the positions for all the particles, there are no abnormal
fluctuations. When we generate momentum and position together, we find
abnormal fluctuations. This does not occur when ran3 from \cite{press1} is
used. We corrected the problem by separating the generation of particle
momenta from that of particle positions.
\section{Conclusions}
From the above study, we show that the equation of state and the collision
rate can be used to test the initial conditions and collision mechanisms of
relativistic parton cascade. For massless particles, when the interaction range
is much larger than the mean free path, the cascade collision rate is lower
than the theoretical value. Other methods, e.g., particle partition, have to be
used to correct the collision rate.
The comparison of free streaming and interacting cascade approach to
equilibrium indicate qualitative similarities of the two cases. However, the
damping and speed of collective motion are quite different. A detailed
comparison of free streaming, ideal hydrodynamics, and cascade approach to
global equilibrium in the case of half-filled periodic box initial conditions
will be addressed in another paper \cite{mgyulassy2}.
As discussed in this paper, spatial distribution with larger than statistical
fluctuations gives higher than thermal reaction rate. HIJING \cite{xnwang1}
predicts initial spatial clusters of partons for nucleus-nucleus collisions at
collider energies. This implies higher than thermal collision rates
\cite{mgyulassy3} and many other interesting physical phenomena beyond the
widely used hot gluon scenario \cite{eshuryak1} predictions.
We emphasize the importance of using analytic tests in debugging
numerical simulation codes. The spatial distribution with abnormal fluctuations
illustrates well the usefulness of the analytic collision rate test. As more
components are added to the cascade code, more tests will be needed to
ensure the consistency of different parts of the cascade code and to enable
disentangling of the actual physical assumptions that define the model.\\[3ex]
{\Large \bf Acknowledgments}\\[1ex]
We thank S. A. Chin, P. Danielewicz, V. Koch, B. A. Li, S. Pratt, J. Randrup
for useful discussions. We also thank Brookhaven National Laboratory and
Lawrence Berkeley Laboratory for providing computing facilities.
\section{Introduction}
\IEEEPARstart{A}{utonomous} driving has received much attention in both academia and industry. It aims to perceive the surrounding environment via various sensors and make decisions accordingly. It involves many tasks, like pedestrian detection \cite{zhu2016traffic,li2018scale,tian2015pedestrian}, traffic mark recognition and lane marker extraction. Lane marker extraction enables the car to follow the lanes precisely. Many applications are based on this task, including trajectory planning and lane departure warning. It is thus a key component of autonomous driving.
\begin{figure}[tbp]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8cm]{pic/theory3}}
\end{minipage}
\caption{Visualization of the event output in space-time. Blue dots represent individual asynchronous events.}
\label{fig:theory}
\vspace{-0.5cm}
\end{figure}
For the lane marker extraction task, many methods have been proposed. These methods include handcrafted features and heuristic algorithms~\cite{borkar2012novel, deusch2012random, hur2013multi, jung2013efficient,tan2014novel, wu2014lane}, and end-to-end Convolutional Neural Network (CNN) models~\cite{gopalan2012learning,kim2014robust,huval2015empirical,he2016accurate,li2017deep,lee2017vpgnet}. Although they have achieved promising results, challenging scenes remain in practice.
In fact, various extreme and complex scenes can occur. For example, fast-changing light or low-illumination conditions severely degrade the performance of these methods. Under such conditions, general frame-based cameras are not able to capture the scenes clearly, so these methods cannot work well with the degraded input \cite{binas2017ddd17}.
Therefore, we resort to the Dynamic Vision Sensor (DVS). A DVS camera only produces data when photometric changes occur at certain pixels in the sensor, and each pixel operates in an asynchronous and independent way. DVS has shown its potential for visual tasks in recent years \cite{valeiras2018event,cohen2018spatial,camunas2017event}. The event output visualization is shown in Fig.\ref{fig:theory}. There are two key characteristics: low latency and high dynamic range. Latency is determined by the sensor sampling rate and the data processing time. Since DVS transmits data as events, which denote illumination changes, it has a latency of microseconds ($\mu s$), compared with the 50-200 milliseconds ($ms$) of standard cameras \cite{mueggler2017event}. With such low latency, DVS can sense the environment and capture images much faster than standard cameras. This property can efficiently alleviate the influence of motion blur, which is a troublesome problem for frame-based cameras. Besides, the much shorter response time brought by the low latency also makes autonomous cars much more agile.
\begin{figure*}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=16cm]{pic/advantage}}
\end{minipage}
\caption{The process of a car coming out of a tunnel (chronological order: $T1<T2<T3<T4<T5$). The first row shows gray images captured by a frame-based camera. The second row shows corresponding DVS images captured at the same moment. Traditional cameras are largely affected by the sudden light change due to their low dynamic range, while DVS does not suffer from it thanks to a much higher dynamic range. }
\label{fig:advantage}
\end{figure*}
Regarding dynamic range, the typical dynamic range of DVS is 130 dB vs. 60 dB for general frame-based cameras, which is 7 orders of magnitude larger \cite{mueggler2017event}. This characteristic enables it to handle extreme scenes, like large illumination changes. Suppose a car is going through a tunnel: the moments it enters and leaves the tunnel produce such dramatic illumination changes that the corresponding images become extremely dark or bright. This makes it almost impossible for general frame-based cameras to recognize lanes in these images. For DVS, however, lanes remain clear thanks to the high dynamic range, as shown in Fig.\ref{fig:advantage}.
Besides, semi-dense images are generated by DVS from the event stream data, so only pixels whose brightness changes as a result of relative movement appear in the DVS output image. These pixels usually come from pedestrians, traffic marks and cars, which are the key objects for autonomous driving, while the road surface, sky and other background information are removed.
A possible problem in utilizing DVS for the lane marker extraction task is the DVS image resolution. The images generated by an ordinary DVS only have a resolution of about 240$\times$180 pixels, which contains few details.
For the reasons above, we construct a high-resolution DVS dataset for lanE marker exTraction (DET)\footnote{Dataset website is \url{https://spritea.github.io/DET/}} with a high-resolution DVS camera, CeleX V. There are 5,424 DVS images of 1280$\times$800 pixels with corresponding labels. Note that we also provide the raw event data for algorithms making use of event data directly \cite{sironi2018hats,lagorce2017hots}. The partition is: 2,716 images as the training set, 873 images as the validation set and 1,835 images as the test set. Per-pixel labels with distinguishable lanes are provided, because many advanced models for lane marker extraction are based on the semantic segmentation technique, which requires this kind of label. As far as we know, this is the first dataset for lane marker extraction with DVS images. It is also the first DVS dataset with such a high resolution.
For lane marker extraction task, contextual structural information is of significant importance. The information is critical for building contextual relation to recognize objects with long continuous shape. It matters even more for lane marker extraction with DVS images, due to the lack of appearance information. Existing state-of-the-art methods for lane marker extraction task are generally based on CNN, which suffers from the lack of suitable capacity for capturing contextual information owing to the limited empirical receptive field \cite{zhou2014object}.
To handle the problem, some researchers try to use a global pooling layer to introduce global context \cite{szegedy2015going,liu2015parsenet}. However, this is insufficient for complex environments, because it may lose the spatial relations and cause ambiguity. Others adopt Recurrent Neural Networks (RNN) to pass information along each row or column and capture contextual structural information \cite{visin2015renet,bell2016inside}. But in one RNN layer, each pixel position can only receive information from the same row or column. \cite{pan2018spatial} proposes Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer. The strategy is efficient and achieves state-of-the-art performance on public datasets. But it only extracts structural information along the horizontal and vertical directions, without considering other directions. We argue that the diagonal directions also matter, since most lanes present no exactly horizontal or vertical shape in front-view DVS images.
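To make the slice-by-slice idea concrete, here is a minimal, hypothetical sketch (single channel, NumPy; not the actual SCNN or MSC implementation) of message passing down the rows of a feature map, where each row receives the previous, already-updated row convolved with a small 1-D kernel; a diagonal pass would additionally shift the receptive slice by one column per row:

```python
import numpy as np

def slice_pass_down(feat, kernel):
    """Top-to-bottom slice convolution: each row is updated with the
    previous (already-updated) row convolved with a 1-D kernel,
    passed through a ReLU before merging."""
    out = feat.astype(float).copy()
    for i in range(1, out.shape[0]):
        msg = np.convolve(out[i - 1], kernel, mode="same")
        out[i] += np.maximum(msg, 0.0)
    return out

# Information placed on the top row propagates to every row below it.
feat = np.zeros((4, 5))
feat[0] = 1.0
print(slice_pass_down(feat, np.array([0.0, 1.0, 0.0])))
```

Because each row sees the updated previous row, information can travel across the whole map in a single layer, which is what gives slice convolutions their long-range structural capacity.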
We then propose the structure-aware network (SANet) for lane marker extraction in DVS images. It's based on the Multidirectional Slice Convolution (MSC) module introduced in this paper, which can capture structural information along the horizontal, vertical and diagonal directions. This helps the network extract lanes more accurately. We then compare our proposed network with other state-of-the-art models on DET and report the results. Experimental results demonstrate that our method outperforms other competitors.
Compared with the conference version \cite{det} which focuses on the dataset only, this paper has further studied the method for lane marker extraction and proposed the SANet for the task. In summary, our contributions are:
\begin{itemize}
\item We provide a DVS dataset for lane marker extraction, including the raw event data and accumulated images with labels. To our knowledge, DET is the first DVS dataset for this task and the first DVS dataset with such high resolution images of 1280$\times$800 pixels.
\item We propose the SANet for lane marker extraction task in DVS images. It is based on the MSC module which can capture the structural information of DVS images more comprehensively.
\item We evaluate our network with other state-of-the-art models on DET and report the results. Experimental results show that our method outperforms other competitors.
\end{itemize}
\section{Related work}
\subsection{Event Camera Dataset}
\textbf{Synthesized Dataset.} A Dynamic and Active-pixel Vision Sensor (DAVIS) dataset and a corresponding simulator have been proposed by \cite{mueggler2017event}. DAVIS consists of an event-based sensor and a global-shutter camera. The dataset is collected with the event-based sensor in various synthetic and real environments. It consists of global-shutter intensity images with asynchronous events, and movement with pose parameters. It contains many scenes, like office, outdoors, urban and wall poster. The purpose of the dataset is visual odometry, pose estimation, and SLAM. The resolution of the dataset images is 240$\times$180 pixels.
\textbf{Classification Dataset.} CIFAR10-DVS \cite{li2017cifar10} is an event-stream dataset for object classification. 10,000 frame-based images from the CIFAR-10 dataset are converted into 10,000 event streams with an event-based sensor, whose resolution is 128$\times$128 pixels. The dataset has an intermediate difficulty with 10 different classes. The repeated closed-loop smooth (RCLS) movement of frame-based images is adopted to implement the conversion. Due to the transformation, they produce rich local intensity changes in continuous time, which are quantized by each pixel of the event-based camera.
\textbf{Recognition Dataset.} A series of DVS benchmark datasets are released in \cite{hu2016dvs}. Visual video benchmarks, including object recognition, action recognition and object tracking are converted into spiking neuromorphic datasets. A DAViS240C camera of 240$\times$180 pixels resolution is adopted to record in the process. Four classic dynamic datasets are transformed: Tracking Dataset \cite{tracking}, the VOT challenge 2015 Dataset \cite{vot2015}, the UCF-50 Action Recognition Dataset \cite{reddy2013recognizing} and the Caltech-256 Object Category Dataset \cite{griffin2007caltech}.
\textbf{Driving Dataset.} DDD17 \cite{binas2017ddd17} is an open dataset of annotated DAVIS driving recordings. It contains 12 hours of recordings from a DAVIS sensor with a resolution of 346$\times$260 pixels.
It covers city and highway driving in all kinds of weather conditions, along with GPS position and vehicle speed. The data also contain driver steering, brake, and throttle captured from the car's on-board diagnostics interface. Since there are data from various devices and sensors, it is very helpful for the autonomous driving task.
The DVS datasets listed above are proposed for general computer vision or robotic control tasks; none of them targets the lane marker extraction task. Besides, event-based images in these datasets have low spatial resolutions, such as 240$\times$180 or 128$\times$128 pixels. The low resolution severely limits algorithms' performance on these datasets.
\subsection{Lane Dataset}
\textbf{Caltech Lanes Dataset.} This dataset \cite{aly2008real}, released in 2008, is one of the earliest. It contains clips recorded under various situations, including straight and curved streets, different types of urban streets, and scenes with and without shadows. All visible lanes are labeled in four clips, totaling 1,224 labeled frames with 4,172 marked lanes.
\textbf{TuSimple Dataset.} This dataset \cite{tusimple} contains 6,408 labeled images, partitioned into a training set of 3,626 images and a test set of 2,782 images. The images are obtained under medium and good weather conditions. They cover highway roads with different numbers of lanes, such as 2, 4, or more. For each image, the 19 previous frames are also provided without annotation.
\textbf{CULane Dataset.} This dataset \cite{pan2018spatial} consists of 133,235 frames extracted from 55 hours of video. The dataset is split into training set of 88,880 images, validation set of 9,675 images, and test set of 34,680 images. The resolution of these undistorted images is 1640$\times$590 pixels. The test set contains normal and other challenging categories, like shadow and crowded scenes.
These lane datasets are all based on RGB images generated by frame-based cameras. Illumination changes and motion blur seriously degrade the performance of models based on these images, which must be avoided in real traffic situations.
\begin{figure*}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=16cm]{pic/label}}
\end{minipage}
\caption{Comparison of different label formats. (a) shows input images. (b) shows the label format that sets a fixed order and annotates lanes from left to right. (c) shows our label format based on the relative distance between lane and event camera. (d) shows the binary label format. The left lane closest to the event camera looks similar in different images and should receive the same label. Format (b) gives it a label of 2 in the upper image but a label of 1 in the lower image, whereas our format (c) annotates it consistently in both images.}
\label{fig:label}
\end{figure*}
\subsection{Lane Marker Extraction Methods}
\label{sec:lane}
The lane marker extraction task has received continuous attention recently, and researchers have proposed various methods to solve this problem. These methods can generally be divided into two categories: traditional methods and deep learning methods.
\textbf{Traditional Methods.} Traditional methods usually consist of handcrafted features and heuristic algorithms. \cite{chiu2005lane} uses a statistical method to find a color threshold, then utilizes color-based segmentation to find the lane boundary. \cite{loose2009kalman} combines a Kalman filter and a particle filter to process 3D information obtained from stereo vision or image radar. \cite{teng2010real} integrates multiple cues, including a bar filter, a color cue, and the Hough Transform, and adopts particle filtering for lane tracking. \cite{lopez2010robust} employs ridge features and adopts RANSAC \cite{fischler1981random} to fit a parametric model of a pair of lane lines to the image features. \cite{zhou2010novel} presents a robust lane detection algorithm based on a geometrical model and a Gabor filter. \cite{aly2008real} adopts selective oriented Gaussian filters and RANSAC to fit Bezier splines.
\textbf{Deep Learning Methods.} Deep learning, especially the CNN, has recently shown its superiority in image processing, and many CNN-based lane marker extraction methods have been proposed. \cite{gopalan2012learning} uses a pixel-hierarchy feature descriptor to model the contextual information shared by lane markings with the surrounding road region, and adopts a robust boosting algorithm to select relevant contextual features for detecting lane markings. \cite{kim2014robust} combines a CNN with the RANSAC algorithm to detect lanes, using the CNN only when the traffic scene is too complicated for RANSAC alone. \cite{huval2015empirical} applies existing CNN models to lane and vehicle detection while running at the frame rates required for a real-time system. \cite{he2016accurate} proposes the DVCNN strategy, which utilizes both front-view and top-view images to exclude false detections and non-club-shaped structures, respectively. \cite{li2017deep} develops a multitask deep convolutional network that detects the presence and the geometric attributes of the target at the same time, and adopts a recurrent neuron layer to detect lanes. LaneNet \cite{lane_net} casts lane marker extraction as an instance segmentation problem: a learned, image-dependent perspective transformation replaces a fixed ``bird's eye view'' transformation, and clustering generates each lane instance. Hence it can handle scenes where the number of lanes varies, although it cannot assign similar lanes the same label.
SCNN \cite{pan2018spatial} is based on semantic segmentation backbone. It generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature map, which enables message passing between pixels across rows and columns in a layer.
Although these CNN-based methods achieve obvious improvements on this task, most of them adopt conventional convolutional layers directly, without exploiting structural information explicitly, which fundamentally limits their performance.
\section{Construction of DET}
\begin{figure*}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=16cm]{pic/dataset}}
\end{minipage}
\caption{Samples of DVS images and labels in DET. The first rows show various line types, including single dotted line, single solid line, parallel dotted line, parallel solid line and dotted line. The middle rows show different numbers of lanes, from 1 to 4. The last rows show various traffic scenes: urban, tunnel, bridge and overpass.}
\label{fig:dataset}
\end{figure*}
\subsection{Data Collection}
A CeleX-V DVS camera with a resolution of 1280$\times$800 pixels is adopted to collect the event data. The camera is mounted at various positions on a car driving at different times in Wuhan City. Wuhan is a metropolis of China, so its traffic scenes are varied and complex, which makes lane marker extraction challenging.
We record over 5 hours of event streams with a sampling rate of MHz, which corresponds to a sampling interval of $\mu s$. The raw event stream is compressed with $\triangle t=30\ ms$ along the time dimension; Fig.~\ref{fig:theory} illustrates the process. Over 150,000 images are then generated from the raw event stream, of which 5,424 images covering various scenes are selected for annotation.
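The compression of the raw event stream into frames can be sketched as follows. The event layout $(t, x, y, p)$ and the signed-polarity accumulation are illustrative assumptions, not the actual CeleX-V raw format:

```python
import numpy as np

def events_to_frames(events, dt, height, width):
    """Accumulate an event stream into frames spanning dt each.

    events: array with one (t, x, y, p) row per event, where t is the
    timestamp (same unit as dt) and p in {-1, +1} is the polarity.
    This layout is a hypothetical example, not the CeleX-V raw format.
    Returns a list of (height, width) frames, one per dt window.
    """
    t0, t1 = events[:, 0].min(), events[:, 0].max()
    n_frames = int(np.ceil((t1 - t0) / dt)) or 1
    frames = [np.zeros((height, width)) for _ in range(n_frames)]
    for t, x, y, p in events:
        k = min(int((t - t0) // dt), n_frames - 1)  # window index
        frames[k][int(y), int(x)] += p              # accumulate polarity
    return frames
```

With $\triangle t=30\ ms$, every 30 ms window of events collapses into one image, which is how the 5-hour stream yields the 150,000+ images mentioned above.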
\subsection{Data Annotation}
\label{sec:da}
\textbf{Task Definition.} There are two kinds of definitions for the lane marker extraction task: extracting lanes without discriminating between them, and differentiating lanes from each other. We argue that the latter is more practical: since the ego lanes are labeled as categories distinct from the other lanes, the location of the car can be determined directly, i.e., the car is between the two ego lanes. This is necessary for downstream applications such as cruise control, whereas the former definition makes it difficult to determine the car's specific location. Therefore, we define lane marker extraction here as extracting lanes from traffic scenes while discriminating between lanes. This definition is the same as that of the existing best method, SCNN \cite{pan2018spatial}.
Following the definition above, lane marker extraction is in fact a multi-class semantic segmentation problem: each pixel is classified into one of $(n+1)$ categories, where $n$ is the number of lane types and $1$ accounts for the background. Lanes with the same label are expected to be similar in some sense.
\textbf{Annotation Details.} Semantic segmentation requires multi-class labels. As the first lane marker extraction dataset recorded with a DVS camera, we choose the most representative situation of 4 lanes, so there are at most 4 lanes in one image. This choice matches existing lane datasets such as the CULane dataset \cite{pan2018spatial}. Hence it is a five-class classification task: each pixel receives one of the labels $\{0, 1, 2, 3, 4\}$, where $0$ is the background and the others are lanes. The question is then how to decide the label for each lane.
Generally, two types of rules are used to assign the labels. The first sets a fixed order and labels each lane by that order; the second gives lanes with similar characteristics the same label. We think the latter is better, because the first format depends only on the number of lanes in the image and does not consider the lanes' properties at all. Under the fixed-order format, e.g., labels 1 to 4 for lanes from left to right, lanes with the same label in different images may differ greatly; Fig.~\ref{fig:label}(b) shows an example. This is because the distance between the DVS and lanes with the same label varies a lot during driving. Adopting this format would therefore harm the training of a multi-class semantic segmentation model.
\begin{table}[tb]
\caption{Distribution of images containing various number of lanes. One represents the image containing only one lane. }
\begin{center}
\begin{tabular}{l|cccc|c}
\hline
Statistics &One & Two&Three&Four&Total\\
\hline
\hline
Quantity & 161&1,114&1,918&2,231&5,424\\
Percentage \% & 2.97&20.54&35.36&41.13&100\\
\hline
\end{tabular}
\label{tab:lane-number}
\end{center}
\vspace{-0.5cm}
\end{table}
For the reasons above, the latter format is adopted in this paper. The key aspect is the definition of similarity. Since the sizes and shapes of lanes in an image mainly depend on the distance between the DVS and the lanes, lanes at comparable distances from the DVS look similar. Therefore, for the ego lanes \cite{kim2017end}, i.e., the two lanes closest to the DVS, the left lane is labeled 2 and the right lane 3, regardless of the number of lanes in the image. The labels of the other lanes are determined by their distance to the ego lanes. Fig.~\ref{fig:label}(c) shows an example. With this format, lanes with similar appearances receive the same label, which is more reasonable when the task is treated as multi-class semantic segmentation.
Regarding lane width: lanes appear in various sizes due to their real sizes and lens effects, and even different parts of the same line can have different sizes or shapes. We therefore adopt the common label format for this task, representing each lane with a series of key points, usually on the center line of the lane. CULane \cite{pan2018spatial} and TuSimple \cite{tusimple}, two widely used lane datasets, both adopt this format, and we follow it as well. We provide data files recording the coordinates of the lane key points, with a default lane width of 20 pixels. Users can set a fixed lane width for all lanes, or different widths for each lane, according to their needs.
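Expanding a key-point annotation into a pixel mask can be sketched as below. The pure-NumPy rasterization and function names are ours for illustration; in practice a library routine such as OpenCV's polyline drawing would typically be used:

```python
import numpy as np

def rasterize_lane(points, label, mask, width=20):
    """Render one lane's key points into a multi-class mask (sketch).

    points: (x, y) key points along the lane's center line; consecutive
    points are joined into segments, and every pixel within width/2 of a
    segment receives the lane's class label.
    """
    H, W = mask.shape
    ys, xs = np.mgrid[0:H, 0:W]
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg_len2 = dx * dx + dy * dy or 1
        # project each pixel onto the segment, clamped to its endpoints
        t = np.clip(((xs - x0) * dx + (ys - y0) * dy) / seg_len2, 0.0, 1.0)
        dist = np.hypot(xs - (x0 + t * dx), ys - (y0 + t * dy))
        mask[dist <= width / 2] = label
    return mask
```

Because key points on separate segments of a broken lane are joined just like those on a continuous lane, this rendering naturally produces one whole consecutive lane per annotation.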
\begin{figure*}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=16cm]{pic/framework}}
\end{minipage}
\caption{The pipeline of proposed SANet. Backbone denotes the basic network for feature extraction. MSC represents the multidirectional slice convolution module. Upsampling is used to restore the resolution of the output to be the same with the input image.}
\label{fig:all}
\end{figure*}
For occluded lanes, e.g., when part of a lane is occluded by a car, we still label the occluded part, producing one whole consecutive lane rather than discrete segments. Because we adopt the key-point label format, the final label is generated by connecting the key points into a single thin lane and expanding it to the appointed width. Key points on two separate segments of the same broken lane are thus connected in the same way as key points on one consecutive lane, so both broken and consecutive lanes are labeled as whole consecutive lanes. This is the same choice as made in the widely used CULane dataset \cite{pan2018spatial}.
For methods that define the task as extracting lanes without discriminating between them, or that extract lanes first and differentiate them in later stages, binary labels are also provided; Fig.~\ref{fig:label}(d) shows these labels. To ensure annotation quality, tens of researchers experienced in the relevant area labeled these images carefully. After the initial labeling, the results were cross-checked to further improve the annotation quality.
\subsection{DET Properties}
\textbf{Data Partition.} For the data partition, 1/2 of the images are randomly assigned to the training set, 1/6 to the validation set, and 1/3 to the test set.
The random split helps the distributions of these sets match approximately.
All images, including raw DVS images and filtered ones, together with the corresponding binary and multi-class labels, will be made publicly available.
\textbf{High-resolution Image.}
The typical resolution of current DVS datasets is 346$\times$260 pixels, which is quite low compared with the RGB images output by frame-based cameras. In complex scenes, DVS images at such low resolutions carry too little information to handle the situation well. Hence, the CeleX-V DVS, released in 2018, is adopted for our dataset. It has the highest resolution, 1280$\times$800 pixels, among all DVSs currently available, and a latency as low as 5 $ns$. All images in DET share this 1280$\times$800 resolution.
\textbf{Various Lane Types.}
Lane diversity is an important aspect of a lane marker extraction dataset: diverse lane types make the dataset closer to the real world. Therefore, the dataset images contain various lane types, such as single solid line, single dashed line, parallel solid line and dashed line, parallel dashed lines, etc. Note that parallel lines are labeled as one whole line for consistency. Fig.~\ref{fig:dataset} shows samples of these lane types.
\textbf{Various Lane Number.}
For images containing different numbers of lanes, lanes have various appearances depending on their distance from the DVS, as explained in Sec.~\ref{sec:da}. To make the dataset more comprehensive, images with different numbers of lanes are collected by driving on roads with various numbers of carriageways. Fig.~\ref{fig:dataset} shows samples with different numbers of lanes, and the distribution of lane numbers is presented in Tab.~\ref{tab:lane-number}.
\textbf{Various Traffic Scenes.}
Besides lane diversity, scene diversity is important as well, because a robust lane marker extraction method should recognize lanes under various traffic scenes. This is critical for reliable autonomous driving. Therefore, we also collect images of various traffic scenes by driving on overpasses and bridges, and in tunnels and urban areas. Fig.~\ref{fig:dataset} shows these traffic scenes.
\textbf{Various Camera Views.}
To simulate real situations, we further mount the DVS at different locations on the car. Under this condition, even lanes with the same label can differ between images to some extent; in effect, the intraclass variance of the dataset is increased. Although this makes the dataset more challenging for lane marker extraction models, models trained on it can handle complex scenes better than those trained with a single camera view. We believe this is necessary for methods deployed in real traffic scenes, which may be even more complicated than the dataset.
\section{Methods}
In this section, we first introduce naive slice convolution. Then we present the proposed Multidirectional Slice Convolution (MSC) module and explain its operation in detail. Finally, we introduce the structure-aware network built on the MSC module.
\subsection{Naive Slice Convolution}
Most CNN-based lane marker extraction methods \cite{gopalan2012learning,kim2014robust,huval2015empirical,he2016accurate} use normal convolution layers to extract features. Although the CNN has demonstrated its ability to extract semantics from raw pixels with normal convolution layers, its capability to capture spatial relations along specific directions has not been fully explored. For the lane marker extraction task with DVS images, such relations are of significant importance for extracting structural information, because lanes in DVS images show weak appearance coherence but a strong shape prior. Besides, CNNs cannot automatically connect separated parts, which usually makes them fail under occlusion. In this task, however, occlusions often occur when cars or pedestrians cross the lane.
\begin{figure}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8cm]{pic/hv_v3}}
\end{minipage}
\caption{(a) Vertical slice convolution procedure (top-down). (b) Horizontal slice convolution procedure (right-left). The part with yellow dotted line is the convolutional filter.}
\label{fig:hv}
\end{figure}
\begin{figure}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8cm]{pic/mbd_v3}}
\end{minipage}
\caption{(a) Main diagonal slice convolution procedure (upper left-lower right). (b) Counter diagonal slice convolution procedure (upper right-lower left). The part with yellow dotted line is the convolutional filter. The black part of size $C\times1$ is abandoned. The blue part with the same size is the zero padding, which is used to fill the other side of the slice.}
\label{fig:mbd}
\end{figure}
\begin{figure}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8cm]{pic/slice_conv}}
\end{minipage}
\caption{Detailed process of (top-down) horizontal (a) and (upper left-lower right) vertical (b) slice convolution.}
\label{fig:detail}
\end{figure}
SCNN \cite{pan2018spatial} was the first to adopt slice convolution to extract structural information for the lane marker extraction task. The slice convolution used in SCNN is referred to as naive slice convolution in this paper. It can be divided into two types according to the computation direction: vertical slice convolution and horizontal slice convolution. The whole procedure of both is shown in Fig.~\ref{fig:hv}.
Take vertical slice convolution as an example. The input is the output of the preceding backbone network: a 3D tensor of size $C \times H \times W$, where $C$, $H$, and $W$ denote the channel, row, and column numbers, respectively. The tensor is cut into $H$ slices of size $C \times 1 \times W$. A convolutional filter of size $C \times 1 \times w$, where $w$ is the filter width, is applied to the first slice, and the output is added to the second slice to form a new one. This differs from traditional convolution, where the output of a layer is fed into the next layer. The new slice is convolved in turn, and the process repeats until the last slice is handled. The detailed process is shown in Fig.~\ref{fig:detail}(a). The first slice of the input tensor and all summation results are stacked to form the final output of the vertical slice convolution.
There are two types of vertical slice convolution, top-down and bottom-up, depending on the location of the first slice and the computation direction. The one discussed above is the top-down type; starting with the last slice and computing upward gives the bottom-up type. Horizontal slice convolution is similar, except that the 3D tensor is cut into $W$ slices of size $C \times H \times 1$ and the filter size becomes $C \times h \times 1$, where $h$ is the filter height. It likewise has two types, right-left and left-right.
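A minimal NumPy sketch of top-down vertical slice convolution follows. The kernel layout ($C\times C\times w$ with `same` zero padding along $W$) is our assumption about the filter shape:

```python
import numpy as np

def vertical_slice_conv(x, kernel):
    """Top-down vertical slice convolution (naive slice convolution sketch).

    x: feature tensor of shape (C, H, W).
    kernel: weights of shape (C, C, w), applied along W with 'same'
    zero padding. Each row slice is convolved, passed through ReLU, and
    added to the next row before that row is processed in turn.
    """
    C, H, W = x.shape
    out = x.copy()
    for i in range(1, H):
        prev = out[:, i - 1, :]                 # updated slice X'_{i-1}
        conv = np.zeros((C, W))
        for co in range(C):
            for ci in range(C):
                conv[co] += np.convolve(prev[ci], kernel[co, ci], mode="same")
        out[:, i, :] += np.maximum(conv, 0.0)   # F = ReLU, then add to X_i
    return out
```

Because each updated slice feeds the next row, information propagates across the full height of the feature map in a single pass.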
\subsection{Multidirectional Slice Convolution}
\label{osc}
\begin{figure}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8cm]{pic/direction}}
\end{minipage}
\caption{Computation directions of different slice convolution operations. (a) Vertical slice convolution (top-down). (b) Horizontal slice convolution (right-left). (c) Main diagonal slice convolution (upper left-lower right). (d) Counter diagonal slice convolution (upper right-lower left). (e) Vertical slice convolution (bottom-up). (f) Horizontal slice convolution (left-right). (g) Main diagonal slice convolution (lower right-upper left). (h) Counter diagonal slice convolution (lower left-upper right).}
\label{fig:direct}
\end{figure}
Though SCNN uses slice convolution to extract structural information, it only captures the structural relation across rows and columns. This is not enough for lane marker extraction task, since most lanes present no exact horizontal or vertical shape in the front-view DVS images.
To tackle the problem, we propose the Multidirectional Slice Convolution module. The MSC module consists of slice convolutions along all main orientations: horizontal, vertical and diagonal. Hence, MSC can capture relations across these directions and extract features better suited to lanes. As a result, it handles occlusions much better than traditional convolutional layers, as the experimental results below confirm.
For diagonal slice convolution, the procedure is presented in Fig.~\ref{fig:mbd}. It contains operations along the main-diagonal and counter-diagonal directions. Take the main diagonal as an example. Considering a 3D tensor split into $H$ slices of size $C \times 1 \times W$, a convolutional filter of size $C \times 1 \times w$, where $w$ is the filter width, is applied to the first slice; this preprocessing step is the same as in vertical slice convolution. The output of the slice convolution is then shifted by one pixel along the $W$ axis: the vector of size $C \times 1$ on the inner side (the black part) is discarded, and the outer side of the slice is filled with a zero vector of the same size (the blue part). The processed slice is added to the second slice, and the process continues until the last slice. The detailed process is shown in Fig.~\ref{fig:detail}(b). The first slice of the input tensor and all summation results are stacked to form the final output of the diagonal slice convolution.
There are also two types of main-diagonal convolution, from upper left to lower right and the reverse, depending on the first-slice location and computation direction. Counter-diagonal slice convolution is analogous: its preprocessing step is the same as in horizontal slice convolution, the output is shifted downwards by one pixel along the $H$ axis, the $C \times 1$ vector on the lower side (the black part) is discarded, and the upper side is filled with a zero vector (the blue part). It likewise has two types, from upper right to lower left and the reverse. Fig.~\ref{fig:direct} shows the computation directions of all slice convolution operations discussed so far.
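The only difference from the vertical case is the one-pixel shift before the sum, which can be sketched as below. The left-to-right shift direction is an assumption chosen for the upper-left-to-lower-right variant:

```python
import numpy as np

def shift_along_w(s):
    """Shift a (C, W) slice one pixel along W: the column on one side is
    dropped (the discarded black part) and the other side is zero-padded
    (the blue part). Left-to-right direction assumed for illustration."""
    out = np.zeros_like(s)
    out[:, 1:] = s[:, :-1]
    return out

def main_diag_slice_conv(x, kernel):
    """Main-diagonal (upper left to lower right) slice convolution sketch:
    identical to the vertical case except that the convolved slice is
    shifted one pixel along W before being added to the next row."""
    C, H, W = x.shape
    out = x.copy()
    for i in range(1, H):
        conv = np.zeros((C, W))
        for co in range(C):
            for ci in range(C):
                conv[co] += np.convolve(out[ci, i - 1], kernel[co, ci], mode="same")
        out[:, i, :] += shift_along_w(np.maximum(conv, 0.0))
    return out
```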
The slice convolution process could be formulated as:
\begin{equation}
\label{E0}
X_i'=
\begin{cases}
X_i& {i=1}\\
X_i\oplus F(X_{i-1}'\otimes K)& {1< i\leq N},
\end{cases}
\end{equation}
where $X_i$ stands for the $i$-th slice of the input tensor, $X_i'$ denotes the $i$-th updated slice, $K$ represents the convolutional kernel, and $F$ is the ReLU activation function. $N$ equals $H$ for vertical and main-diagonal slice convolution, or $W$ for horizontal and counter-diagonal slice convolution. $\otimes$ denotes the convolution operation, and $\oplus$ denotes adding the result obtained from the $(i-1)$-th updated slice to the $i$-th slice of the input tensor: a direct sum for horizontal and vertical slice convolution, or a shifted sum for diagonal slice convolution.
Based on these slice convolution operations, we build the Multidirectional Slice Convolution module, shown in Fig.~\ref{fig:msc}. It is made up of 4 types of slice convolution operations, each with 2 subtypes, connected in series to capture structural information along 8 directions. The input and output tensors have the same size.
\begin{figure}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8cm]{pic/MSC}}
\end{minipage}
\caption{The Multidirectional Slice Convolution module structure. MSC module is composed of 4 types of slice convolution operations connected in series, and each of them contains 2 subtypes.}
\label{fig:msc}
\end{figure}
\subsection{Structure-Aware Network}
With the MSC module, the Structure-Aware Network is proposed. The network is composed of three parts: the backbone network, the MSC module, and the upsampling layer. Fig.~\ref{fig:all} presents the whole structure of the network.
The backbone network extracts features from the input image. Following SCNN, we adopt the DeepLab LargeFOV \cite{chen2017deeplab} variant for a fair comparison. The MSC module captures structural information along 8 directions, which is very helpful for the lane marker extraction task with DVS images. The output of the MSC module is fed into a $1 \times 1$ convolution layer to make the channel number match the class number. An upsampling layer then restores the resolution of the output to that of the original input image, since the input has been downsampled by the backbone network.
\section{Experiments}
\subsection{Experimental Settings}
\label{sec:evaluate}
\textbf{Dataset Setting.} We evaluate two kinds of lane marker extraction methods: general semantic segmentation methods and a method specialized for the lane marker extraction task. The training and validation sets are used together to train these models, so as to fully utilize the labeled data, and the test set is used to evaluate their performance.
\textbf{Training Details.}
We adopt one Titan Xp GPU with 12 GB memory to train all models. The batch size is set to 4. The optimizer is stochastic gradient descent (SGD) with a momentum of 0.9. The poly learning rate policy, which reduces the learning rate every iteration, is adopted. The schedule can be written as:
\begin{equation}
\label{Eq1}
LR=initial\;LR \times (1-\frac{current\;iter}{max\;iter})^{power},
\end{equation}
where $LR$ is the current learning rate, $initial\;LR$ is the initial learning rate, $current\;iter$ is the current iteration step, and $max\;iter$ is the maximum iteration step. The $initial\;LR$ is set to 0.01 and $power$ to 0.9. For all models, $max\;iter$ is set to 50,000 in our experiments to ensure the networks converge, which corresponds to 74 training epochs. The normalized cross-entropy function is used as the loss function. Considering the imbalance between background and lane pixels, the weight of the background loss is set to 0.4. The loss can be formulated as:
\begin{equation}
\label{Eq2}
L=\lambda_bL_b+\lambda_lL_l,
\end{equation}
where $L_b$ and $L_l$ are the cross-entropy losses of the background and lanes, and $\lambda_b$ and $\lambda_l$ equal 0.4 and 1.0, respectively.
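Both the schedule and the weighted loss are simple to state in code. This is a sketch with our own function names; the normalization by the weight sum is one common choice, assumed here rather than taken from the paper:

```python
import numpy as np

def poly_lr(initial_lr, cur_iter, max_iter, power=0.9):
    """Poly policy: lr = initial_lr * (1 - iter / max_iter) ** power."""
    return initial_lr * (1.0 - cur_iter / max_iter) ** power

def weighted_ce(log_probs, target, bg_weight=0.4):
    """Pixel-wise cross entropy with background (class 0) down-weighted.

    log_probs: (N, C) log-softmax scores for N pixels; target: (N,) labels.
    """
    weights = np.where(target == 0, bg_weight, 1.0)
    picked = log_probs[np.arange(len(target)), target]
    return -(weights * picked).sum() / weights.sum()
```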
\begin{figure*}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=16cm]{pic/rlt_color}}
\end{minipage}
\caption{Visual comparison of lane marker extraction methods. (a) shows input images. (b) shows the corresponding label. (c-g) show results of FCN, DeepLabv3, RefineNet, SCNN and ours. The results are painted in color for better visual effect. Our method outperforms other methods significantly, especially on the connectivity of lanes. This demonstrates its ability to deal with occlusion and broken situations.}
\label{fig:rlt2}
\end{figure*}
\textbf{Metrics.} Following the definition of the task as multi-class semantic segmentation, we adopt two numerical metrics: the widely used \emph{F1 score} (F1) and \emph{intersection over union} (IoU).
F1 is defined as
\begin{equation}
\label{E10}
{\rm F1}=2\times \frac{\rm Precision \times Recall}{\rm Precision+ Recall},
\end{equation}
and
\begin{equation}
{\rm Precision}=\frac{TP}{TP+FP},\;{\rm Recall}=\frac{TP}{TP+FN},
\end{equation}
where $TP$, $FP$, $TN$, and $FN$ are the numbers of true positives, false positives, true negatives, and false negatives, respectively; all of them count \emph{pixels}. IoU is defined as:
\begin{equation}
\label{Eq4}
{\rm IoU}(P_{m},P_{gt})= \frac{\mathbb{N}(P_{m}\cap P_{gt})}{\mathbb{N}(P_{m}\cup P_{gt})},
\end{equation}
where $P_{m}$ is the set of predicted pixels and $P_{gt}$ the set of ground-truth pixels. $\cap$ and $\cup$ denote the intersection and union, respectively, and $\mathbb{N}$ the number of pixels in the resulting set. F1 and IoU are calculated over all five classes.
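The per-class computation of both metrics reduces to pixel counting on integer label maps, as this sketch shows:

```python
import numpy as np

def f1_and_iou(pred, gt, cls):
    """Pixel-level F1 and IoU for a single class label."""
    p, g = pred == cls, gt == cls
    tp = np.logical_and(p, g).sum()    # true positives
    fp = np.logical_and(p, ~g).sum()   # false positives
    fn = np.logical_and(~p, g).sum()   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return f1, iou
```

Averaging the per-class values over the five classes gives the Mean F1 and Mean IoU reported in the tables.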
\subsection{Ablation Study}
In this section, we conduct a detailed ablation study to analyze the proposed approach comprehensively.
\textbf{Multidirectional Slice Convolution.} We first study the effect of directions in MSC. SANet with slice convolutions along different directions has been tested; the results are shown in Tab.~\ref{tab:osc}. The convolutional kernel size, i.e., the width or height of the kernel described in Sec.~\ref{osc}, is set to 9. The baseline model is the DeepLab LargeFOV \cite{chen2017deeplab} variant used in SCNN, without any directional slice convolution. V, H, MD, and CD denote slice convolution along the vertical, horizontal, main-diagonal, and counter-diagonal directions, respectively; VH, MCD, and MSC denote vertical and horizontal, main and counter diagonal, and all of them, respectively.
Tab.~\ref{tab:osc} shows that performance increases with more directions. Note that SANet\_MD with a single direction performs better than SANet\_VH with both horizontal and vertical directions, which shows that oblique directions matter for the lane marker extraction task. To verify that the improvement is brought by MSC rather than by extra capacity, we add 8 extra $3\times3$ convolution layers to the baseline model so that it has a similar number of parameters to SANet\_MSC. SANet\_MSC outperforms ExtraConv\_8 significantly, which verifies the effectiveness of the proposed MSC.
\begin{table}[h]
\caption{Experimental results of multidirectional slice convolution.}
\begin{center}
\begin{tabular}{l|cc}
\hline
Models & Mean F1(\%)& Mean IoU(\%)\\
\hline
\hline
Baseline & 72.43& 58.95\\
SANet\_V & 73.82&60.43\\
SANet\_H & 73.67&60.27 \\
SANet\_MD & 73.91&60.52\\
SANet\_CD & 73.82&60.43 \\
SANet\_VH & 73.85&60.44\\
SANet\_MCD & 73.96&60.59\\
SANet\_MSC & \textbf{74.21}&\textbf{60.86} \\
ExtraConv\_8 & 73.78&60.38\\
\hline
\end{tabular}
\label{tab:osc}
\end{center}
\vspace{-0.5cm}
\end{table}
\textbf{Convolutional Kernel Size.} We then investigate the influence of the slice convolution kernel size, as presented in Tab.~\ref{tab:kernel}, using SANet with the MSC module. Larger kernel sizes are generally beneficial, although the model with kernel size 11 performs worse than that with kernel size 9; we conjecture this is related to the relative size between the feature map and the kernel.
\begin{table}[h]
\caption{Experimental results of convolutional kernel size.}
\begin{center}
\begin{tabular}{l|ccccc}
\hline
Kernel Size & 3& 5& 7& 9& 11\\
\hline
\hline
Mean F1(\%) & 73.76&73.85&74.18&\textbf{74.21}&74.10 \\
Mean IoU(\%) & 60.36&60.47& 60.83&\textbf{60.86}&60.73\\
\hline
\end{tabular}
\label{tab:kernel}
\end{center}
\vspace{-0.5cm}
\end{table}
\subsection{Evaluation on DET}
\textbf{Lane Marker Extraction Baselines.} We benchmark typical lane marker extraction methods, including general semantic segmentation based methods, such as FCN \cite{fcn}, DeepLabv3 \cite{deeplabv3} and RefineNet \cite{refinenet}, and a method specialized for the lane marker extraction task, SCNN \cite{pan2018spatial}. FCN, RefineNet and DeepLabv3 are widely used semantic segmentation methods for general computer vision problems. FCN is the first work to treat semantic segmentation as a pixel-level classification task. It contains a fully convolutional neural network and makes use of skip connections to combine shallow layer outputs with deep ones. To utilize image-level global context, DeepLabv3 introduces atrous spatial pyramid pooling with a global pooling operation. To make high-resolution predictions, RefineNet explicitly exploits information along the down-sampling process with long-range residual connections.
SCNN is a specialized model for the lane marker extraction task, introduced in Sec. \ref{sec:lane}. SCNN achieves state-of-the-art performance on the TuSimple \cite{tusimple} dataset. Tab. \ref{tab:exp} shows the results of the lane marker extraction baselines and our method. Mean F1 ($\%$) refers to the average F1 score and Mean IoU ($\%$) represents the average IoU over all classes. The best values are in bold and the second best values are underlined. Fig. \ref{fig:rlt2} shows visual results of these methods.
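Both reported metrics can be computed per class from a pixel-level confusion matrix and then averaged over classes. A minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def per_class_f1_iou(conf):
    """Per-class F1 and IoU from a pixel-level confusion matrix.

    conf[i, j] counts pixels of true class i predicted as class j.
    """
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp   # predicted as the class, but wrong
    fn = conf.sum(axis=1) - tp   # belonging to the class, but missed
    iou = tp / np.maximum(tp + fp + fn, 1e-12)
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
    return f1, iou
```

Mean F1 and Mean IoU are then the means of the returned per-class vectors; IoU is the stricter of the two since the intersection is penalized by both false positives and false negatives without the factor of two in the numerator.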
\begin{table}[h]
\caption{Evaluation results of lane marker extraction methods on DET.}
\begin{center}
\begin{tabular}{l|cc}
\hline
Methods &Mean F1(\%) & Mean IoU(\%)\\
\hline
\hline
FCN \cite{fcn} & 62.32 & 49.04 \\
DeepLabv3 \cite{deeplabv3}& 61.26 & 48.54\\
RefineNet \cite{refinenet}& 64.34&50.97 \\
SCNN \cite{pan2018spatial}& \underline{73.85}&\underline{60.44} \\
Ours & \textbf{74.21}&\textbf{60.86}\\
\hline
\end{tabular}
\label{tab:exp}
\end{center}
\vspace{-0.5cm}
\end{table}
Tab. \ref{tab:exp} shows that our SANet and SCNN outperform the other semantic segmentation methods. This is reasonable since FCN, DeepLabv3 and RefineNet are general semantic segmentation methods and do not contain modules specially designed for the lane marker extraction task. Neither do they exploit structural features or prior information, which is critical for lane marker extraction. SCNN adopts slice-by-slice convolution to capture continuous structure, and this special module is clearly helpful for this task.
Our proposed SANet with the MSC module achieves the best performance on the DET dataset. It outperforms SCNN by 0.42\% on the strict metric, \emph{Mean IoU}, a clear improvement for semantic-segmentation-based lane marker extraction. This demonstrates the effectiveness of our method.
\section{Conclusion}
In this paper, we first construct a high-resolution DVS dataset for the lane marker extraction task. It consists of the raw event data, accumulated images and corresponding labels. 5,424 event-based images with a resolution of 1280$\times$800 pixels are extracted from 5 hours of event streams sampled at MHz rates. To provide comprehensive labeled pairs, two types of annotations are given for the lanes in the images: a multi-class format and a binary format. We then propose the structure-aware network for lane marker extraction with DVS images, which can capture directional information extensively. We evaluate our proposed network against other state-of-the-art lane marker extraction models and analyze the results on the DET dataset. Experimental results demonstrate that our method outperforms these methods on challenging data, which also verifies its high adaptability. Although our study focuses on DVS images constructed from event streams, which makes it feasible to reuse existing methods designed for images, in the future we will explore techniques that take the event stream directly as input to fully utilize the characteristics of events.
\subsection{Background: Variational Autoencoders}
\begin{figure}[htbp]
\centering
\subfigure[VAE]{\includegraphics[width=0.17\linewidth]{figs/vae}}
\qquad \qquad \qquad
\subfigure[GMVAE]{\includegraphics[width=0.4\linewidth]{figs/gmvae}}
\caption{Generative (solid lines) and inference (dashed lines) models of VAE and GMVAE. }
\label{fig:vae_gmvae}
\end{figure}
The Variational Autoencoder (VAE, \cite{kingma2013auto}) is a deep generative model with a latent variable $\textbf{z}$ that generates data $\textbf{x}$, with joint distribution $p_\theta(\textbf{x}, \textbf{z})$, where we define $p_\theta(\textbf{z})$ as a prior and $p_\theta(\textbf{x} \vert \textbf{z})$ as a likelihood. The likelihood is \textit{decoded} using a neural network that outputs the distribution parameters given an input code. The posterior $p(\textbf{z} \vert \textbf{x})$ is approximated by a recognition model $q_\phi(\textbf{z} \vert \textbf{x})$ parametrized by an \textit{encoder} network (\cite{rezende2014stochastic}). Thanks to the use of encoder and decoder networks, VAEs make use of amortized inference to approximate the true posterior. Furthermore, using the reparametrization trick, the Evidence Lower Bound (ELBO) on the log likelihood is optimized stochastically via Stochastic Variational Inference, and thus data is fed to the model in minibatches.
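The two ingredients mentioned above, the reparametrization trick and the closed-form Gaussian KL term of the ELBO, can be sketched as follows (a minimal NumPy illustration, not any paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparam_sample(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I): the noise is external,
    # so gradients can flow through mu and log_var
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_std_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

The ELBO for a minibatch is then the reconstruction log-likelihood of the decoded samples minus this KL term, averaged over the batch.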
More recently, in order to mitigate the problem of overgeneralization in multimodal data, the Gaussian Mixture VAE (GMVAE, \cite{dilokthanakul2016deep}) has been proposed to perform clustering in the latent space using a Gaussian mixture. The ELBO for GMVAE adds two new regularizers w.r.t. the vanilla VAE:
\begin{equation}
\begin{gathered}
\mathcal{L} (\theta, \phi; \textbf{x}, \textbf{z}, \boldsymbol{\beta}, \textbf{d}) = \mathbb{E}_{q(\textbf{z} \vert \textbf{x})} \left[ \log(p(\textbf{x} \vert \textbf{z})) \right]
- KL\left( q(\textbf{z} \vert \textbf{x}) \Vert p(\textbf{z}) \right) \\
- KL\left( q(\boldsymbol{\beta} \vert \textbf{x}) \Vert p(\boldsymbol{\beta}) \right)
- KL\left( q(\textbf{d} \vert \boldsymbol{\beta}, \textbf{z}) \Vert p(\boldsymbol{\textbf{d}}) \right)
\end{gathered}
\end{equation}
The prior for $\boldsymbol{\beta}$ is typically chosen as an isotropic Gaussian and the prior for the discrete $\textbf{d}$ as uniform, with probability $1/K$, where $K$ is the number of components. As shown in Figure \ref{fig:vae_gmvae}, $\boldsymbol{\beta}$ acts as a code that is projected into one of the $K$ components. The reconstruction term and the first regularizer work exactly the same way as in the VAE. The second divergence acts as a regularizer for the noise $\boldsymbol{\beta}$, and the third regularizer is for the discrete variable $\textbf{d}$.
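With a uniform prior over the $K$ components, the regularizer for the discrete variable has a simple closed form, $KL(\text{Cat}(\boldsymbol{\pi}) \Vert \text{Uniform}(K)) = \log K - H(\boldsymbol{\pi})$, i.e. it penalizes low-entropy (overconfident) cluster assignments. A quick sketch:

```python
import numpy as np

def kl_cat_uniform(pi):
    # KL( Cat(pi) || Uniform(1/K) ) = log K + sum_k pi_k log pi_k
    pi = np.clip(pi, 1e-12, 1.0)  # clip to keep log well-defined at pi_k = 0
    return np.log(len(pi)) + np.sum(pi * np.log(pi))
```

The term is zero for a uniform assignment and reaches its maximum, $\log K$, for a one-hot assignment.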
\section{Conclusion}
In this paper we have presented UG-VAE, an unsupervised generative probabilistic model able to capture both local data features and global features among batches of data samples. Unlike similar approaches in the literature, by combining a structured clustering prior in the local latent space with a global latent space with Gaussian prior and a more structured variational family, we have demonstrated that interpretable group features can be inferred from the global latent space in a completely unsupervised fashion. Model training does not require artificial manipulation of the ELBO term to force latent interpretability, which makes UG-VAE stand out w.r.t. most of the current disentanglement approaches using VAEs. The ability of UG-VAE to infer diverse features from the training set is further demonstrated in a domain alignment setup, where we show that the global space allows interpolation between domains, and also by showing that images in correlated batches of data, related by non-trivial features such as hair color or gender in CelebA, define identifiable structures in the posterior global latent space distribution.
\section{Experiments}
In this section we demonstrate the ability of the UG-VAE model to infer global factors of variation that are common among samples, even when they come from different datasets. In all cases, we have not validated the networks used in depth; we have merely relied on encoder/decoder networks proposed in state-of-the-art VAE papers such as \cite{kingma2013auto}, \cite{bouchacourt2018multi} or \cite{higgins2016beta}. Our results must hence be regarded as a proof of concept of the flexibility and representation power of UG-VAE, rather than fine-tuned results for each case, and there is room for improvement in all of them. Details about the network architectures and training parameters are provided in Appendix \ref{app:networks}.
\subsection{Unsupervised learning of global factors} \label{sec:disent}
In this section we first assess the interpretability of the global disentanglement features inferred by UG-VAE over both CelebA and MNIST. In Figure \ref{fig:exp1} we show samples from the generative model as we explore both the global and local latent spaces.
We perform a linear interpolation with the aim of exploring the hypersphere centered at the mean of the distribution and with radius $\sigma_i$ for each dimension $i$. To maximize the variation range across every dimension, we move diagonally through the latent space. Rows correspond to an interpolation on the global $\boldsymbol{\beta}$ between $[-1, 1]$ on every dimension ($p(\boldsymbol{\beta})$ follows a standard Gaussian). As the local $p(\textbf{z} | d, \boldsymbol{\beta})$ (\eqref{eq:zgivenb}) depends on $d$ and $\boldsymbol{\beta}$, if we denote $\boldsymbol{\mu}_z=\boldsymbol{\mu}_z^{(d)}(\boldsymbol{\beta})$, the local interpolation goes from $[\mu_{z0}-3, \mu_{z1}-3, \ldots, \mu_{zd}-3]$ to $[\mu_{z0}+3, \mu_{z1}+3, \ldots, \mu_{zd}+3]$. The range of $\pm3$ for the local interpolation is chosen to cover the variances $\boldsymbol{\Sigma}_z^{(d)}(\boldsymbol{\beta})$ that we observe upon training the model for MNIST and CelebA. Each image grid in Figure \ref{fig:exp1} corresponds to samples from a different cluster (a fixed value of $d$), in order to facilitate the interpretability of the information captured at both the local and global levels. With this setup, we demonstrate that the global information tuned by $\boldsymbol{\beta}$ is different and clearly interpretable inside each cluster.
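The diagonal traversal described above is a straight-line interpolation between two corner points of the latent hypercube; a minimal sketch (our own helper, applicable to either $\boldsymbol{\beta}$ or $\textbf{z}$):

```python
import numpy as np

def lerp(a, b, steps):
    # Straight-line path from code a to code b, one row per interpolation step
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * a + t * b
```

For the global rows one would take, e.g., `lerp(-np.ones(g), np.ones(g), n_rows)`; for the local columns, `lerp(mu_z - 3, mu_z + 3, n_cols)`, decoding each pair of codes into an image.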
The total number of clusters is set to $K=20$ for CelebA and $K=10$ for MNIST. Three of these components are presented in Figure \ref{fig:exp1}. We can observe that each row (each value of $\boldsymbol{\beta}$) induces a shared generative factor, while $\textbf{z}$ is in charge of variations within this common feature. For instance, in CelebA (top), features like skin color, presence of beard or face contrast are encoded by the global variable, while local variations like hair style or light direction are controlled by the local variable. In a simple dataset like MNIST (bottom), the results show that global handwriting features such as cursive style, contrast or thickness are encoded by $\boldsymbol{\beta}$, while the local $\textbf{z}$ defines the shape of the digit. The characterization of these generative factors as local/global is based on an interpretation of the effect that varying $\textbf{z}$ and $\boldsymbol{\beta}$ provokes in each image within a batch, and in the whole batch of images, respectively. In Appendix \ref{app:ext1}, we reproduce the same figures for all the clusters, where we can appreciate that a significant fraction of the clusters exhibit visually interpretable global/local features.
We stress here again the fact that the UG-VAE training is fully unsupervised: data batches during training are completely randomly chosen from the training dataset, with no structured correlation whatsoever. Unlike other approaches for disentanglement, see \cite{higgins2016beta} or \cite{mathieu2019disentangling}, variational training in UG-VAE does not come with additional ELBO hyperparameters that need to be tuned to find a proper balance among terms in the ELBO.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{figs/exp1}
\caption{Sampling from UG-VAE for CelebA (top) and MNIST (bottom).
We include samples from 3 local clusters from a total of $K=20$ for CelebA and $K=10$ for MNIST. In CelebA (top), the global latent variable disentangles in skin color, beard and face contrast, while the local latent variable controls hair and light orientation. In MNIST (bottom), $\boldsymbol{\beta}$ controls cursive grade, contrast and thickness of handwriting, while $\textbf{z}$ varies digit shape.}
\label{fig:exp1}
\end{figure}
One of the main contributions in the design of UG-VAE is the fact that, unless we include a clustering mixture prior in the local space controlled by the global variable $\boldsymbol{\beta}$, unsupervised learning of global factors is non-informative. To illustrate this result, in Figure \ref{fig:exp12} we reproduce the results of Figure \ref{fig:exp1} for a probabilistic model in which the discrete local variable $d$ is not included. Namely, we use the ML-VAE in Figure \ref{fig:related}(c), but trained with random data batches.
In this case, the local space is uni-modal given $\boldsymbol{\beta}$ and we show interpolated values between -1 to 1. Note that the disentanglement effect of variations in both $\boldsymbol{\beta}$ and $\boldsymbol{z}$ is mild and hard to interpret.
\begin{figure}[h]
\centering
\subfigure[CelebA]{\includegraphics[width=0.35\linewidth]{figs/exp12a}} \qquad \qquad
\subfigure[MNIST]{\includegraphics[width=0.35\linewidth]{figs/exp12b}}
\caption{Sampling from ML-VAE, trained over unsupervised data.}
\label{fig:exp12}
\end{figure}
\subsection{Domain alignment} \label{sec:align}
In this section, we evaluate the UG-VAE performance in an unsupervised domain alignment setup. During training, the model is fed with data batches that include random samples coming from two different datasets. In particular, we train our model with a mixed dataset between CelebA and 3D FACES \cite{paysan20093d}, a dataset of 3D scanned faces, with a proportion of 50\% samples from each dataset inside each batch.
Upon training with random batches, in Figure \ref{fig:exp2} we perform the following experiment, using domain supervision to create test data batches. We create two batches containing only images from CelebA and 3D FACES, respectively. Let $\boldsymbol{\beta}_1$ and $\boldsymbol{\beta}_2$ be the mean global posteriors computed using (\ref{eq:betagivenz}) for each batch. For two particular images in these two batches, let $\boldsymbol{z}_1$ and $\boldsymbol{z}_2$ be the corresponding mean local posteriors, computed using (\ref{eq:zgivenb}). Figure \ref{fig:exp2}(a) shows samples from the UG-VAE model when we linearly interpolate between $\boldsymbol{\beta}_1$ and $\boldsymbol{\beta}_2$ (rows) and between $\boldsymbol{z}_1$ and $\boldsymbol{z}_2$ (columns)\footnote{Note that since both $\boldsymbol{\beta}$ and $\boldsymbol{z}$ are deterministically interpolated, the discrete variable $d$ plays no role when sampling from the model.}. Clearly, $\boldsymbol{\beta}$ is capturing the domain information. For fixed $\mathbf{z}$, e.g. $\boldsymbol{z}_1$ in the first column, the interpolation between $\boldsymbol{\beta}_1$ and $\boldsymbol{\beta}_2$ transfers the CelebA image into the 3D FACES domain (note that the background turns white and the image is rotated to get a 3D effect). Alternatively, for fixed $\boldsymbol{\beta}$, e.g. $\boldsymbol{\beta}_1$ in the first row, interpolating between $\boldsymbol{z}_1$ and $\boldsymbol{z}_2$ modifies the first image into one that keeps the domain but resembles features of the image in the second domain, such as face rotation.
In Figure \ref{fig:exp2}(b) we show the 2D t-SNE plot of the posterior distribution of $\boldsymbol{\beta}$ for batches that are random mixtures of the two datasets (grey points), batches that contain only CelebA faces (blue squares), and batches that contain only 3D faces (green triangles). We also add the points corresponding to the $\boldsymbol{\beta}_1$-$\boldsymbol{\beta}_2$ interpolation in Figure \ref{fig:exp2}(a). In Figure \ref{fig:exp2}(c), we reproduce the experiment in (a) but interpolating between two images and values of $\boldsymbol{\beta}$ that correspond to the same domain (brown interpolation line in Figure \ref{fig:exp2}(b)). As expected, the interpolation of $\boldsymbol{\beta}$ in this case does not change the domain, which suggests that the domain structure in the global space is smooth, and that the interpolation along the local space $\mathbf{z}$ modifies image features to translate one image into the other. In Figure \ref{fig:exp22} we include experiments with more datasets. When mixing the 3D Cars dataset \citep{fidler20123d} with the 3D Chairs dataset \cite{aubry2014seeing}, we find that certain correlations between cars and chairs are captured. In Figure \ref{fig:exp22}(a), interpolating between a racing car and an office desk chair leads to a white car in the first domain (top right) and to a couch (bottom left). In Figure \ref{fig:exp22}(b), when using 3D Cars along with the Cars dataset \citep{Krause}, rotations of the cars are induced.
Finally, in the Appendix \ref{app:exp2} we show that, as expected, the rich structured captured by UG-VAE illustrated in Figure \ref{fig:exp2} is lost when we do not include the clustering effect in the local space, i.e. if we use ML-VAE with unsupervised random data batches, and all the transition between domains is performed within the local space.
\begin{figure}[h]
\centering
\subfigure[CelebA-FACES]{\includegraphics[width=0.325\linewidth]{figs/exp2a}}
\subfigure[$\boldsymbol{\beta}$ TSNE 2D space.
]{\includegraphics[width=0.325\linewidth]{figs/exp2c}}
\subfigure[FACES-FACES]{\includegraphics[width=0.325\linewidth]{figs/exp2b}}
\caption{UG-VAE interpolation in local (columns) and global (rows) posterior spaces, fusing celebA and FACES datasets. In (a) the interpolation goes between the posteriors of a sample from CelebA dataset and a sample from FACES dataset. In (c) the interpolation goes between the posteriors of a sample from FACES dataset and another sample from the same dataset. }
\label{fig:exp2}
\end{figure}
\begin{figure}[h]
\centering
\subfigure[3D Cars-3D Chairs]{\includegraphics[width=0.45\linewidth]{figs/exp2d}}
\subfigure[3D Cars-Cars]{\includegraphics[width=0.45\linewidth]{figs/exp2e}}
\caption{Extended experiment: UG-VAE interpolation in local (columns) and global (rows) posterior spaces, fusing 3D Cars with 3D Chairs (d) and 3D Cars to Cars Dataset (e).}
\label{fig:exp22}
\end{figure}
\subsection{UG-VAE representation of structured non-trivial data batches} \label{sec:struct}
In the previous subsection, we showed that the UG-VAE global space is able to separate certain structure in the data batches (e.g. data domain) even though during training batches did not present such an explicit correlation. Using UG-VAE trained over CelebA with unsupervised random batches of 128 images as a running example, in this section we want to further demonstrate this result.
In Figure \ref{fig:exp3} we show the 2D t-SNE projection of structured batches using the posterior $\boldsymbol{\beta}$ distribution in (\ref{eq:betagivenz}) over CelebA test images. In Figure \ref{fig:exp3}(a), we display the distribution of batches containing only men or only women, while in Figure \ref{fig:exp3}(b) we show the distribution of batches containing people with black or blond hair. In both cases we also show the distribution of randomly constructed batches, as in the training set. To some extent, in both cases we obtain separable distributions for the different kinds of batches. A quantitative evaluation can be found in Table \ref{tab:classif}, in which we have used samples from the $\boldsymbol{\beta}$ distribution to train a supervised classifier to differentiate between the different types of batches. When random batches are not taken as a class, the separability is evident. When random batches are included, the classifier is expected to struggle to differentiate between, e.g., a batch that contains 90\% male images and a batch that contains only male images, hence the drop in accuracy for the multi-class problem.
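The batch-classification protocol above (one $\boldsymbol{\beta}$ sample per batch, one label per batch) can be sketched with a simple nearest-centroid rule; this is our own dependency-free stand-in for the linear/RBF SVMs actually used in Table \ref{tab:classif}:

```python
import numpy as np

def fit_centroids(X, y):
    # X: (N, g) one beta sample per batch; y: (N,) batch-category labels
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(X, classes, centroids):
    # assign each beta sample to the nearest class centroid (squared distance)
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]
```

If the global posteriors of the two batch categories form separable clusters, as Figure \ref{fig:exp3} suggests, even this weak classifier achieves high accuracy.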
An extension with similar results and figures for another interpretation of global information capture is presented in Appendix \ref{app:exp3}, using structured grouped batches on the MNIST dataset. In this experiment, the groups are digits that belong to certain mathematical series, including even numbers, odd numbers, the Fibonacci series and prime numbers, and we show that the model is able to discriminate among their global posterior representations.
\begin{figure}[h]
\centering
\subfigure[]{\includegraphics[width=0.49\linewidth]{figs/exp33}}
\subfigure[]{\includegraphics[width=0.49\linewidth]{figs/exp31b}}
\caption{ 2D t-SNE projection of the UG-VAE $\boldsymbol{\beta}$ posterior distribution of structured batches of 128 CelebA images. UG-VAE is trained with completely random batches of 128 train images.}
\label{fig:exp3}
\end{figure}
\begin{table}[htbp]
\caption{Batch classification accuracy using samples of the posterior $\boldsymbol{\beta}$ distribution.}\label{tab:classif}
\begin{center}
\begin{tabular}{llll}
\textbf{Batch categories} & \textbf{Classifier} & \textbf{Train accuracy} & \textbf{Test accuracy} \\
\hline
\multirow{2}{*}{Black (0) vs blond (1)} & Linear SVM & 1.0 & 0.95 \\
& RBF SVM & 1.0 & 0.98
\\ \hline
\multirow{2}{*}{Black (0) vs blond (1) vs random (2)} & Linear SVM & 0.91 & 0.54 \\
& RBF SVM & 0.85 & 0.56
\\ \hline
\multirow{2}{*}{Male (0) vs female (1)} & Linear SVM & 1.0 & 0.85 \\
& RBF SVM & 1.0 & 0.85
\\ \hline
\multirow{2}{*}{Male (0) vs female (1) vs random (2)} & Linear SVM & 0.84 & 0.66 \\
& RBF SVM & 0.89 & 0.63
\\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Introduction}
Since its first proposal by \cite{kingma2013auto}, Variational Autoencoders (VAEs) have evolved into a vast amount of variants. To name some representative examples, we can include VAEs with latent mixture models priors \citep{dilokthanakul2016deep}, adapted to model time-series \citep{chung2015recurrent}, trained via deep hierarchical variational families \citep{ranganath2016hierarchical, tomczak2018vae}, or that naturally handle heterogeneous data types and missing data \citep{nazabal2020handling}.
The large majority of VAE-like models are designed under the assumption that data is i.i.d., which remains a valid strategy for simplifying the learning and inference processes in generative models with latent variables. A different modelling approach may drop the i.i.d. assumption with the goal of capturing a higher level of dependence between samples. Inferring such higher-level dependencies can directly improve current approaches to finding interpretable disentangled generative models \citep{bouchacourt2018multi}, performing domain alignment \citep{heinze2017conditional}, or ensuring fairness and unbiased data \citep{barocas2017fairness}.
The main contribution of this paper is to show that a deep probabilistic non-i.i.d. VAE model with both local and global latent variables can capture meaningful and interpretable correlations among data points in a completely unsupervised fashion. Namely, weak supervision to group the data samples is not required. In the following we refer to our model as the Unsupervised Global VAE (UG-VAE). We combine a clustering-inducing mixture model prior in the local space, which helps to separate the fundamental data features that an i.i.d. VAE would separate, with a global latent variable that modulates the properties of such latent clusters depending on the observed samples, capturing fundamental and interpretable data features. We demonstrate this result using CelebA, MNIST and the 3D FACES dataset of \cite{paysan20093d}. Furthermore, we show that the global latent space can explain common features in samples coming from two different databases without requiring any domain label for each sample, establishing a probabilistic unsupervised framework for domain alignment. To our knowledge, UG-VAE is the first VAE model in the literature that performs unsupervised domain alignment using global latent variables.
Finally, we demonstrate that, even when the model parameters have been trained in an unsupervised fashion, the global latent space of UG-VAE can discriminate groups of samples with non-trivial structure, separating groups of people with black and blond hair in CelebA, or series of numbers in MNIST. In other words, if weak supervision is applied at test time, the posterior distribution of the global latent variable provides an informative representation of the user-defined groups of correlated data.
\section*{Acknowledgements}
This work has been supported by Spanish government Ministerio de Ciencia, Innovación y Universidades under grants FPU18/00516, TEC2017-92552-EXP and RTI2018-099655-B-100, by Comunidad de Madrid under grants IND2017/TIC-7618, IND2018/TIC-9649, and Y2018/TCS-4705, by BBVA Foundation under the Deep-DARWiN project, and by the European Union (FEDER) and the European Research Council (ERC) through the European Union’s Horizon 2020 research and innovation program under Grant 714161.
\bibliographystyle{abbrvnat}
\section{Unsupervised Global VAE}
We present UG-VAE, a deep generative VAE framework for modeling non-i.i.d. data with global dependencies. It generalizes the ML-VAE graphical model in Figure \ref{fig:related} (c) to \textit{i)} remove the group supervision, \textit{ii)} include a clustering-inducing prior in the local space, and \textit{iii)} propose a more structured variational family.
\begin{figure}[htbp]
\centering
\subfigure[Generative model]{\includegraphics[height=0.32\linewidth]{figs/gen}}
\qquad \qquad \qquad \qquad
\subfigure[Inference model]{\includegraphics[height=0.32\linewidth]{figs/inf}}
\caption{Generative (left) and inference (right) models of UG-VAE. }
\label{fig:ggmvae}
\end{figure}
\subsection{Generative model} \label{sec:gen}
Figure \ref{fig:ggmvae} represents the generative graphical model of UG-VAE. A global variable $\boldsymbol{\beta} \in \mathbb{R}^g$ induces shared features to generate a group of $B$ samples $\textbf{X} = \{\textbf{x}_1, ..., \textbf{x}_B \} \subseteq \mathbb{R}^D$, and $\mathcal{G}$ is the number of groups we jointly use to amortize the learning of the model parameters. During amortized variational training, groups are simply random data mini-batches from the training dataset, so $\mathcal{G}$ is the number of data mini-batches. We could certainly take $B=N$ (the training set size) and hence $\mathcal{G}=1$, but this leads to a less interpretable global latent space (too much data to correlate with a single global random variable) and a slow training process.
Conditioned on $\boldsymbol{\beta}$, data samples are independent and distributed according to a Gaussian mixture with local (one per data point) latent variables $\textbf{Z} = \{\textbf{z}_1, ..., \textbf{z}_B \} \subseteq \mathbb{R}^d$, where $\textbf{d} = \{d_1, ..., d_B \}\subseteq \{ 1, ..., K \}$ are independent discrete categorical variables with uniform prior distributions. This prior, along with the conditional distribution $p(\textbf{z}_i \vert d_i, \boldsymbol{\beta})$, defines a Gaussian mixture latent space, which helps to infer similarities between samples from different batches (by assigning them to the same cluster); thus, $d_i$ plays a similar role to the semi-supervision by grouping included in \cite{bouchacourt2018multi}. Our experimental results demonstrate that this level of structure in the local space is crucial to acquire interpretable information at the global level and, in particular, that if we fix $d_i$ for all the samples within a batch, the global variable $\boldsymbol{\beta}$ is able to tune different generative factors for each cluster.
The joint distribution for a single group is therefore defined by:
\begin{equation}
p_\theta(\textbf{X}, \textbf{Z}, \textbf{d}, \boldsymbol{\beta}) =
p(\textbf{X} | \textbf{Z}, \boldsymbol{\beta}) \, p(\textbf{Z} \vert \textbf{d}, \boldsymbol{\beta} ) \, p(\textbf{d}) \, p(\boldsymbol{\beta})
\end{equation}
where the likelihood term of each sample is a Gaussian distribution, whose parameters are obtained from a concatenation of $\textbf{z}_i$ and $\boldsymbol{\beta}$ as input of a decoder network:
\begin{equation}
p(\textbf{X} | \textbf{Z}, \boldsymbol{\beta}) = \prod_{i=1}^{B} p(\textbf{x}_i | \textbf{z}_i, \boldsymbol{\beta}) = \prod_{i=1}^{B} \mathcal{N}\left(\boldsymbol{\mu}_{\theta_x}( [ \textbf{z}_i, \boldsymbol{\beta} ] ), \boldsymbol{\Sigma}_{\theta_x} ( [ \textbf{z}_i, \boldsymbol{\beta} ] )\right) \\
\end{equation}
In contrast with \cite{johnson2016composing}, where the parameters of the clusters are learned but shared by all the observations, in UG-VAE the parameters of each component are obtained with networks fed with $\boldsymbol{\beta}$. Thus, the prior of each local continuous latent variable is a mixture of Gaussians, where $d_i$ selects the component and $\boldsymbol{\beta}$ is the input of a NN that outputs its parameters:
\begin{equation}\label{eq:zgivenb}
p(\textbf{Z} | \textbf{d}, \boldsymbol{\beta}) = \prod_{i=1}^{B} p(\textbf{z}_i | d_i, \boldsymbol{\beta})
= \prod_{i=1}^{B} \mathcal{N}\left(\boldsymbol{\mu}_{\theta_z}^{(d_i)}( \boldsymbol{\beta} ), \boldsymbol{\Sigma}_{\theta_z}^{(d_i)} ( \boldsymbol{\beta} )\right),
\end{equation}
hence we train as many NNs as there are discrete categories. This local space encodes samples in representative clusters to model local factors of variation. The prior of the discrete latent variable is uniform:
\begin{equation}
p(\textbf{d}) =\prod_{i=1}^{B} \text{Cat}(\boldsymbol{\pi}) \quad \pi_k = 1/K
\end{equation}
and the prior over the continuous latent variable $\boldsymbol{\beta}$ is an isotropic Gaussian, $p(\boldsymbol{\beta} ) = \mathcal{N} (\textbf{0}, \textbf{I})$.
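The full ancestral sampling procedure of the generative model can be sketched as follows; the linear map standing in for the per-component prior networks $\boldsymbol{\mu}_{\theta_z}^{(d_i)}(\boldsymbol{\beta})$, and the fixed toy covariance, are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, g, dz = 3, 4, 2            # clusters, global and local dimensions (toy sizes)

# hypothetical linear stand-ins for the per-component networks mu^{(k)}(beta)
W_mu = rng.standard_normal((K, dz, g))

def local_prior_params(k, beta):
    # mean depends on beta through the component's "network"; fixed toy covariance
    return W_mu[k] @ beta, 0.1 * np.eye(dz)

def sample_group(B):
    beta = rng.standard_normal(g)             # beta ~ N(0, I), shared by the whole group
    z = np.empty((B, dz))
    for i in range(B):
        k = int(rng.integers(K))              # d_i ~ Cat(1/K)
        mu, Sigma = local_prior_params(k, beta)
        z[i] = rng.multivariate_normal(mu, Sigma)  # z_i | d_i, beta
    return beta, z
```

Each $[\textbf{z}_i, \boldsymbol{\beta}]$ pair would then be concatenated and decoded into $\textbf{x}_i$; the key point is that a single $\boldsymbol{\beta}$ shifts all $K$ mixture components at once, correlating the whole group.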
\subsection{Inference model} \label{sec:inf}
The graphical model of the proposed variational family is shown in Figure \ref{fig:ggmvae}(b):
\begin{equation}
q_\phi (\textbf{Z}, \textbf{d}, \boldsymbol{\beta}| \textbf{X} ) = q(\textbf{Z} | \textbf{X}) \, q(\textbf{d} | \textbf{Z}) q( \boldsymbol{\beta} | \textbf{X}, \textbf{Z} )
\end{equation}
where we employ an encoder network that maps the input data into the local latent posterior distribution, which is defined as a Gaussian:
\begin{equation}
q(\textbf{Z} | \textbf{X}) = \prod_{i=1}^{B} q(\textbf{z}_i | \textbf{x}_i) = \prod_{i=1}^{B} \mathcal{N}( \boldsymbol{\mu}_{\phi_z}( \textbf{x}_i ), \boldsymbol{\Sigma}_{\phi_z}( \textbf{x}_i ) )
\end{equation}
Given the posterior distribution of $\textbf{z}$, the categorical posterior distribution of $d_i$ is parametrized by a NN that takes $\textbf{z}_i$ as input
\begin{equation}
q(\textbf{d} \vert \textbf{Z}) = \prod_{i=1}^{B} q(d_i \vert \textbf{z}_i) = \prod_{i=1}^{B} \text{Cat} (\pi_{\phi_d}(\textbf{z}_i))
\end{equation}
The approximate posterior distribution of the global variable $\boldsymbol{\beta}$ is computed as a product of local contributions, one per datapoint. This strategy, as demonstrated by \cite{bouchacourt2018multi}, outperforms other approaches such as a mixture of local contributions, as it allows accumulating group evidence. For each sample, a NN encodes $\textbf{x}_i$ and the categorical parameters $\pi_{\phi_d}(\textbf{z}_i)$ into a local Gaussian:
\begin{equation}\label{eq:betagivenz}
q(\boldsymbol{\beta} | \textbf{X}, \textbf{Z}) = \mathcal{N} \left( \boldsymbol{\mu}_{\beta}, \boldsymbol{\Sigma}_{\beta}\right) = \prod_{i=1}^{B} \mathcal{N} \left( \boldsymbol{\mu}_{\phi_\beta}( [\textbf{x}_i, \pi_{\phi_d}(\textbf{z}_i)] ),\boldsymbol{\Sigma}_{\phi_\beta}( [\textbf{x}_i, \pi_{\phi_d}(\textbf{z}_i)] ) \right)
\end{equation}
If we denote by $\boldsymbol{\mu}_i$ and $\boldsymbol{\Sigma}_i$ the parameters obtained by networks $\boldsymbol{\mu}_{\phi_\beta}$ and $\boldsymbol{\Sigma}_{\phi_\beta}$, respectively, the parameters of the global Gaussian distribution are given, following \cite{bromiley2003products}, by:
\begin{equation}
\begin{gathered}
\boldsymbol{\Lambda}_{\beta} = \boldsymbol{\Sigma}_{\beta}^{-1} = \sum_{i=1}^B \boldsymbol{\Lambda}_i \\
\boldsymbol{\mu}_{\beta} = (\boldsymbol{\Lambda}_\beta)^{-1} \sum_{i=1}^B \boldsymbol{\Lambda}_i \boldsymbol{\mu}_i
\end{gathered}
\end{equation}
where $\boldsymbol{\Lambda}_\beta=\boldsymbol{\Sigma}_\beta^{-1}$ is the precision matrix, which we model as diagonal.
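A minimal sketch of this precision-weighted product of diagonal Gaussians, following the formulas above. The array layout (one row per local contribution, variances stored as diagonals) is an assumption for illustration.

```python
import numpy as np

def product_of_gaussians(mus, variances):
    """Combine B per-sample diagonal Gaussians N(mu_i, Sigma_i) into one
    global Gaussian via precision weighting (as in Bromiley, 2003):
    Lambda_beta = sum_i Lambda_i,  mu_beta = Lambda_beta^{-1} sum_i Lambda_i mu_i.
    mus, variances: arrays of shape (B, g) holding means and diagonal variances."""
    precisions = 1.0 / variances              # Lambda_i (diagonal entries)
    lam_beta = precisions.sum(axis=0)         # summed precision
    mu_beta = (precisions * mus).sum(axis=0) / lam_beta
    return mu_beta, 1.0 / lam_beta            # mean and diagonal covariance
```

For two equally confident 1-D contributions the product mean is their average and the variance is halved, which matches the intuition of accumulating group evidence.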
\subsection{Evidence Lower Bound}
Overall, the evidence lower bound reads as follows:
\begin{equation}\label{eq:elbo}
\begin{gathered}
\mathcal{L} (\theta, \phi ; \, \textbf{X}, \textbf{Z}, \textbf{d}, \boldsymbol{\beta} ) = \mathbb{E}_{q(\boldsymbol{\beta})} \left[ \mathcal{L}_i (\theta, \phi ; \, \textbf{x}_i, \textbf{z}_i,\textbf{d}, \boldsymbol{\beta}) \right]
- \mathbb{E}_{q( \textbf{d})} \left[ D_{KL} \left( q(\boldsymbol{\beta} | \textbf{X}, \textbf{Z}) \Vert p(\boldsymbol{\beta}) \right) \right]
\end{gathered}
\end{equation}
The resulting ELBO extends the ELBO of a standard GMVAE with a new regularizer for the global variable. Note that the ELBO for UG-VAE does not include extra hyperparameters to enforce disentanglement, unlike previous works such as $\beta$-VAE; thus, no extra validation is needed beyond the network architecture parameters, the number of clusters and the latent dimensions. We denote by $\mathcal{L}_i$ each local contribution to the ELBO:
\begin{equation}
\begin{gathered}
\mathcal{L}_i (\theta, \phi ; \, \textbf{x}_i, \textbf{z}_i,\textbf{d}, \boldsymbol{\beta}) =
\mathbb{E}_{q(\boldsymbol{d_i}, \textbf{z}_i)} \left[ \log p (\textbf{x}_i | \textbf{z}_i, d_i, \boldsymbol{\beta} ) \right] \\
- \mathbb{E}_{q(\boldsymbol{d_i})} \left[ D_{KL} \left( q(\textbf{z}_i | \textbf{x}_i) \Vert p(\textbf{z}_i | d_i , \boldsymbol{\beta}) \right) \right] - D_{KL} \left( q(d_i | \textbf{z}_i) \Vert p(d_i) \right)
\end{gathered}
\end{equation}
The first part of \eqref{eq:elbo} is an expectation over the global approximate posterior of the so-called local ELBO. This local ELBO differs from the vanilla ELBO proposed by \cite{kingma2013auto} in the regularizer for the discrete variable $d_i$: it comprises the usual reconstruction term for each sample and two KL regularizers, one for $\textbf{z}_i$ (expected over $d_i$) and one for $d_i$ itself. The second part of \eqref{eq:elbo} is a regularizer on the global posterior. The expectations over the discrete variable $d_i$ are tractable and thus marginalized analytically.
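The analytic marginalization over $d_i$ amounts to weighted sums over the $K$ components. A minimal sketch, assuming the per-component KL terms for $\textbf{z}_i$ have already been computed in closed form (function names are illustrative, not from the paper's code):

```python
import numpy as np

def expected_kl_z(q_d, kl_z_per_component):
    """E_{q(d_i|z_i)}[ KL(q(z_i|x_i) || p(z_i|d_i, beta)) ], marginalized
    analytically as a weighted sum over the K mixture components.
    q_d: (..., K) categorical probabilities; kl_z_per_component: (..., K)."""
    return (q_d * kl_z_per_component).sum(axis=-1)

def kl_categorical_uniform(q_d, eps=1e-10):
    """KL( Cat(q_d) || Cat(1/K) ) = sum_k q_k log(K q_k); zero for uniform q_d."""
    K = q_d.shape[-1]
    return (q_d * np.log(K * q_d + eps)).sum(axis=-1)
```

Both terms enter the local ELBO $\mathcal{L}_i$ with a negative sign, as in the equation above.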
In contrast with GMVAE (Figure \ref{fig:related} (b)), in UG-VAE $\boldsymbol{\beta}$ is shared by a group of observations, so the parameters of the mixture are the same for all the samples in a batch. In this manner, within each optimization step, the encoder $q(\boldsymbol{\beta} | \mathbf{X}, \mathbf{Z})$ only learns from the global information obtained from the product of Gaussian contributions of every observation, with the aim of configuring the mixture to improve the representation of each datapoint in the batch, by means of $p(\mathbf{Z} | \textbf{d}, \boldsymbol{\beta})$ and $p(\mathbf{X} | \mathbf{Z}, \boldsymbol{\beta})$. Hence, the mixture is controlled using global information. In contrast with ML-VAE (whose encoder $q(C_G | \mathbf{X})$ is also global, but whose model does not include a mixture), in UG-VAE the $\boldsymbol{\beta}$ encoder incorporates information about which component each observation belongs to, as the weights of the mixture inferred by $q(\textbf{d} | \mathbf{Z})$ are used to obtain $q(\boldsymbol{\beta} | \mathbf{X}, \mathbf{Z})$. Thus, while each cluster represents different local features, moving $\boldsymbol{\beta}$ affects all the clusters: modifying $\boldsymbol{\beta}$ has some effect on each local cluster. As training progresses, the encoder $q(\boldsymbol{\beta} | \mathbf{X}, \mathbf{Z})$ learns which information emerging from each batch of data allows the clusters to be moved in a way that increases the ELBO.
\section{Related work} \label{sec:related}
Non-i.i.d. deep generative models have recently been getting attention, but the literature is still scarce. First, we find VAE models that implement non-parametric priors: in \cite{gyawali2019improving} the authors make use of a global latent variable that induces a non-parametric Beta process prior, and more efficient variational mechanisms for this kind of IBP prior are introduced in \cite{xu2019variational}. Second, both \cite{tang2019correlated} and \cite{korshunova2018bruno} proposed non-i.i.d. exchangeable models by including correlation information between datapoints via an undirected graph. Finally, some other works rely on simpler generative models (compared to these previous approaches), including global variables with fixed-complexity priors, typically a multivariate Gaussian distribution, that aim at modelling the correlation between user-specified groups of correlated samples (e.g. images of the same class in MNIST, or faces of the same person). In \cite{bouchacourt2018multi} and \cite{hosoya2019group}, the authors apply weak supervision by grouping image samples by identity, and include in the probabilistic model a global latent variable for each of these groups, along with a local latent variable that models the distribution of each individual sample. Below we describe the two lines of research most relevant to our work.
\paragraph{VAEs with mixture priors.} Several previous works have demonstrated that incorporating a mixture in the latent space leads to significantly better models. In \cite{johnson2016composing} the authors introduce a latent GMM prior with nonlinear observations, where the means are learned and remain invariant with the data. The GMVAE proposal by \cite{dilokthanakul2016deep} aims at incorporating unsupervised clustering in deep generative models to increase interpretability. In the VAMP VAE model \cite{tomczak2018vae}, the authors define the prior as a mixture with components given by approximate variational posteriors conditioned on learnable pseudo-inputs. This approach leads to improved performance, avoiding typical local-optima difficulties that might be related to irrelevant latent dimensions.
\begin{figure}[htbp]
\centering
\subfigure[VAE]{\includegraphics[width=0.17\linewidth]{figs/vae}}
\subfigure[GMVAE]{\includegraphics[width=0.25\linewidth]{figs/gmvae}}
\subfigure[ML-VAE]{\includegraphics[width=0.32\linewidth]{figs/mlvae}}
\subfigure[NestedVAE]{\includegraphics[width=0.2\linewidth]{figs/nested}}
\caption{Comparison of four deep generative models. Dashed lines represent the graphical model of the associated variational family. The vanilla VAE (a), the GMVAE (b), and semi-supervised variants for grouped data: ML-VAE (c) and NestedVAE (d).}
\label{fig:related}
\end{figure}
\paragraph{Semi-supervised deep models for grouped data.} In contrast to the i.i.d. vanilla VAE model in Figure \ref{fig:related} (a), and its augmented version for unsupervised clustering, GMVAE, in Figure \ref{fig:related} (b), the graphical model of the Multi-Level Variational Autoencoder (ML-VAE) in \cite{bouchacourt2018multi} is shown in Figure \ref{fig:related} (c), where G denotes the number of groups. ML-VAE includes a local Gaussian variable $S_i$ that encodes style-related information for each sample, and a global Gaussian variable $C_G$ to model content shared within a group of samples. For instance, they feed their algorithm with batches of face images from the same person, modelling the content shared within the group that characterizes that person. This approach leads to learning disentangled representations at the group and observation levels, in a content-style fashion. Nevertheless, the groups are user-specified, hence resulting in a semi-supervised modelling approach. In \cite{vowels2020nestedvae} the authors use weak supervision to pair samples. They implement two outer VAEs with shared weights for the reconstruction, and a Nested VAE that reconstructs the latent representation of one from the other, modelling correlations across pairs of samples. The graphical model of Nested VAE is depicted in Figure \ref{fig:related} (d).
\section{Extended Experiments} \label{app:extended_exps}
\subsection{Extended results for Section 4.1: Unsupervised learning of global factors} \label{app:ext1}
To evaluate whether a fraction of the clusters inferred by UG-VAE encode visually interpretable global/local features, in Figure \ref{fig:clusters} we include the results for CelebA with $K=20$ clusters. We observe that a considerable proportion of the clusters captures disentangled generative factors. Moreover, considering the heterogeneity and variety of the generative factors of CelebA faces (up to 40 different attributes), increasing the number of clusters might lead to capturing more representative faces, and thus global generative factors modulated by $\boldsymbol{\beta}$. In Figure \ref{fig:clusters} we appreciate that, apart from skin color, beard or image contrast, other generative factors controlled by the global variable are hair style (remarkable for components 9, 16, 17 and 18), sex (components 4 and 14), and background color (components 4, 16 and 17).
\begin{figure}[htbp]
\centering
\subfigure[$d=0$]{\includegraphics[width=0.23\linewidth]{figs/sup/K0}} \,
\subfigure[$d=1$]{\includegraphics[width=0.23\linewidth]{figs/sup/K1}} \,
\subfigure[$d=2$]{\includegraphics[width=0.23\linewidth]{figs/sup/K2}} \,
\subfigure[$d=3$]{\includegraphics[width=0.23\linewidth]{figs/sup/K3}} \,
\subfigure[$d=4$]{\includegraphics[width=0.23\linewidth]{figs/sup/K4}} \,
\subfigure[$d=5$]{\includegraphics[width=0.23\linewidth]{figs/sup/K5}} \,
\subfigure[$d=6$]{\includegraphics[width=0.23\linewidth]{figs/sup/K6}} \,
\subfigure[$d=7$]{\includegraphics[width=0.23\linewidth]{figs/sup/K7}} \,
\subfigure[$d=8$]{\includegraphics[width=0.23\linewidth]{figs/sup/K8}} \,
\subfigure[$d=9$]{\includegraphics[width=0.23\linewidth]{figs/sup/K9}} \,
\subfigure[$d=10$]{\includegraphics[width=0.23\linewidth]{figs/sup/K10}} \,
\subfigure[$d=11$]{\includegraphics[width=0.23\linewidth]{figs/sup/K11}} \,
\subfigure[$d=12$]{\includegraphics[width=0.23\linewidth]{figs/sup/K12}} \,
\subfigure[$d=13$]{\includegraphics[width=0.23\linewidth]{figs/sup/K13}} \,
\subfigure[$d=14$]{\includegraphics[width=0.23\linewidth]{figs/sup/K14}} \,
\subfigure[$d=15$]{\includegraphics[width=0.23\linewidth]{figs/sup/K15}} \,
\subfigure[$d=16$]{\includegraphics[width=0.23\linewidth]{figs/sup/K16}} \,
\subfigure[$d=17$]{\includegraphics[width=0.23\linewidth]{figs/sup/K17}} \,
\subfigure[$d=18$]{\includegraphics[width=0.23\linewidth]{figs/sup/K18}} \,
\subfigure[$d=19$]{\includegraphics[width=0.23\linewidth]{figs/sup/K19}} \,
\caption{Sampling from UG-VAE for CelebA. We include samples from each of the K = 20 clusters.}
\label{fig:clusters}
\end{figure}
To visually highlight the advantage of capturing global correlations among samples in UG-VAE with respect to the cited related models, we include in Figure \ref{fig:interp_betaVAE} an interpolation in the latent space of $\beta$-VAE, following the approach of experiment 4.1 in the paper. We explore the latent space from $\textbf{z} = [-1, -1, ..., -1]$ to $\textbf{z} = [1, 1, ..., 1]$, given that the prior is an isotropic Gaussian. Only one row is included, as $\beta$-VAE has no global space. In this case, moving diagonally through the latent space starts from a blonde woman and ends at a brunette woman with the same face angle. Thus, the local space is in charge of encoding both content and style aspects. Although the $\beta$-VAE authors analyze the disentanglement in each dimension of the latent space, we do not study whether each dimension of $\textbf{z}$ represents an interpretable generative factor in UG-VAE, as it is out of the scope of this work. The novelty lies in the fact that, apart from the local disentanglement, our model adds an extra point of interpretability through the disentanglement in the global space.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{figs/sup/interp_betaVAE.pdf}
\caption{Interpolation in the prior latent space of $\beta$-VAE with $\beta=10$, using the same network architecture as in the local part of UG-VAE. The interpolation consists of 7 steps from $\textbf{z} = [-1, -1, ..., -1]$ to $\textbf{z} = [1, 1, ..., 1]$. }
\label{fig:interp_betaVAE}
\end{figure}
To justify the configuration used to obtain the samples shown in Figure 3 of the paper (fixing $d$ for the whole interpolation in the $\mathbf{z}$ and $\boldsymbol{\beta}$ spaces), we include in Figure \ref{fig:exp_1_without_fix_d} the interpolation process when $d$ is not fixed. Hence, for each row, we sample $d$ and interpolate $\mathbf{z}$ for the selected component. The global interpolation remains the same, but which global information is controlled by $\boldsymbol{\beta}$ becomes hard to interpret with this setup.
\begin{figure}[htbp]
\centering
\subfigure[CelebA]{\includegraphics[width=0.4\linewidth]{figs/sup/exp_1_without_fix_d_a.pdf} } \qquad
\subfigure[MNIST]{\includegraphics[width=0.4\linewidth]{figs/sup/exp_1_without_fix_d_b.pdf}}
\caption{Sampling from UG-VAE for CelebA (left) and MNIST (right).
We include samples for CelebA with $K=20$ and MNIST with $K=10$. We sample from $p(d)$ to obtain a cluster for each row. The information encoded in the global $\boldsymbol{\beta}$ remains hardly interpretable with this setup.}
\label{fig:exp_1_without_fix_d}
\end{figure}
\subsection{Extended results for Section 4.2: Domain Alignment} \label{app:exp2}
Here we include the results of an interpolation in the local and global spaces obtained when the number of components is $K=1$, i.e., using the ML-VAE approach. As shown in Figure \ref{fig:align}, when ML-VAE is trained with randomly grouped data, the global space is not capable of capturing correlations between datasets, and the local space is in charge of encoding the transition from CelebA to 3D FACES, which takes place within each row.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\linewidth]{figs/sup/align.pdf}
\caption{ML-VAE interpolation in the local (columns) and global (rows) posterior spaces, fusing the CelebA and 3D FACES datasets.}
\label{fig:align}
\end{figure}
To reinforce the robustness of UG-VAE in domain alignment, we include in Figure \ref{fig:alignGMVAE} the results of evaluating GMVAE with two clusters ($K=2$) in a setup similar to that in Section 4.2. A map of the reduced latent space (using t-SNE) of GMVAE is included in Figure \ref{fig:alignmapGMVAE}, where each point represents an encoded image. Figure \ref{fig:alignGMVAE} shows the interpolation between images of two different domains. As GMVAE does not have global variables, the interpolation applies only to the latent encodings in $\textbf{z}$. Note that the interpolation is merely a gradual overlap between the two images; the model is not able to correlate the features of both images, regardless of their domain. With UG-VAE, on the other hand, by keeping the global variable fixed and interpolating in the local one, we maintain the domain but translate the features of one image into the other. This analysis corroborates that the model finds this type of correlation in a clearly separated way.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\linewidth]{figs/sup/alignmapGMVAE.pdf}
\caption{Interpolation map (with t-SNE) of the latent space of a GMVAE with $K=2$ after performing domain alignment, using the same network architecture as in the local part of UG-VAE. We interpolate between the encodings of images from the CelebA and 3D FACES datasets. }
\label{fig:alignmapGMVAE}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{figs/sup/alignGMVAE.pdf}
\caption{Interpolation in the latent space of GMVAE with $K=2$ for performing domain alignment, using the same network architecture as in the local part of UG-VAE. We interpolate between the encodings of images from the CelebA and 3D FACES datasets. }
\label{fig:alignGMVAE}
\end{figure}
\subsection{Extended results for Section 4.3: Representation of structured non-trivial data batches} \label{app:exp3}
In this extension, we show another evaluation of the capacity of UG-VAE to capture global structure. On this occasion, after training the model with randomly picked digits from MNIST, we compute the posterior of structured batches containing only even numbers, only odd numbers, numbers from the Fibonacci series, and prime numbers. These grouped batches do not share strong generative factors that influence the pixel distributions (as the CelebA groups in experiment 3.3 do); the only global information in this example is the frequency of appearance of each digit in each batch type. In Figure \ref{fig:series} we show the 2D t-SNE projection of the posterior global latent variable $\boldsymbol{\beta}$ distributions. We observe that UG-VAE is able to discriminate among them in the global space.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\linewidth]{figs/sup/series.pdf}
\caption{2D t-SNE projection of the UG-VAE $\boldsymbol{\beta}$ posterior distribution of structured batches of 128 MNIST images. UG-VAE is trained with completely random batches of 128 train images.}
\label{fig:series}
\end{figure}
\section{Networks architecture} \label{app:networks}
In this section we detail the architectures and parameters used to train the models presented in the main paper. An extended overview is included in Table \ref{tab:nets}.
\afterpage{%
\clearpage%
\thispagestyle{empty}%
\begin{landscape}%
\begin{table}[htbp]
\caption{Architecture, parameters and hyperparameters for all the models trained for the experiments presented in the paper. \newline}
\label{tab:nets}
\begin{center}
\begin{tabular}{cllllll}
\multicolumn{1}{c}{\multirow{2}{*}{\textbf{Dataset}}} &\multicolumn{4}{c}{\textbf{Architecture}} & \multirow{2}{*}{\textbf{Params}}
& \multirow{2}{*}{\textbf{Hyperparams}} \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{\textbf{Pre-encoder}} & \multicolumn{1}{c}{\textbf{Local encoder}} & \multicolumn{1}{c}{\textbf{Global encoder}} & \textbf{Decoder} & & \\ \hline
\textbf{CelebA}
& \begin{tabular}[c]{@{}l@{}}$\textbf{h}$: \\ 5 CNN layers\\ Filters: 32, 32, 64, 64, 256\\ Stride: all 4\\ Padding: All 1\\ ReLU activation\\ Batch normalization\end{tabular}
& \begin{tabular}[c]{@{}l@{}}$\phi_z$:\\ Linear layer: \\ $256\rightarrow2d$\\ First half $\boldsymbol{\mu}_z$\\ Second half diag($\boldsymbol{\Sigma}_z$)\\ \\ $\phi_d$:\\ Linear layers: \\ $d\rightarrow256\rightarrow K$\\ Tanh activation\\ Softmax output\end{tabular}
& \begin{tabular}[c]{@{}l@{}}$\phi_B$:\\ Linear layer: \\$256+K\rightarrow2g$ \\ First half $\boldsymbol{\mu}_B$\\ Second half diag($\boldsymbol{\Sigma}_B$)\end{tabular}
& \begin{tabular}[c]{@{}l@{}}$\theta_z$:\\ Linear layers: \\ $g\rightarrow256\rightarrow 2d$\\ First half $\boldsymbol{\mu}_z$\\ Second half diag($\boldsymbol{\Sigma}_z$)\\ \\$\theta_x$:\\ Linear layer: \\$d+g\rightarrow256$\\ 5 transpose CNN layers\\ Filters: 64, 64, 32, 32, 3\\ Stride: 1, 4, 4, 4, 4\\ Padding: 0, 1, 1, 1, 1\\ ReLU activation\\ Sigmoid output\end{tabular}
& \begin{tabular}[c]{@{}l@{}}d=20\\ $g=50$\\ $K=20$\end{tabular}
& $\sigma_x$ = 0.2 \\ \hline
\textbf{MNIST}
& \begin{tabular}[c]{@{}l@{}}$\textbf{h}$:\\ Linear layer: \\$28*28\rightarrow256$\\ ReLU activation\end{tabular}
& \begin{tabular}[c]{@{}l@{}}$\phi_z$:\\ Linear layer: \\ $256\rightarrow2d$\\ First half $\boldsymbol{\mu}_z$\\ Second half diag($\boldsymbol{\Sigma}_z$)\\ \\ $\phi_d$: \\ Linear layers: \\ $d\rightarrow256\rightarrow K$\\ Tanh activation\\ Softmax output\end{tabular}
& \begin{tabular}[c]{@{}l@{}}$\phi_B$:\\ Linear layer: \\$256+K\rightarrow2g$\\ First half $\boldsymbol{\mu}_B$\\ Second half diag($\boldsymbol{\Sigma}_B$)\end{tabular}
& \begin{tabular}[c]{@{}l@{}}$\theta_z$:\\ Linear layers: \\ $g\rightarrow256\rightarrow2d$\\ First half $\boldsymbol{\mu}_z$\\ Second half diag($\boldsymbol{\Sigma}_z$)\\ \\ $\theta_x$:\\ Linear layers: \\ $d+g\rightarrow256\rightarrow28*28$\\ ReLU activation\\ Sigmoid output\end{tabular}
& \begin{tabular}[c]{@{}l@{}}$d=10$\\ $g=20$\\ $K=10$\end{tabular}
& $\sigma_x = 0.2$ \\ \hline
\textbf{CelebA + 3D FACES}
& \multicolumn{4}{c}{Same as for CelebA} & \begin{tabular}[c]{@{}l@{}}$d=40$\\ $g=40$\\ $K=40$ \end{tabular}
& $\sigma_x$ = 0.2 \\ \hline
\textbf{3D Cars-3D Chairs}
& \multicolumn{4}{c}{Same as for CelebA} & \begin{tabular}[c]{@{}l@{}}$d=20$\\ $g=20$\\ $K=20$ \end{tabular}
& $\sigma_x$ = 0.2 \\ \hline
\textbf{3D Cars-Cars}
& \multicolumn{4}{c}{Same as for CelebA} & \begin{tabular}[c]{@{}l@{}}$d=20$\\ $g=50$\\ $K=20$ \end{tabular}
& $\sigma_x$ = 0.2 \\ \hline
\end{tabular}
\end{center}
\end{table}
\end{landscape}
\clearpage%
}
\label{sec:intro}
The properties of the gas in more than two-thirds of the galaxy cluster volume
are still largely unknown.
In order to further improve our use of clusters as high-precision cosmological tools, it is therefore necessary to gain insight into the thermodynamics of their outer regions
\citep[see][and references therein]{2011MSAIS..17...47E}.
The surface brightness distribution in clusters outskirts has been studied with {\it ROSAT} PSPC \citep[e.g.][and references therein]{eckert12}
and with {\it Chandra} \citep[see][and references therein]{2009A&A...496..343E}. A step forward has been the recent observation of a handful of nearby clusters with the Japanese satellite {\it Suzaku} that, despite the relatively poor PSF and small field of view
of its X-ray imaging spectrometer (XIS), benefits from the modest background associated
with its low-Earth orbit
(see results on PKS0745-191, \citealt{geo09}; A2204, \citealt{2009A&A...501..899R};
A1795, \citealt{2009PASJ...61.1117B}; A1413, \citealt{2010PASJ...62..371H}; A1689, \citealt{2010ApJ...714..423K}; Perseus, \citealt{si11}; Hydra A, \citealt{2012arXiv1203.1700S}; see also \citealt{2011arXiv1112.3030A}).
The first results from {\it Suzaku} indicate a flattening and sometimes even an inversion of the entropy profile
moving outwards. The infall of cool gas from the large-scale structure might cause the assumption of hydrostatic equilibrium to break down in these regions, which could have important implications on cluster mass measurements \citep[][]{rasia04,lau09,bu10}. However, as a consequence of the small field of view (and of the large solid angle
covered by the bright nearby clusters), observations
along a few arbitrarily chosen directions often yield very different results.
In addition to instrumental effects \citep[e.g.][]{eck11}, or non-gravitational effects \citep[e.g.][]{ro06,lapi10,ma11,2010ApJ...725...91B,scienzo,bode12}, there are "simpler" physical
reasons to expect that properties of the ICM derived close to $\sim R_{\rm 200}$ may be more complex than what is expected from idealized cluster models.
One effect is cluster triaxiality and asymmetry, which may cause variations in
the ICM properties along different
directions, due to the presence of large-scale filaments in particular sectors of each cluster \citep[e.g.][]{va11scatter,eckert12}, or to the cluster-to-cluster variance related to the surrounding environment \citep[e.g.][]{bu10}.
A second mechanism related to simple gravitational physics is
gas clumping, and its possible variations across different directions from the cluster centre \citep[][]{1999ApJ...520L..21M,nala11}.
In general, gas clumping may constitute a source of uncertainty in the derivation of properties of galaxy clusters atmospheres since a significant part of detected photons
may come from the clumpiest structures of the ICM, which may not be fully representative
of the underlying large-scale distribution of gas \citep[e.g.][]{ro06,va11scatter}.
Indeed, a significant fraction of the gas mass in cluster outskirts may be
in the form of dense gas clumps, as suggested by recent simulations \citep[][]{nala11}. In such clumps the emissivity of the gas is high, leading to an overestimation of the gas density if the assumption of constant density in each shell is made. The recent results
of \citet{nala11} and \citet{eckert12} show that the
treatment of the gas clumping factor slightly improves the agreement
between simulations and observed X-ray profiles.
The gas clumping factor can also bias the derivation of the total hydrostatic gas mass in galaxy clusters at the $\sim 10$ percent level \citep[][]{1999ApJ...520L..21M},
and it may as well bias the projected temperature low \citep[][]{2005ApJ...618L...1R}. Theoretical models of AGN feedback also suggest that overheated clumps of gas may lead to a more efficient distribution of energy within cluster cores \citep[][]{2011MNRAS.415.2566B}.
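The clumping factor referred to here is commonly defined as $C = \langle \rho^2 \rangle / \langle \rho \rangle^2$ within a radial shell; under the constant-density-shell assumption, the X-ray-inferred density is then biased high by $\sqrt{C}$. A minimal sketch of this standard estimator (the shell-binning and weighting conventions of the actual analysis are not reproduced here):

```python
import numpy as np

def clumping_factor(rho):
    """Gas clumping factor C = <rho^2> / <rho>^2 over the cells of a
    radial shell. C = 1 for perfectly smooth gas; C > 1 means the
    X-ray-derived density is overestimated by a factor sqrt(C)."""
    rho = np.asarray(rho, dtype=float)
    return np.mean(rho ** 2) / np.mean(rho) ** 2
```

For example, a shell containing equal volumes of density 1 and 3 (arbitrary units) gives $C = 1.25$, i.e. a $\sim 12$ percent density overestimate.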
Given that the effect of gas clumping may be {\it resolved} or {\it unresolved} by real X-ray telescopes depending on their effective resolution
and sensitivity, in this paper we aim at addressing both kinds of effects, with an extensive analysis of a sample of massive ($\sim 10^{15} M_{\odot}$) galaxy clusters simulated at high spatial and mass resolution (Sec.\ref{sec:simulations}).
First, we study in Sec.\ref{subsec:clumping} the bias potentially present in cluster profiles derived without resolving the gas substructures. Second,
in Sec.\ref{subsec:clumps} we derive the observable distribution of
X-ray bright clumps, assuming realistic resolution and sensitivity
of several X-ray telescopes.
We test the robustness of our results with changing resolution and with additional non-gravitational processes (radiative cooling, cosmic rays, AGN feedback) in Sec.3.3.
Our discussion and conclusions are given in Sec.\ref{sec:conclusions}.
\begin{figure}
\includegraphics[width=0.45\textwidth]{images/2d_eks_new.ps}
\caption{2-dimensional power spectra of X-ray maps for three simulated clusters at $z=0$ (E1, E3B and E15B, as in Fig.\ref{fig:fig1}). Only one projection is considered. The long-dashed lines show the power spectra of the total X-ray image of each cluster, while the solid lines show the spectrum of each image after the average X-ray profile has been
removed. Zero-padding has been considered to deal with the
non-periodic domain of each image. The spectra are given in arbitrary code units, but the relative difference in normalization of each
cluster spectrum is kept. The vertical grey line shows our filtering scale to extract X-ray clumps within each image.}
\label{fig:pks}
\end{figure}
\begin{figure*}
\includegraphics[width=0.95\textwidth]{images/clumps_maps_3.ps}
\caption{Top panels: X-ray flux in the [0.5-2] keV (in [$\rm erg / (s \cdot cm^2)$]) of three simulated clusters of our sample at z=0 (E15B-relax, E1-post merger and E3B-merging). Bottom panels: X-ray flux of clumps identified by our procedure (also highlighted with white contours). The inner and outer projected area excluded from our analysis have been shadowed. The area shown within each panel is $\sim 3 \times 3 ~\rm R_{200}$ for each object.}
\label{fig:fig1}
\end{figure*}
\begin{table}
\begin{center}
\caption{Main characteristics of the simulated clusters at $z=0$.
Column 1: identification number; 2:
total virial mass ($M_{vir}=M_{\rm DM}+M_{gas}$); 3: virial radius ($R_{v}$); 4: dynamical classification: RE=relaxed, ME=merging or MM=major merger; 5: approximate redshift of the last major merger event.}
\begin{tabular}{c|c|c|c|c}
ID & $M_{vir}$ & $R_{v}$ & dyn.state & redshift \\
& [$10^{15}M_{\odot}$] & [$Mpc$] & & $z_{\rm MM}$\\
\hline
E1 & 1.12 & 2.67 & MM & 0.1\\
E2 & 1.12 & 2.73 & ME & - \\
E3A & 1.38 & 2.82 & MM & 0.2 \\
E3B & 0.76 & 2.31 & ME & - \\
E4 & 1.36 & 2.80 & MM & 0.5\\
E5A & 0.86 & 2.39 & ME & -\\
E5B & 0.66 & 2.18 & ME & -\\
E7 & 0.65 & 2.19 & ME & - \\
E11 & 1.25 & 2.72 & MM & 0.6\\
E14 & 1.00 & 2.60 & RE & -\\
E15A & 1.01 & 2.63 & ME & -\\
E15B & 0.80 & 2.36 & RE & -\\
E16A & 1.92 & 3.14 & RE & -\\
E16B & 1.90 & 3.14 & MM & 0.2 \\
E18A & 1.91 & 3.14 & MM & 0.8 \\
E18B & 1.37 & 2.80 & MM & 0.5\\
E18C & 0.60 & 2.08 & MM & 0.3 \\
E21 & 0.68 & 2.18 & RE & -\\
E26 & 0.74 & 2.27 & MM & 0.1\\
E62 & 1.00 & 2.50 & MM & 0.9 \\
\end{tabular}
\end{center}
\end{table}
\section{Cluster simulations}
\label{sec:simulations}
The simulations analysed in this work were produced with the
Adaptive Mesh Refinement code {\small ENZO 1.5}, developed by the Laboratory for Computational
Astrophysics at the University of California in San Diego
{\footnote {http://lca.ucsd.edu}} \citep[e.g.][and references therein]{no07,co11}.
We simulated twenty galaxy clusters with masses in the range $6 \cdot 10^{14} \leq M/M_{\odot} \leq 3 \cdot 10^{15}$, extracted from a total cosmic volume of
$L_{\rm box} \approx 480$ Mpc/h. With the use of a nested grid approach we achieved high mass and spatial resolution in the region of cluster formation: $m_{\rm dm}=6.76 \cdot 10^{8} M_{\odot}$ for the DM particles and $\sim 25 ~\rm kpc/h$ in most of the cluster volume inside the "AMR region" (i.e. $\sim 2-3 ~R_{\rm 200}$ from the cluster centre, see \citealt{va10kp,va11nice,va11turbo} for further details).
We assumed a concordance $\Lambda$CDM cosmology, with $\Omega_0 = 1.0$, $\Omega_{\rm B} = 0.0441$, $\Omega_{\rm DM} =
0.2139$, $\Omega_{\Lambda} = 0.742$, Hubble parameter $h = 0.72$ and
a normalization for the primordial density power spectrum of $\sigma_{8} = 0.8$. Most of the runs we present in this work (Sec.3.1-3.2) neglect radiative cooling, star formation and AGN feedback processes.
In Sec.3.3, however, we discuss additional runs where the following non-gravitational processes are modelled: radiative cooling, thermal feedback from AGN, and pressure feedback from cosmic ray particles (CR) injected at cosmological shock waves.
For consistency with our previous analysis on the same sample of galaxy clusters \citep[][]{va10kp,va11turbo,va11scatter}, we divided our sample in dynamical classes based on the total matter accretion history of each halo for $z \leq 1.0$.
First, we monitored the time evolution of the DM+gas mass for every object inside the "AMR region" in the range $0.0 \leq z \leq 1.0$.
Considering a time lapse of $\Delta t=1~\rm Gyr$, "major merger" events are detected as
total matter accretion episodes where $M(t+\Delta t)/M(t)-1>1/3$.
The systems with a lower accretion rate were further divided
by measuring the ratio between the total kinetic energy of
gas motions and the thermal energy inside the virial radius
at $z=0$, since this parameter provides an indication of the dynamical
activity of a cluster \citep[e.g.][]{TO97.2,2006MNRAS.369L..14V}.
Using this proxy, we defined as
"merging" systems those objects that present an energy ratio $>0.4$, but did not
experience a major merger in their past (e.g. they show evidence
of ongoing accretion with a companion of comparable size, but
the cores of the two systems have not yet encountered each other). The remaining
systems were classified as "relaxed".
According to the above classification scheme, our sample presents 4 relaxed
objects, 6 merging objects
and 10 post-merger objects.
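The classification procedure described above can be summarized in a short sketch, using the thresholds quoted in the text; the mass history is assumed to be sampled at $\Delta t = 1$ Gyr intervals over $0 \leq z \leq 1$, and the function name is illustrative.

```python
def classify_cluster(mass_history, energy_ratio):
    """Dynamical classification (sketch of the rules in the text):
    'MM' (major merger) if the total DM+gas mass inside the AMR region
    grew by more than 1/3 within any 1 Gyr interval since z=1;
    otherwise 'ME' (merging) if E_kin/E_therm > 0.4 at z=0,
    else 'RE' (relaxed). mass_history: masses sampled every 1 Gyr."""
    for m_now, m_next in zip(mass_history, mass_history[1:]):
        if m_next / m_now - 1.0 > 1.0 / 3.0:
            return "MM"
    return "ME" if energy_ratio > 0.4 else "RE"
```

This makes explicit that the three classes are mutually exclusive and that the major-merger criterion takes precedence over the energy-ratio cut.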
Based on our further analysis of this sample, this classification
mirrors different levels of dynamical activity in the
subgroups: relaxed systems on average host weaker shocks \citep[][]{va10kp}, are characterized by the lowest turbulent-to-thermal energy ratios \citep[][]{va11turbo}, and present the smallest amount of azimuthal scatter in the
gas properties \citep[][]{va11scatter,eckert12}.
In \citet{va11scatter} the same sample was also divided based on the
analysis of the power ratios from the multi-pole decomposition of the X-ray surface
brightness images ($P_3/P_0$), and the centroid shift ($w$), as described by \citet{boh10}. These morphological parameters of projected X-ray emission
maps were measured inside the innermost projected $1~\rm Mpc^2$. This decomposes our sample into 9 "non-cool-core-like" (NCC)
systems and 11 "cool-core-like" (CC) systems, once fiducial
thresholds for the two parameters \citep[as in][]{cassano10} are
set. We note that (with only one exception) the NCC-like class almost perfectly overlaps with the
class of post-merger systems of \citet{va10kp}, while the CC-like class contains the relaxed and merging classes of \citet{va10kp}.
Table 1 lists all simulated clusters, along with their main parameters measured at $z=0$.
All objects of the sample have a final total mass $> 6 \cdot 10^{14}M_{\odot}$, 12 of them having a total mass $>10^{15}M_{\odot}$.
In the last column, we give the classification of the dynamical state of each cluster at $z=0$, and the estimated
epoch of the last major merger event (for post-merger systems).
\subsection{X-ray emission}
\label{subsec:xray}
We simulated the X-ray flux ($S_X$) from our clusters,
assuming a single temperature plasma in ionization equilibrium within each 3D cell. We use the APEC emission model \citep[e.g.][]{2001ApJ...556L..91S} to compute the cooling function $\Lambda(T,Z)$ (where $T$ is the temperature and $Z$ the gas metallicity) inside a given energy band, including continuum and line emission.
For each cell in the simulation we assume a constant metallicity of $Z/Z_{\odot}=0.2$ (which is a good approximation of the observed metal abundance in cluster outskirts, \citealt{2008A&A...487..461L}). While line cooling may be to first approximation not very relevant for the global description of the hot ICM phase ($T \sim 10^{8} K$), it may become significant for the emission from clumps,
because their virial temperature can be lower than that of the host cluster
by a factor $\sim 10$. Once the metallicity and
the energy band are specified, we compute for each cell the X-ray luminosity, $S_X=n_H n_e \Lambda(T,Z) dV$, where $n_H$ and $n_e$ are the number density of hydrogen and electrons, respectively, and $dV$ is the volume of the cell.
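The per-cell evaluation of $S_X$ can be sketched as follows (a hypothetical sketch: \texttt{cooling\_function} is a placeholder standing in for the tabulated APEC cooling function $\Lambda(T,Z)$ in the chosen band):

```python
def xray_luminosity(n_H, n_e, T, dV, cooling_function):
    """X-ray luminosity of one cell: S_X = n_H * n_e * Lambda(T, Z) * dV.

    n_H, n_e : hydrogen and electron number densities [cm^-3]
    T        : gas temperature [K]
    dV       : cell volume [cm^3]
    cooling_function : callable Lambda(T, Z), a stand-in for the APEC model
    """
    Z = 0.2  # fixed metallicity in solar units, as assumed in the text
    return n_H * n_e * cooling_function(T, Z) * dV
```

The total X-ray flux of a cluster is then the sum of this quantity over all cells along the line of sight.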
\subsection{Definition of gas clumps and gas clumping factor}
\label{subsec:definition}
Although the notions of clumps and of the gas clumping factor are often used in the
recent literature, an unambiguous definition of them is non-trivial
{\footnote{While this article was under review, \citet{2012arXiv1210.6706Z} published a work in which they also characterize the level of inhomogeneities in the simulated ICM, based on the radial median value of gas density and pressure. They study the impact of cluster triaxiality and gas clumping on the derivation of global cluster properties such as the gas mass and the gas mass fraction, reporting conclusions in qualitative agreement with our work.}}.
In this work we distinguish between {\it resolved} gas clumps (detected with a filtering of the simulated X-ray maps) and {\it unresolved}
gas clumping, which we consider as an unavoidable source of bias in the derivation of global cluster parameters from radial profiles. From the theoretical point
of view, gas clumps represent the peaks in the distribution of the gas
clumping factor, and can be identified as single "blobs" seen in projection on the cluster atmosphere if they are bright enough to be detected.
While resolved gas clumps are detected with a 2-dimensional filtering of the
simulated X-ray images (based on their brightness contrast with the
smooth X-ray cluster emission), the gas clumping factor is usually
estimated in the literature within radial shells from the cluster
centre. The two approaches are not fully equivalent, and in this
paper we address both, showing that in simple non-radiative runs
they are closely related phenomena, and present similar dependence on the cluster dynamical
state.\\
\subsubsection{Resolved gas clumps}
\label{subsubsec:clumps_det}
We identify bright gas clumps in our cluster sample by post-processing 2-dimensional mock observations of our
clusters. We do not consider
instrumental effects of real observations (e.g. the degrading
spatial resolution at the edge of the field of view), in order to provide an estimate of the theoretical maximum
amount of bright gas clumps in simulations. Instrumental
effects depend on the specific features of the different
telescopes, and are expected to further reduce the rate of
detection of such X-ray clumps in real observations.
In this section we present our
simple technique to preliminarily extract gas substructures in our
projected maps, using a rather large scale ($300 ~kpc/h$).
"X-ray bright gas clumps", in our terminology, correspond to the {\it observable} part of these
substructures, namely their small ($\leq 50 ~\rm kpc/h$) compact cores, which may be detected within the host cluster atmosphere according
to the effective resolution of the observations (Sec.\ref{subsec:clumps}).
Based on the literature \citep[e.g.][]{do05,do09}, the most massive substructures in the ICM have a (3-dimensional) linear scale smaller than
$300-500 ~\rm kpc/h$.
We investigated the typical projected size of gas substructures by computing the 2-dimensional power spectra of $S_X$ for our cluster images. In order to remove the signal from the large-scale gas atmosphere we subtracted the average 2-dimensional cluster profile from each map. We also applied a
zero-padding to take into account
the non-periodicity of the domain (see \citealt{va11turbo} and references therein).
The results are shown in Fig.\ref{fig:pks} for three representative clusters of the sample at $z=0$: the relaxed cluster E15B, the post-merger cluster E1 and the merging system E3B-merging (see also
Fig.\ref{fig:fig1}).
The long-dashed profiles show the power spectra of the
X-ray image of each cluster, while the solid lines show the spectra
after the average 2-dimensional profile of each image has been
removed. The power spectra show that most of
the residual substructures in X-ray are characterized by
a spatial frequency of $k>10-20$, corresponding
to typical spatial scales $l_0 < 300-600 ~\rm kpc/h$, similar
to three-dimensional results \citep[][]{do09}. A dependence on the dynamical state of the host cluster is also visible
from the power spectra: the post-merger and the merging clusters have residual X-ray emission with more power
also on smaller scales, suggesting the presence of enhanced
small-scale structures in such systems. We will investigate this issue in more detail in the next sections.
To study the fluctuations of the X-ray flux as a result of the gas clumps we compute maps of
residuals with respect to the X-ray emission smoothed over the scale $l_0$. The map of clumps is then built from all pixels
of the residual map where the condition $S_X/S_{\rm X,smooth} > \alpha$ is satisfied.
By imposing $l_0=300 ~\rm kpc/h$ ($\sim 12$ cells) and $\alpha=2$, all evident gas substructures in the projected X-ray images are captured by the algorithm. We verified that the adoption of a larger or a smaller value of $l_0$ by a factor $\sim 2$ does not affect
our final statistics (Sec.\ref{subsec:clumps}) in a significant way.
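The filtering can be sketched as below; the boxcar smoothing kernel is our assumption, since the text only specifies the smoothing scale $l_0$ and the contrast threshold $\alpha$:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def clump_mask(sx_map, l0_pix=12, alpha=2.0):
    """Flag pixels whose X-ray flux exceeds alpha times the map smoothed
    on the scale l0 (~300 kpc/h, ~12 cells at the highest AMR level)."""
    smooth = uniform_filter(sx_map, size=l0_pix)
    return sx_map / np.maximum(smooth, 1e-30) > alpha
```

Applied to a map with a single bright pixel on a flat background, only that pixel is flagged, while smooth large-scale gradients are divided out.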
In Figure \ref{fig:fig1} we show the projected X-ray flux in the [0.5-2] keV energy range from three representative clusters of the sample (E1-post merger, E3B-merging and E15B-relaxed, in the top panels) at $z=0$, and the corresponding maps of clumps detected by our filtering procedure (lower panels).
The visual inspection of the maps shows that our filtering procedure efficiently
removes large-scale filaments around each cluster, and identifies blob-like features in the projected X-ray map. The relaxed system shows evidence
of enhanced gas clumping along its major
axis, which is aligned with the surrounding large-scale filaments.
Indeed, although this system is relaxed based on its X-ray
morphology within $R_{\rm 200}/2$ (based on the measure of X-ray
morphological parameters, \citealt{cassano10}, and on its very low
value of gas turbulence, \citealt{va11turbo}), its large-scale
environment shows ongoing filamentary accretions and small satellites, which is a rather common feature even for our relaxed clusters outside $R_{\rm 500}$.
The presence of clumps in the other two systems is more evident, and they have a more symmetric distribution. We will quantify the differences of the gas clumping
factor and of the distribution of bright clumps in these systems in the
following sections.
\subsubsection{The gas clumping factor}
\label{subsubsec:clumping_factor}
The definition of the gas clumping factor, $C_{\rho}$, follows from the computation
of the cluster density profile averaged within radial shells:
\begin{equation}
C_{\rho}(R) \equiv \sqrt{\frac {\int_{\Omega} {\rho^{2}(R) d\Omega}} {(\int_{\Omega} {\rho(R) d\Omega})^2}},
\label{eq:clumping}
\end{equation}
\noindent where at each radial bin from the cluster centre (which we define according to the peak of the gas mass density), $R$, we compute the angular average within radial shells (with constant width of 1 cell at the highest AMR level, equivalent to $25 ~\rm kpc/h$) from the centre of clusters out to $\approx 1.5 ~\rm R_{\rm 200}$. This definition of the gas clumping factor is often used to interpret observed departures from the smooth gas density, suggested
by some observations \citep[][]{si11,nala11}.
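A direct discretization of Eq.\ref{eq:clumping} on a uniform grid can be sketched as follows (assuming, as in the text, shells of one-cell width centred on the gas density peak):

```python
import numpy as np

def clumping_profile(rho, dr=1.0):
    """C_rho(R) = sqrt(<rho^2> / <rho>^2) in spherical shells of width dr
    (in cells) around the density peak of the 3D gas density field rho."""
    centre = np.unravel_index(np.argmax(rho), rho.shape)
    idx = np.indices(rho.shape)
    r = np.sqrt(sum((i - c) ** 2 for i, c in zip(idx, centre)))
    shell = (r / dr).astype(int)
    counts = np.maximum(np.bincount(shell.ravel()), 1)
    mean_rho = np.bincount(shell.ravel(), rho.ravel()) / counts
    mean_rho2 = np.bincount(shell.ravel(), (rho ** 2).ravel()) / counts
    return np.sqrt(mean_rho2 / np.maximum(mean_rho ** 2, 1e-60))
```

A perfectly smooth atmosphere gives $C_{\rho}=1$ in every shell; any inhomogeneity within a shell raises it above unity.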
However, averaging within spherical shells to compute $C_{\rho}(R)$ is a procedure prone to errors in the case of mergers or asymmetries in the cluster atmosphere, because these phenomena break the spherical symmetry of the cluster and a spherical average might not be a good approximation. A priori, the presence of a large gas clumping
factor (measured as in Eq.\ref{eq:clumping}) and the increased
presence of dense gas clumps may not be associated phenomena. However, as we will see in the following Sections, richer statistics
of gas clumps and the presence of large-scale asymmetries are closely
related phenomena, which are regularly met in perturbed galaxy clusters. Indeed, the removal of the radial average of the X-ray
profile or the filtering of $\leq 300 ~\rm kpc/h$ structures
in X-ray images in Fig.\ref{fig:fig1} visibly highlights the same
structures, because all bright gas clumps are found within
the sectors where the largest departures from the azimuthal profile
are present (i.e. due to the presence of large-scale filaments).
In the next Sections, we will explore the trend with the cluster dynamical state (and other cluster parameters) of the gas
clumping factor and of the distribution of bright clumps, and we will suggest likely observational implications of both
complementary by-products of substructures within clusters.
\begin{figure}
\includegraphics[width=0.49\textwidth]{images/clumping_average.ps}
\caption{Profile of azimuthally averaged gas clumping factors for the sample of simulated clusters at $z=0$ (the error bars show the $1\sigma$ deviation). The different colours represent the three dynamical
classes in which we divide our sample (four relaxed, six merging and ten post-merger clusters).}
\label{fig:prof_clumping1}
\end{figure}
\begin{figure*}
\includegraphics[width=0.9\textwidth,height=0.9\textheight]{images/clumping_relaxed_merging.ps}
\caption{Average profiles of gas clumping factor and gas mass distribution for different phases across the whole cluster sample at $z=0$, for relaxed clusters (left panels) and for merging clusters (right panels). The average gas clumping factor is computed for different decompositions of the cluster volume in gas-density bins (top 4 panels; lines are colour-coded as described in the legend in the top 2 panels) and temperature bins (bottom 4 panels, colour-coded as detailed in the panels in the third row). The gas over-density is normalized to the cosmological critical baryon density.}
\label{fig:prof_clumping2}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E15B0.0_clumping_new.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E18C0.0_clumping_new.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E3B0.0_clumping_new.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E15B0.0_DM_new2.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E18C0.0_DM_new2.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E3B0.0_DM_new2.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E15B0.0_gas_new2.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E18C0.0_gas_new2.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E3B0.0_gas_new2.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E15B0.0_fbar_new3.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E18C0.0_fbar_new3.ps}
\includegraphics[width=0.33\textwidth,height=0.22\textheight]{images/E3B0.0_fbar_new3.ps}
\caption{{\it Top panels}: radial profiles of clumping factor for three clusters of the sample at $z=0$ (from left to right: E15B-relax, E1-post merger and E3B-merging). The solid black line show the average gas clumping factor for the whole volume, the additional coloured lines show the gas clumping factor inside 4 smaller sectors from the cluster centre. {\it Second row}: radial profiles of DM density for the total volume (black) and for the sectors (colors). The dot-dashed lines show the simulated data, the solid ones the best-fit of the NFW profile for the corresponding volumes.
{\it Third row}: average gas density profiles for the same volumes (same meaning
of colours). {\it Last row}: profiles of enclosed gas mass fraction for the same volumes. We show the real profile (solid lines), the "clumpy" baryon fraction (dot-dash) and the "X-ray" baryon fraction (dashed) for each corresponding volume.}
\label{fig:prof_clumping3}
\end{figure*}
\begin{figure}
\includegraphics[width=0.4\textwidth]{images/clumps_all_redsh_13SUZAKUSUZAKUGEN.ps}
\includegraphics[width=0.4\textwidth]{images/clumps_all_redsh_14ROSATROSATGEN.ps}
\includegraphics[width=0.4\textwidth]{images/clumps_all_redsh_15XMMXMMGEN.ps}
\caption{Average luminosity function of clumps detected in our sample of simulated X-ray maps at z=0, z=0.1 and z=0.3. Each panel shows the estimate
for a different assumed X-ray telescope (from top to bottom: {\it Suzaku}, {\it ROSAT} and {\it XMM}). The continuous lines are for the differential distribution, the dashed lines are for the cumulative distributions.}
\label{fig:distrib_clumps1}
\end{figure}
\section{Results}
\label{sec:results}
\subsection{Gas clumping factor from large-scale structures}
\label{subsec:clumping}
The average clumping factor of gas in clusters is
a parameter adopted in the
theoretical interpretation of de-projected
profiles of gas mass, temperature and
entropy \citep[e.g.][]{ur11,si11,eckert12,2012MNRAS.tmp.2785W}.
For each simulated cluster, we compute the profile of the gas
clumping factor following Sec.\ref{subsubsec:clumping_factor}.
Figure \ref{fig:prof_clumping1} shows the average profile of $C_{\rho}(R)$ for the three dynamical classes into which our sample can be divided.
The trends for all classes are qualitatively similar, with the average
clumping factor $C_{\rho} < 2$ across most of the cluster volume,
and increasing to larger values ($C_{\rho} \sim 3-5$) outside $R_{\rm 200}$. Post-merger and merging systems present a systematically larger average gas clumping factor at all radii, and also a larger variance within the subset. These findings are consistent with the recent results of \citet{nala11}. By extracting the distribution of bright gas clumps in projected cluster images,
we will see in Sec.\ref{subsec:clumps} that both the radial distribution of clumps and the trend of their number with
the cluster dynamical state present the same behaviour as the gas clumping factor measured here. This again supports the idea
that the two phenomena are closely associated mechanisms, following the injection of large-scale substructures in the ICM.
Next, we calculate the average profiles of gas clumping factor for different
bins of gas over-density and temperature (restricting the analysis to the relaxed and
merging subsets of clusters). This way we can determine whether the gas clumping factor affects all phases of the ICM equally.
In Fig.\ref{fig:prof_clumping2} we show the average profiles of gas clumping factor calculated within different bins of
gas over-density ($\delta \rho_{\rm cr,b}<50$, $50 \leq \delta \rho_{\rm cr,b} < 10^{2}$, $10^2 \leq \delta \rho_{\rm cr,b} < 10^3$ and $\delta \rho_{\rm cr,b}\geq 10^3$, where $\delta \rho_{\rm cr,b}$ is $\rho/(f_{\rm b}\rho_{\rm cr}$), with $f_{\rm b}$ cosmic baryon fraction and $\rho_{\rm cr}$ the cosmological critical density) and gas temperature ($T<10^6$ K, $10^6 \leq T < 10^7$ K, $10^7 \leq T < 10^8$ K and $T \geq 10^8$ K) at $z=0$.
We want to characterize the environment associated
with the most significant sources of gas clumping (e.g. gas substructures) in a way which is unbiased by their presence. This is
non-trivial, because if the local overdensity is computed within
a scale smaller than the typical size of substructures ($\leq 300 ~\rm kpc/h$), the gas density of a substructure will bias the estimate
of the local overdensity high. For this reason, the local gas
overdensity and gas temperature in Fig.\ref{fig:prof_clumping2} have
been computed on a much larger scale, $1 ~\rm Mpc/h$.{\footnote {We checked that the use of the DM overdensity statistics, in this case, yields very similar results to those obtained using the gas overdensity. However, we focus on the latter in this work, since this quantity can be more easily related to observations.}} In Fig.\ref{fig:prof_clumping2} we also show the gas mass fraction of each phase inside the radial shells.
The gas phases show that the increased clumping factor of merging clusters is related to the increased clumping factor of the low-density phase at all radii, even if the mass of this component is not dominant in merging systems. An increased clumping factor is also statistically found in the cold ($<10^6$ K) and intermediate ($10^6 \leq T < 10^7$ K) phases of the gas in merging systems.
We remark that although the increase in this phase is associated with an enhanced presence of X-ray detectable clumps (Sec.\ref{subsec:clumps}), it cannot be directly detected in X-rays, due to
its low emission temperature.
The gas phase characterized by $\delta \rho_{\rm cr,b}<50$ and $T \leq 10^6-10^7$ is typical of large-scale filaments
\citep[][]{do08,va09shocks,iapichino11}.
While filaments only host a small share of the gas mass inside the virial radius, they can dominate the clumping factor within the entire volume of merging systems.
This is because their gas content is significantly more clumpy compared to the ICM onto which they accrete.
This suggests that a significant
amount of gas substructure (e.g. including massive and bright X-ray clumps, as in Sec.\ref{subsec:clumps}) can originate from low-density regions, and not necessarily from the hot and dense phase of the ICM.
This is reasonable in the framework of hierarchical structure formation, since matter inside filaments had almost the entire cosmic evolution
to develop substructures \citep[][]{2011A&A...531A..75E}. Contrary to most of the matter inside galaxy clusters, large-scale filaments have never undergone
a phase of violent relaxation and efficient mixing. Therefore, even at late cosmic epochs they can supply very
clumpy material.
This suggests that clusters in an early merging phase are characterized by the largest gas clumping factor at all radii, and that this clumpy material is generally associated with substructures coming from filaments and from the cluster outskirts. Filaments between
massive galaxy clusters are indeed found to anticipate major mergers in
simulations \citep[][]{va11turbo}, starting the injection of turbulence
in the ICM already some $\sim \rm Gyr$ before the cores collision.
\bigskip
We now investigate how gas clumping can affect X-ray observations.
For this purpose, we compare in Fig.\ref{fig:prof_clumping3} the real profiles of clumping factor, DM, gas density and baryon fraction for the same clusters as in Fig.\ref{fig:fig1}, with profiles derived from
the entire cluster volume or within smaller volumes (i.e. thin slices along the line of sight) along perpendicular sectors of the same systems. All profiles are drawn from the centre of total (gas+DM) mass of each system.
These selected volumes are meant to broadly compare with the selection
of sectors chosen in recent deep X-ray observations of clusters
\citep{si11,ur11,2012ApJ...748...11H}. Therefore, in this case we studied the radial profiles inside rectangular sectors of width $\sim 500 ~\rm kpc$, similar to
observations \citep[][]{si11,ur11}, and the largest available line of sight ($\sim 8 ~\rm Mpc$). The total projected volume sampled by these sectors is $\sim 20$ percent of the total cluster volume, and it is interesting to study how representative the information derived from them is.
For comparison with the observationally derived baryon fraction, we compute the best-fit model for NFW profiles \citep[][]{NA96.1} using the {\small CURVEFIT} task in {\small IDL} for the total
volume, and for each sector independently. The radial range used to compute the best-fit to the NFW profile is $0.02 ~R_{\rm 200} \leq R \leq R_{\rm 200}$.
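In Python, the equivalent of this fit can be sketched with \texttt{scipy.optimize.curve\_fit} (a stand-in for the IDL {\small CURVEFIT} task used in the text; names and starting guesses are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def nfw_density(r, rho0, rs):
    """NFW profile: rho(r) = rho0 / [(r/rs) * (1 + r/rs)^2]."""
    x = r / rs
    return rho0 / (x * (1.0 + x) ** 2)

def fit_nfw(r, rho, r200):
    """Best-fit NFW parameters in the radial range 0.02 R200 <= r <= R200."""
    sel = (r >= 0.02 * r200) & (r <= r200)
    popt, _ = curve_fit(nfw_density, r[sel], rho[sel],
                        p0=(rho[sel][0], 0.2 * r200))
    return popt  # (rho0, rs)
```

The same routine can be run on the whole-volume profile and on each sector profile independently, mimicking the sector-by-sector fits described above.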
In the relaxed system (first panels) there is little gas clumping out to a radius $0.8 R_{\rm 200}$. The average profile of DM is overall well modelled
by the NFW profile everywhere up to
$\sim R_{\rm 200}$. The profiles inside smaller sectors are also well
described by a NFW profile, yet for $>0.6 R_{\rm 200}$ the NFW profiles
within each sector start to deviate from one another. At large radii the best-fit
profiles are found to either under- or over-estimate the real DM profile,
due to large-scale asymmetries in the cluster atmosphere (e.g. left panel of Fig.\ref{fig:fig1}). Asymmetries in this relaxed cluster also cause an
azimuthal scatter of the gas profiles, similar to what was found in \citet[][]{va11scatter}.
The departures from the NFW profile and the differences between sectors
are significantly larger in the post-merger and the merging systems (second and third columns of Fig.\ref{fig:prof_clumping3}).
We find that the best-fit NFW profiles can differ significantly between sectors, with a relative difference of up to $\sim 30-40$ percent at $R_{\rm 200}$ in the merging system caused by the infalling second cluster (see right panel of Fig.\ref{fig:fig1}).
In addition to azimuthal variations, also large
differences ($>40$ percent) between the true DM profile and the best-fit NFW profile are found along some sectors.
These uncertainties in the "true" gas/DM mass within selected sectors
bias the estimated enclosed baryon fraction. This is an
important effect which may help to explain the variety of recent
results provided by observations \citep[][]{si11,ur11,2012ApJ...748...11H,2012MNRAS.tmp.2785W}.
In Fig.\ref{fig:prof_clumping3}, we present the radial trend of the
enclosed baryon fraction, $Y_{\rm gas}(<R)\equiv f_{\rm bar}(<R)/f_{\rm b}$ for each
cluster and sector (where $f_{\rm b}$ is the cosmic baryon fraction), following
three different approaches:
\begin{itemize}
\item we measure the "true" baryon fraction inside spherical shells (or portions of spherical shells, for the narrow sectors);
\item we measure the "X-ray-like"
baryon fraction given by $(\int 4 \pi R^2 \rho(R)^2 dR)^{0.5} / \int 4 \pi R^2 \rho(R)_{\rm NFW} dR$, where $\rho_{\rm NFW}$ is the profile
given by the best-fit NFW profile for DM (for the total volume or for each sector separately);
\item we measure the "clumpy" baryon fraction, $(\int 4 \pi R^2 \rho(R)^2 dR / \int 4 \pi R^2 \rho(R)_{\rm DM}^2 dR)^{0.5}$, where $\rho(R)_{\rm DM}$ is the true DM density.
\end{itemize}
This "clumpy" estimate of $Y_{\rm gas}$
obviously has no corresponding observable, since it involves the clumping factor of the DM distribution. However, this measure is useful because
it is more connected to the intrinsic baryon fraction of the massive substructures within galaxy clusters.
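The three estimators can be sketched as follows, with the integrals implemented as simple shell sums over discretized radial profiles (hypothetical inputs; the "X-ray-like" expression is implemented as written in the text, and $f_{\rm b}\approx 0.171$ follows from the adopted cosmology as $\Omega_{\rm B}/(\Omega_{\rm B}+\Omega_{\rm DM})$):

```python
import numpy as np

def enclosed_fractions(r, rho_gas, rho_dm, rho_nfw, f_b=0.171):
    """The three Y_gas(<R) estimates described in the text, assuming all
    baryons are in the gas (as in the non-radiative runs)."""
    shell = 4.0 * np.pi * r ** 2 * np.gradient(r)  # shell volumes
    m_gas = np.cumsum(rho_gas * shell)
    m_dm = np.cumsum(rho_dm * shell)
    m_nfw = np.cumsum(rho_nfw * shell)
    sq_gas = np.sqrt(np.cumsum(rho_gas ** 2 * shell))
    sq_dm = np.sqrt(np.cumsum(rho_dm ** 2 * shell))
    y_true = (m_gas / (m_gas + m_dm)) / f_b    # "true" enclosed fraction
    y_xray = (sq_gas / m_nfw) / f_b            # "X-ray-like", as in the text
    y_clumpy = (sq_gas / sq_dm) / f_b          # "clumpy"
    return y_true, y_xray, y_clumpy
```

By construction, a perfectly smooth atmosphere with the cosmic gas-to-DM ratio gives $Y_{\rm gas}=1$ at all radii for the "true" estimator.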
\bigskip
The radial trend of the true baryon fraction for the total volume is in line with the results of observations \citep[][]{2002MNRAS.334L..11A,2006ApJ...640..691V,2009A&A...501...61E}, with $Y_{\rm g}$ increasing from very low central values up to the cosmic value close to $R_{\rm 200}$. The total enclosed baryon fraction inside $1.2 ~R_{\rm 200}$ is within $\pm 5$ percent of the cosmic value, yet larger departures ($\pm 10-20$ percent) can be found inside the narrow sectors.
The "X-ray-like" measurement of the enclosed baryon fraction is subject to a larger azimuthal scatter, and shows larger differences also with respect to the true enclosed baryon fraction. In our data-sample, it
rarely provides an agreement better than $\pm 10$ percent with the true enclosed baryon fraction (considering the whole cluster or the sectors).
Based on our sample, we estimate that on average in $\sim 25$ percent of cases the baryon fraction measured within sectors by assuming an underlying NFW profile overestimates the true baryon fraction by $\sim 10$ percent, while in $\sim 50$ percent of cases it underestimates the cosmic baryon fraction by a slightly larger amount ($\sim 10-20$ percent). In the
remaining cases the agreement at $R_{\rm 200}$ is of the order of $\sim 5$ percent.
In all cases the main factor for significant differences between
the X-ray-derived baryon fraction and the true one is the triaxiality of
the DM distribution within the cluster
rather than enhanced gas clumping. As an example,
in the relaxed cluster of Fig.\ref{fig:prof_clumping3}
the "X-ray" baryon fraction is underestimated
in 2 sectors, it is overestimated in 1 sector and it almost follows the true one in the last sector. This trend is mainly driven
by the difference between the true DM profiles and the best-fit NFW profiles within sectors for $>0.4 ~R_{\rm 200}$, rather than dramatic differences in the gas clumping factor.
In general, a significant gas clumping factor ($C_{\rho} \sim 2-3$) in the cluster outskirts can still lead to fairly accurate baryon fractions provided that this clumping is associated with DM substructures or DM filaments. In a realistic observation, the presence of a large-scale filament towards the cluster centre would still yield a
reasonable best-fit NFW profile, even if not necessarily in agreement with the global NFW profile
of the cluster. As a result, the measurement of a baryon fraction close to the cosmic value along one sector does not imply
the absence of gas clumping.
On the other hand, the presence of significant large-scale asymmetries can be responsible for strong differences between the baryon fraction measured in X-rays
and the true baryon fraction, even without substantial gas clumping factor.
To summarize, the result of our simulations is that in many realistic configurations found in large-scale structures, the gas clumping factor inside clusters and the enclosed baryon fraction may even provide unrelated information.
Also the radial range selected to fit the
NFW profile to the observed data can significantly affect the estimate
of the enclosed baryon fraction. Given the radial gas and DM patterns
found in our simulated clusters, all radii outside $0.4 R_{\rm 200}$ may
be characterized by equally positive or negative fluctuations compared
to the NFW profile. Adopting a different outer radius to fit an NFW
profile to the available observations may therefore introduce
a small but non-negligible ($\sim 5-10$ percent in the normalization
at the outer radii) uncertainty in the final estimate of the
enclosed baryon fraction.
These findings might explain why the enclosed baryon fraction is estimated
to be {\it larger} than the cosmic baryon fraction at $\sim R_{\rm 200}$, for some nearby clusters \citep[][]{si11}, {\it smaller} than the cosmic
value in some other cases \citep[][]{2010ApJ...714..423K}, and compatible with the cosmic value in the remaining cases \citep[e.g.][]{2012ApJ...748...11H,2012MNRAS.tmp.2785W}. These observations could generally only use a few selected narrow sectors, similar to the procedure we mimic here, and the chance that the profiles derived in this way are prone to a $\pm 10-20$ percent uncertainty in the baryon fraction appears high.
We conclude by noting that the "clumpy" baryon fraction in all systems is systematically {\it lower} by $\sim 30-40$ percent compared to the baryon fraction
of the whole cluster. This suggests that the clumpiest regions of the ICM are baryon-poor compared to the average ICM. The most likely reason for
that is that clumps are usually associated with substructures \citep[][]{TO03.1,do09}, that have been subjected to ram-pressure stripping.
Most of these satellites should indeed lose a sizable part of their gas atmosphere while orbiting within the main halo, ending up with a baryon fraction {\it lower} than the cosmic value. Recent observations of gas-depleted galaxies identified in 300 nearby groups with the Sloan Digital Sky Survey also seem to support this scenario \citep[][]{2012arXiv1207.3924Z}.
This
mechanism is suggested by numerical simulations \citep[][]{TO03.1,do09}, and
it is very robust against numerical details \citep[e.g.][]{va11comparison} or the adopted physical implementations \citep[][]{do09}.
However, it seems very hard to probe this result observationally,
because high-resolution data are needed to resolve these DM sub-halos
both in their gas component in X-rays and in their total mass
distribution through, e.g., gravitational lensing techniques.
\begin{figure*}
\includegraphics[width=0.32\textwidth]{images/clumps_all_fix2_2_15XMMGEN_max_z0.1.ps}
\includegraphics[width=0.32\textwidth]{images/clumps_all_fix2_2_15XMMGEN_tot_z0.1.ps}
\includegraphics[width=0.32\textwidth]{images/clumps_all_fix2_2_15XMMGEN_lls_z0.1.ps}
\includegraphics[width=0.32\textwidth]{images/clumps_all_fix2_2_15XMMGEN_dist_z0.1.ps}
\includegraphics[width=0.32\textwidth]{images/clumps_all_fix2_2_15XMMGEN_temp_z0.1.ps}
\includegraphics[width=0.32\textwidth]{images/clumps_all_fix2_2_15XMMGEN_deltemp_z0.1.ps}
\caption{Differential distribution functions of clumps detected in our simulated X-ray maps at z=0.1 and assuming $S_{\rm X,low}=2\cdot 10^{-15} \rm erg/(s \cdot cm^2)$ and an effective resolution of $\approx 10''$ (to mimic a deep survey with {\it XMM}). From left to right, we show: the distribution of the maximum and of the total flux of clumps and the distribution of the FWHM of clumps in the top panels; the radial distribution, the temperature distribution and the distribution of the temperature contrast of clumps in the bottom panels.
The different colours refer to the dynamical classes in which our sample is divided.
All distributions have been normalized to the number of objects within each class. The lower dot-dashed lines show the ratio between clumps in the various dynamical classes, compared to the average population.}
\label{fig:distrib_clumps2}
\end{figure*}
\begin{figure}
\includegraphics[width=0.45\textwidth]{images/clumps_all_fix2_2_15XMMGEN_tot_z0.1_radius.ps}
\includegraphics[width=0.45\textwidth]{images/clumps_all_fix2_2_15XMMGEN_tot_z0.1_mass.ps}
\caption{Differential distribution functions of clumps detected in our simulated X-ray maps at z=0.1, assuming $S_{\rm X,low}=2 \cdot 10^{-15} \rm erg/(s \cdot cm^2)$ and an effective spatial resolution of $\approx 10''$ (to mimic a deep survey with {\it XMM}). The continuous lines show the differential distributions, the dashed lines the cumulative distributions (only the total is shown). The top panel shows the luminosity functions for three different bins in projected radius, the bottom panel shows the decomposition of the dataset into three total mass ranges. The dot-dashed lines show the ratio of objects found in each class to the average population.}
\label{fig:distrib_clumps3}
\end{figure}
\subsection{Properties of gas clumps}
\label{subsec:clumps}
From our simulations we can compute the
number of resolved gas clumps in simulated X-ray images.
Based on the clump detection scheme outlined in Sec.\ref{subsubsec:clumps_det}, we produced catalogues of X-ray bright clumps in our images, and we compared the statistics for different
detection thresholds and spatial resolutions, broadly mimicking the realistic performance of
current X-ray satellites. We considered a range of redshifts ($z=0${\footnote {More exactly, we adopted $z=0.023$, corresponding to the luminosity distance of the COMA cluster, $\approx 100 ~\rm Mpc$.}}, $z=0.1$ and $z=0.3$), and
accounted for the effect of cosmological dimming ($\propto (1+z)^4$) and of the reduced linear resolution. We did not generate
photons event files for each specific observational setup, but we
rather assumed a reference sensitivity, $S_{\rm X,low}$, for each
instrument, which was derived from the best cases in the literature.
The production of more realistic mock X-ray observations from simulated images, calibrated for each instrument, involves a number
of technicalities and modelling problems \citep[e.g.][]{2005ApJ...618L...1R,heinz10,2012MNRAS.420.3545B}, which
are well beyond the goal of this paper. Our aim, instead, is to assess
the maximum amount of bright clumps that can be imaged
within the most ideal observational condition for each instrument.
More realistic observational conditions (e.g. the decreasing sensitivity of real instruments moving outwards in the field of view, etc.) are
expected to further reduce the real detection rate of such
clumps (if any), and therefore the results presented in this Section should be considered as upper limits.
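The redshift effects entering these mock observations can be illustrated with a short sketch. The code below (not part of the paper's pipeline; the flat $\Lambda$CDM parameters are our assumption and may differ from the simulated cosmology) evaluates the $(1+z)^4$ surface-brightness dimming and the physical scale subtended by a fixed angular resolution:

```python
import math

# Assumed flat LCDM cosmology (illustrative; not quoted from the paper).
H0 = 70.0            # Hubble constant [km/s/Mpc]
OM, OL = 0.3, 0.7    # matter and dark-energy density parameters
C_KMS = 299792.458   # speed of light [km/s]

def comoving_distance_mpc(z, n=2000):
    """Comoving distance via trapezoidal integration of c / H(z')."""
    if z == 0.0:
        return 0.0
    dz = z / n
    s = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(OM * (1.0 + zi) ** 3 + OL)   # E(z) = H(z)/H0
        s += (0.5 if i in (0, n) else 1.0) / e
    return (C_KMS / H0) * s * dz

def kpc_per_arcsec(z):
    """Physical scale subtended by 1 arcsec at redshift z."""
    d_a = comoving_distance_mpc(z) / (1.0 + z)     # angular diameter distance [Mpc]
    return d_a * 1000.0 * math.pi / (180.0 * 3600.0)

def dimming_factor(z):
    """Bolometric surface brightness is dimmed by (1+z)^4."""
    return (1.0 + z) ** 4

# With these assumed parameters, a 10'' PSF corresponds to ~18 kpc at
# z=0.1 and ~45 kpc at z=0.3, while the surface brightness is dimmed
# by ~1.5x and ~2.9x respectively.
```

This makes explicit why the degrading linear resolution, rather than the dimming itself, dominates the loss of detectable clumps between $z=0.1$ and $z=0.3$.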
Our set of synthetic observations consists of 60 images (three projections per cluster) for each redshift and instrument.
In Figure \ref{fig:distrib_clumps1}, we present the luminosity function
of X-ray clumps detected in our maps, assuming three different instrumental setups (we consider here the maximum luminosity per pixel for each clump), representative of realistic X-ray exposures: 10"/pixel and
$S_{\rm X,low}=2 \cdot 10^{-15} \rm erg/(s \cdot cm^2)$ for {\it XMM}-like observations, 30"/pixel and
$S_{\rm X,low}=3 \cdot 10^{-14} \rm erg/(s \cdot cm^2)$ for {\it ROSAT}-like observations and 120"/pixel and
$S_{\rm X,low}=10^{-13} \rm erg/(s \cdot cm^2)$ for {\it Suzaku}-like exposures (for simplicity we considered the [0.5-2] keV energy range in all cases).
In the case of the {\it XMM} ``observation'' at the two lowest
redshifts, it should be noted that the available spatial resolution
is significantly better than the one probed by our simulated images. In this case, a simple re-binning of our original data to a higher resolution (without the addition of higher resolution modes) has been
adopted.
Based on our results, a significant number
of clumps can be detected in deep exposures with the three instruments:
$\sim 20-30$ per cluster with {\it ROSAT}/{\it Suzaku}, and $\sim 10^2$ per cluster with {\it XMM}. These estimates are based on observations that sample {\it the whole} cluster atmosphere ($1.2 \times 1.2 ~ R_{\rm 200}$).
The evolution with redshift reduces the number of detectable clumps by a factor of $\sim 2-3$, mainly due to the
degrading spatial resolution with increasing distance (and not to significant cosmological evolution). At $z=0.3$, only {\it XMM} observations would still detect some clumps ($\sim 30$ per cluster).
The largest observable flux from clumps, based on these results, is $\sim 5 \cdot 10^{-12} \rm erg/(s \cdot cm^2)$ for {\it Suzaku} and {\it ROSAT} and $\sim 10^{-12} \rm erg/(s \cdot cm^2)$ for {\it XMM} {\footnote{We note that the maximum observable flux here is not completely an intrinsic property of the clumps, but it also slightly depends on the post-processing extraction of clumps in the projected maps at different resolution, as in Sec.\ref{subsubsec:clumps_det}. For large PSF, indeed, the blending of close clumps can produce a boost in the detectable X-ray flux within the PSF. }}.
In the following we restrict ourselves to the fiducial case of $z=0.1$ observations with {\it XMM}, and we study the statistical distributions of clumps in more detail.
\bigskip
In Figure \ref{fig:distrib_clumps2} we show differential distribution functions for the various parameters available for each detected clump:
a) maximum X-ray flux of each identified clump; b) total X-ray flux on the pixel scale; c) projected clump radius (assuming a full-width-half-maximum of the distribution of $S_X${\footnote{The projected clump radius is measured here with an extrapolation, by assuming that
the observed flux from clumps originates from a Gaussian distribution, for which we compute the full-width-half-maximum. This choice is motivated by the fact that in most cases our detected clumps are very close to the resolution limit of our mock observations, and only with
some extrapolation can we avoid too coarse a binning of these data. For this reason, our measure of the clump radius must be considered only
a first-order estimate, for which higher resolution is required in
order to achieve better accuracy.}}); d) projected distance from the cluster centre; e) average projected temperature; f) temperature contrast around the clump (estimated by dividing the temperature of the clump by the average surrounding temperature of the ICM, calculated after the exclusion of the clump itself).
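The Gaussian extrapolation of the clump radius can be sketched as follows. The paper does not give the exact fitting procedure, so the formula below, which inverts the total flux of a circular 2D Gaussian given its peak surface brightness, is our assumption:

```python
import math

def fwhm_from_fluxes(total_flux, peak_surface_brightness):
    """Assume the clump's projected flux follows a circular 2D Gaussian,
    S(r) = S0 * exp(-r^2 / (2 sigma^2)), whose total flux is
    2*pi*sigma^2*S0.  Inverting for sigma and using
    FWHM = 2*sqrt(2 ln 2)*sigma gives the projected clump size."""
    sigma = math.sqrt(total_flux / (2.0 * math.pi * peak_surface_brightness))
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma
```

Since both the total and the peak flux of each clump are measured from the maps, this yields a size estimate even for clumps barely larger than the PSF, at the price of the stated Gaussian assumption.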
The luminosity distributions of clumps confirm the result of the previous section, and show that post-merger and merging clusters are characterized by a larger number of detectable clumps. They also host on average clumps with a $\sim 2-3$ times larger X-ray flux. Perturbed clusters have a factor $\sim 2-4$ excess of detectable clumps at all radii compared to relaxed clusters, and they host larger clumps. Clumps in post-merger systems are on average a factor $\sim 2$ smaller than in merging systems, likely as an effect of stripping and disruption of clumps during the main merger phase.
The distributions of temperatures and temperature contrasts show that the projected clumps are always colder than the surrounding ICM. This is because the innermost dense region of surviving clumps is shielded from the surrounding ICM, and retains its lower virial
temperature for a few dynamical times \citep[][]{TO03.1}.
For most of our detected clumps, the estimated FWHM is $\leq 50 ~\rm kpc/h$.
This is close to the cell size in our runs, and raises the problem that the innermost regions of our clumps are probably not yet converged with respect to resolution. We will revisit this issue again in the next section.
In Figure \ref{fig:distrib_clumps3}, we investigate the dependence of the luminosity function of clumps on the projected radial distance from the centre (top panel) and on the total gas mass (bottom panel).
We found that for all luminosity bins the intermediate annulus ($0.6 \leq R/R_{\rm 200}\leq 1.2$) contains $>70$ percent of detectable clumps, even if its projected volume is about the same as that of the most external annulus.
On the other hand we detect no significant evolution with the host cluster mass. This is reasonable since we neglect radiative cooling or other processes than can break the self-similarity of our clusters.
Given the relatively large number of detectable clumps predicted by our simulations, one may wonder whether existing observations with {\it XMM}, {\it Suzaku}, {\it ROSAT} or {\it Chandra} can already be compared with our
results. We will come back to this point in Sec.\ref{sec:conclusions}.
\begin{figure*}
\includegraphics[width=0.95\textwidth]{images/xmap_feed.ps}
\includegraphics[width=0.95\textwidth]{images/temp_feed.ps}
\caption{Top panels: projected X-ray flux ($log_{\rm 10} \rm L_X$ [erg/s]) of 4 resimulations of a $\sim 3 \cdot 10^{14} M_{\odot}$ cluster employing different physical implementations (see main text for details). Bottom panels: projected spectroscopic-like temperature for the same runs (T in [K]).}
\label{fig:clumping_feedback1}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{images/relaxed.ps}
\includegraphics[width=0.9\textwidth]{images/perturbed.ps}
\caption{Left panels: average profiles of clumping factor for a relaxed and a perturbed (i.e. with an ongoing merger) cluster in the mass range $\sim 2-3 \cdot 10^{14} M_{\odot}$ resimulated with various physical recipes (plus one resimulation at higher resolution).
Right panels: gas density profiles for the corresponding runs on the left panels.}
\label{fig:clumping_feedback2}
\end{figure*}
\begin{figure}
\includegraphics[width=0.45\textwidth]{images/clumps_all_fix2_abelROSATGEN_max_z0.1_r0.1.ps}
\includegraphics[width=0.45\textwidth]{images/clumps_all_fix2_abelROSATGEN_r0.1.ps}
\caption{Top panel: cumulative distribution of luminosity of simulated clumps
(in black, the average over the whole dataset is considered) and of point-like
sources in the {\small ROSAT} observation of A2142 \citep[in orange, taken from][]{eckert12}. The solid lines show the distributions inside $0.1 \leq R/R_{200}<0.6$, the dot-dashed lines show the distribution inside $0.6 \leq R/R_{200}<1.2$. Bottom panel: cumulative radial distribution of simulated clumps
and point-like sources for the same datasets.}
\label{fig:a2142}
\end{figure}
\subsection{Tests with additional physics and at higher resolution}
\label{subsec:test}
Several physical and numerical effects may affect
the gas clumping factor measured in simulations.
For instance, \citet{nala11} recently showed that the number
of clumps in simulations with radiative cooling and feedback from supernovae and star formation is significantly larger
than in non-radiative runs. However, clumps can cool so efficiently that they can eventually cool below X-ray emitting
temperatures \citep[][]{nala11}.
Radiative cooling leads to the formation of
more concentrated high-density clumps in the ICM, while feedback from SN and AGN may tend to wash them out by preventing the cooling
catastrophe and providing the gas in clumps with more thermal energy.
The result of this competition between radiative losses and energy input via feedback may depend on the details of the implementation.
Also, the presence of cosmic rays accelerated at cosmological shocks may yield a different compressibility of the ICM in the cluster outskirts, which
can significantly change the amount of clumping there \citep[][]{scienzo}.
In this section, we assess the uncertainty in our results, using a set of cluster re-simulations with different
setups.
Cluster runs such as the one we discussed in the previous sections are fairly expensive in terms of CPU time ($\sim 3-4 \cdot 10^4$ CPU hours for each cluster), given the number of high-resolution cells generated during the simulation. In order to monitor the effects of different setups we thus opted for a set of smaller clusters, already explored in \citet{va11entropy}.
We chose a pair of galaxy clusters with final mass of $\sim 2-3 \cdot 10^{14} M_{\odot}$ and two very different dynamical states (one post-merger and one relaxed cluster), and studied in detail the radial distribution of gas clumping factor in each of them. These clusters were re-simulated with the following numerical setups: a) a simple non-radiative run, with maximum resolution of $25 ~\rm kpc/h$; b) a run with pure radiative cooling, same peak resolution; c) a run with thermal feedback from a single, central AGN starting at $z=1$, with a power of $W_{\rm jet} \approx 10^{44} \rm erg/s $ for each injection, with same peak resolution; d) a run with a uniform pre-heating of $100 ~\rm keV cm^2$ (released at $z=10$) followed by a phase of thermal feedback from a central AGN starting at $z=1$ and a power of $W_{\rm jet} \approx 10^{43} \rm erg/s$ for each injection ; e) a run with cosmic-ray injection at shocks, reduced thermalization and pressure feedback, same peak resolution; f) a simple non-radiative run at a higher resolution, $12 ~\rm kpc/h$.
The cases c) and d) are taken from \citet{va11entropy}, where we modelled the effects of a uniform pre-heating background of cosmic gas \citep[following][]{2007ApJ...666..647Y} and of intermittent thermal AGN activity, modelled as bipolar outputs of over-pressurized gas around the peak of cluster gas density. These runs were designed to fit the average thermal properties of nearby galaxy clusters \citep[e.g.][]{cav09} and to quench catastrophic cooling in radiative cluster simulations. We note that the amount of pre-heating and thermal energy release from the ``AGN region'' were imposed by construction (in order to allow a parameter space exploration) and are not self-consistently derived from a run-time model of super-massive black-hole evolution. In this implementation, we neglect star formation, and therefore the energy feedback from supernovae and the gas mass drop-out term related to the star-forming phase are missing, leading to a higher baryon fraction of the hot gas phase compared to more self-consistent recipes \citep[e.g.][and references therein]{borgani08}. In our runs, the baryon fraction of the warm-hot medium is expected to be overestimated by a factor $\sim 2$ in the innermost cluster regions. Given that we excised the innermost $0.1 R_{\rm 200}$ from the analysis of clumps of the previous section, this problem is not expected to be a major one.
Our model including cosmic rays is taken from \citet{scienzo}, and computes the run-time effects of diffusive shock acceleration \citep[e.g.][]{kj07} in cosmological shocks: reduced thermalization efficiency at shocks, injection of CR energy characterized by the adiabatic index $\Gamma=4/3$, and pressure feedback on the thermal gas. This model was developed to perform run-time modifications of the dynamical effect of accelerated relativistic hadrons in large-scale structures. Given the acceleration efficiency of the assumed model \citep[][]{kj07}, the pressure ratio between CRs and gas is always small, $\sim 0.05-0.1$ inside $R_{\rm 200}$ of our simulated clusters at $z=0$.
The feedback models discussed above are an
idealized subset of feedback recipes that have been used by other authors \citep[e.g.][]{2007MNRAS.376.1547C,2007MNRAS.380..877S,gaspari11b}. For the goal
of this paper, they just represent extreme cases of dramatic cooling
or of efficient heating in clusters, while the reality lies most likely in between.
Figure \ref{fig:clumping_feedback1} shows the projected X-ray maps and spectroscopic-like temperature maps for four re-simulations (a-b-d-f) for the merging cluster.
The corresponding profiles of average clumping factor and gas density for this cluster (bottom panels) and for the relaxed cluster (top panels) are shown in Fig.\ref{fig:clumping_feedback2}. The trend with the physical implementations is quite clear in both cases.
The inclusion of cooling causes a ``cooling catastrophe'' and the enhancement of the core gas density in both clusters. At the same time it also triggers the formation of denser and brighter clumps at all radii. A weak amount of thermal feedback from a centrally-located AGN
is found to reduce the gas clumping factor only by a moderate
amount, and only very close to the cluster centre ($\leq 0.3 R_{\rm 200}$), whereas it has no effect at larger radii. In these two runs the luminosity of the brightest clumps can be $10-10^2$ times larger than in non-radiative runs.
On the other hand, with a more efficient feedback recipe (early diffuse pre-heating and moderate AGN thermal feedback at low redshift) it seems possible to
quench the cooling catastrophe in the centres of both clusters, and at the same time to reduce the gas clumping factor at all radii, making the latter only slightly larger than in the simple non-radiative run.
The inclusion of CR feedback at cosmological shocks is found not to affect our gas clumping statistics compared to non-radiative runs. This can be explained because in our model the strongest dynamical effect of CRs is found around accretion shocks,
where the energy ratio between injected CR-energy and gas energy can be $\sim 10-20$ percent. However, filaments can enter far into
the main cluster atmosphere avoiding strong accretion shocks, and thus keeping a smaller energy ratio of CRs. Given the strong
role of filaments in enriching the ICM of clumpy material (as shown in Sec.\ref{subsec:clumping}), the fact that the overall
clumping statistics within the cluster is not affected by the injection of CRs at shocks is not surprising.
Finally, we tested the effect of increasing spatial
resolution on the gas clumping factor in our runs.
On average, higher resolution in the non-radiative runs is found
not to change the overall gas clumping factor within the cluster,
although assessing its effect on the high-density peaks of the
clump distribution is non-trivial. Indeed, the increase in
resolution may change the ram-pressure stripping history
of accreted clumps, causing a slightly different orbit of substructures towards the end of runs \citep[e.g.][]{2007MNRAS.380.1399R}. This may lead to time-dependent features
in the gas clumping factor profile: e.g., the higher resolution
run of the perturbed cluster shows a ``spike'' of enhanced clumping factor at $\sim R_{\rm 200}$ at $z=0$, while no such feature
is detected in the resimulation of the relaxed system. The
large-scale average trend of the gas clumping factor in both
systems, instead, presents no systematic trend with resolution, suggesting that overall the effect of resolution in non-radiative
runs is not significant for the spatial range explored here.
However, we caution that some of the parameters of bright gas clumps
derived in this work may be subject to some evolution with
spatial resolution. The number of clumps found in these two cluster runs is too small to perform meaningful clump statistics as in the previous Section.
However, we can already notice that the higher resolution
increases the brightness of the clumps by a factor $\sim 2-3$.
In addition, the typical innermost radius of bright clumps
can be only poorly constrained by our runs, and based on
Sec.\ref{subsec:clumps} our estimate of a typical $\leq 50 ~\rm kpc/h$
must be considered as an upper limit on their real size, because
the FWHM of most of our clumps is close to our maximum resolution,
and this parameter has likely not yet converged with resolution.
In conclusion, the effect of radiative cooling has a dramatic impact on the properties of clumps in our simulations (as suggested by \citealt{nala11}). However, AGN feedback makes the X-ray flux from clumps very similar to the non-radiative case, so the most important results of our analysis (Sec.3.1-3.2) should hold. The trend with resolution, instead, is more difficult to estimate, given the large numerical cost of simulations at much higher resolution.
We therefore must defer the study of gas clumping statistics at much higher resolution to the future. This will also help to understand the number of {\it unresolved} gas clumps that might contribute to the X-ray emission of nearby clusters.
\section{Discussion and conclusions}
\label{sec:conclusions}
In this paper we used a set of high-resolution cosmological simulations of massive galaxy
clusters to explore the statistics and effects of overdense gas substructures in the intra cluster medium.
While the densest parts of such gas substructures may be directly detected in X-rays against the smooth emission of the ICM,
the moderate overdensity associated with substructures increases the gas clumping factor of the ICM, and may produce an (unresolved)
contribution to the X-ray emission profiles of galaxy clusters.
We analysed the outputs of a sample of 20 massive systems ($\sim 10^{15} M_{\odot}$) simulated at high resolution with the {\small ENZO} code \citep[][]{no07,co11}.
For each object we extracted the profile of the gas clumping factor within the full cluster volume, and within smaller volumes defined by projected sectors from the cluster centre.
The gas clumping factor is found to increase with radius in all clusters,
in agreement with \citet{nala11}: it is consistent with 1 in the innermost
cluster regions, and increases to $C_{\rho} \sim 3-5$ approaching the virial radius. Strongly perturbed systems (e.g. systems with an ongoing merger or post-merger systems) are on average characterized by a larger gas clumping factor at all radii. This enhancement is associated with massive accretion of gas/DM along filaments, which also produces large-scale asymmetries in the radial profiles of clusters. Obtaining an accurate estimate of the enclosed baryon fraction for systems with large-scale accretions can be difficult, because of the significant bias introduced by the gas clumping factor in the derivation of the gas mass, and because of the departure of
real profiles from the NFW profile, which can bias the estimate of the underlying
DM mass. In a realistic observation of a narrow sector from the centre of a cluster, the estimate of the enclosed baryon fraction
can be biased by $\pm 10$ percent in relaxed systems and by $\pm 20$ percent in systems with large-scale asymmetries (not necessarily associated with major mergers).
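The clumping statistic summarized above can be sketched in a few lines. We assume here the standard definition $C_{\rho} = \langle \rho^2 \rangle / \langle \rho \rangle^2$ over the gas cells of a radial shell (the paper's exact estimator is not reproduced here); the docstring also notes the origin of the gas-mass bias mentioned in the text:

```python
def clumping_factor(densities):
    """C = <rho^2> / <rho>^2 over the gas cells of a shell (assumed
    standard definition).  Since X-ray emissivity scales as rho^2, the
    density inferred from X-rays is ~sqrt(<rho^2>), i.e. the true mean
    density times sqrt(C): a clumpy shell biases the derived gas mass
    (and hence the enclosed baryon fraction) high."""
    n = len(densities)
    mean = sum(densities) / n
    mean_sq = sum(d * d for d in densities) / n
    return mean_sq / (mean * mean)
```

A uniform shell gives $C_{\rho}=1$; any spread in density pushes $C_{\rho}$ above 1, consistent with the radial trend reported here.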
In order to investigate the detectability of the high density peaks of the distribution of gas clumping factor,
we produced maps of X-ray emission in the [0.5-2] keV energy band for our clusters at different epochs. We extracted the
gas clumps present in the images with a filtering technique, selecting all pixels brighter than twice the smoothed X-ray emission of the cluster.
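A minimal sketch of this filtering step is given below. It is illustrative only: we use a simple boxcar mean in place of the smoothing actually applied in the pipeline (whose kernel is specified in Sec.\ref{subsubsec:clumps_det}, not here), and the window half-width and contrast are free parameters:

```python
def boxcar_smooth(img, half_width=2):
    """Mean filter over a (2*half_width+1)^2 window, clipped at the edges."""
    ny, nx = len(img), len(img[0])
    out = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            window = [img[jj][ii]
                      for jj in range(max(0, j - half_width), min(ny, j + half_width + 1))
                      for ii in range(max(0, i - half_width), min(nx, i + half_width + 1))]
            out[j][i] = sum(window) / len(window)
    return out

def clump_pixels(img, half_width=2, contrast=2.0):
    """Flag pixels brighter than `contrast` times the smoothed emission,
    mimicking the 'twice the smoothed X-ray map' selection."""
    smooth = boxcar_smooth(img, half_width)
    return [(j, i)
            for j in range(len(img))
            for i in range(len(img[0]))
            if img[j][i] > contrast * smooth[j][i]]
```

A single bright pixel on a flat background is flagged, while the smooth large-scale cluster emission, which varies slowly compared to the window, is not.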
By compiling the luminosity functions and the distribution functions of the most important parameters of the clumps (such as their projected size, temperature and temperature contrast with the surrounding ICM), we studied the evolution of these distributions as a function of the mass and dynamical state of the host cluster, and of the epoch of observation.
Summarizing our results on clump statistics, we find that: a) the number of bright clumps depends significantly on the dynamical state of the host cluster, with the most perturbed systems hosting on average a factor $\sim 2-10$ more clumps at all radii and luminosities, compared to the most relaxed systems; b) the host cluster mass, on the other hand, does not affect the number of bright clumps (although the investigated mass range is not large, $\sim 0.3$ dex); c) about half of the detectable bright clumps are located in the radial range $0.6 \leq R/R_{\rm 200} \leq 1.2$ from the projected
cluster centre, while a very small number of clumps per cluster (of order 1) are located inside $0.6 R/R_{\rm 200}$; d) the typical size of most gas clumps (extrapolated from our data) is $\leq 50$ kpc/h.
In a preliminary analysis based on a few re-simulations of two clusters with different recipes for CR physics and AGN feedback, we tested the stability of our results with respect to more complex physical models. Radiative cooling
dramatically increases the observable amount of gas clumping \citep[][]{nala11}.
However, the energy release required from AGN seems able both to limit the
cooling-flow catastrophe and to bring the observed clumping factor down to the
statistics of simpler non-radiative estimates. The increase of resolution does not significantly
change the observed number of detectable clumps or the gas clumping factor of the ICM.
However, resimulations at higher resolution produce
a $\sim 2-3$ times larger maximum X-ray flux in the clumps, and therefore
further investigations are likely to be necessary in order to assess
the numerical convergence of this parameter. Also the typical size
of bright clumps cannot yet be robustly constrained by our data,
which only allow us to place an upper limit of the order of $\leq 50 ~\rm kpc/h$.
We conclude by noting that, based on our results on the statistics of bright clumps (Sec.\ref{subsec:clumps}), it seems likely that a number of them could already
be present in existing X-ray observations by {\small XMM} and {\small ROSAT} (and, to a much lesser extent given the expected lower statistics, by {\small Suzaku}). However, it is very difficult to distinguish bright gas clumps
from point-like sources (such as galaxies or AGN), given that their expected
size is very small ($<50 ~\rm kpc/h$), typically close to the effective PSF of the instruments.
In a real observation, a catalogue of point-like sources is generated
during the data-reduction, and excised from the image.
In order to perform a first check of our quantitative results, we examined the catalogue of point sources generated in the real {\small ROSAT} observations of
\citet{eckert12}.
In Fig.\ref{fig:a2142} we show the luminosity and the radial distribution of
point-like sources from the {\small ROSAT} observation of A2142, and the
corresponding simulated statistics, obtained by analysing the whole
simulated dataset at the same resolution and flux threshold as the
real observation. The luminosity distributions are taken for two radial
bins: $0.1 \leq R/R_{200} < 0.6$ and $0.6 \leq R/R_{200} < 1.2$.
The number of point-like sources in the real observation
is $\sim 1.5-2$ times larger than the simulated
distribution of clumps at all radii.
The observed luminosity distribution inside the innermost radial bin also shows an excess with respect to the simulated clumps at all luminosities.
In the outer bin, however, the observed luminosity distribution
has a totally different shape with respect to the simulated one.
It should be noticed that in the outer regions the sensitivity
to point-like sources in the real observation is much degraded.
Therefore, it appears likely that the change in the observed
luminosity distribution function of point-like sources is mainly
driven by instrumental effects.
We found very similar results in the catalogue of point-sources of the {\small ROSAT} observation of PK0754-191 of \citet{eckert12}.
Based on such small statistics, and given the degrading sensitivity of {\small ROSAT} to point-like sources at large
radii, it is still premature to derive stronger conclusions. The number of expected point-like sources brighter than $\sim 3 \cdot 10^{-14} \rm erg/(s \cdot cm^2)$ in the field is $\sim 20/$deg$^2$, based on the collection of wide field and deep pencil
surveys performed with {\small ROSAT}, {\small Chandra} and {\small XMM} by \citet{2003ApJ...588..696M}. Given the
projected area of the A2142 observation of \citet{eckert12}, the expected number of point-like field sources is $\sim 6-8$ inside
$R_{\rm 200}$, significantly smaller than both the point-like sources detected in the field of A2142 and the total number
of simulated clumps (that are in the order of $\sim 40$ within $R_{\rm 200}$).
However, one should note that the density of galaxies in a cluster field is much higher than in a typical field, and so is the number of AGNs \citep[see e.g.][]{2007ApJ...664..761M,2012ApJ...754...97H}. Thus, the larger number of detected point sources compared to the expectation from background AGN cannot be directly used as evidence for gas clumping.
It is possible that a significant fraction of such point-like sources are compact and bright self-gravitating gas clumps in clusters. However, it is
unlikely that they can be identified based on their X-ray morphology, given their small size.
In the case of high-resolution {\it Chandra} images of nearby clusters, it is possible that some existing data can actually contain
indications of gas clumps. Their detection, however, is made complex
by the strong luminosity contrast required for them to be detected
against the bright innermost cluster atmosphere. Very recently, the
application of sophisticated analysis techniques has
indeed suggested that some gas clumps (with size $\leq 100-200 ~\rm kpc$) may actually be present in nearby {\it Chandra} observations \citep[][]{2012ApJ...746..139A}.
In the near future, cross-correlating with the additional information of temperature, when X-ray spectra are available, may unveil the presence of gas clumps in the X-ray images (Fig.7).
\section*{acknowledgements}
We thank our referee for useful comments, which greatly improved the quality of our manuscript.
F.V. and M.B. acknowledge support through grant FOR1254 from the Deutsche Forschungsgemeinschaft (DFG).
F.V. acknowledges computational resources under the CINECA-INAF 2008-2010 agreement. S.E. and F.V. acknowledge the financial contribution from contracts ASI-INAF I/023/05/0 and I/088/06/0. We acknowledge G. Brunetti and C. Gheller for fruitful collaboration in the production of the simulation runs. Support for this work was provided to A. S. by NASA through Einstein Postdoctoral Fellowship grant number PF9-00070 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060. We thank Marc Audard for kindly providing us his code for the X-ray cooling function.
\bibliographystyle{mnras}
\section{INTRODUCTION}
\label{sec:Introduction}
$B$ meson decays to final states containing
an axial-vector meson and a pseudoscalar meson
have been recently studied both experimentally and
theoretically. Branching fractions of the $B$ mesons decays to final
states containing an $a_1(1260)$ or $b_1$ meson associated with a
pion or a kaon have been measured experimentally \cite{APDECAYS}.
Theoretical predictions for the branching fractions of $B$
decay modes to final states containing an axial-vector and a
pseudoscalar meson have been calculated assuming a naive
factorization hypothesis \cite{LAPORTA,CALDERON} and QCD
factorization~\cite{CHENG}. Expected branching fractions of these $B$
meson decay modes are of the order of $10^{-6}$.
Recently the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ Collaboration has measured \ensuremath{C\!P}\xspace-violating
asymmetries in $B^0 \ensuremath{\rightarrow}\xspace a_1(1260)^{\pm} \pi^{\mp}$ decays and
determined the angle $\alpha_{\rm eff}$~\cite{ALPHA}.
In the absence of penguin contributions in these decay modes, this
angle would coincide with the angle $\alpha$ of the unitarity triangle
of the Cabibbo-Kobayashi-Maskawa
quark-mixing matrix~\cite{CKM}.
Theoretical bounds on $\Delta \alpha = \alpha -\alpha_{\rm eff}$ in
these decay modes based on SU(3) flavor-symmetry have been derived
in~\cite{BOUNDS}.
The rates of $B \rightarrow \ensuremath{K_1(1270)\pi}$ and
$B \rightarrow \ensuremath{K_1(1400)\pi}$
decays are experimental inputs to the calculation of these bounds.
For the $\ensuremath{K_1(1400)^+\pi^-}$ decay mode~\footnote{Except as noted
explicitly, we use a particle name to denote either
member of a charge conjugate pair.}
there exists a published experimental
upper limit at 90\%\ confidence level (CL) of $1.1\times10^{-3}$~\cite{ARGUS}.
Preliminary results for the branching fractions of the $\ensuremath{K_1(1270)^+\pi^-}$ and
$\ensuremath{K_1(1400)^+\pi^-}$ decay modes were obtained by the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ Collaboration
on a sample of 384 million $\BB$ pairs~\cite{MORIOND}.
In the following, we use \ensuremath{K_1}\ to indicate both \ensuremath{K_1(1270)}\ and \ensuremath{K_1(1400)}\ mesons.
\section{THE \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ DETECTOR AND DATASET}
\label{sec:babar}
The results presented here are based on a sample of $N_{\BB} = 454.3 \pm 5.0$
million $\BB$ pairs collected with the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\
detector~\cite{BABARNIM} at the PEP-II $e^+e^-$ asymmetric-energy
storage rings. The $e^+e^-$ center-of-mass energy $\sqrt{s}$ is equal
to $10.58 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$, corresponding to the $\ensuremath{\Upsilon(4S)}$ resonance.
Momenta of charged particles are measured in a
tracking system consisting of a silicon vertex tracker with five
double-sided layers and a 40-layer drift chamber, both within the 1.5
T magnetic field of a solenoid.
Identification of charged hadrons is provided by measurements
of the energy loss in the tracking devices and by a ring-imaging
Cherenkov detector.
For lepton identification, we use the energy deposit in a
CsI(Tl) electromagnetic calorimeter and the pattern of hits
in resistive plate chambers (partially upgraded to limited
streamer tubes for a subset of the data used in this analysis)
interleaved with the passive material of the solenoid
magnetic flux return.
\section{ANALYSIS METHOD}
\label{sec:Analysis}
The $\ensuremath{B^0}\xspace \rightarrow \ensuremath{K_1^+\pi^-}$ candidates are
identified from the
$\ensuremath{K_1^+} \rightarrow K^+\pi^+\pi^-$ final state, with reconstructed mass
$m_{K\pi\pi}$ in the $\left[1.1,1.8\right]$ \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace range.
They are kinematically characterized by
$\mbox{$m_{\rm ES}$}\xspace=[(s/2+{\bf p}_{\Upsilon}\cdot{\bf p}_B)^2/E_{\Upsilon}^2-{\bf p}_B^2]^{1/2}$
and $\ensuremath{\Delta E} = (E_{\Upsilon}E_B-{\bf p}_{\Upsilon}\cdot{\bf p}_B-s/2)/\sqrt{s}$,
where $(E_B,{\bf p}_B)$ is the four-momentum of the $B$ candidate, and
$(E_{\Upsilon},{\bf p}_{\Upsilon})$ is the $e^+e^-$ initial state
four-momentum, both in the laboratory frame.
We require $\mbox{$m_{\rm ES}$}\xspace > 5.25$ \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\ and $|\ensuremath{\Delta E}| < 0.15$ \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace.
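As an illustration, the two kinematic variables above can be evaluated directly from the lab-frame four-momenta. The following sketch is ours (function name and toy values are not part of the analysis code); it simply transcribes the two formulas:

```python
import math

def mes_deltae(pB, pY, s):
    """Beam-energy-substituted mass m_ES and Delta E for a B candidate.
    pB, pY: lab-frame four-vectors (E, px, py, pz) of the B candidate
    and of the e+e- initial state; s: squared CM energy. Units: GeV."""
    EB, pxB, pyB, pzB = pB
    EY, pxY, pyY, pzY = pY
    dot = pxY * pxB + pyY * pyB + pzY * pzB        # p_Y . p_B
    pB2 = pxB ** 2 + pyB ** 2 + pzB ** 2           # |p_B|^2
    mes = math.sqrt((s / 2 + dot) ** 2 / EY ** 2 - pB2)
    de = (EY * EB - dot - s / 2) / math.sqrt(s)
    return mes, de
```

As a sanity check, for a $B$ at rest in a frame where lab and CM coincide, $m_{\rm ES}$ reduces to $\sqrt{s}/2 \simeq 5.29$ GeV$/c^2$ and $\Delta E$ to zero.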
To reject the dominant $\ensuremath{e^+e^-}\xspace \ensuremath{\rightarrow}\xspace $ quark-antiquark background, we use
the thrust angle \ensuremath{\theta_{\rm T}}\ between the $B$-candidate thrust axis and that of
the rest of the event, calculated in the center-of-mass (CM) frame,
and a Fisher discriminant \ensuremath{{\cal F}}~\cite{FISHER}.
The discriminant combines the polar angles of the $B$-momentum
vector and the $B$-candidate thrust axis with respect to the beam
axis, and
the zeroth and second moments
of the energy flow around the $B$-candidate
thrust axis, calculated in the CM frame~\cite{FISHER}.
The resonant $K^+\pi^+\pi^-$ system can receive contributions from
several strange resonances in the selected range
for $m_{K\pi\pi}$, besides $K_1$ mesons, such as $K^{*}(1410)^+$ ($J^P=1^-$),
$K^{*}(1680)^+$ ($1^-$), and $K_2^{*}(1430)^+$ ($2^+$). Decays containing
any of these resonances are characterized by different angular distributions.
We define \ensuremath{{\cal H}}\ as the cosine of the angle between the direction of the
primary pion from the $B$ decay and the normal to the plane defined by
the \ensuremath{K_1}\ daughters in the \ensuremath{K_1}\ rest frame. We require $|\ensuremath{{\cal H}}| < 0.95$ to
reduce background from $B^0 \rightarrow V^+ \pi^-$ decay modes, where $V^+$ is
a vector meson decaying to $K^+\pi^+\pi^-$.
Background from $B$ decays to final states with charm is suppressed by
rejecting a signal candidate if it has at least one track in common
with a background $B$ candidate, reconstructed in any of the
$B^0 \ensuremath{\rightarrow}\xspace D^- \pi^+$, $B^0 \ensuremath{\rightarrow}\xspace D^{*-} \pi^+$, and $B^+ \ensuremath{\rightarrow}\xspace \bar D^0 \pi^+$
background decay channels, with $D$ meson mass within $0.07$~\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\ of the
nominal value (if more than one such background candidate is reconstructed
per event per background channel, the one with the highest $B$ vertex fit
$\chi^2$ probability is chosen).
To suppress background from $B$ decays to final states with
charmonium we calculate the invariant mass of the neutral
$\pi \pi$ combination of the primary pion from $B$ decay
with the opposite charge pion from \ensuremath{K_1}\ decay, and require that it
is not consistent
with any of the \ensuremath{c\overline c}\xspace\ mesons $J/\psi$, $\psi(2S)$, $\eta_c$,
$\eta_c(2S)$, $\chi_{c0}(1P)$, and $\chi_{c1}(1P)$.
We also make particle identification requirements to identify pions and kaons,
and veto muons, electrons and protons.
The average number of candidates found
per selected event
in the data sample is $1.20$.
In case of events with multiple candidates, we select the candidate
with the highest $B$ vertex fit $\chi^2$ probability.
We classify the events according to the invariant masses
of the $K^+ \pi^-$ and $\pi^+ \pi^-$ systems in the $\ensuremath{K_1^+} \ensuremath{\rightarrow}\xspace K^+ \pi^+
\pi^-$ final state: events which satisfy the requirement
$0.846 < m_{K\pi} < 0.946$ \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\ belong to class 1 (``$K^*$ region'');
events not included in class 1 for which $0.500 < m_{\pi\pi} < 0.800$
\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\ belong to class 2 (``$\rho$ region''); all other events are rejected.
A two-resonance, six-channel $K$-matrix model \cite{KMATRIX} is used
to describe the resonant $K\pi\pi$ system for the signal~\cite{WA3}.
The notation is consistent with that used in \cite{WA3}.
The labels $a$ and $b$ in the following paragraphs refer to $K_1(1400)$
and $K_1(1270)$, respectively. The production amplitude for channel
$i = \{(K^*~\pi)_{S-wave}, (K^*~\pi)_{D-wave},$ $\rho~K, K_0^*~\pi, f_0(1370)~K, \omega~K\}$
is given by
\begin{equation}
F_i = e^{\mathrm{i} \delta_i}\sum_j(\mathbf{1}-\mathrm{i}\mathbf{K}{\boldsymbol \rho})_{ij}^{-1}
\mathbf{P}_j,
\end{equation}
where
\begin{equation}
K_{ij} = \frac{f_{ai}f_{aj}}{M_a-M}+\frac{f_{bi}f_{bj}}{M_b-M},
\end{equation}
$\delta_i$ are offset phases ($\delta_{(K^*\pi)_S} \equiv 0$), and $\mathbf{P}$ is
the production vector
\begin{equation}
P_i = \frac{f_{pa}f_{ai}}{M_a-M}+\frac{f_{pb}f_{bi}}{M_b-M}.
\end{equation}
The decay constants $f_{ai}$, $f_{bi}$ and the $K$-matrix poles
$M_{a}$ and $M_{b}$ are real.
The elements of the diagonal phase space matrix \mbox{\boldmath$\rho$}
for the process $K_1 \rightarrow 3 + 4$, $3 \rightarrow 5 + 6$,
where $4$, $5$ and $6$ are long-lived pseudoscalar particles and $3$ is a
resonance,
have been approximated with the form
\begin{equation}
\rho_{i}(M)=\frac{\sqrt{8}}{M}\left[\frac{m^*m_4}{m^*+m_4}(M-m^*-m_4+\mathrm{i}\Delta)\right]^{1/2},
\end{equation}
where $M$ is the mass of the $K_1$, $m_4$ is the mass of particle $4$,
$m^*$ is the mean mass of resonance $3$, and $\Delta$ is the half width of
resonance $3$ \cite{PHSP}.
The parameters of $\mathbf{K}$ and the offset phases $\delta_i$ are obtained
from a fit to
the intensity and the relative phases of the $K\pi\pi$ channels,
which were extracted by the ACCMOR Collaboration in a partial wave analysis of
the data on the reaction $K^-p \rightarrow K^-\pi^+\pi^-p$
accumulated by the WA3 experiment~\cite{WA3}.
For the fit to WA3 data we add a background term to the production
vector~\cite{DECK}.
The decay constants for the $\omega~K$ channel are fixed according to the
quark model \cite{WA3}.
We express the complex production constants $f_{pa}$ and $f_{pb}$ in terms
of the production parameters ${{\boldsymbol \zeta}=(\theta,\phi)}$:
$f_{pa}\equiv \cos\theta$, $f_{pb}\equiv \sin\theta e^{\mathrm{i}\phi}$, where
$\theta \in [0,\pi/2]$, and $\phi \in [0,2\pi]$. In this parameterization,
$\tan\theta$ represents the magnitude of the production constant for
the $K_1(1270)$ meson relative to that for the $K_1(1400)$ meson, while
$\phi$ is the relative phase.
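A minimal numerical sketch of this amplitude, reduced to two channels for readability, is given below. All numbers (pole masses, couplings, offset phases) are illustrative placeholders of our own, not the values fitted to the WA3 data:

```python
import cmath

# Toy two-channel version of the two-pole K-matrix production amplitude
#   F_i = e^{i delta_i} sum_j (1 - i K rho)^{-1}_{ij} P_j ,
# with K_ij = f_ai f_aj/(Ma-M) + f_bi f_bj/(Mb-M) and
#      P_i  = f_pa f_ai/(Ma-M) + f_pb f_bi/(Mb-M).
Ma, Mb = 1.40, 1.27                  # K-matrix poles (GeV), placeholders
fa = [0.6, 0.2]                      # decay constants of pole a per channel
fb = [0.3, 0.7]                      # decay constants of pole b per channel
delta = [0.0, 0.1]                   # offset phases (delta_1 fixed to 0)
theta, phi = 0.785, 0.942            # production parameters zeta
fpa, fpb = cmath.cos(theta), cmath.sin(theta) * cmath.exp(1j * phi)

def amplitudes(M, rho):
    """Production amplitudes F_i at K pi pi mass M; rho: diagonal
    phase-space factors per channel."""
    K = [[fa[i] * fa[j] / (Ma - M) + fb[i] * fb[j] / (Mb - M)
          for j in range(2)] for i in range(2)]
    P = [fpa * fa[i] / (Ma - M) + fpb * fb[i] / (Mb - M) for i in range(2)]
    # D = 1 - i K rho, inverted explicitly for two channels
    D = [[(1 if i == j else 0) - 1j * K[i][j] * rho[j]
          for j in range(2)] for i in range(2)]
    det = D[0][0] * D[1][1] - D[0][1] * D[1][0]
    Dinv = [[D[1][1] / det, -D[0][1] / det],
            [-D[1][0] / det, D[0][0] / det]]
    return [cmath.exp(1j * delta[i]) * sum(Dinv[i][j] * P[j] for j in range(2))
            for i in range(2)]
```

The full model uses six channels and the phase-space factors $\rho_i(M)$ given below; the structure of the computation is unchanged.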
Signal MC samples are generated by weighting the $(K^+\pi^+\pi^-)\pi^-$
population according to the amplitude
$\sum_{i\neq \omega K}\langle K^+\pi^+\pi^-| i \rangle F_i$, where the term
$\langle K^+\pi^+\pi^-| i \rangle$
consists of a factor describing the angular distribution of the
$(K^+\pi^+\pi^-)$ system resulting from \ensuremath{K_1}\ decay,
an amplitude for the resonant $\pi^+\pi^-$ and $K^+\pi^-$ systems,
and isospin factors, and is
calculated using the formalism described in \cite{HERNDON}.
The branching fraction for $\ensuremath{K_1} \ensuremath{\rightarrow}\xspace \omega K$ is
accounted for as a correction to the total selection efficiency.
We use an unbinned, extended maximum-likelihood (ML) fit to extract the
event yields $n_{s,r}$ and the parameters of the probability density function
(PDF) ${\cal P}_{s,r}$. The subscript $r=\{1,2\}$ corresponds to one of the
event classes defined above. The index $s$ represents six event
categories used in our data model:
\begin{itemize}
\item the signal $\ensuremath{B^0}\xspace \rightarrow \ensuremath{K_1^+\pi^-}$ ($s=1$),
\item possible backgrounds from ${B^0\ensuremath{\rightarrow}\xspace a_1(1260)^{\pm} \pi^{\mp}
\rightarrow (\pi^{\pm} \pi^+ \pi^-) \pi^{\mp}}$ ($s=2$),
\item $B^0 \ensuremath{\rightarrow}\xspace D^- \pi^+ \rightarrow (K^+ \pi^- \pi^-) \pi^{+}$ ($s=3$),
\item $B^0 \ensuremath{\rightarrow}\xspace K^{*}(1410)^+ \pi^-$ ($s=4$),
\item $B^0 \ensuremath{\rightarrow}\xspace K^{*0}\pi^+\pi^- + \rho^0 K^+ \pi^-$ ($s=5$),
\item combinatorial background ($s=6$).
\end{itemize}
We perform a likelihood scan with respect to the parameters
${\boldsymbol \zeta}$, with $21\times 21$ points. At each point, a
simultaneous fit to the two event classes is performed.
The signal and background PDFs are the products of the PDFs for
independent variables.
The signal PDFs for $\ensuremath{\Delta E}$, $\mbox{$m_{\rm ES}$}\xspace$, and $\ensuremath{{\cal F}}$ are parameterized as
the sum of Gaussian functions for the core of the distributions
plus empirical functions accounting for the tails.
The dependence on {\boldmath$\zeta$} of the selection efficiencies
and the signal PDF for $m_{K\pi\pi}$ are parameterized by means of
templates modeled upon signal MC samples.
Resonance production occurs in the non-signal $B$ background and is taken
into account in the PDFs.
For the combinatorial background, we use polynomials, except for
$\mbox{$m_{\rm ES}$}\xspace$ and $\ensuremath{{\cal F}}$ distributions which are parameterized by
an empirical phase-space function \cite{ARGUSMES} and by Gaussian functions,
respectively.
The combinatorial background PDF is found to describe well both the
dominant quark-antiquark background and the background from random
combinations of $B$ tracks.
For all components, PDFs for ${\cal H}$ are parameterized with polynomials.
The likelihood ${\cal L}_e$ for each candidate $e$
belonging to class $r$ is defined as
${\cal L}_e = \sum_{s}n_{s,r}\, {\cal P}_{s,r}$({\boldmath ${\rm
x}_e$};~{\boldmath$\zeta$},~{\boldmath$\xi$}),
where the PDFs are formed using the set of observables
{\boldmath ${\rm x}_e$}~$=\{\ensuremath{\Delta E}$, $\mbox{$m_{\rm ES}$}\xspace$, $\ensuremath{{\cal F}}$, $m_{K\pi\pi}$, $\ensuremath{{\cal H}}$\}
and the dependence on production parameters {\boldmath$\zeta$} is
relevant only for the signal PDF.
{\boldmath$\xi$} represents all other PDF parameters.
In the definition of ${\cal L}_e$ the yields of the signal category
for the two classes are expressed as a function of the signal branching
fraction $ {\cal B}$ as
$n_{1,1}={\cal B} \times N_{\BB} \times
\epsilon_1$({\boldmath$\zeta$}) and $n_{1,2}={\cal B} \times N_{\BB}
\times \epsilon_2$({\boldmath$\zeta$}), where the total selection
efficiency, $\epsilon_r$({\boldmath$\zeta$}), includes the daughter
branching fractions and the reconstruction efficiency obtained from MC
simulation.
The signal branching fraction is a free parameter in the fit.
The yields for event categories $s=2$ and $3$ are fixed to
the values estimated from MC. The yields for the other background
components are determined from the fit.
The PDF parameters for combinatorial background are left free to vary in
the fit while those for the other event categories
are fixed to the values extracted from
Monte Carlo (MC) simulation~\cite{GEANT} and calibration
$B^0 \ensuremath{\rightarrow}\xspace D^-\pi^+$ decays.
\section{SYSTEMATIC STUDIES}
\label{sec:Systematics}
The main sources of systematic uncertainties are summarized in
Table~\ref{tab:systtab}.
We repeat the fit, varying within their uncertainties all the parameters
in {\boldmath$\xi$} that were not left floating in the fit,
and obtain the associated systematic uncertainties.
The signal PDF model excludes the fake combinations originating from
mis-reconstructed signal events.
The biases due to the presence of fake combinations,
or other imperfections in the signal PDF model are estimated with MC
simulation.
The finite resolution of the likelihood scan is also a source of
bias.
A systematic error is evaluated by varying the $K_1(1270)$ and $K_1(1400)$
mass poles in the signal model, the parameterization of the intermediate
resonances in $K_1$ decay, and the offset phases $\delta_i$.
Additional systematic uncertainty originates from potential peaking
\BB\ background, including $B^0 \ensuremath{\rightarrow}\xspace K_2^{*}(1430)^+ \pi^-$ and
$B^0 \ensuremath{\rightarrow}\xspace K^{*}(1680)^+ \pi^-$, and is evaluated by introducing
the corresponding components
in the definition of the likelihood
and repeating the fit with their yields fixed to
values estimated from the available experimental information~\cite{PDG}.
We assign a systematic uncertainty due to yield variation
in the $B^0\ensuremath{\rightarrow}\xspace a_1(1260)^{\pm} \pi^{\mp}$ and $B^0 \ensuremath{\rightarrow}\xspace D_{K^+ \pi^- \pi^-}^-
\pi^+$ event categories.
The above systematic uncertainties do not scale with event yield
and are included in the calculation of the significance of the result.
We estimate the systematic uncertainty due to the interference between
the $B^0 \rightarrow \ensuremath{K_1^+\pi^-}$ and the $B^0 \ensuremath{\rightarrow}\xspace K^{*0}\pi^+\pi^- + \rho^0
K^+ \pi^-$ decays using simulated samples in which the decay amplitudes
are generated according to the results of this measurement. The overall
phases and relative contribution for $K^{*0}\pi^+\pi^-$ and $\rho^0 K^+ \pi^-$
interfering states are assumed to be constant across the phase space
and varied between zero and a maximum value using uniform prior
distributions. We calculate the systematic uncertainty from the RMS
variation of the average signal branching fraction and parameters.
In the calculation of significance, this effect is assumed to scale
with the square root of the signal branching fraction.
The systematic uncertainties in efficiencies are dominated
by those in track finding and particle identification.
Other systematic effects arise from event-selection criteria,
such as track multiplicity and thrust angle,
and the number of $B$ mesons.
\begin{table}[h]
\begin{center}
\caption{Estimates of systematic errors.
For the branching fraction, some of these errors are additive (A)
and given in units of $10^{-6}$, others are
multiplicative (M) and given in \% . Contributions are combined in
quadrature.}
\label{tab:systtab}
\begin{tabular}{lccc}
\hline\hline
Quantity & ${\cal B}$ & $\theta$ & $\phi$ \\
\hline
PDF parameters (A) & $ 1.0 $ & $ 0.01 $ & $ 0.04 $ \\
MC/data correction (A) & $ 1.2 $ & $ 0.05 $ & $ 0.27 $ \\
ML Fit bias (A) & $ 0.6 $ & $ 0.03 $ & $ 0.02 $ \\
Scan (A) & $ 1.3 $ & $ 0.04 $ & $ 0.16 $ \\
$K_1$ mass poles (A) & $ 2.2 $ & $ 0.01 $ & $ 0.36 $ \\
$K_1$ offset phases (A) & $ 0.2 $ & $ 0.01 $ & $ 0.02 $ \\
$K_1$ intermediate resonances (A) & $ 0.5 $ & $ 0.00 $ & $ 0.06 $ \\
Peaking $\BB$ bkg (A) & $ 0.8 $ & $ 0.02 $ & $ 0.27 $ \\
Fixed background yields (A) & $ 0.8 $ & $ 0.04 $ & $ 0.08 $ \\
Interference (A) & $ 5.9 $ & $ 0.25 $ & $ 0.52 $ \\
MC statistics (M) & $ 1.0 $ & $ - $ & $ - $ \\
Particle ID (M) & $ 2.9 $ & $ - $ & $ - $ \\
Track finding (M) & $ 1.0 $ & $ - $ & $ - $ \\
\ensuremath{\cos\ensuremath{\theta_{\rm T}}} (M) & $ 1.0 $ & $ - $ & $ - $ \\
Track multip. (M) & $ 1.0 $ & $ - $ & $ - $ \\
Number \BB\ (M) & $ 1.1 $ & $ - $ & $ - $ \\
\hline
Total (A) & $ 6.9 $ & $ 0.26 $ & $ 0.76 $ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\section{RESULTS}
\label{sec:Results}
Figure~\ref{fig:nllscan} shows the likelihood scan and the values
of ${\cal B}_{sg}$ which minimize $-\ln{\cal L}$ as a function of $\theta$
and $\phi$.
The absolute minimum
occurs at $\theta = 0.785$ and $\phi = 0.942$, and the signal branching
fraction corresponding to that point of the scan is
${\cal{B}}(\ensuremath{\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace\fKucPi}) = ( 31.0 \pm 2.7 ) \times 10^{-6}$.
By interpolation between neighbouring
points of the likelihood scan we extract $\theta = 0.81 \pm 0.06$
and $\phi = 1.11 \pm 0.28$. The quoted errors on the branching fraction
and production parameters ${\boldsymbol \zeta}$ are statistical only and correspond
to a $0.5$ increase in $-\ln{\cal L}$. A second, local minimum is located
at $\theta = 0.785$ and $\phi = 3.454$, and is associated with a $1.0$
increase in $-\ln{\cal L}$.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.445\linewidth]{fig01a.eps}
\includegraphics[width=0.445\linewidth]{fig01b.eps}
\caption{Left: contour plot of $-\ln{\cal L}$ (no systematic effects
included) in the $\theta,\phi$ plane; each line corresponds to a $n^2/2$
increase in $-\ln{\cal L}$, with $n=\{1,2,...\}$, with respect to the
minimum (indicated by a cross). Right: fitted value of ${\cal B}_{sg}$, in
units of $10^{-6}$ as a function of $\theta$ and $\phi$.}
\label{fig:nllscan}
\end{center}
\end{figure}
A conservative estimate of significance is calculated from a likelihood ratio
test $\Delta(-2\ln{\cal L}) $, assuming a $\chi^2$ distribution with $N=3$
degrees of freedom and minimizing the significance with respect to the
production parameters $(\theta,\phi)$. Here $ \Delta(-2\ln{\cal L}) $ is the
difference between the value of $-2\ln{\cal L}$
for zero signal and the value at its minimum for
given values of ${\boldsymbol \zeta}$
(${\cal L}$ represents the convolution of the likelihood with a
Gaussian function representing additive systematic uncertainties on the
branching fraction).
We observe a nonzero $B^0 \rightarrow \ensuremath{K_1^+\pi^-}$ branching fraction with
a significance greater than $5.1~\sigma$.
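A back-of-the-envelope version of this conversion can be sketched as follows; we assume the one-sided convention $p = \mathrm{erfc}(Z/\sqrt{2})/2$ for the Gaussian significance $Z$, and use the closed-form survival function of a $\chi^2$ with 3 degrees of freedom:

```python
import math

def chi2_sf_3dof(x):
    """Survival function P(X > x) of a chi-square with 3 degrees of freedom."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

def significance(delta_2nll):
    """Gaussian significance Z of a likelihood-ratio statistic
    Delta(-2 ln L), referred to a chi-square with N = 3 dof."""
    p = chi2_sf_3dof(delta_2nll)
    lo, hi = 0.0, 40.0               # solve erfc(Z/sqrt(2))/2 = p by bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / math.sqrt(2)) / 2 > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, $\Delta(-2\ln{\cal L}) = 7.815$ (the 95th percentile of a $\chi^2_3$) corresponds to $Z \simeq 1.64$ under this convention.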
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.700\linewidth]{fig02.eps}
\caption{sPlot projections onto a) \mbox{$m_{\rm ES}$}\xspace\ (class 1),
b) \mbox{$m_{\rm ES}$}\xspace\ (class 2), c) \ensuremath{\Delta E}\ (class 1),
d) \ensuremath{\Delta E}\ (class 2) in the \ensuremath{K_1\pi}\ decay.
Points represent on-resonance data; the solid line is the signal fit
function.
}
\label{fig:splots}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.700\linewidth]{fig03.eps}
\caption{sPlot projection onto $m_{K \pi \pi}$ for class 1 (left) and
class 2 events (right).
Points represent
on-resonance data; the solid line is the sum of the fit functions of the
decay modes $K_1(1270) \pi + K_1(1400) \pi$ (dashed),
$K^*(1410) \pi$ (dash-dotted), and $K^*(892) \pi \pi$ (dotted).
Here the points are obtained without using any information about
resonances in the fit, \emph{i.e.} we use only \mbox{$m_{\rm ES}$}\xspace, \ensuremath{\Delta E}, and \ensuremath{{\cal F}}\
variables, while for the normalization of the curves we use the signal
yields obtained from the nominal fit.
}
\label{fig:mass}
\end{center}
\end{figure}
Figure~\ref{fig:splots} shows the distributions of \ensuremath{\Delta E}\ and \mbox{$m_{\rm ES}$}\xspace\
for the signal events, obtained by the
event-weighting technique (sPlot) described in~\cite{SPLOT}.
For each event, a weight to be signal or background is derived according
to the results of the fit to all variables and the probability distributions
in the restricted set of variables, in which the projection variable is
omitted. Using these weights, the data is then plotted in the projection
variable. We show in Figure~\ref{fig:mass} the projection onto
$m_{K\pi\pi}$.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.445\linewidth]{fig04a.eps}\\
\includegraphics[width=0.445\linewidth]{fig04b.eps}
\caption{68 \% (dark shaded zone) and 95 \% (light shaded zone) probability
regions for $\theta$ and $\phi$ (top), $\theta$ (bottom-left) and $\phi$
(bottom-right).}
\label{fig:clregions}
\end{center}
\end{figure}
The experimental two-dimensional likelihood $\mathcal{L}$ for
$\theta$ and $\phi$ is convoluted with a two-dimensional Gaussian
that accounts for the systematic uncertainties.
In Figure~\ref{fig:clregions} we show the distributions we obtain
for $\theta$, $\phi$ and $\theta$ vs. $\phi$ (the 68\% and 95\% probability
regions are shown in dark and light shading respectively, and
are defined as the regions which satisfy ${\cal {L}}(r)>{\cal {L}}_{min}$ and
$\int_{{\cal {L}}(r)>{\cal {L}}_{min}}{\cal {L}}(r) dr = 68\%~~(95\%)$,
where $r$ is the projected set of variables).
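On a discrete grid, this prescription amounts to finding the threshold ${\cal L}_{min}$ above which the likelihood integrates to the requested probability. A sketch of our own (flat grid, uniform cell area assumed):

```python
def hpd_threshold(L_grid, prob):
    """Likelihood threshold L_min such that the region {L > L_min} of a
    uniformly gridded likelihood carries a fraction `prob` of the integral."""
    vals = sorted((v for row in L_grid for v in row), reverse=True)
    total = sum(vals)
    acc = 0.0
    for v in vals:
        acc += v
        if acc >= prob * total:
            return v
    return vals[-1]
```

Cells whose likelihood exceeds the returned threshold form the 68\% (or 95\%) probability region.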
The condition ${\cal {L}}(r)>{\cal {L}}_{II}$,
where ${\cal {L}}_{II}$ is the value of the likelihood evaluated at the
position of the second, local minimum in Figure~\ref{fig:nllscan}, defines
a $48\%$ probability region, with systematic uncertainties included, on the
$\theta$ vs. $\phi$ plane.
\section{CONCLUSIONS}
\label{sec:Conclusions}
We measure the branching fraction
\begin{eqnarray}
{\cal{B}}(\ensuremath{\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace\fKucPi}) = ( \ensuremath{31.0 \pm 2.7 \pm 6.9} ) \times 10^{-6}, \nonumber
\end{eqnarray}
with significance greater than $5.1~\sigma$.
The first error quoted is statistical and the second systematic.
The value of the branching fraction measured in this analysis is consistent
with preliminary results obtained by the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ Collaboration~\cite{MORIOND},
and is to be compared with the naive factorization \cite{LAPORTA,CALDERON}
and QCD factorization~\cite{CHENG} estimates, of order $10^{-6}$.
For the production parameters we obtain
\begin{eqnarray}
0.25 & < \theta < & 1.32 \nonumber \\
-0.51 & < \phi < & 4.51 \nonumber
\end{eqnarray}
at $95\%$ probability.
This analysis represents the first attempt to measure
the relative phase between the production amplitudes of
\ensuremath{K_1(1270)}\ and \ensuremath{K_1(1400)}\ mesons in $B$ decays.
\section{ACKNOWLEDGMENTS}
\label{sec:Acknowledgments}
\input pubboard/acknowledgements
\section{Introduction}
The tangent space to an integral projective variety $X \subset \mathbb P^N$ of dimension $n$ at a smooth point $P$, denoted $T_P X$,
is always of dimension $n$. This is no longer true for the osculating spaces. For instance, as pointed out by Togliatti in \cite{To}, the
osculating space $T^2_PX$, at a general point $P$, of
the rational surface $X$ defined by
$$ \check{\mathbb P}} \newcommand{\G}{\mathbf G^2 \stackrel{\phi}\longrightarrow \check{\mathbb P}} \newcommand{\G}{\mathbf G^5, \,\, (x,y,z) \mapsto (xz^2,yz^2,x^2z, y^2z, xy^2,x^2y),$$
has projective dimension $4$ instead of the expected $5$. Indeed there is a nontrivial linear relation
between the partial derivatives of order $2$ of $\phi$ at $P$ that define $T^2_PX$. This relation is usually called a \textit{Laplace equation} of order $2$.
More generally,
we will say that $X$ satisfies a Laplace equation of order $s$ when its $s$-th osculating space $T^s_PX$ at a general point $P\in X$ has dimension less than the expected one, that is $\min\{N,\binom{n+s}n-1\}$.
The study of the surfaces satisfying a Laplace equation was developed in the last century by Togliatti \cite{To} and Terracini \cite{Te}. Togliatti \cite{To}
gave a complete classification of the rational surfaces embedded
by linear systems of plane cubics and satisfying a Laplace equation of
order two.
In the paper \cite{P}, Perkinson gives a complete classification of smooth toric surfaces (Theorem 3.2) and threefolds (Theorem 3.5) embedded
by a monomial linear system and satisfying a Laplace equation of any order.
Very recently Miro-Roig, Mezzetti and Ottaviani \cite{MMO}
have established a nice link between rational varieties (i.e. projections of Veronese varieties) satisfying a Laplace equation
and artinian graded rings $A=\oplus_{0\le i\le s} A_i$ such that the multiplication by a general linear form
fails to have maximal rank in some degree $i$. On the contrary, when the rank of the multiplication map is maximal in every degree, the ring is said to have the
\textit{Weak Lefschetz Property} (briefly WLP).
The same type of problems arises when we consider the multiplication by powers $L^k$ ($k\ge 1$) of a general linear form $L$.
Indeed, if the rank of the multiplication map by $L^k$ is maximal for any $k$ and any degree, the ring is said to have the
\textit{Strong Lefschetz Property} (briefly SLP).
\\ These properties are so called after Stanley's seminal work: the Hard Lefschetz theorem is
used to prove that the ring $\frac{\C[x_0,\ldots,x_n]}{(x_0^{d_0},\ldots,x_n^{d_n})}$ has the SLP \cite[Theorem 2.4]{St}.
From this example one can ask whether the artinian complete intersection rings have the WLP.
Actually $\frac{\C[x,y,z]}{(F_0,F_1,F_2)}$ has the WLP (first proved in \cite{HMNW} and then also in \cite{BK}),
but the question is still open in more than three variables. Many other questions derive from this first example.
\\
For more details about known results and some open problems we refer to \cite{MN}.
\par
Let $I=(F_1,\ldots, F_r)$ be an artinian ideal generated by the $r$ forms $F_1,\ldots, F_r$, all of the same degree $d$, and $Syz(I)$ be the \textit{syzygy bundle} associated to $I$ and defined
in the following way:
$$ \begin{CD}
0@>>> Syz(I)(d) @>>> \mathscr{O}_{\check{\mathbb P}} \newcommand{\G}{\mathbf G^{n}}^{r} @>(F_1, \ldots, F_r)>>
\mathscr{O}_{\check{\mathbb P}} \newcommand{\G}{\mathbf G^{n}}(d) @>>> 0.
\end{CD}$$
For brevity we will denote $K=Syz(I)(d)$ and, forgetting the twist by $d$, in the rest of this text we call it the syzygy bundle.
As in \cite{HMNW}, many papers about the Lefschetz properties involve the \textit{syzygy bundle}.
Indeed, in \cite[Proposition 2.1]{BK}, Brenner and Kaid prove that the graded piece of degree $d+i$ of the artinian ring $A=\frac{\C[x_0,\ldots,x_n]}{(F_0,\ldots ,F_r)}$ is $\HH^1(K(i))$.
In \cite[Theorem 3.2]{MMO} the authors characterize the failure of the WLP (in degree $d-1$, i.e. for the map $A_{d-1}\rightarrow A_d$) when $r\le \hh^0(\mathscr{O}_{L}(d))$
by the non injectivity of the restricted map
$$ \begin{CD}
\HH^0( \mathscr{O}_{L})^{r} @>(F_1,\ldots,F_r)>>
\HH^0(\mathscr{O}_{L}(d)),
\end{CD}$$
on a general hyperplane $L$.
Let us say, in few words, what we are doing in this paper and how it is organized. First of all we recall some definitions,
basic facts and we propose a conjecture (Section \ref{s1}). In Section \ref{s2} we
extend to the SLP the characterization of failure of the WLP given in \cite{MMO}. Then we translate the failure of the WLP and SLP in terms of existence of special singular hypersurfaces (Section \ref{s3}).
This allows us to answer three open questions from \cite{MN}. In Section \ref{s4} we
construct examples of artinian rings
failing the WLP and the SLP by producing the appropriate singular hypersurfaces. In the last section (Section \ref{s5}) we relate the problem of the SLP at range 2 to the topic of line arrangements.
Let us now give more details about the different sections of this paper.
In Section \ref{s2}, more precisely in Theorem \ref{p1},
we characterize the failure of the SLP by the non maximality of the induced map on sections
$$ \begin{CD}
\HH^0( \mathscr{O}_{L^k}(i))^{r} @>(F_1,\ldots,F_r)>>
\HH^0(\mathscr{O}_{L^k}(i+d)).
\end{CD}$$
The geometric consequences of this link are explained in Section \ref{s3} (see Theorem \ref{th1bis}). The non injectivity is translated in terms of the number of Laplace equations
and the non surjectivity is related, via apolarity, to the existence of special singular hypersurfaces.
Then we give Propositions \ref{pr54-1}, \ref{pr54-2} and \ref{pr54-3} that solve three problems posed in \cite[Problem 5.4 and Conjecture 5.13]{MN}.
In Section \ref{s4} we produce many examples of ideals (monomial and non monomial) that fail the WLP and the SLP. The failure of the WLP is studied for monomial ideals generated in degree $4$ on $\check{\mathbb P}} \newcommand{\G}{\mathbf G^2$ (Theorem \ref{th3}),
in degree $5$ on $\check{\mathbb P}} \newcommand{\G}{\mathbf G^2$ (Proposition \ref{th4}), in degree $4$ on $\check{\mathbb P}} \newcommand{\G}{\mathbf G^3$ (Proposition \ref{d4m}); the failure of the SLP
is studied for monomial ideals generated in degree $4$ (Proposition \ref{d4mslp}); finally, we propose a method to produce non monomial ideals that fail the SLP at any range (Proposition \ref{nmslp}).
In the last section Lefschetz properties and line arrangements are linked. The theory of line arrangements, more generally of hyperplane arrangements, is an old and deep subject that concerns
combinatorics, topology and algebraic geometry. One can say that it began with Jakob Steiner (in the first volume of Crelle's journal, 1826), who determined in how many regions a real plane
is divided by a finite number of lines. It is also closely related to the celebrated Sylvester--Gallai problem.
Hyperplane arrangements reappear in a modern presentation in Arnold's fundamental work \cite{A} on the cohomology ring of $\check{\mathbb P}} \newcommand{\G}{\mathbf G^n\setminus D$ (where $D$ is the union of the hyperplanes of the arrangement).
For a large part of mathematicians working on arrangements, it culminates today with the Terao conjecture (see the last section of this paper or directly \cite{OT}).
This conjecture concerns particularly the derivation sheaf (also called logarithmic sheaf) associated to the arrangement. In this paper we recall the conjecture.
In Proposition \ref{th5}
we prove that the failure of the SLP at range 2 for some ideals is equivalent to the instability of the associated derivation sheaves.
Thanks to the rich literature on arrangements, we find artinian ideals that fail the SLP. For instance the Coxeter arrangement called B3 gives an original ideal that fails the SLP at range 2 in a nontrivial way (see Proposition \ref{B3}).
We finish by a reformulation of Terao's conjecture in terms of SLP.
\section{Notations}
\noindent The ground field is $\C$.\\
The dual $\mathrm{Hom}_{\C}(V,\C)$ of a vector space $V$ is denoted by $V^*$.\\
The dimension of the vector space $ \mathrm{H}^0(\mathscr{O}_{\check{\mathbb P}} \newcommand{\G}{\mathbf G^n}(t))$ is denoted by $r_t$ where $n$ is clearly known in the context.\\
The vector space generated by the set $E\subset \C^t$ is $<E>$.\\
The join variety of $s$ projective varieties $X_i\subset \check{\mathbb P}} \newcommand{\G}{\mathbf G^n$ is denoted by $\mathrm{Join}(X_1,\cdots,X_s)$ (see \cite{H} for the definition of join variety).\\
The fundamental points $(1,0,\ldots, 0), (0,1,\ldots, 0), \ldots, (0,0,\ldots, 0,1)$ in $\check{\mathbb P}} \newcommand{\G}{\mathbf G^n$ are denoted by $P_0, P_1, \ldots, P_n$.\\
We often write in the same way a projective hyperplane and the linear form defining it; we use in general the notation $L_i$ on $\check{\mathbb P}} \newcommand{\G}{\mathbf G^n$ and the notation $l_i$ on $\check{\mathbb P}} \newcommand{\G}{\mathbf G^2$ for hyperplanes.\\
The ideal sheaf of a point $P$ is $\mathcal{I}_P$.
\section{Lefschetz properties}
\label{s1}
Let
$R=\C[x_0,\ldots, x_n]=\bigoplus R_t$ be the graded polynomial ring in $n+1$ variables over $\C$. The dimension of the vector space $R_t$ is $r_t$.
\\Let
$$A=R/I= \bigoplus_{i=0}^{m}A_i$$ be a graded artinian algebra, defined by the ideal $I$. Note that $A$ is finite dimensional over $\C$.
\begin{defi}
The artinian algebra $A$ (or the artinian ideal $I$) has the Weak Lefschetz Property (WLP) if there exists a linear form $L$ such that the homomorphism induced by the multiplication by $L$,
$$ \times L : A_i \rightarrow A_{i+1},$$
has maximal rank (i.e. is injective or surjective) for all $i$. The artinian algebra $A$ (or the artinian ideal $I$) has the Strong
Lefschetz Property (SLP) if there exists a linear form $L$ such that
$$ \times L^k : A_i \rightarrow A_{i+k},$$
has maximal rank (i.e. is injective or surjective) for all $i$ and $k$.
\end{defi}
\begin{rems*}
\begin{itemize}
\item It is clear that the SLP for $k=1$ corresponds to the WLP.
\item Actually, it can be proved that if a Lefschetz element exists, then there is an open set of such elements, so that such an element can be called a \lq\lq general linear form\rq\rq.
\item We will often be interested in artinian rings $A$ that fail the SLP (or WLP), i.e. when
for any linear form $L$ there exist $i$ and $k$ such that the multiplication map
$$ \times L^k : A_i \rightarrow A_{i+k},$$
does not have maximal rank. In that case we will say that $A$ (or $I$) fails the SLP at the range $k$ and degree $i$. When $k=1$ we will simply say that $A$ fails the
WLP in degree $i$.
\end{itemize}
\end{rems*}
One of the main examples comes from Togliatti's result (see for instance \cite{BK}, Example 3.1): the ideal $I=(x^3,y^3,z^3,xyz)$ fails the WLP in degree $2$.
There are many ways to prove it.
One of them comes from the polarity on the rational normal cubic curve. It leads to a generalization that gives one of the few known non-toric examples.
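Togliatti's example can also be checked mechanically. The following SymPy sketch (our own illustration, not taken from \cite{BK}) computes the rank of $\times L : A_2\to A_3$ in the monomial bases; the choice $L=x+y+z$ is immaterial, since the rank fails to be maximal for every linear form:

```python
from itertools import combinations_with_replacement
import sympy as sp

x, y, z = sp.symbols('x y z')
V = [x, y, z]
gens = [x**3, y**3, z**3, x*y*z]    # Togliatti's ideal, generated in degree 3
L = x + y + z                       # any linear form exhibits the failure

m2 = [sp.prod(c) for c in combinations_with_replacement(V, 2)]  # basis of A_2
m3 = [sp.prod(c) for c in combinations_with_replacement(V, 3)]
quot = [m for m in m3 if m not in gens]  # monomial basis of A_3 (dimension 6)

# matrix of  x L : A_2 -> A_3  in these monomial bases
M = sp.Matrix([[sp.Poly(sp.expand(L*q), x, y, z).coeff_monomial(b)
                for b in quot] for q in m2]).T
print(M.shape, M.rank())   # (6, 6) 5 : the rank is 5 < 6
```

The single rank drop ($5$ instead of $6$) is exactly the failure of the WLP in degree $2$.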
\begin{prop}(\cite[Theorem 3.1]{V1})
Let $n\ge 1$ be an integer and $l_1, \ldots, l_{2n+1}$ be non-concurrent linear forms on $\check{\mathbb P}^2$. Then the ideal
$$ (l_1^{2n+1}, \ldots, l_{2n+1}^{2n+1}, \prod_{i=1}^{2n+1}l_i)$$
fails the WLP in degree $2n$.
\end{prop}
Indeed, on a general line $l$, the $2n+2$ forms of degree $2n+1$
become linearly dependent, thanks to the polarity on the rational normal curve of degree $2n+1$.
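For $n=1$ (Togliatti's case, with $l_1=x$, $l_2=y$, $l_3=z$) this dependence can be verified directly; here is a minimal SymPy sketch, where the line $z=2x+3y$ is an arbitrary general choice:

```python
import sympy as sp

x, y = sp.symbols('x y')
z = 2*x + 3*y                     # restrict to the (general) line z = 2x + 3y

# restrictions of x^3, y^3, z^3 and xyz: four binary cubics
cubics = [x**3, y**3, z**3, x*y*z]
basis = [x**3, x**2*y, x*y**2, y**3]
M = sp.Matrix([[sp.Poly(sp.expand(f), x, y).coeff_monomial(m) for m in basis]
               for f in cubics]).T
print(M.rank())   # 3 : four cubics in a 4-dimensional space, yet dependent
```

The same computation with any other line gives rank $3$: the middle coefficients of the restricted $z^3$ are always proportional to those of the restricted $xyz$.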
We propose the following conjecture. For $n=1$ it is again Togliatti's result.
\begin{conj*}
Let $l_1,\ldots,l_{2n+1} $ be non concurrent linear forms on $\check{\mathbb P}} \newcommand{\G}{\mathbf G^2$ and $f$ be a form of degree $2n+1$ on $\check{\mathbb P}} \newcommand{\G}{\mathbf G^2$.
Then the ideal $(l_1^{2n+1}, \ldots, l_{2n+1}^{2n+1}, f)$ fails the WLP in degree $2n$ if and only if
$f\in (l_1^{2n+1}, \ldots, l_{2n+1}^{2n+1},\prod_{i=1}^{2n+1}l_i).$
\end{conj*}
\section{Lefschetz properties and the syzygy bundle}
\label{s2}
In \cite[Proposition 2.3]{MMO}, the failure of the WLP in degree $d-1$ is related to the restriction of the syzygy bundle to a general hyperplane.
Here we extend this relationship to the SLP situation at any range and in many degrees, by using the syzygy bundle method originated in \cite{HMNW}.
\begin{thm}
\label{p1}
Let $I=(F_1, \ldots, F_r) \subset R$ be an artinian ideal generated by homogeneous
forms of degree $d$ and $K$ the syzygy bundle defined by the exact sequence
$$ \begin{CD}
0@>>> K @>>> \mathscr{O}_{\check{\mathbb P}^{n}}^{r} @>\Phi_{I}>>
\mathscr{O}_{\check{\mathbb P}^{n}}(d) @>>> 0,
\end{CD}$$
where $\Phi_{I}(a_1,\ldots,a_r)=a_1F_1+\ldots+a_rF_r.$
Let $i$ be a non-negative integer such that $ \mathrm{h}^0( K(i))=0$ and $k$ be an integer such that $k\ge 1$.
Then $I$ fails the SLP at the range $k$ in degree $d+ i-k$ if and only if the induced homomorphism on sections
(denoted by $\mathrm{H}^0(\Phi_{I,L^k})$)
$$ \begin{CD}
\mathrm{H}^0( \mathscr{O}_{L^k}(i))^r @>\mathrm{H}^0(\Phi_{I,L^k})>>
\mathrm{H}^0( \mathscr{O}_{L^k}(i+d))
\end{CD}$$
does not have maximal rank for a general linear form $L$.
\end{thm}
\begin{rem*}
The theorem is not true if $ \mathrm{h}^0( K(i))\neq 0$ i.e. if there exists a syzygy of degree $i$ among $F_1,\ldots,F_r$. In \cite{MMO} the authors consider the injectivity of the map
$\mathrm{H}^0(\Phi_{I,L^k})$ for $i=0$ and for $r\le \mathrm{h}^0(\mathscr{O}_{L}(d))$. In that case, since the forms $F_j$ are the generators of $I$, we have
of course $ \mathrm{h}^0( K)=0$.
\end{rem*}
\begin{proof}
In \cite[Proposition 2.1]{BK} the authors proved that $A_{d+i}=\mathrm{H}^1(K(i))$ for any $i\in \mathbb Z} \newcommand{\C}{\mathbb C$.
Let us consider the canonical exact sequence
$$ \begin{CD}
0@>>> K(i-k) @> \times L^k>> K(i) @>>>
K\otimes \mathscr{O}_{L^k}(i) @>>> 0.
\end{CD}$$
We obtain a long exact sequence of cohomology
$$
0 \rightarrow
\mathrm{H}^0(K\otimes \mathscr{O}_{L^k}(i)) \rightarrow A_{d+ i-k} \stackrel{\times L^k}\longrightarrow A_{d+i} \rightarrow
\mathrm{H}^1(K\otimes \mathscr{O}_{L^k}(i))\rightarrow \mathrm{H}^2(K(i-k)) \rightarrow 0.
$$
Let us assume first that $n> 2$. Then we always have
$\mathrm{h}^2(K(i-k))=0$, which gives a shorter exact sequence:
$$
0 \longrightarrow
\mathrm{H}^0(K\otimes \mathscr{O}_{L^k}(i)) \longrightarrow A_{d+ i-k} \stackrel{\times L^k}\longrightarrow A_{d+i} \longrightarrow
\mathrm{H}^1(K\otimes \mathscr{O}_{L^k}(i))\longrightarrow 0.
$$
Moreover, since $n>2$, we also have $\mathrm{h}^1(\mathscr{O}_{L^k}(i)) =0$. Then by tensoring the exact sequence defining the bundle $K$
by $\mathscr{O}_{L^k}(i)$ and taking the long cohomology exact sequence, we find:
$$ 0\longrightarrow
\mathrm{H}^0(K\otimes \mathscr{O}_{L^k}(i)) \longrightarrow \mathrm{H}^0( \mathscr{O}_{L^k}(i))^r \stackrel{\mathrm{H}^0(\Phi_{I,L^k})}\longrightarrow
\mathrm{H}^0( \mathscr{O}_{L^k}(i+d))\longrightarrow \mathrm{H}^1(K\otimes \mathscr{O}_{L^k}(i))
\rightarrow 0.$$
Since the kernel and cokernel of the two maps $\mathrm{H}^0(\Phi_{I,L^k})$ and $\times L^k$ coincide, the theorem is proved for $n>2$.
If $n=2$, let us introduce the number $t=\mathrm{h}^2(K(i-k))$. It equals $rr_{k-i-3}-r_{k-i-d-3}$, and we have a long exact sequence:
$$ 0\rightarrow
\mathrm{H}^0(K\otimes \mathscr{O}_{L^k}(i)) \longrightarrow A_{d+i-k}\stackrel{\times L^k}\longrightarrow A_{d+i}\longrightarrow
\mathrm{H}^1( K\otimes \mathscr{O}_{L^k}(i))\longrightarrow \C^t
\rightarrow 0.$$
Let us consider now the long exact sequence:
$$\begin{CD} 0@>>>\mathrm{H}^0(K\otimes \mathscr{O}_{L^k}(i)) @>>>\mathrm{H}^0( \mathscr{O}_{L^k}(i))^r @>\mathrm{H}^0(\Phi_{I,L^k})>>
\mathrm{H}^0( \mathscr{O}_{L^k}(i+d)) @>>> \\
@>>>\mathrm{H}^1(K\otimes \mathscr{O}_{L^k}(i)) @>>>\mathrm{H}^1( \mathscr{O}_{L^k}(i))^r @>>>\mathrm{H}^1( \mathscr{O}_{L^k}(i+d))@>>>0.
\end{CD} $$
Since $\mathrm{h}^1(\mathscr{O}_{L^k}(i)) =\mathrm{h}^2(\mathscr{O}_{\check{\mathbb P}} \newcommand{\G}{\mathbf G^2}(i-k))=r_{k-i-3}$ (and $\mathrm{h}^1(\mathscr{O}_{L^k}(i+d)) =\mathrm{h}^2(\mathscr{O}_{\check{\mathbb P}} \newcommand{\G}{\mathbf G^2}(i+d-k))=r_{k-i-d-3}$), it remains a shorter exact sequence
$$\begin{CD} 0@>>>\mathrm{H}^0(K\otimes \mathscr{O}_{L^k}(i)) @>>>\mathrm{H}^0( \mathscr{O}_{L^k}(i))^r @>\mathrm{H}^0(\Phi_{I,L^k})>>
\mathrm{H}^0( \mathscr{O}_{L^k}(i+d)) \\
@>>>\mathrm{H}^1(K\otimes\mathscr{O}_{L^k}(i))@>>>\C^t@>>>0.
\end{CD} $$
As before, since the kernel and cokernel of both maps are the same, the theorem is proved.
\end{proof}
Let us introduce the numbers
$N(r,i,k,d):=r(r_i-r_{i-k})- (r_{d+i}-r_{d+i-k})$,
$$N^{+}=\mathrm{sup}(0,N(r,i,k,d)) \,\, \mathrm{and}\,\, N^{-}=\mathrm{sup}(0,-N(r,i,k,d)). $$
The following corollary is a didactic reformulation of the theorem above.
\begin{coro}
Assume that there is no syzygy of degree $i$ among the $F_j$'s. Then $I$ fails the SLP at the range $k$ in degree $d+i-k$
if and only if one of the two following equivalent conditions occurs:
\begin{itemize}
\item $\mathrm{h}^0(K\otimes \mathscr{O}_{L^k}(i))=\dim_{\mathbb C} (\mathrm{ker}(\mathrm{H}^0(\Phi_{I,L^k})))> N^{+}$,
\item $\dim_{\mathbb C} (\mathrm{coker}(\mathrm{H}^0(\Phi_{I,L^k})))> N^{-}$.
\end{itemize}
\end{coro}
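As a sanity check of these quantities, consider Togliatti's example $I=(x^3,y^3,z^3,xyz)$ on $\check{\mathbb P}^2$, with $r=4$, $d=3$, $i=0$, $k=1$:

```latex
% Togliatti's ideal: r=4, d=3, i=0, k=1, n=2, so r_0=1, r_{-1}=0, r_2=6, r_3=10
\[
N(4,0,1,3)=4\,(r_0-r_{-1})-(r_3-r_2)=4\cdot(1-0)-(10-6)=0,
\qquad N^{+}=N^{-}=0 .
\]
% Hence the failure of the WLP in degree 2 is equivalent to
% ker(H^0(\Phi_{I,L})) being non-zero, i.e. to a relation
%   c_1 x^3 + c_2 y^3 + c_3 z^3 + c_4 xyz = L G
% with constants c_j and G a form of degree 2.
```

Since $N^{+}=N^{-}=0$ here, failing the WLP in degree $2$ amounts to the kernel (equivalently the cokernel) of $\mathrm{H}^0(\Phi_{I,L})$ being non-trivial for every $L$.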
In the next section we translate this corollary into geometric terms.
\section{Syzygy bundle and Veronese variety}
\label{s3}
We recall that the $s$-th osculating space $T_P^{s}(X)$ to an $n$-dimensional complex projective variety $X\subset \check{\mathbb P}^N$ at $P$ is the subspace of $ \check{\mathbb P}^N$ spanned
by $P$ and by all the derivative points of degree less than or equal to $s$ of a local parametrization of $X$, evaluated at $P$. Of course, for $s=1$ we get the tangent space $T_P(X)$.
An $n$-dimensional variety $X\subset \check{\mathbb P}^N$ whose $s$-th osculating space at a general point has dimension $\mathrm{inf}(\binom{n+s}{n}-1,N)-\delta$ is said to satisfy $\delta$ independent Laplace equations of order $s$. We will say, for shortness, that the \textit{number} of Laplace equations is $\delta$.
\begin{rem*}
If $N < \binom{n+s}{n}-1$, then there are always $\binom{n+s}{n}-1-N$ linear relations between the partial derivatives. These relations are \lq\lq trivial\rq\rq\ Laplace equations of order $s$. We will not consider them in the following, so when we write \lq\lq there is a Laplace equation of order $s$\rq\rq\ we understand \lq\lq a non-trivial Laplace equation of order $s$\rq\rq.
\end{rem*}
Let us now briefly explain the link with projections of $v_t(\check{\mathbb P}^n)$.
Let $R_1$ be a complex vector space of linear forms of dimension $n+1$ such that $\mathrm{H}^0(\mathscr{O}_{\check{\mathbb P}^n}(1))=R_1$.
We consider the Veronese embedding:
$$
\begin{array}{llll}
v_{t} : & \check{\mathbb P} (R_1^*)&\hookrightarrow &\check{\mathbb P} (R_{t}^*)\\
& [L]& \mapsto & [L^{t}].
\end{array}
$$
The image $v_t(\check{\mathbb P}^n)$ is called the Veronese $n$-fold of order $t$.
At the point $[L^t]\in v_t(\check{\mathbb P}^n)$, the $s$-th osculating space, $1\le s\le t-1$,
is the space of degree $t$ forms possessing a factorization
$L^{t-s}G$ where $G$ is a form of degree $s$ \cite[Theorem 1.3]{I2}. It is identified with
$\check{\mathbb P} (R_s^*)$.
Let us think about the projective duality in terms of derivations (it is in fact the so-called apolarity, see \cite{D}). A canonical basis of $R_{d}^*$ is given by the $r_d$ derivations:
$$ \frac{\partial^d}{\partial x_0^{i_{0}}\ldots \partial x_n^{i_n}} \,\, \mathrm{with}\,\, i_0+\ldots +i_n=d.$$
Let $I=(F_1, \ldots, F_r)\subset R$ be an ideal generated by $r$ forms of degree $d$. Note that $F_1,\ldots,F_r$ are points in $\check{\mathbb P}(R_{d}^*)$.
We denote by $I_d$ the vector subspace of $R_d$ generated by $F_1,\ldots,F_r$, and we set $I_{d+i}=R_iF_1+\cdots+R_iF_r$ for any $i\geq 0$.
Let us introduce the orthogonal vector space to $I_{d+i}$
$$ I_{d+i}^{\perp}=\{\delta \in R_{d+i}^*\,|\,\delta(F)=0, \,\, \forall F\in I_{d+i}\}.$$
It gives an exact sequence of vector spaces
$$ \begin{CD}
0 @>>> I_{d+i}^{\perp} @>>>R_{d+i}^* @>>> I_{d+i}^* @>>> 0
\end{CD}$$
and the corresponding projection map
$$
\pi_{I_{d+i}}: \check{\mathbb P}(R^*_{d+i})\setminus\check{\mathbb P}(I^*_{d+i}) \longrightarrow \check{\mathbb P}(I^\perp _{d+i}).
$$
Of course one can identify $R_{d+i}/I_{d+i}\simeq (I_{d+i}^{\perp})^*$ and write the decomposition
$R_{d+i}=I_{d+i} \oplus (I_{d+i}^{\perp})^*.$
\begin{rem*}
In the following two situations, the vector
space $(I_d^{\perp})^*$ is easy to describe:
\begin{enumerate}
\item When $I_d$ is generated by $r$ monomials of degree $d$, $(I_{d}^{\perp})^*$ is generated by
the remaining $r_d-r$ monomials.
\item When $I_d=(L_1^d,\ldots,L_r^d)$ where $[L_i]\in \check{\mathbb P}(R_{1}^*)$,
$(I_{d}^{\perp})^*$ is generated by the degree $d$ polynomials that vanish at the points
$[L_i^{\vee}]\in \check{\mathbb P}(R_{1}).$
\end{enumerate}
\end{rem*}
It is well known that the tangent spaces to the Veronese varieties can be interpreted in terms of singular hypersurfaces.
More precisely, a hyperplane containing the tangent space $T_{[L^t]}v_t(\check{\mathbb P}^n)$ corresponds in the dual space $\check{\mathbb P}^{n\vee}$ to a hypersurface
of degree $t$ that is singular at the point $[L^{\vee}]$. More generally,
a hyperplane containing the $s$-th ($s\ge 1$) osculating space $T_{[L^t]}^{s}v_t(\check{\mathbb P}^n)$ corresponds to a hypersurface of degree $t$ with multiplicity $(s+1)$ at the point $[L^{\vee}]$ (see for instance \cite{BCGI}).
\smallskip
Thus the dual variety of $v_t(\check{\mathbb P}^n)$ is the discriminant variety that parametrizes the singular hypersurfaces of degree $t$, while
the $s$-th osculating variety of $v_t(\check{\mathbb P}^n)$ parametrizes the hypersurfaces of degree $t$ with a point of multiplicity $s+1$.
We now propose an extended version of the ``main'' theorem of \cite{MMO} (more precisely, of Theorem 3.2).
\begin{thm}
\label{th1bis}
Let $I=(F_1, \ldots, F_r) \subset R$ be an artinian ideal generated by $r$ homogeneous polynomials of degree $d$.
Let $i,k,\delta$ be integers such that
$i\ge 0$, $k\ge 1$.
Assume that there is no syzygy of degree $i$ among the $F_j$'s.
The following conditions are equivalent:
\begin{enumerate}
\item The ideal $I$ fails the SLP at the range $k$ in degree $d+ i-k$.
\item There exist $N^{+}+\delta$, with $\delta \ge 1$, independent vectors $(G_{1j},\ldots, G_{rj})_{j=1, \ldots,N^{+}+\delta} \in R_i^{\oplus r}$ and
$N^{+}+\delta $ forms $G_j\in R_{d+ i-k}$ such that
$G_{1j}F_1+ \ldots + G_{rj}F_r=L^kG_j$ for a general linear form $L$ of $\check{\mathbb P}^n$.
\item
The $n$-dimensional variety $\pi_{I_{d+i}}(v_{d+i}(\check{\mathbb P}^n))$ satisfies
$\delta \ge 1$ Laplace equations of order $d+i-k$.
\item \label{item_iv_thm} For any $L\in R_1$, $\mathrm{dim}_{\C}\big((I_{d+i}^{\perp})^*\cap \mathrm{H}^0(\mathcal{I}_{L^{\vee}}^{d+i-k+1}(d+i))\big)\ge N^{-}+\delta$, with $\delta \ge 1$.
\end{enumerate}
\end{thm}
\begin{proof}
The equivalence $(1)\Leftrightarrow (2)$ is proved in Theorem \ref{p1}.
Since $I$ is generated in degree $d$, the map $R_i\times I_d \rightarrow I_{d+i}$ is surjective and the relation $G_1F_1+ \ldots + G_rF_r=L^kG$
is equivalent to $\check{\mathbb P}(I_{d+i}^*) \cap T_{[L^{d+i}]}^{d+i-k}v_{d+i}(\check{\mathbb P}^n)\neq \emptyset$. More generally the
number of independent relations $G_{1j}F_1+ \ldots + G_{rj}F_r=L^kG_j$ is the dimension of the kernel
of the map $\mathrm{H}^0(\Phi_{I,L^k})$, i.e. the dimension of $\mathrm{H}^0(K\otimes \mathscr{O}_{L^k}(i))$; this number of independent relations, written
in a geometric way, is
$$N^++\delta=\mathrm{dim}[\check{\mathbb P}(I_{d+i}^*) \cap T_{[L^{d+i}]}^{d+i-k}v_{d+i}(\check{\mathbb P}^n)]+1,\,\,\, (\delta \ge 0)$$
where the projective dimension is $-1$ if the intersection is empty. The number $\delta$ is the number of (non-trivial) Laplace equations. Indeed,
the dimension of the $(d+i-k)$-th osculating space to $\pi_{I_{d+i}}(v_{d+i}(\check{\mathbb P}^n))$ is $r_{d+i-k} -N^{+} -\delta$ since
the $(d+i-k)$-th osculating space to $v_{d+i}(\check{\mathbb P}^n)$ meets the center of projection along a $\check{\mathbb P}^{N^{+}+\delta-1}$. In other words, the $n$-dimensional variety $\pi_{I_{d+i}}(v_{d+i}(\check{\mathbb P}^n))$
satisfies $\delta$ Laplace equations and $(3)$ is equivalent to $(2)$.
The image by $\pi_{I_{d+i}}$ of the $(d+i-k)$-th osculating space to the Veronese
$v_{d+i}(\check{\mathbb P}^n)$ at a general point has codimension $\mathrm{h}^0(K\otimes \mathscr{O}_{L^k}(i))-N^+$ in
$\check{\mathbb P}(I_{d+i}^{\perp})$. This codimension corresponds to the number of hyperplanes in $\check{\mathbb P}(I_{d+i}^{\perp})$ containing the osculating space to
$\pi_{I_{d+i}}(v_{d+i}(\check{\mathbb P}^n))$. These hyperplanes are images by $\pi_{I_{d+i}}$ of hyperplanes in $\check{\mathbb P}(R_{d+i}^*)$ containing
$\check{\mathbb P}(I_{d+i}^*)$ and the $(d+i-k)$-th osculating plane to
$v_{d+i}(\check{\mathbb P}^n)$ at the point $[L^{d+i}]$.
In the dual setting it means that these hyperplanes are
forms of degree $d+i$ in $(I_{d+i}^{\perp})^*$ with multiplicity $(d+i-k+1)$ at $[L^{\vee}]$. This proves that
$(3)$ is equivalent to $(4)$.
To summarize, the number of Laplace equations is $\mathrm{h}^0(K\otimes \mathscr{O}_{L^k}(i))-N^+$ and
$\mathrm{coker}(\mathrm{H}^0(\Phi_{I,L^k}))\simeq (I_{d+i}^{\perp})^*\cap \mathrm{H}^0(\mathcal{I}_{L^{\vee}}^{d+i-k+1}(d+i)).$
\end{proof}
\begin{rems*}
\begin{enumerate}[1.]
\item \label{cor_cones} Let us explain the geometric meaning of Theorem \ref{th1bis} \ref{item_iv_thm} in a simple case: if $N^-=0$, then \ref{item_iv_thm} means that $I$ fails the SLP at the range $k$ in degree $d+i-k$ if and
only if, at any point $M\in\check{\mathbb P}^n$, there exists a hypersurface of degree $d+i$ with multiplicity $d+i-k+1$ at $M$ given by a form in
$(I_{d+i}^{\perp})^*\simeq R_{d+i}/I_{d+i}$.
\item Let $
I=(L_1^{d},\ldots,L_r^{d})$ where $L_1, \ldots, L_r$ are general linear forms.
The vector space $(I_{d+i}^{\perp})^*$, where
$I_{d+i}=L_1^{d}R_i+\ldots+L_r^{d}R_i,$ is the vector space of the forms of degree $d+i$
vanishing in $r$ points $[L_j^{\vee}]$ with multiplicity $(i+1)$.
In other words $f\in \cap_{j=1}^r \mathrm{H}^0(\mathcal{I}_{L_j^{\vee}}^{i+1}(d+i))$ (see
\cite[Corollary 3]{EI}). Geometrically it can be described
as $\check{\mathbb P}(I_{d+i}^*)=\mathrm{Join}(T_{[L_1^{d+i}]}^{i}v_{d+i}(\check{\mathbb P}^n), \cdots ,
T_{[L_r^{d+i}]}^{i}v_{d+i}(\check{\mathbb P}^n)).$
\item By the theorem above, when $
N(r,i,k,d)\ge 0$, the ideal $
I=(L_1^{d},\ldots,L_r^{d})$ fails
the SLP at the range $k$ in degree $d-k+i$ if and only if
the following intersection is not empty:
$$ \mathrm{Join}(T^i_{[L_1^{d+i}]} v_{d+i}(\check{\mathbb P}^n), \cdots , T^i_{[L_r^{d+i}]}v_{d+i}(\check{\mathbb P}^n))
\cap \, T^{d+i-k}_{[L^{d+i}]}v_{d+i}(\check{\mathbb P}^n).$$
\item
Here we also focus attention on the number $\delta$ of Laplace equations satisfied by
$\pi_{I_{d+i}}(v_{d+i}(\check{\mathbb P}^n))$. The geometric meaning of this number was highlighted by
Terracini \cite{Te}
for Laplace equations of order $2$ and recently for any order by \cite{DDI}, where a classification
of varieties satisfying \lq\lq many\rq\rq\ Laplace equations is given.
\end{enumerate}
\end{rems*}
The characterization of the failure of the SLP by the existence of ad-hoc singular hypersurfaces allows us to answer, in the three following propositions, some questions posed by Migliore and Nagel.
Let us recall their questions:
\begin{problem*}
\cite[Problem 5.4]{MN}
Let $I = (x_1^N,x_2^N,x_3^N,x_4^N,L^N)$ for a general
linear form $L$. Then $R/I$ fails the WLP for $N = 3, \ldots, 12$.
There are some natural questions arising from this example:
\begin{enumerate}
\item \label{problem1} Prove the failure of the WLP in the previous example for all $N \geq 3$.
\item What happens for mixed powers?
\item \label{problem3} What happens for almost complete intersections, that is, for $r+1$ powers of general
linear forms in $r$ variables when $r \ge 4$?
\end{enumerate}
\end{problem*}
\begin{conj*}\cite[Conjecture 5.13]{MN}
Let $L_1, \ldots , L_{2n+2}$ be general linear
forms and $I = (L_1^d, \ldots , L_{2n+2}^d)$
\begin{enumerate}
\item \label{conj1} If $n = 3$ and $d = 3$ then $R/I$ fails the WLP.
\item If $n \geq 4$ then $R/I$ fails the WLP if and only if $d > 1$.
\end{enumerate}
\end{conj*}
We prove \ref{problem1} of \cite[Problem 5.4]{MN} in Proposition \ref{pr54-1}, \ref{problem3} of \cite[Problem 5.4]{MN}, for $r=4$ and $N=4$, in Proposition \ref{pr54-2} and \ref{conj1} of \cite[Conjecture 5.13]{MN} in Proposition \ref{pr54-3}.
Since all these results concern powers of linear forms, let us first verify that the hypothesis on the global syzygy in Theorem \ref{th1bis} is not restrictive.
\begin{lem}
\label{lem-syz}
Let $I$ be the ideal $(L_1^{d},\ldots,L_r^{d})$ where the $L_j$ are linear forms and $r<r_d$. Let $K$ be its syzygy bundle. Then
$$ \hh^0(K(i))=0 \Leftrightarrow rr_i\le r_{d+i}.$$
\end{lem}
\begin{proof} One direction is obvious. Let us assume that $rr_i\le r_{d+i}$ and that there exists a relation
$$G_{1}L_1^{d}+ \ldots + G_{r}L_r^{d}=0, $$
with $G_{1}, \ldots, G_{r}$ forms of $R_i$. Both hypotheses imply that the projective space
$\mathrm{Join}(T_{[L_1^{d+i}]}^{i}v_{d+i}(\check{\mathbb P}^n), \cdots , T_{[L_r^{d+i}]}^{i}v_{d+i}(\check{\mathbb P}^n))$ has dimension strictly less than the expected one.
Since the linear forms are general, this implies that the algebraic closure of $\cup_{L\in R_1}T_{[L^{d+i}]}^{i}v_{d+i}(\check{\mathbb P}^n)$ does not have the expected dimension.
This contradicts Lemma 3.3 in \cite{BCGI}.
\end{proof}
Proposition \ref{pr54-2} is already proved in \cite[Lemma 4.8]{HSS} and also in
\cite[Theorem 4.2 (ii)]{MMN}. We propose here a new proof based on the existence of a
singular hypersurface characterizing the failure of the SLP. Let us mention that,
on $\check{\mathbb P}^2$ a hypersurface of degree $d+i$ with a point of multiplicity $d+i$ is simply a union of
lines (as, for instance, in Theorem \ref{th3} and Proposition \ref{th4}), whereas on $\check{\mathbb P}^n$, with $n>2$, a hypersurface
of degree $d+i$ with a point of multiplicity $d+i$ is more generally a cone over a hypersurface in
the hyperplane at infinity. This is the key argument in the proofs of the three following propositions.
\begin{prop}
\label{pr54-1}
Let $N$ be an integer such that $N\ge 3$.
Then the ideal $(x_0^N,x_1^N,x_2^N,x_3^N, (x_0+x_1+x_2+x_3)^N)$ fails the WLP in degree $2N-3$.
\end{prop}
\begin{rem*}
Of course it is equivalent to say that $(L_1^N,\ldots, L_5^N)$ fails the WLP in degree $2N-3$ for $L_1, \ldots, L_5$ general linear forms.
\end{rem*}
\begin{proof}
Let us consider the syzygy bundle associated to the linear system
$$ \begin{CD}
0@>>> K @>>> \mathscr{O}_{\check{\mathbb P}^{3}}^{5} @>(x_0^N,x_1^N,x_2^N,x_3^N, (x_0+x_1+x_2+x_3)^N)>>
\mathscr{O}_{\check{\mathbb P}^{3}}(N) @>>> 0.
\end{CD}$$
Since $5r_{N-2}< r_{2N-2}$, Lemma \ref{lem-syz} implies $\hh^0(K(N-2))=0.$
Let $L$ be a linear form. When $N\ge 3$ we have $5\mathrm{h}^0(\mathscr{O}_{L}(N-2))\ge \mathrm{h}^0(\mathscr{O}_{L}(2N-2))$.
According to Theorem \ref{th1bis}, the failure of the WLP in degree $2N-3$ is equivalent to the existence of a surface
with multiplicity $N-1$ at the points $P_0,P_1,P_2,P_3$ and $P(1,1,1,1)$, and multiplicity $2N-2$ at a moving point $M$.
The five lines through $M$ passing through $P_0,P_1,P_2,P_3,P$ belong to a quadric cone with equation $\{F=0\}$ (the cone over the conic
at infinity through the five points). Since $F^{N-1}\in \mathrm{H}^0(\mathcal{I}_{M}^{2N-2}(2N-2))$ the hypersurface
$\{F^{N-1}=0\}$ has the desired properties.
\end{proof}
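For $N=3$ the rank drop can also be confirmed by a direct computation. The following SymPy sketch (our own check; the coefficients of $L$ are an arbitrary general choice) computes the rank of $\times L : A_3\to A_4$ as $\dim(L\cdot R_3 + I_4)-\dim I_4$:

```python
from itertools import combinations_with_replacement
import sympy as sp

X = sp.symbols('x0:4')                      # coordinates of P^3
gens = [v**3 for v in X] + [sum(X)**3]      # the five cubes, N = 3
L = X[0] + 2*X[1] + 3*X[2] + 5*X[3]         # a "general" linear form

def monos(d):
    return [sp.prod(c) for c in combinations_with_replacement(X, d)]

def mat(polys, basis):
    return sp.Matrix([[sp.Poly(sp.expand(p), *X).coeff_monomial(b)
                       for b in basis] for p in polys]).T

b3, b4 = monos(3), monos(4)                 # dim R_3 = 20, dim R_4 = 35
I4 = [v*g for v in X for g in gens]         # spans the ideal in degree 4
dimA3 = len(b3) - mat(gens, b3).rank()      # 20 - 5 = 15
rkI4 = mat(I4, b4).rank()
dimA4 = len(b4) - rkI4
# rank of  x L : A_3 -> A_4  =  dim(L*R_3 + I_4) - dim I_4
rk_map = mat(I4 + [L*m for m in b3], b4).rank() - rkI4
print(dimA3, dimA4, rk_map)   # the rank is strictly below min(dimA3, dimA4)
```

Any other linear form gives the same rank deficiency, which is exactly the failure of the WLP in degree $3$.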
In $\check{\mathbb P}^n$ there is always a quadric through $\frac{n(n+3)}{2}$ points in general position. Then, given any general point
$M\in \check{\mathbb P}^{n+1}$, there is a quadric cone with vertex at $M$ passing through $\frac{n(n+3)}{2}$ fixed points in general position.
We then prove:
\begin{prop}
\label{pr54-2}
In the following cases the ideal $(L_1^N,\ldots, L_{\frac{n(n+3)}{2}}^N)$ fails the WLP in degree $2N-3$:
\begin{itemize}
\item $N= 3$ and $n\ge 2$,
\item $N=4$ and $2\le n\le 4$,
\item $N>4$ and $2\le n\le 3$.
\end{itemize}
\end{prop}
\begin{proof}
Let us consider the syzygy bundle associated to the linear system
$$ \begin{CD}
0@>>> K @>>> \mathscr{O}_{\check{\mathbb P}^{n+1}}^{\frac{n(n+3)}{2}} @>>>
\mathscr{O}_{\check{\mathbb P}^{n+1}}(N) @>>> 0.
\end{CD}$$
Let $L$ be a linear form. Then the inequality ${\frac{n(n+3)}{2}}\mathrm{h}^0(\mathscr{O}_{L}(N-2))\ge \mathrm{h}^0(\mathscr{O}_{L}(2N-2))$
holds if and only if $N$ and $n$ are one of the possibilities stated in the proposition. In all these cases we have
$ \frac{n(n+3)}{2}r_{N-2}\le r_{2N-2}$, and by Lemma \ref{lem-syz}, $\hh^0(K(N-2))=0$.
According to Theorem \ref{th1bis}, the failure of the WLP is equivalent to the existence of a hypersurface
with multiplicity $N-1$ at the points $[L_i^{\vee}]$ and multiplicity $2N-2$ at the moving point $M$.
The lines through $M$ and the $[L_i^{\vee}]$ belong to a quadric cone with equation $\{F=0\}$ (the cone over the quadric
at infinity through the points). Since $F^{N-1}\in \mathrm{H}^0(\mathcal{I}_{M}^{2N-2}(2N-2))$ the hypersurface
$\{F^{N-1}=0\}$ has the desired properties.
\end{proof}
\begin{prop}
\label{pr54-3}
The ideal
$I=(L_1^3, \ldots, L_8^3)$, where
$L_1, \ldots, L_8$ are general linear forms on $\check{\mathbb P}^6$, fails the WLP in degree $3$.
\end{prop}
\begin{proof}
Since $8r_{1}< r_{4}$, Lemma \ref{lem-syz} implies $\hh^0(K(1))=0.$
We have to prove that, on a general hyperplane $L$, the cokernel of
$ \begin{CD}
\HH^0(\mathscr{O}_{L}(1))^{8} @>>>
\HH^0(\mathscr{O}_{L}(4))
\end{CD}$
has dimension strictly greater than $ \hh^0(\mathscr{O}_{L}(4)) -8\,\hh^0(\mathscr{O}_{L}(1))=78.$
The dimension of this cokernel is the dimension of the space of quartics with a quadruple point at
$[L^{\vee}]$ and $8$ double points. We consider, on the hyperplane at infinity, the vector space $V$ of quadrics through the images of the $8$ points
$[L_1^{\vee}], \ldots, [L_8^{\vee}]$. It has dimension $13$. Let $Q_1, \ldots, Q_{13}$ be a basis of this space of quadrics. Then the vector space
$\mathrm{Sym}^2(V)$ of quartics generated by the products $Q_iQ_j$ has dimension $91$ and all these quartics are singular at the $8$ points.
In $\check{\mathbb P}^6$ the independent quartic cones over these quartics belong to the cokernel.
\end{proof}
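The dimension counts in this proof are elementary and can be double-checked mechanically (a small arithmetic sketch of our own, using $\mathrm{h}^0(\mathscr{O}_{\check{\mathbb P}^n}(t))=\binom{n+t}{n}$):

```python
from math import comb

def h0(n, t):
    """h^0(O_{P^n}(t)) = C(n+t, n) for t >= 0."""
    return comb(n + t, n) if t >= 0 else 0

# no syzygy of degree 1 (Lemma lem-syz): 8*r_1 <= r_4 on P^6
ok_syzygy = 8 * h0(6, 1) <= h0(6, 4)       # 56 <= 210
# expected cokernel dimension on a general hyperplane L ~ P^5
expected = h0(5, 4) - 8 * h0(5, 1)         # 126 - 48 = 78
# quadrics of the P^5 "at infinity" through the 8 projected points
dimV = h0(5, 2) - 8                        # 21 - 8 = 13
# dim Sym^2(V) = C(dimV + 1, 2): quartics Q_i Q_j, singular at the 8 points
dimSym2 = comb(dimV + 1, 2)                # 91 > 78
print(ok_syzygy, expected, dimV, dimSym2)  # True 78 13 91
```

Since $91>78$, the cokernel is strictly larger than expected, as claimed.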
%
In the next section, we propose many examples of ideals failing the WLP or the SLP by producing ad-hoc singular hypersurfaces.
\section{Classes of ideals failing the WLP and the SLP}
\label{s4}
\subsection{Monomial ideals coming from singular hypersurfaces}
In their nice paper about osculating spaces of Veronese surfaces, Lanteri and Mallavibarrena remark that the equation of the curve given by three
concurrent lines depends only on six monomials instead of seven. More precisely, let us consider a cubic with a triple point at $(a,b,c)$ passing through
$P_0$, $P_1$ and $P_2$. Its equation is $(bz-cy)(az-cx)(ay-bx)=0$ and it depends only on the monomials
$x^2y,xy^2,x^2z,xz^2,y^2z, yz^2$. So there is a non-zero form in
$$(I_{3}^{\perp})^*=<x^2y,xy^2,x^2z,xz^2,y^2z, yz^2>\simeq\frac{R_3}{<x^3,y^3,z^3, xyz>}$$ that is triple at a general point. In this way they explain
the surprising Togliatti phenomenon (\cite[Theorem 4.1]{LM}, \cite{I2} and \cite{FI}).
We apply this idea in our context. Recall that, in the monomial case, the ideal $I$ being artinian means
that it contains the forms $x_0^d, \ldots, x_n^d$. Let us consider the $(n+1)$ fundamental points
$P_0, P_1, \ldots, P_n$ and let us assume that the number $r$ of monomials generating $I$
is chosen such that $N(r,i,k,d)=0$, for fixed integers $i\ge 0$, $k\ge 1$.
Then, as noted in item \ref{cor_cones} of the Remarks after Theorem \ref{th1bis},
the ideal $I$ fails the SLP at the range $k$ in degree $d+i-k$ if and only if at any point
$M$ there exists a hypersurface of degree $d+i$ with multiplicity $d+i-k+1$ at $M$ given by a form in $(I_{d+i}^{\perp})^*\simeq R_{d+i}/I_{d+i}$.
We have to write this equation with as few monomials as possible: the orthogonal space then becomes bigger, and we cover all the possible choices.
First of all, we describe exhaustively the monomial ideals $(x^4,y^4, z^4, f,g)\subset \C[x,y,z]$ of degree $4$ that fail the WLP.
\begin{thm}
\label{th3}
Up to permutation of the variables, the artinian monomial ideals generated by five quartic forms in $\C[x,y,z]$ that fail the WLP in degree $3$ are
the following:
\begin{itemize}
\item $I_1= (x^4,y^4,z^4, x^3z, x^3y)$,
\item $ I_2=(x^4,y^4,z^4, x^2y^2, xyz^2)$.
\end{itemize}
\end{thm}
\begin{rem*}
Geometrically it is evident that the first ideal $ (x^4,y^4,z^4, x^3z, x^3y)$ fails the WLP. Indeed,
under the Veronese map, a linear form $L$ becomes a rational normal curve of degree four that spans a projective space $\check{\mathbb P}^4$ and, modulo $L$, the restricted monomials $\bar{x}^i\bar{y}^j$ can be interpreted as points of this $\check{\mathbb P}^4$.
Then
the tangent line to the rational quartic curve at the point $[\bar{x}^4]$ contains
the two points $[\bar{x}^3\bar{y}]$ and $[\bar{x}^3\bar{z}]$. This line meets the plane $\check{\mathbb P}(<\bar{x}^4,\bar{y}^4,\bar{z}^4>)$ in one point; it implies that
$$\mathrm{dim}_{\C}<\bar{x}^4,\bar{y}^4,\bar{z}^4,\bar{x}^3\bar{y},\bar{x}^3\bar{z} >\le 4.$$
For the second ideal, it is not evident to see that the line $\check{\mathbb P}(<\bar{x}^2\bar{y}^2,\bar{x}\bar{y}\bar{z}^2>)$ always (for any restriction) meets the plane $\check{\mathbb P}(<\bar{x}^4,\bar{y}^4,\bar{z}^4>)$.
\end{rem*}
\begin{proof}
Let us consider the points $P_0$, $P_1$ and $P_2$ and
the degree $4$ curves with a quadruple point at $(a,b,c)$ passing through these three points. Such a curve is a product
of four lines:
$$ f(x,y,z)= (ay-bx)(cx-az)(cy-bz)(\alpha(ay-bx)+\beta(cx-az))=0.$$
\begin{figure}[h!]
\centering
\includegraphics[height=5cm]{fig-6.pdf}
\caption{Quartic with a quadruple point}
\end{figure}
Expanding $f$ explicitly in the coordinates $(x,y,z)$, we see that the monomials $x^4,y^4,z^4$ are missing
and that twelve monomials appear in its equation.
Since we want only ten monomials, we have to remove two. The following possibilities occur:
\begin{itemize}
\item $\alpha=0$ (or equivalently by permutation of variables $[\beta=0]$ or $[\alpha\neq 0$, $\beta \neq 0$ and $b\alpha =c\beta]$) then the remaining linear system is
$ (x^4,y^4,z^4, y^3z, xy^3).$ It corresponds to the first case i.e. to the ideal $I_1$.
\item $\alpha\neq 0 $ and $\beta \neq 0$ but $c\beta +b\alpha=0$ (or equivalently by permutation of variables $[2b\alpha-c\beta=0]$ or
$[b\alpha-2c\beta=0]$) then the remaining linear system is
$ (x^4,y^4,z^4, x^2yz, y^2z^2).$ It corresponds to the second case, i.e. to the ideal $I_2$.
\end{itemize}
\end{proof}
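Both failures can be confirmed by a direct rank computation. The following SymPy sketch (our own check, not part of the proof) computes the rank of $\times L : A_3\to A_4$; since $I_3=0$ and the generators are monomials, $A_3=R_3$ and the non-generator quartic monomials give a basis of $A_4$:

```python
from itertools import combinations_with_replacement
import sympy as sp

x, y, z = sp.symbols('x y z')
V = [x, y, z]

def monos(d):
    return [sp.prod(c) for c in combinations_with_replacement(V, d)]

def rank_of_mult(gens, L):
    # gens: five degree-4 monomials, so I_3 = 0 and A_3 = R_3 (dim 10),
    # while the 10 remaining degree-4 monomials are a basis of A_4.
    quot = [m for m in monos(4) if m not in gens]
    M = sp.Matrix([[sp.Poly(sp.expand(L*q), x, y, z).coeff_monomial(b)
                    for b in quot] for q in monos(3)]).T
    return M.rank()

L = x + y + z
I1 = [x**4, y**4, z**4, x**3*z, x**3*y]
I2 = [x**4, y**4, z**4, x**2*y**2, x*y*z**2]
r1, r2 = rank_of_mult(I1, L), rank_of_mult(I2, L)
print(r1 < 10, r2 < 10)   # True True : both maps between 10-dim spaces drop rank
```

For $I_1$ the kernel is visible by hand: $x^3\cdot L$ lands entirely in the ideal; for $I_2$ the kernel vector involves all ten cubic monomials.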
\begin{rem*} The quartic curve with multiplicity four at $(a,b,c)$ consists, in the first case, of two lines and a double line that are concurrent; in the second case, of four concurrent lines in harmonic division.
\end{rem*}
We do not apply the same technique to describe exhaustively the monomial ideals $(x^5,y^5, z^5, f,g,h)\subset \C[x,y,z]$ of degree $5$
that fail the WLP, because the computations become too intricate. But we can settle some cases by geometric arguments.
\begin{prop}
\label{th4}
The following monomial ideals
\begin{itemize}
\item $ (x^5,y^5,z^5, x^3y^2, x^3z^2,x^3yz)$,
\item $ (x^5,y^5,z^5, x^4z, x^4y,m)$, where $m$ is any monomial,
\item $ (x^5,y^5,z^5, x^3y^2, x^2y^3,x^2y^2z)$,
\end{itemize}
fail the $\mathrm{WLP}$ in degree $4$.
\end{prop}
\begin{proof}
Under the Veronese map, a linear form $L$ becomes a rational normal curve of degree five that spans a projective space $\check{\mathbb P}^5$ and, modulo $L$, the restricted monomials $\bar{x}^i\bar{y}^j$ can be interpreted as points of this $\check{\mathbb P}^5$. Then
the tangent line to the rational quintic curve at the point $[\bar{x}^5]$ contains
the two points $[\bar{x}^4\bar{y}]$ and $[\bar{x}^4\bar{z}]$. This line meets the plane $\check{\mathbb P}(<\bar{x}^5,\bar{y}^5,\bar{z}^5>)$ in one point; it implies that
$$\mathrm{dim}_{\C}<\bar{x}^5,\bar{y}^5,\bar{z}^5,\bar{x}^4\bar{y},\bar{x}^4\bar{z}, \bar{m} >\le 5.$$
In the same way the osculating plane at $[\bar{x}^5]$, i.e. $\check{\mathbb P}(<\bar{x}^3\bar{y}^2,\bar{x}^3\bar{z}^2,\bar{x}^3\bar{y}\bar{z} >)$, meets the plane $\check{\mathbb P}(<\bar{x}^5,\bar{y}^5,\bar{z}^5>)$ in one point.
\noindent In the last case, the geometric argument is not so evident. Let us set $X=bz-cy$ and
$Y=cx-az$. Then the equation of the product of the five concurrent lines is
$$f(X(x,y,z),Y(x,y,z))=XY(aX+bY)(\alpha X+\beta Y) (\gamma X+\delta Y)$$
$$=a\alpha \gamma X^4Y+(a\beta \gamma +b\alpha \gamma +a\alpha \delta)X^3Y^2+
(b\beta \gamma +b\alpha \delta +a\beta \delta)X^2Y^3 +b\beta \delta XY^4=0.$$
\noindent For any point $M(a,b,c)$ we choose $(\alpha, \beta, \gamma, \delta)$ such that $a\beta \gamma +b\alpha \gamma +a\alpha \delta=0$ and
$b\beta \gamma +b\alpha \delta +a\beta \delta=0$.
Then the equation depends only on $15$ monomials and the remaining monomials are
$(x^5,y^5,z^5, x^3y^2, x^2y^3,x^2y^2z).$
\end{proof}
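The expansion used at the end of the proof can be double-checked numerically. The sketch below (with arbitrary integer values for $a,b,\alpha,\beta,\gamma,\delta$) expands $XY(aX+bY)(\alpha X+\beta Y)(\gamma X+\delta Y)$ and confirms in particular that the coefficient of $X^3Y^2$ is $a\beta\gamma+b\alpha\gamma+a\alpha\delta$.

```python
from itertools import product

a, b = 2, 3
al, be, ga, de = 5, 7, 11, 13  # alpha, beta, gamma, delta (arbitrary values)

def mul(p, q):
    """Multiply two binary forms stored as {(deg_X, deg_Y): coeff}."""
    r = {}
    for (e1, c1), (e2, c2) in product(p.items(), q.items()):
        e = (e1[0] + e2[0], e1[1] + e2[1])
        r[e] = r.get(e, 0) + c1 * c2
    return {e: k for e, k in r.items() if k != 0}

XY = {(1, 1): 1}
f = mul(mul(mul(XY, {(1, 0): a, (0, 1): b}),
            {(1, 0): al, (0, 1): be}),
        {(1, 0): ga, (0, 1): de})
```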
We now describe some monomial ideals in $ \C[x,y,z,t]$, generated in degree $3$, that do not verify the WLP.
\begin{prop}
\label{th3-1}
The monomial ideals
$I=(x^3,y^3,z^3,t^3, f_1,f_2,f_3,f_4,f_5,f_6)$ where the forms $f_i$ are chosen among one of the following sets of monomials:
\begin{itemize}
\item $ \{ x^2y, xy^2,x^2z, x^2t, y^2z, y^2t,z^2t, zt^2, xyz, xyt \},$ (Case (A1))
\item $ \{ x^2y, xy^2, xz^2, y^2z, yz^2, y^2t, zt^2, z^2t \},$ (Case (A2))
\item $ \{ x^2y,xy^2, z^2t,zt^2, xyz,xyt,xzt,yzt\},$ (Case (A3))
\item $ \{ xz^2, yz^2,xyz,xyt, x^2y,xy^2,z^2t,zt^2 \},$ (Case (A4))
\item $ \{ x^2y, xy^2, x^2z,xz^2, x^2t, xt^2, xyz, xzt, xyt,yzt \},$ (Case (B1))
\end{itemize}
fail the WLP in degree $2$.
\end{prop}
\begin{rem*}
We do not know if, up to permutation of variables, the description above is exhaustive or not. The singular cubics that we consider here are unions of concurrent planes, and not
arbitrary cubic cones.
\end{rem*}
\begin{proof}
We look for a surface of degree $3$ with multiplicity $3$ at a general point $M(a,b,c,d)$ that passes through the points
$P_0, P_1, P_2, P_3$
such that its equation depends only on the remaining monomials
in $R_3/I_3$. Such a cubic surface is a cone over a cubic curve. Here, instead of a general cubic cone we consider only three concurrent planes.
Since these $3$ planes have to pass through $P_0, P_1, P_2$ and $P_3$, only the following cases remain after a simple verification:
\begin{figure}[h!]\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[height=2.5cm]{fig1.jpg}
\caption{Case (A1).}\label{pi12^2-pi34}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[height=2.5cm]{pi12pi23pi34.jpg}
\caption{Case (A2).}\label{pi12-pi23-pi34}
\end{subfigure}
\\
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[height=2.5cm]{pi12pipi34.jpg}
\caption{Case (A3).}\label{pi12-pi34-pi}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[height=2.5cm]{pi12pi2pi34.jpg}
\caption{ Case (A4).}\label{pi12-pi2-pi34}
\end{subfigure}
\\
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[height=3cm]{fig-2.jpg}
\caption{Case (B1)}
\end{subfigure}
\end{figure}
\begin{itemize}
\item (A1)
The equation of the cubic is
$(bx-ay)(dz-ct)^2=0.$
\item (A2) The equation of the cubic is
$(bx-ay)(dz-ct)(c x-a z)=0.$
\item (A3) The equation of the cubic is
$(bx-ay)(dz-ct)(bx+ay+udz+uct)=0$ where at any point
$(a,b,c,d)$ the function $u(a,b,c,d) $ verifies $ab+u(a,b,c,d)cd=0$.
\item (A4) The equation of the cubic is
$(bx-ay)(dz-ct)(bdx+ady-2abt)=0$.
\item (B1) The equation of the cubic is
$(cy-bz)(dz-ct)(dy-bt)=0.$
\end{itemize}
\end{proof}
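Each case can be checked by direct expansion. As an illustration, the sketch below (with arbitrary values for the general point $M(a,b,c,d)$) verifies case (A1): the cubic $(bx-ay)(dz-ct)^2$ involves none of the ten monomials of the first set and none of the pure cubes, so its equation only uses monomials of $R_3/I_3$, whichever six generators $f_i$ are chosen.

```python
from itertools import product

a, b, c, d = 2, 3, 5, 7  # an arbitrary general point M(a, b, c, d)

def mul(p, q):
    """Multiply two polynomials stored as {(deg_x, deg_y, deg_z, deg_t): coeff}."""
    r = {}
    for (e1, c1), (e2, c2) in product(p.items(), q.items()):
        e = tuple(u + v for u, v in zip(e1, e2))
        r[e] = r.get(e, 0) + c1 * c2
    return {e: k for e, k in r.items() if k != 0}

p1 = {(1, 0, 0, 0): b, (0, 1, 0, 0): -a}   # bx - ay
p2 = {(0, 0, 1, 0): d, (0, 0, 0, 1): -c}   # dz - ct
cubic = mul(p1, mul(p2, p2))               # the cubic of case (A1)

# the ten monomials of case (A1), as exponent tuples, plus the pure cubes
A1 = {(2, 1, 0, 0), (1, 2, 0, 0), (2, 0, 1, 0), (2, 0, 0, 1), (0, 2, 1, 0),
      (0, 2, 0, 1), (0, 0, 2, 1), (0, 0, 1, 2), (1, 1, 1, 0), (1, 1, 0, 1)}
cubes = {(3, 0, 0, 0), (0, 3, 0, 0), (0, 0, 3, 0), (0, 0, 0, 3)}
```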
If we want $I_3$ to be of dimension $r< 10$ (for instance $r=8$), then to get the failure of the WLP we need
$10-r+1$ independent cubics with a triple point.
Let us recover with our method two linear systems of eight cubic forms (the complete classification is already done in \cite[Theorem 4.10]{MMO})
that fail the WLP in degree $2$.
\begin{prop}
The following monomial ideals
\begin{itemize}
\item $I= (x^3,y^3,z^3,t^3, x^2y, xy^2, zt^2, z^2t),$
\item $J= (x^3,y^3,z^3,t^3, xyz, xyt, xzt, yzt)$
\end{itemize}
fail the $\mathrm{WLP}$ in degree $2$.
\end{prop}
\begin{rem*}
The ideals $I$ and $J$ correspond respectively to the cases $(3)$ and $(1)$ in \cite[Theorem 4.10]{MMO}.
\end{rem*}
\begin{proof}
Let us consider the following three forms defining singular cubics passing through the fundamental points and a general point $(a,b,c,d)$:
$$ (ct-dz)(at-dx)(ay-bx)=0, (ct-dz)^2(ay-bx)=0, (ct-dz)(ay-bx)^2=0.$$
They are particular cases of type $(A)$ in the proof of Proposition \ref{th3-1}. They are linearly independent and can be written with twelve monomials.
Then it remains only $8$ forms for $I_3$:
$$ I=(x^3,y^3,z^3,t^3, x^2y, xy^2, zt^2, z^2t).$$
Let us consider the following three forms defining singular cubics passing through the fundamental points and the general point $(a,b,c,d)$:
$$ (bz-cy)(az-cx)(ay-bx)=0, (bx-ay)(at-dx)(dy-bt)=0, (az-cx)(dx-at)(dz-ct)=0.$$
They are cases of type $(B1)$ in the proof of Proposition \ref{th3-1}. They are linearly independent and can be written with twelve monomials:
$$ (x^2y, x^2z, xy^2, xz^2, y^2z, yz^2, t^2y, t^2z, ty^2, tz^2,t^2x, x^2t).$$ It remains only
$$J=(x^3,y^3,z^3,t^3, xyz, xyt, xzt, yzt).$$
\end{proof}
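As a sanity check, the supports of the three $(B1)$-type cubics used for $J$ can be computed by direct expansion (with arbitrary integer values for the general point): in each product the triple-product term cancels, and the union of the three supports is exactly the twelve monomials listed above.

```python
from itertools import product

a, b, c, d = 2, 3, 5, 7  # an arbitrary general point (a, b, c, d)

def mul(p, q):
    r = {}
    for (e1, c1), (e2, c2) in product(p.items(), q.items()):
        e = tuple(u + v for u, v in zip(e1, e2))
        r[e] = r.get(e, 0) + c1 * c2
    return {e: k for e, k in r.items() if k != 0}

def lin(cx=0, cy=0, cz=0, ct=0):
    """A linear form cx*x + cy*y + cz*z + ct*t as an exponent-tuple dict."""
    form = {(1, 0, 0, 0): cx, (0, 1, 0, 0): cy,
            (0, 0, 1, 0): cz, (0, 0, 0, 1): ct}
    return {e: k for e, k in form.items() if k != 0}

c1 = mul(mul(lin(cz=b, cy=-c), lin(cz=a, cx=-c)), lin(cy=a, cx=-b))  # (bz-cy)(az-cx)(ay-bx)
c2 = mul(mul(lin(cx=b, cy=-a), lin(ct=a, cx=-d)), lin(cy=d, ct=-b))  # (bx-ay)(at-dx)(dy-bt)
c3 = mul(mul(lin(cz=a, cx=-c), lin(cx=d, ct=-a)), lin(cz=d, ct=-c))  # (az-cx)(dx-at)(dz-ct)

support = set(c1) | set(c2) | set(c3)
# x^2y, x^2z, xy^2, xz^2, y^2z, yz^2, t^2y, t^2z, ty^2, tz^2, t^2x, x^2t
twelve = {(2, 1, 0, 0), (2, 0, 1, 0), (1, 2, 0, 0), (1, 0, 2, 0),
          (0, 2, 1, 0), (0, 1, 2, 0), (0, 1, 0, 2), (0, 0, 1, 2),
          (0, 2, 0, 1), (0, 0, 2, 1), (1, 0, 0, 2), (2, 0, 0, 1)}
```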
Of course the same argument (concurrent planes or hyperplanes) can be used in degree or dimension greater than $3$. For instance,
let us give a set of monomial ideals in $ \C[x,y,z,t]$, generated in degree $4$, that fail the WLP.
\begin{prop}
\label{d4m}
Let $f_1, \ldots, f_{11}$ be eleven monomials chosen among
$$x^3y , x^3z, x^3t, xy^3, xz^3, xt^3, y^3z, y^3t, yz^3, yt^3, z^3t, zt^3, x^2y^2, z^2t^2, y^2z^2, x^2t^2.$$
Then the ideal $I=(x^4,y^4,z^4, t^4,f_1, \ldots, f_{11})$ fails the WLP in degree $3$.
\end{prop}
\begin{proof}
At any point $ M=(a,b,c,d)$ an equation of a surface of degree $4$ with multiplicity $4$ at $M$ that passes through the points
$P_0, P_1, P_2,P_3$ is given by
$$ f(x,y,z,t)=(ct-dz)(at-dx)(ay-bx)(bz-cy)=0.$$
\end{proof}
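The proof can be completed by a direct expansion: each factor of $f$ omits two of the four variables, so no variable occurs in $f$ with exponent $\ge 3$, and the mixed squares $x^2y^2, z^2t^2, y^2z^2, x^2t^2$ are excluded as well. The sketch below (for an arbitrary point $M$) checks that the support of $f$ avoids the pure fourth powers and all sixteen monomials of the proposition.

```python
from itertools import product

a, b, c, d = 2, 3, 5, 7  # an arbitrary point M(a, b, c, d)

def mul(p, q):
    r = {}
    for (e1, c1), (e2, c2) in product(p.items(), q.items()):
        e = tuple(u + v for u, v in zip(e1, e2))
        r[e] = r.get(e, 0) + c1 * c2
    return {e: k for e, k in r.items() if k != 0}

lines = [
    {(0, 0, 0, 1): c, (0, 0, 1, 0): -d},   # ct - dz
    {(0, 0, 0, 1): a, (1, 0, 0, 0): -d},   # at - dx
    {(0, 1, 0, 0): a, (1, 0, 0, 0): -b},   # ay - bx
    {(0, 0, 1, 0): b, (0, 1, 0, 0): -c},   # bz - cy
]
f = lines[0]
for l in lines[1:]:
    f = mul(f, l)

# the sixteen monomials of the proposition, as exponent tuples
listed = {(3, 1, 0, 0), (3, 0, 1, 0), (3, 0, 0, 1), (1, 3, 0, 0),
          (1, 0, 3, 0), (1, 0, 0, 3), (0, 3, 1, 0), (0, 3, 0, 1),
          (0, 1, 3, 0), (0, 1, 0, 3), (0, 0, 3, 1), (0, 0, 1, 3),
          (2, 2, 0, 0), (0, 0, 2, 2), (0, 2, 2, 0), (2, 0, 0, 2)}
powers = {(4, 0, 0, 0), (0, 4, 0, 0), (0, 0, 4, 0), (0, 0, 0, 4)}
```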
We conclude this section with an example that fails the SLP at the range $2$.
\begin{prop}
\label{d4mslp}
The ideal $I=(x^4,y^4,z^4, xy^3,xz^3,x^2yz,y^2z^2, y^3z, yz^3)\subset \C[x,y,z] $ fails the SLP at the range $2$ in degree $2$.
\end{prop}
\begin{proof}
Let $P_0$, $P_1$, $P_2$ and $M(a,b,c)$ be four points. We consider the quartic curve consisting of the union of
the four lines $(MP_0), (MP_1), (MP_2)$ and $(P_1P_2)$.
It is a quartic passing through $P_0,P_1, P_2$ with a triple point at $M(a,b,c)$. It depends on the six monomials
$ x^3y,x^3z,x^2y^2, xy^2z, x^2z^2, xyz^2.$
Then it remains $9=15-6$ monomials
$$I_4=<x^4,y^4,z^4, xy^3,xz^3,x^2yz,y^2z^2, y^3z, yz^3>.$$ The associated syzygy bundle $K$
verifies $h^0(K\otimes \mathscr{O}_{L^2})\neq 0$ for a general linear form $L$. This proves that
$I=(x^4,y^4,z^4, xy^3,xz^3,x^2yz,y^2z^2, y^3z, yz^3)$ fails the SLP at the range $2$ in degree $2$.
\end{proof}
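The six monomials can be recovered by expanding the product of the three lines through $M$ with the line $x=0$ joining $P_1$ and $P_2$ (with arbitrary values for $M(a,b,c)$):

```python
from itertools import product

a, b, c = 2, 3, 5  # an arbitrary point M(a, b, c)

def mul(p, q):
    r = {}
    for (e1, c1), (e2, c2) in product(p.items(), q.items()):
        e = tuple(u + v for u, v in zip(e1, e2))
        r[e] = r.get(e, 0) + c1 * c2
    return {e: k for e, k in r.items() if k != 0}

m0 = {(0, 1, 0): c, (0, 0, 1): -b}   # (M P0): cy - bz
m1 = {(1, 0, 0): c, (0, 0, 1): -a}   # (M P1): cx - az
m2 = {(1, 0, 0): b, (0, 1, 0): -a}   # (M P2): bx - ay
l  = {(1, 0, 0): 1}                  # the line x = 0 through P1 and P2

quartic = mul(mul(m0, m1), mul(m2, l))

# x^3y, x^3z, x^2y^2, xy^2z, x^2z^2, xyz^2
expected = {(3, 1, 0), (3, 0, 1), (2, 2, 0), (1, 2, 1), (2, 0, 2), (1, 1, 2)}
```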
\subsection{Non monomial examples coming from singular hypersurfaces}
Let us now study the interesting case $I_d^{\perp}=\mathrm{H}^0(\mathcal{I}_Z(d))^{*}$ where $Z$ is a finite set of distinct points in $\check{\mathbb P}^{2\vee}$ of length $|Z|$ and
$\mathcal{I}_Z$ its ideal sheaf.
The set $Z$ corresponds by projective duality to a set of $|Z|$ distinct lines in $\check{\mathbb P}^2$ defined by linear forms $l_1,\ldots, l_{|Z|}$.
We will now consider the ideal $I\subset R$ generated by $(l_1^d, \ldots, l_{|Z|}^d)$.
We have $|Z|=\mathrm{dim}_{\C}I_d$.
\begin{prop}
\label{nmslp}
Let $k\ge 1$, $r=r_d-r_{d-k}$, and let $Z=\{l_1^{\vee},\ldots, l_{r}^{\vee}\}$ be a finite set of $r$ distinct points in $\check{\mathbb P}^{2\vee}$
where the $l_i$ are linear forms on $\check{\mathbb P}^2$. Assume that there exists a subset $Z_1\subset Z$,
of length $r-d+k-1$,
contained in a curve $\Gamma_{1}$ of degree $k-1$. Then
the ideal $I=(l_1^d, \ldots, l_{r}^d)$ fails the SLP at the range $k$ in degree $d-k$.
\end{prop}
\begin{proof}
The union of $\Gamma_{1}$ and of $(d-k+1)$ lines concurrent at a point $P$ and passing through the
remaining points $Z\setminus Z_1$ is
a nonzero section of
$\mathrm{H}^0(\mathcal{I}_Z\otimes \mathcal{I}_P^{d-k+1}(d))$. By Theorem \ref{th1bis} this proves that $I$ fails the SLP at the range $k$ in degree $d-k$.
\end{proof}
With this method it is always possible to find systems of any degree that fail the SLP by
exhibiting a curve of degree $d$ with multiplicity $d-k+1$ at a general point $P$.
But one can find some sets of points for which these special curves do not split as products of lines
(see Proposition \ref{B3} in the next section).
\section{SLP at the range $2$ and line arrangements on $\check{\mathbb P}^2$}
\label{s5}
A line arrangement is a collection of distinct lines in the projective plane. Arrangements of lines, or more generally arrangements of hyperplanes,
form a famous and classical topic that has been studied by many authors for a very long time (see \cite{Cartier} or \cite{OT} for a good introduction).
Let us denote by
$f=0$ the equation of the union of lines of the considered arrangement. Another classical object associated to the arrangement is the
vector bundle $\mathcal{D}_0$ defined as the kernel of the jacobian map:
$$ \begin{CD}
0 @>>> \mathcal{D}_0 @>>> \mathscr{O}_{\check{\mathbb P}^2}^{3} @>(\partial f)>> \mathscr{O}_{\check{\mathbb P}^2}(d-1).
\end{CD}
$$
The bundle $\mathcal{D}_0 $ is called the \textit{derivation bundle} (sometimes the logarithmic bundle) of the line arrangement (see \cite{S} and \cite{Sc} for an introduction to
derivation bundles).
One can think about the lines of the arrangement in $\check{\mathbb P}^{2}$ as a set of distinct points $Z$ in $\check{\mathbb P}^{2\vee}$. Then we will
denote by $\mathcal{D}_0(Z)$ the associated derivation bundle.
The arrangement of lines is said to be {\it free with exponents} $(a,b)$ when its derivation bundle splits on $\check{\mathbb P}^2$ as a sum of two line bundles, more precisely when
$$ \mathcal{D}_0(Z)=\mathscr{O}_{\check{\mathbb P}^2}(-a)\oplus \mathscr{O}_{\check{\mathbb P}^2}(-b).$$
The splitting of $\mathcal{D}_0(Z)$ over a line $l\subset \check{\mathbb P}^2$ is related to the existence of curves (of a given degree $a+1$) passing through $Z$ that are multiple (with multiplicity $a$) at $l^{\vee}\in \check{\mathbb P}^{2\vee}$.
More precisely,
\begin{lem}(\cite[Proposition 2.1]{V2})
\label{linksd} Let $Z\subset \check{\mathbb P}^{2\vee}$ be a set of $a+b+1$ distinct points with $1\le a\le b$ and $l$ be
a general line in $\check{\mathbb P}^{2}$. Then the following conditions are equivalent:
\begin{enumerate}
\item $\mathcal{D}_0(Z)\otimes
\mathscr{O}_{l}=\mathscr{O}_{l}(-a)\oplus \mathscr{O}_{l}(-b)$.
\item $\mathrm{h}^0((\mathcal{J}_{Z}\otimes \mathcal{J}_{l^{\vee}}^{a})(a+1))\neq
0$ and $\mathrm{h}^0((\mathcal{J}_{Z}\otimes \mathcal{J}_{l^{\vee}}^{a-1})(a))=
0.$
\end{enumerate}
\end{lem}
In our context it implies the following characterization of instability. We recall that
a rank two vector bundle $E$ on $\check{\mathbb P}^n$, $n\ge 2$, is \textit{unstable} if and only if
its splitting $E_l=\mathscr{O}_l(a)\oplus \mathscr{O}_l(b)$ on a general line $l$ verifies $\mid a-b\mid \ge 2$. This characterization
is a consequence of the Grauert-M\"ulich theorem, see \cite{OSS}.
\begin{prop}
\label{th5}
Let $I\subset R=\C[x,y,z]$ be an artinian ideal generated by $2d+1$ polynomials $l_1^d, \ldots, l_{2d+1}^d$ where the $l_i$ are distinct linear forms in $\check{\mathbb P}^2$.
Let $Z=\{l_1^{\vee},\ldots, l_{2d+1}^{\vee}\}$ be the corresponding set of points in $\check{\mathbb P}^{2\vee}$. Then the following conditions are equivalent:
\begin{enumerate}
\item The ideal $I$ fails the $\mathrm{SLP}$ at the range $2$ in degree $d-2$.
\item The derivation bundle $\mathcal{D}_0(Z)$ is unstable.
\end{enumerate}
\end{prop}
\begin{proof}
The failure of the SLP at the range $2$ in degree $d-2$ is equivalent to the existence at a general point $l^{\vee}$ of a curve of degree $d$ with multiplicity
$d-1$ at $l^{\vee}$ belonging to $I_d^{\perp}=\mathrm{H}^0(\mathcal{I}_Z(d))$. By Lemma \ref{linksd} it is equivalent to the following splitting
$$\mathcal{D}_0(Z)\otimes \mathscr{O}_{l}=\mathscr{O}_{l}(-d+s)\oplus \mathscr{O}_{l}(-d-s) \,\, \mathrm{with}\,\, s>0,$$
on a general line $l$.
In other words the failure of the SLP is equivalent to having an unbalanced decomposition,
and according to the Grauert-M\"ulich theorem this is equivalent to instability.
\end{proof}
Let us now give an ideal generated by non-monomial quartic forms that fails the SLP at the range $2$. It comes from a line arrangement, called the B3 arrangement (see \cite[Pages 13, 25 and 287]{OT}), such that
the associated derivation bundle is unstable (in fact even decomposed). The existence of a quartic curve with a triple point at a general point is the key argument. But
contrary to the previous examples, this quartic is irreducible and consequently not obtainable by Proposition \ref{nmslp}.
\begin{prop}
\label{B3}
The ideal $$I=(x^4,y^4,z^4,(x+y)^4,(x-y)^4,(x+z)^4,(x-z)^4, (y+z)^4,(y-z)^4)\subset \C[x,y,z] $$
fails the SLP at the range $2$ and degree $2$.
\end{prop}
\begin{proof}
Consider the set $Z$ of the nine dual points of the linear forms $x,y,z,x+y,x-y,x+z,x-z, y+z,y-z$. Let $I$ be the artinian ideal
$(x^4,y^4,z^4,(x+y)^4,(x-y)^4,(x+z)^4,(x-z)^4, (y+z)^4,(y-z)^4)$ and $K$ its syzygy bundle.
The derivation bundle of the arrangement is $\mathcal{D}_0(Z)=\mathscr{O}_{\check{\mathbb P}^2}(-3)\oplus \mathscr{O}_{\check{\mathbb P}^2}(-5)$ (it is free with exponents $(3,5)$; see \cite{OT} for a proof).
Then, according to Lemma \ref{linksd}, there is at any point $P$ a degree $4$ curve with multiplicity $3$ at $P$ passing through $Z$.
In other words, by Theorem \ref{th1bis}, $I$ fails the SLP at the range $2$ and degree $2$.
\end{proof}
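The key fact, namely the existence at an arbitrary point $P$ of a quartic through $Z$ with a triple point at $P$, can also be verified by exact linear algebra: the nine point conditions together with the six vanishing conditions on the second-order partials at $P$ give a $15\times 15$ linear system on the coefficients of the quartic, whose rank must drop. The following self-contained sketch (the chosen $P$ is arbitrary) checks this over the rationals.

```python
from fractions import Fraction

# dual points of the B3 forms x, y, z, x+y, x-y, x+z, x-z, y+z, y-z
Z = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, -1, 0),
     (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]
P = (3, 5, 7)  # an arbitrary point at which we impose a triple point

# the 15 monomials of degree 4 in x, y, z, as exponent triples
mons = [(i, j, 4 - i - j) for i in range(5) for j in range(5 - i)]

def ev(e, p):
    return p[0] ** e[0] * p[1] ** e[1] * p[2] ** e[2]

def d2(e, u, v):
    """Second partial derivative of the monomial e w.r.t. variables u, v, at P."""
    e = list(e)
    coef = 1
    for w in (u, v):
        if e[w] == 0:
            return 0
        coef *= e[w]
        e[w] -= 1
    return coef * ev(tuple(e), P)

rows = [[ev(m, p) for m in mons] for p in Z]             # 9 point conditions
rows += [[d2(m, u, v) for m in mons]
         for u in range(3) for v in range(u, 3)]         # 6 triple-point conditions

def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

rk = rank(rows)  # rank drop <=> a quartic with a triple point at P exists
```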
\begin{figure}[h!]
\includegraphics[height=6.5cm]{dualB3.pdf}
\caption{Dual set of points of the $B3$ arrangement}
\end{figure}
More generally, non-balanced free arrangements lead to ideals that fail the SLP.
\begin{prop}
Let $\mathcal{A}=\{ l_1, \ldots, l_{a+b+1}\}$ be a line arrangement that is free with exponents $(a,b)$ such that $a\le b $, $b-a\ge 2$ and $a+b$ even.
The ideal $I=( l_1^{\frac{a+b}{2}}, \ldots, l_{a+b+1}^{\frac{a+b}{2}})$
fails the SLP at the range $2$ and degree $\frac{a+b}{2}-2$.
\end{prop}
\begin{rem*}
If $a+b$ is odd we can add to $Z$ one point $P$ in general position with respect to $Z$, and we can prove in the same way that
$I=( l_1^{\frac{a+b+1}{2}}, \ldots, l_{a+b+1}^{\frac{a+b+1}{2}}, (P^{\vee})^{\frac{a+b+1}{2}})$
fails the SLP at the range $2$ and degree $\frac{a+b+1}{2}-2$.
\end{rem*}
\begin{proof}
Let us denote by
$Z=\{ l_1^{\vee}, \ldots, l_{a+b+1}^{\vee}\}$ the dual set of points of $\mathcal{A}$.
Since there exists at any general point $l^{\vee}$ a curve of degree $a+1$ with multiplicity $a$ at $l^{\vee}$ passing through
${Z}$, Lemma \ref{linksd} implies that $\mathcal{D}_0(Z)$ is unstable, and Proposition \ref{th5} implies that $I$
fails the SLP at the range $2$ and degree $\frac{a+b}{2}-2$.
\end{proof}
\subsection{SLP at the range $2$ and Terao's conjecture}
One of the main conjectures about hyperplane arrangements
(still open also for line arrangements) is Terao's conjecture.
It concerns free arrangements.
The conjecture says that freeness depends only on the combinatorics of the arrangement. Let us recall that the combinatorics of the arrangement $\mathcal{A}=\{l_1,\ldots, l_n\}$
is determined by its incidence graph. Its vertices are
the lines $l_k$ and the points $P_{i,j}=l_i\cap l_j$; its edges join $l_k$ to $P_{i,j}$ when $P_{i,j}\in l_k$.
We refer again to \cite{OT} for a good introduction to the subject. Terao's conjecture is stated not only for line arrangements but more generally for hyperplane arrangements.
\begin{conj*}[Terao]
The freeness of a hyperplane arrangement depends only on its combinatorics.
\end{conj*}
In other words an arrangement with the same combinatorics as a free arrangement is free, too.
Let us consider a free arrangement $\mathcal{A}_0=\{h_1,\ldots, h_n\}$ with exponents $(a,b)$ ($a\le b$) and let us denote by $Z_0$ its dual set of points.
We assume that Terao's conjecture is not true, i.e. that there exists
a non free arrangement $\mathcal{A}=\{l_1,\ldots, l_n\}$ with the same combinatorics as $\mathcal{A}_0$.
Let us add $b-a$ points $\{M_1,\ldots, M_{b-a}\}$ in general position to $Z_0$
in order to form $\Gamma_0$, and to the dual set $Z$ of $\mathcal{A}$ to form $\Gamma$.
Then the length of both sets of points is $2b+1$. On the general line $l$ we have
$$\mathcal{D}_0(Z_0)\otimes \mathscr{O}_l=\mathscr{O}_l(-a)\oplus \mathscr{O}_l(-b),$$
whereas, since $Z$ is not free, we have a less balanced decomposition for $\mathcal{D}_0(Z)$ (this affirmation is proved in \cite{EF}):
$$\mathcal{D}_0(Z)\otimes \mathscr{O}_l=\mathscr{O}_l(s-a)\oplus \mathscr{O}_l(-s-b),\,\, s\ge 1.$$
It implies that
$$\mathrm{h}^0(\mathcal{I}_Z\otimes \mathcal{I}_{l^{\vee}}^{a-1}(a))\neq 0, \mathrm{h}^0(\mathcal{I}_{Z_0}\otimes \mathcal{I}_{l^{\vee}}^{a-1}(a))= 0\,\,
\mathrm{and}\,\,\mathrm{h}^0(\mathcal{I}_{Z_0}\otimes \mathcal{I}_{l^{\vee}}^{a}(a+1))\neq 0.$$
Then adding $b-a$ lines passing through $l^{\vee}$ and the $b-a$ added points we obtain
$\mathrm{h}^0(\mathcal{I}_{\Gamma}\otimes \mathcal{I}_{l^{\vee}}^{b-1}(b))\neq 0$, $\mathrm{h}^0(\mathcal{I}_{\Gamma_0}\otimes \mathcal{I}_{l^{\vee}}^{b-1}(b))= 0$ and
$\mathrm{h}^0(\mathcal{I}_{\Gamma_0}\otimes \mathcal{I}_{l^{\vee}}^{b}(b+1))\neq 0.$
The bundle
$\mathcal{D}_0(\Gamma_0)$ is balanced with splitting
$\mathscr{O}_l(-b)\oplus \mathscr{O}_l(-b)$ and $$\mathcal{D}_0(\Gamma)\otimes \mathscr{O}_l=\mathscr{O}_l(1-b)\oplus \mathscr{O}_l(-1-b).$$
Then $\mathcal{D}_0(\Gamma_0)$ is semistable and $\mathcal{D}_0(\Gamma)$ is unstable.
In other words the ideal
$$ ( l_1^b, \ldots, l_{a+b+1}^b, (M_1^{\vee})^b, \ldots, (M_{b-a}^{\vee})^b)$$ fails the SLP at the range $2$ and degree $b-2$, whereas
$$ ( h_1^b, \ldots, h_{a+b+1}^b, (M_1^{\vee})^b, \ldots, (M_{b-a}^{\vee})^b)$$
has the SLP at the range $2$ and degree $b-2$.
\noindent The following conjecture, written in terms of the SLP, is equivalent to Terao's conjecture on $\check{\mathbb P}^2$.
\begin{conj*}
Let $Z_0=\{d_1^{\vee}, \ldots, d_{2b+1}^{\vee}\}$ be a set of points of length $2b+1$ in $\check{\mathbb P}^{2\vee}$ such that the ideal
$ I= (d_1^b, \ldots, d_{2b+1}^b)$ has the SLP at the range $2$ and degree $b-2$. Assume that $Z=\{l_1^{\vee}, \ldots, l_{2b+1}^{\vee}\}$ has the same combinatorics as $Z_0$. Then
$ J= (l_1^b, \ldots, l_{2b+1}^b)$ has the SLP at the range $2$ and degree $b-2$.
\end{conj*}
\section{Introduction}
\label{sec:introduction}
The word \emph{scheduling} refers to the allocation of resources between
different competing tasks. This generic, abstract definition reflects the
pervasiveness of the scheduling concern across disciplinary fields.
A concrete class of scheduling problems is obtained by specifying a type of
system and tasks, and the goals of the scheduling action.
In this paper, we outline a general methodology to tackle the scheduling
problem. Our approach exploits \emph{control theory} to formulate the
scheduling problem and to solve it. The control-theoretical paradigm represents
the interaction between two distinct parts of a system: the \emph{plant} and
the \emph{controller}. The plant represents the part of the system whose
dynamics is not modifiable directly, and that must be put under control.
The controller, on the other hand, is a component that provides suitable input
to the plant with the goal of influencing its dynamics towards meeting some
requirements. The controller chooses its action according to the output of the
plant, hence the denomination \emph{feedback control}.
The idea of using control theory to solve scheduling problems is not new.
Indeed, the research area of \emph{feedback scheduling} is based on these
premises \cite{LuEtAl-2002a,HellersteinEtAl-2005a}.
The novelty of our approach consists in how the control-theoretical paradigm
is applied to the scheduling problem, and more precisely which parts of the
system are modeled as the plant and as the controller, respectively.
The most common approach to feedback scheduling supplements an existing
scheduler with a control-theoretical model: the plant is the ``basic''
scheduler itself, and the controller tunes its dynamics over time according
to the evolution of the rest of the system. We suggest a different partitioning,
where the controller \emph{is} the scheduler and the plant is a very abstract
model of the pool of tasks and, in some sense, the resources they run on.
Our stance has a couple of significant advantages over the traditional
approaches. First, it allows the effective re-use of an extensive amount of
powerful results from classical control theory to smoothly design scheduling
algorithms. Second, it is remarkably flexible and can easily accommodate some
complex and peculiar scheduling requirements, such as robustness towards
disturbances, dynamic adjustment of performance, and a quantitative notion of
convergence rates.
The approach is general and applies to a large class of scheduling problems.
It is naturally applicable to scheduling the CPU in \emph{real-time
systems} \cite{ButtazzoEtAl-2004a,LiuLayland-1973a}, which are characterized
by a quantitative treating of time. As it is common with feedback scheduling,
it focuses on \emph{soft} real-time, where the failure to respect a deadline
does not result in a global failure of the system, and average performance is
what matters.
The heterogeneous scope of the scheduling problem and the sought generality of the present approach make, at times, the presentation of the technical details necessarily abstract: it is impossible to formalize each and every (domain-specific) aspect of the scheduling problem (e.g., deadlines, priorities, granularities, etc.) in a unique model that is practically useful.
Additionally, different formalizations are often possible, and choosing the best one depends largely on application-specific details, such as whether one is dealing with a batch or a hard real-time system.
The overall goal of the present paper is high-level: outlining the framework proposed, formalizing its basic traits, and demonstrating its flexibility with a few examples.
Focusing the framework on specialized classes of scheduling problems and comparatively assessing its performance belongs to future work.
The rest of the paper presents our approach to feedback scheduling and is
organized as follows. Section \ref{sec:motivations} presents some additional
motivation, with a more direct comparison to the literature which is most
closely related to this paper. Section \ref{sec:methodology} introduces our
methodology for the scheduling problem; it focuses on presenting the conceptual
contribution in a general setting. Section \ref{sec:experimental} discusses an
experimental validation of the approach, where the general methodology is
instantiated to solve a few specific concrete problems in a real-time
scheduling benchmark. Finally, Section \ref{sec:conclusions} draws some
conclusions and outlines future work.
\section{Motivation and related work}
\label{sec:motivations}
Hellerstein et al.'s book \cite{HellersteinEtAl-2004a} is a comprehensive
review of the applications of control theory to computing-system
problems such as bandwidth allocation and unpredictable data traffic management.
In general, control theory is applied to make computing systems adaptive, more
robust, and stable. Adaptability, in particular, characterizes the response
required in applications whose operating conditions change rapidly and
unpredictably.
Lu et al.~\cite{LuEtAl-2001a,LuEtAl-2006a} present contributions in this
context, for the regulation of the service levels of a web server.
The variable to be controlled is the delay between the arrival time of a request
and the time it starts being processed. The goal is to keep this delay to within
some desired range; the range depends on the class of each request.
An interesting point of these works is the distinction between
the transient and steady state performances, in the presence of variable
traffic. This feature motivates a feedback-control approach to
many computing-system performance problems.
Scheduling is certainly one of these problems where the transient-to-steady-state
distinction features strongly. Indeed, many \emph{ad hoc} scheduling approaches
solve essentially the same problem in different operating conditions.
This is one of the main reasons why \emph{feedback scheduling} has received
much attention in recent years (see Xia and Sun \cite{XiaSun-2006a} for a
concise review of the topic). As we argued already in the introduction, the
standard approach in feedback scheduling consists in ``closing some control
loop around an existing scheduler'' to adjust its parameters to the varying
load conditions. This may yield performance improvements, but it falls short
of fully exploiting the rich toolset of control theory.
For example, in Abeni et al.~\cite{AbeniEtAl-2002a}, the controller
adjusts the reservation time (i.e., the time the scheduler assigns to
each task) with the purpose of keeping the system utilization below a
specified upper bound. The plant is instead a switching
system with two different states, according to whether the system can satisfy
the total amount of CPU requests or not.
Some tests with a real-time Linux kernel show that the adaptation
mechanism proposed is useful to improve quality-of-service measurements. Continuing
in the same line of work, Palopoli and Abeni~\cite{PalopoliAbeni-2009a} combine
a reservation-based scheduler and a feedback-based adaptation mechanism to
identify the best parameter set for a given workload.
Block et al.~pursue a similar approach \cite{BlockEtAl-2008} where they integrate feedback models with optimization techniques.
In Lawrence et al.~\cite{LawrenceEtAl-2001a}, the controller adjusts the
reservation time to within an upper bound given by the most frequently
activated task. The model of the plant is a continuous-time system whose
variables record the queuing time of tasks.
The effectiveness of the method proposed is validated through simulations.
Lu et al.\ in~\cite{LuEtAl-2002a} consider some basic scheduling policies
(both open-loop and closed-loop) and design a controller that prevents system
overloading. This goal is achieved by leaving some tasks out of the queue when
the system workload is too high.
All these approaches target the same problem: assigning CPU time to a pool of
tasks to meet some goals. The devised algorithms are usually extremely
efficient, but their scope of applicability is often limited to a fairly
specific domain (e.g., periodic processes with harmonic frequencies).
Moreover, in all the cited approaches the controller modifies the behavior of
a ``basic'' scheduling algorithm; indeed, the model of the scheduler is often
combined with (some aspects) of the processor model, even if their functions
are in principle clearly distinct.
We believe that this lack of separation of concerns is the result of the
close adherence to a specific scheduling problem domain, and we claim that
enforcing a stricter separation in the model can result in some distinctive
advantage.
The rest of the paper presents an approach where the scheduler is solely
responsible for selecting which tasks have to run and their desired execution time.
The scheduler is then built as the controller that meets some requirements for
such a selection. Notice that the homogeneous nature of the controller (i.e.,
the scheduler) and the plant (i.e., the tasks' execution model) is peculiar to
computer systems, and makes a unitary design of the overall system easier.
The approach itself can be, we believe, more general and flexible than the
aforementioned others.
\section{The methodology}
\label{sec:methodology}
This section outlines a methodology to tackle the scheduling problem.
For clarity, it is phrased in terms of allocating CPU time to a set of tasks in
a mono-processor operating system. It should be clear, however, that the
solution refers to a more abstract class of problems and is relatively general.
In the rest of the paper, we assume familiarity with the basic
control-theoretical terminology and notation (see e.g.,
\cite{HellersteinEtAl-2004a}).
The basic modeling assumption completely separates the processor and the
scheduler: the scheduler chooses the order in which the tasks are executed and
their execution times, while the processor actually runs them.
This separation lets us focus more precisely on the characteristics of each
component and understand how to change each model according to the requirements
we have to meet.
Figure \ref{fig:framework} shows the ``big picture'' of how the scheduling
problem is cast as a control problem.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{framework.png}
\caption{The general framework proposed.}
\label{fig:framework}
\end{figure}
The processor is a system that executes tasks according to the input
received from the scheduler. The scheduler, which provides this input, is then
the controller of the processor, which is the plant. The control action is
divided in three aspects: task selection, budget computation, and choice of
the intervention policy. The first two phases choose which tasks will be
executed, in what order, and the budget --- defined as the maximum running time before
preemption --- assigned to each of them.
The intervention policy, instead, determines when the scheduler will next be
activated. In the following, we assume a straightforward intervention policy
where the scheduler runs after every scheduling round. More complex policies
can of course be introduced according to the requirements of specific
applications; for example, the scheduler might run whenever the difference
between the desired execution time and the real measured time exceeds a certain
threshold. A detailed analysis of this aspect is orthogonal to the rest of our
methodology and belongs to future work.
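As an illustration only (the function and the numbers below are ours, not part of the framework), such a threshold-based intervention policy can be sketched in a few lines:

```python
def scheduler_should_run(budgets, measured, threshold):
    """Trigger a new scheduler intervention when the accumulated deviation
    between the budgets assigned in the last round and the measured
    running times exceeds the given threshold."""
    deviation = sum(abs(b - m) for b, m in zip(budgets, measured))
    return deviation > threshold

# Small measurement noise: no intervention; a large deviation triggers one.
ok = scheduler_should_run([10.0, 5.0, 5.0], [9.8, 5.1, 5.0], threshold=1.0)
overload = scheduler_should_run([10.0, 5.0, 5.0], [6.0, 5.0, 5.0], threshold=1.0)
```

In a real implementation, the deviation measure and the threshold would of course be design parameters of the intervention policy.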
Separating the scheduler action into three components facilitates modifications
of the controller model according to different requirements. More precisely,
the overall model structure remains the same and only the equations modeling
the affected aspects need to be changed.
Notice that anything that influences the behavior of the running tasks, other
than the scheduler action, is modeled as an exogenous disturbance: an action
that prevents the system from reaching the goal requirements and that the
scheduler must counteract. This modeling assumption is suitable for factors
that are, for all practical purposes, unpredictable and unmodifiable.
The notion of disturbance (basically disturbance rejection) from control
theory is then adopted to model these factors, with the immediate benefit of
having at our disposal a powerful set of theoretical tools to tackle
effectively the ensuing problems.
The abstractness and genericity of our framework come with the potential
drawback of making it difficult to implement the scheduling policies within an
existing scheduler architecture, which can differ significantly from the
abstract modular structure of Figure \ref{fig:framework}.
Nevertheless, we believe that the theoretical analysis that can be carried out
within our framework is extremely useful to determine the criticalities of
the system under design, even in the cases in which the final implementation
will require \emph{ad hoc} adjustments.
\subsection{The plant}
\label{sec:plant}
The ``open loop'' model of the plant describes the process executor as a
discrete-time system. It receives a schedule (which will be the output of the
scheduler described in the next subsection) as input and returns the outcome
of executing the tasks as required by the schedule.
A \emph{round} is the time between two consecutive runs of the scheduler.
Assume that more than one task can be scheduled for execution in a given
round; correspondingly, we introduce the following variables to describe the
plant:
\begin{itemize}
\item $N$, the number of tasks to be scheduled;
\item $\tau_p(k)\in \Re^N$, the actual running times of the tasks in the $k$-th
scheduling round;
\item $\tau_r(k)\in \Re$, the duration of the $k$-th round;
\item $s(k)\in \Re^N$, the schedule at the $k$-th round: an ordered list of the budgets, one for each task; the order determines the execution order and a budget of $0$ means that the task is not scheduled for execution in that round;
\item $\delta b(k)\in \Re^N$, the disturbance during the $k$-th round,
defined as the difference between the assigned budget and the actual
running time of a task;\\
(Notice that this variable models uniformly a variety of possible
specific phenomena, such as a task that yields control or terminates
before preemption, an interrupt occurring, the execution of a critical
section where preemption was disabled, etc.)
\item $t \in \Re$, the total time actually elapsed from the system
initialization.
\end{itemize}
The model of the plant is then the following system of equations:
\begin{equation}
\left\{
\begin{array}{rcl}
\tau_p(k) &=& s(k-1) + \delta b(k-1) \\
\tau_r(k) &=& r_1 \tau_p(k-1) \\
t(k) &=& t(k-1) + \tau_r(k-1) \\
\end{array}
\right.
\label{eqn:BasePlantModel}
\end{equation}
where $r_1$ is a row vector of length $N$ with all unit elements.
Model \eqref{eqn:BasePlantModel} is linear and time-invariant. Negative budgets
are not allowed and, correspondingly, each element of $s(k)+\delta b(k)$ cannot be
negative. However, this is irrelevant for the controller, since the values it
considers remain within these domain limitations. Notice that the
discrete-time model assumes that the scheduler is active only once per round.
Clearly, some $s(k)$ elements can be zero, meaning that not all the tasks will
actually run. The $\tau_r$ variable models round duration, which takes into
account system responsiveness issues.
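As an illustrative sketch, model \eqref{eqn:BasePlantModel} can be simulated round by round as follows; all function and variable names, as well as the example disturbance values, are of our choosing and not part of any scheduler implementation:

```python
# Sketch of one round of the discrete-time plant model above.
# All names and example disturbance values are hypothetical.

def plant_step(state, s_prev, db_prev):
    """Advance the plant by one round.

    state  = (tau_p, tau_r, t) at round k-1
    s_prev = schedule s(k-1); db_prev = disturbance delta_b(k-1)
    """
    tau_p_prev, tau_r_prev, t_prev = state
    tau_p = [b + d for b, d in zip(s_prev, db_prev)]  # tau_p(k) = s(k-1) + delta_b(k-1)
    tau_r = sum(tau_p_prev)                           # tau_r(k) = r_1 tau_p(k-1)
    t = t_prev + tau_r_prev                           # t(k) = t(k-1) + tau_r(k-1)
    return tau_p, tau_r, t

# Two tasks; in the first round one task yields 1 time unit early.
state = ([0.0, 0.0], 0.0, 0.0)
state = plant_step(state, [3.0, 2.0], [-1.0, 0.0])  # -> ([2.0, 2.0], 0.0, 0.0)
```

Note that $\tau_r$ and $t$ lag one round behind $\tau_p$, exactly as in the model equations.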
\subsection{The scheduler}
\label{sec:scheduler}
A scheduler should usually achieve the following goals, regardless of the
specificities of the system where it runs \cite{Tanenbaum-2007a}.
\begin{itemize}
\item \emph{Fairness}: comparable tasks should get comparable service;
(This obviously does not apply to tasks with different properties.)
\item \emph{Policy enforcement}: the scheduler has to comply with general
	system policies; (This aspect is especially relevant for real-time
	systems where constraining system policies are usually in place.)
\item \emph{Balance}: all the components of the system need to be used as
uniformly as possible.
\end{itemize}
In addition to these general requirements, a scheduler must also achieve
additional goals that are specific to the system at hand.
For instance, in batch systems, where responsiveness is not an issue, the
scheduler should guarantee maximization of the throughput, minimization of
the turnaround time, and maximization of CPU utilization.
In interactive systems, on the contrary, minimization of response time and
proportionality guarantees are likely scheduling goals.
Finally, deadlines and predictability are specific to real-time systems.
In the following, we outline a general approach to design a scheduler --- based
on control-theory and the framework presented above --- that achieves a defined
set of goals. Unlike the standard approach that designs a new algorithm for a
new class of systems, we can accommodate most scenarios within the same
framework by changing details of the equations describing the control model.
\textbf{Process selection and budget computation.}
The scheduler decides which tasks to activate and chooses
a budget for them. This is achieved by setting the variable $s_i(k)$, which defines
the budget assigned to the $i$-th task at round $k$.
This action is actually made of two conceptually different parts. A Process
Selection Component (PSC) takes care of deciding the next task to be executed
by the processor, while a Budget Computation Component (BCC) fixes the
duration of the execution for each selected task. If more than one
task is to be executed per round, PSC computes an ordered list of tasks
and BCC assigns one or more budgets to the elements of the list.
Execution need not be continuous: if a time $\hat{t_i}$ is assigned to the
$i$-th task, the actual execution can be split into multiple slots within the
same round.
The distinction between PSC and BCC is modeled by defining $s(k)$ as
$S_\sigma(k)\,b(k)$, where $S_\sigma(k)$ is a $N\times n(k)$ matrix representing
the tasks selected at the $k$-th round, while $b(k)\in \Re^{n(k)}$ represents
the budget assigned to the selected tasks. Notice that, in the most general
case, the number of tasks that can be executed at each round is a
variable $n(k)$.
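The decomposition $s(k)=S_\sigma(k)\,b(k)$ can be sketched as follows; \texttt{schedule\_vector} and its arguments are illustrative names of our choosing, not part of any scheduler API:

```python
# Sketch of s(k) = S_sigma(k) b(k): the PSC picks an ordered list of n(k) task
# indices, the BCC assigns them budgets, and the matrix-vector product scatters
# the budgets into the length-N schedule. All names are illustrative.

def schedule_vector(N, selected, budgets):
    """selected: ordered task indices chosen by the PSC;
    budgets: one budget per selected task, chosen by the BCC."""
    s = [0.0] * N
    for idx, b in zip(selected, budgets):
        s[idx] = b  # column of S_sigma(k) selecting task idx, scaled by its budget
    return s
```

Tasks not selected in a round keep a zero budget, meaning they are not scheduled for execution in that round.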
PSC can operate statically or dynamically.
In the first case, the strategy is independent of the previous choices,
such as in Round Robin (RR) scheduling.
In the second case, PSC retains a history of the previous choices and bases its
new choice on it, such as in fair-share scheduling \cite{KayLauder-1988a}.
In the case of a static PSC, the matrix $S_\sigma(k)$ is not an explicit function
of $b(i)$ and $S_\sigma(j)$, with $i,j<k$. This does not prevent $S_\sigma(k)$
from depending on $\tau_p$ and $\tau_r$ in the previous rounds: these represent
the actual behavior of the CPU with respect to each task, and hence need not
reflect the choices that the scheduler made in previous rounds, due to
contingencies in the execution of the system.
Consider, as a more concrete example, the shortest remaining time next
algorithm: the PSC chooses the next task to be executed
according to their remaining running times, which obviously depend on what
actually happened in the previous rounds (i.e., the history of $\tau_p$),
but not necessarily on the scheduler's choice (i.e., $s$).
Once the PSC has selected the tasks to be executed, the BCC computes the
budgets for them, by setting $b(k)$. The BCC can be static or dynamic, too: in the
first case the budget is a constant vector $b(k)=\hat{b}$, whereas in the second
case the budget may change at every round.
\textbf{Designing the controller.}
Let us now discuss how to define and enforce some of the previously outlined
features in a scheduler, for given PSC and BCC.
\subsubsection{Fairness}
A fair scheduling algorithm has the property that, in every fixed time interval,
the CPU usage is distributed among tasks proportionally to their
weights. For the sake of simplicity, let us focus on a fixed number of rounds
$H$.\footnote{Generalizing this approach to deal with a time window, rather
than a number of rounds, is straightforward.}
Let $p_i(k)$ be the weight
of the $i$-th task at the $k$-th round. In order to guarantee fairness for
each task, the scheduler must satisfy the following equation:
\begin{equation}
\sum_{h=k}^{k+H}\tau_{p,i}(h) = \sum_{h=k}^{k+H}
\cfrac{p_i(h)}{\sum_{j=1}^{N}p_j(h)}\, \tau_r(h)
\label{eqn:eqfair}
\end{equation}
Informally, \eqref{eqn:eqfair} means that the scheduler distributes the CPU
among the different tasks proportionally to their weights (over the next
$H$ rounds). The scheduler then computes an $s(k)$ which satisfies equation
\eqref{eqn:eqfair}. The algorithm to compute $s(k)$ comes from the solution of the corresponding control problem, for example by means of optimal control theory\footnote{See \cite{Doyle-1996a} for an overview of optimal control theory and further references on the subject.}: find the optimal value of the controlled variable $s(k)$, given a certain cost function $J$ of the state variables $\tau_p, \tau_r, t$.
\subsubsection{Policy enforcement}
The details of how to handle this aspect within our control framework depend
essentially on \emph{which} system policy should be enforced: the term
``policy'' can refer to very disparate concerns. The experiments described in
Section \ref{sec:experimental} will tackle a specific instance of activation
policy.
Let us notice, in passing, that the strict coupling between the system policy
and the features of the controller that enforce such a policy is one of the
reasons why most scheduling algorithms do not disentangle the different aspects
and tend to lump all of them together in the same model.
\subsubsection{Balance}
Balance requirements do not belong to the simplified model of equation
\eqref{eqn:BasePlantModel}, which refers to a mono-processor system whose only
resource is CPU time. It is straightforward, however, to extend the model along
the same lines to accommodate additional resources, such as another CPU or
I/O bandwidth. New variables would model the usage of these further resources,
with the same assumptions as in \eqref{eqn:BasePlantModel}.
Of course, these control variables must be measurable in the real system for
the scheduler to be effectively implementable (see \cite{HellersteinEtAl-2005a}
for a discussion of this orthogonal aspect).
Then, control-theoretical techniques --- such as optimal control theory or
model-predictive control --- can be used to design a scheduler which enforces
a resource occupation given as a set point.
\subsubsection{Throughput maximization}
If throughput is part of the requirements for our scheduler, we include the
following set of equations in the model \eqref{eqn:BasePlantModel}:
\begin{equation}
\rho_p(k) = \max \left(\rho_p(k-1) - \tau_p(k-1),\, 0 \right)
\label{eqn:rhoequation}
\end{equation}
Equation \eqref{eqn:rhoequation} defines $\rho_p(k)$, the remaining execution
time of task $p$ at round $k$, as the difference between the remaining time
during the previous round and the actual running time of $p$ during the current
round.
Throughput maximization can then be defined as the round-wise maximization of
the number of processes whose $\rho_p$ value is zero.
Standard control-theoretical techniques can then design a controller that
provably achieves this requirement.
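The update \eqref{eqn:rhoequation} and the round-wise count to be maximized can be sketched as follows (function names are illustrative):

```python
# Remaining-time update of the equation above: element-wise, clamped at zero.
# Illustrative names.

def update_remaining(rho_prev, tau_p):
    return [max(r - t, 0.0) for r, t in zip(rho_prev, tau_p)]

def completed(rho):
    """Number of tasks whose remaining time is zero (the quantity to maximize)."""
    return sum(1 for r in rho if r == 0.0)
```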
\subsubsection{Responsiveness}
The model \eqref{eqn:BasePlantModel} includes a variable $\tau_r$ that describes
the duration of a round, hence requirements on the response time can be
expressed as a target value for $\tau_r$. More precisely, the smaller $\tau_r$,
the more responsive is the controlled system.
\subsubsection{Other requirements}
The same framework can address other requirements, such as turnaround time,
CPU utilization, predictability, proportionality, and deadline enforcement.
As an example, the experiments in Section \ref{sec:experimental} will address
proportionality and deadline enforcement explicitly.
\subsection{Complexity parameters}
\label{sec:complexity}
Analyzing the complexity of scheduling algorithms is often arduous, mostly
due to the difficulty of determining the right level of abstraction to describe
the various components (i.e., the processor, the scheduler, etc.).
It also does not make sense to directly compare the general framework we
have outlined to existing algorithms; rather, specific implementations
can be evaluated experimentally.
It is nonetheless interesting to present a few simple rules of thumb to have a
rough estimate of the complexity of an algorithm designed within our framework.
With the goal of determining the number of elementary operations spent by the
CPU to execute the scheduling algorithm itself, let us introduce the constants
$t_{\Sigma}$, $t_S$, $t_{\Pi}$, and $t_{\rightarrow}$.
They denote the (average) duration of a sum, subtraction, multiplication, and
bit-shift operation, respectively.
Also, let $t_c$ denote the (average) duration of a ``light'' context switch (i.e., the time overhead taken by operations such as storing and restoring context information, which does not include the actual computation of the next task to be run and its budget). Using these figures, Section \ref{sec:experimental} evaluates the
complexity of a specific algorithm that validates our framework.
\section{Application and experimental results}
\label{sec:experimental}
This section instantiates the proposed framework by developing, with
control-theoretic techniques, a scheduler that meets given proportionality
and deadline requirements.
The design is evaluated on the Hartstone \cite{Hartstone-1992a,Weiderman-1989a}
benchmark, a standard real-time system benchmark.
The Hartstone benchmark evaluates deadline misses and was initially conceived
to assess architectures and compilers, but it can also be used to evaluate
the performance of a scheduling algorithm.
The design and the evaluation are necessarily preliminary, and do not tackle every aspect that is relevant in real-time scheduling (for example, earliness/tardiness bounds are not considered); these experiments are meant as a feasibility demonstration of the approach and tackling more challenging problems belongs to future work.
\textbf{Regulating round duration and CPU distribution.}
One of the requirements fixes a desired duration for scheduling rounds;
let $\tau_r^{\circ}$ denote such a duration.
Moreover, define
\begin{equation}
\theta_p^{\circ} \in \Re^N, \qquad \theta_{p,i}^{\circ} \geq 0,
\qquad\sum\limits_{i=1}^N \theta_{p,i}^{\circ}=1
\label{eqn:thetapo}
\end{equation}
as the vector with the \emph{fractions} of CPU time to be allotted to each task.
This vector can be expressed as a function of workload and round duration, and
the corresponding requirement be expressed as a set point for each task.
More generally, notice that requirements on fairness, tardiness, and similar
features, are also expressible in terms of $\tau_r^{\circ}$ and
$\theta_p^{\circ}$.
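For example, $\theta_p^{\circ}$ can be obtained by normalizing per-task workload rates, which yields non-negative fractions summing to one as required by \eqref{eqn:thetapo}; the function name and inputs below are illustrative:

```python
# Build theta_p^o from per-task workload rates: non-negative fractions of CPU
# time that sum to one. Illustrative sketch; rates are placeholders.

def cpu_fractions(rates):
    total = sum(rates)
    return [r / total for r in rates]

# Five tasks with equal workload rates, as in the Hartstone baseline set.
cpu_fractions([64.0] * 5)  # -> [0.2, 0.2, 0.2, 0.2, 0.2]
```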
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{controlsystem.png}
\caption{The control scheme proposed.}
\label{fig:ProposedScheme-TwoLoopsLinear}
\end{figure*}
Let us now show a possible approach to design a scheduler that meets the
requirement on the duration of the scheduling rounds.
Consider for example a cascade controller such as in Figure
\ref{fig:ProposedScheme-TwoLoopsLinear}.
An appropriate choice for the involved regulators is to give $R_r$ a
PI structure:
\begin{equation}
R_r(z)=k_{rr}\frac{z-z_{rr}}{z-1}
\end{equation}
while selecting $R_p$ as a diagonal integral regulator with gain $k_{pi}$:
\begin{equation}
\begin{array}{c}
A_{R_p}=I_N, \qquad B_{R_p}=k_{pi}I_N, \\
C_{R_p}=I_N, \qquad D_{R_P}=0_{N\times N}.
\end{array}
\end{equation}
Correspondingly, one can perform a model-free synthesis,
obtaining the values $k_{rr}=1.4$, $z_{rr}=0.88$, and $k_{pi}=0.25$.
These values instantiate a cascade controller which we take as our BCC.
For the PSC, let us choose a simple approach where every task with a
positive computed budget is activated following a Round Robin policy.
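As a sketch, the PI regulator $R_r(z)=k_{rr}(z-z_{rr})/(z-1)$ admits the time-domain realization $u(k)=u(k-1)+k_{rr}\,(e(k)-z_{rr}\,e(k-1))$, which can be implemented as follows. This is our own minimal realization for illustration, not the Scilab code used in the experiments:

```python
class PIRegulator:
    """Discrete-time realization of R_r(z) = k_rr (z - z_rr) / (z - 1)."""

    def __init__(self, k_rr=1.4, z_rr=0.88):
        self.k_rr, self.z_rr = k_rr, z_rr
        self.u_prev, self.e_prev = 0.0, 0.0

    def step(self, e):
        # u(k) = u(k-1) + k_rr * (e(k) - z_rr * e(k-1))
        u = self.u_prev + self.k_rr * (e - self.z_rr * self.e_prev)
        self.u_prev, self.e_prev = u, e
        return u
```

The pole at $z=1$ gives integral action (zero steady-state error on constant set points), while $z_{rr}$ places the zero that shapes the transient.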
\textbf{Benchmark description.}
The Hartstone benchmark defines various series of tests.
Each of them starts from a
baseline system, verifies its correct behavior, and then iteratively adds
workload and re-verifies its correct behavior until a failure occurs.
The final amount of (supported) additional workload gives a measure of the
system performance. For brevity (and without loss of generality), we consider
only the first series of the Hartstone benchmark --- the ``PH series'',
which deals with periodic tasks --- in this paper.
The PH (Periodic tasks, Harmonic frequencies) series adds
tasks and modifies their period and workload to stress the system.
Tasks are periodic and harmonic.
The baseline system \cite{Hartstone-1992a} consists
of five periodic tasks. Each task has a frequency and a workload.
All frequencies are an integral multiple of the smallest, and the workload is
determined by a fixed amount of work to be completed within the task's period.
More precisely, each task has to
execute a given number of ``Whetstones'' within a period, hence the workload
rate is measured in Kilo-Whet instructions per second [KWIPS]. In our tests,
we assume that the CPU can complete 1 Kilo-Whet in 25 time units.
We do not change this figure throughout our simulations, thus neglecting the
overhead on the hardware of executing the scheduler. In addition, let a
frequency of 1 Hertz correspond to a period of 20000 time units.
All the tasks are independent: their execution
does not need synchronization and they are all scheduled to start at the same
time. The deadline for the workload completion for each task coincides with the
beginning of the next period. These assumptions are appropriate, for example,
for programs that monitor several sensors at different rates, and display the
results without user intervention or interrupts.
Table \ref{tab:baselineset} gives details on the baseline system.
\begin{table}[ht]
\begin{footnotesize}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Task} & \textbf{Frequency} & \textbf{Workload} & \textbf{Workload rate} \\
\hline
\hline
1 & 2 Hertz & 32 Kilo-Whets & 64 KWIPS \\
2 & 4 Hertz & 16 Kilo-Whets & 64 KWIPS \\
3 & 8 Hertz & 8 Kilo-Whets & 64 KWIPS \\
4 & 16 Hertz & 4 Kilo-Whets & 64 KWIPS \\
5 & 32 Hertz & 2 Kilo-Whets & 64 KWIPS \\
\hline
\end{tabular}
\caption{The baseline task set.}
\label{tab:baselineset}
\end{footnotesize}
\end{table}
\textbf{Benchmark evaluation.}
We run the benchmark with the algorithm designed above within our framework
(regulating CPU distribution and round duration) under three different values
of $\tau_r^{\circ}$, as well as with the standard real-time
policies EDF and LLF. The yardstick for the evaluation is a simple Round Robin
scheduler.
The presented results were obtained with simulations in the Scilab environment
\cite{scilab}; this allowed a high-level evaluation of the scheduling algorithms
that is not tied to any lower-level implementation detail.
As a further validation, we also ran the same tests within the Cheddar framework
\cite{Cheddar-2009a}.
The results of the two sets of tests, with Cheddar and with Scilab, essentially
coincide, therefore reinforcing our confidence in the soundness of the
evaluation.
In the first PH test, the highest-frequency task (task 5) has the
frequency increased by 8 Hertz at each iteration, until a deadline is missed.
This tests the interactivity of the system or, in other words, its ability
to switch rapidly between tasks. In the second test, all the frequencies
are scaled by 1.1, 1.2, $\ldots$ at each iteration, until a deadline is
missed. This is a uniform increase of the number of operations done by the
tasks, therefore testing the ability to handle an
increased but still balanced workload. The third test starts from the
baseline set and increases the workload of each task by 1, 2, $\ldots$
KWIPS per period at each iteration, until a deadline is missed. This increases
the system's overhead while introducing unbalance. In the last test, a new task
is added at each iteration, with a workload of 8 KWIPS per period
and a frequency of 8 Hertz (equivalent to the third task of the baseline
set). This test evaluates the performance in handling a large number of
tasks.
\begin{table*}[ht]
\begin{footnotesize}
\begin{center}
\begin{tabular}{|cc|cc|cc|cc|cc|}
\hline
& & \multicolumn{8}{c}{\textbf{Benchmark No.}} \vline \\
& & \multicolumn{2}{c}{I} & \multicolumn{2}{c}{II} &
\multicolumn{2}{c}{III} & \multicolumn{2}{c}{IV} \vline \\ \hline \hline
& Period duration & \multicolumn{2}{c}{10000} \vline & \multicolumn{2}{c}{4000} \vline
& \multicolumn{2}{c}{10000} \vline & \multicolumn{2}{c}{10000} \vline \\ \hline \hline
\multirow{8}{*}{\begin{sideways}\textbf{Policy}\end{sideways}}
& EDF & 14 & (265) & 24 & (42) & 7 & (43) & 7 & (73) \\
& LLF & 14 & (993) & 24 & (1183) & 7 & (491) & 7 & (7143) \\
& RR, $q/T_{\min}^{base}=1/625$ & 3 & (3485) & 24 & (3999) & 3 & (4867) & 7 & (9351) \\
& RR, $q/T_{\min}^{base}=5/625$ & 3 & (705) & 24 & (799) & 3 & (981) & 7 & (1870) \\
& RR, $q/T_{\min}^{base}=10/625$ & 3 & (357) & 24 & (399) & 2 & (435) & 7 & (935) \\
& PSC+BCC, $\tau_r^{\circ}=500$ & 14 & (126) & 24 & (60) & 7 & (126) & 7 & (252) \\
& PSC+BCC, $\tau_r^{\circ}=1000$ & 14 & (66) & 24 & (36) & 7 & (66) & 7 & (132) \\
& PSC+BCC, $\tau_r^{\circ}=2000$ & 14 & (48) & 24 & (24) & 7 & (42) & 7 & (84) \\
\hline
\end{tabular}
\end{center}
\end{footnotesize}
\caption{Hartstone PH (Periodic Tasks, Harmonic Frequencies) series benchmark: number of iterations before first deadline miss, and (in parentheses) number of context switches in the last period of the last test iteration before that with the first miss. In the RR case the quantum $q$ is selected as a fraction of the minimum task period ($625$ time units) in the baseline system, denoted by $T_{\min}^{base}$.}
\label{tab:BenchPH-IterationsBefore1stMiss}
\end{table*}
Table \ref{tab:BenchPH-IterationsBefore1stMiss} shows the results of the PH
tests. The scheduling algorithm designed within our framework shows consistently
good performance, and can outperform other standard algorithms in certain
operational conditions for aspects such as deadline misses.
\textbf{Complexity evaluation.}
Let $\sigma_{POL}$ denote the time spent during one round in running the
scheduler $POL$.
In our experiments, $POL$ is one of $RR$, $SRR$ (Selfish
Round Robin\footnote{Notice that the SRR is a useful example as it provides an adaptation mechanism.}),
and $PSC+BCC$.
\begin{small}
\begin{equation}
\begin{array}{rcl}
\sigma_{RR} &=& N\cdot t_{\rightarrow}+
N\cdot t_c, \\
\sigma_{SRR} &=& N\cdot t_{\rightarrow}+N\cdot t_c+
N^2\cdot \left(t_S+t_{\Pi}\right), \\
\sigma_{PSC+BCC} &=& N\cdot t_{\rightarrow}+N\cdot
t_c+(N+1)\cdot t_S + \\
& & (2N+1)\cdot t_{\Sigma}+(2N+2)\cdot t_{\Pi}
\end{array}
\end{equation}
\end{small}
The expressions above take into account the arithmetic
operations necessary to execute the controller's code.
Then, if we denote the quantum (where applicable) by $q$, the total duration
of one round is given by
\begin{small}
\begin{equation}
\tau_{r,RR} = N\cdot q, \quad
\tau_{r,SRR} = N_w\cdot q, \quad
\tau_{r,PSC+BCC} = \tau_r^{\circ}
\end{equation}
\end{small}
\noindent where $N_w \leq N$ is the number of tasks in the waiting queue in the
SRR case.
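The per-round cost expressions above can be evaluated numerically; in this sketch the timing constants are placeholders, to be measured on the target platform:

```python
# Per-round scheduler costs from the expressions above. The timing constants
# (bit-shift, context switch, subtraction, sum, multiplication durations) are
# placeholders, to be measured on the target platform.

def sigma_rr(N, t_shift, t_c):
    return N * t_shift + N * t_c

def sigma_psc_bcc(N, t_shift, t_c, t_S, t_sum, t_mul):
    return (N * t_shift + N * t_c + (N + 1) * t_S
            + (2 * N + 1) * t_sum + (2 * N + 2) * t_mul)
```

Both costs grow linearly in $N$; the difference is the constant factor that $PSC+BCC$ pays for its feedback computations.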
Correspondingly, the overall time complexity of the algorithms can be computed.
With $PSC+BCC$, the round duration is independent of the number of tasks and can
be tuned by changing the parameter $\tau_r^{\circ}$.
In addition, it is interesting to compare the complexity of our $PSC+BCC$ algorithm against the $RR$ algorithm (an open-loop policy) and the $SRR$ algorithm (a closed-loop variant of $RR$, where the possibility of moving tasks between queues provides the feedback mechanism).
It turns out that our $PSC+BCC$ algorithm is computationally slightly more complex than $RR$; however, the more complex properties that $PSC+BCC$ can guarantee --- such as convergence to the desired distribution in the presence of disturbances --- pay off the additional cost.
$SRR$, on the other hand, can enforce similar properties and has a greater computational complexity than $PSC+BCC$.
The comparison with $SRR$ requires, however, further investigation, because the parameters of $SRR$ do not seem to have an entirely clear interpretation within the control-theoretical framework.
\section{Conclusion and future work}
\label{sec:conclusions}
We presented a framework, based on control theory, to approach the scheduling
problem. The approach clearly separates the models of the processor and of the
scheduler. This enables the re-use of a vast repertoire of control-theoretical
techniques to design a scheduling algorithm that achieves certain requirements.
Algorithm design is then essentially reduced to controller synthesis.
We showed how to compare the resulting algorithms to existing ones, and the
advantages that are peculiar to our approach.
This paper focused on developing the components responsible for the computation
of budgets and the selection of tasks.
Future work will focus on the design of the intervention policies.
This aspect can still be approached within the same framework, by analyzing
the effects of different policies on the model equations and on the overall
system performance.
Moreover, we plan to refine the complexity evaluation of scheduling algorithms.
\input{scheduling.bbl}
\end{document}
As indicated by the recent fatalities \cite{ntsbreport}, the current sensors in self-driving vehicles with level 2 and level 3 autonomy (lacking thermal imaging) do not adequately detect vehicles and pedestrians. Pedestrians are especially at risk after dark, when 75\% of the 5,987 U.S. pedestrian fatalities occurred in 2016 \cite{HighwaySafety2017}. Thermal sensors perform well in such conditions where autonomy level 2 and level 3 sensor-suite technologies are challenged. As is well-known, thermal IR cameras are relatively more robust to illumination changes, and can thus be useful for deployment both during the day and night. In addition, they are low-cost, non-intrusive and small in size. Consequently, thermal IR cameras have become increasingly popular in applications such as autonomous driving recently, as well as in other mainstream applications such as security and military surveillance operations. Detection and classification of objects in thermal imagery is thus an important problem to be addressed and invested in, to achieve successes that can be translated to deployment of such models in real-world environments.
Although object detection has always remained an important problem in computer vision, most of the efforts have focused on detecting humans and objects in standard RGB imagery. With the advent of Deep Convolutional Neural Networks (CNNs) \cite{NIPS2012_4824}, object detection performance in the RGB domain has been significantly improved using region-based methods, such as the R-CNN \cite{rcnn} and Fast R-CNN \cite{fastrcnn} that use selective search, as well as Faster R-CNN \cite{fasterrcnn} that uses region-proposal networks to identify regions of interest.
\begin{figure}[]
\centering
\includegraphics[width=\hsize, scale=0.5]{showcase.pdf}
\caption{\textit{Left:} Detection with single mode Faster-RCNN; \textit{Middle:} Detection using the proposed method; \textit{Right:} Annotated Ground Truth as provided in FLIR dataset \cite{flir}.}
\vspace{-1.5em}
\label{fig:teaser}
\end{figure}
Object detection methods such as YOLO \cite{yolo} recast the object detection problem as a regression problem, where the coordinates of the bounding boxes and the class probabilities for each of those boxes are generated simultaneously. This makes YOLO \cite{yolo} extremely fast, although its performance is lower than that of R-CNN-based counterparts \cite{refineDet}.
The aforementioned object detection methods rely, however, on architectures and models that have been trained on large-scale RGB datasets such as ImageNet, PASCAL-VOC, and MS-COCO. The relative dearth of publicly available large-scale datasets in the thermal domain prevents such frameworks from achieving an equivalent level of success on thermal images.
In this work, we propose a `pseudo multi-modal' framework for object detection in the thermal domain, consisting of two branches. One branch is pre-trained on large-scale RGB datasets (such as PASCAL-VOC or MS-COCO) and finetuned using
a visual RGB input that is obtained using an image-to-image (I2I) translation framework from a given thermal image (and hence the name `pseudo multi-modal'). The second branch follows the standard training process on a relatively smaller thermal dataset. Our multi-modal architecture helps borrow complex high-level features from the RGB domain to improve object detection in the thermal domain. In particular, our multi-modal approach does not need paired examples from two modalities; our framework can borrow from any large-scale RGB dataset available for object detection and does not need the collection of a synchronized multi-modal dataset. This setting makes this problem challenging too. Our experimental results demonstrate that using our multi-modal framework significantly improves the performance of fully supervised detectors in the thermal domain. The proposed framework also overcomes the problem of inadequacy of training examples in the thermal domain. Furthermore, we also study the relevance of this methodology when there is very limited data in the thermal domain. Our experimental results on the recently released FLIR ADAS\cite{flir} thermal imagery dataset show that, using only a quarter of the thermal dataset, the proposed multi-modal framework achieves a higher mAP than a single-mode fully-supervised detector trained on the entire dataset.
The remainder of this paper is organized as follows. Section \ref{relatedwork} provides the context for this study, including a brief overview of early and recent work on applying deep learning to thermal imagery. Section \ref{method} describes our approach and methodology. Section \ref{exp_section} describes the experiments carried out and their results. Section \ref{discussion} investigates the impact of training set size and image resolution, and ends with a discussion of cases where our model fails to perform well.
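The two-branch pipeline described above can be summarized by the following sketch; \texttt{translate\_to\_rgb}, \texttt{rgb\_branch}, \texttt{thermal\_branch} and \texttt{fuse} are placeholder names for the I2I translation network, the two detector branches and their fusion, and do not correspond to any actual library API:

```python
# High-level sketch of the pseudo multi-modal forward pass. All component
# names are placeholders; the real branches would be CNN detectors.

def pseudo_multimodal_detect(thermal_image, translate_to_rgb,
                             rgb_branch, thermal_branch, fuse):
    pseudo_rgb = translate_to_rgb(thermal_image)  # I2I translation, thermal -> RGB
    f_rgb = rgb_branch(pseudo_rgb)                # branch pre-trained on RGB data
    f_th = thermal_branch(thermal_image)          # branch trained on thermal data
    return fuse(f_rgb, f_th)                      # joint features -> detections
```

No paired thermal/RGB data is required: the RGB branch consumes only the translated image, so any large-scale RGB detection dataset can be used for its pre-training.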
\begin{figure*}
\begin{center}
\includegraphics[width=16.5cm, height=10cm]{Framework.pdf}
\end{center}
\vspace{-7em}
\caption{Adaptation of the proposed multi-modal framework for Faster-RCNN} \label{framework}
\end{figure*}
\section{Related Work} \label{relatedwork}
Detection and classification of objects in thermal imagery have been an active area of research in computer vision \cite{2007Zin}\cite{2013target_detection}\cite{MovingObject2015}\cite{kaist}, especially in the context of military and surveillance applications \cite{multichannelCNN}. There has been a significant amount of work on classifying and detecting people and objects in thermal imagery using standard computer vision and machine learning models, even before deep learning became popular. Bertozzi \textit{et al.} \cite{bertozzi} proposed a probabilistic template-based approach for pedestrian detection in far infrared (IR) images. They divided their algorithm into three parts: candidate generation, candidate filtering and validation of candidates. One main weakness of this approach is that it assumes the human is hotter than the background, which may not be the case in many real-world scenarios. Davis \textit{et al.} \cite{davis-twostageapproach} proposed a two-stage template-based method to detect people in widely varying thermal imagery. To locate the potential person locations, a fast screening procedure is used with a generalized template, and then an AdaBoost ensemble classifier is used to test the hypothesized person locations. J\"ungling \textit{et al.} \cite{Jngling2009FeatureBP} proposed a local feature-based pedestrian detector on thermal data. They used a combination of multiple cues to find interest points in the images and used SURF \cite{surf} features to describe these points. A codebook is then constructed to locate the object center. A challenge for this detector is achieving high performance when local features are not salient.
While these efforts have shown good performance for IR image classification and detection tasks over a small number of objects, they have been outperformed in recent years by deep learning models that enable more descriptive features to be learned. With the increase in popularity of deep neural networks, several methods have been proposed for applying deep learning methods to thermal images. Peng \textit{et al.} \cite{pengnirfacenet} proposed a Convolutional Neural Network (CNN) for face identification in near IR images. Their CNN is a modification of GoogLeNet but has a more compact structure. Lee \textit{et al.} \cite{LEE2016261} designed a lightweight CNN consisting of two convolutional layers and two subsampling layers for recognizing unsafe behaviors of pedestrians using thermal images captured from moving vehicles at night. They combined their lightweight CNN with a boosted random forest classifier. Chevalier \textit{et al.} \cite{chevalier:hal-01332061} proposed LR-CNN for automatic target recognition which is a deep architecture designed for classification of low-resolution images with strong semantic content. Rodger \textit{et al.} \cite{lwircnn} developed a CNN trained on short-to-midrange high resolution IR images containing six object classes (person, land vehicle, helicopter, aeroplane, unmanned aerial vehicle and false alarm) using an LWIR sensor. This network was successful at classifying other short to mid-range objects in unseen images, although it struggled to generalize to long range targets. Abbott \textit{et al.} \cite{Abbott2017} used a transfer learning approach with the YOLO \cite{yolo} framework to train a network on high-resolution thermal imagery for classification of pedestrians and vehicles in low-resolution thermal images. Berg \textit{et al.} \cite{2015BergRails}\cite{amanda_phd} proposed an anomaly-based obstacle detection method using a train-mounted thermal camera. 
Leykin \textit{et al.} \cite{leykin} proposed a fusion tracker and pedestrian classifier for multispectral pedestrian detection. Proposals for performing detection are generated using background subtraction and evaluated using periodic gait analysis.
Among efforts that use a multimodal approach, Wagner \textit{et al.} \cite{wagner} applied Aggregated Channel Features (ACF) and Boosted Decision Trees (BDT) for proposal generation and classified these proposals with a CNN, which fuses Visual and IR information. Choi \textit{et al.} \cite{choietal} use two separate region proposal networks for both Visual and IR images and evaluate the proposals generated by both the networks with Support Vector Regression on fused deep features. The efforts closest to our work are those of Konig \textit{et al.} \cite{daniel} and Liu \textit{et al.} \cite{Liu2016MultispectralDN}, both of which propose a multi-modal framework that combines RGB and thermal information in a Faster-RCNN architecture by posing it as a convolutional network fusion problem. However, all of these multimodal efforts assume the availability of a dataset with paired training examples from the visual and thermal domains. On the other hand, our work assumes only the presence of thermal imagery and seeks to leverage the use of publicly available RGB datasets (which may not be paired with the thermal dataset) to obtain significant improvement in thermal object detection performance.
\section{Methodology} \label{method}
Our overall proposed methodology for `pseudo multi-modal' object detection for thermal images is summarized in Figure \ref{framework}. The key idea of our methodology is to borrow knowledge from data-rich domains such as visual (RGB) without the explicit need for a paired multimodal dataset. We achieve this objective by leveraging the success of recent image-to-image translation methods \cite{CycleGAN2017, UNIT} to automatically generate a pseudo-RGB image from a given thermal image, and then propose a multimodal Faster R-CNN architecture to achieve our objective. Image-to-Image translation models aim to learn the visual mapping between a source domain and target domain. Learning this mapping becomes challenging when there are no paired images in source and target domains. Recently, there have been noteworthy efforts on addressing this problem using unpaired images \cite{CycleGAN2017}\cite{dualgan2017}\cite{stargan2017}\cite{UNIT}\cite{taigman2017unsupervised}\cite{instagan}\cite{drit2018}. While one could use any unsupervised image-to-image translation framework in our overall methodology, we use CycleGAN\cite{CycleGAN2017} and UNIT\cite{UNIT} as I2I frameworks of choice in our work, owing to their wide use and popularity. We begin our discussion with the I2I translation frameworks used in this work.
\vspace{-1em}
\paragraph{Unpaired Image-to-Image Translation:} CycleGAN \cite{CycleGAN2017} is a popular unpaired image-to-image translation framework that aims to learn the mapping functions $F:\mathcal{X \rightarrow Y}$ and $G:\mathcal{Y \rightarrow X}$, where $\mathcal{X}$ and $\mathcal{Y}$ are the source and target domains respectively. It maps the images onto two separate latent spaces and employs two generators $\mathcal{G_{X \rightarrow Y}}, \mathcal{G_{Y \rightarrow X}}$ and two discriminators $\mathcal{D_X}, \mathcal{D_Y}$. The generator $\mathcal{G_{X \rightarrow Y}}$ attempts to generate images $\hat{y}_i$ that look similar to images from domain $\mathcal{Y}$, while $\mathcal{D_Y}$ aims to distinguish between the translated samples $\hat{y}_i$ and real samples $y_i$. This condition is enforced using an adversarial loss. To reduce the space of possible mapping functions, a cycle-consistency constraint is also enforced: a source-domain image $x_i$, when translated into the target domain ($\hat{y}_i$) and translated back into the source domain ($\hat{x}_i$), should satisfy $\hat{x}_i \approx x_i$. For more details, please see \cite{CycleGAN2017}.
Unlike CycleGAN \cite{CycleGAN2017}, UNIT \cite{UNIT} tackles the unpaired image-to-image translation problem assuming a shared latent space between both the domains. It learns the joint distribution of images in different domains using the marginal distribution in individual domains. The framework is based on variational autoencoders $\mathcal{\text{VAE}_{\text{1}}}, \mathcal{\text{VAE}_{\text{2}}}$ and generative adversarial networks $\mathcal{\text{GAN}_{\text{1}}}, \mathcal{\text{GAN}_{\text{2}}}$ with a total of six sub-networks including two image encoders $\mathcal{E}_1, \mathcal{E}_2$, two image generators $\mathcal{G}_1, \mathcal{G}_2$ and two adversarial discriminators $\mathcal{D}_1, \mathcal{D}_2$. Since they assume a shared latent space between the two domains, a weight sharing constraint is enforced to relate the two VAEs. Specifically, weight sharing is done between the last few layers of encoders $\mathcal{E}_1, \mathcal{E}_2$ that are responsible for higher level representations of the input images in the two domains and the first few layers of the image generators $\mathcal{G}_1, \mathcal{G}_2$ responsible for decoding the high-level representations for reconstructing the input images. The learning problems of $\mathcal{\text{VAE}_{\text{1}}}, \mathcal{\text{VAE}_{\text{2}}}, \mathcal{\text{GAN}_{\text{1}}}, \mathcal{\text{GAN}_{\text{2}}}$ for image reconstruction, image translation and cyclic reconstruction are jointly solved. For more information, please see \cite{UNIT}.
In case of both CycleGAN and UNIT, the trained model provides two generators which perform the translation between source and target domains. In our case, we use the generator which performs the Thermal-to-RGB translation, which is given by $G:\mathcal{X \rightarrow Y}$ in case of a CycleGAN and $G_1$ in case of UNIT (we used Thermal as the source domain, and RGB as the target domain while training these models). We refer to the parameters of these generators as $W_{T2R}$ in our methodology.
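To make the cycle-consistency constraint concrete, the following minimal Python sketch uses scalar stand-in generators (the names \texttt{G\_xy}, \texttt{G\_yx} and \texttt{cycle\_loss} are illustrative, not taken from the CycleGAN code release): the L1 round-trip penalty vanishes for a mutually inverse generator pair and grows for a lossy one.

```python
# Toy illustration of the cycle-consistency constraint used by CycleGAN.
# Real generators are CNNs; here they are stand-in scalar functions.

def l1(a, b):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_loss(G_xy, G_yx, batch_x):
    """L1 distance between x and its round trip G_yx(G_xy(x))."""
    reconstructed = [G_yx(G_xy(x)) for x in batch_x]
    return l1(batch_x, reconstructed)

# A perfectly invertible generator pair gives zero cycle loss ...
G_xy = lambda x: 2.0 * x + 1.0       # "thermal -> RGB"
G_yx = lambda y: (y - 1.0) / 2.0     # "RGB -> thermal"
print(cycle_loss(G_xy, G_yx, [0.0, 1.0, 2.0]))   # 0.0

# ... while a lossy pair is penalised, pushing the mapping to preserve content.
G_bad = lambda y: y / 2.0
print(cycle_loss(G_xy, G_bad, [0.0, 1.0, 2.0]))  # 0.5
```

The same penalty is applied symmetrically in the other direction ($y \rightarrow \hat{x} \rightarrow \hat{y}$) in the full framework.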
\begin{algorithm}[h]
\SetAlgoLined
\textbf{Input:} Thermal image training data: $\{(c_i, y_i)\}_{i=1}^m$; Generator of I2I framework: $W_{T2R}$; Pre-trained RGB base network: $W_{RGB}$; Pre-trained thermal base network: $W_{TIR}$, Pre-trained thermal top network $W_{top}$; Randomly initialised 1x1 conv weights: $W_{conv}$; Number of epochs: $num\_epochs$; Loss function: $\mathcal{L(.)}$\\
\textbf{Output:} Trained MMTOD model, $\mathcal{F(.)}$\\
\For{$num\_epochs$}{
\For{$c_{i}, i= 1, \cdots, m$}{
Generate a pseudo-RGB image $\hat{c}_i$ using $W_{T2R}$. \\
Generate feature maps by passing $c_{i}$ and $\hat{c}_{i}$ to base networks $W_{TIR}$ and $W_{RGB}$ respectively\\
Stack the feature maps and use $W_{conv}$ to get $1 \times 1$ conv output \\
Pass the 1x1 conv output to $W_{top}$ \\
Update weights: $W_{RGB}, W_{TIR}, W_{top}, W_{conv}, W_{T2R}$ by minimizing $\mathcal{L}$ of the object detection framework.
}
}
\caption{MMTOD: Multi-modal Thermal Object Detection Methodology}
\label{alg_mmtod}
\end{algorithm}
\paragraph{Pseudo Multi-modal Object Detection:} As shown in Figure \ref{framework}, our object detection framework is a multi-modal architecture consisting of two branches, one for the thermal image input and the other for the RGB input.
Each branch is initialized with a model pre-trained on images from that domain (specific details of implementation are discussed in Section \ref{exp_section}). To avoid the need for paired training examples from the two modalities while still using a multi-modal approach, we use an image-to-image (I2I) translation network in our framework. During the course of training, for every thermal image input, we generate a pseudo-RGB image using $W_{T2R}$ and pass the pseudo-RGB and thermal images to the input branches (parametrized by $W_{RGB}$ and $W_{TIR}$ respectively). Outputs from these branches are stacked and passed through a $1 \times 1$ convolution ($W_{conv}$) to learn to combine these features appropriately for the given task. The output of this $1 \times 1$ convolution is directly passed into the rest of the Faster-RCNN network (denoted by $W_{top}$). We use the same Region Proposal Network (RPN) loss as used in Faster-RCNN, given as follows:
\begin{equation*}
\label{eq:loss}
L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p^{*}_i) + \lambda \frac{1}{N_{reg}} \sum_i p^{*}_{i} L_{reg}(t_i, t^{*}_{i})
\end{equation*}
\noindent where $i$ is the index of an anchor, $p_i$ is the predicted probability of anchor $i$ being an object, $p^{*}_i$ is the ground-truth label, $t_i$ represents the coordinates of the predicted bounding box, $t^{*}_i$ represents the ground-truth bounding box coordinates, $L_{cls}$ is the log loss, $L_{reg}$ is the robust loss function (smooth L$_1$) as defined in \cite{fastrcnn}, and $\lambda$ is a hyperparameter balancing the two terms. We use the same multi-task classification and regression loss as used in Fast-RCNN \cite{fastrcnn} at the end of the network.
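As a minimal numeric sketch of this objective (treating each box parameter as a scalar for brevity, whereas in practice each $t_i$ is a 4-vector; function names are illustrative), the two terms can be computed as follows. The $p^{*}_i$ multiplier restricts the regression term to positive anchors, and $\lambda=10$ with $N_{cls}=256$, $N_{reg}\approx2400$ follows the defaults in \cite{fasterrcnn}.

```python
import math

def smooth_l1(x):
    """Robust regression loss from Fast R-CNN: quadratic near 0, linear beyond."""
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5

def log_loss(p, p_star):
    """Binary cross-entropy for the object / not-object anchor label."""
    return -(p_star * math.log(p) + (1 - p_star) * math.log(1 - p))

def rpn_loss(preds, lam=10.0, n_cls=256, n_reg=2400):
    """preds: list of (p_i, p_star_i, t_i, t_star_i); boxes as scalars here."""
    cls = sum(log_loss(p, ps) for p, ps, _, _ in preds) / n_cls
    # p_star zeroes out the regression term for negative anchors:
    reg = sum(ps * smooth_l1(t - ts) for _, ps, t, ts in preds) / n_reg
    return cls + lam * reg

# One positive anchor (slightly offset box) and one negative anchor:
anchors = [(0.9, 1, 0.1, 0.0), (0.2, 0, 0.0, 0.0)]
print(rpn_loss(anchors))
```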
While the use of existing I2I models allow easy adoption of the proposed methodology, the images generated from such I2I frameworks for thermal-to-RGB translation are perceptually far from natural RGB domain images (like MS-COCO\cite{coco} and PASCAL-VOC \cite{pascal-voc-2007}), as shown in Figure \ref{trans_examples}. Therefore, during the training phase of our multi-modal framework, in order to learn to combine the RGB and thermal features in a way that helps improve detection, we also update the weights of the I2I generator $W_{T2R}$. This helps learn a better representation of the pseudo-RGB image for borrowing relevant features from the RGB-domain, which we found to be key in improving detection in the thermal domain. The proposed methodology provides a fairly simple strategy to improve object detection in the thermal domain. We refer to the proposed methodology as MMTOD (Multimodal Thermal Object Detection) hereafter. Our algorithm for training is summarized in Algorithm \ref{alg_mmtod}. More details on the implementation of our methodology are provided in Section \ref{exp_section}.
\section{Experiments} \label{exp_section}
\subsection{Datasets and Experimental Setup}
\paragraph{Datasets:} We use the recently released FLIR ADAS \cite{flir} dataset and the KAIST Multispectral Pedestrian dataset \cite{kaist} for our experimental studies. FLIR ADAS \cite{flir} consists of a total of 9,214 images with bounding box annotations, where each image is of $640 \times 512$ resolution and is captured using a FLIR Tau2 camera. 60\% of the images are collected during the day and the remaining 40\% at night. While the dataset provides both RGB and thermal domain images (though not paired), we use only the thermal images from the dataset in our experiments (as required by our method). For all the experiments, we use the training and test splits as provided in the dataset benchmark, which contains the person (22,372 instances), car (41,260 instances), and bicycle (3,986 instances) categories. Some example images from the dataset are shown in Figure \ref{example_images}.
The KAIST Multispectral pedestrian benchmark dataset \cite{kaist} contains around 95,000 8-bit day and night images (consisting of only the Person class). These images are collected using a FLIR A35 microbolometer LWIR camera with a resolution of $320 \times 256$ pixels. The images are then upsampled to $640 \times 512$ in the dataset. Sample images from the dataset are shown in Figure \ref{example_images}. Though the KAIST dataset comes with fully aligned RGB and thermal images, we choose not to use the RGB images, as our goal is to improve detection in the absence of paired training data.
\begin{figure}[H]
\centering
\includegraphics[width=\hsize]{example_images.pdf}
\caption{\textit{Row 1} \& \textit{Row 2}: Example images from FLIR \cite{flir} ADAS dataset, \textit{Row 3}: Example Images from KAIST \cite{kaist} dataset} \label{example_images}
\vspace{-0.9em}
\end{figure}
Our methodology relies on using publicly available large-scale RGB datasets to improve thermal object detection performance. For this purpose, we use RGB datasets with the same classes as in the aforementioned thermal image datasets. In particular, we perform experiments using two popular RGB datasets namely, PASCAL VOC \cite{pascal-voc-2007} and MS-COCO \cite{coco}. In each experiment, we pre-train an object detector on either of these datasets and use these parameters to initialise the RGB branch of our multimodal framework. We also compare the performance of these two initializations in our experiments. In case of thermal image datasets, an end-to-end object detector is first trained on the dataset and used to initialize the thermal branch of our framework. We use mean Average Precision (mAP) as the performance metric, as is common for the object detection task.
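For reference, mAP at an IoU threshold of 0.5 (the setting used throughout our experiments) counts a detection as correct only when its overlap with a ground-truth box reaches 0.5. The intersection-over-union computation can be sketched as follows (an illustrative snippet, not taken from our evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

gt = (0, 0, 10, 10)
print(iou(gt, (5, 0, 15, 10)))  # 50/150 ~ 0.33 -> counted as a miss at IoU 0.5
print(iou(gt, (2, 0, 12, 10)))  # 80/120 ~ 0.67 -> counted as a hit
```

Average precision per class is then the area under the precision-recall curve obtained by sweeping the detection score threshold, and mAP is the mean over classes.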
\vspace{-4pt}
\paragraph{Baseline:} A Faster-RCNN trained in a fully supervised manner on the thermal images from the training set is used as the baseline method for the respective experiments in our studies.
We followed the original paper \cite{fasterrcnn} for all the hyperparameters, unless specified otherwise. The FLIR ADAS dataset \cite{flir} also provides a benchmark test mAP (at IoU of 0.5) of 0.58 using the more recent RefineDetect-512 \cite{refineDet} model. We show that we beat this benchmark using our improved multi-modal Faster-RCNN model.
\vspace{-4pt}
\paragraph{Image-to-Image Translation (IR-to-RGB):} For our experiments, we train two CycleGAN models: one for FLIR $\leftrightarrow$ RGB which uses thermal images from FLIR \cite{flir} and RGB images from PASCAL VOC \cite{pascal-voc-2007}, and another for KAIST $\leftrightarrow$ RGB which uses thermal images from KAIST \cite{kaist} and RGB images from PASCAL VOC \cite{pascal-voc-2007}. We use an initial learning rate of 1e-5 for the first 20 epochs, which is decayed to zero over the next 20 epochs. The weight of the identity mapping loss is set to zero, so only the adversarial and cycle-consistency (reconstruction) losses are used. The other hyperparameters of training are as described in \cite{CycleGAN2017}. For training of the UNIT framework, all the hyperparameters are used as stated in the original paper, without any alterations. Since UNIT takes a long time to train (7 to 8 days on an NVIDIA P-100 GPU), we trained it only for FLIR $\leftrightarrow$ RGB, so the experiments on KAIST are performed using CycleGAN only. Our variants are hence referred to as MMTOD-CG (when I2I is CycleGAN) and MMTOD-UNIT (when I2I is UNIT) in the remainder of the text.
We use the same metrics as mentioned in CycleGAN \cite{CycleGAN2017} and UNIT \cite{UNIT} papers for evaluating the quality of translation. In an attempt to improve the quality of generated images in CycleGAN \cite{CycleGAN2017}, we tried adding feature losses in addition to cycle consistency loss and adversarial loss. However, this did not improve the thermal to visual RGB translation performance. We hence chose to finally use the same loss as mentioned in \cite{CycleGAN2017}.
\paragraph{Training our Multi-modal Faster-RCNN:} Our overall architecture (as in Figure \ref{framework}) is initialized with pre-trained RGB and Thermal detectors as described in Section \ref{method}. Since our objective is to improve detection in the thermal domain, the region proposal network (RPN) is initialized with weights pre-trained on thermal images. The model is then trained on the same set of images on which the thermal detector was previously pre-trained. The I2I framework generates a pseudo-RGB image corresponding to the input thermal image. The thermal image and the corresponding pseudo-RGB image are passed through the branches of the multi-modal framework to obtain two 1024-dimensional feature maps, as shown in Figure \ref{framework}. These two feature maps are stacked back-to-back and passed through a $1 \times 1$ convolution, which is then passed as input to the Region Proposal Network (RPN). The RPN produces promising Regions of Interest (RoIs) that are likely to contain a foreground object. These regions are then cropped out of the feature map and passed into a classification layer which learns to classify the objects in each RoI. Note that as mentioned in Section \ref{method}, during the training of the MMTOD framework, the weights of the I2I framework are also updated, which allows it to learn a better representation of the translated image for improved object detection in the thermal domain. We adapted the Faster-RCNN code provided at \cite{jjfaster2rcnn} for our purpose. The code for the CycleGAN and UNIT was taken from their respective official code releases \cite{CycleGAN2017}\cite{isola2017image}\cite{UNIT}. Our code will be made publicly available for further clarifications.
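To make the fusion step concrete, the sketch below uses toy dimensions (2 channels per branch instead of 1024) and illustrative names: a $1 \times 1$ convolution over the channel-wise stacked feature maps is simply a learned, per-pixel linear mixing of the two branches, of which channel-wise averaging is one special case.

```python
# A 1x1 convolution over stacked feature maps is a per-pixel linear map
# across channels. Toy dimensions: 2 channels per branch instead of 1024.

def conv1x1(stacked, weights):
    """stacked: [C_in][H][W]; weights: [C_out][C_in]. Returns [C_out][H][W]."""
    c_in, h, w = len(stacked), len(stacked[0]), len(stacked[0][0])
    return [[[sum(weights[o][i] * stacked[i][y][x] for i in range(c_in))
              for x in range(w)] for y in range(h)]
            for o in range(len(weights))]

rgb_feat = [[[1.0, 2.0]], [[3.0, 4.0]]]   # 2 channels, 1x2 spatial extent
tir_feat = [[[5.0, 6.0]], [[7.0, 8.0]]]
stacked = rgb_feat + tir_feat             # channel-wise stack: 4 channels

# Averaging the branches channel-by-channel is one point in the space of
# fusions that the learned 1x1 conv weights W_conv can represent:
w_avg = [[0.5, 0.0, 0.5, 0.0],
         [0.0, 0.5, 0.0, 0.5]]
print(conv1x1(stacked, w_avg))  # [[[3.0, 4.0]], [[5.0, 6.0]]]
```

During training, the weights of this mixing are learned jointly with both branches, so the network can weight RGB-domain and thermal features per output channel rather than averaging them uniformly.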
\paragraph{Experimental Setup:} To evaluate the performance of the proposed multi-modal framework, the following experiments are carried out:
\vspace{-1em}
\begin{itemize}
\item MMTOD-CG with RGB branch initialized by PASCAL-VOC pre-trained detector, thermal branch initialized by FLIR ADAS pre-trained detector
\vspace{-0.85em}
\item MMTOD-UNIT with RGB branch initialized by PASCAL-VOC pre-trained detector, thermal branch initialized by FLIR ADAS pre-trained detector
\vspace{-0.85em}
\item MMTOD-CG with RGB branch initialized by MS-COCO pre-trained detector, thermal branch initialized by FLIR ADAS pre-trained detector
\vspace{-0.85em}
\item MMTOD-UNIT with RGB branch initialized by MS-COCO pre-trained detector, thermal branch initialized by FLIR ADAS pre-trained detector
\vspace{-0.85em}
\item MMTOD-CG with RGB branch initialized by PASCAL-VOC pre-trained detector, thermal branch initialized by KAIST pre-trained detector
\vspace{-0.85em}
\item MMTOD-CG with RGB branch initialized by COCO pre-trained detector, thermal branch initialized by KAIST pre-trained detector
\end{itemize}
\begin{figure*}
\begin{center}
\includegraphics[width=16.5cm, height=5cm]{results_flir.pdf}
\end{center}
\caption{Qualitative results of detections on the FLIR ADAS dataset. \textit{Row 1:} Baseline \textit{Row 2:} MMTOD}
\label{rresults_flir}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=16.5cm, height=5cm]{results_kaist.pdf}
\end{center}
\caption{Qualitative results of detections on the KAIST. \textit{Row 1:} Baseline. \textit{Row 2:} MMTOD}
\label{results_kaist}
\end{figure*}
\subsection{Results}
\paragraph{IR-to-RGB Translation Results:} Figure \ref{trans_examples} shows the results of CycleGAN and UNIT trained for Thermal $\leftrightarrow$ RGB translation. As mentioned in Section \ref{method}, the generated pseudo-RGB images are perceptually far from natural domain images. This can be attributed to the fact that the domain shift between the RGB and thermal domains is relatively high compared to other domain pairs. In addition, RGB images carry both chrominance and luminance information, while thermal images carry only the luminance part, which makes estimating the chrominance of the translated RGB images a difficult task. However, we show that, using our method, these generated images add value to the detection methodology.
\begin{figure}[H]
\centering
\includegraphics[width=\hsize]{trans_results.pdf}
\caption{\textit{Row 1:} Thermal images from FLIR ADAS\cite{flir} dataset; \textit{Row 2:} Translations generated using UNIT\cite{UNIT}; \textit{Row 3:} Translations generated using CycleGAN\cite{CycleGAN2017}.} \label{trans_examples}
\end{figure}
\paragraph{Thermal Object Detection Results:}
Tables \ref{tbl:flir_each_class_comparision} and \ref{tbl:kaist_person_comparision} show the comparison of AP for each class and the mAP of our framework against the baseline detector when trained on FLIR ADAS and KAIST datasets respectively. (Note that the KAIST dataset has only one class, the Person.) We observe that in all the experiments, our framework outperforms the baseline network across all the classes.
\begin{table}[H]
\tabcolsep=1.5pt
\begin{tabular}{cccccc}
\hline
& & \multicolumn{3}{c}{\textit{AP across each class}} & \\ \cline{3-5}
\multicolumn{1}{l}{\textit{Method}} & & \textit{Bicycle} & \textit{Person} & \textit{Car} & \textit{mAP} \\ \hline
\multicolumn{1}{l}{Baseline} & & 39.66 & 54.69 & 67.57 & 53.97 \\ \hline
\multicolumn{1}{c|}{Framework} & RGB Branch & & & & \\ \cline{1-2}
\multicolumn{1}{c|}{\multirow{2}{*}{MMTOD-UNIT}} & MSCOCO & 49.43 & \textbf{64.47} & \textbf{70.72} & \textbf{61.54} \\
\multicolumn{1}{c|}{} & Pascal VOC & 45.81 & 59.45 & 70.42 & 58.56 \\ \hline
\multicolumn{1}{c|}{\multirow{2}{*}{MMTOD-CG}} & MSCOCO & 50.26 & 63.31 & 70.63 & 61.40 \\
\multicolumn{1}{c|}{} & Pascal VOC & 43.96 & 57.51 & 69.85 & 57.11 \\ \hline
\end{tabular}
\caption{Performance comparison of proposed methodology against baseline on FLIR \cite{flir}}
\label{tbl:flir_each_class_comparision}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{lcc}
\hline
\textit{Method} & & \textit{mAP} \\ \hline
Baseline & & 49.39 \\ \hline
\multicolumn{1}{c|}{Framework} & RGB Branch & \\ \cline{1-2}
\multicolumn{1}{l|}{\multirow{2}{*}{MMTOD-CG}} & MS-COCO & \textbf{53.56} \\
\multicolumn{1}{l|}{} & Pascal VOC & 52.26 \\ \hline
\end{tabular}
\caption{Performance comparison of proposed methodology against baseline on KAIST \cite{kaist}}
\label{tbl:kaist_person_comparision}
\end{table}
In the case of FLIR, we observe that initializing the RGB branch with MS-COCO obtains better results than initializing with PASCAL-VOC. This can be attributed to the fact that MS-COCO has more instances of car, bicycle, and person as compared to PASCAL VOC. Also, experimental results show that employing UNIT as the I2I framework achieves better performance than CycleGAN. Our framework with MS-COCO initialization and UNIT for I2I translation results in an increase in mAP by at least 7 points. In particular, as mentioned earlier, the FLIR ADAS dataset provides a benchmark test mAP (at IoU of 0.5) of 0.58 using the more recent RefineDetect-512 \cite{refineDet} model. Our method outperforms this benchmark despite using a relatively older object detection model such as the Faster-RCNN.
As shown in Table \ref{tbl:kaist_person_comparision}, our improved performance on the KAIST dataset shows that although this dataset has more examples of the `Person' category than the RGB datasets used, such as PASCAL-VOC, our framework still improves upon the performance of the baseline method. This allows us to infer that the proposed framework can be used in tandem with any region-CNN based object detection method to improve the performance of object detection in thermal images. On average, our framework takes 0.11s to make detections on a single image, while the baseline framework takes 0.08s. Our future directions of work include improving the efficiency of our framework while extending the methodology to other object detection pipelines such as YOLO and SSD.
\section{Discussion and Ablation Studies}
\label{discussion}
\paragraph{Learning with limited examples:} We also conducted studies to understand the capability of our methodology when there are limited samples in the thermal domain. Our experiments on the FLIR ADAS dataset showed that our framework surpasses the current state-of-the-art detection performance using only half the training examples. Moreover, our experiments show that using only a quarter of the training examples, our framework outperforms the baseline trained on the full training set. Table \ref{tbl:stats_datasets} presents the statistics of the datasets used for this experiment. Note that the test set used in these experiments is still the same as originally provided in the dataset.
\begin{table}[H]
\centering
\begin{tabular}{@{}llll@{}}
\toprule
& \multicolumn{3}{c}{Number of Instances} \\ \cmidrule(l){2-4}
Dataset & Car & Person & Bicycle \\ \midrule
FLIR & 41,260 & 22,372 & 3,986 \\
FLIR (1/2) & 20,708 & 11,365 & 2,709 \\
FLIR (1/4) & 10,448 & 5,863 & 974 \\ \bottomrule
\end{tabular}
\caption{Statistics of the datasets we used for our experiments.}
\label{tbl:stats_datasets}
\vspace{-1em}
\end{table}
We perform the same set of experiments (as discussed in Section \ref{exp_section}) on FLIR(1/2) and FLIR(1/4) datasets. Tables \ref{tbl:flir_half_each_class_comparision} and \ref{tbl:flir_quarter_each_class_comparision} present the results.
\begin{table}[H]
\resizebox{0.48\textwidth}{!}{%
\tabcolsep=1.5pt
\begin{tabular}{cccccc}
\hline
& & \multicolumn{3}{c}{\textit{AP across each class}} & \\ \cline{3-5}
\multicolumn{1}{l}{\textit{Method}} & & \textit{Bicycle} & \textit{Person} & \textit{Car} & \textit{mAP} \\ \hline
\multicolumn{1}{l}{Baseline (FLIR)} & & 39.66 & 54.69 & 67.57 & 53.97 \\ \hline
\multicolumn{1}{l}{Baseline (FLIR-1/2)} & & 34.41 & 51.88 & 65.04 & 50.44 \\ \hline
\multicolumn{1}{c|}{Framework} & RGB Branch & & & & \\ \cline{1-2}
\multicolumn{1}{c|}{\multirow{2}{*}{MMTOD-UNIT}} & MSCOCO & 49.84 & \textbf{59.76} & \textbf{70.14} & \textbf{59.91} \\
\multicolumn{1}{c|}{} & Pascal VOC & 45.53 & 57.77 & 69.86 & 57.72 \\ \hline
\multicolumn{1}{c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}MMTOD-CG\end{tabular}}} & MSCOCO & 50.19 & 58.08 & 69.77 & 59.35 \\
\multicolumn{1}{c|}{} & Pascal VOC & 40.17 & 54.67 & 67.62 & 54.15 \\ \hline
\end{tabular}}
\caption{Performance comparison of proposed methodology against baseline on FLIR (1/2)}
\label{tbl:flir_half_each_class_comparision}
\end{table}
\begin{table}[H]
\resizebox{0.48\textwidth}{!}{%
\tabcolsep=1.5pt
\begin{tabular}{cccccc}
\hline
& & \multicolumn{3}{c}{\textit{AP across each class}} & \\ \cline{3-5}
\multicolumn{1}{l}{\textit{Method}} & & \textit{Bicycle} & \textit{Person} & \textit{Car} & \textit{mAP} \\ \hline
\multicolumn{1}{l}{Baseline(FLIR)} & & 39.66 & 54.69 & 67.57 & 53.97 \\ \hline
\multicolumn{1}{l}{Baseline(FLIR-1/4)} & & 33.35 & 49.18 & 60.84 & 47.79 \\ \hline
\multicolumn{1}{c|}{Framework} & RGB Branch & & & & \\ \cline{1-2}
\multicolumn{1}{c|}{\multirow{2}{*}{MMTOD-UNIT}} & MSCOCO & \textbf{44.24} & \textbf{57.76} & \textbf{69.77} & \textbf{57.26} \\
\multicolumn{1}{c|}{} & Pascal VOC & 35.23 & 54.71 & 67.83 & 52.59 \\ \hline
\multicolumn{1}{c|}{\multirow{2}{*}{MMTOD-CG}} & MSCOCO & 41.29 & 57.08 & 69.10 & 55.82 \\
\multicolumn{1}{c|}{} & Pascal VOC & 35.02 & 51.62 & 66.09 & 50.91 \\ \hline
\end{tabular}}
\caption{Performance comparison of proposed methodology against baseline on FLIR (1/4)}
\label{tbl:flir_quarter_each_class_comparision}
\end{table}
\vspace{-1em}
Table \ref{tbl:flir_half_each_class_comparision} shows the baselines for training the Faster-RCNN on the complete FLIR training dataset as well as FLIR (1/2). We observe that both MMTOD-UNIT and MMTOD-CG trained on FLIR(1/2) outperform both the baselines, even when Faster-RCNN is trained on the entire training set.
Similarly, Table \ref{tbl:flir_quarter_each_class_comparision} shows the baselines for training the Faster-RCNN on the complete FLIR training dataset as well as FLIR (1/4). Once again, we observe that both MMTOD-UNIT and MMTOD-CG trained on FLIR(1/4) outperform both the baselines, even when Faster-RCNN is trained on the entire training set. In other words, the MMTOD framework requires only a quarter of the thermal training set to surpass the baseline accuracy achieved using the full training set. The results clearly demonstrate the proposed framework's ability to learn from fewer examples. This shows that our framework effectively borrows features from the RGB domain that help improve detection in the thermal domain. This is especially useful in the context of thermal and IR images, where there is a dearth of publicly available large-scale datasets.
\vspace{-1em}
\paragraph{Effect of Image Resolution:} To understand the effect of image resolution on object detection performance, we repeated the above experiments using subsampled images of the FLIR ADAS dataset. Table \ref{tbl:flir_400_exps} presents these results for $400 \times 400$ input images. We observe that our multi-modal framework improves the object detection performance significantly even in this case. Our future work will involve extending our method to images of even lower resolutions.
\begin{table}[H]
\resizebox{0.48\textwidth}{!}{%
\tabcolsep=1.5pt
\begin{tabular}{cccccc}
\hline
& & \multicolumn{3}{c}{\textit{AP across each class}} & \\ \cline{3-5}
\textit{Dataset} & \textit{Method} & \textit{Bicycle} & \textit{Person} & \textit{Car} & \textit{mAP} \\ \hline
\multicolumn{1}{c|}{\multirow{2}{*}{FLIR}} & Baseline & 29.25 & 43.13 & 58.83 & 43.74 \\
\multicolumn{1}{c|}{} & P-VOC + CycleGAN & \textbf{39.42} & \textbf{52.75} & \textbf{62.05} & \textbf{51.41} \\ \cline{2-6}
\multicolumn{1}{c|}{\multirow{2}{*}{FLIR (1/2)}} & Baseline & 23.31 & 40.82 & 56.25 & 40.13 \\
\multicolumn{1}{c|}{} & P-VOC + CycleGAN & \textbf{33.32} & \textbf{48.32} & \textbf{60.87} & \textbf{47.50} \\ \cline{2-6}
\multicolumn{1}{c|}{\multirow{2}{*}{FLIR (1/4)}} & Baseline & 18.81 & 35.42 & 52.82 & 35.68 \\
\multicolumn{1}{c|}{} & P-VOC + CycleGAN & \textbf{30.63} & \textbf{45.45} & \textbf{60.32} & \textbf{45.47} \\ \hline
\end{tabular}}
\caption{Performance comparison of proposed methodology against baseline on FLIR $400 \times 400$ images}
\label{tbl:flir_400_exps}
\end{table}
\vspace{-2em}
\paragraph{Missed Detections:} We tried to analyze the failure cases of the proposed methodology by studying the missed detections. Some examples of these missed detections are shown in Figure \ref{missed_det}. We infer that MMTOD finds object detection challenging when: (i) the objects are very small and located far from the camera; (ii) two objects are close to each other and are detected as a single object; and (iii) there is heavy occlusion or crowding. Our future efforts will focus on addressing these challenges.
\begin{figure}[H]
\centering
\includegraphics[width=\hsize, height=5cm]{failures.pdf}
\caption{Some examples of missed detections, \textit{Red}: Predictions using MMTOD, \textit{Green}: Ground Truth} \label{missed_det}
\end{figure}
\vspace{-2em}
\section{Conclusion}
We propose a novel multi-modal framework to extend and improve upon any Region-CNN-based object detector in the thermal domain by borrowing features from the RGB domain, without the need for paired training examples. We evaluate the performance of our framework applied to a Faster-RCNN architecture in various settings, including the FLIR ADAS and KAIST datasets. We demonstrate that our framework achieves better performance than the baseline, even when trained on only a quarter of the thermal dataset. The results suggest that our framework provides a simple and straightforward strategy to improve the performance of object detection in thermal images.
\section*{Acknowledgements}
This work was carried out as part of a CARS project supported by ANURAG, Defence Research and Development Organisation (DRDO), Government of India.
\newpage
\clearpage
{\small
\bibliographystyle{ieee}
In 1968, Rentschler \cite{Rentschler1968} established in a pioneering
work that every algebraic action of the additive group $\mathbb{G}_{a}=\mathbb{G}_{a,\mathbb{C}}$
on the complex affine space $\mathbb{A}^{2}$ is triangular in a suitable
polynomial coordinate system. Consequently, every set-theoretically
free $\mathbb{G}_{a}$-action is a translation, in the sense that
$\mathbb{A}^{2}$ is equivariantly isomorphic to $\mathbb{A}^{1}\times\mathbb{A}^{1}$
where $\mathbb{G}_{a}$ acts by translations on the second factor.
An example due to Bass \cite{Bass1984} in 1984 shows that in higher
dimensions, $\mathbb{G}_{a}$-actions are no longer triangulable in
general, and Winkelmann \cite{Winkelmann1990} constructed in 1990
a set-theoretically free $\mathbb{G}_{a}$-action on $\mathbb{A}^{4}$
which is not a translation. The question about set-theoretically free
$\mathbb{G}_{a}$-actions on $\mathbb{A}^{3}$ was eventually settled
affirmatively first by Deveney and the second author \cite{Deveney1994a}
in 1994 under the additional assumption that the action is proper
and then in general by Kaliman \cite{Kaliman2004a} in 2004.
For proper actions, the argument turns out to be much simpler than
the general one, the crucial fact being that combined with the flatness
of the algebraic quotient morphism $\pi:\mathbb{A}^{3}\rightarrow\mathbb{A}^{3}/\!/\mathbb{G}_{a}={\rm Spec}(\Gamma(\mathbb{A}^{3},\mathcal{O}_{\mathbb{A}^{3}})^{\mathbb{G}_{a}})$
which is obtained from dimension considerations, properness implies
that the action is locally trivial in the Zariski topology, i.e. that
$\mathbb{A}^{3}$ is covered by invariant Zariski affine open subsets
of the form $V_{i}=U_{i}\times\mathbb{A}^{1}$ on which $\mathbb{G}_{a}$
acts by translations on the second factor. The factoriality of $\mathbb{A}^{3}$
implies in turn that a geometric quotient $\mathbb{A}^{3}/\mathbb{G}_{a}$
exists as a quasi-affine open subset of $\mathbb{A}^{3}/\!/\mathbb{G}_{a}\simeq\mathbb{A}^{2}$
with at most finite complement, and the equality $\mathbb{A}^{3}/\mathbb{G}_{a}=\mathbb{A}^{3}/\!/\mathbb{G}_{a}$
ultimately follows by comparing Euler characteristics.
Local triviality in the Zariski topology is actually a built-in property
of proper $\mathbb{G}_{a}$-actions on smooth algebraic varieties
of dimension less than four. Indeed, recall that an action $\mu:\mathbb{G}_{a}\times X\rightarrow X$
on an algebraic variety $X$ is said to be proper if the morphism
$\mu\times\mathrm{pr}_{2}:\mathbb{G}_{a}\times X\rightarrow X\times X$
is proper, in this context in fact a closed immersion since $\mathbb{G}_{a}$
has no nontrivial algebraic subgroup. Being in particular set-theoretically
free, such an action is then locally trivial in the \'etale topology,
i.e., there exists an \'etale covering $U\times\mathbb{G}_{a}\rightarrow X$
of $X$ which is equivariant for the action of $\mathbb{G}_{a}$ on
$U\times\mathbb{G}_{a}$ by translations on the second factor. This
implies that a geometric quotient exists in the category of algebraic
spaces in the form of an \'etale locally trivial $\mathbb{G}_{a}$-bundle
$\rho:X\rightarrow X/\mathbb{G}_{a}$ over a certain algebraic space
$X/\mathbb{G}_{a}$, the properness of $\mu$ being then equivalent
to the separatedness of $X/\mathbb{G}_{a}$ (see e.g. \cite{Popp1977}).
Now if $X$ is smooth of dimension at most three, then $X/\mathbb{G}_{a}$
is a smooth separated algebraic space of dimension at most two whence
a quasi-projective variety by virtue of Chow's Lemma. Since $\mathbb{G}_{a}$
is a special group, the $\mathbb{G}_{a}$-bundle $\rho:X\rightarrow X/\mathbb{G}_{a}$
is then in fact locally trivial in the Zariski topology on $X/\mathbb{G}_{a}$
which yields the Zariski local triviality of the $\mathbb{G}_{a}$-action
on $X$.
For $\mathbb{G}_{a}$-actions on higher dimensional affine spaces,
properness fails in general to imply Zariski local triviality and
Zariski local triviality is no longer sufficient to guarantee that
a proper $\mathbb{G}_{a}$-action is a translation. In particular,
starting from dimension $5$, there exist proper triangular $\mathbb{G}_{a}$-actions
which are not Zariski locally trivial \cite{Deveney1995} and proper
triangular, Zariski locally trivial actions with strictly quasi-affine
geometric quotients \cite{Winkelmann1990}. But the question whether
a proper $\mathbb{G}_{a}$-action on $\mathbb{A}^{4}$ is a translation
or at least Zariski locally trivial remains open and very little progress
has been made in the study of these actions during the last decade.
The only existing partial results so far concern triangular $\mathbb{G}_{a}$-actions:
Deveney, van Rossum and the second author \cite{Deveney2004} established
in 2004 that a Zariski locally trivial triangular $\mathbb{G}_{a}$-action
on $\mathbb{A}^{4}$ is in fact a translation. The proof depends on
the very particular structure of the ring of invariants for such actions
and hence cannot be adapted to more general actions. The second positive
result concerns a special type of triangular $\mathbb{G}_{a}$-actions
called \emph{twin-triangular,} corresponding to locally nilpotent
derivations of $\mathbb{C}[x,y,z_{1},z_{2}]$ of the form $\partial=r(x)\partial_{y}+p_{1}(x,y)\partial_{z_{1}}+p_{2}(x,y)\partial_{z_{2}}$
where $r(x)\in\mathbb{C}\left[x\right]$ and $p_{1}(x,y),p_{2}(x,y)\in\mathbb{C}[x,y]$.
It was established by Deveney and the second author \cite{Deveney2002}
that a proper twin-triangular $\mathbb{G}_{a}$-action corresponding
to a derivation for which the polynomial $r(x)$ has simple roots
is a translation. This was accomplished by explicitly computing the
invariant ring $\mathbb{C}[x,y,z_{1},z_{2}]^{\mathbb{G}_{a}}$ and
investigating the structure of the algebraic quotient morphism $\mathbb{A}^{4}\rightarrow\mathbb{A}^{4}/\!/\mathbb{G}_{a}=\mathrm{Spec}(\mathbb{C}[x,y,z_{1},z_{2}]^{\mathbb{G}_{a}})$.
While a result of Daigle and Freudenburg \cite{Daigle1998} gives
finite generation of $\mathbb{C}[x,y,z_{1},z_{2}]^{\mathbb{G}_{a}}$
for arbitrary triangular $\mathbb{G}_{a}$-actions, there is no a
priori bound on the number of its generators, and the simplicity of
the roots of $r(x)$ was crucial to achieve the computation of these
rings.
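Local nilpotency of a twin-triangular derivation is elementary but instructive to verify symbolically. The sketch below is illustrative only: it takes $r(x)=x^{2}$ (a multiple root, so outside the case treated in \cite{Deveney2002}), $p_{1}=y$ and $p_{2}=y^{2}$, choices that are assumptions for the example rather than data from the text, and computes for each generator the least $k$ with $\partial^{k}$ vanishing on it.

```python
import sympy as sp

x, y, z1, z2 = sp.symbols('x y z1 z2')

# Sample twin-triangular derivation D = x**2 d/dy + y d/dz1 + y**2 d/dz2
# (hypothetical data chosen for illustration, not taken from the paper)
def D(f):
    return sp.expand(x**2*sp.diff(f, y) + y*sp.diff(f, z1) + y**2*sp.diff(f, z2))

def nilpotency_index(f, bound=20):
    # smallest k with D^k(f) = 0, witnessing local nilpotency on f
    k = 0
    while sp.expand(f) != 0:
        f, k = D(f), k + 1
        assert k <= bound
    return k

print([nilpotency_index(g) for g in (x, y, z1, z2)])  # → [1, 2, 3, 4]
```

The indices grow along the triangular structure $x\mapsto y\mapsto z_{1},z_{2}$, which is exactly why every polynomial is annihilated by a finite power of $\partial$.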
Here we consider the more general case of twin-triangular actions
of $\mathbb{G}_{a}=\mathbb{G}_{a,X}=\mathbb{G}_{a,\mathbb{C}}\times_{{\rm Spec}(\mathbb{C})}X$
on an affine space $\mathbb{A}_{X}^{3}$ over the spectrum $X$ of
a complex Dedekind domain $A$. Removing in particular the condition
on simplicity of the roots of $r$, we show that a proper $\mathbb{G}_{a}$-action
on $\mathbb{A}_{X}^{3}$ generated by an $A$-derivation of $A[y,z_{1},z_{2}]$
of the form $\partial=r\partial_{y}+p_{1}(y)\partial_{z_{1}}+p_{2}(y)\partial_{z_{2}}$,
$r\in A$, $p_{1},p_{2}\in A[y]$ is a translation, i.e. the geometric
quotient $\mathbb{A}_{X}^{3}/\mathbb{G}_{a}$ is $X$-isomorphic to
$\mathbb{A}_{X}^{2}$ and $\mathbb{A}_{X}^{3}$ is equivariantly isomorphic
to $\mathbb{A}_{X}^{3}/\mathbb{G}_{a}\times_{X}\mathbb{G}_{a}$ where
$\mathbb{G}_{a}$ acts by translations on the second factor. Even
though finite generation of the rings of invariants for triangular
$A$-derivations of $A[y,z_{1},z_{2}]$ holds in this more general
setting thanks to the aforementioned result of Daigle and Freudenburg,
our approach avoids the computation of these rings and focuses more
on the nature of the geometric quotients $\mathbb{A}_{X}^{3}/\mathbb{G}_{a}$.
As noted before, these quotients a priori exist only as separated
algebraic spaces and the crucial step is to show that for the actions
under consideration they are in fact schemes, or, equivalently that
proper twin-triangular $\mathbb{G}_{a}$-actions on $\mathbb{A}_{X}^{3}$
are not only locally trivial in the \'etale topology but also in
the Zariski topology. Indeed, if so then a straightforward generalization
of the aforementioned result of Deveney, van Rossum and the second
author shows that such Zariski locally trivial triangular $\mathbb{G}_{a}$-actions
are in fact translations.
To explain the main idea of our proof, let us assume for simplicity
that $A=\mathbb{C}\left[x\right]_{(x)}$ and consider a triangular
$A$-derivation $\partial=x^{n}\partial_{y}+p_{1}(y)\partial_{z_{1}}+p_{2}(y,z_{1})\partial_{z_{2}}$
of $A[y,z_{1},z_{2}]$ generating a proper action on $\mathbb{A}_{X}^{3}$
that we denote by $\mathbb{G}_{a,\partial}$. Being triangular, the
action of $\mathbb{G}_{a,\partial}$ commutes with that of $\mathbb{G}_{a,\partial_{z_{2}}}$
defined by the partial derivative $\partial_{z_{2}}$ and descends
to an action on $\mathbb{A}_{X}^{2}=\mathrm{Spec}(A[y,z_{1}])\simeq\mathbb{A}_{X}^{3}/\mathbb{G}_{a,\partial_{z_{2}}}$
corresponding with that generated by the derivation $x^{n}\partial_{y}+p_{1}(y)\partial_{z_{1}}$.
Similarly, the action of $\mathbb{G}_{a,\partial_{z_{2}}}$ on $\mathbb{A}_{X}^{3}$
descends to the geometric quotient $\mathbb{A}_{X}^{3}/\mathbb{G}_{a,\partial}$.
These induced actions are in general no longer set-theoretically
free but if we take the quotient of $\mathbb{A}_{X}^{2}$ by $\mathbb{G}_{a,\partial}$
as an algebraic stack $[\mathbb{A}_{X}^{2}/\mathbb{G}_{a,\partial}]$
we obtain a cartesian square \[\xymatrix{ \mathbb{A}^3_X \ar[d]_{\mathrm{pr}_{y,z_1}} \ar[r] & \mathbb{A}^3_X/\mathbb{G}_{a,\partial} \ar[d] \\ \mathbb{A}^2_X \ar[r] & [\mathbb{A}^2_X/\mathbb{G}_{a,\partial}] }\]
which identifies $[\mathbb{A}_{X}^{2}/\mathbb{G}_{a,\partial}]$ with
the algebraic stack quotient $[(\mathbb{A}_{X}^{3}/\mathbb{G}_{a,\partial})/\mathbb{G}_{a,\partial_{z_{2}}}]$.
In this setting, the Zariski local triviality of a proper triangular
$\mathbb{G}_{a}$-action on $\mathbb{A}_{X}^{3}$ becomes equivalent
to the statement that a separated algebraic $X$-space $V$ admitting
a $\mathbb{G}_{a}$-action with algebraic stack quotient $[V/\mathbb{G}_{a}]$
isomorphic to that of a triangular $\mathbb{G}_{a}$-action on $\mathbb{A}_{X}^{2}$
is in fact a scheme. While a direct proof (or disproof) of this equivalent
characterization seems totally out of reach with existing methods,
we establish that it holds at least over suitable $\mathbb{G}_{a,\partial}$-invariant
principal open subsets $U_{1}$ of $\mathbb{A}_{X}^{2}=\mathrm{Spec}(A[y,z_{1}])$
faithfully flat over $X$ and whose algebraic stack quotients $[U_{1}/\mathbb{G}_{a,\partial}]$
are in fact represented by locally separated algebraic spaces $U_{1}/\mathbb{G}_{a,\partial}$.
So this provides at least a $\mathbb{G}_{a,\partial}$-invariant principal
open subset $V_{1}=\mathrm{pr}_{y,z_{1}}^{-1}(U_{1})\simeq U_{1}\times\mathrm{Spec}(\mathbb{C}[z_{2}])$
of $\mathbb{A}_{X}^{3}$, faithfully flat over $X$, and for which
the Zariski open sub-space $V_{1}/\mathbb{G}_{a,\partial}$ of $\mathbb{A}_{X}^{3}/\mathbb{G}_{a,\partial}$
is a scheme.
This is where twin-triangularity enters the argument: indeed for such
actions the symmetry between the variables $z_{1}$ and $z_{2}$ enables
the same construction with respect to the other projection $\mathrm{pr}_{y,z_{2}}:\mathbb{A}_{X}^{3}\rightarrow\mathbb{A}_{X}^{2}=\mathrm{Spec}(A[y,z_{2}])$
providing a second Zariski open sub-scheme $V_{2}/\mathbb{G}_{a,\partial}$
of $\mathbb{A}_{X}^{3}/\mathbb{G}_{a,\partial}$ faithfully flat over
$X$. Since the action of $\mathbb{G}_{a,\partial}$ is by definition
equivariantly trivial over the complement of the closed point $0$
of $X$, its local triviality in the Zariski topology follows provided
that the invariant affine open subsets $V_{1}$ and $V_{2}$ can be
chosen so that their union covers the closed fiber of $\mathrm{pr}_{X}:\mathbb{A}_{X}^{3}\rightarrow X$.
\\
With this general strategy in mind, the structure of the proof is fairly
streamlined. In the first section, we describe algebraic spaces that
arise as geometric quotients of certain affine open subsets $U$ of
an affine plane $\mathbb{A}_{X}^{2}$ over a Dedekind domain equipped
with a triangular $\mathbb{G}_{a}$-action. Then we establish the
crucial property that for such affine open subsets $U$, a proper
lift to $U\times\mathbb{A}^{1}$ of the induced $\mathbb{G}_{a}$-action
on $U$ is equivariantly trivial with affine geometric quotient. This
criterion is applied in the second section to deduce that proper twin-triangular
$\mathbb{G}_{a}$-actions on an affine $3$-space $\mathbb{A}_{X}^{3}$
over a complex Dedekind domain are locally trivial in the Zariski
topology.
\section{Preliminaries on triangular $\mathbb{G}_{a}$-actions on an affine
plane over a Dedekind domain }
This section is devoted to the study of certain algebraic spaces that
arise as geometric quotients for triangular $\mathbb{G}_{a}$-actions
on suitably chosen invariant open subsets in $\mathbb{A}_{X}^{2}$.
\begin{parn} As a motivation for what follows, consider a $\mathbb{G}_{a}$-action
on $\mathbb{A}^{3}=\mathbb{A}^{1}\times\mathbb{A}^{2}={\rm Spec}(\mathbb{C}[x][y,z])$
generated by a triangular derivation $\partial=x^{n}\partial_{y}+p(y)\partial_{z}$
of $\mathbb{C}[x,y,z]$, where $n\geq1$ and where $p(y)\in\mathbb{C}[y]$
is a non-constant polynomial. Letting $P(y)\in\mathbb{C}[y]$ be an
integral of $p$, the polynomials $x$ and $t=-x^{n}z+P(y)$ generate
the algebra of invariants $\mathbb{C}[x,y,z]^{\mathbb{G}_{a}}={\rm Ker}\partial$.
Corresponding to the fact that $y/x^{n}$ is a slice for $\partial$
on the principal invariant open subset $\{x\neq0\}$ of $\mathbb{A}^{3}$,
the quotient morphism $q:\mathbb{A}^{3}\rightarrow\mathbb{A}^{3}/\!/\mathbb{G}_{a}={\rm Spec}\left(\mathbb{C}\left[x\right][t]\right)$
restricts to a trivial principal $\mathbb{G}_{a}$-bundle over the
open subset $\left\{ x\neq0\right\} $ of $\mathbb{A}^{3}/\!/\mathbb{G}_{a}$.
In contrast, the set-theoretic fiber of $q$ over a point $(0,t_{0})\in\mathbb{A}^{3}/\!/\mathbb{G}_{a}$
consists of a disjoint union of affine lines in bijection with the
roots of $P(y)-t_{0}$, each simple root corresponding in particular
to an orbit of the action. Thus $\mathbb{A}^{3}/\!/\mathbb{G}_{a}$
is in general far from being even a set-theoretic orbit space for
the action. However, the observation that the inverse image by $q$
of the line $L_{0}=\left\{ x=0\right\} \subset\mathbb{A}^{3}/\!/\mathbb{G}_{a}$
is equivariantly isomorphic to the product $L_{1}\times\mathbb{A}^{1}={\rm Spec}(\mathbb{C}[y][z])$
on which $\mathbb{G}_{a}$ acts via the twisted translation generated
by the derivation $p(y)\partial_{z}$ of $\mathbb{C}[y,z]$ suggests
that a better geometric object approximating an orbit space for the
action should be obtained from $\mathbb{A}^{3}/\!/\mathbb{G}_{a}$
by replacing the line $L_{0}$ by $L_{1}$, considered as the total space
of the finite cover $h_{0}:L_{1}\rightarrow L_{0}$, $y\mapsto t=P(y)$.
On the other hand, on every invariant open subset $V$ of $\mathbb{A}^{3}$
on which the action restricts to a set-theoretically free $\mathbb{G}_{a}$-action,
a geometric quotient $\rho:V\rightarrow V/\mathbb{G}_{a}$ exists
in the form of an \'etale locally trivial $\mathbb{G}_{a}$-bundle over
an algebraic space $V/\mathbb{G}_{a}$. By definition of $\partial$,
the fixed points of the $\mathbb{G}_{a}$-action are supported on
the disjoint union of lines $\left\{ x=p(y)=0\right\} $. Therefore,
letting $C_{0}\subset L_{0}={\rm Spec}(\mathbb{C}[t])$ be the complement
of the branch locus of $h_{0}$ and considering $\mathbb{A}^{1}\times C_{0}$
as an open subset of $\mathbb{A}^{3}/\!/\mathbb{G}_{a}$, a geometric
quotient exists on the open subset $V=q^{-1}(\mathbb{A}^{1}\times C_{0})$
of $\mathbb{A}^{3}$. In view of the previous discussion, the algebraic
quotient morphism $q\mid_{V}:V\rightarrow V/\!/\mathbb{G}_{a}\simeq\mathbb{A}^{1}\times C_{0}\subset\mathbb{A}^{3}/\!/\mathbb{G}_{a}$
should thus factor through a $\mathbb{G}_{a}$-bundle $\rho:V\rightarrow V/\mathbb{G}_{a}$
over an algebraic space $V/\mathbb{G}_{a}$ obtained from $\mathbb{A}^{1}\times C_{0}$
by replacing the curve $\left\{ 0\right\} \times C_{0}\simeq C_{0}$
by the finite \'etale cover $h_{0}:C_{1}=h_{0}^{-1}(C_{0})\rightarrow C_{0}$
of itself.
\end{parn}
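The invariance of $x$ and $t=-x^{n}z+P(y)$ under $\partial=x^{n}\partial_{y}+p(y)\partial_{z}$, and the fact that $y/x^{n}$ is a slice over $\{x\neq0\}$, can be checked by a direct symbolic computation. The sketch below uses the sample data $n=2$ and $P(y)=y^{3}+y$; these are illustrative assumptions, any $n\geq1$ and non-constant $P$ behave the same way.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
n = 2
P = y**3 + y              # sample integral; p(y) = P'(y) = 3*y**2 + 1
p = sp.diff(P, y)

def D(f):
    # triangular derivation D = x**n d/dy + p(y) d/dz
    return sp.expand(x**n*sp.diff(f, y) + p*sp.diff(f, z))

t = -x**n*z + P
print(D(x), D(t))         # → 0 0 : x and t are invariants
print(D(y/x**n))          # → 1 : y/x**n is a slice where x != 0
```

The cancellation $D(t)=x^{n}P'(y)-p(y)x^{n}=0$ holds identically, which is the computational content of the claim that $x$ and $t$ lie in ${\rm Ker}\,\partial$.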
In what follows, to give precise sense to the above intuitive interpretation,
we review the construction of a particular type of algebraic space
$\mathfrak{S}$ obtained from a surface by ``replacing a curve by
a finite \'etale cover of itself'' and we check that these spaces
do indeed arise as geometric quotients for $\mathbb{G}_{a}$-actions
on certain affine threefolds. Then, conversely, we characterize effectively
which \'etale locally trivial $\mathbb{G}_{a}$-bundles $\rho:V\rightarrow\mathfrak{S}$
over such spaces have an affine total space.
\subsection{Algebraic space surfaces with an irreducible $r$-fold curve }
\indent\newline\noindent Given a smooth affine curve $X={\rm Spec}(A)$,
a closed point $o\in X$ and a finite \'etale morphism $h_{0}:C_{1}={\rm Spec}(R_{1})\rightarrow C_{0}={\rm Spec}(R_{0})$
between smooth connected affine curves, our aim is to construct an
algebraic space $\mathfrak{S}=\mathfrak{S}(X,o,h_{0})$ which looks
like $X\times C_{0}$ but with the special curve $\left\{ o\right\} \times C_{0}\simeq C_{0}$
replaced by $C_{1}$. To obtain such an $\mathfrak{S}$, one can simply
define it as the quotient of $X\times C_{1}$ by the \'etale equivalence
relation $(x,c_{1})\sim(x',c_{1}')\Leftrightarrow(x=x'\neq o\textrm{ and }h_{0}(c_{1})=h_{0}(c_{1}'))$.
More formally, letting $X_{*}=X\setminus\left\{ o\right\} $, this
means that $\mathfrak{S}=X\times C_{1}/R$ where
\[
\mathrm{diag}\sqcup j:R=X\times C_{1}\sqcup(X\times C_{1})\times_{X_{*}\times C_{0}}(X\times C_{1})\setminus\mathrm{Diag}\longrightarrow(X\times C_{1})\times(X\times C_{1})
\]
is the \'etale equivalence relation defined by the diagonal embedding
$\mathrm{diag}:X\times C_{1}\rightarrow(X\times C_{1})\times(X\times C_{1})$
and the natural immersion $j:(X\times C_{1})\times_{X_{*}\times C_{0}}(X\times C_{1})\setminus\mathrm{Diag}\rightarrow(X\times C_{1})\times(X\times C_{1})$
respectively. This equivalence relation restricts on the invariant
open subset $X_{*}\times C_{1}$ to that defined by the diagonal embedding
$X_{*}\times C_{1}\rightarrow(X_{*}\times C_{1})\times_{X_{*}\times C_{0}}(X_{*}\times C_{1})$
which has quotient $X\times C_{0}$. This implies that the $R$-invariant
morphism $\mathrm{pr}_{1}\times h_{0}:X\times C_{1}\rightarrow X\times C_{0}$
descends to a morphism $\overline{\varphi}:\mathfrak{S}\rightarrow X\times C_{0}$
restricting to an isomorphism over $X_{*}\times C_{0}$. In contrast,
since $R$ induces the trivial equivalence relation on $\{o\}\times C_{1}\simeq C_{1}$,
one has $\overline{\varphi}^{-1}(\{o\}\times C_{0})\simeq C_{1}$
as desired.
A disadvantage of this simple presentation of $\mathfrak{S}$ is that
the equivalence relation $R$ is quasi-finite but not finite. To construct
an alternative presentation of $\mathfrak{S}$ as a quotient of a
suitable scheme $Z$ by a finite \'etale equivalence relation, in
fact by a free action of a finite group $G$, we proceed as follows:
\begin{parn} \label{par:alg-space-const} We let $C={\rm Spec}(R)$
be the normalization of $C_{0}$ in the Galois closure of the field
extension ${\rm Frac}(R_{0})\hookrightarrow{\rm Frac}(R_{1})$. By
construction, the induced morphism $h:C\rightarrow C_{0}$ is a torsor
under the corresponding Galois group $G$ which factors as $h:C\stackrel{h_{1}}{\rightarrow}C_{1}\stackrel{h_{0}}{\rightarrow}C_{0}$
where $h_{1}:C\rightarrow C_{1}$ is a torsor under a certain subgroup
$H$ of $G$ with index equal to the degree $r$ of the finite morphism
$h_{0}$. Now we let $Z$ be the scheme obtained by gluing $r$ copies
$Z_{\overline{g}}$, $\overline{g}\in G/H$, of $X\times C$ by the
identity outside the curves $\{o\}\times C\subset Z_{\overline{g}}$.
The group $G$ acts freely on $Z$ by $Z_{\overline{g}}\ni(x,t)\mapsto g'\cdot(x,t)=(x,g'\cdot t)\in Z_{\overline{g'\cdot g}}$
and so a geometric quotient $\xi:Z\rightarrow\mathfrak{S}=Z/G$ exists
in the category of algebraic spaces in the form of an \'etale $G$-torsor
over an algebraic space $\mathfrak{S}$. The local morphisms ${\rm pr}_{1}\times h:Z_{\overline{g}}\simeq X\times C\rightarrow X\times C_{0}$,
$\overline{g}\in G/H$, glue to a global $G$-invariant morphism $\varphi:Z\rightarrow X\times C_{0}$
which descends in turn to a morphism $\overline{\varphi}:\mathfrak{S}=Z/G\rightarrow X\times C_{0}$
restricting to an isomorphism outside $\{o\}\times C_{0}$. In contrast,
$\overline{\varphi}^{-1}(\{o\}\times C_{0})$ is isomorphic as a scheme
over $C_{0}$ to the quotient of $C\times(G/H)$ by the diagonal action
of $G$ whence to $C/H\simeq C_{1}$.
\begin{figure}[ht]
\psset{unit=0.8}
\begin{pspicture}(12,-3)(10,8)
\rput(1,4){\usebox{\XZ}}
\psline{->}(4,3.8)(4,2.8)
\rput(0,0){\usebox{\XC}}
\rput(8,0){\usebox{\Xspace}}
\rput(16,0){\usebox{\XCzero}}
\psline{->}(6.45,4.8)(10,2.6)
\rput(8.5,3.8){{\scriptsize $\xi$}}
\psline{->}(6.5,5)(18,2.6)
\rput(13,3.9){{\scriptsize $\varphi$}}
\psline{->}(14,1)(16.5,1)
\rput(15.2,1.3){{\scriptsize $\bar{\varphi}$}}
\end{pspicture}
\caption{Construction of $\mathfrak{S}$ as a quotient of $Z$ by a finite group action}
\end{figure}
\noindent The fact that the algebraic spaces $\mathfrak{S}=Z/G$
obtained by this construction coincide with the $X\times C_{1}/R$
defined above can be seen as follows. By construction, every open
subset $Z_{\overline{g}}\simeq X\times C$ of $Z$, $\overline{g}\in G/H$,
is invariant under the induced action of $H$, with quotient $Z_{\overline{g}}/H\simeq X\times C/H=X\times C_{1}$.
So the morphism $X\times C\rightarrow\mathfrak{S}$ induced by restricting
$\xi:Z\rightarrow\mathfrak{S}$ to any open subset $Z_{\overline{g}}\subset Z$
descends to an \'etale morphism $X\times C_{1}=X\times C/H\rightarrow\mathfrak{S}$,
and one checks that the \'etale equivalence relation $(\mathrm{pr}_{1},\mathrm{pr}_{2}):(X\times C_{1})\times_{\mathfrak{S}}(X\times C_{1})\rightrightarrows X\times C_{1}$
is precisely isomorphic to that $R\rightrightarrows X\times C_{1}$
defined above.
\end{parn}
\begin{rem}
Note that if $h_{0}:C_{1}\rightarrow C_{0}$ is not an isomorphism
then $\mathfrak{S}$ cannot be a scheme. Indeed, otherwise the image
by $\xi$ of a point $z_{0}\in\left\{ o\right\} \times C\subset Z_{\overline{g}}\subset Z$
for some $\overline{g}\in G/H$ would have a Zariski open affine neighborhood
$U$ in $\mathfrak{S}$. But then since $\xi:Z\rightarrow\mathfrak{S}$
is a finite morphism, $\xi^{-1}(U)$ would be a $G$-invariant affine
open neighborhood of $z_{0}$ in $Z$, which is absurd as such a point
does not even have a separated open neighborhood in $Z$.
\end{rem}
\subsection{Geometric quotients for restricted triangular $\mathbb{G}_{a}$-actions
on a relative affine plane}
\indent\newline\noindent Here we show that the algebraic spaces $\mathfrak{S}=\mathfrak{S}(X,o,h_{0})$
described in the previous subsection naturally arise as geometric
quotients for $\mathbb{G}_{a}$-actions on certain open subsets of
affine planes over discrete valuation rings.
\begin{parn} \label{par:Open-subs-A3} We let $X={\rm Spec}(A)$
be the spectrum of a discrete valuation ring with uniformizing parameter
$x$ and with residue field $\mathbb{C}$. We denote by $o$ its closed
point and we let $\mathbb{A}_{X}^{2}={\rm Spec}(A\left[y,z\right])$.
Given an irreducible triangular locally nilpotent $A$-derivation
$\partial=x^{n}\partial_{y}+p\left(y\right)\partial_{z}$ of $A\left[y,z\right]$,
where $p(y)\in A\left[y\right]$, we let $P\left(y\right)\in A\left[y\right]$
be an integral of $p\left(y\right)$. Since $\partial$ is irreducible,
$p\left(y\right)$ is not divisible by $x$ and so the restriction
$\overline{P}$ of the morphism $P:\mathbb{A}_{X}^{1}={\rm Spec}(A[y])\rightarrow\mathbb{A}_{X}^{1}={\rm Spec}(A[t])$
over the closed point of $X$ is not constant. Its branch locus is
a principal divisor ${\rm div}\left(\alpha\right)$ for a certain
$\alpha\in\mathbb{C}\left[t\right]$ and we let $C_{\partial}={\rm Spec}(R_{0})$,
where $R_{0}=\mathbb{C}\left[t\right]_{\alpha}$, be its complement.
The polynomial $-x^{n}z+P\left(y\right)\in A\left[y,z\right]$ defines
a $\mathbb{G}_{a}$-invariant $X$-morphism $f:\mathbb{A}_{X}^{2}={\rm Spec}(A\left[y,z\right])\rightarrow{\rm Spec}\left(A\left[t\right]\right)$,
smooth over $X\times C_{\partial}$, and such that the induced $\mathbb{G}_{a}$-action
on $V_{\partial}=f^{-1}\left(X\times C_{\partial}\right)\subset\mathbb{A}_{X}^{2}$
is set-theoretically free. Thus a geometric quotient exists in the
category of algebraic spaces in the form of an \'etale locally trivial
$\mathbb{G}_{a}$-bundle $\rho:V_{\partial}\rightarrow V_{\partial}/\mathbb{G}_{a}$.
Clearly, the curve $C_{1}={\rm Spec}(R_{0}\left[y\right]/\left(\overline{P}(y)-t\right))$
is smooth and irreducible, and the induced morphism $h_{0}:C_{1}\rightarrow C_{\partial}$
is finite and \'etale. With the notation of $\S$ \ref{par:alg-space-const}
above, we have the following result:
\end{parn}
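The curves $C_{\partial}$ and $C_{1}$ can be made concrete in a small example. The sketch below assumes $\overline{P}(y)=y^{2}$, an illustrative choice not taken from the text: the branch locus of $\overline{P}$ is then ${\rm div}(t)$ up to a unit, $C_{\partial}=\{t\neq0\}$, and $h_{0}:C_{1}\rightarrow C_{\partial}$ is a degree $2$ \'etale cover.

```python
import sympy as sp

y, t = sp.symbols('y t')
Pbar = y**2                      # sample restriction of P over the closed point
F = Pbar - t

# Branch locus of y -> Pbar(y): vanishing of the discriminant of F in y,
# a principal divisor div(alpha) in the t-line.
alpha = sp.discriminant(F, y)
print(alpha)                     # → 4*t, so C_partial = Spec C[t]_t

# Over t != 0 the two roots of F stay distinct, so h0 : C1 -> C_partial,
# C1 = Spec C[t]_t[y]/(y**2 - t), is finite etale of degree 2.
roots = sp.solve(F, y)
print(roots)
```

Here $R_{0}=\mathbb{C}[t]_{t}$ and $C_{1}$ is the familiar double cover $s\mapsto s^{2}$ of the punctured line.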
\begin{prop}
\label{prop:Restricted-quotient}The algebraic space quotient $V_{\partial}/\mathbb{G}_{a}$
is isomorphic to $\mathfrak{S}(X,o,h_{0})$.\end{prop}
\begin{proof}
Again, we let $h:C={\rm Spec}(R)\rightarrow C_{\partial}$ be the
Galois closure of the finite \'etale morphism $h_{0}:C_{1}\rightarrow C_{\partial}$.
By construction, the polynomial $\overline{P}(y)-t\in R\left[y\right]$
splits as $\overline{P}(y)-t=\prod_{\overline{g}\in G/H}(y-t_{\overline{g}})$
for certain elements $t_{\overline{g}}\in R$, $\overline{g}\in G/H$,
on which the Galois group $G$ acts by permutation. Furthermore, since
$h_{0}:C_{1}\rightarrow C_{\partial}$ is \'etale, it follows that
for distinct $\overline{g},\overline{g}'\in G/H$, one has $t_{\overline{g}}(c)\neq t_{\overline{g'}}(c)$
for every point $c\in C$. Now a similar argument as in the proof
of Theorem 3.2 in \cite{Dubouloz2009} implies that there exists a
collection of elements $\sigma_{\overline{g}}\in A\otimes_{\mathbb{C}}R$
with respective residue classes $t_{\overline{g}}\in R$ modulo $x$,
$\overline{g}\in G/H$, on which $G$ acts by permutation, a polynomial
$S_{1}\in A\otimes_{\mathbb{C}}R\left[y\right]$ with invertible residue
class modulo $x$ and a polynomial $S_{2}\in A\otimes_{\mathbb{C}}R\left[y\right]$
such that in $A\otimes_{\mathbb{C}}R\left[y\right]$ one can write
\[
P(y)-t=S_{1}(y)\prod_{\overline{g}\in G/H}(y-\sigma_{\overline{g}})+x^{n}S_{2}(y).
\]
This implies that $W=V_{\partial}\times_{C_{\partial}}C\simeq{\rm Spec}\left(A\otimes_{\mathbb{C}}R\left[y,z\right]/(x^{n}z-P(y)+t)\right)$
is isomorphic to the sub-variety of $C\times\mathbb{A}_{X}^{2}$
defined by the equation $x^{n}z=\tilde{P}\left(y\right)=S_{1}(y)\prod_{\overline{g}\in G/H}(y-\sigma_{\overline{g}})$.
Furthermore, the $\mathbb{G}_{a}$-action on $V_{\partial}$ lifts
to the set-theoretically free $\mathbb{G}_{a}$-action on $W$ commuting
with that of $G$ associated with the locally nilpotent $A\otimes_{\mathbb{C}}R$-derivation
$x^{n}\partial_{y}+\partial_{y}(\tilde{P}(y))\partial_{z}$. Then
a standard argument (see e.g. \emph{loc. cit.} or \cite{Dubouloz2011b})
shows that the $\mathbb{G}_{a}$-invariant morphism ${\rm pr}_{X,C}:W\rightarrow X\times C$
factors through a $G$-equivariant $\mathbb{G}_{a}$-bundle $\eta:W\rightarrow Z$
over the scheme $Z$ as in $\S$ \ref{par:alg-space-const} above with
local trivializations $W\mid_{Z_{\overline{g}}}\simeq Z_{\overline{g}}\times{\rm Spec}(\mathbb{C}[u_{\overline{g}}])$,
where $u_{\overline{g}}=x^{-n}(y-\sigma_{\overline{g}})$, $\overline{g}\in G/H$,
and transition isomorphisms over $Z_{\overline{g}}\cap Z_{\overline{g}'}\simeq{\rm Spec}(A_{x}\otimes_{\mathbb{C}}R)$
of the form $u_{\overline{g}}\mapsto u_{\overline{g}'}=u_{\overline{g}}+x^{-n}(\sigma_{\overline{g}}-\sigma_{\overline{g}'})$
for every pair of distinct elements $\overline{g},\overline{g}'\in G/H$.
By construction, we have a cartesian square \[\xymatrix{ W \ar[r] \ar[d]_{\eta} & V_\partial \simeq V/G \ar[d]^{\rho} \\ Z \ar[r] & \mathfrak{S}=Z/G,}\]
where the horizontal arrows are $G$-torsors and the vertical ones
are $\mathbb{G}_{a}$-bundles, which provides, by virtue of the universal
property of categorical quotients, an isomorphism of algebraic spaces
$V_{\partial}/\mathbb{G}_{a}\simeq\mathfrak{S}=\mathfrak{S}(X,o,h_{0})$.
\end{proof}
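The factorization $P(y)-t=S_{1}(y)\prod_{\overline{g}\in G/H}(y-\sigma_{\overline{g}})+x^{n}S_{2}(y)$ used in the proof can be illustrated by a Newton (Hensel) lifting step in a toy case. The data below, $P(y)=y^{2}+xy$, $n=2$ and the Galois cover $s\mapsto t=s^{2}$, are illustrative assumptions, not choices made in the text: the simple roots $t_{\overline{g}}=\pm s$ of $\overline{P}(y)-t$ lift to elements $\sigma_{\overline{g}}$ modulo $x^{n}$.

```python
import sympy as sp

x, y, s = sp.symbols('x y s')
t = s**2                     # coordinate on the Galois cover C: t = s**2
P = y**2 + x*y               # sample integral over A = C[x]_(x)
n = 2

def newton_lift(sigma0):
    # one Newton step lifts a simple root of P(y) - t mod x to a root mod x**n
    corr = -(P - t).subs(y, sigma0) / sp.diff(P, y).subs(y, sigma0)
    return sp.expand(sigma0 + sp.series(corr, x, 0, n).removeO())

sig = [newton_lift(r) for r in (s, -s)]      # lifts of the residues ±s
residual = sp.expand(P - t - (y - sig[0])*(y - sig[1]))
print(sig, residual)         # residual is divisible by x**n
```

Since the two roots stay distinct on $C$, the lifted $\sigma_{\overline{g}}$ remain permuted by the Galois group, exactly as in the displayed factorization with $S_{1}=1$.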
\subsection{Criteria for affineness}
\indent\newline\noindent Even though Proposition \ref{prop:Restricted-quotient}
shows in particular that algebraic spaces of the form $\mathfrak{S}=\mathfrak{S}(X,o,h_{0})$
may arise as geometric quotients for $\mathbb{G}_{a}$-actions on affine
schemes, the total space of an \'etale locally trivial $\mathbb{G}_{a}$-bundle
$\rho:V\rightarrow\mathfrak{S}$ is in general neither a scheme nor
even a separated algebraic space. However it is possible to characterize
effectively which $\mathbb{G}_{a}$-bundles $\rho:V\rightarrow\mathfrak{S}$
have affine total space.
\begin{parn} Indeed, with the notation of $\S$ \ref{par:alg-space-const}
above, since $X\times C_{0}$ is affine, the affineness of $V$ is
equivalent to that of the morphism $\overline{\varphi}\circ\rho:V\rightarrow X\times C_{0}$.
Furthermore, since $\rho:V\rightarrow\mathfrak{S}$ is an affine morphism
and $\overline{\varphi}:\mathfrak{S}\rightarrow X\times C_{0}$ is
an isomorphism outside the curve $\{o\}\times C_{0}$, it is enough
to consider the case that $X={\rm Spec}\left(A\right)$ is the spectrum
of a discrete valuation ring with closed point $o$ and uniformizing
parameter $x$. Every $\mathbb{G}_{a}$-bundle $\rho:V\rightarrow\mathfrak{S}$
pulls-back via the Galois cover $\xi:Z\rightarrow\mathfrak{S}=Z/G$
to a $G$-equivariant $\mathbb{G}_{a}$-bundle $\eta={\rm pr}_{2}:W=V\times_{\mathfrak{S}}Z\rightarrow Z$.
By construction of $Z$, the latter becomes trivial on the canonical
covering $\mathcal{U}$ of $Z$ by the affine open subsets $Z_{\overline{g}}\simeq X\times C$,
$\overline{g}\in G/H$, whence is determined up to isomorphism by
a $G$-equivariant \v{C}ech $1$-cocycle
\[
\{f_{\overline{g}\overline{g}'}\}\in C^{1}(\mathcal{U},\mathcal{O}_{Z})\simeq\bigoplus_{\overline{g},\overline{g}'\in G/H,\overline{g}\neq\overline{g}'}A_{x}\otimes_{\mathbb{C}}R.
\]
With this notation, we have the following criterion:
\end{parn}
\begin{thm}
\label{thm:Aff-criterion} For a $\mathbb{G}_{a}$-bundle $\rho:V\rightarrow\mathfrak{S}$,
the following are equivalent:
a) $V$ is a separated algebraic space,
b) For every pair of distinct elements $\overline{g},\overline{g}'\in G/H$,
there exists an element $\tilde{f}_{\overline{g}\overline{g}'}\in A\otimes_{\mathbb{C}}R$
with invertible residue class modulo $x$ such that $f_{\overline{g}\overline{g}'}=x^{-l}\tilde{f}_{\overline{g}\overline{g}'}$
for a certain $l>1$.
c) $V$ is an affine scheme.\end{thm}
\begin{proof}
By virtue of \cite[Proposition 10.1.2 and Lemma 10.1.3]{Dubouloz2004a},
b) is equivalent to the separatedness of the total space of the $\mathbb{G}_{a}$-bundle
$\eta:W\rightarrow Z$ and this property is also equivalent to the
affineness of $W$ thanks to the generalization of the so-called Fieseler
criterion for affineness \cite{Fieseler1994} established in \cite[Theorem 10.2.1]{Dubouloz2004a}.
Now if $V$ is a separated algebraic space then so is $W=V\times_{\mathfrak{S}}Z$
as the projection ${\rm pr}_{1}:W\rightarrow V$ is a $G$-torsor
whence a proper morphism. Thus $W$ is in fact an affine scheme and
so $V\simeq W/G\simeq{\rm Spec}(\Gamma(W,\mathcal{O}_{W})^{G})$ is
an affine scheme as well.
\end{proof}
\begin{parn} Given a $\mathbb{G}_{a}$-bundle $\rho:V\rightarrow\mathfrak{S}$
with affine total space $V$, we have a one-to-one correspondence
between $\mathbb{G}_{a}$-bundles over $\mathfrak{S}$ and lifts of
the $\mathbb{G}_{a}$-action on $V$ to $V\times\mathbb{A}^{1}$.
Indeed, if $\rho':V'\rightarrow\mathfrak{S}$ is another $\mathbb{G}_{a}$-bundle
then the fiber product $V'\times_{\mathfrak{S}}V$ is a $\mathbb{G}_{a}$-bundle
over $V$ via the second projection, whence is isomorphic to the trivial
one $V\times\mathbb{A}^{1}$ on which $\mathbb{G}_{a}$ acts by translation
on the second factor. Via this isomorphism, the natural lift to $V'\times_{\mathfrak{S}}V$
of the $\mathbb{G}_{a}$-action on $V$ defined by $t\cdot\left(v',v\right)=(v',t\cdot v)$
coincides with a lift of it to $V\times\mathbb{A}^{1}$ with geometric
quotient $V\times\mathbb{A}^{1}/\mathbb{G}_{a}\simeq V'$. Conversely,
since every lift to $V\times\mathbb{A}^{1}$ of the $\mathbb{G}_{a}$-action
on $V$ commutes with that by translations on the second factor, the
equivariant projection ${\rm pr}_{1}:V\times\mathbb{A}^{1}\rightarrow V$
descends to a $\mathbb{G}_{a}$-bundle $\rho':V'=V\times\mathbb{A}^{1}/\mathbb{G}_{a}\rightarrow\mathfrak{S}=V/\mathbb{G}_{a}$
fitting into a cartesian square \[\xymatrix{V\times \mathbb{A}^1 \ar[d]_{{\rm pr}_1} \ar[r] & V'=V\times \mathbb{A}^1/\mathbb{G}_a \ar[d]^{\rho'} \\ V \ar[r]^{\rho} & \mathfrak{S}=V/\mathbb{G}_a} \] of
$\mathbb{G}_{a}$-bundles. In this diagram the horizontal arrows correspond
to the $\mathbb{G}_{a}$-actions on $V$ and its lift to $V\times\mathbb{A}^{1}$
while the vertical ones correspond to the actions on $V\times\mathbb{A}^{1}$
by translations on the second factor and the one it induces on $V\times\mathbb{A}^{1}/\mathbb{G}_{a}$.
Combined with Theorem \ref{thm:Aff-criterion}, this correspondence
leads to the following criterion:
\end{parn}
\begin{cor}
\label{cor:Affine-extended-quotient} Let $\rho:V\rightarrow\mathfrak{S}$
be a $\mathbb{G}_{a}$-bundle with affine total space over an algebraic
space $\mathfrak{S}$ as in $\S$ \ref{par:alg-space-const}. Then the
total space of a $\mathbb{G}_{a}$-bundle $\rho':V'\rightarrow\mathfrak{S}$
is an affine scheme if and only if the corresponding lifted $\mathbb{G}_{a}$-action
on $V\times\mathbb{A}^{1}$ is proper. \end{cor}
\begin{proof}
Since properness of the lifted $\mathbb{G}_{a}$-action on $V\times\mathbb{A}^{1}$
is equivalent to the separatedness of the algebraic space $V'\simeq V\times\mathbb{A}^{1}/\mathbb{G}_{a}$,
the assertion is a direct consequence of Theorem \ref{thm:Aff-criterion}
above.
\end{proof}
\section{Twin triangular $\mathbb{G}_{a}$-actions of affine $3$-spaces over
Dedekind domains }
In what follows, we let $X$ be the spectrum of a Dedekind domain
$A$ over $\mathbb{C}$, and we let $\mathbb{A}_{X}^{3}$ be the spectrum
of the polynomial ring $A[y,z_{1},z_{2}]$ in three variables over
$A$. Algebraic actions of $\mathbb{G}_{a,X}=\mathbb{G}_{a}\times_{{\rm Spec}\left(\mathbb{C}\right)}X$
on $\mathbb{A}_{X}^{3}$ are in one-to-one correspondence with locally
nilpotent $A$-derivations of $A[y,z_{1},z_{2}]$. Such an action
is called triangular if the corresponding derivation can be written
as $\partial=r\partial_{y}+p_{1}(y)\partial_{z_{1}}+p_{2}(y,z_{1})\partial_{z_{2}}$
for some $r\in A$, $p_{1}\in A[y]$ and $p_{2}\in A[y,z_{1}]$. A
triangular $\mathbb{G}_{a,X}$-action on $\mathbb{A}_{X}^{3}$ is
said to be \emph{twin-triangular} if the corresponding $p_{2}$ belongs
to the sub-ring $A[y]$ of $A[y,z_{1}]$.
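The correspondence between $\mathbb{G}_a$-actions and locally nilpotent derivations can be probed symbolically on a small example. The following sketch (our own illustration; the derivation and the helper names are not from the text) checks local nilpotency of the twin-triangular derivation $x\partial_{y}+2y\partial_{z_{1}}+(1+y^{2})\partial_{z_{2}}$ over $A=\mathbb{C}[x]$ by iterating it on each variable:

```python
import sympy as sp

# Symbolic sanity check (our own toy example; names are not from the
# text) that a twin-triangular derivation is locally nilpotent over
# A = C[x]: iterating D = x d/dy + 2y d/dz1 + (1 + y^2) d/dz2 on each
# variable terminates after finitely many steps.

x, y, z1, z2 = sp.symbols('x y z1 z2')

def D(f):
    # the twin-triangular derivation x*d_y + 2y*d_z1 + (1 + y^2)*d_z2
    return (x * sp.diff(f, y)
            + 2 * y * sp.diff(f, z1)
            + (1 + y ** 2) * sp.diff(f, z2))

def nilpotency_index(f, max_iter=10):
    # smallest k with D^k(f) = 0 (capped at max_iter)
    k = 0
    while f != 0 and k < max_iter:
        f = sp.expand(D(f))
        k += 1
    return k

# D y = x, D x = 0;  D z1 = 2y -> 2x -> 0;  D z2 = 1 + y^2 -> 2xy -> 2x^2 -> 0
assert nilpotency_index(y) == 2
assert nilpotency_index(z1) == 3
assert nilpotency_index(z2) == 4
```

Each chain of iterates terminates, as it must for the exponential of the derivation to define an algebraic $\mathbb{G}_a$-action.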
\subsection{Proper twin-triangular $\mathbb{G}_{a}$-actions are translations}
\indent\newline\noindent This sub-section is devoted to the proof
of the following result:
\begin{thm}
\label{thm:Main-Theorem} A proper twin-triangular $\mathbb{G}_{a,X}$-action
on $\mathbb{A}_{X}^{3}$ is a translation, i.e., $\mathbb{A}_{X}^{3}/\mathbb{G}_{a,X}$
is $X$-isomorphic to $\mathbb{A}_{X}^{2}$ and $\mathbb{A}_{X}^{3}$
is equivariantly isomorphic to $\mathbb{A}_{X}^{3}/\mathbb{G}_{a}\times_{X}\mathbb{G}_{a,X}$
where $\mathbb{G}_{a,X}$ acts by translations on the second factor.
\end{thm}
\begin{parn} The argument of the proof given below can be decomposed
into two steps: we first establish in Proposition \ref{prop:Loc-triv-TR}
that any Zariski locally trivial triangular $\mathbb{G}_{a,X}$-action
on $\mathbb{A}_{X}^{3}$ is a translation. This reduces the problem
to showing that a proper twin-triangular $\mathbb{G}_{a,X}$-action
on $\mathbb{A}_{X}^{3}$ is not only equivariantly trivial in the
\'etale topology, which always holds for a proper whence free $\mathbb{G}_{a,X}$-action,
but also in the Zariski topology. This is done in Proposition \ref{prop:Twin-Loc-trivi}.
In the sequel, unless otherwise specified, we implicitly work in the
category of schemes over $X$ and we denote $\mathbb{G}_{a,X}$ simply
by $\mathbb{G}_{a}$.
\end{parn}
\noindent We begin with the following generalization of Theorem 2.1
in \cite{Deveney2004}:
\begin{prop}
\label{prop:Loc-triv-TR} Let $A$ be a Dedekind domain over $\mathbb{C}$
and let $\partial$ be a triangular $A$-derivation of $A[y,z_{1},z_{2}]$
generating a Zariski locally trivial $\mathbb{G}_{a}$-action on $\mathbb{A}_{X}^{3}={\rm Spec}(A\left[y,z_{1},z_{2}\right])$.
Then the action is equivariantly trivial with quotient isomorphic
to $\mathbb{A}_{X}^{2}$. \end{prop}
\begin{proof}
The hypotheses imply that $\mathbb{A}_{X}^{3}$ has the structure
of a Zariski locally trivial $\mathbb{G}_{a}$-bundle over a quasi-affine
$X$-scheme $\psi:Y=\mathbb{A}_{X}^{3}/\mathbb{G}_{a}\rightarrow X$
(see e.g. \cite{Deveney1994}). Furthermore, since each fiber, closed
or not, of the invariant morphism ${\rm pr}_{X}:\mathbb{A}_{X}^{3}\rightarrow X$
is isomorphic to an affine $3$-space equipped with an induced free
triangular $\mathbb{G}_{a}$-action, it follows from \cite{Snow1988}
that all fibers of $\psi:Y\rightarrow X$ are isomorphic to affine
planes over the corresponding residue fields. It is enough to show
that $Y$ is an affine $X$-scheme. Indeed, if so, then by virtue
of \cite{Sathaye1983}, $\psi:Y\rightarrow X$ is in fact a locally
trivial $\mathbb{A}^{2}$-bundle in the Zariski topology whence a
vector bundle of rank $2$ by \cite{Bass1977}. Furthermore, the affineness
of $Y$ implies that the quotient morphism $\mathbb{A}_{X}^{3}\rightarrow Y$
is a trivial $\mathbb{G}_{a}$-bundle. Thus $Y\times\mathbb{A}^{1}\simeq\mathbb{A}_{X}^{3}$
as bundles over $X$ and so $\psi:Y\rightarrow X$ is the trivial
bundle $\mathbb{A}_{X}^{2}$ over $X$ by virtue of \cite[IV 3.5]{Bass1968}.
The affineness of $\psi:Y\rightarrow X$ being a local question with
respect to the Zariski topology on $X$, we may reduce to the case
where $A$ is a discrete valuation ring with uniformizing parameter
$x$ and residue field $\mathbb{C}$. Since $\Gamma(Y,\mathcal{O}_{Y})\simeq A[y,z_{1},z_{2}]^{\mathbb{G}_{a}}$
is finitely generated by virtue of \cite{Daigle1998}, it is enough
to show that the canonical morphism $\alpha:Y\rightarrow Z={\rm Spec}(A[y,z_{1},z_{2}]^{\mathbb{G}_{a}})$
is surjective, whence an isomorphism. If $\partial y\in A^{*}$ then
the result is clear. Otherwise if $\partial y=0$ then the assertion
follows from \emph{loc. cit.} We may thus assume that $\partial y\in xA\setminus\left\{ 0\right\} $
and then the result follows verbatim from the argument of \cite[Theorem 2.1]{Deveney2004}
which shows that $\alpha$ is surjective over the closed point of
$X$.
\end{proof}
\noindent Now it remains to show the following:
\begin{prop}
\label{prop:Twin-Loc-trivi} A proper twin-triangular $\mathbb{G}_{a}$-action
on $\mathbb{A}_{X}^{3}$ is locally trivial in the Zariski topology. \end{prop}
\begin{proof}
The question is local in the Zariski topology on $X$. Since the corresponding
derivation $\partial=r\partial_{y}+p_{1}(y)\partial_{z_{1}}+p_{2}(y)\partial_{z_{2}}$
of $A[y,z_{1},z_{2}]$ has a slice over the principal open subset
$D_{r}$ of $X$, whence is equivariantly trivial over it, we may
reduce after localizing at the finitely many maximal ideals of $A$
containing $r$ to the case where $A$ is a discrete valuation ring
with uniformizing parameter $x$ and $r=x^{n}$ for some $n\geq1$.
Then it is enough to show that the closed fiber $\mathbb{A}_{o}^{3}$
of the projection ${\rm pr}_{X}:\mathbb{A}_{X}^{3}\rightarrow X$
is contained in a union of invariant open subsets of $\mathbb{A}_{X}^{3}$
on which the induced actions are equivariantly trivial. By virtue
of Lemma \ref{lem:Bad-Plane-Removal} below, we may assume up to a
coordinate change preserving twin-triangularity that the residue classes
$\overline{p}_{i}\in\mathbb{C}[y]$ of the $p_{i}$'s modulo $x$
are non constant and that the inverse images of the branch loci of
the morphisms $\overline{P}_{i}:{\rm Spec}\left(\mathbb{C}\left[y\right]\right)\rightarrow{\rm Spec}\left(\mathbb{C}\left[t\right]\right)$
defined by suitable integrals $\overline{P}_{i}$ of $\overline{p}_{i}$,
$i=1,2$ are disjoint. The first property guarantees that the triangular
derivations $\partial_{i}=x^{n}\partial_{y}+p_{i}(y)\partial_{z_{i}}$
of $A\left[y,z_{i}\right]$, $i=1,2$, are both irreducible. Furthermore,
if we let $V_{\partial_{i}}$ be the invariant open subset of $\mathbb{A}_{X,i}^{2}={\rm Spec}(A\left[y,z_{i}\right])$,
$i=1,2$, equipped with $\mathbb{G}_{a}$-action associated with $\partial_{i}$
as defined in $\S$ \ref{par:Open-subs-A3} above, then the second property
implies that $\mathbb{A}_{o}^{3}$ is contained in the union of the
open subsets ${\rm pr}_{z_{i}}^{-1}(V_{\partial_{i}})\simeq V_{\partial_{i}}\times\mathbb{A}^{1}$,
where ${\rm pr}_{z_{i}}:\mathbb{A}_{X}^{3}\rightarrow\mathbb{A}_{X,i}^{2}$,
$i=1,2$, are the natural projections. These projections being equivariant,
the $\mathbb{G}_{a}$-action on ${\rm \mathbb{A}_{X}^{3}}$ restricts
on ${\rm pr}_{z_{i}}^{-1}(V_{\partial_{i}})\simeq V_{\partial_{i}}\times\mathbb{A}^{1}$
to a proper lift of that on $V_{\partial_{i}}$, $i=1,2$, and so
the geometric quotients ${\rm pr}_{z_{i}}^{-1}(V_{\partial_{i}})/\mathbb{G}_{a}$,
$i=1,2$, are affine schemes by virtue of Corollary \ref{cor:Affine-extended-quotient}.
This implies in turn that the induced actions on the open subsets
${\rm pr}_{z_{i}}^{-1}(V_{\partial_{i}})$, $i=1,2$, are equivariantly
trivial and completes the proof.
\end{proof}
\noindent In the proof of Proposition \ref{prop:Twin-Loc-trivi},
we exploited the following crucial technical fact concerning set-theoretically
free twin-triangular $\mathbb{G}_{a}$-actions:
\begin{lem}
\label{lem:Bad-Plane-Removal} Let $A$ be a discrete valuation ring
over $\mathbb{C}$ with uniformizing parameter $x$. A twin-triangular
$A$-derivation $\partial$ of $A[y,z_{1},z_{2}]$ generating a set-theoretically
free $\mathbb{G}_{a}$-action is conjugate to one of the form $x^{n}\partial_{y}+p_{1}(y)\partial_{z_{1}}+p_{2}(y)\partial_{z_{2}}$
with the following properties:
a) The residue classes $\overline{p}_{i}\in\mathbb{C}[y]$ of the
polynomials $p_{i}\in A[y]$ modulo $x$, $i=1,2$, are both nonzero
and relatively prime,
b) There exist integrals $\overline{P}_{i}\in\mathbb{C}[y]$ of $\overline{p}_{i}$,
$i=1,2$, for which the inverse images of the branch loci of the morphisms
$\overline{P}_{i}:\mathbb{A}^{1}\rightarrow\mathbb{A}^{1}$, $i=1,2$,
are disjoint. \end{lem}
\begin{proof}
A twin-triangular derivation $\partial=x^{n}\partial_{y}+p_{1}(y)\partial_{z_{1}}+p_{2}(y)\partial_{z_{2}}$
generates a set-theoretically free $\mathbb{G}_{a}$-action if and
only if $x^{n}$, $p_{1}(y)$ and $p_{2}(y)$ generate the unit ideal
in $A[y,z_{1},z_{2}]$. So $\overline{p}_{1}$ and $\overline{p}_{2}$
are relatively prime and at least one of them, say $\overline{p}_{2}$,
is nonzero. If $\overline{p}_{1}=0$ then $p_{2}$ is necessarily
of the form $p_{2}(y)=c+x\tilde{p}_{2}(y)$ for some nonzero constant
$c$ and so changing $z_{1}$ for $z_{1}+z_{2}$ yields a twin-triangular
derivation conjugate to $\partial$ for which the corresponding polynomials
$p_{1}(y)$ and $p_{2}(y)$ both have nonzero residue classes modulo
$x$. More generally, changing $z_{2}$ for $\lambda z_{2}+\mu z_{1}$
for general $\lambda\in\mathbb{C}^{*}$ and $\mu\in\mathbb{C}$ yields
a twin-triangular derivation conjugate to $\partial$ and still satisfying
condition a). So it remains to show that up to such a coordinate change,
condition b) can be achieved. This can be seen as follows: we consider
$\mathbb{A}^{2}$ embedded in $\mathbb{P}^{2}={\rm Proj}(\mathbb{C}[u,v,w])$
as the complement of the line $L_{\infty}=\left\{ w=0\right\} $ so
that the coordinate system $\left(u,v\right)$ on $\mathbb{A}^{2}$
is induced by the rational projections from the points $\left[0:1:0\right]$
and $\left[1:0:0\right]$ respectively. We let $C$ be the closure
in $\mathbb{P}^{2}$ of the image of the immersion $j:\mathbb{A}^{1}={\rm Spec}(\mathbb{C}[y])\rightarrow\mathbb{A}^{2}$
defined by integrals $\overline{P}_{1}$ and $\overline{P}_{2}$ of
$\bar{p}_{1}$ and $\bar{p}_{2}$, and we denote by $a_{1},\ldots,a_{r}\in C$
the images by $j$ of the points in the inverse image of the branch
locus of $\overline{P}_{1}:\mathbb{A}^{1}\rightarrow\mathbb{A}^{1}$.
Since the condition that a line through a fixed point in $\mathbb{P}^{2}$
intersects transversally a fixed curve is Zariski open, the set of
lines in $\mathbb{P}^{2}$ passing through a point $a_{i}$ and tangent
to a local analytic branch of $C$ at some point is finite. Therefore,
the complement of the finitely many intersection points of these lines
with $L_{\infty}$ is a Zariski open subset $U$ of $L_{\infty}$
with the property that for every $q\in U$, the line through $q$
and $a_{i}$, $i=1,\ldots,r$, intersects every local analytic branch
of $C$ transversally at every point. By construction, the rational
projections from $\left[0:1:0\right]$ and an arbitrary point in $U\setminus\{\left[0:1:0\right]\}$
induce a new coordinate system on $\mathbb{A}^{2}$ of the form $\left(u,\lambda v+\mu u\right)$,
$\lambda\neq0$, with the property that none of the $a_{i}$, $i=1,\ldots,r$,
is contained in the inverse image of the branch locus of the morphism
$\lambda\overline{P}_{2}+\mu\overline{P}_{1}:\mathbb{A}^{1}\rightarrow\mathbb{A}^{1}$.
Hence changing $z_{2}$ for $\lambda z_{2}+\mu z_{1}$ for a pair
$(\lambda,\mu)$ corresponding to a general point in $U$ yields a
twin-triangular derivation conjugate to $\partial$ and satisfying
simultaneously conditions a) and b).
\end{proof}
\subsection{Complement: a criterion for properness of twin-triangular $\mathbb{G}_{a}$-actions}
\indent\newline\noindent In contrast with the set-theoretic freeness
of a $\mathbb{G}_{a}$-action on an affine variety, which can be easily
decided in terms of the corresponding locally nilpotent derivation
$\partial$ of its coordinate ring, it is difficult in general to
give effective conditions on $\partial$ which would guarantee that
the action is proper. However, for twin-triangular derivations, we
derive below from our previous descriptions a criterion that can be
checked algorithmically.
\begin{parn} \label{par:Prop-Crit-setup} For a set-theoretically
free twin-triangular $\mathbb{G}_{a}$-action on the affine space
$\mathbb{A}_{X}^{3}={\rm Spec}(A[y,z_{1},z_{2}])$ over a Dedekind
domain $A$, properness is equivalent to the separatedness of the
algebraic space quotient $Y=\mathbb{A}_{X}^{3}/\mathbb{G}_{a}$. Since
$X$ is affine, the separatedness of $Y$ is equivalent to that of
the morphism $\theta:Y=\mathbb{A}_{X}^{3}/\mathbb{G}_{a}\rightarrow X$
induced by the invariant projection ${\rm pr}_{X}:\mathbb{A}_{X}^{3}\rightarrow X$.
The question being local in the Zariski topology on $X$, we may reduce
again to the case where $A$ is a discrete valuation ring with uniformizing
parameter $x$.
We may further assume that the corresponding twin-triangular $A$-derivation
$\partial=x^{n}\partial_{y}+p_{1}(y)\partial_{z_{1}}+p_{2}(y)\partial_{z_{2}}$
of $A[y,z_{1},z_{2}]$ satisfies the hypotheses of Lemma \ref{lem:Bad-Plane-Removal}.
If $n=0$, then $\partial$ generates an equivariantly trivial whence
proper $\mathbb{G}_{a}$-action with $y$ as an obvious global slice.
So we may assume from now on that $n\geq1$. Our assumptions guarantee
that similarly to $\S$ \ref{par:Open-subs-A3} above, an integral $P_{i}\in A[y]$
of $p_{i}$ defines a morphism $P_{i}:\mathbb{A}_{X}^{1}\rightarrow\mathbb{A}_{X}^{1}={\rm Spec}(A[t])$
whose restriction $\overline{P}_{i}$ over the closed point of $X$
is non constant. Passing to the Galois closure $C_{i}={\rm Spec}(R_{i})$
of the finite \'etale morphism obtained by restricting $\overline{P}_{i}$
over the complement $C_{0,i}\subset{\rm Spec}(\mathbb{C}[t])$ of
its branch locus enables us, as in the proof of Proposition \ref{prop:Restricted-quotient},
to express $P_{i}(y)-t\in A\otimes_{\mathbb{C}}R_{i}\left[y\right]$
as
\begin{equation}
P_{i}(y)-t=S_{1,i}(y)\prod_{\overline{g}\in G_{i}/H_{i}}(y-\sigma_{\overline{g},i})+x^{n}S_{2,i}(y)\label{eq:decomp}
\end{equation}
for suitable elements $\sigma_{\overline{g},i}\in A\otimes_{\mathbb{C}}R_{i}$,
$\overline{g}\in G_{i}/H_{i}$ and polynomials $S_{1,i},S_{2,i}\in A\otimes_{\mathbb{C}}R_{i}\left[y\right]$.
Then we have the following criterion:
\end{parn}
\begin{prop}
\label{prop:Proper-Crit} With the assumption and notation above,
the following are equivalent:
a) $\partial$ generates a proper $\mathbb{G}_{a}$-action on $\mathbb{A}_{X}^{3}$,
b) For every $i\neq j$ in $\left\{ 1,2\right\} $ and every pair
of distinct elements $\overline{g},\overline{g}'\in G_{i}/H_{i}$,
$P_{j}(\sigma_{\overline{g},i})-P_{j}(\sigma_{\overline{g}',i})\in A\otimes_{\mathbb{C}}R_{i}$
can be written as $x^{n-k}\tilde{f}_{ij,\overline{g}\overline{g}'}$
where $1\leq k\leq n$ and where $\tilde{f}_{ij,\overline{g}\overline{g}'}\in A\otimes_{\mathbb{C}}R_{i}$
has invertible residue class modulo $x$. \end{prop}
\begin{proof}
The hypothesis on $\partial$ guarantees that the $A$-derivations
$\partial_{i}=x^{n}\partial_{y}+p_{i}(y)\partial_{z_{i}}$ of $A[y,z_{i}]$
are both irreducible. Letting $V_{\partial_{i}}$ be the invariant
open subset of $\mathbb{A}_{X}^{2}={\rm Spec}(A[y,z_{i}])$ associated
to $\partial_{i}$ as defined in $\S$ \ref{par:Open-subs-A3}, it follows
from the construction given in the proof of Proposition \ref{prop:Restricted-quotient}
that $W_{i}=V_{\partial_{i}}\times_{C_{0,i}}C_{i}$ is the total space
of a $\mathbb{G}_{a}$-bundle $\eta_{i}:W_{i}\rightarrow Z_{i}$ over
an appropriate scheme $Z_{i}$. The $\mathbb{G}_{a}$-action on $V_{\partial_{i}}\times{\rm Spec}(\mathbb{C}[z_{j}])\subset\mathbb{A}_{X}^{3}$,
$j\neq i$, induced by the restriction of $\partial$ lifts to one
on $W_{i}\times\mathbb{A}^{1}$ commuting with that by translations
on the second factor and so the quotient $W_{i}'=W_{i}\times\mathbb{A}^{1}/\mathbb{G}_{a}$
has the structure of a $\mathbb{G}_{a}$-bundle $\eta_{i}':W_{i}'\rightarrow Z_{i}$
over $Z_{i}$. Since $\partial$ satisfies the conditions of Lemma
\ref{lem:Bad-Plane-Removal}, it follows from Corollary \ref{cor:Affine-extended-quotient}
and the proof of Proposition \ref{prop:Twin-Loc-trivi} that the properness
of $\partial$ is equivalent to the separatedness of the schemes $W_{i}'$,
$i=1,2$. So it is enough to show that in our case condition b) above
is equivalent to that in Theorem \ref{thm:Aff-criterion}. We only
give the argument for $W_{1}'$, the case of $W_{2}'$ being similar.
With the notation of the proof of Proposition \ref{prop:Restricted-quotient},
$\eta_{1}:W_{1}\rightarrow Z=Z_{1}$ is the $\mathbb{G}_{a}$-bundle
with local trivializations $W_{1}\mid_{Z_{\overline{g}}}\simeq Z_{\overline{g}}\times{\rm Spec}(\mathbb{C}[u_{\overline{g}}])$,
where $u_{\overline{g}}=x^{-n}(y-\sigma_{\overline{g},1})$, $\overline{g}\in G_{1}/H_{1}$,
and transition isomorphism over $Z_{\overline{g}}\cap Z_{\overline{g}'}\simeq{\rm Spec}(A_{x}\otimes_{\mathbb{C}}R_{1})$
given by $u_{\overline{g}}\mapsto u_{\overline{g}'}=u_{\overline{g}}+x^{-n}(\sigma_{\overline{g},1}-\sigma_{\overline{g}',1})$
for every pair of distinct elements $\overline{g},\overline{g}'\in G_{1}/H_{1}$.
The lift to $W_{1}\times\mathbb{A}^{1}$ of the induced $\mathbb{G}_{a}$-action
on $V_{\partial_{1}}\times{\rm Spec}(\mathbb{C}[z_{2}])\subset\mathbb{A}_{X}^{3}$
coincides with the one defined locally on the open covering $\{W_{1}\mid_{Z_{\overline{g}}}\simeq Z_{\overline{g}}\times\mathbb{A}^{1},\;\overline{g}\in G_{1}/H_{1}\}$
of $W_{1}\times\mathbb{A}^{1}$ by the derivations $\partial_{\overline{g}}=\partial_{u_{\overline{g}}}+\varphi_{2}(u_{\overline{g}})\partial_{z_{2}}$
of $A\otimes_{\mathbb{C}}R_{1}[u_{\overline{g}},z_{2}]$ where $\varphi_{2}(u_{\overline{g}})=p_{2}(x^{n}u_{\overline{g}}+\sigma_{\overline{g},1})$,
$\overline{g}\in G_{1}/H_{1}$. Letting $\Phi_{2}(t)\in A\otimes_{\mathbb{C}}R_{1}\left[t\right]$
be an integral of $\varphi_{2}(t)\in A\otimes_{\mathbb{C}}R_{1}\left[t\right]$,
a direct computation of invariants shows that $\eta_{1}':W_{1}'=W_{1}\times\mathbb{A}^{1}/\mathbb{G}_{a}\rightarrow Z$
is the $\mathbb{G}_{a}$-bundle with local trivializations $W_{1}'\mid_{Z_{\overline{g}}}\simeq Z_{\overline{g}}\times{\rm Spec}(\mathbb{C}[v_{\overline{g}}])$
where $v_{\overline{g}}=z_{2}-\Phi_{2}(u_{\overline{g}})$, $\overline{g}\in G_{1}/H_{1}$,
and transition isomorphisms
\[
v_{\overline{g}}\mapsto v_{\overline{g}'}=v_{\overline{g}}+\Phi_{2}(u_{\overline{g}})-\Phi_{2}(u_{\overline{g}'})=v_{\overline{g}}+x^{-n}(P_{2}(\sigma_{\overline{g},1})-P_{2}(\sigma_{\overline{g}',1})).
\]
So condition b) above for $i=1$ and $j=2$ is precisely equivalent
to that of Theorem \ref{thm:Aff-criterion}. \end{proof}
\begin{rem}
With the notation of $\S$ \ref{par:Prop-Crit-setup}, for every regular
value $\lambda_{i}$ of $\overline{P}_{i}:\mathbb{A}^{1}\rightarrow\mathbb{A}^{1}$,
the expression (\ref{eq:decomp}) specializes to one of the form
\[
P_{i}(y)-\lambda_{i}=\overline{S}_{1,i}(y)\prod_{\overline{g}\in G_{i}/H_{i}}(y-\overline{\sigma}_{\overline{g},i})+x^{n}\overline{S}_{2,i}(y)
\]
for elements $\overline{\sigma}_{\overline{g},i}\in A$, $\overline{g}\in G_{i}/H_{i}$,
reducing modulo $x$ to the distinct roots of $\overline{P}_{i}(y)-\lambda_{i}\in\mathbb{C}[y]$,
and polynomials $\overline{S}_{1,i},\overline{S}_{2,i}\in A\left[y\right]$.
One checks that condition b) in Proposition \ref{prop:Proper-Crit}
can be equivalently rephrased in this context as the fact that for
every $i\neq j$ in $\left\{ 1,2\right\} $, every regular value $\lambda_{i}$
of $\overline{P}_{i}$, and every pair of distinct elements $\overline{g},\overline{g}'\in G_{i}/H_{i}$,
$P_{j}(\overline{\sigma}_{\overline{g},i})-P_{j}(\overline{\sigma}_{\overline{g}',i})\in A\setminus x^{n}A$.
This alternative form makes it possible to decide quickly that certain twin-triangular
derivations give rise to improper $\mathbb{G}_{a}$-actions. For instance,
consider the family of derivations $D_{n}=x\partial_{y}+2y\partial_{z_{1}}+\left(1+y^{n}\right)\partial_{z_{2}}$,
$n\geq1$, of $\mathbb{C}\left[x\right]_{(x)}[y,z_{1},z_{2}]$. If
$n=2m$, one has $P_{1}=y^{2}$ and $P_{2}=y\left(y^{2m}+2m+1\right)/(2m+1)$.
At the regular value $0$ of $\overline{P}_{2}$, the $2m$ nonzero
roots of $\overline{P}_{2}$ come in pairs $\pm\alpha_{k}\in\mathbb{C}^{*}$,
$k=1,\ldots,m$, and so $P_{1}(\alpha_{k})-P_{1}(-\alpha_{k})=0$
for every $k$. It follows that the corresponding action is improper.
In contrast, if $n$ is odd then the criterion is satisfied at the
regular value $0$ of $\overline{P}_{2}$. Actually, for all odd $n$,
it was established in \cite{Deveney2002} by different methods that
the corresponding $\mathbb{G}_{a}$-action is a translation.
For a triangular derivation $\partial=x^{n}\partial_{y}+p_{1}(y)\partial_{z_{1}}+p_{2}(y,z_{1})\partial_{z_{2}}$
of $A[y,z_{1},z_{2}]$ generating a set-theoretically free $\mathbb{G}_{a}$-action
and such that the induced derivation $x^{n}\partial_{y}+p_{1}(y)\partial_{z_{1}}$
of $A[y,z_{1}]$ is irreducible, one can still deduce from Theorem
\ref{thm:Aff-criterion} a more general version of the above criterion
which is again a necessary condition for properness. While more cumbersome
than in the twin-triangular case, the criterion can be used to construct
improper actions and offers a tool for the study of arbitrary proper
triangular actions.
\end{rem}
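The improperness check in the example above can be mirrored numerically. The following sketch (our own illustrative code; the helper names are hypothetical) verifies, for $n=2m$, that the nonzero roots of $\overline{P}_{2}$ at the regular value $0$ come in opposite pairs on which $P_{1}=y^{2}$ takes equal values, so the criterion fails:

```python
import numpy as np

# Numeric illustration (our own sketch; helper names are hypothetical)
# of the improperness check for D_n with n = 2m: P1(y) = y^2 and
# P2(y) = y*(y^{2m} + 2m + 1)/(2m + 1). The nonzero roots of P2bar at
# the regular value 0 solve y^{2m} = -(2m + 1) and come in opposite
# pairs +/- alpha_k, so P1(alpha_k) - P1(-alpha_k) = 0.

def nonzero_roots_P2bar(m):
    # roots of y^{2m} + (2m + 1) = 0
    return np.roots([1.0] + [0.0] * (2 * m - 1) + [2 * m + 1.0])

def P1(y):
    return y ** 2

m = 2
roots = nonzero_roots_P2bar(m)

# every root is paired with its negative among the roots...
for a in roots:
    assert np.min(np.abs(roots + a)) < 1e-8

# ...and P1 cannot separate the members of a pair
diffs = [P1(a) - P1(-a) for a in roots]
assert all(abs(d) < 1e-9 for d in diffs)
```

For odd $n$ the roots of $\overline{P}_{2}$ are no longer symmetric under $y\mapsto-y$, which is consistent with the properness observed in \cite{Deveney2002}.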
\bibliographystyle{amsplain}
\section{Lecture One: Tetrahedral Condensed Matter}
Water acts in many ways, from dissolving salt to creating the medium for all life on Earth. It is so important and multifaceted that whole books can be and are written about it. See, for example, David Eisenberg's and Walter Kauzmann's timeless and recently re-issued monograph on the physical properties of water~\cite{EisenbergKauzmann2005}. Our charge in these three lectures is to describe something about this material from a molecular perspective. In limiting scope, we focus on thermal fluctuations and their consequences on solvation and self-assembly. The presentation is like that of a textbook chapter, not a comprehensive review. We make use of statistical mechanics at the level it is treated in Ref.~\cite{Chandler1987}. While our focus is on water, what we say applies to much of liquid matter, where good background is found in Ref.~\cite{BarratHansen2003}.
The perspective we adopt is influenced by the results of computer simulations because this approach provides unambiguous microscopic detail, albeit for idealized models. Combined with experiments and theory, simulation is central to all modern understanding of condensed matter. Water is a most important example. Computer simulations of water were pioneered in the 1970s by Aneesur Rahman and Frank Stillinger. Many subsequent advances have validated their general approach and enhanced our understanding of water. Reviews of that early work in Refs.~\cite{Stillinger1976} and~\cite{Stillinger1980} remain informative to this day.
Our first lecture covers properties of pure water, particularly distribution functions related to local arrangements of water molecules. The second lecture is about free energies of solvation and how these free energies are related to the statistics of spontaneous molecular fluctuations in water. The third and last lecture builds from that stage to treat forces of self assembly, especially hydrophobic forces, which act on molecular and supramolecular scales.
We begin at the smallest length scales, those of one water molecule.
\subsection{Molecular Structure}
\label{sec:MolStruct}
Figure~\ref{fig:WaterStructure} illustrates the water molecule and its most significant interaction---the hydrogen bond. Though the molecule is quite polar, its electron density is dominated by the electrons of the oxygen atom. As such, the space-filling volume of a water molecule is approximately spherical with van der Waals radius of~$1.4\,$\AA, like that of its isoelectronic partner, neon. Because this volume is roughly spherical, it is often convenient to identify the position of a water molecule with the position of its oxygen nucleus. The $\O\H$ chemical bond is about~$1\,$\AA\ long and the $\H\O\H$ angle is about $104.5^\circ$.
\begin{figure}
\begin{tabular}{cc}
\includegraphics{WaterStructureSchematic}&
\includegraphics{HBondSimpler}
\end{tabular}
\caption{\label{fig:WaterStructure}Geometry of a water molecule (left) and a typical hydrogen-bond (right).}
\end{figure}
The hydrogen bond interaction is largely electrostatic in origin, and it is strongest when the intermolecular separation, $R$ in Fig.~\ref{fig:WaterStructure}, is about $3.0 \pm 0.2\,$\AA, and the angle, $\theta$, is $\theta \lesssim 20^\circ$. This linear hydrogen-bond has a maximum adhesive strength of about $6\,$kcal/mol, which coincides with~$10\,k_{\text{B}} T$ at room temperature (i.e., $25^\circ\,$C). Such a large attraction between molecules is unusual. But the strength is only this high over a limited range of relative positions and orientations, so that most of the hydrogen-bond strength is lost for $\theta \gtrsim 30^\circ$ and $R \gtrsim 3.5\,$\AA, and the interaction decays as~$1/R^3$ to negligible values for separations that are a few Angstroms larger.
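The geometric criterion just described can be stated as a few lines of code. The following sketch (an illustration of ours; the function name and cutoffs merely echo the numbers quoted above) classifies a donor $\O\H\cdots\O$ geometry by the $\O\O$ distance $R$ and the angle $\theta$:

```python
import numpy as np

# Minimal sketch (our own illustrative function, not from the text) of
# the geometric hydrogen-bond criterion quoted above: a donor O-H...O
# arrangement counts as hydrogen-bonded when the O-O separation R is
# below ~3.5 A and the angle theta between the O-H bond and the O-O
# axis stays below ~30 degrees.

def is_hydrogen_bond(o_donor, h, o_acceptor, r_max=3.5, theta_max=30.0):
    """Classify one donor O-H ... acceptor O geometry (Angstroms, degrees)."""
    oo = o_acceptor - o_donor
    oh = h - o_donor
    r = np.linalg.norm(oo)
    cos_theta = np.dot(oh, oo) / (np.linalg.norm(oh) * r)
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return (r <= r_max) and (theta <= theta_max)

# a nearly linear geometry at R = 3.0 A is bonded...
o1 = np.array([0.0, 0.0, 0.0])
h = np.array([1.0, 0.0, 0.0])        # 1 A O-H bond along the O-O axis
o2 = np.array([3.0, 0.0, 0.0])
assert is_hydrogen_bond(o1, h, o2)

# ...while a strongly bent one (theta ~ 45 degrees) is not
h_bent = np.array([0.7071, 0.7071, 0.0])
assert not is_hydrogen_bond(o1, h_bent, o2)
```

Any such definition is somewhat arbitrary, as noted below, but reasonable choices of the cutoffs give consistent pictures of the network.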
Liquids in general are characterized by nearly-balanced competition between energy and entropy. When energy dominates, the material is solid, and when entropy dominates, the material is gaseous. Liquid water exhibits this balance and is thus ordinary in this and many other respects. Its values for density, viscosity and diffusion constant are comparable to those of other liquids at ambient conditions (i.e., $25^\circ\,$C and $1\,$atm pressure). But the energy-entropy competition in water is remarkable because its large adhesive energy acts over a limited range of configurations. It makes water fragile in the sense that its properties have unusually strong dependence with respect to temperature and pressure, as discussed below.
Frozen water is ice, depicted in Fig.~\ref{fig:HBondNetwork}(a), with the individual molecules arranged in nearly perfect tetrahedral order. The resulting hydrogen-bonding network is nearly defect-free, with each water hydrogen-bonded to four other waters. Liquid water, on the other hand, is bound together by a disordered network of hydrogen-bonds, depicted in Fig.~\ref{fig:HBondNetwork}(b). Hydrogen bonds are constantly breaking and reforming, so that hydrogen-bonded partners switch allegiances regularly.
On average, a water molecule in the liquid will have fewer than the energetically optimum four hydrogen-bonding partners. But the number of hydrogen bonds fluctuates, and for a tagged water molecule at any one point in time the number typically ranges from two to five. On rare occasions, there are even as few as one or none. The precise numbers and their probabilities depend upon the definition of a hydrogen bond, and conventions for that definition are somewhat arbitrary. For any reasonable definition, however, a basin in the potential energy landscape of liquid water coincides with a specific hydrogen-bonding network, whereas transition states or saddles separating basins in that landscape coincide with breaking hydrogen bonds in the network, and the probability distribution for the number of hydrogen bonds that a given molecule experiences is unimodal.
Without theory and simulation, experimental probes of water structure and dynamics are often difficult to interpret. This is especially true for spectroscopy. These experiments occasionally motivate pictures of water structure different than those described above. One example is the two-state model of water, where the liquid is imagined to be an ideal mixture of bonded and non-bonded molecules. Unions of experiment with theory and simulation have dispensed with that idea, as illustrated, for example, in Refs.~\cite{FeckoEtAl2003}, \cite{EavesEtAl2005} and \cite{Geissler2005}.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=60mm]{IceIh}&
\includegraphics[width=60mm]{LiquidWater2Cropped}\\
(a) Ice&(b) Liquid Water
\end{tabular}
\caption{\label{fig:HBondNetwork}Hydrogen-bonding networks of condensed water, where dashed lines depict hydrogen bonds. The picture on the left is a rendering of the molecular structure of ice I. The picture on the right is a configuration generated through a classical trajectory (i.e., molecular dynamics computer simulations) in which forces are computed according to the SPC/E model of intermolecular potentials~\cite{BerendsenGrigeraStraatsma1987}. Measures of structure, dynamics and thermodynamics obtained with this and similar models are in reasonable harmony with those obtained from experiments on real water.}
\end{figure}
One experimentally-accessible measure of liquid structure is a pair distribution function. For water, there are three: $g_{\O\O}(r)$, $g_{\O\H}(r)$~and~$g_{\H\H}(r)$, which measure the probability that two different atoms, oxygen~or~hydrogen, are a distance~$r$ apart. More precisely,
\begin{equation}
\label{g_of_r}
\rho g_{\O\O}(r) = V \Bigl\langle \sum_{j>1} \delta({\vec r}_1^{(\O)}) \delta({\vec r}_j^{(\O)} - {\vec r}) \Bigr\rangle,
\end{equation}
where $\rho$ is the number density of water, $V$ is the volume of the system, ${\vec r}_i^{(\O)}$ denotes the position of the oxygen atom of molecule~$i$, the angular brackets denote a thermal average, and the $\delta$-functions are Dirac's functions, which have unit volume and are non-zero only at points where their arguments are zero. Analogous expressions apply for $g_{\O\H}(r)$~and~$g_{\H\H}(r)$. In an isotropic system, they depend only on the magnitude of ${\vec r}$, $r = |{\vec r}|$.
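In a simulation, the average in Eq.~(\ref{g_of_r}) is estimated by histogramming pair separations. The following sketch (our own; the positions are random placeholder data, not water configurations) normalizes each radial shell by the ideal-gas count so that an uncorrelated fluid gives $g(r)=1$:

```python
import numpy as np

# A minimal histogram estimator (our own sketch; placeholder data) for
# the O-O pair distribution function defined above: count pair
# separations in a periodic cubic box and normalize each radial shell
# by the ideal-gas count rho * (4/3) pi (r_+^3 - r_-^3).

def pair_distribution(positions, box, r_max, n_bins=100):
    n = len(positions)
    rho = n / box ** 3
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)            # minimum-image convention
        dist = np.linalg.norm(d, axis=1)
        counts += 2 * np.histogram(dist, bins=edges)[0]  # both members of each pair
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = counts / (n * rho * shell)
    return 0.5 * (edges[1:] + edges[:-1]), g

# sanity check on uncorrelated (ideal-gas) positions, where g(r) -> 1
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
r, g = pair_distribution(pos, box=10.0, r_max=4.0)
assert abs(float(np.mean(g[r > 1.0])) - 1.0) < 0.1
```

Applied to oxygen positions from a water simulation, the same estimator produces curves like those in Fig.~\ref{fig:neutronGAndLJG}.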
Neutron and x-ray scattering experiments probe different weighted combinations of these functions. By adjusting isotopic concentrations and thereby altering weights, each distribution function may be estimated. Figure~\ref{fig:neutronGAndLJG}(a) shows the three pair distribution functions as deduced from neutron scattering measurements. These functions are fully consistent with and supportive of a picture in which liquid water is organized with a preference for local tetrahedral ordering.
For comparison, Fig.~\ref{fig:neutronGAndLJG}(b) shows the radial distribution function of argon. Argon has no strong and directional adhesive forces, and it is therefore called a non-associated liquid. Most liquids are non-associated. Structure in such liquids is dominated by packing, and since argon atoms are spherical, liquid argon structure is essentially that of a dense fluid of hard spheres. An integral over the first peak in $g(r)$ indicates that an atom in liquid argon has on average about 12 nearest neighbors, while the corresponding integral over $g_{\O\O}(r)$ indicates that a water molecule in liquid water has on average about 4 nearest neighbors. Further, while the second peak for the $g(r)$ of liquid argon is at about twice the nearest-neighbor separation, the second peak of $g_{\O\O}(r)$ is located at around $1.6$~times the nearest-neighbor separation, corresponding to water molecules that share a common hydrogen-bonding partner with local tetrahedral order.
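The nearest-neighbor counts quoted here follow from integrating over the first peak. Writing $r_{\min}$ for the location of the first minimum of $g_{\O\O}(r)$ (roughly $3.5\,$\AA; this number is quoted for orientation, not taken from the source), the standard relation is
\begin{equation}
n_{\mathrm{nn}} = 4\pi\rho \int_0^{r_{\min}} g_{\O\O}(r)\, r^2\, dr \approx 4\,.
\end{equation}
The analogous integral for argon, taken to the first minimum of its $g(r)$, gives the quoted value of about 12.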
\begin{figure}\begin{center}
\begin{tabular}{cc}
\includegraphics{MyGRsNeutron}&
\includegraphics{MyLJGofR}\\
\includegraphics{DiagramGRsNeutron}&
\includegraphics{DiagramLJGofR}
\end{tabular}
\end{center}\caption{\label{fig:neutronGAndLJG}Left: Pair distribution functions of water at ambient conditions, as obtained from neutron scattering experiments (adapted from Ref.~\cite{Soper2001}). The shortest-distance peaks for the OH and HH functions refer to intramolecular separations between atoms. Neutron scattering detects these peaks along with intermolecular features. The intramolecular peaks are shown here for purposes of comparing with the remaining intermolecular features. Right: Radial distribution function of a model of argon at temperature $T=94.4\,$K and density $\rho=1.374\,$g/cc (adapted from Ref.~\cite{Rahman1964}). The schematic pictures at the bottom depict the pertinent geometries for water (left) and for argon (right) from which the main features in the exhibited distribution functions can be rationalized.}
\end{figure}
Because non-associated liquids are dominated by packing forces, as detailed in Weeks-Chandler-Andersen theory~\cite{ChandlerWeeksAndersen1983}, the structures of these fluids are nearly athermal, responding to changes in temperature only insofar as these cause changes in density. The structure of water, on the other hand, is very sensitive to temperature. Figure~\ref{fig:XRaysGofR} shows the function~$g_{\O\O}(r)$ for water, as determined from X-ray scattering experiments at~$25^\circ\,$C~and~$75^\circ\,$C. The most striking feature is that most of the structure beyond the first peak disappears above around $50^\circ\,$C, even though the density of the liquid decreases by less than~$1$\%. This notable change can be rationalized in terms of the competition between entropy and enthalpy. While hydrogen-bonds are energetically very favorable, the open structure they impose is entropically unfavorable. At ambient conditions, enthalpy narrowly dominates, while at higher temperatures, entropy plays the larger role. In warm water, above $50^\circ\,$C, hydrogen bonds are often broken and no longer dictate local tetrahedral ordering of molecules. This sensitivity to temperature implies that water has a large heat capacity at ambient conditions. Indeed, per water molecule, the heat capacity at constant volume, $C_{\text{\tiny V}}$, is around $10\,k_{\text{B}}$. For simple non-associated liquids, the corresponding heat capacity is an order of magnitude smaller.
\begin{figure}\begin{center}
\includegraphics{MyNartenLevyGR}
\end{center}\caption{\label{fig:XRaysGofR}Pair distribution functions of water at $25^\circ\,$C~and~$75^\circ\,$C, determined from X-ray scattering measurements. Adapted from Ref.~\cite{NartenLevy1971} (data smoothed for clarity).}
\end{figure}
The competition between hydrogen-bond enthalpy and packing entropy is manifest in many other properties of water around ambient conditions, though the specific crossover temperatures and pressures vary slightly with different observables. For example, at atmospheric pressure, water attains a maximum density at~$4^\circ$C, where its structure is most open (Fig.~\ref{fig:DensityAndDiffusionMaximum}(a)). Other examples are discussed below.
\begin{figure}
\begin{tabular}{cc}
\includegraphics{ClearerDensities}&
\includegraphics{ClearerSelfDiffusion}\\
(a)&(b)
\end{tabular}
\caption{\label{fig:DensityAndDiffusionMaximum}(a)~Density of water at $1$~bar and $200$~bar. Data from~\cite{NISTWater}. (b)~Self-diffusion constant of water at $T=4^\circ\,$C~and~$T=45^\circ\,$C. Adapted from Ref.~\cite{HarrisWoolf1980}.}
\end{figure}
\subsection{Diffusion}
\label{sec:diffusion}
Fluctuations in the structure of liquid water allow molecules to diffuse. The temperature and pressure dependence of the self-diffusion constant of water reflects the degree to which fluctuations change with external conditions. For temperatures below about $310\,$K, there exists a pressure at which the diffusion constant~$D$ of water is maximal, as shown in Fig.~\ref{fig:DensityAndDiffusionMaximum}(b). As with the density maximum, this maximum is a manifestation of the nearly-balanced competition between enthalpy and entropy, about which we will have more to say. First, however, note that the diffusion constant of water under most common conditions is, to within an order of magnitude, equal to $10^{-5}\,$cm$^2$/s, a value that is typical of dense liquids at ambient conditions. This constant measures the rate at which the mean-squared displacement of a particle increases with time,
\begin{equation}
\langle | {\vec r}_1^{(\O)}(t) - {\vec r}_1^{(\O)}(0) |^2 \rangle \sim 6 D t,\quad \text{for large } t.
\end{equation}
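This relation can be checked on synthetic data. The sketch below (not from the source; the trajectory is an idealized Brownian walk with a preset diffusion constant, in arbitrary units) recovers $D$ from the slope of the mean-squared displacement:

```python
import numpy as np

rng = np.random.default_rng(1)
D_true, dt = 0.23, 1e-3        # preset diffusion constant and time step (arbitrary units)
P, T = 4000, 500               # independent walkers, time steps

# Brownian dynamics: each Cartesian increment has variance 2*D*dt,
# so that <|r(t) - r(0)|^2> = 6*D*t in three dimensions.
steps = np.sqrt(2.0 * D_true * dt) * rng.standard_normal((P, T, 3))
traj = np.cumsum(steps, axis=1)           # positions relative to r(0) = 0

msd = (traj**2).sum(axis=2).mean(axis=0)  # average over the P walkers
t = np.arange(1, T + 1) * dt
D_est = np.polyfit(t, msd, 1)[0] / 6.0    # slope of MSD divided by 6
```

On real simulation data one would also average over time origins and discard the short-time ballistic regime before fitting the slope.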
The value of $D$ for water is such that after about~$1\,$ps, on average a water oxygen travels about~$1\,$\AA. By taking the time-derivative of both sides of the above equation, we obtain the relationship connecting the self-diffusion constant to the velocity auto-correlation function,
\begin{equation}
\beta m D = \langle {\vec v}^2 \rangle^{-1} \int_0^\infty dt\, \langle \vec{v}(0)\cdot\vec{v}(t) \rangle = \tau_v \approx 10^{-12}\,\text{s}.
\end{equation}
Here, $\beta^{-1} = k_{\text{B}} T$, $m$ is the mass of an oxygen atom, and $\vec{v}$ is shorthand for $\vec{v}_1^{(\O)}$, the velocity of the oxygen atom of water molecule~$1$.
The right-hand side of this equation is the time-scale~$\tau_v$ beyond which $\langle | {\vec r}_1^{(\O)}(t) - {\vec r}_1^{(\O)}(0) |^2 \rangle$ goes as $6Dt$, and this time for water, of the order of 1 ps, is also typical of most dense liquids at ambient conditions.
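The step connecting the two preceding relations can be made explicit. Differentiating the mean-squared displacement and using time-translation invariance of the equilibrium average gives
\begin{equation}
\frac{d}{dt} \langle | {\vec r}_1^{(\O)}(t) - {\vec r}_1^{(\O)}(0) |^2 \rangle = 2 \int_0^t dt'\, \langle \vec{v}(0)\cdot\vec{v}(t') \rangle \,\to\, 6D \quad \text{as } t\to\infty\,,
\end{equation}
so that $\int_0^\infty dt\, \langle \vec{v}(0)\cdot\vec{v}(t) \rangle = 3D$. Combining this with the equipartition value $\langle \vec{v}^2 \rangle = 3 k_{\text{B}} T / m = 3/\beta m$ recovers $\beta m D = \tau_v$.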
The diffusion constant is a measure of average molecular motion. The actual motions of liquid water molecules are intermittent, and at any single 1 ps time frame, a tagged water may or may not move a distance of the order of 1 \AA. For a water molecule to diffuse, it must break an existing hydrogen bond and form a new bond. Conversely, once a hydrogen bond is broken, the previously-paired water molecules are far more likely to drift apart than while they were bonded. Diffusion and hydrogen-bond dynamics are thus coupled. At least in part because of this coupling, hydrogen-bond lifetimes have a non-trivial distribution~$P_{\text{HB}}(\tau)$, whose mean corresponds to the typical bond lifetime,
\begin{equation}
\tau_{\text{HB}} = \int \tau \,P_{\text{HB}}(\tau) \, d\tau \approx 10^{-12}\,\text{s}.
\end{equation}
Direct experimental measures of $P_{\text{HB}}(\tau)$ are not available, but its general features can be inferred from simulation. The typical lifetime~$\tau_{\text{HB}}$ is comparable to the diffusion timescale~$\tau_v$.
The maximum in the diffusion constant reflects the effect of pressure on the entropy-enthalpy balance of the hydrogen-bonding network of water. As pressure increases from $1\,$bar to about $1\,$kbar, hydrogen bonds are destabilized, so it is easier for water molecules to break and reform hydrogen bonds and thus, diffuse. For moderate pressures, this effect is larger than the counteracting slowdown in diffusion expected from the higher densities induced by higher pressures. There is a crossover at around $1\,$kbar, above which the diffusion constant, as in a simple liquid, decreases with increasing pressure. At high temperatures, as evidenced in Fig.~\ref{fig:XRaysGofR}, the hydrogen-bonding network is already substantially perturbed, and only the simple-liquid dependence of diffusivity on pressure is observed.
\subsection{Chemistry in water}
Water plays important roles as the medium or solvent for chemical processes. It also plays important roles as a reactant. Chemical properties of liquid water in those cases mostly pertain to the presence of dissociated water molecules, the hydroxide and hydronium ions, $\O\H^-$ and $\H_3\O^+$, respectively. They are in dynamic equilibrium with intact water molecules: $2\,\H_2\O \rightleftharpoons \O\H^- + \H_3\O^+$. While the concentrations of these ions are extremely low (about $10^{-8}$~times smaller than that of intact water molecules), the presence of these ions is crucial to acid-base chemistry, biological pumps and motors, and, possibly, transportation fuels of the future.
A proton in water, sometimes denoted by~$[\H^+]_\mathrm{aq}$, is usually bonded to or strongly associated with one or more water molecules. Most often it appears in a hydronium ion, $\H_3\O^+$, surrounded by other water molecules. Such protons diffuse in water faster than do the individual molecules. They do so by moving along hydrogen-bonding wires, whereby a hydronium ion ($\H_3\O^+ = [\H^+]_{\mathrm{aq}}$) that is hydrogen-bonded to a water molecule ($\H_2\O$) transfers the extra proton along a hydrogen bond. Recall that in a hydrogen bond, the chemical $\O$-$\H$ bond on one molecule points towards the O of another molecule, $\O\H \cdots \O$. The H in that arrangement is called the donor hydrogen, and it is a proton of a donor hydrogen that moves from the $\H_3\O^+$ to the adjacent $\H_2\O$. After this transfer, the initially donating ion becomes a stable water molecule and the initially receiving water molecule becomes an ion. The new ion can then transfer one of its three protons in a similar fashion to yet another water molecule. This chain of events, called the Grotthuss mechanism, can continue indefinitely provided water molecules are networked together with hydrogen bonds. Through this mechanism, excess charge of the hydronium ion can travel with minimal diffusion of water molecules.
While the diffusion of the proton is faster than that of intact water molecules, it is only so by factors that are of order one. This is because the shuttling of charge along a chain of hydrogen bonds requires forces from molecules surrounding the chain, and these in turn require some changes in the orientations of water molecules---the same sorts of reorganization that can also lead to hydrogen bond breaking and molecular diffusion. Illustrative computer simulations of this dynamics of water and solvated protons in Refs.~\cite{SchmittVoth1999}, \cite{DayEtAl2002}~and~\cite{MarxEtAl1999} employ models in which species can break and make chemical bonds.
Auto-ionization of water, $[\H_2\O]_{\mathrm{aq}} \rightarrow [\H^+]_{\mathrm{aq}} +[\O \H^-]_{\mathrm{aq}}$, also involves the Grotthuss mechanism, as shown in Fig.~\ref{fig:AutoIonization} below. But before describing how this ionization occurs, we consider further the sorts of fluctuations that occur in water. It is fluctuations---in this case, fluctuations of electric fields---that can cause a proton to leave one oxygen and join another.
\subsection{Density fluctuations}
\label{sec:DensityFlucts}
Microscopic fluctuations in density are related to how water adjusts to the presence of solutes in general, and these fluctuations control the organization of apolar molecules in water in particular. Remarkably, density fluctuations on small length scales obey Gaussian statistics to good accuracy. We will exploit this fact in the next two lectures.
One way to study density fluctuations is to use computer simulations to calculate the probability $P_\V(N)$ for observing $N$~waters within a probe volume~${\text{\tiny V}}$. Figure~\ref{fig:GaussianDensities}(a) shows these probabilities for water in spherical probe volumes, and compares them to Gaussian distributions with the same mean and variance. Gaussian statistics are not unique to water. Figure~\ref{fig:GaussianDensities}(b) shows a similar collection of $P_\V(N)$ distributions in a hard-sphere fluid. Gaussian statistics of this sort are consistently found in numerical simulations of a wide variety of dense homogeneous fluids. This result is remarkable because it is not obvious why the central limit theorem should apply for molecular-scale probe volumes~${\text{\tiny V}}$.
\begin{figure}\begin{center}
\begin{tabular}{cc}
\includegraphics{WaterPn}&
\includegraphics{HardSpheresPn}\\
(a) Liquid Water&(b) Hard Sphere Fluid
\end{tabular}
\end{center}
\caption{\label{fig:GaussianDensities} (a) Probability of seeing $N$ waters in a spherical volume ${\text{\tiny V}}$ of radius (left to right) $2.5\,$\AA, $3.75\,$\AA~and~$5.0\,$\AA. Simulation results at ambient conditions (symbols) are compared with a Gaussian of equal mean and variance (lines). After Ref.~\cite{HummerEtAl1996}. (b) Same, for a fluid of hard spheres of diameter~$\sigma$ at density~$\rho=0.5\,\sigma^{-3}$, with a spherical probe volume ${\text{\tiny V}}$ of radius (left to right) $\sigma$, $1.5\sigma$~and~$2\sigma$. Adapted from Ref.~\cite{CrooksChandler1997}.}
\end{figure}
For large probe volumes~${\text{\tiny V}}$, mean-square density fluctuations are related to the isothermal compressibility. In particular,
\begin{equation}
\label{compressibility}
\frac{\langle(\delta N)^2\rangle_{\text{\tiny V}}}{\langle N \rangle_{\text{\tiny V}}} \to \frac{\partial \rho}{\partial \beta p}\,,\qquad {\text{\tiny V}}\to\infty\,,
\end{equation}
where $\rho$ is the average (bulk) molecular density, $p$ is pressure, and $\delta N = N - \langle N \rangle_{\text{\tiny V}}$ is the fluctuation in the number of molecules in~${\text{\tiny V}}$, $N$. Equation~\eqref{compressibility} is a standard result of the grand canonical ensemble; see, for instance, Refs.~\cite{Chandler1987} and~\cite{HansenMcDonald2006}. The limiting value of Eq.~\eqref{compressibility}, $\partial\rho/\partial\beta p$, is a dimensionless characterization of the compressibility. It is 1 for an ideal gas, and it far exceeds 1 and ultimately diverges as a fluid approaches a critical point. But liquid water at standard conditions is not an ideal gas and it is far from a critical point. Rather, it is near its triple point. For typical liquids near triple points, $\partial \rho / \partial \beta p$ is of the order of $10^{-2}$. For water, its value is about $0.06$, reflecting the not atypical but relatively open and malleable structure of this liquid.
From Eq.~\eqref{compressibility}, one expects that the variance of $P_{\text{\tiny V}}(N)$ will increase as the size of the probe volume ${\text{\tiny V}} = \langle N \rangle_{\text{\tiny V}} / \rho$ increases. This behavior is indeed observed in the probability functions shown in Fig.~\ref{fig:GaussianDensities}. But for volumes ${\text{\tiny V}}$ smaller than about 1 nm$^3$, the mean-square size of fluctuations in $N$ is larger than that estimated from Eq.~\eqref{compressibility}. In other words, homogeneous water is generally stiffer on large length scales (more than 1 nm) than on small length scales (less than 1 nm). In the next lecture, we have more to say about this fact, which pertains to the correlation length of water.
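The limiting ratio in Eq.~\eqref{compressibility} can be illustrated in the one case where everything is known analytically: an ideal gas, for which $\beta p = \rho$, so $\partial\rho/\partial\beta p = 1$, and occupation numbers of any probe volume are Poisson-distributed. A minimal sketch of that check (not from the source; densities and probe sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
L, rho, R = 50.0, 0.25, 2.0    # box edge, density, probe radius (arbitrary units)

# Ideal gas in the grand canonical ensemble: Poisson-distributed total
# number of particles, positions uniform in the box.
pts = rng.uniform(0.0, L, size=(rng.poisson(rho * L**3), 3))

# Count molecules inside many randomly placed spherical probe volumes,
# kept away from the walls so each sphere lies entirely in the box.
centers = rng.uniform(R, L - R, size=(4000, 3))
counts = np.array([
    (np.square(pts - c).sum(axis=1) < R**2).sum() for c in centers
])

# For an ideal gas this ratio equals d(rho)/d(beta p) = 1.
ratio = counts.var() / counts.mean()
```

For liquid water the same numerical experiment, run on simulation configurations, gives the much smaller value of about $0.06$ quoted above once ${\text{\tiny V}}$ is large.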
\subsection{Electric field fluctuations}
\label{sec:EFieldFlucts}
Microscopic fluctuations in electric fields play a prominent role in charge transfer reactions. In such processes, charge initially localized on one site transitions to be localized at a different site. These transferring charges can be associated with an electron or a proton. When the transition occurs spontaneously, it is the result of thermal fluctuation. In particular, these fluctuations can produce electric potentials that make the two localized charge states isoenergetic, facilitating movement of charge from one state to the other. The activation energy for this process is therefore dominated by the energies of the facilitating fluctuation in electric potentials, which ultimately refers to the free energies and therefore probabilities for fluctuations in arrangements of water molecules. These ideas form the basis of Rudolph Marcus' celebrated theory of electron transfer. An extended introduction to that theory is found in Ref.~\cite{Chandler1998}, and in Marcus' Nobel Prize lecture published in Ref.~\cite{Marcus1993}.
\begin{figure}\begin{center}
\begin{tabular}{cc}
\includegraphics[width=6cm]{MarcusOverview}&
\includegraphics[width=7cm]{NonEquDeltaE}\\
(a)&(b)
\end{tabular}
\end{center}\caption{\label{fig:Marcus}(a) Schematic of a solute in either state I~or state~II and the associated energies of interaction with the surrounding water. Photoexcitation promotes a transition from I~to~II at time~$t=0$ (following Ref.~\cite{JimenezEtAl1994}). (b) Probability of observing an energy gap~$\mathcal{E}$ a time~$t$ after photoexcitation (symbols: simulation, lines: Gaussian with equal mean and variance). Adapted from Ref.~\cite{GeisslerChandler2000}.}
\end{figure}
The basic idea of Marcus theory is shown in Fig.~\ref{fig:Marcus}(a). The potential energy of two charge states, denoted I~and~II, is shown as a function of the structure of the solvent. Generically, given a particular solvent configuration, the energies of the two charged states will differ by an amount~$\mathcal{E}$, called the energy gap. Without an external perturbation, the activation barrier to charge transfer is given by the free energy to change the solvent structure from a typical configuration to one where the two potential energy surfaces intersect.
In experiments, rather than wait for charge transfer to happen spontaneously (i.e., in the dark), one can inject a photon of energy~$\mathcal{E}(0)$ to produce a transition at time~$t=0$ from electronic state~I to electronic state~II. In computer simulations, one can take a typical equilibrium configuration of state~I and instantaneously change the charge configuration to coincide with that of state~II. In both cases, one can subsequently monitor the relaxation of the surrounding solvent as it adapts to the presence of the new charge state. Figure~\ref{fig:Marcus}(b) illustrates the statistics of the energy gap as a function of time elapsed since the charge transfer reaction. Here too, Gaussian statistics prevail, in this case for the energy gap, not only at equilibrium, but at all times during the non-equilibrium relaxation after the charge transfer.
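Though not derived in these lectures, the Gaussian statistics of the gap has a famous consequence: the free energy of each charge state, viewed as a function of $\mathcal{E}$, is then a parabola, and the two parabolas have equal curvature. The activation free energy for thermal charge transfer then takes the standard Marcus form
\begin{equation}
\Delta F^{\ddagger} = \frac{(\lambda_{\mathrm{s}} + \Delta F)^2}{4 \lambda_{\mathrm{s}}}\,,
\end{equation}
where $\Delta F$ is the free energy difference between states I~and~II and $\lambda_{\mathrm{s}}$ is the solvent reorganization energy (the subscript is used here to avoid confusion with the coupling parameter~$\lambda$ of the next lecture). See Refs.~\cite{Chandler1998} and \cite{Marcus1993} for the derivation.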
Another important observation illustrated in Fig.~\ref{fig:Marcus}(b) is the speed at which the excess energy added to the system at~$t=0$ is dissipated into the solvent. In this example, about $60\,k_{\text{B}} T$ is injected into the system. Within $30\,$fs, two thirds of that energy has been transferred to the solvent. The fast dynamics illustrated in Fig.~\ref{fig:Marcus} are polarization fluctuations associated with small rotational or rocking motions of water molecules---termed librations. These motions occur with high frequencies because protons have small mass and thus water molecules have small moments of inertia.
Because librations are so rapid, nuclear quantum effects can be important. Any motion of frequency~$\omega$ such that $\hbar\omega / k_{\text{B}} T \gtrsim 1$ will show quantization effects, and at ambient conditions, $\hbar / k_{\text{B}} T \approx 25\,$fs. Indeed, one mechanism of polarization fluctuations in water is through quantum tunneling. Librations move molecules through classically forbidden regions of configuration space. If the hydrogens are replaced by deuteriums, tunneling probabilities decrease because tunneling probabilities in general decrease exponentially with increasing particle mass. This decrease explains some of the substantial isotope effects in electron transfer reactions. Other observable nuclear quantum effects include a small difference in melting temperatures ($4^\circ$C for $\mathrm{D}_2\O$, $0^\circ$C for $\H_2\O$) and the temperature of maximum density ($12^\circ$C for $\mathrm{D}_2\O$, $4^\circ$C for $\H_2\O$). Theory for such effects is described in Refs.~\cite{Chandler1991} and \cite{Nitzan2006}.
\subsection{Water auto-ionization}
\label{sec:AutoIonization}
To conclude this first lecture, we discuss the role of fluctuations on the mechanism of water auto-ionization. At equilibrium in water, once in roughly 10 hours, a tagged intact water molecule will dissociate to become a hydroxide ($\O\H^-$) ion, thereby giving up one of its protons to the surrounding liquid. The process is both rare (occurring only once in hours) and fleeting (completing in only picoseconds after it starts). Such events can be examined in computer simulations through an importance sampling of trajectory space. This sampling was used to harvest many independent examples of auto-ionization events in water, and one example is shown in Fig.~\ref{fig:AutoIonization}. From analyzing the harvested trajectories and characterizing their transition states, it has been found that auto-ionization follows from the coincidence of two fluctuations, which we describe now.
First, a large electric field fluctuation briefly destabilizes an $\O$-$\H$ chemical bond and thus stabilizes an ion-separated configuration. Fluctuations large enough to break chemical bonds occur in water, but they are extremely rare. They are unusual circumstances where electric (mostly dipolar) fields from many, many molecules focus at the site of one O-H bond. But when this happens, because molecular orientations change quickly, the large destabilizing electric field lasts for no more than a few tens of femtoseconds. The presence of such large and fleeting electric fields in water was discovered by Graham Fleming and his co-workers~\cite{JimenezEtAl1994}. For the brief period of time when this large field persists, an O-H chemical bond is destabilized and charge separation can occur along a hydrogen bond wire by the Grotthuss mechanism, as seen in Panels B, C and D of Fig.~\ref{fig:AutoIonization}.
A second fluctuation is required to break the wire of hydrogen bonds along which the charge separation proceeds by the Grotthuss mechanism. Otherwise, when the rare electric field disappears, the separated charges will quickly recombine. The breakage of the wire is evident from Panels D and E of Fig.~\ref{fig:AutoIonization}. It removes the direct pathway to ion recombination. Average hydrogen-bond lifetimes are of the order of picoseconds, while the destabilizing electric field persists for only tens of femtoseconds. So, it is unusual for a breaking of a hydrogen bond wire to occur during the specific period of time when the electric field acts on that particular wire. After the wire is broken, with the charges separated and rapid recombination inhibited, the solvated proton (i.e., the hydronium ion) can then diffuse away from its parent hydroxide.
According to these theoretical results, therefore, auto-ionization is the coincidence of a large but fleeting electric field fluctuation and a switch in hydrogen bond allegiance. Both are rare events, the former much rarer than the latter, and the two must occur simultaneously---at the same point in space and time. Panels D~and~E of Fig.~\ref{fig:AutoIonization} show typical configurations just before and after the system crosses the transition state for this process. Experimental observations find that the recombination of initially separated hydroxide and hydronium ions is diffusion-limited with a reactive inter-ionic separation of about 8 \AA. The mechanism exhibited in Fig.~\ref{fig:AutoIonization} explains why this length is significantly larger than the typical separations between nearest-neighbor and second-nearest-neighbor water molecules. But direct experimental tests of this mechanism for the fundamental kinetic step of pH are not yet available.
\begin{figure}\begin{center}
\includegraphics[width=130mm]{AutoIonization}
\end{center}\caption{\label{fig:AutoIonization}Typical $150\,$fs trajectory illustrating an auto-ionization event, from Ref.~\cite{GeisslerEtAl2001}. Each panel is separated by $30\,$fs. The transition state for the process is visited in a time frame between those pictured in Panels D and E. The hydroxide and hydronium ions are shown in blue and yellow, respectively. The subset of hydrogen bonds that play a specific role in this particular trajectory are shown with dotted lines.}
\end{figure}
\section{Lecture Two: Solvation}
The previous lecture describes the behavior of pure liquid water, stressing how its local structure is tetrahedral, and how its density and polarization fluctuations obey Gaussian statistics to a good approximation. This second lecture describes how the nature of these fluctuations determines water's behavior as a solvent. The central quantity to be considered is solvation free energy---the reversible work done on the solvent to accommodate a solute molecule. This quantity determines the probability of solvation and its associated driving forces.
These forces can be strong, such as when water successfully outcompetes powerful ionic bonds, or when it completely shuns solutes, as happens in oil-water de-mixing. Amphiphilic substances add geometric frustration to the physics of solvation, where each molecule has two distinct portions at a fixed separation, one that binds water and the other that repels water. The fixed separation is a geometric constraint that frustrates water structure, which in turn can lead to aggregated nanostructures, like membranes and micelles. Proteins are large amphiphilic molecules for which water significantly affects dynamics and thermodynamics, for instance by providing one of the driving forces for protein folding and assembly.
This lecture describes underlying principles and simple quantitative estimates of free energies related to these behaviors. A few consequences are described in the next lecture.
\subsection{Solvation free energies}
Imagine a system where initially there is only bulk water and a solute in vacuum far away. The solute is then slowly inserted into the water (Fig.~\ref{fig:SolvationCartoon}). This insertion requires performing some reversible work, which is equal to the free energy difference~$\Delta\mu$ between the system before and after insertion. This free energy can be calculated from statistical mechanics as a ratio of partition functions,
\begin{equation}
e^{-\beta\Delta\mu} = \frac{\int dx\, e^{-\beta E_1(x)}}{\int dx\, e^{-\beta E_0(x)}}\,\,,
\label{eqn:FreeEnergyDefn}
\end{equation}
where $x$ denotes a configuration of the system, $E_0(x)$ is the energy of the pure solvent and $E_1(x)$ is the energy of the solvent when the solute is in the liquid. The energy function $E_1(x)$ is parameterized by the state of the solute, so changing the condition of the solute generally changes that function.
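One immediate consequence of Eq.~\eqref{eqn:FreeEnergyDefn} is worth recording (it is standard, though not spelled out in the text): multiplying and dividing the integrand of the numerator by $e^{-\beta E_0(x)}$ gives
\begin{equation}
e^{-\beta\Delta\mu} = \bigl\langle e^{-\beta [E_1(x) - E_0(x)]} \bigr\rangle_0\,,
\end{equation}
where $\langle \cdots \rangle_0$ denotes an equilibrium average over configurations of the pure solvent. This identity underlies test-particle (Widom) insertion estimates of solvation free energies in simulation.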
\begin{figure}\begin{center}
\includegraphics{SolvationCartoonClean}
\end{center}
\caption{\label{fig:SolvationCartoon}Solvation free energies and the forces of assembly. The excess chemical potentials for the separated pair (the red and green particles in the middle picture) and the complexed pair (the red and green particles in the right-hand picture) are the solvation free energies~$\Delta\mu_1$ and~$\Delta\mu_2$, respectively. The change in solvation free energy, $\Delta\mu_2 - \Delta\mu_1$, is the reversible work done on the liquid to assemble the complexed red-green solute from the separated red-green pair.}
\end{figure}
The quantity $\Delta\mu$ is called the \emph{solvation free energy}. In a dilute solution, where the solutes do not interact with each other, the chemical potential $\mu$ of the solute is given by
\begin{equation}
\beta\mu = \beta\Delta\mu + \ln(\rho/\rho_0),
\end{equation}
where $\rho$ is the concentration of the solute, relative to its concentration~$\rho_0$ in the standard state. The logarithm on the right-hand side is the only contribution to $\beta \mu$ when the solute and the water do not interact. This logarithm is thus the ideal gas chemical potential. For this reason, the solvation free energy is sometimes also called the solute's \emph{excess} chemical potential---the excess relative to that of an ideal gas. One way to deduce excess chemical potentials from experiments is to measure partitioning fractions of solutes in two coexisting fluids. Connections like that between $\Delta \mu$ and experiments are discussed in standard texts, e.g.~\cite{Chandler1987} and \cite{DillBromberg2010}.
When two or more solutes are dissolved in water, their solvation free energy can depend on their relative positions. This dependence may be due to packing effects, for example, or the structure of the hydrogen-bonding network of water around one solute being perturbed by the presence of the other solute. Regardless of how this dependence arises, the solvation free energy of the two solutes when they are nearby ($\Delta \mu _2$ in Fig.~\ref{fig:SolvationCartoon}) may differ from its value when they are far apart ($\Delta\mu_1$ in Fig.~\ref{fig:SolvationCartoon}). If the difference $\Delta \mu _2 - \Delta \mu_1$ is negative, then the associated state is favored. If it is positive, on the other hand, the dissociated state is favored. The solvent provides a force of assembly or disassembly, respectively, depending upon whether $\Delta \mu_2 - \Delta \mu_1$ is negative or positive. This type of force is what drives oily particles together in water and is ultimately responsible for the hydrophobic effect, the main topic we discuss in Lecture~Three.
The excess chemical potential is but one particular free energy that can be computed for a solute-water system. In particular, consider a variable $\lambda$ that interpolates between one solute and another. For instance, $\lambda$ may be the distance from the solute to the liquid-vapor interface, or it may be the distance between two dissolved solutes, or a parameter that is used to progressively ``create'' or ``grow'' solutes into the solvent. Denote by $\lambda(x)$ the value of~$\lambda$ for a particular configuration~$x$. At equilibrium, configurations with specific values of $\lambda$ occur with probability
\begin{equation}
P(\lambda) \propto \int dx\, \delta \left( \lambda(x) - \lambda \right) \exp[-\beta E_1(x; \lambda(x))].
\end{equation}
The right-hand side of this equation is a partition function for configurations constrained to the surface $\lambda(x) = \lambda$. The quantity $-k_{\text{B}} T \ln P(\lambda)$ is therefore the free energy of the system constrained to have that particular value of~$\lambda$. When $\lambda = 1$ and $\lambda = 0$ correspond to the solvent with and without the solute, respectively, then $- k_{\mathrm{B}} T \ln [P(1)/P(0)] = \Delta \mu$. At intermediate values of $\lambda$, $- k_{\mathrm{B}} T \ln [P(\lambda)/P(0)]$ serves as a useful generalization of solvation free energy.
We now apply this generalization in two ways, first to estimate solvation free energy of excluded volume (i.e., the reversible work to make space for a solute in a liquid solvent), and then to estimate solvation free energy due to the charge of a solute (i.e., the reversible work to polarize the solvent).
\subsection{Solvation of small excluded volumes}
To compute the solvation free energy for an excluded volume ${\text{\tiny V}}$, we use for $\lambda(x)$ the observable $N(x)$, which denotes the number of water molecules in a volume of that size in the liquid. In other words, we consider the probability that $N$ molecules exist in a volume ${\text{\tiny V}}$, $P_{\text{\tiny V}}(N)$. The excess chemical potential for a particle of volume ${\text{\tiny V}}$ absent any other interactions with the solvent is the reversible work to empty that volume, which is to say
\begin{equation}
\label{Pv(0)}
\Delta \mu_{\text{\tiny V}} = -k_{\text{B}} T \ln P_{\text{\tiny V}}(0).
\end{equation}
Gerhard Hummer, Lawrence Pratt and their co-workers introduced this way of considering solvation~\cite{HummerEtAl1996}, and we follow their lead.
Specifically, from the empirical observation discussed in the previous lecture---that density fluctuations in small volumes are Gaussian---we write
\begin{equation}
\label{GaussianDistribution}
P_\V(N) \approx \frac{1}{\sqrt{2\pi \,\sigma_{\text{\tiny V}} \,}}\exp\Biggl[-\frac{(N - \langle N \rangle_{\text{\tiny V}})^2}{2 \sigma_{\text{\tiny V}}} \Biggr],
\end{equation}
where $\langle N \rangle_{\text{\tiny V}} = \rho \, {\text{\tiny V}}$ is the average number of water molecules in the volume~${\text{\tiny V}}$, $\rho$ being the mean bulk density of the liquid, and
\begin{equation}
\sigma_{\text{\tiny V}} = \langle (\delta N)^2 \rangle _{\text{\tiny V}}
\end{equation}
is the mean-square fluctuation in the number of molecules in that volume. Accordingly, the solvation free energy of an excluded volume~${\text{\tiny V}}$ is
\begin{equation}
\Delta\mu_{\text{\tiny V}} \, \approx \, k_{\text{B}} T \Bigl[ \frac{{\text{\tiny V}}^2\,\rho^2}{2\sigma_{\text{\tiny V}}} + \frac{1}{2}\ln(2\pi \sigma_{\text{\tiny V}}) \Bigr]\,\,.
\label{eqn:DeltaMuSmall}
\end{equation}
Equation~\eqref{eqn:DeltaMuSmall} is an important result that holds to the extent that the Gaussian distribution, Eq.~\eqref{GaussianDistribution}, is valid. This in turn requires that ${\text{\tiny V}}$ not be too large. In particular, water at standard conditions is close to coexistence with its vapor phase, and as such, emptying a large enough volume can nucleate a vapor bubble and its associated liquid-vapor interface. The Gaussian approximation to $P_{\text{\tiny V}}(N)$ does not include this physics of phase change, but rather includes only the physics of fluctuations that do not move the region within ${\text{\tiny V}}$ far from its typical liquid state. We will see that this criterion limits the usefulness of Eq.~\eqref{eqn:DeltaMuSmall} to ${\text{\tiny V}}<1$ nm$^3$.
Keeping to ${\text{\tiny V}} < 1$ nm$^3$, Eq.~\eqref{eqn:DeltaMuSmall} provides a means for estimating solvation free energies associated with excluded volume in water. The values of $\langle N \rangle_{\text{\tiny V}}$ and $\sigma_{\text{\tiny V}}$ can be estimated simply. Specifically, the average number of water molecules in~${\text{\tiny V}}$ is $\langle N \rangle_{\text{\tiny V}} = \rho\,{\text{\tiny V}}$, and the mean-square fluctuations are given by
\begin{equation}
\sigma_{\text{\tiny V}} = \int_{{\vec r}\in {\text{\tiny V}}} \text{d}{\vec r}\, \int_{{\vec r}'\in {\text{\tiny V}}} \text{d}{\vec r}'\, \langle \delta\rho({\vec r}) \delta\rho({\vec r}') \rangle\,,
\end{equation}
where $\delta \rho ({\vec r})$ is the fluctuation (i.e., deviation from the mean) of the molecular density at position ${\vec r}$. The integrand is related to the pair distribution function $g_{\O\O}(r)$,
\begin{equation}
\label{PairCorrelation}
\langle \delta\rho({\vec r})\, \delta\rho({\vec r}') \rangle = \rho \, \delta({\vec r}-{\vec r}') + \rho^2 [g_{\O\O}(|{\vec r} - {\vec r}'|) - 1],
\end{equation}
and can thus be calculated when $g_{\O\O}(r)$ is known. Solvation free energies estimated from these formulas agree well with those obtained from computer simulations, and they form a convenient basis for accurate estimates of hydration free energies of small oily molecules in water. The extent of this accuracy is illustrated with Fig.~\ref{fig:SolvationScaling} later in this lecture.
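The double integral above is easy to estimate by Monte Carlo: sample point pairs uniformly inside the probe volume and average $g-1$. The toy pair distribution below, a correlation hole with a single first-shell step, is an assumption for illustration (its parameters are tuned so that $1+\rho\int(g-1)\,4\pi r^2\,\text{d}r$ comes out near $\chi \approx 0.06$), not real water data.

```python
# Monte Carlo sketch of sigma_V: the delta-function term gives rho*V, and the
# pair term is rho^2 * V^2 * <g(|r - r'|) - 1> over uniformly sampled pairs.
import math, random

random.seed(0)
rho = 0.0333          # bulk density, molecules per cubic angstrom
R = 4.0               # radius of the spherical probe volume, angstroms

def g(r):             # toy pair distribution (assumed, not water data):
    if r < 2.65:      # correlation hole...
        return 0.0
    if r < 3.2:       # ...then a first-shell peak, tuned to mimic chi ~ 0.06
        return 1.84
    return 1.0

def sample_point():   # uniform point inside the probe sphere, by rejection
    while True:
        p = tuple(random.uniform(-R, R) for _ in range(3))
        if sum(c * c for c in p) <= R * R:
            return p

V = 4.0 / 3.0 * math.pi * R**3
pairs = 200000
acc = 0.0
for _ in range(pairs):
    acc += g(math.dist(sample_point(), sample_point())) - 1.0

sigma_V = rho * V + rho**2 * V**2 * acc / pairs
print(sigma_V, rho * V)   # fluctuations come out strongly suppressed: sigma_V < <N>
```

The suppression of $\sigma_{\text{\tiny V}}$ well below $\langle N \rangle_{\text{\tiny V}}$ is what makes emptying even a small volume in water costly.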
Equation~\eqref{eqn:DeltaMuSmall} shows that the free energy to exclude small volumes is primarily entropic. In particular, this free energy grows with increasing temperature. To the extent that excluded volume effects are dominant, therefore, small solutes become \emph{less} soluble, not more, as temperature increases. This trend is indeed observed in experimentally measured solubilities of small apolar molecules in water. A trove of such data is found in Charles Tanford's classic monograph~\cite{Tanford1973}. At temperatures above about~$50^\circ$\,C, however, the structure of water begins to change substantially (Fig.~\ref{fig:XRaysGofR}), so that the solvation free energy of a small solute stops being proportional to temperature. The theoretical predictions are graphed in Fig.~\ref{fig:EntropyConvergence}.
\begin{figure}\begin{center}
\includegraphics{ClearerEntropyConvergence}
\end{center}
\caption{\label{fig:EntropyConvergence} Predictions from Eq.~\eqref{eqn:DeltaMuSmall} of solvation free energies for hard spheres in water, with hard-sphere radii comparable to those of Ne, Ar, Me (methane) and Xe, as a function of temperature. Around $T\approx 400\,$K, all curves have the same slope (i.e., the same solvation entropy). Adapted from Ref.~\cite{GardeEtAl1996}.}
\end{figure}
To gain further insight, consider ${\text{\tiny V}}$ significantly larger than a correlation volume. Such sizes are at the upper limit of where the Gaussian approximation is accurate. But for such volumes, the compressibility theorem mentioned in the previous lecture gives
\begin{equation}
(\rho\,{\text{\tiny V}})^{-1}\,\sigma_{\text{\tiny V}} = \frac{\langle (\delta N)^2 \rangle_{\text{\tiny V}}}{\langle N \rangle_{\text{\tiny V}}} \approx \frac{\partial \rho}{\partial \beta p} \equiv \chi \,.
\end{equation}
An approximate expression for the solvation free energy is thus
\begin{equation}
\Delta\mu_{\text{\tiny V}} \approx k_{\text{B}} T \Biggl[ \frac{\rho{\text{\tiny V}}}{2\chi} + \frac12\ln(2\pi \rho{\text{\tiny V}} \chi) \Biggr].
\end{equation}
This simple estimate is qualitatively correct, but overestimates the result of the more generally accurate Eq.~\eqref{eqn:DeltaMuSmall}. Nevertheless, it immediately shows that hydration free energies due to the excluded volume of small apolar species are typically $5$~to~$10\,k_{\text{B}} T$, because the dimensionless quantity $\chi$ has a value of about $0.06$ for water over a range of temperatures around ambient conditions. It further illustrates that the solvation free energy grows roughly linearly with increasing volume in the regime where the Gaussian approximation is valid.
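Plugging in representative numbers makes the estimate concrete. The inputs below are illustrative assumptions: $\rho = 0.0333$ molecules/\AA$^3$ for ambient water, $\chi = 0.06$, and a water-sized cavity of $30\,$\AA$^3$.

```python
# Numerical evaluation of the compressibility-based estimate,
#   beta*dmu ~ rho*V/(2*chi) + (1/2)*ln(2*pi*rho*V*chi).
import math

rho = 0.0333   # bulk number density, molecules per cubic angstrom (assumed)
chi = 0.06     # d(rho)/d(beta p), dimensionless, ambient water (assumed)
V = 30.0       # cavity volume, cubic angstroms (assumed, water-sized)

mean_N = rho * V
beta_dmu = mean_N / (2.0 * chi) + 0.5 * math.log(2.0 * math.pi * mean_N * chi)
print(beta_dmu)   # roughly 8, inside the 5-10 kT range quoted above
```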
Figure~\ref{fig:TanfordAlkanes} shows how the experimentally measured solvation free energies of small-to-medium alkyl chains scale with chain length. These solutes are extended but thin, so they can be treated with the theory we described above for small solutes. As chain length and solvent-excluded volume are proportional, these results illustrate that solvation free energies of small solutes are proportional to excluded volume. In many discussions, e.g., Ref.~\cite{StillEtAl1990}, these same results have been used to argue that solvation free energies of even small solutes may be modeled as scaling with solvent-accessible surface area (SASA), because chain length and SASA are also proportional. However, the fact that the apparent surface tension that would result is much smaller than the measured water-oil surface tension, and that the alkyl chains become less soluble, not more, as temperature is increased, shows that the proportionality with SASA is a misleading coincidence. References~\cite{Chandler2005} and~\cite{Tanford1979} provide further discussion on this point.
\begin{figure}\begin{center}
\includegraphics{ClearerTanfordAlkanes}
\end{center}
\caption{\label{fig:TanfordAlkanes}Solvation free energies of $\mathrm{C}\H_3 (\mathrm{C}\H_2)_{n-1} \mathrm{C}\O\O\H$ as a function of chain length. The line is a linear fit to the data. Adapted from Ref.~\cite{SmithTanford1973}.}
\end{figure}
Equation~\eqref{eqn:DeltaMuSmall} rationalizes a phenomenon called \emph{entropy convergence}. To discuss it, we introduce the entropy of solvation, which is the entropic component of the solvation free energy, given by
\begin{equation}
\Delta s= -\frac{\partial \Delta\mu}{\partial T}.
\end{equation}
It is found that along the saturation curve of water, the entropy of solvation of many different small solutes is different, but surprisingly, converges to a common, small value at a temperature of about~$400\,$K. This convergence is illustrated in Fig.~\ref{fig:EntropyConvergence}. Around this temperature, it is found that the factor $\sigma_{\text{\tiny V}}$ is nearly athermal, so the solvation free energies are essentially proportional to $T \rho^2(T)$. This combination is non-monotonic with temperature, and is maximal around $T=400\,$K. Thus, its derivative is zero there, leading to a vanishing entropy of solvation at that temperature. The observed small but non-zero value arises from the second term of Eq.~\eqref{eqn:DeltaMuSmall} and the small temperature dependence of $\sigma_{\text{\tiny V}}$.
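The logic of entropy convergence can be checked with a toy calculation: if $\Delta\mu(T) \approx A\,T\rho^2(T)$ with a solute-specific constant $A$, then $\Delta s = -\partial\Delta\mu/\partial T$ vanishes for every $A$ at the maximum of $T\rho^2(T)$. The density function below is a made-up stand-in for water's saturation curve, chosen only so that this maximum falls near $400\,$K.

```python
# Toy demonstration of entropy convergence: ds = -d(dmu)/dT vanishes at the
# maximum of T*rho(T)^2, independently of the solute-specific prefactor A.
import math

T0 = 566.0                      # made-up scale; T*rho^2 is maximal at T0/sqrt(2)

def rho(T):                     # toy stand-in for the saturation density
    return math.exp(-0.5 * (T / T0)**2)

def dmu(T, A):                  # first term of Eq. (DeltaMuSmall): A * T * rho^2
    return A * T * rho(T)**2

def entropy(T, A, h=0.01):      # ds = -d(dmu)/dT by central difference
    return -(dmu(T + h, A) - dmu(T - h, A)) / (2.0 * h)

T_star = T0 / math.sqrt(2.0)    # analytic maximum of T*rho(T)^2, near 400 K
print(T_star, entropy(T_star, 1.0), entropy(T_star, 20.0))
```

Below $T^*$ the solvation entropy is negative (solubility decreases on heating), and at $T^*$ it vanishes for all values of $A$ at once, which is the convergence seen in Fig.~\ref{fig:EntropyConvergence}.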
A similar convergence is also found around $T=400\,$K in the per-residue entropies of unfolding of proteins. Though it superficially resembles the entropy convergence of small solutes, the origin of this effect is actually more complicated and due to the interplay of small and large length-scale physics that we discuss in general at the end of this lecture and in Lecture~Three. Reference~\cite{HuangChandler2000a} provides further discussion on this particular point.
\subsection{Solvation of ions}
So far, we have discussed solvation of excluded volume. While all solutes exclude water from some region of space, most solutes act with additional forces on water, including weak dispersive attractions and electrostatic interactions. Weak interactions change the solvation free energy quantitatively but not qualitatively. On the other hand, owing to the large dielectric constant of water ($\epsilon \approx 80$), electrostatic interactions are significant, and often dominate solvation behavior when they are present. For instance, since we know that water dissolves many salts, it must be the case that the solvation free energies of the separate ions are larger than the strength of the ionic bonds between them in a crystal, which is about $1\,$eV, or about $40\,k_{\text{B}} T$ at ambient conditions.
\begin{figure}
\hfill\begin{tabular}{cc}
\includegraphics{SolvatingIonCartoonVertical}&
\includegraphics{F0phi}\\
(a)&(b)
\end{tabular}\hfill
\caption{\label{fig:SolvatingIonCartoon}(a) Calculating the solvation free energy of an ion in two steps. A free energy~$\Delta\mu_v$ is needed to expel waters from the ion's hard core. A free energy~$\Delta\mu_q$ is then required to place a charge~$q$ in the ion and thus polarize the solvent. (b) Free energies of the potential~$\phi$ at the center of an ion, before (solid) and after (dashes) the charge is inserted. The induced polarization of the solvent shifts the mean potential to $\langle\phi\rangle_q = -q/\kappa$.}
\end{figure}
As a model for electrostatic interactions, we estimate the solvation free energy of a single ion, as outlined in Fig.~\ref{fig:SolvatingIonCartoon}(a). An ion typically interacts with water by excluding a water from its hard core and by electrostatically polarizing the surrounding solvent. To be concrete, a Cl$^-$ ion excludes water from a sphere with radius of about $2.5\,$\AA, and can be modeled as having a point charge of $-e$ at its center. The free energy~$\Delta\mu_v$ to expel the water from the hard core is given by Eq.~\eqref{eqn:DeltaMuSmall}. The free energy~$\Delta\mu_q$ to then place a point charge~$q$ at the ion is given by
\begin{equation}
e^{-\beta\Delta\mu_q} = \frac{\int \text{d}\phi\, e^{-\beta[F_0(\phi) +q\phi]}}{\int \text{d}\phi\, e^{-\beta F_0(\phi)}},
\label{eqn:DeltaMuChargeSetup}
\end{equation}
where $\phi$ is the value of the electrostatic potential at the center of the ion given that a cavity for it has been created, and $F_0(\phi)$ is the free energy associated with a particular value of this potential, equal to $-k_{\text{B}} T \ln P(\phi)$. As outlined previously, the statistics of electric field and potentials in water are Gaussian, so $F_0(\phi)$ is quadratic in $\phi$. Moreover, since the water is charge neutral, the mean value of $\phi$ should be essentially zero\footnote{It won't be exactly zero, however, in the presence of large enough solutes that cause significant inhomogeneity. Liquid-vapor interfaces have a small surface dipole moment density, since a small number of water molecules at the interface tend to align their dipoles with the surface normal. The bulk liquid has been measured to be at a potential of about $0.1\,$V higher than the bulk vapor. This potential difference is the so-called \emph{surface potential} of the interface. For a discussion of these effects and how the presence of ions modifies the structure of the liquid-vapor interface, see Ref.~\cite{PetersenSaykally2006}.}. Hence, we approximate
\begin{equation}
F_0(\phi) \approx \frac12 \kappa \phi^2,
\end{equation}
where $\kappa$ is related to the fluctuations of~$\phi$ in the absence of the charge by equipartition,
\begin{equation}
\langle (\delta\phi)^2 \rangle = \frac{k_{\text{B}} T}{\kappa}.
\end{equation}
These fluctuations are, in turn, controlled by the dielectric constant of water and the size of the ion, as discussed below. With this expression for $F_0(\phi)$, we can evaluate the right-hand side of Eq.~\eqref{eqn:DeltaMuChargeSetup} and simplify the result to obtain
\begin{equation}
\Delta\mu_q = -\frac12 \beta q^2 \langle (\delta\phi)^2 \rangle = -\frac{q^2}{2\kappa}.
\end{equation}
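For completeness, this result follows by completing the square in the numerator of Eq.~\eqref{eqn:DeltaMuChargeSetup}:

```latex
\frac12 \kappa \phi^2 + q\phi
  = \frac12 \kappa \Bigl( \phi + \frac{q}{\kappa} \Bigr)^2 - \frac{q^2}{2\kappa} .
```

Shifting the integration variable by $q/\kappa$ then makes the numerator and denominator Gaussian integrals identical, so the ratio is simply $e^{\beta q^2 / 2\kappa}$, which reproduces $\Delta\mu_q = -q^2/2\kappa$.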
When the charge is placed into the ion, the surrounding solvent is polarized. As depicted in Fig.~\ref{fig:SolvatingIonCartoon}(b), the potential fluctuations are unchanged but the mean value of the potential shifts to
\begin{equation}
\langle\phi\rangle_q = -q/\kappa.
\end{equation}
This induced potential acts as a \emph{reaction field} of the solvent to the presence of the charge.
Polarizing the solvent in this way has a free energy cost of
\begin{equation}
F_0(\langle\phi\rangle_q) = \frac12 \kappa (q /\kappa)^2 = \frac{q^2}{2\kappa},
\end{equation}
but results in a favorable interaction energy of
\begin{equation}
q \langle\phi\rangle_q = -\frac{q^2}{\kappa}
\end{equation}
between the ion and the polarized solvent.
The value of~$\kappa$ can be determined by relating the fluctuations in the dipole moment density field of water to its dielectric constant and the geometry of the cavity (for details, see Ref.~\cite{SongChandlerMarcus1996}). It is found to be
\begin{equation}
\kappa = R \Bigl( 1 - \frac1\epsilon \Bigr)^{-1},
\end{equation}
where $R$ is the radius of the ion. This result is identical to the continuum electrostatics result~\cite{Griffiths1999}, where the work required to move a charge $q$ from vacuum into a spherical cavity of radius~$R$ carved into a dielectric material with dielectric constant~$\epsilon$ is
\begin{equation}
\Delta\mu_q = -\frac{q^2}{2 R} \Bigl( 1 - \frac1\epsilon \Bigr).
\label{eqn:Born}
\end{equation}
Equation~\eqref{eqn:Born} is the Born solvation formula. It is negative, so it is always favorable to solvate an ion. It scales as $q^2$, so divalent ions are much more strongly solvated than monovalent ones. Finally, it scales as $1/R$, so smaller ions are more soluble than larger ones.
For water, $\epsilon$ is so high that the factor in parentheses is virtually unity. To within an order of magnitude, the solvation free energies of ions are about $e^2 / (1\,\text{\AA}) \approx 10\,\text{eV} \approx 400\,k_{\text{B}} T$. As anticipated, these free energies are indeed sufficiently large to overcome the ionic bonds in salt crystals. Figure~\ref{fig:TestOfBorn} shows a more detailed comparison of the Born solvation formula to experimental enthalpies of solvation. It is clear that the Born solvation formula works quite well for all of these cases, and thus captures the dominant physics of solvation for ions.
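These magnitudes are easy to verify numerically. The sketch below evaluates Eq.~\eqref{eqn:Born} for a Cl$^-$-like ion, using the parameters quoted earlier in this section and the Gaussian-units conversion $e^2/(1\,\text{\AA}) \approx 14.4\,$eV.

```python
# Born solvation estimate, Eq. (Born), for a Cl- -like ion:
# q = -e, cavity radius R = 2.5 A, epsilon = 80 (values from the text).
R = 2.5          # cavity radius, angstroms
eps = 80.0       # dielectric constant of water
e2_per_A = 14.4  # e^2 / (1 A) in eV, standard Gaussian-units conversion
kT = 0.02585     # eV at about 300 K

dmu_q = -(e2_per_A / (2.0 * R)) * (1.0 - 1.0 / eps)   # in eV; q^2 = e^2
print(dmu_q, dmu_q / kT)   # about -2.8 eV, i.e. roughly -110 kT
```

Even for this fairly large monovalent ion, the electrostatic contribution dwarfs the few-$k_{\text{B}}T$ excluded-volume cost computed earlier.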
\begin{figure}\begin{center}
\includegraphics{ClearerTestOfBorn}
\end{center}
\caption{\label{fig:TestOfBorn}Enthalpies of hydration ($\Delta\mu - T [\partial\Delta\mu/\partial T]$) for a variety of monovalent ions as a function of inverse ionic radius, as measured experimentally (circles) and predicted by Eq.~\eqref{eqn:Born} (line). Adapted from Ref.~\cite{RashinHonig1985}. Similar correlations are seen for polyvalent ions.}
\end{figure}
\subsection{Solvation of large solutes}
\label{sec:SolvationLarge}
In both of the scenarios considered above, we could calculate solvation free energies from the statistics of small structural fluctuations in water. The solute, whether ideal or charged, acted as a small perturbation that did not fundamentally change the nature of the liquid. As solutes increase in size beyond about $1\,$nm, however, water responds to their presence in a massively collective fashion, undergoing nano-scale reorganization manifesting a macroscopic phase transition---the liquid-vapor phase transition---that occurs at thermodynamic conditions very close to those of ambient water. The solvation free energy is then dominated by the surface energy of the interface between the two phases, which for macroscopic solutes is given by
\begin{equation}
\Delta\mu \to \gamma A, \qquad\text{(large solutes)}.
\label{eqn:DeltaMuLarge}
\end{equation}
Here, $\gamma$ is the surface tension between water and the solute (about $72\,$mJ/m$^2$ for a solute that only excludes volume) and $A$ is the surface area of the solute.
For nanometer-sized solutes, there is a gradual crossover between the small length-scale behavior described by Eq.~\eqref{eqn:DeltaMuSmall} and the large length-scale behavior described by Eq.~\eqref{eqn:DeltaMuLarge}. This crossover is depicted schematically in Fig.~\ref{fig:SolvationScaling} for hard-sphere solutes of volume~$v$. The two estimates for $\Delta\mu$ intersect around $v$ of $1\,$nm$^3$. This sets the scale for where the crossover is relevant, though corrections to Eq.~\eqref{eqn:DeltaMuSmall} are already important for $v \gtrsim 0.5\,$nm$^3$. Solvation free energies eventually do tend to $\gamma A$, but convergence to this limit is slow because the ratio of these two quantities tends to~$1$ only as $v^{-1/3}$. Corrections to Eq.~\eqref{eqn:DeltaMuLarge} are significant even for excluded volumes measuring tens of cubic nanometers.
\begin{figure}\begin{center}
\includegraphics{DeltaMuVsV}
\end{center}
\caption{\label{fig:SolvationScaling}Solvation free energies for hard spheres of volume~$v$, computed through direct simulation with the SPC/E model of water at ambient conditions (black, Ref.~\cite{HuangGeisslerChandler2001}). The small length-scale estimate (Eq.~\eqref{eqn:DeltaMuSmall}) and the large length-scale estimate (Eq.~\eqref{eqn:DeltaMuLarge}) for the same model are also shown.}
\end{figure}
The breakdown of Eq.~\eqref{eqn:DeltaMuSmall} for larger~$v$ indicates that the $P_\V(N)$ distributions for these larger volumes deviate significantly from Gaussian behavior. This is indeed the case. Figure~\ref{fig:PvNFatTails} shows an explicit calculation of $P_\V(N)$ where ${\text{\tiny V}}$ is a cube of volume~$1.7\,$nm$^3$. The breakdown of Gaussian statistics appears as a fat tail in this distribution. This fat tail indicates that there is a mechanism beyond Gaussian density fluctuations that can be involved when removing $N$~waters from the probe volume~${\text{\tiny V}}$. The fat tail in $\ln P_\V(N)$ is well-described by a function that scales as $-\gamma (\delta N)^{2/3}$, which describes the free energy to form an empty cavity of volume~$\delta N / \rho$ inside of the probe volume. The physical consequences of this fat tail and corresponding crossover in length-scale are discussed in the next lecture.
\begin{figure}\begin{center}
\includegraphics{PvNCubic}
\end{center}
\caption{\label{fig:PvNFatTails}Water number distribution, $P_\V(N)$, for a cube of volume~$v = 1.7\,$nm$^3$ computed from importance sampling with a computer simulation of the SPC/E model of water at ambient conditions (solid), and the distribution that would be expected if the statistics of these number fluctuations were Gaussian (dashed). The fat tail is an indication of the crossover in solvation free energies to the large-length scale regime. Adapted from Ref.~\cite{PatelVarillyChandler2010}.}
\end{figure}
\section{Lecture Three: Hydrophobicity and Self-Assembly}
This lecture is about hydrophobicity, which essentially means it is about how and why oily species---termed hydrophobic species---separate from water in nanometer scale structures. ``Hydrophobic'' is actually a misnomer, because the underlying physics is not about oil fearing water, but rather about hydrogen bonds that are lost when water mixes with oil. A hydrophobic species is simply a molecule or complex of molecules that binds more weakly to water than water binds to itself. A hard sphere is the simplest (and idealized) example. Surfaces of nominally hydrophilic molecules can also be hydrophobic when the geometry of possible water-surface binding sites is incommensurate with favorable hydrogen-bonding patterns of liquid water.
Hydrophobic forces are the water-mediated interactions between oily species in water that cause these species to segregate or demix from water. Macroscopic demixing of oil and water manifests a first-order phase transition. Such manifestations require clusters of size larger than a critical nucleus. A small enough hard sphere in water will not trigger the physics of phase separation. Hydrophobic forces therefore become significant only for sufficiently large hydrophobic species. We will see that ``sufficiently large'' implies hydrophobic surfaces of low curvature that extend over lengths of the order of 1 nm or more.
Lecture Two provides the background to these ideas. Indeed, the $1\,$nm scale mentioned here is related to a finding discussed in the previous lecture, where we describe how solvation free energies are related to statistics of solvent density fluctuations and solvent polarization fluctuations. We show there that these statistics are essentially Gaussian for cases where the solute does not largely perturb the liquid. But we also show there that this situation changes in the disruptive presence of sufficiently extended hydrophobic surfaces. This lecture focuses on the consequences of these changes and a theory that embodies the underlying physics. Ref.~\cite{Chandler2005} is a review of the topic written with the same perspective as that of this lecture.
\begin{figure}
\begin{center}\includegraphics{ClearerHydrophobicAssembly}\end{center}
\caption{\label{fig:HydrophobicAssembly}Driving force~$\Delta G$ of hydrophobic assembly. At room temperature (thick black lines), the crossover length scale is about~$1\,$nm. At higher temperatures (thin red lines), the crossover length scale decreases and the driving force increases. After Ref.~\cite{Chandler2005}.}
\end{figure}
\subsection{The driving force for hydrophobic assembly}
Figure~\ref{fig:HydrophobicAssembly} helps to show how the assembly of hydrophobic aggregates in water is explained in terms of the length-scale dependence of solvation free energies discussed at the end of Lecture~Two. As schematized in that figure, the solvation free energy to solvate $N$ small, dissociated solutes scales as~$N$, and is mostly entropic in origin. The hydrogen-bonding network of water is stretched but not disrupted by these solutes. Except for small packing effects common to all dense liquids, the total solvation free energy of this collection of solutes is independent of their positions. As such, entropy disfavors aggregation, so the solutes disperse evenly throughout the solvent.
If enough of these solutes associate spontaneously, however, it becomes impossible for water to wrap its hydrogen-bonding network around the cluster. It then becomes preferable to sacrifice some hydrogen bonds completely and nucleate a vapor-liquid interface around the cluster. The cost of maintaining this interface is primarily enthalpic, and scales as $N^{2/3}$. Hence, for large enough $N$, it is energetically favorable for small solutes to associate into a larger cluster, with the free energy difference~$\Delta G$ between the dissociated and associated states termed the ``driving force'' for hydrophobic assembly. At ambient conditions, the critical nucleus that needs to form before aggregation becomes favorable is typically about $1\,$nm in size.
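The $N$ versus $N^{2/3}$ competition can be illustrated with rough numbers. In the sketch below, the per-solute solvation cost ($\approx 8\,k_{\text{B}}T$), the interfacial free energy ($\approx 0.175\,k_{\text{B}}T/$\AA$^2$, i.e., about $72\,$mJ/m$^2$ at ambient conditions) and a methane-like solute volume of $150\,$\AA$^3$ are illustrative assumptions, not fitted values.

```python
# Toy crossover estimate behind Fig. (HydrophobicAssembly): compare N dispersed
# small solutes (cost ~ N * mu1) with one spherical cluster whose cost is
# interfacial (~ gamma * A). All parameter values are illustrative assumptions.
import math

beta_mu1 = 8.0       # kT per dispersed small solute (assumed)
beta_gamma = 0.175   # kT per square angstrom, ~72 mJ/m^2 at 300 K (assumed)
v0 = 150.0           # methane-like solute volume, cubic angstroms (assumed)

def dispersed(N):
    return beta_mu1 * N

def clustered(N):    # surface free energy of a sphere of volume N*v0
    A = (36.0 * math.pi)**(1.0 / 3.0) * (N * v0)**(2.0 / 3.0)
    return beta_gamma * A

N = 1
while clustered(N) >= dispersed(N):
    N += 1
R = (3.0 * N * v0 / (4.0 * math.pi))**(1.0 / 3.0)
print(N, R)          # crossover near N ~ 30 solutes, cluster radius ~ 1 nm
```

The cluster radius at the crossover lands near $1\,$nm, consistent with the critical-nucleus scale quoted above.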
The temperature dependence of the driving force for assembly is non-trivial. As evidenced by Eq.~\eqref{eqn:DeltaMuSmall}, the solvation free energy of very small solutes is proportional to temperature, because of its entropic origin. Conversely, the surface tension of most liquids, including water, decreases with temperature. In other words, whereas rises in temperature make small solutes less soluble, they make large solutes more soluble. This curious dichotomy in the temperature dependence of solvation free energy has two consequences. First, the size of the critical nucleus for aggregation is reduced at higher temperatures. Second, the driving force towards hydrophobic association is stronger in hot water than in cold water. These consequences are pertinent to protein thermodynamics, where hydrophobic interactions stabilize the core of most globular proteins. At colder temperatures, the driving force for this hydrophobic collapse decreases, so at low enough temperatures, proteins undergo cold denaturation.
At high pressures, the dominant component to solvating large solutes is not the cost of forming a liquid-vapor interface, but the work needed to create a vacuum bubble in a high-pressure environment, which scales as~$N$. Hence, above pressures of about $500$\,atm, the driving force for hydrophobic assembly disappears, and proteins undergo pressure denaturation.
\subsection{Micelle Assembly}
\label{sec:micelles}
When small amphiphilic molecules, such as the fatty acids that compose cell membranes, are dissolved in water at low concentrations, entropy favors dispersing them uniformly as monomers. At higher concentrations, however, the driving force for hydrophobic assembly overcomes this entropic effect, so collections of these molecules assemble into nano-scale clusters, which are called ``micelles.'' In this way, the hydrophobic tails of the amphiphiles are separated from water by a layer of hydrophilic head groups. Experimentally, the change in aggregation behavior occurs abruptly at a critical micelle concentration, $\rho_{\text{cmc}}$, whose non-trivial temperature dependence can be understood from the considerations we have been describing---the competition between small length-scale solvation and interface formation.
\begin{figure}
\begin{center}\begin{tabular}{cc}
\includegraphics{CleanerMicellesEqu}&
\includegraphics{rhocmc}\\
(a)&(b)
\end{tabular}\end{center}
\caption{\label{fig:MicellesIntro} Micelle assembly. (a) Small amphiphiles dispersed uniformly as monomers in solution (three of which are shown); these monomers are in equilibrium with $n$-mers -- clusters of $n$ amphiphiles (one of which is shown). (b) For amphiphile concentrations~$\rho$ below~$\rho_{\text{cmc}}$, only the monomeric species (solid) is present. Above~$\rho_{\text{cmc}}$, $n$-mers form (dashes), and adding more amphiphiles only increases the number of $n$-mers.}
\end{figure}
A simplified model of micelle assembly is guided by Fig.~\ref{fig:MicellesIntro}. For simplicity, we shall assume that the micelles that form are all the same size, each containing $n$ amphiphiles, with that number $n$ to be determined. (For large enough $n$ we expect the dispersion of $n$-mer size, $\Delta n$, to be small as a consequence of a law of large numbers, specifically $\Delta n \sim \sqrt{n}$\,.) Accordingly, the total amphiphile concentration is
\begin{equation}
\rho = \rho_1 + n \rho_n,
\end{equation}
where $\rho_1$ is the density of monomers and $\rho_n$ is the density of $n$-mers. The law of mass action relates these two quantities by
\begin{equation}
\rho_n = \rho_1^n \exp( -\beta \Delta G ),
\label{eqn:MicelleMassAction}
\end{equation}
where $\Delta G$ is the driving force for assembly of the micelle, namely the free energy difference between one $n$-mer and $n$~monomers.
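A quick numerical solution of this mass-action balance shows the abrupt change of behavior at the cmc sketched in Fig.~\ref{fig:MicellesIntro}(b). The values $n = 10$ and $\beta\Delta G = -20$ below are made up for illustration, and densities are measured in units of $a^{-3}$.

```python
# Sketch of the law of mass action, Eq. (MicelleMassAction), in reduced units.
# Solve rho = rho1 + n * K * rho1^n for the monomer density rho1 by bisection;
# above the cmc, rho1 saturates while added amphiphiles go into n-mers.
import math

n = 10                       # aggregation number (assumed for illustration)
K = math.exp(20.0)           # exp(-beta*DeltaG), strongly favoring assembly (assumed)

def monomer_density(rho_total):
    lo, hi = 0.0, rho_total
    for _ in range(200):     # bisection on the monotonic mass-action balance
        mid = 0.5 * (lo + hi)
        if mid + n * K * mid**n < rho_total:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r1_lo, r1_hi = monomer_density(0.1), monomer_density(0.3)
print(r1_lo, r1_hi)          # monomer density barely grows as the total triples
```

Tripling the total amphiphile density raises the monomer density only slightly; the excess goes almost entirely into micelles, which is the plateau behavior of the figure.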
\begin{figure}
\begin{center}
\includegraphics[width=4.2in]{CleanerMicellesDeltaG}
\caption{\label{fig:MicellesDeltaG}Thermodynamic cycle of micelle formation (adapted from Ref.~\cite{MaibaumDinnerChandler2004}). The free energy of micelle assembly~$\Delta G$ consists of three contributions: a term~$\Delta G_1$ for forming a cavity of the right size in the solvent, a term~$\Delta G_2$ for detaching the head and tail groups and transferring the hydrophobic tails from solution into the cavity, and a term~$\Delta G_3$ for reattaching the head groups to the tails.}
\end{center}
\end{figure}
To estimate the driving force for assembling an $n$-mer, we use the thermodynamic cycle depicted in Fig.~\ref{fig:MicellesDeltaG}. The width of the amphiphile is denoted by~$a$ and its typical length is denoted by~$\delta$. The $n$-mers have radius~$R$ and surface area~$A$. The assembly is achieved in three steps:
\begin{enumerate}
\item A cavity of the size of a micelle is formed in the solvent. Since this cavity is large, the free energy to create it is proportional to its area~$A$, so
\begin{equation}
\Delta G_1 \approx \gamma_{\text{water-vapor}} A.
\end{equation}
In terms of $n$, $a$~and~$\delta$, the area~$A$ scales as $a^2 (\delta/a)^{2/3} n^{2/3}$.
\item The head and tail groups are detached, at energy cost $\epsilon_{\text{bond}}$ per molecule, and the tails are transferred from the solution to what will be an oily region. This is the free energy of transfer~$\Delta\mu_{\text{transfer}}$ of the tails from water to oil, and depends on the tail's size and chemical character. Finally, what was a water-vapor interface after Step~1 is now a water-oil interface. Due to dispersive interactions, its surface tension is thus reduced by an amount~$\Delta\gamma$ given by~$\gamma_{\text{water-vapor}} - \gamma_{\text{oil-water}}$. Hence,
\begin{equation}
\Delta G_2 \approx n \epsilon_{\text{bond}} - n \Delta \mu_{\text{transfer}} - \Delta \gamma A.
\end{equation}
\item Finally, the head groups are reattached to the tails, for an energy gain of $\epsilon_{\text{bond}}$ per molecule. The additional entropic cost of restricting the $n$ head groups to be at the surface of the micelle is quite large. Since this restriction is exactly analogous to the requirement of charge neutrality in a polarized dielectric, it can be estimated accurately by analogy~\cite{deGennes1979,Stillinger1983}. Specifically, the entropic cost of maintaining a separation of a head group from a tail group at a distance~$r$ goes as $1/r$, which makes the effects of this cost isomorphic to that of an electrostatic cost. Accordingly, the total entropic cost has the same $n^2/R$ functional dependence as the Born solvation energy, Eq.~\eqref{eqn:Born}, where $n$ plays the role of charge and $R \sim n^{1/3}$ is the micelle radius. The pre-factor is related to the aspect ratio of the amphiphile. We thus obtain the estimate
\begin{equation}
\Delta G_3 \approx - n \epsilon_{\text{bond}} + k_{\text{B}} T (a / \delta)^{4/3} n^{5/3}.
\end{equation}
\end{enumerate}
Above, we focused on the functional dependences of the free energies on $n$, $a$~and~$\delta$. Adding up the contributions, we obtain the total free energy of assembly:
\begin{equation}
\Delta G \approx \gamma_{\text{oil-water}} a^2 (\delta / a)^{2/3} n^{2/3} - n \Delta \mu_{\text{transfer}} + k_{\text{B}} T (a / \delta)^{4/3} n^{5/3}.
\label{eqn:MicelleDeltaG}
\end{equation}
For a given set of parameters in the above equation, we find the number of monomers $n = n^*$ that minimizes~$\Delta G / n$ to obtain the typical micelle size, and $\rho_{\text{cmc}}$ can then be determined through the law of mass action. We omit the algebra and simply give the final result~\cite{MaibaumDinnerChandler2004}:
\begin{equation}
\ln \rho_{\text{cmc}} a^3 \approx c \, (\beta \gamma_{\text{oil-water}} a^2)^{2/3} - \beta \Delta\mu_{\text{transfer}},
\label{eqn:rhocmc}
\end{equation}
with $c =(5832/49)^{1/3} \approx 4.9$. Figure~\ref{fig:MicelleResults} shows how this prediction compares to the experimental measurements. The data can clearly be explained quantitatively by invoking only the length-scale dependence of the hydrophobic effect and the geometric constraint that head and tail groups be adjacent.
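The minimization leading from Eq.~\eqref{eqn:MicelleDeltaG} to the typical micelle size can be sketched in a few lines. In the snippet below, $\beta\Delta G/n$ is written as $g\,n^{-1/3} - \beta\Delta\mu_{\text{transfer}} + h\,n^{2/3}$; the numerical values of $g$, $h$ and $\beta\Delta\mu_{\text{transfer}}$ are illustrative assumptions, not fitted parameters.

```python
import numpy as np

# beta*DeltaG/n from Eq. (MicelleDeltaG), written as
#   beta*DeltaG/n = g*n**(-1/3) - beta_dmu + h*n**(2/3),
# with g = beta*gamma_ow*a^2*(delta/a)^(2/3) and h = (a/delta)^(4/3).
# The numerical values below are illustrative assumptions, not fits.
g, h, beta_dmu = 8.0, 0.5, 10.0

def beta_dG_per_n(n):
    return g * n**(-1.0 / 3.0) - beta_dmu + h * n**(2.0 / 3.0)

# Setting d(beta*DeltaG/n)/dn = 0 gives n* = g/(2h) analytically ...
n_star = g / (2.0 * h)

# ... which a brute-force scan over n confirms.
ns = np.linspace(1.0, 100.0, 200001)
n_scan = ns[np.argmin(beta_dG_per_n(ns))]

# At the minimum, beta*DeltaG/n + beta_dmu = (3/2**(2/3)) * g**(2/3) * h**(1/3).
print(n_star, n_scan, beta_dG_per_n(n_star))
```

With these assumed numbers the minimum falls at $n^*=8$; the remaining step from $n^*$ to Eq.~\eqref{eqn:rhocmc}, via the law of mass action, is what fixes the numerical constant~$c$.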
\begin{figure}
\begin{center}\begin{tabular}{ccc}
\includegraphics{ClearerMicelleResults1}&
\includegraphics{ClearerMicelleResults2}\\
(a)&(b)
\end{tabular}\end{center}
\caption{\label{fig:MicelleResults}Comparison of Eq.~\eqref{eqn:rhocmc} with experimental $\rho_{\text{cmc}}$ of $\mathrm{CH}_3 (\mathrm{CH}_2)_{m-1} (\mathrm{OCH}_2\mathrm{CH}_2)_6 \mathrm{OH}$. The single adjustable parameter~$a \approx 3\,$\AA\ is fit to the $m=12$ result at $T = 25^\circ$C. (a) Dependence of $\rho_{\text{cmc}}$ on amphiphile length at $T = 25^\circ$C. (b) Dependence of $\rho_{\text{cmc}}$ on temperature for $m=12$. In both plots, the data shown are experimental (circles and triangles, from two different experiments) and the result of Eq.~\eqref{eqn:rhocmc} (solid). Adapted from Ref.~\cite{MaibaumDinnerChandler2004}.}
\end{figure}
\subsection{Dewetting transitions in hydrophobic assembly}
As discussed in Section~\ref{sec:SolvationLarge}, and as illustrated in Figs.~\ref{fig:SolvationScaling}~and~\ref{fig:PvNFatTails}, the length-scale dependence of solvation free energies is intimately connected to the formation of liquid-vapor-like interfaces around large solutes.
Liquid-vapor interfaces are soft modes, which is to say that it costs little free energy to deform these structures. Creating a cavity next to a large solute is thus much easier than creating the same cavity in bulk water. A demonstration of this effect in water is shown in Fig.~\ref{fig:PvNHydrophobicHydrophilic}. Here, a model $24\times24\times6\,$\AA\ plate that is either hydrophobic or hydrophilic is placed next to a $24\times24\times3\,$\AA\ probe volume~${\text{\tiny V}}$, and the probability~$P_\V(N)$ of finding $N$ waters in~${\text{\tiny V}}$ is calculated. The hydrophobic plate induces a fat tail in the $P_\V(N)$ distribution, similar to the one depicted in Fig.~\ref{fig:PvNFatTails}. This occurs because the liquid-vapor interface can be easily deformed to create an empty space inside the probe volume. Next to the hydrophilic plate, where the interface between water and the plate is not liquid-vapor-like, the $P_\V(N)$ distribution is identical to that obtained when the probe volume ${\text{\tiny V}}$ is far away from the plate.
\begin{figure}
\begin{center}\begin{tabular}{ccc}
\includegraphics{CleanerAmishPlate}&
\includegraphics{PvNNextToPlate}\\
(a)&(b)
\end{tabular}\end{center}
\caption{\label{fig:PvNHydrophobicHydrophilic}Effect of a large solute on the density fluctuations of the surrounding solvent. (a) The model plate and the probe volume~${\text{\tiny V}}$. (b) Probability $P_\V(N)$ of observing $N$~waters in the probe volume~${\text{\tiny V}}$ in bulk (solid black), next to the hydrophobic plate (solid blue) and next to the hydrophilic plate (dashed red, nearly indistinguishable from solid black). Adapted from Ref.~\cite{PatelVarillyChandler2010}.}
\end{figure}
The fat tails in $P_\V(N)$ distributions are present for large probe volumes in bulk and enhanced by a nearby large solute. This behavior has two main physical consequences. First, a hydrophobic object that excludes water from a volume~${\text{\tiny V}}$ has a lower solvation free energy when next to the large solute than when dispersed in bulk water. Indeed, the difference in $-k_{\text{B}} T \ln P_\V(0)$ next to the large solute and in bulk measures the reversible work needed to move the object from a specific position in the bulk to the vicinity of the large solute. This is precisely the free energy of hydrophobic adhesion.
Second, these fat tails also signal an underlying phase instability that can be exposed by an external perturbation. The idea is illustrated in Fig.~\ref{fig:FatTailsPerturbed}. Consider an external perturbation of the form~$\epsilon N$. This perturbation might be the attractive dispersive interactions between a large solute and the water in its vicinity, or the effective repulsion of water from a cavity in a confining geometry. If the $P_{\text{\tiny V}}(N)$ distribution is Gaussian, then a linear perturbation results in another Gaussian distribution of equal width but with a different mean number of waters in that volume, $\langle N \rangle_{\text{\tiny V}}$. Small changes in the strength of the perturbation induce small changes in~$\langle N \rangle_{\text{\tiny V}}$. If, on the other hand, $P_{\text{\tiny V}}(N)$ has a fat tail, then a linear repulsive perturbation can result in a precipitous reduction in~$\langle N \rangle_{\text{\tiny V}}$. This phenomenon is called a dewetting transition.
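This contrast can be made concrete with a minimal numerical sketch. The model free energies below (basin positions, widths and the offset $\Delta F$ of the nearly empty basin) are made-up numbers chosen only for illustration.

```python
import numpy as np

N = np.arange(0, 201)

def mean_N(betaF, eps):
    # <N> under the perturbed weight exp[-betaF(N) - eps*N]
    w = np.exp(-(betaF - betaF.min()) - eps * N)
    return np.sum(N * w) / np.sum(w)

# Gaussian fluctuations (small probe volume): a single parabolic basin.
N0, sig = 120.0, 8.0
F_gauss = (N - N0) ** 2 / (2 * sig**2)

# Fat-tailed P_V(N) (large volume, or next to a hydrophobic surface):
# a second, nearly empty basin at low N, offset by an assumed cost dF.
dF = 12.0
F_fat = np.minimum(F_gauss, (N - 2.0) ** 2 / (2 * sig**2) + dF)

eps_list = [0.0, 0.04, 0.08, 0.12]
gauss_means = [mean_N(F_gauss, e) for e in eps_list]
fat_means = [mean_N(F_fat, e) for e in eps_list]
print(gauss_means)  # shifts smoothly, d<N>/d(eps) ~ -sigma^2
print(fat_means)    # collapses once the nearly empty basin wins
```

The Gaussian case drifts linearly with $\epsilon$, while the fat-tailed case empties abruptly once the perturbation tips the balance between the two basins.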
This transition can occur in volumes confined by hydrophobic surfaces, such as water inside nano-scale tubes. Emptying and filling such volumes is thus collective, with several water molecules leaving or entering in bursts. Refs.~\cite{HummerEtAl2001} and \cite{MaibaumChandler2003} illustrate this behavior. The phenomenon seems likely relevant to the functioning of biological pores. It is akin to a liquid-vapor transition, but on a scale of nanometers. Dewetting also appears in cases where two hydrophobic protein surfaces approach one another. Water confined by these surfaces is destabilized, and as water departs, the surfaces move closer together to fill the vacated volume. This behavior seems relevant to the dynamics of protein folding and assembly, as our group first discussed in Ref.~\cite{tenWoldeChandler2002}.
\begin{figure}
\begin{center}\begin{tabular}{ccc}
\includegraphics{ClearerGaussianPerturbation}&
\includegraphics{ClearerFatTailPerturbation}\\
(a) $v$ small&(b) $v$ large
\end{tabular}\end{center}
\caption{\label{fig:FatTailsPerturbed} Effect of an external perturbation of the form $\epsilon N$ on different $P_\V(N)$ distributions. (a) When ${\text{\tiny V}}$ is small and $P_\V(N)$ is Gaussian, the perturbation simply shifts the mean number of waters~$\langle N \rangle_{\text{\tiny V}}$ in~${\text{\tiny V}}$. (b) When ${\text{\tiny V}}$ is large, or ${\text{\tiny V}}$ is next to a hydrophobic solute, $P_\V(N)$ has fat tails. A large enough perturbation results in a first-order microscopic phase transition. In both figures, the $P_\V(N)$ distributions shown correspond to $\epsilon=0$ (black/solid), $\epsilon < 0$ (blue/dot-dash) and $\epsilon > 0$ (red/dashed).}
\end{figure}
\subsection{Theory of Dewetting}
A theory of dewetting requires treatment of both interfaces and small-length-scale fluctuations. The high free-energetic cost of solvating excluded volumes at small length scales gives way to lower costs in the presence of soft interfacial fluctuations. Here, we describe how to build a theory that captures this physics with a density field that describes interfaces and a coupling of that field to small-length-scale fluctuations. The development uses some elements of statistical field theory, and while it is therefore a step beyond the simplicity adopted in the earlier parts of our lectures, good textbooks on the topic do exist. See, for instance, Mehran Kardar's~\cite{Kardar2007b}.
Interfaces are well described by density fields that vary slowly on molecular scales. In the simplest case, the energetics of such a field, $n(\mathbf{r})$, is given by a Landau-Ginzburg hamiltonian of the form
\begin{equation}
\label{HL}
\beta H_{\mathrm{L}}[n(\mathbf{r})] = \int \text{d}\mathbf{r} \left[ w(n(\mathbf{r})) + \frac{m}{2} | \nabla n(\mathbf{r}) |^2 \right] \,,
\end{equation}
where we use subscript ``L'' to indicate that this hamiltonian applies to a fluid on \emph{large} length scales only. The quantity $w(n)$ is a local (grand canonical) free energy density in units of $k_{\mathrm{B}}T = 1/\beta$, and the parameter $m$ determines the free energy cost to create an inhomogeneity.
In mean field theory, the average $n(\mathbf{r})$ is the function that minimizes this hamiltonian (subject to whatever constraints specify the ensemble considered). This minimization produces a spatially invariant value for $\langle n(\mathbf{r}) \rangle$, except at conditions of phase coexistence where an interface separates volumes with different values of the field. The interfacial tension for this interface is proportional to $m$ and the shape of the interface is determined by the function $w(n)$. These relationships can be read about in standard texts, e.g. Ref.~\cite{RowlinsonWidom2003}.
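As a concrete illustration, Eq.~\eqref{HL} can be discretized in one dimension with an assumed double-well local free energy $w(n)=\kappa\,n^2(1-n)^2$, whose minima $n=0$ (vapor) and $n=1$ (liquid) coexist, and relaxed toward its mean-field minimizer. All parameter values below are illustrative; for this $w(n)$ the standard result for the interfacial free energy per unit area is $\sqrt{2m\kappa}/6$.

```python
import numpy as np

# 1-D lattice version of Eq. (HL):
#   beta*H_L = sum_i dx * [ w(n_i) + (m/2) * ((n_{i+1} - n_i)/dx)**2 ],
# with an assumed double well w(n) = kappa * n**2 * (1 - n)**2. Units are
# illustrative; the two minima n = 0 (vapor) and n = 1 (liquid) coexist.
kappa, m, dx, L = 1.0, 1.0, 0.2, 400
n = np.linspace(0.0, 1.0, L)            # initial guess; ends pinned to 0 and 1

w = lambda n: kappa * n**2 * (1.0 - n)**2
dw = lambda n: 2.0 * kappa * n * (1.0 - n) * (1.0 - 2.0 * n)

# Mean-field theory: relax the interior toward the minimizer of beta*H_L
# by gradient descent, dn/dt = m n'' - w'(n).
for _ in range(20000):
    lap = (n[2:] - 2.0 * n[1:-1] + n[:-2]) / dx**2
    n[1:-1] += 0.01 * (m * lap - dw(n[1:-1]))

grad = (n[1:] - n[:-1]) / dx
sigma = np.sum((w(n[:-1]) + 0.5 * m * grad**2) * dx)
print(sigma)   # interfacial free energy; analytic value sqrt(2*m*kappa)/6
```

The relaxed profile is the familiar tanh-shaped interface, and the computed $\sigma$ reproduces the analytic value, illustrating how the interfacial tension follows from $m$ and $w(n)$.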
The actual molecular density, $\rho(\mathbf{r})$, can be written as a slowly varying field like $n(\mathbf{r})$ plus a correction, where the correction accounts for fluctuations that occur on small length scales. In particular, we take
\begin{equation}
\label{Density}
\rho(\mathbf{r}) = n(\mathbf{r}) + \delta \rho(\mathbf{r})\,,
\end{equation}
where $ \delta \rho(\mathbf{r})$ is the small-length scale part. There is flexibility in defining this decomposition, but it is important that $n(\mathbf{r})$ varies little over a length $\xi$, which is the correlation length of the homogeneous liquid. With this generic criterion, the vapor phase of water is where $n(\mathbf{r})$ is close to zero, and the liquid phase is where $n(\mathbf{r})$ is close to the density of liquid water. Equation~\eqref{HL} describes the energetics of $n(\mathbf{r})$, and the interface it forms is the liquid-vapor interface.
To the extent that $n(\mathbf{r})$ is a constant equal to the average density of the liquid, $\delta \rho(\mathbf{r})$ will be governed by the Gaussian hamiltonian, i.e.,
\begin{equation}
\label{HS}
\beta H_{\mathrm{S}}[\delta \rho(\mathbf{r})] = \frac{1}{2}\int d\mathbf{r} \int d\mathbf{r}' \,\delta \rho(\mathbf{r})\, \chi^{-1}(\mathbf{r},\mathbf{r}')\, \delta \rho(\mathbf{r})
\end{equation}
where $\chi^{-1}(\mathbf{r},\mathbf{r}')$ is the functional inverse of the density-density correlation function, i.e., $\chi(\mathbf{r},\mathbf{r}')= \langle \delta \rho(\mathbf{r})\, \delta \rho(\mathbf{r}')\rangle$. The subscript ``S'' indicates that this hamiltonian applies to \emph{small}-length scale fluctuations.
The presence of a large enough solute will force the fluid to be inhomogeneous on large length scale. To account for that possibility, the two fields $n(\mathbf{r})$ and $\delta \rho(\mathbf{r})$ must be coupled, and the simplest way to do so is with a bi-linear form. Specifically, we take
\begin{equation}
\label{H}
H[n(\mathbf{r}), \delta \rho(\mathbf{r})] = H_{\mathrm{L}}[n(\mathbf{r})] + H_{\mathrm{S}}[\delta \rho(\mathbf{r})] + H_{\mathrm{I}}[n(\mathbf{r}), \delta \rho(\mathbf{r})]\,,
\end{equation}
with
\begin{equation}
\label{HI}
H_{\mathrm{I}}[n(\mathbf{r}), \delta \rho(\mathbf{r})] = \int d \mathbf{r}\int d \mathbf{r}' \,n(\mathbf{r})\,u(\mathbf{r}, \mathbf{r}')\,\delta \rho(\mathbf{r}') \,+\, H_{\mathrm{norm}}[n(\mathbf{r})],
\end{equation}
where $H_{\mathrm{norm}}[n(\mathbf{r})]$ ensures that
\footnote{We leave it as an exercise for the reader to carry out the indicated sum over the Gaussian fields to show that $H_{\mathrm{norm}}[n(\mathbf{r})]$ is an irrelevant constant plus
$$
\frac{\beta}{2}\int \text{d}\mathbf{r} \int \text{d}\mathbf{r}' \int \text{d}\mathbf{s} \int \text{d}\mathbf{s}' \,n(\mathbf{r})\,u (\mathbf{r},\mathbf {s}) \, \chi(\mathbf{s},\mathbf{s}') \, u(\mathbf{s}',\mathbf{r}')\,n(\mathbf{r}') \,.
$$}
\begin{equation}
\label{Norm}
\sum_{\delta \rho(\mathbf{r})} \exp\{-\beta H[n(\mathbf{r}), \delta \rho(\mathbf{r})] \} = \exp\{-\beta H_{\mathrm{L}}[n(\mathbf{r})]\}\,.
\end{equation}
The subscript ``I'' labeling $H_{\mathrm{I}}[n(\mathbf{r}), \delta \rho(\mathbf{r})]$ stands for \emph{interaction}, and the symmetric function $u(\mathbf{r}, \mathbf{r}')$ specifies the strength and range of the interaction between the two fields.
While the unperturbed liquid partition sum in Eq.~\eqref{Norm} leads back to the Landau-Ginzburg description at large length scales, a non-trivial alteration occurs in the presence of imposed inhomogeneity. In the context of possible dewetting, the most important of these alterations comes from the excluded volume ${\text{\tiny V}}$ due to a solute. This constrains the partition sum in a fashion that is compactly described with the functional
\begin{equation}
\label{Constraint}
C_{\text{\tiny V}}[\rho(\mathbf{r})] =
\begin{cases}
1,&\text{when $\rho(\mathbf{r})=0$ for all $\mathbf{r}\in{\text{\tiny V}}$,}\\
0,&\text{otherwise},
\end{cases}
\end{equation}
where ${\text{\tiny V}}$ denotes the volume that the solute excludes from the solvent. The volume can be complicated, indeed not even contiguous. See for instance the excluded volumes depicted in Fig.~\ref{fig:SolvationCartoon} of Lecture~Two. The excluded volume constraint is common to all solutes, hydrophobic or hydrophilic. For the latter, a different constraint functional could also be employed, one that binds solvent in regions of space adjoining the excluded volumes. With the constraints imposed by the solute, the partition sum over small-length scale density fluctuations is then
\begin{equation}
\label{WithExclusion}
\sum_{\delta \rho(\mathbf{r})} \exp\{-\beta H[n(\mathbf{r}), \delta \rho(\mathbf{r})] \}\,C_{\text{\tiny V}}[n(\mathbf{r}) + \delta \rho(\mathbf{r})] = \exp\{-\beta \overline{H}_v[n(\mathbf{r})] \}\,,
\end{equation}
where
\begin{equation}
\label{Hbar}
\overline{H}_v[n(\mathbf{r})] = H_{\mathrm{L}}[n(\mathbf{r})]\,+\,\Delta H_v[n(\mathbf{r})]\,.
\end{equation}
The alteration to the Landau-Ginzburg hamiltonian, $\Delta H_v[n(\mathbf{r})]$, is straightforwardly (though tediously) evaluated by carrying out the indicated sum over the Gaussian field in Eq.~\eqref{WithExclusion} \footnote{We leave it as a second exercise to the reader to show that the result of this calculation is
\begin{multline*}
\Delta H_v[n(\mathbf{r})] = -\frac{k_{\text{B}} T}{2} \ln \det (\chi_v^{-1}/2\pi)\\
+ \frac{k_{\text{B}} T}{2} \int_v \text{d}{\vec r}\, \int_v \text{d}{\vec r}'\,
\biggl[ n({\vec r}) + \int\text{d}\vec{s}\, \int\text{d}\vec{s}'\, \chi({\vec r},\vec{s})\, \beta u(\vec{s},\vec{s}')\, n(\vec{s}') \biggr]\\
\chi_v^{-1}({\vec r},{\vec r}')%
\biggl[ n({\vec r}') + \int\text{d}\vec{s}''\, \int\text{d}\vec{s}'''\, \chi({\vec r}',\vec{s}'')\, \beta u(\vec{s}'',\vec{s}''')\, n(\vec{s}''') \biggr],
\end{multline*}
where $\chi_v^{-1}({\vec r},{\vec r}')$ is the inverse of $\chi({\vec r},{\vec r}')$ when ${\vec r}$~and~${\vec r}'$ are restricted to the volume~$v$. In other words, $\chi_v^{-1}({\vec r},{\vec r}')$ satisfies
$$
\int_v \text{d}{\vec r}' \chi_v^{-1}({\vec r},{\vec r}') \chi({\vec r}',{\vec r}'') = \delta( {\vec r} - {\vec r}'' ),\qquad
\text{for ${\vec r},{\vec r}''\in v$}.
$$
Full details on a method of deriving this result can be found in Ref.~\cite{Chandler1993}.
}.
In using these formulas to compute numbers, some information about the function $u(\mathbf{r}, \mathbf{r}')$ is required. As it generates a force on the slowly varying field, its functional form need not be very specific. It suffices to characterize the function in terms of a mean strength and range, i.e., $u(\mathbf{r}, \mathbf{r}') = \alpha \, \phi(|\mathbf{r} - \mathbf{r}'|)$, where $\alpha$ is the mean strength with a value of the order of $k_{\text{B}} T$, and $\phi(r)$ is normalized with a range of a few \AA's. The theory constructed with Eq.~\eqref{WithExclusion} is not terribly sensitive to the specific values of strength and range. Physical arguments can be made to estimate their values \textit{a priori}. Alternatively, the parameters can be adjusted so that the theory produces essentially perfect agreement between its predictions and the results of a few representative simulation calculations. A non-zero value of $\alpha$ is needed to capture the wide breadth of the crossover from small- to large-length scale behaviors, e.g., as depicted in Fig.~\ref{fig:SolvationScaling}.
A subtlety in these formulas concerns the form of $\chi(\mathbf{r}, \mathbf{r}')$. When $n(\mathbf{r})$ is not a constant, this variance is not simply the liquid phase function given in Eq.~\eqref{PairCorrelation} of Lecture~Two. A simple interpolation formula can be used to estimate the variance of the small-length scale field,
\begin{equation}
\chi(\mathbf{r}, \mathbf{r}') \approx n(\mathbf{r})\,\delta(\mathbf{r} - \mathbf{r}')\,+\,n(\mathbf{r})\,[g(|\mathbf{r}-\mathbf{r}'|)-1]\,n(\mathbf{r}').
\end{equation}
Notice that this formula guarantees that there are no density fluctuations wherever $n(\mathbf{r})=0$. Again, because $n(\mathbf{r})$ is slowly varying, molecular-scale detail beyond what this interpolation formula reliably provides is unimportant to the evaluation of $\Delta H_v[n(\mathbf{r})]$.
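A small sketch of this interpolation on a 1-D grid makes the point explicit. The pair correlation $g(r)$ below is a crude made-up damped oscillation standing in for the true liquid structure, and the delta function is represented as $1/\Delta x$ on the diagonal.

```python
import numpy as np

# Interpolated chi(r, r') on a 1-D grid; the model g(r) is an assumed
# damped oscillation, not the true pair correlation of water.
x = np.linspace(0.0, 30.0, 120)
dx = x[1] - x[0]
n = 0.5 * (1.0 + np.tanh(x - 10.0))     # slowly varying field: vapor -> liquid

def g(r):
    return 1.0 + np.exp(-r / 2.0) * np.cos(2.0 * r) * (r > 0)

r = np.abs(x[:, None] - x[None, :])
chi = np.diag(n / dx) + n[:, None] * (g(r) - 1.0) * n[None, :]

# Fluctuations vanish in the vapor region (n ~ 0) and are bulk-like in
# the liquid region (n ~ 1).
print(np.abs(chi[5]).max(), np.abs(chi[100]).max())
```

The row of $\chi$ belonging to a point in the vapor region is essentially zero, while a row in the liquid region carries the full bulk-like structure, exactly as the interpolation formula demands.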
This evaluation establishes the following behaviors:
\begin{enumerate}
\item If the excluded volume ${\text{\tiny V}}$ is not much larger than the correlation volume of the unperturbed liquid, $\xi^3$, or if it is composed of several distantly separated excluded volumes, each one not much larger than a correlation volume, then $\Delta H_v[n(\mathbf{r})]$ is essentially a constant and equal to the solvation free energy of the excluded volume as given by Gaussian fluctuation theory, Eq.~\eqref{eqn:DeltaMuSmall} in Lecture~Two.
\item On the other hand, if ${\text{\tiny V}}$ presents a surface of low curvature extending over lengths larger than $\xi$, then $\Delta H_v[n(\mathbf{r})]$ is no longer constant and is relatively large when $n(\mathbf{r}) \neq 0$ for $\mathbf{r} \in {\text{\tiny V}}$. To avoid this energetic penalty, probable configurations of the slowly varying fields will adjust to make $n(\mathbf{r}) = 0$ within the excluded volume. As such, according to Eq.~\eqref{HL}, the probability for configurations of the slowly varying field will be maximal at configurations with a liquid-vapor-like interface adjacent to the excluded volume.
\end{enumerate}
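Point~1 can be checked directly in a toy version of the model. Setting $u=0$, the constrained sum of Eq.~\eqref{WithExclusion} reduces (see the footnote above) to $\beta\,\Delta H_v = \frac{1}{2}\ln\det(2\pi\chi_v) + \frac{1}{2}\, n_v^{\mathsf T}\chi_v^{-1}\, n_v$. On a 1-D lattice with an assumed short-ranged $\chi$, the costs of two distantly separated excluded volumes simply add:

```python
import numpy as np

# With u = 0, the constrained Gaussian sum of Eq. (WithExclusion) yields
#   beta*dH_v = 0.5*ln det(2*pi*chi_v) + 0.5 * n_v^T chi_v^{-1} n_v,
# here evaluated on a 1-D lattice with an assumed short-ranged covariance.
L, xi = 60, 1.5
i = np.arange(L)
chi = np.exp(-np.abs(i[:, None] - i[None, :]) / xi)   # model chi(i, j)
n = np.ones(L)                                        # uniform liquid

def beta_dH(cells):
    chi_v = chi[np.ix_(cells, cells)]
    n_v = n[cells]
    _, logdet = np.linalg.slogdet(2.0 * np.pi * chi_v)
    return 0.5 * logdet + 0.5 * n_v @ np.linalg.solve(chi_v, n_v)

single = beta_dH([10, 11, 12])
pair_far = beta_dH([10, 11, 12, 40, 41, 42])   # two well-separated volumes
print(pair_far, 2.0 * single)                  # costs of distant volumes add
```

Because $\chi$ decays over a few cells, the two blocks of $\chi_v$ decouple and $\beta\,\Delta H_v$ for the pair equals twice that of a single volume, as stated in point~1.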
This latter situation is that of dewetting. Regions of space where dewetting occurs are those where an excluded volume ${\text{\tiny V}}$ causes the average value of $n(\mathbf{r})$ to be zero. It occurs only for sufficiently large excluded volumes. The field $n(\mathbf{r})$ governed by Eq.~\eqref{HL} is a continuum version of an Ising model or lattice-gas model. We see from the theory sketched above that the underlying physics of dewetting is captured by coupling an Ising-like field to a small-length scale field through excluded volume perturbations. Indeed, with a single fitted parameter, the interaction strength $\alpha$, the results of these equations agree quantitatively with those of computer simulations, as we have recently shown in detail in Ref.~\cite{VarillyPatelChandler2011}.
\subsection{Applications and hydrophobic collapse}
The first general theoretical treatment of dewetting in water and its role in hydrophobicity was provided by the Lum-Chandler-Weeks (LCW) theory~\cite{LumChandlerWeeks1999}. That theory is the mean-field approximation to what we have presented above. In particular,
\begin{equation}
\label{LCW}
\langle n(\mathbf{r}) \rangle \approx n_{\mathrm{LCW}}(\mathbf{r})\,,
\end{equation}
where $ n_{\mathrm{LCW}}(\mathbf{r})$ is the field that minimizes $\overline{H}_v[n(\mathbf{r})] $, i.e.,
\begin{equation}
\label{LCWmin}
0 = \delta \overline{H}_v[n_{\mathrm{LCW}}(\mathbf{r})] / \delta n_{\mathrm{LCW}}(\mathbf{r})\,.
\end{equation}
The LCW paper~\cite{LumChandlerWeeks1999} presents several illustrative predictions of Eqs.~\eqref{LCW} and~\eqref{LCWmin}. These predictions have motivated many subsequent studies of hydrophobic effects.
More recent work building from this approach has focused on fluctuations in $n(\mathbf{r})$. These fluctuations are important in dynamics. The weight functional is
\begin{equation}
\label{PartitionFunction}
P[n(\mathbf{r})] \propto \exp\{-\beta \overline{H}_v[n(\mathbf{r})]\}\,,
\end{equation}
which can be sampled by Monte Carlo. The simplest implementation replaces $n(\mathbf{r})$ with a binary field on a lattice, $\rho \,n_i$, where $\rho$ is the bulk liquid density and $n_i$ is either 0 or 1 depending upon whether the $i$th cell in the lattice is vapor-like or liquid-like, respectively. $H_{\mathrm{L}}[n(\mathbf{r})]$ is then taken to be a lattice-gas hamiltonian, and $\Delta H_v[n(\mathbf{r})] \approx \sum_i c\, n_i v_i$, where $c$ is a positive constant, and $v_i$ is the volume excluded by solutes in cell $i$. That is,
\begin{equation}
\label{Simplest}
\overline{H}[n({\vec r})] \approx - \epsilon \sum_{i,j}{\,}^{'} \,n_i\,n_j\, +\,\sum_i (c\, v_i - \rho\, \mu)\,n_i\,,
\end{equation}
where $\mu$ is the chemical potential of the liquid and the primed sum is over nearest neighbors. The $c\,v_i$ terms account for excluded volume in liquid-like regions, with the free energy cost of this exclusion being proportional to the size of that volume. Proportionality to excluded volume is approximately consistent with small-length scale hydrophobic free energies of solvation, as we explained in Lecture~Two.
This version of the theory is particularly easy to implement. With judicious choices of lattice spacing and the constant $c$, most qualitative features of dewetting and hydrophobic forces of assembly are captured correctly. The principal feature it fails to capture is the slow approach to the macroscopic surface-area scaling illustrated in Fig.~\ref{fig:SolvationScaling}. Instead of a broad crossover from small to large-length scale hydrophobicity, this simplest version exhibits a relatively abrupt crossover around 1 nm. This deficiency can be ameliorated, as we have detailed in Ref.~\cite{VarillyPatelChandler2011}, but the simplicity of the model described with Eq.~\eqref{Simplest} makes it an attractive choice for easy estimates of roles of hydrophobic forces.
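The lattice model of Eq.~\eqref{Simplest} is indeed simple enough to sample in a few lines. The sketch below (2-D for brevity, energies in units of $k_{\text{B}}T$; all parameter values are illustrative, not the calibrated ones of Ref.~\cite{VarillyPatelChandler2011}) places a slab of excluded volume in a liquid-stable lattice gas and lets single-cell Metropolis moves dewet it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Metropolis sampling of the lattice-gas hamiltonian of Eq. (Simplest) on a
# 2-D periodic lattice; energies are in units of k_B*T, values illustrative.
L = 24
eps, rho_mu, c_v = 1.2, 2.2, 6.0            # coupling, rho*mu, c*v_i in slab
n = np.ones((L, L), dtype=int)              # start fully liquid
cv = np.zeros((L, L)); cv[10:14, :] = c_v   # solute excludes volume in a slab

def dE(i, j):
    # energy change for flipping cell (i, j), from Eq. (Simplest)
    nn = n[(i+1) % L, j] + n[(i-1) % L, j] + n[i, (j+1) % L] + n[i, (j-1) % L]
    flip = 1 - 2 * n[i, j]                  # +1 if filling, -1 if emptying
    return flip * (-eps * nn + cv[i, j] - rho_mu)

for _ in range(200000):
    i, j = rng.integers(L, size=2)
    d = dE(i, j)
    if d <= 0 or rng.random() < np.exp(-d):
        n[i, j] = 1 - n[i, j]

print(n[12].mean(), n[0].mean())  # slab inside the solute dewets; bulk stays liquid
```

After a few hundred sweeps the rows carrying the excluded-volume penalty are essentially vapor-like ($n_i=0$) while the rest of the lattice remains liquid, which is the dewetting behavior discussed above.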
Figure~\ref{fig:tWC02Trajectory} shows a result obtained from this form of modeling. Depicted are snapshots of a trajectory illustrating the collapse of a chain of hard spheres in water. The chain follows Newtonian dynamics with friction and random forces reflecting the effects of the small-length scale field that has been integrated out. The liquid's dynamics is Monte Carlo. Motion of the chain is reflected in changes of $v_i$, and as such the liquid's slowly varying field couples to the dynamics of the chain. The two move together, illustrating the collective nature of hydrophobic forces of assembly.
The intra-chain forces for the chain considered in that figure are such that the extended chain is the most stable configuration in the gas phase. Solvation, in this case hydrophobic forces of assembly, makes the globule state the most stable configuration in the liquid. The half-life of the extended chain in the liquid is about 1~$\mu$s. The figure shows parts of a trajectory during the relatively short period of time when the chain passes from its metastable extended state to the stable globule state.
The transition state occurs when a random fluctuation of the chain produces a large enough cluster of hard spheres to nucleate a soft liquid-vapor-like interface. At that stage, water moves away at relatively low free energy cost and the chain collapses to its most stable thermodynamic state. In effect, the reorganization of the chain boils away the water. This is suggestive of the possibility of a nano-scale steam engine, albeit in a much more complicated device than a simple hydrophobic chain. It is a possibility that we ponder, wondering whether it is realized in some biological molecular motor.
\begin{figure}
\begin{center}\includegraphics[width=130mm]{tWC02Trajectory}\end{center}
\caption{\label{fig:tWC02Trajectory}Example collapse trajectory of a model hydrophobic polymer embedded in solvent. Frame~(a) shows a typical extended configuration. Frames (b), (c)~and~(d) are snapshots from a~$1.5\,$ns collapse trajectory, with the configuration shown in Frame~(c) being a transition state. The white cubes are the cells in the lattice gas where $n_i$ is~$0$. In this model, the cell size $\ell$ is chosen to be~$2.1\,$\AA. From Ref.~\cite{tenWoldeChandler2002}.}
\end{figure}
\section{Introduction}
Today's astrometry necessitates theoretical predictions of light-deflection by massive bodies on the microarcsecond
($\mu{\rm as}$) level, e.g.\ for the astrometric missions SIM (NASA) and GAIA (ESA). In principle, an
astrometric precision on the microarcsecond level can be achieved by numerical integration of the geodesic equation of
light-propagation. On the other hand, modern astrometric missions like GAIA determine the positions and proper motions of
approximately one billion objects, each of which is observed about one hundred times.
The data reduction of such a huge amount of observations implies the need for analytical solutions, because the numerical
investigation of the geodesic equation is by far too
time-consuming.
The metric of a massive body can be expanded in terms of multipoles, i.e. monopole term, quadrupole
term and higher multipoles \cite{Thorne,Blanchet}.
Usually, the largest contribution to light-deflection originates from the spherically
symmetric part (Schwarzschild) of the massive body under consideration. The exact analytical solution of light-propagation
in the Schwarzschild metric \cite{Chandrasekhar1983} involves elliptic integrals, but their evaluation becomes comparable in
time effort with a numerical integration of the geodesic equation. Thus, approximative analytical solutions
valid on the microarcsecond level of accuracy are indispensable for a highly time-efficient data reduction.
In the same way, exact lens equations of light-deflection have been obtained in \cite{exact_lens1,exact_lens2,exact_lens3}.
However, such exact relations are also given in terms of elliptic integrals. Therefore, approximations of these exact
solutions are also needed for a time-efficient data reduction.
An excellent overview of such approximative lens equations has recently been presented in \cite{exact_lens4}.
Basically, two essential approximative approaches are known in order to determine the light-deflection
in weak gravitational fields:
The first one is the standard parameterized post-Newtonian approach (PPN)
\cite{Misner_Thorne_Wheeler,Brumberg2} which is of the order ${\cal O} \left(m\right)$. During the last
decades, it has been the common understanding that the higher order terms ${\cal O} \left(m^2\right)$ are negligible even
on microarcsecond level, except for observations in the vicinity of the Sun. However, recent investigations
\cite{Teyssandier_LePoncinLafitte,Ashby_Bertotti,Article_Klioner_Zschocke,Teyssandier} have revealed that the
post-post-Newtonian approximation \cite{Brumberg1,Brumberg2}, which is of the order ${\cal O} \left(m^2\right)$, is needed
for such high accuracy. Both approximations are applicable for $d \gg m$, where $d$ is the impact parameter of the
unperturbed light ray.
The second one is the standard weak-field approximative lens equation, which is usually called the classical lens equation,
see Eq.~(67) in \cite{exact_lens3} or Eq.~(24) in \cite{exact_lens4}.
One decisive advantage of the classical lens equation is its validity for arbitrarily small values of the impact parameter $d$.
The classical lens equation is valid for astrometrical configurations where source and observer are far enough from
the lens, especially in case of $a \gg d$ and $b \gg d$, where $a = \ve{k}\cdot\ve{x}_{{\rm 1}}$ and
$b = - \ve{k}\cdot\ve{x}_{{\rm 0}}$, where $\ve{x}_{{\rm 0}}$ and $\ve{x}_{{\rm 1}}$ are the three-vectors from the
center of the massive body to the source and observer, respectively, and $\ve{k}$ is the unit vector from the
source to the observer. However, the classical lens equation is not applicable for determining
the light-deflection of moons of giant planets in the solar system, because astrometrical configurations
with $b = 0$ are possible.
Moreover, there are astrometric configurations where neither the standard post-Newtonian approach nor the classical lens
equation is applicable, for instance binary systems. In order to investigate the light-deflection in such systems
a link between these two approaches is needed. Such a link can be provided by a generalized lens equation which,
in the appropriate limits, coincides with the standard post-Newtonian approach and the classical lens equation.
Accordingly, the aim of our investigation is an analytical expression for the generalized lens equation
having a form very similar to the classical lens equation. We formulate the following conditions under which our
generalized lens equation should be applicable:
\begin{enumerate}
\item[] $1.$ valid for $d = 0\;,\; a = x_{{\rm 1}} \gg m\;,\;b = x_{{\rm 0}} \gg m$,
\item[] $2.$ valid for $a = 0\;,\;d \gg m\;,\;b \neq 0$,
\item[] $3.$ valid for $b = 0\;,\;d \gg m\;,\;a \neq 0$.
\end{enumerate}
These conditions imply that the light-path is always far enough from the lens, and thus concern weak gravitational fields,
i.e. small light-deflection angles.
In order to control the numerical accuracy, the generalized lens equation is compared with the numerical solution of
exact geodesic equation in the Schwarzschild metric (throughout the paper, we work in harmonic gauge):
\begin{eqnarray}
g_{00} &=& - \frac{1-a}{1+a}\;,\quad \quad g_{i0} = 0\,,
\nonumber\\
g_{ij} &=& \left(1+a\right)^2\delta_{ij} + \frac{a^2}{x^2}\,\frac{1+a}{1-a}\,x^i\,x^j\,.
\label{exact_Schwarzschild_5}
\end{eqnarray}
\noindent
Here, $\displaystyle a=\frac{m}{x}$, where $\displaystyle m=\frac{G\,M}{c^2}$ is half the Schwarzschild radius (the mass parameter)
of the light-deflecting body, $M$ is its mass, $G$ is the Newtonian constant of gravitation and $c$ is the speed of light.
Latin indices take values $1,2,3$, and the Euclidean metric $\delta_{ij}=1 (0)$ for $i=j$ ($i\neq j$).
The absolute value of a three-vector is denoted by $x=\left|\ve{x}\right| = \sqrt{x_1^2 + x_2^2 + x_3^2}$.
The exact geodesic equation in Schwarzschild metric reads, cf. \cite{Article_Klioner_Zschocke}
\begin{eqnarray}
\fl
\ddot{\ve{x}} =
\frac{a}{x^2} \left[ - c^2 \frac{1 - a}{(1 + a)^3} - \dot{\ve{x}} \cdot \dot{\ve{x}}
+ a \frac{2 - a}{1 - a^2} \left( \frac{ {\ve{x}} \cdot \dot{\ve{x}}}{x} \right)^2
\right] \ve{x} + 2 \frac{a}{x^2} \;\frac{2 - a}{1 - a^2}
( \ve{x} \cdot \dot{\ve{x}}) \, \dot{\ve{x}} \,,
\label{exact_Schwarzschild_10}
\end{eqnarray}
\noindent
where a dot denotes the time derivative with respect to the coordinate time $t$, and $\ve{x}$ is the three-vector
pointing from the center of mass of the massive body to the photon trajectory at time moment $t$.
The scalar product of two three-vectors with respect to Euclidean metric $\delta_{ij}$ is
$\ve{a}\cdot\ve{b}=\delta_{i j}\, a^i b^j$.
The numerical solution of this equation will be used in order to determine the accuracy of approximative solutions.
We abbreviate the angle between two three-vectors $\ve{a}$ and $\ve{b}$ by $\delta(\ve{a},\ve{b})$,
which can be computed by means of $\displaystyle \delta(\ve{a},\ve{b}) = \arccos \frac{\ve{a}\cdot\ve{b}}{a\,b}$.
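As an illustration of this procedure, the snippet below integrates Eq.~(\ref{exact_Schwarzschild_10}) numerically (in units with $c=1$, using SciPy, which we assume available) for a light ray grazing the Sun, and compares the resulting bending angle with the familiar first-order value $4m/d \simeq 1.75''$. The solar values of $m$ and $d$ are approximate.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical integration of the exact geodesic equation (2) in units c = 1,
# for a light ray grazing the Sun: m = GM/c^2 ~ 1476.6 m, d ~ solar radius.
m, d = 1476.6, 6.96e8             # both in meters

def rhs(t, s):
    x, v = s[:2], s[2:]
    r = np.sqrt(x @ x)
    a = m / r
    radial = (-(1 - a) / (1 + a)**3 - v @ v
              + a * (2 - a) / (1 - a**2) * ((x @ v) / r)**2)
    acc = (a / r**2) * radial * x \
        + 2 * (a / r**2) * (2 - a) / (1 - a**2) * (x @ v) * v
    return np.concatenate([v, acc])

X = 2000 * d                       # start and end far from the body
s0 = np.array([-X, d, 1.0, 0.0])   # motion along +x, impact parameter d
sol = solve_ivp(rhs, [0.0, 2.0 * X], s0, method="DOP853",
                rtol=1e-10, atol=[1e-3, 1e-3, 1e-13, 1e-13])
v0, v1 = s0[2:], sol.y[2:, -1]

# Bending angle between incoming and outgoing tangents; atan2 avoids the
# precision loss of arccos for tiny angles.
delta = np.arctan2(abs(v0[0] * v1[1] - v0[1] * v1[0]), v0 @ v1)
print(delta * 206264.8, 4 * m / d * 206264.8)   # arcseconds
```

A run of this kind serves as the reference against which the approximative analytical formulas of the following sections are tested.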
The paper is organized as follows: In Section \ref{Standard_post_Newtonian}
the post-Newtonian approach is presented.
In Section \ref{post_post_Newtonian} the steps of post-post-Newtonian approach
relevant for this investigation are shown, and I will briefly summarize the
main results of our article \cite{Article_Klioner_Zschocke}.
The generalized lens equation is obtained in Section \ref{generalized_lens} and
discussed in Section \ref{discussion}. A summary is given in Section \ref{summary}.
\section{Post-Newtonian approximation\label{Standard_post_Newtonian}}
Let us consider the trajectory of a light-signal in post-Newtonian Schwarzschild metric:
\begin{eqnarray}
g_{00} &=& - 1 + 2\,a + {\cal O} \left(c^{-4}\right)\;, \quad g_{i0}=0\,,
\nonumber\\
g_{ij} &=& \delta_{ij} + 2\,\gamma\,a\,\delta_{ij} + {\cal O} \left(c^{-4}\right).
\label{pN_mestric_5}
\end{eqnarray}
\noindent
Here, $\gamma$ is the parameter of the Parametrized Post-Newtonian (PPN) formalism, which characterizes
a possible deviation from general relativity, in which $\gamma=1$.
The light-ray is being emitted at a position $\ve{x}_{{\rm 0}}$ at time moment $t_0$ and received at position
$\ve{x}_{{\rm 1}}$ at a time moment $t_1$, see Figure~\ref{Fig: LensC}.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.8]{figure-nksigma.eps}
\caption{A geometrical representation of the boundary problem under consideration for a light-propagation from the
source to the observer.}
\label{Fig: LensC}
\end{center}
\end{figure}
\noindent
Light propagation is governed by the geodesic equation, in post-Newtonian order given by
\begin{eqnarray}
\ddot{\ve{x}} &=&
-(1+\gamma)\,c^2\,\frac{a\,\ve{x}}{x^2}
+2\,(1+\gamma)\,\frac{a\,\dot{\ve{x}}\,(\dot{\ve{x}}\cdot\ve{x})}{x^2} + {\cal O} (c^{-2})\,.
\label{geodesic-post-Newtonian}
\end{eqnarray}
\noindent
The unit tangent vector at the point of observation is
$\displaystyle \ve{n} = \frac{\dot{\ve{x}}(t_1)}{\left|\dot{\ve{x}}(t_1)\right|}$, and the unit
tangent vector $\displaystyle \ve{k}=\frac{\ve{R}}{R}$, where $\ve{R} = \ve{x}_{{\rm 1}} - \ve{x}_{{\rm 0}}$ and the absolute
value is $R = |\ve{R}|$. Furthermore, we define the unit tangent vector at remote past:
$\displaystyle \ve{\sigma} = \lim_{t\rightarrow - \infty} \frac{\dot{\ve{x}}(t)}{c}$.
Up to post-Newtonian order, the differential equation (\ref{geodesic-post-Newtonian}) can be solved
analytically. The solution for the transformation between $\ve{n}$ and $\ve{k}$ reads
\begin{eqnarray}
\ve{n} &=& \ve{k} - (1 + \gamma) \,\frac{m}{d^{\;\prime}} \,\frac{\ve{d^{\;\prime}}}{d^{\;\prime}} \,
\frac{x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}}\cdot \ve{x}_{{\rm 1}}}{R\,x_{{\rm 1}}} + {\cal O} \left(m^2\right),
\label{standard_25_A}
\end{eqnarray}
\noindent
in terms of the coordinate-independent impact vector $\ve{d}^{\;\prime}$, cf. Eq.~(57) of \cite{Article_Klioner_Zschocke}:
\begin{eqnarray}
\ve{d}^{\;\prime} &=&
\lim_{t\rightarrow - \infty} \ve{\sigma} \times \left(\ve{x} (t) \times \ve{\sigma}\right).
\label{impact_B}
\end{eqnarray}
\noindent
This impact parameter is identical to Chandrasekhar's impact parameter \cite{Article_Klioner_Zschocke,Report4},
that is, in vectorial form $\ve{d}^{\;\prime} = \frac{\displaystyle \ve{L}}{\displaystyle E}$,
where $\ve{L}$ is the orbital angular momentum and $E$ is the energy of the photon on the light trajectory;
cf. Eq.~(215) in chapter 20 of \cite{Chandrasekhar1983}.
By means of $\sin \varphi = \left|\ve{n} \times \ve{k}\right|$, we find the light-deflection
angle $\varphi = \delta(\ve{n},\ve{k})$ in post-Newtonian approximation:
\begin{eqnarray}
\varphi &=& (1 + \gamma) \,\frac{m}{d^{\;\prime}}\,
\frac{x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}}\cdot \ve{x}_{{\rm 1}}}{R\,x_{{\rm 1}}} + {\cal O} \left(m^2\right).
\label{standard_25_B}
\end{eqnarray}
\noindent
Note that $\displaystyle \frac{x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}}\cdot \ve{x}_{{\rm 1}}}{R\,x_{{\rm 1}}}\le 2$, and
therefore $\displaystyle \varphi \le \frac{4\,m}{d^{\;\prime}}$.
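As a hedged numerical illustration of Eq.~(\ref{standard_25_B}), the following Python sketch evaluates the post-Newtonian deflection for a grazing solar ray with a distant source; all parameter values are illustrative assumptions, and the impact parameter $d^{\;\prime}$ is approximated by that of the unperturbed, axis-parallel ray. The classical value of about $1.75''$ is recovered.

```python
import math

def phi_pn(m, x0, x1, gamma=1.0):
    # Post-Newtonian deflection angle, Eq. (standard_25_B), for 2-D vectors
    # x0 (source) and x1 (observer) with the mass at the coordinate origin.
    n0, n1 = math.hypot(*x0), math.hypot(*x1)
    R = math.hypot(x1[0] - x0[0], x1[1] - x0[1])
    dot = x0[0] * x1[0] + x0[1] * x1[1]
    # approximation: the ray is taken parallel to the x-axis, so the
    # impact parameter is simply the common |y| of source and observer
    dprime = abs(x0[1])
    return (1 + gamma) * (m / dprime) * (n0 * n1 - dot) / (R * n1)

au = 1.496e11                 # astronomical unit in metres
m_sun = 1476.6                # mass parameter m = GM/c^2 of the Sun in metres
d_sun = 6.96e8                # solar radius: grazing ray
x0 = (-1.0e4 * au, d_sun)     # distant source (illustrative distance)
x1 = (1.0 * au, d_sun)        # observer at 1 au
arcsec = phi_pn(m_sun, x0, x1) * 180.0 / math.pi * 3600.0
```

For the nearly collinear geometry above the factor $(x_0 x_1 - \ve{x}_0\cdot\ve{x}_1)/(R\,x_1)$ is close to its maximal value 2, so $\varphi$ is close to the bound $4m/d^{\;\prime}$.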
One problem of the post-Newtonian solutions (\ref{standard_25_A}) and (\ref{standard_25_B}) is that one can only state that the
neglected terms are of order ${\cal O} \left(m^2\right)$, while their actual magnitude remains unclear. In order to bound
the magnitude of these higher-order terms, one needs to consider the geodesic equation in
post-post-Newtonian approximation.
\section{Post-post Newtonian approximation\label{post_post_Newtonian}}
Now we will consider the trajectory of a light-signal in post-post-Newtonian Schwarzschild metric:
\begin{eqnarray}
g_{00} &=& - 1 + 2\,a - 2\,\beta\,a^2 + {\cal O} \left(c^{-6}\right)\;,\quad g_{i0} = 0\,,
\nonumber\\
g_{ij} &=&\delta_{ij} + 2\,\gamma\,a\,\delta_{ij} + \epsilon \left(\delta_{ij} + \frac{x^i\,x^j}{x^2}\right) a^2
+ {\cal O} \left(c^{-6}\right).
\label{ppN_metric_5}
\end{eqnarray}
\noindent
The geodesic equation of light-propagation in post-post-Newtonian approximation is given by \cite{Article_Klioner_Zschocke}
\begin{eqnarray}
\ddot{\ve{x}} &=&
-(1+\gamma)\,c^2\,\frac{a\,\ve{x}}{x^2}
+2\,(1+\gamma)\,\frac{a\,\dot{\ve{x}}\,(\dot{\ve{x}}\cdot\ve{x})}{x^2}
\nonumber
\\
&& \hspace{-0.5cm} +2\,c^2\,\left(\beta-\epsilon+2\,\gamma\,
(1+\gamma)\right)\,\frac{a^2\,\ve{x}}{x^2}
+2\,\epsilon\,\frac{a^2\,\ve{x}\,(\dot{\ve{x}}\cdot\ve{x})^2}{x^4}
\nonumber
\\
&& \hspace{-0.5cm} + 2\,(2(1-\beta)+\epsilon-2\,\gamma^2)\,
\frac{a^2\,\dot{\ve{x}}\,(\dot{\ve{x}}\cdot\ve{x})}{x^2}
+{\cal O} \left(c^{-4}\right).
\label{geodesic-post-post-Newtonian}
\end{eqnarray}
\noindent
The parameters $\beta$, $\gamma$ and $\epsilon$ characterize possible deviations from general
relativity (in general relativity $\beta = \gamma = \epsilon = 1$). The solution
of (\ref{geodesic-post-post-Newtonian}) and the transformation between the unit vectors $\ve{n}$ and $\ve{k}$ in
post-post-Newtonian order has been given in \cite{Article_Klioner_Zschocke}, cf. Eqs.~(108) and (109) ibid., and reads
\begin{eqnarray}
\ve{n} &=& \ve{k}-(1+\gamma) \frac{m}{d^{\,\prime}} \frac{\ve{d}^{\,\prime}}{d^{\,\prime}}
\frac{x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}}\cdot\ve{x}_{{\rm 1}}}{R\,x_{{\rm 1}}}
+ {\cal O} \left(\frac{m^2}{{d^{\,\prime}}^2}\right).
\label{lens_50}
\end{eqnarray}
\noindent
The terms of the order $\displaystyle {\cal O} \left(\frac{m^2}{{d^{\;\prime}}^2}\right)$
can be estimated to be smaller than or equal to $\displaystyle \frac{15\,\pi}{4}\,\frac{m^2}{{d^{\;\prime}}^2}$.
From Eq.~(\ref{lens_50}) we obtain the expression
\begin{eqnarray}
\varphi &=&
(1 + \gamma)\,\frac{m}{d^{\;\prime}}\,\frac{x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}} \cdot \ve{x}_{{\rm 1}}}{R\,x_{{\rm 1}}}
+ {\cal O} \left(\frac{m^2}{{d^{\;\prime}}^2}\right).
\label{lens_155_A}
\end{eqnarray}
\noindent
The solutions (\ref{lens_50}) and (\ref{lens_155_A}) are identical to the post-Newtonian solutions (\ref{standard_25_A})
and (\ref{standard_25_B}), respectively. This means that the post-post-Newtonian terms in the metric
(\ref{ppN_metric_5}), and likewise the post-post-Newtonian terms in the geodesic equation (\ref{geodesic-post-post-Newtonian}),
contribute only terms which can be estimated to be smaller than or equal to
$\displaystyle \frac{15\,\pi}{4}\,\frac{m^2}{{d^{\;\prime}}^2}$. Therefore, the only difference between
(\ref{lens_155_A}) and (\ref{standard_25_B}) is that the post-post-Newtonian approximation allows
one to bound the magnitude of the neglected terms.
\section{Generalized lens equation\label{generalized_lens}}
Usually, in practical astrometry the position of the observer $\ve{x}_{{\rm 1}}$ and the position of the light-deflecting body
are known (here, the center of the massive body coincides with the coordinate center),
but the impact parameter $d^{\;\prime}$ is not accessible. Therefore, the solutions (\ref{lens_50}) and
(\ref{lens_155_A}) are not applicable in the form presented. Instead, one has to rewrite these solutions in terms of
the impact vector of the unperturbed light ray
\begin{eqnarray}
\ve{d} &=& \ve{k} \times \left( \ve{x}_{{\rm 1}} \times \ve{k} \right).
\label{impact_vector_5}
\end{eqnarray}
\noindent
For that one needs a relation between impact vector $\ve{d}^{\;\prime}$ defined in Eq.~(\ref{impact_B})
and impact vector $\ve{d}$ defined in Eq.~(\ref{impact_vector_5}).
Such a relation has been given in \cite{Article_Klioner_Zschocke}, cf.~(62) ibid., and reads
(note, $d^{\;\prime} = d + {\cal O} \left(m\right)$):
\begin{eqnarray}
d^{\;\prime} &=& d + (1+\gamma)\,\frac{m}{d^{\prime}}\,\frac{x_{{\rm 0}}+x_{{\rm 1}}}{R}\,
\frac{x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}} \cdot\ve{x}_{{\rm 1}}}{R}
+ {\cal O} \left(m^2\right).
\label{coordinate_independent_15}
\end{eqnarray}
\noindent
Eq.~(\ref{coordinate_independent_15}) represents a quadratic equation for $d^{\;\prime}$, whose two
solutions correspond to the two possible light trajectories.
A comparison of (\ref{coordinate_independent_15}) with (\ref{lens_155_A}) yields the relation
\begin{eqnarray}
d^{\;\prime} &=& d + x_{{\rm 1}}\,\varphi + \frac{x_{{\rm 0}} +
x_{{\rm 1}}-R}{R}\,x_{{\rm 1}}\,\varphi + {\cal O} \left(m^2\right)\,,
\label{simplest_form_5}
\end{eqnarray}
\noindent
where $\varphi$ is given by Eq.~(\ref{lens_155_A}) and we have separated a term
$\displaystyle \frac{x_{{\rm 0}} + x_{{\rm 1}} - R}{R}\,x_1\,\varphi$ which can be shown to contribute to the
light-deflection $\varphi$ only to order $\displaystyle {\cal O} \left(\frac{m^2}{{d^{\;\prime}}^2}\right)$.
By inserting (\ref{simplest_form_5}) into (\ref{lens_155_A}) we obtain a quadratic equation which has the solution
\begin{eqnarray}
\varphi_{1,2} &=& \frac{1}{2} \left( \sqrt{\frac{d^2}{x_{{\rm 1}}^2}
+ 4\,(1+\gamma)\,\frac{m}{x_{{\rm 1}}}\,\frac{x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}} \cdot\ve{x}_{{\rm 1}}}{R\;x_{{\rm 1}}}}
\mp \frac{d}{x_{{\rm 1}}} \right) + {\cal O} \left(\frac{m^2}{{d^{\;\prime}}^2}\right).
\label{simplest_form_25}
\end{eqnarray}
\noindent
The solution with the upper (lower) sign is denoted by $\varphi_1$ ($\varphi_2$). For astrometry, $\varphi_1$
is the more relevant solution, since $\varphi_2$ represents the second image of
one and the same source.
One can show that the terms $\displaystyle {\cal O} \left(\frac{m^2}{{d^{\;\prime}}^2}\right)$
are smaller than or equal to $\displaystyle \frac{15\,\pi}{4}\,\frac{m^2}{{d^{\;\prime}}^2}$.
Equation (\ref{simplest_form_25}) represents the generalized lens equation. It is applicable not only
to configurations where the post-Newtonian approach, the post-post-Newtonian approach, or
the classical lens equation is valid, but also to all those extreme configurations given
by the points (1)--(3) in the introductory section.
Especially, it allows an analytical investigation of light-deflection in binary systems \cite{Zschocke_Binaries}.
In the following Section we will show that the formula (\ref{simplest_form_25}) provides a link between the
standard post-Newtonian approach and the classical lens equation.
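A minimal numerical sketch of the generalized lens equation (\ref{simplest_form_25}) is given below (Python, two-dimensional geometry, illustrative parameter values). Since $\varphi_1$ and $\varphi_2$ are the two roots of the underlying quadratic, they must obey $\varphi_2-\varphi_1 = d/x_1$ and $\varphi_1\,\varphi_2 = (1+\gamma)\,(m/x_1)\,(x_0\,x_1-\ve{x}_0\cdot\ve{x}_1)/(R\,x_1)$, which serves as a consistency check.

```python
import math

def lens_angles(m, x0, x1, gamma=1.0):
    # Both solutions phi_1, phi_2 of the generalized lens equation
    # (simplest_form_25) for 2-D positions x0 (source) and x1 (observer),
    # with the mass at the coordinate origin. Returns (phi1, phi2, d, F).
    n0, n1 = math.hypot(*x0), math.hypot(*x1)
    R = math.hypot(x1[0] - x0[0], x1[1] - x0[1])
    k = ((x1[0] - x0[0]) / R, (x1[1] - x0[1]) / R)
    # impact vector of the unperturbed ray, d = k x (x1 x k)
    kdotx1 = k[0] * x1[0] + k[1] * x1[1]
    d = math.hypot(x1[0] - kdotx1 * k[0], x1[1] - kdotx1 * k[1])
    # geometric factor (x0*x1 - x0.x1)/(R*x1)
    F = (n0 * n1 - x0[0] * x1[0] - x0[1] * x1[1]) / (R * n1)
    disc = math.sqrt(d * d / n1**2 + 4 * (1 + gamma) * (m / n1) * F)
    return 0.5 * (disc - d / n1), 0.5 * (disc + d / n1), d, F
```

Here `m`, `x0` and `x1` are in arbitrary consistent units; the function is a sketch for illustration, not the implementation used in the paper.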
\section{Discussion of generalized lens equation\label{discussion}}
\subsection{Comparison with standard post-Newtonian and post-post-Newtonian approach}
In this Section we compare the generalized lens equation (\ref{simplest_form_25})
with the standard post-Newtonian and post-post-Newtonian approach of light-deflection.
A series expansion of the solution $\varphi_1$ in Eq.~(\ref{simplest_form_25}) for $d \gg m$ yields
\begin{eqnarray}
\varphi_1 &=& \varphi_{\rm pN} + \varphi_{\rm ppN} + {\cal O} \left(m^3\right)
+ {\cal O} \left(\frac{m^2}{{d^{\;\prime}}^2}\right),
\label{post_post-Newtonian_5}
\end{eqnarray}
\noindent
with
\begin{eqnarray}
\varphi_{\rm pN} &=& (1+\gamma)\,\frac{m}{d}\,
\frac{x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}} \cdot\ve{x}_{{\rm 1}}}{R\,x_{{\rm 1}}}
\le 4\,\frac{m}{d}\,,
\label{post_post-Newtonian_10}
\\
\nonumber\\
\varphi_{\rm ppN} &=& - (1+\gamma)^2\,\frac{m^2}{d^2}\,
\frac{\left(x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}} \cdot\ve{x}_{{\rm 1}}\right)^2}{R^2\;d\;x_{{\rm 1}}}
\le 16\,\frac{m^2}{d^2}\,\frac{x_{{\rm 1}}}{d}\,.
\label{post_post-Newtonian_15}
\end{eqnarray}
\noindent
Expression (\ref{post_post-Newtonian_10}) is called {\it standard post-Newtonian solution}, cf. Eq.~(24) in
\cite{Article_Klioner_Zschocke}. The expression (\ref{post_post-Newtonian_15}) is just the 'enhanced'
post-post-Newtonian term, cf. Eqs.~(3) and (4) in \cite{Report5}.
The 'enhanced' term can become arbitrarily large for small $d$ and large $x_{{\rm 1}}$.
That is the reason why the standard post-Newtonian and post-post-Newtonian solutions are not applicable for extreme
configurations like binary stars. The term ${\cal O} \left(m^3\right)$ will be discussed below,
see Eq.~(\ref{post_post-post-Newtonian}); here it is only essential to realize that this term can be larger
than the neglected terms $\displaystyle {\cal O}\left(\frac{m^2}{{d^{\;\prime}}^2}\right)$.
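The expansion (\ref{post_post-Newtonian_5}) with the terms (\ref{post_post-Newtonian_10}) and (\ref{post_post-Newtonian_15}) can be verified numerically; the following Python sketch (symmetric geometry $\ve{x}_0=(-a,d)$, $\ve{x}_1=(a,d)$, illustrative values) checks that the residual $\varphi_1 - (\varphi_{\rm pN}+\varphi_{\rm ppN})$ is far smaller than $|\varphi_{\rm ppN}|$, consistent with it being of order $m^3$.

```python
import math

def expansion_check(m, a=100.0, d=1.0, gamma=1.0):
    # Compare the exact root phi_1 of the generalized lens equation with
    # the truncated series phi_pN + phi_ppN for x0 = (-a, d), x1 = (a, d);
    # returns (phi1, phi_pn, phi_ppn).
    n0 = n1 = math.hypot(a, d)
    R = 2.0 * a
    dot = -a * a + d * d
    F = (n0 * n1 - dot) / (R * n1)
    disc = math.sqrt(d * d / n1**2 + 4 * (1 + gamma) * (m / n1) * F)
    phi1 = 0.5 * (disc - d / n1)
    # Eq. (post_post-Newtonian_10)
    phi_pn = (1 + gamma) * (m / d) * F
    # Eq. (post_post-Newtonian_15), the 'enhanced' term
    phi_ppn = -(1 + gamma)**2 * (m / d)**2 * (n0 * n1 - dot)**2 / (R**2 * d * n1)
    return phi1, phi_pn, phi_ppn
```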
\subsection{Comparison of generalized lens equation and classical lens equation \label{classical_lens_B}}
The standard weak-field lens equation is usually called {\it classical lens equation} and given, for instance, in Eq.~(67)
in \cite{exact_lens3} or Eq.~(24) in \cite{exact_lens4}. Let us briefly reconsider the classical lens equation.
According to the scheme in Figure~\ref{Fig: Lens}, we obtain the following geometrical relations
\begin{eqnarray}
\varphi + \psi &=& \delta \,,
\label{lens_5}
\\
a\, \tan \varphi &=& b\, \tan \psi\,.
\label{lens_10}
\end{eqnarray}
\noindent
Here, the angles are $\psi = \delta (\ve{\mu}, \ve{k})$ and $\delta =
\delta (\ve{n}, \ve{\mu})$, where $\ve{\mu} = \frac{\dot{\ve{x}}
(t_0)}{|\dot{\ve{x}} (t_0)|}$ is the unit tangent vector at the position
of the source in the direction of the propagation of the light signal.
If the source and observer are infinitely far from the massive body, then the total light-deflection angle
$\delta = \delta \left(\ve{n},\ve{\mu}\right)$ in Schwarzschild metric reads \cite{Bodenner_Will}
\begin{eqnarray}
\delta &=& 2\,\left(1 + \gamma\right) \frac{m}{d^{\;\prime}} + {\cal O} \left(\frac{m^2}{{d^{\;\prime}}^2}\right),
\label{lens_15}
\end{eqnarray}
\noindent
which is a coordinate independent result. The terms of order
$\displaystyle {\cal O}\left(\frac{m^2}{{d^{\;\prime}}^2}\right)$ can be estimated to be smaller than or equal to
$\displaystyle \frac{15\,\pi}{4}\,\frac{m^2}{{d^{\;\prime}}^2}\,$, see \cite{Bodenner_Will}. In classical lens approach,
the approximation $d^{\;\prime} \simeq d + a\,\tan \varphi$ is used, see Figure~\ref{Fig: Lens}.
Inserting this relation into (\ref{lens_15}), by means of geometrical relations (\ref{lens_5}) and (\ref{lens_10}), and
using $\tan \varphi = \varphi + {\cal O} (\varphi^3)$ and $\tan \psi = \psi + {\cal O} (\varphi^3)$, we obtain the
quadratic equation
\begin{eqnarray}
\varphi^2 + \frac{d}{a}\,\varphi - 2\,(1 + \gamma)\,\frac{m}{a}\,\frac{b}{a +b} &=& 0 \,.
\label{lens_20}
\end{eqnarray}
\noindent
The solution of Eq.~(\ref{lens_20}) is the classical lens equation:
\begin{eqnarray}
\varphi_{1,2}^{\rm class} &=& \frac{1}{2} \left( \sqrt{\frac{d^2}{a^2}
+ 8 (1 + \gamma)\,\frac{m}{a}\,\frac{b}{a + b}} \mp \frac{d}{a} \right),
\label{lens_25}
\end{eqnarray}
\noindent
which is valid in case of $a,b \gg d$; the solution with the upper (lower) sign is denoted by
$\varphi_1^{\rm class}$ ($\varphi_2^{\rm class}$).
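The two classical images can be checked directly against the quadratic (\ref{lens_20}) from which they derive; the sketch below (Python, illustrative values) also verifies the root relation $\varphi_2^{\rm class}-\varphi_1^{\rm class} = d/a$.

```python
import math

def classical_lens(m, a, b, d, gamma=1.0):
    # Both images of the classical lens equation (lens_25);
    # a, b are the distances of observer and source from the lens plane.
    disc = math.sqrt(d * d / a**2 + 8 * (1 + gamma) * (m / a) * b / (a + b))
    return 0.5 * (disc - d / a), 0.5 * (disc + d / a)
```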
\begin{figure}[!h]
\caption{A geometrical representation of classical lens.}
\vspace{10pt}
\begin{indented}
\item[]
\includegraphics[scale=0.8]{classical_lens.eps}
\label{Fig: Lens}
\end{indented}
\end{figure}
It should be noticed that in (\ref{lens_25}) not only the light deflection angle $\varphi$ is
assumed to be small, but also the source and observer are assumed to be far from the massive body,
i.e. $\delta (\ve{x}_0, \ve{x}_1) \simeq \pi$; note that due to that fact equation (\ref{lens_25}) agrees with
the classical lens equation (67) in \cite{exact_lens3}.
Therefore, the classical lens equation is not applicable to extreme configurations like binary systems
or the light deflection of moons by their giant planets in the solar system.
It can easily be shown that the classical lens equation (\ref{lens_25}) follows straightforwardly
from the generalized lens equation (\ref{simplest_form_25}). That is, if we rewrite (\ref{simplest_form_25}) in terms
of $a = \ve{k}\cdot \ve{x}_{{\rm 1}}$ and $b = - \ve{k}\cdot \ve{x}_{{\rm 0}}$ and perform a corresponding series expansion
of the generalized lens equation (\ref{simplest_form_25}), then we obtain the classical lens equation (\ref{lens_25}) as
the leading term of this series.
Furthermore, in the limit $d \rightarrow 0$, known as Einstein ring solution, the generalized lens equation
(\ref{simplest_form_25}) and the classical lens equation (\ref{lens_25}) yield the same result:
\begin{eqnarray}
\lim_{d \rightarrow 0} \varphi_{1,2} &=& \lim_{d \rightarrow 0}\, \varphi_{1,2}^{\rm class}
= \sqrt{2\left(1+\gamma\right)\,\frac{m}{x_{{\rm 1}}}\,
\frac{x_{{\rm 0}}}{x_{{\rm 0}}+x_{{\rm 1}}}}\,.
\label{d_0_10}
\end{eqnarray}
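The agreement of the two lens equations in the Einstein-ring limit can be confirmed numerically; the sketch below (Python, illustrative values) evaluates the generalized $\varphi_1$ for a collinear geometry ($d=0$) and compares it with the closed-form limit (\ref{d_0_10}).

```python
import math

def phi1_generalized(m, x0, x1, gamma=1.0):
    # phi_1 of (simplest_form_25) for 2-D vectors, mass at the origin.
    n0, n1 = math.hypot(*x0), math.hypot(*x1)
    R = math.hypot(x1[0] - x0[0], x1[1] - x0[1])
    kx, ky = (x1[0] - x0[0]) / R, (x1[1] - x0[1]) / R
    kdotx1 = kx * x1[0] + ky * x1[1]
    d = math.hypot(x1[0] - kdotx1 * kx, x1[1] - kdotx1 * ky)
    F = (n0 * n1 - x0[0] * x1[0] - x0[1] * x1[1]) / (R * n1)
    disc = math.sqrt(d * d / n1**2 + 4 * (1 + gamma) * (m / n1) * F)
    return 0.5 * (disc - d / n1)

def einstein_ring(m, x0n, x1n, gamma=1.0):
    # Eq. (d_0_10): common d -> 0 limit of both lens equations.
    return math.sqrt(2 * (1 + gamma) * (m / x1n) * x0n / (x0n + x1n))
```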
\noindent
Finally, we note that in the extreme configuration $b = 0$ (in this limit $\varphi_2$ does not exist) we obtain
from (\ref{simplest_form_25}) the result
\begin{eqnarray}
\lim_{b \rightarrow 0}\varphi_1 &=& \frac{1}{2}\left(\sqrt{\frac{d^2}{x_{{\rm 1}}^2}+ 4\left(1+\gamma\right)\frac{m}{x_{{\rm 1}}}\,
\frac{d\,a}{\left(x_{{\rm 1}}+d\right)\,x_{{\rm 1}}}}- \frac{d}{x_{{\rm 1}}}\right)
\le \sqrt{\left(1+\gamma\right)\frac{m}{x_{{\rm 1}}}}\,,
\label{limit_b_0}
\end{eqnarray}
\noindent
while the classical lens equation yields simply $\varphi_1^{\rm class}=0$. Obviously, in the limit $a\rightarrow 0$
the expression (\ref{limit_b_0}) yields zero, as it must, because in this
limit the distance between source and observer vanishes and hence there is no
light deflection.
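Both properties of the limit (\ref{limit_b_0}), the upper bound $\sqrt{(1+\gamma)\,m/x_1}$ and the vanishing for $a\rightarrow 0$, are easy to confirm numerically; a Python sketch with illustrative values:

```python
import math

def phi1_b0(m, a, d, gamma=1.0):
    # Eq. (limit_b_0), i.e. the b -> 0 limit of the generalized lens
    # equation for a source at x0 = (0, d) and observer at x1 = (a, d).
    x1n = math.hypot(a, d)
    disc = math.sqrt(d * d / x1n**2
                     + 4 * (1 + gamma) * (m / x1n) * d * a / ((x1n + d) * x1n))
    return 0.5 * (disc - d / x1n)
```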
\subsection{Comparison with exact solution}
The accuracy of (\ref{simplest_form_25}) and the stated estimate that the neglected terms are smaller than or
equal to $\displaystyle \frac{15\,\pi}{4}\,\frac{m^2}{{d^{\;\prime}}^2} $ have also been confirmed by a comparison with
the exact numerical solution of (\ref{exact_Schwarzschild_10}).
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.25,angle=270]{comparison1.ps}
\includegraphics[scale=0.25,angle=270]{comparison3.ps}
\caption{Comparison of solution $\varphi_1$ of generalized lens equation (\ref{simplest_form_25})
with exact numerical solution $\varphi_{\rm num}$ for the case of a grazing ray at Sun (A)
($\ve{x}_1 = (-1\,{\rm a.u.},0,0), m_{\odot}=1476.6\,{\rm m}$, $d^{\;\prime}=696.0 \times 10^6\,{\rm m}$) and Jupiter (B)
($\ve{x}_1 = (- 6.0\,{\rm a.u.},0,0), m_{\jupiter}=1.40987\,{\rm m}$, $d^{\;\prime}=71.492 \times 10^6\,{\rm m}$),
where ${\rm a.u.} = 1.496 \times 10^{11}\,{\rm m}$ denotes astronomical unit.}
\label{Fig: Comparison3}
\end{center}
\end{figure}
For that, we have solved the geodesic equation (\ref{exact_Schwarzschild_10}) in the
Schwarzschild metric with the numerical integrator ODEX \cite{ODEX} for several extreme astrometric configurations.
Using forward and backward integration, a numerical accuracy of at least $10^{-24}$ in the components of position and velocity
of the photon is guaranteed. Thus, the numerical integration can be considered as an exact solution of the geodesic equation,
which is denoted by $\varphi_{\rm num}$.
This numerical approach has been described in some detail in \cite{Article_Klioner_Zschocke}.
In all considered extreme configurations, the validity of (\ref{simplest_form_25}) and the given
estimate of the neglected terms have been confirmed. As an example, in Figure~\ref{Fig: Comparison3}
we present the results for the light deflection of a grazing ray at the Sun and at Jupiter.
These examples elucidate the fact that the accuracy of the generalized lens equation (\ref{simplest_form_25}) is far better
than the microarcsecond level in the case of light deflection at giant planets. The reason is that
$\displaystyle \frac{15\,\pi}{4}\,\frac{m^2}{{d^{\;\prime}}^2} \ll 1\,\mu{\rm as}$ for the giant planets of the solar system; only
in the vicinity of the Sun do we have $\displaystyle \frac{15\,\pi}{4}\,\frac{m^2}{{d^{\;\prime}}^2} \sim 11\,\mu{\rm as}$.
The accuracy shown in Figure~\ref{Fig: Comparison3} (B) is considerably better
than the post-post-Newtonian solution investigated in detail in \cite{Article_Klioner_Zschocke,Report2},
cf. Figure~\ref{Fig: Comparison3} (B) with FIG.~$2$ in \cite{Report2}.
In order to understand the numerical difference between Figure~\ref{Fig: Comparison3} (B) and
FIG.~$2$ in \cite{Report2}, we perform a further series expansion of Eq.~(\ref{simplest_form_25}) up to terms of
order $m^4$, that is,
\begin{eqnarray}
\varphi_1 &=& \varphi_{\rm pN} + \varphi_{\rm ppN} + \varphi_{\rm pppN}
+ {\cal O} \left(m^4\right) + {\cal O} \left(\frac{m^2}{{d^{\,\prime}}^2}\right),
\label{expansion_2}
\end{eqnarray}
\noindent
where the 'enhanced' term beyond the post-post-Newtonian order is:
\begin{eqnarray}
\varphi_{\rm pppN} &=& 2\,\left(1+\gamma\right)^3\,\frac{m^3}{d^3}\,
\frac{\left(x_{{\rm 0}}\,x_{{\rm 1}} - \ve{x}_{{\rm 0}} \cdot\ve{x}_{{\rm 1}}\right)^3}{R^3\;d^2\;x_{{\rm 1}}}
\le 128\,\frac{m^3}{d^3}\,\frac{x_{{\rm 1}}^2}{d^2}\,.
\label{post_post-post-Newtonian}
\end{eqnarray}
\noindent
The estimate in (\ref{post_post-post-Newtonian}) shows that for large $x_1$ this term can be considerably larger
than the neglected terms of order $\displaystyle {\cal O} \left(\frac{m^2}{{d^{\;\prime}}^2}\right)$.
Moreover, the numerical difference between Figure~\ref{Fig: Comparison3} and FIG.~$2$ in \cite{Report2}
is just given by the term (\ref{post_post-post-Newtonian}).
\section{Summary\label{summary}}
Modern astrometry has achieved a microarcsecond level of accuracy, e.g.\ in the astrometric missions SIM (NASA) and GAIA (ESA).
Time-efficient data reduction implies the need for approximate yet highly precise solutions for the light deflection
at this level of accuracy. In our investigation we have suggested a generalized lens equation (\ref{simplest_form_25}) for weak gravitational fields in the
Schwarzschild metric, valid for finite distances of source and observer
from the light-deflecting body.
The derivation is based on the solution (\ref{lens_155_A}) of the geodesic equation in the
post-Newtonian metric, on Chandrasekhar's coordinate-independent impact parameter $d^{\;\prime}$ (\ref{impact_B}), and on
its relation to the light-deflection angle $\varphi$ given in (\ref{simplest_form_5}).
The neglected terms in (\ref{simplest_form_25}) can be estimated to be smaller than or
equal to $\displaystyle \frac{15\,\pi}{4}\,\frac{m^2}{{d^{\;\prime}}^2}$. The accuracy of generalized lens equation
(\ref{simplest_form_25}) is considerably better than the standard post-Newtonian and post-post-Newtonian approach,
which has been investigated in some detail in \cite{Article_Klioner_Zschocke,Report2} and the reason for this fact
has been pointed out.
The generalized lens equation (\ref{simplest_form_25}) satisfies three conditions formulated in the introductory Section.
Moreover, we have shown that in the appropriate limits we obtain the post-Newtonian terms, 'enhanced'
post-post-Newtonian terms and the classical lens equation. Thus, the generalized lens equation
(\ref{simplest_form_25}) provides also a link between these essential approaches to determine the light-deflection.
Numerical investigations have confirmed the analytical results obtained.
The generalized lens equation (\ref{simplest_form_25}) allows an analytical understanding and investigation
of light deflection in extreme astrometric configurations. In particular, the determination of light deflection in
binary systems by means of the generalized lens equation (\ref{simplest_form_25})
has been investigated in \cite{Zschocke_Binaries}.
\section*{Acknowledgements}
This work was partially supported by the BMWi grants 50\,QG\,0601 and
50\,QG\,0901 awarded by the Deutsche Zentrum f\"ur Luft- und Raumfahrt
e.V. (DLR). Enlightening discussions with Prof. Sergei A. Klioner,
Prof. Michael Soffel and Prof. Chongming Xu are gratefully acknowledged.
\section*{References}
\section{Introduction}
The process of charge transport in molecular junctions has received much
attention recently.\cite{ree97:252,joa00:541,Nitzan01,nit03:1384,Cuniberti05,Selzer06,Venkataraman06,Chen07,Galperin08b,Cuevas10}
Single molecule junctions, consisting of single
molecules that are chemically bound to metal electrodes, are well-suited systems to study nonequilibrium
transport phenomena at the nanoscale and are also of interest for potential
applications in the field of molecular electronics. Recent
developments in experimental techniques, such as electromigration, mechanically controllable
break junctions, or scanning tunneling microscopy,\cite{ree97:252,par00:57,cui01:571,par02:722,smi02:906,rei02:176804,zhi02:226801,xu03:1221,qiu04:206102,liu04:11371,elb05:8815,Elbing05,Ogawa07,Schulze08,Pump08,Leon08,Osorio10,Tao10,Martin10} have made it possible to study transport properties of
molecular junctions. The rich experimental observations, e.g.,
Coulomb blockade,\cite{par02:722} Kondo effect,\cite{lia02:725} negative differential
resistance,\cite{che99:1550,Gaudioso00,Osorio10} switching and hysteresis,\cite{blu05:167,Riel06,Choi06}
have stimulated many theoretical developments for understanding quantum
transport at the molecular scale.
A particular challenge for the theory of charge transport in
molecular junctions is the accurate treatment of correlation effects beyond
the mean-field level. In molecular junctions, there are two types
of correlation effects due to electronic-vibrational and electron-electron interaction.
Considering vibrational induced correlation effects, a
variety of theoretical approaches have been developed, including scattering
theory,\cite{Bonca95,Ness01,Cizek04,Cizek05,Toroker07,Benesch08,Zimbovskaya09,Seidemann10}
nonequilibrium Green's function approaches,\cite{Flensberg03,Mitra04,Galperin06,Ryndyk06,Frederiksen07,Tahir08,Haertle08,Stafford09,Haertle09}
and master equation
methods.\cite{May02,Mitra04,Lehmann04,Pedersen05,Harbola06,Zazunov06,Siddiqui07,Timm08,May08,May08b,Leijnse09,Esposito09,Volkovich11,Haertle11}
In spite of the physical insight offered by these methods,
all of them involve significant approximations. For example, NEGF methods and
master equation approaches are usually based on (self-consistent) perturbation theory and/or employ
factorization schemes. Scattering theory approaches to vibrationally coupled electron transport,
on the other hand, neglect vibrational nonequilibrium effects and are limited to the treatment of a
small number of vibrational degrees of freedom.
These shortcomings have motivated us to develop a
systematic, numerically exact methodology to study quantum dynamics and quantum transport including
many-body effects --- the multilayer multiconfiguration time-dependent Hartree (ML-MCTDH) theory
in second quantization representation (SQR).\cite{wan09:024114} For a generic
model of vibrationally coupled
electron transport, we have demonstrated the importance of treating
electronic-vibrational coupling accurately. Comparison with approximate methods such as NEGF reveals the
necessity of employing accurate methods such as the ML-MCTDH-SQR, in particular in the strong coupling
regime.
In this paper, we extend the ML-MCTDH-SQR method to treat electron-electron
interaction. Considering the paradigmatic Anderson impurity model, we demonstrate
that the methodology yields an accurate
description. Furthermore, we consider a model which incorporates both
electron-electron and electronic-vibrational interaction. To the best of our
knowledge, the results reported for this model are the first obtained by a
numerically exact method.
It is noted that a variety of other powerful methods have been developed in
the recent years with the same goal, i.e., to facilitate numerically
exact simulations for nonequilibrium transport in model systems. These include
the numerical path integral approach,\cite{muh08:176403,wei08:195316,Segal10} real-time
quantum Monte Carlo simulations,\cite{Werner09,Schiro09} the numerical renormalization
group approach,\cite{and08:066804} the time-dependent density matrix renormalization
group approach,\cite{HeidrichMeisner09} and the hierarchical equations of motion
method.\cite{Zheng09,Jiang12} For a comparison and a comprehensive overview of the various
methods in the case of nonequilibrium transport with
electron-electron interaction, see Ref.~\onlinecite{Eckel10}.
The remaining part of the paper is organized as follows.
Section~\ref{modeltight} outlines the physical model and the observables of interest.
The ML-MCTDH-SQR theory is described in Section~\ref{mlsqr}. Section~\ref{results}
presents numerical results for a variety of parameter regimes as well as an analysis of the
transport mechanisms. Section~\ref{conclusions} concludes with a summary.
\section{Model and Observables of Interest}\label{modeltight}
To study correlated electron transport in molecular junctions, we consider a
generic model which includes both electron-electron and
electronic-vibrational
interaction. The model comprises two discrete electronic states (spin up and
down) at the
molecular bridge,
two electronic continua describing the left and the right metal leads, respectively, and a distribution
of harmonic oscillators that models the vibrational modes of the molecular
bridge.
The Hamiltonian reads
\begin{subequations}\label{Htot}
\begin{equation}
\hat H = \hat H_{\rm el} + \hat H_{\rm nuc} + \hat H_{\rm el-nuc},
\end{equation}
where $\hat H_{\rm el}$, $\hat H_{\rm nuc}$, and $\hat H_{\rm el-nuc}$ describe the electronic
degrees of freedom, the nuclear vibrations, and their coupling terms, respectively
\begin{eqnarray}\label{H1tot}
\hat H_{\rm el} &=& \sum_{\sigma} E_d \hat{n}_{d,\sigma}
+ U_d \hat{n}_{d,\uparrow} \hat{n}_{d,\downarrow}
+ \sum_{k_L,\sigma} E_{k_L} \hat{n}_{k_L,\sigma}
+ \sum_{k_R,\sigma} E_{k_R} \hat{n}_{k_R,\sigma} \\
&& + \sum_{k_L,\sigma} V_{dk_L} ( \hat{d}^+_\sigma \hat{c}_{k_L,\sigma} + \hat{c}_{k_L,\sigma}^+ \hat{d}_\sigma )
+ \sum_{k_R,\sigma} V_{dk_R} ( \hat{d}^+_\sigma \hat{c}_{k_R,\sigma} + \hat{c}_{k_R,\sigma}^+ \hat{d}_\sigma ), \nonumber
\end{eqnarray}
\begin{equation}\label{Hnuc}
\hat H_{\rm nuc} = \frac{1}{2} \sum_j ( \hat{P}_j^2 + \omega_j^2 \hat{Q}_j^2 ), \\
\end{equation}
\begin{equation}
\hat H_{\rm el-nuc} = \sum_{\sigma} \hat{n}_{d,\sigma} \sum_j 2 c_j \hat{Q}_j.
\end{equation}
\end{subequations}
In the above expression $\hat{n}$ denotes the number operator, subscript ``$d$'' refers to the
bridge state, ``$k_L/k_R$'' the states of the left/right metal leads, and ``$\sigma=\uparrow,\downarrow$''
the two spin states. Operators $\hat{d}^+/ \hat{d}$, $\hat{c}_{k_L}^+/ \hat{c}_{k_L}$, $\hat{c}_{k_R}^+/ \hat{c}_{k_R}$ are the
fermionic creation/annihilation operators for the electronic states on the molecular bridge, the left
and the right leads, respectively. The second term in (\ref{H1tot}) describes
the on-site Coulomb repulsion of the electrons on the molecular bridge
with electron-electron coupling strength $U_d$. The energies of the electronic
states in the leads, $E_{k_L}$, $E_{k_R}$, as well as the molecule-lead
coupling parameters $V_{dk_L}$, $V_{dk_R}$ are assumed to be independent
of the spin polarization and are defined through the energy-dependent
level width functions
\begin{equation}
\Gamma_L (E) = 2\pi \sum_{k_L} |V_{dk_L}|^2 \delta(E-E_{k_L}), \hspace{1cm}
\Gamma_R (E) = 2\pi \sum_{k_R} |V_{dk_R}|^2 \delta(E-E_{k_R}).
\end{equation}
Without electronic-vibrational
interaction the model introduced above reduces to the well known Anderson
impurity model,\cite{Anderson61} which has been investigated in great detail
both in equilibrium and nonequilibrium.\cite{Hewson93,Eckel10} Neglecting, on
the other hand, electron-electron interaction, it corresponds to the
standard model of vibrationally coupled electron transport in molecular
junctions, which has also been studied in great detail, mostly based on approximate
methods. Recently a numerically exact treatment of the latter model
(i.e.\ without electron-electron interaction) became possible
using path integral techniques,\cite{muh08:176403,Albrecht12} as well as the
ML-MCTDH approach.\cite{wan09:024114,Wang11,Albrecht12}
To the best of our knowledge, the full model including electron-electron and
electronic-vibrational interaction has so far not been considered with
numerically exact methods.
In principle, the parameters of the model can be obtained for a specific
molecular junction employing first-principles electronic structure
calculations.\cite{Benesch09} In this paper, which focuses on the methodology
and general transport
properties, however, we will use a generic parameterization.
Employing a tight-binding model, the function $\Gamma (E)$ is given as
\begin{subequations}
\begin{equation}
\Gamma (E) = \left\{ \begin{array}{ll} \frac{\alpha_e^2}{\beta_e^2} \sqrt{4\beta_e^2-E^2}
\hspace{1cm} & |E| \leq 2 |\beta_e| \\
0 \hspace{1cm} & |E| > 2 |\beta_e| \end{array} \right.,
\end{equation}
\begin{equation}
\Gamma_L (E) = \Gamma (E-\mu_L), \hspace{1cm} \Gamma_R (E) = \Gamma (E-\mu_R),
\end{equation}
\end{subequations}
where $\beta_e$ and $\alpha_e$ are nearest-neighbor couplings between two lead sites
and between the lead and the bridge state, respectively. That is, the width functions for
the left and the right leads are obtained by shifting $\Gamma(E)$ relative to the chemical potentials
of the corresponding leads. We consider a simple model of two identical leads, in which the chemical
potentials are given by
\begin{equation}
\mu_{L/R} = E_f \pm V/2,
\end{equation}
where $V$ is the source-drain bias voltage and $E_f$ the Fermi energy of the leads. Since only the
difference $E_d - E_f$ is physically relevant, we set $E_f = 0$.
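As a numerical illustration, the semi-elliptic width function and its shifted lead variants can be evaluated directly; the following sketch assumes the tight-binding form above with the parameter values used in the examples below (function names are illustrative, not taken from an actual code):

```python
import numpy as np

def gamma(E, alpha_e=0.2, beta_e=1.0):
    """Semi-elliptic level-width function of a tight-binding lead (energies in eV)."""
    E = np.atleast_1d(np.asarray(E, dtype=float))
    out = np.zeros_like(E)
    inside = np.abs(E) <= 2.0 * abs(beta_e)
    out[inside] = (alpha_e**2 / beta_e**2) * np.sqrt(4.0 * beta_e**2 - E[inside]**2)
    return out

def gamma_left(E, mu_L):
    """Width function of the left lead, shifted to its chemical potential."""
    return gamma(np.asarray(E) - mu_L)
```

At the band center this gives $\Gamma(0) = 2|\beta_e|\,\alpha_e^2/\beta_e^2 = 0.08$ eV for $\alpha_e = 0.2$ eV, $\beta_e = 1$ eV, and vanishes outside the band $|E| > 2|\beta_e|$.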
Similarly, the frequencies $\omega_j$ and electronic-nuclear coupling
constants $c_j$ of the vibrational modes of the molecular junctions are modeled by a spectral density
function\cite{leg87:1,Weiss93}
\begin{equation}
\label{discrete}
J(\omega) = \frac{\pi} {2} \sum_{j} \frac{c_{j}^{2}} {\omega_j}
\delta(\omega - \omega_{j}).
\end{equation}
In this paper, the spectral density is chosen in Ohmic form with an exponential cutoff
\begin{equation}
\label{ohmic}
J_{\rm O}(\omega) = \frac{\pi}{2} \alpha \omega e^{-\omega/\omega_c},
\end{equation}
where $\alpha$ is the dimensionless Kondo parameter.
Both the electronic and the vibrational continua can be discretized by choosing a
density of states $\rho_e(E)$ and a density of frequencies
$\rho(\omega)$ such that\cite{tho01:2991,wan01:2979,wan03:1289}
\begin{subequations}
\begin{equation}
\int_0^{E_k} dE \; \rho_e(E) = k, \hspace{.5in}
|V_{dk}|^2 = \frac{\Gamma(E_k)}{2\pi \rho_e(E_k)}, \hspace{.5in}
k = 1,...,N_e,
\end{equation}
\begin{equation}
\int_0^{\omega_j} d\omega \; \rho(\omega) = j, \hspace{.5in}
\frac{c_{j}^{2}} {\omega_j} = \frac{2}{\pi} \frac{J_{\rm O}(\omega_j)}{\rho(\omega_j)},
\hspace{.5in} j = 1,...,N_b.
\end{equation}
\end{subequations}
Here $N_e$ is the number of electronic states (for a single spin and a single lead) and $N_b$ is the number of
bath modes in the simulation. In this work, we choose a constant $\rho_e(E)$, i.e., an equidistant
discretization of the interval $[-2\beta_e, 2\beta_e]$, to discretize the electronic continuum. For
the vibrational bath, $\rho(\omega)$ is chosen as
\begin{equation}
\rho(\omega) = \frac{N_b+1}{ \omega_c} e^{-\omega/\omega_c}.
\end{equation}
For a given time scale, the numbers of electronic states and bath
modes are systematically increased to reach converged results for the quantum dynamics in the
extended condensed-phase system. In our calculations we employ 80-500 states for each electronic lead, implying
40-250 electrons per lead, and a bath with 100-400 modes.
The observable of interest for studying transport through molecular junctions is the current for
a given source-drain bias voltage, given by (in this paper we use atomic units where $\hbar = e = 1$)
\begin{subequations}
\begin{equation}
I_L(t) = - \frac{d N_L(t)} {dt} = -\frac{1}{{\rm tr}[\hat{\rho}]} {\rm tr}
\left\{ \hat{\rho} e^{i\hat{H}t} i[\hat{H}, \hat{N}_{L}] e^{-i\hat{H}t} \right\},
\end{equation}
\begin{equation}
I_R(t) = \frac{d N_R(t)} {dt} = \frac{1}{{\rm tr}[\hat{\rho}]} {\rm tr}
\left\{ \hat{\rho} e^{i\hat{H}t} i[\hat{H}, \hat{N}_{R}] e^{-i\hat{H}t} \right\}.
\end{equation}
\end{subequations}
Here, $N_{L/R}(t)$ denotes the time-dependent charge in each lead, defined as
\begin{equation}
N_{\zeta}(t) = \frac{1}{{\rm tr}[\hat{\rho}]} {\rm tr}
[\hat{\rho} e^{i\hat{H}t} \hat{N}_{\zeta} e^{-i\hat{H}t} ], \;\;\; \zeta=L, R,
\end{equation}
and $\hat{N}_{\zeta} = \sum_{k_\zeta,\sigma} \hat{n}_{k_\zeta,\sigma}$
is the occupation number operator for the electrons in each lead ($\zeta=L, R$).
For Hamiltonian (\ref{Htot}) the explicit
expression for the current operator is given as
\begin{equation}
\hat{I}_\zeta \equiv i[\hat{H}, \hat{N}_{\zeta}] = i \sum_{k_\zeta,\sigma}
V_{dk_\zeta} ( \hat{d}^+_\sigma \hat{c}_{k_\zeta,\sigma} - \hat{c}_{k_\zeta,\sigma}^+ \hat{d}_\sigma ), \;\;\;
\zeta=L, R.
\end{equation}
In the expressions above, $\hat{\rho}$ denotes the
initial density matrix representing a grand-canonical ensemble for each lead and a certain
preparation for the bridge state
\begin{subequations}\label{Initden}
\begin{equation}
\hat{\rho} = \hat{\rho}_d^0 \;{\rm exp} \left[ -\beta (\hat{H}_0
- \mu_L \hat{N}_L - \mu_R \hat{N}_R) \right],
\end{equation}
\begin{equation}
\hat{H}_0 = \sum_{k_L,\sigma} E_{k_L} \hat{n}_{k_L,\sigma}
+ \sum_{k_R,\sigma} E_{k_R} \hat{n}_{k_R,\sigma} + \hat{H}_{\rm nuc}^0.
\end{equation}
\end{subequations}
Here $\hat{\rho}_d^0$ is the initial reduced density matrix for the bridge state, which is usually chosen as
a pure state representing an occupied or an empty bridge state, and $\hat{H}_{\rm nuc}^0$ defines the initial
bath equilibrium distribution.
Various initial states can be considered. For example,
one may choose an initially unoccupied bridge state and the nuclear degrees of
freedom equilibrated with this state, i.e.\ an unshifted
bath of oscillators with $\hat H_{\rm nuc}^0$
as given in Eq.~(\ref{Hnuc}). On the other hand, one may also start with a fully occupied bridge state and
a bath of oscillators in equilibrium with the occupied bridge state
\begin{equation}
{\hat H}_{\rm nuc}^{0'} = \frac{1}{2} \sum_j \left[ P_j^2 + \omega_j^2 \left(Q_j +
\frac{c_j}{\omega_j^2}\right)^2 \right].
\end{equation}
Other initial states may also be prepared. The initial state may affect the
transient dynamics profoundly. The dependence of the steady-state current on
the initial density matrix is a more complex issue. Recent investigations for a model without
electron-electron interaction seem to indicate that different
initial states may lead to different (quasi)steady states,\cite{Gogolin02,Galperin05,Albrecht12}
although this has been debated.\cite{Alexandrov07}
Even without coupling to a vibrational bath, the initial bridge state
population may still affect the final stationary current in a time-dependent
simulation.\cite{Dzhioev11,Khosravi12}
For all results reported in this paper, our calculations show that the stationary state is
independent of the initial condition within the error bar of the simulation
(which is estimated to be less than 10\% relative error).
Since different sets of initial conditions also affect the time scale at which the current $I(t)$
reaches its stationary value, we typically choose initial conditions that are close to the final steady
state, e.g., an unoccupied initial bridge state if its energy is higher
than the Fermi level of the leads and an occupied bridge state otherwise.
The transient behaviors of the currents $I_R(t)$ and $I_L(t)$ defined above are usually different.
However, their long-time limits, which define the stationary current, are the same.
It is found that the average current
\begin{equation}
I(t) = \frac{1}{2} [ I_R(t) + I_L(t) ],
\end{equation}
provides better numerical convergence by minimizing transient effects and will
therefore be used in most calculations.
In our simulations the continuous set of electronic states of the leads is represented by
a finite number of states. The number of states required to properly describe the
continuum limit depends on the time $t$. The situation is thus similar to that of a quantum reactive
scattering calculation in the presence of a scattering continuum, where, with a finite number of basis
functions, an appropriate absorbing boundary condition is added to mimic the correct outgoing
Green's function.\cite{Goldberg1978,Kosloff1986,Neuhauser1989,Seideman1991}
Employing the same strategy for the present problem, the
regularized electric current is given by
\begin{equation}
I^{\rm reg} = \lim_{\eta \to 0^+} \int_0^{\infty} dt \, \frac{dI(t)}{dt} \, e^{-\eta t}.
\end{equation}
The regularization parameter $\eta$ is similar (though not identical) to the formal convergence parameter
in the definition of the Green's function in terms of the time evolution operator
\begin{equation}
G(E^+) = \lim_{\eta \to 0^+} (-i) \int_0^{\infty} dt\, e^{i(E+i\eta-H)t}.
\end{equation}
In numerical calculations, $\eta$ is chosen in a similar way as the absorbing potential used in quantum
scattering calculations.\cite{Goldberg1978,Kosloff1986,Neuhauser1989,Seideman1991} In particular,
the parameter $\eta$ has to be large enough to accelerate the convergence but still sufficiently small
in order not to affect the correct result. While in the reactive scattering calculation
$\eta$ is often chosen to be coordinate dependent, in our simulation $\eta$ is chosen
to be time dependent
\begin{equation}\label{damping}
\eta(t) = \left\{
\begin{array}{ll}
0 & \quad (t<\tau)\\
\eta_0\cdot (t-\tau)/t & \quad (t>\tau) .
\end{array}
\right.
\end{equation}
Here $\eta_0$ is a damping constant and $\tau$ is a cutoff time beyond which a steady-state charge
flow is approximately reached. As the number of electronic states increases, one may choose a
weaker damping strength $\eta_0$ and/or a longer cutoff time $\tau$; the former approaches zero and the
latter approaches infinity for an infinite number of states. In practice,
for the systems considered in this work, convergence can be reached with a reasonable number
of electronic states in the range of 80-500, with a typical $\tau =$ 30-80 fs (a smaller
$\tau$ for a smaller number of states) and $1/\eta_0 =$ 3-10 fs.
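The regularization procedure can be sketched numerically. In the toy example below (synthetic current trace and illustrative parameter values, not actual simulation data), note that for $t > \tau$ the damping factor reduces to $e^{-\eta(t)\,t} = e^{-\eta_0 (t-\tau)}$, and $I(0) = 0$ is assumed so that $I^{\rm reg}$ approximates the plateau value:

```python
import numpy as np

def regularized_current(t, I, tau=30.0, eta0=0.2):
    """I_reg = int_0^inf dt (dI/dt) exp(-eta(t)*t) with eta(t) = 0 for t < tau
    and eta(t) = eta0*(t - tau)/t otherwise; assumes I(0) = 0."""
    eta_t = np.where(t < tau, 0.0, eta0 * (t - tau))   # this is eta(t)*t
    f = np.gradient(I, t) * np.exp(-eta_t)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoidal rule

# Synthetic transient: damped oscillations relaxing to a plateau value of 1.
t = np.linspace(0.0, 100.0, 20001)
I = (1.0 - np.exp(-t / 5.0)) + 0.3 * np.exp(-t / 8.0) * np.sin(2.0 * t)
I_reg = regularized_current(t, I)   # close to the plateau value 1
```

The damping suppresses both the residual transient oscillations and the artifacts of the finite-lead recurrence that would appear at long times in an actual finite-state simulation.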
To gain insight into the transport mechanisms, it is also useful to consider the
population of the electronic states localized on the molecular bridge,
which is given by
\begin{equation}\label{population}
P_d(t) = \frac{1}{{\rm tr}[\hat{\rho}]} {\rm tr}
\left\{ \hat{\rho} e^{i\hat{H}t} \sum_{\sigma}\; \hat{n}_{d,\sigma}\; e^{-i\hat{H}t}
\right\}.
\end{equation}
\section{The Multilayer Multiconfiguration Time-Dependent Hartree Theory
in Second Quantization Representation}\label{mlsqr}
The time-dependent study of transport properties in the model introduced above
requires a method that is able to
describe many-body quantum dynamics in an accurate and efficient way. For this purpose
we employ the recently proposed Multilayer Multiconfiguration Time-Dependent Hartree Theory
in Second Quantization Representation (ML-MCTDH-SQR),\cite{wan10:78} which
allows a numerically exact treatment of the many-body problem. Here we give a brief
outline of the method.
\subsection{Overview of the ML-MCTDH theory}
The ML-MCTDH theory\cite{wan03:1289} is a rigorous variational method to propagate wave packets
in complex systems with many degrees of freedom. In this approach the wave function is represented
by a recursive, layered expansion,
\begin{subequations}\label{psiml}
\begin{equation}\label{L1}
|\Psi (t) \rangle = \sum_{j_1} \sum_{j_2} ... \sum_{j_p}
A_{j_1j_2...j_p}(t) \prod_{\kappa=1}^{p} |\varphi_{j_\kappa}^{(\kappa)} (t) \rangle,
\end{equation}
\begin{equation}\label{L2}
|\varphi_{j_\kappa}^{(\kappa)}(t)\rangle = \sum_{i_1} \sum_{i_2} ... \sum_{i_{Q(\kappa)}}
B_{i_1i_2...i_{Q(\kappa)}}^{\kappa,j_\kappa}(t) \prod_{q=1}^{Q(\kappa)}
|v_{i_q}^{(\kappa,q)}(t) \rangle,
\end{equation}
\begin{equation}\label{L3}
|v_{i_q}^{(\kappa,q)}(t)\rangle = \sum_{\alpha_1} \sum_{\alpha_2} ...
\sum_{\alpha_{M(\kappa,q)}}
C_{\alpha_1\alpha_2...\alpha_{M(\kappa,q)}}^{\kappa,q,i_q}(t)
\prod_{\gamma=1}^{M(\kappa,q)}
|\xi_{\alpha_\gamma}^{\kappa,q,\gamma}(t) \rangle,
\end{equation}
\begin{equation}
... \nonumber
\end{equation}
\end{subequations}
where $A_{j_1j_2...j_p}(t)$, $B_{i_1i_2...i_{Q(\kappa)}}^{\kappa,j_\kappa}(t)$,
$C_{\alpha_1\alpha_2...\alpha_{M(\kappa,q)}}^{\kappa,q,i_q}(t)$ and so on are the
expansion coefficients for the first, second, third, ..., layers, respectively;
$|\varphi_{j_\kappa}^{(\kappa)} (t) \rangle$, $|v_{i_q}^{(\kappa,q)}(t) \rangle$,
$|\xi_{\alpha_\gamma}^{\kappa,q,\gamma}(t) \rangle$, ..., are the ``single particle''
functions (SPFs) for the first, second, third, ..., layers.
In Eq.~(\ref{L1}), $p$ denotes the number of single
particle (SP) groups/subspaces for the first layer. Similarly, $Q(\kappa)$ in Eq.~(\ref{L2})
is the number of second-layer SP groups that belong to the $\kappa$th SP
group in the first layer, i.e., there are a total of $\sum_{\kappa=1}^{p} Q(\kappa)$
second-layer SP groups. Continuing along the multilayer hierarchy,
$M(\kappa,q)$ in Eq.~(\ref{L3}) is the number of third-layer SP groups that belong
to the $q$th SP group of the second layer and the $\kappa$th SP group of the first layer,
resulting in a total of $\sum_{\kappa=1}^{p} \sum_{q=1}^{Q(\kappa)} M(\kappa,q)$ third-layer
SP groups. Naturally, the size of the system that the ML-MCTDH theory can treat
increases with the number of layers in the expansion. In principle, such a recursive
expansion can be carried out to an arbitrary number of layers. The multilayer hierarchy
is terminated at a particular level by expanding the SPFs in the deepest layer in terms of
time-independent configurations, each of which may contain several Cartesian degrees of
freedom.
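Conceptually, the layered expansion is a tree-structured tensor contraction. The following toy sketch (random coefficients, all dimensions illustrative) reconstructs a two-layer wave function of the form of Eqs.~(\ref{L1})-(\ref{L2}), with the deepest-layer SPFs taken as time-independent primitive basis states:

```python
import numpy as np

rng = np.random.default_rng(1)
n_spf, n_prim = 3, 4   # SPFs per first-layer group; primitive states per sub-group

# Top layer: A_{j1 j2}; second layer: B^{kappa,j}_{i1 i2} for kappa = 1, 2.
A = rng.standard_normal((n_spf, n_spf))
B = rng.standard_normal((2, n_spf, n_prim, n_prim))

# Each first-layer SPF |phi_j^(kappa)> as a vector over its primitive product basis.
phi = B.reshape(2, n_spf, n_prim * n_prim)

# |Psi> = sum_{j1 j2} A_{j1 j2} |phi_{j1}^(1)> (x) |phi_{j2}^(2)>
psi = np.einsum('jk,jp,kq->pq', A, phi[0], phi[1]).ravel()

# The layered form stores far fewer parameters than the full coefficient tensor:
n_layered = A.size + B.size        # 9 + 96 = 105 variational coefficients
n_full = (n_prim * n_prim) ** 2    # 256; the gap widens rapidly with more layers/DOFs
```

The compression illustrated by the last two lines is modest for this tiny example but becomes decisive for the hundreds of degrees of freedom treated in this work, which is precisely why the recursive layering extends the reach of the variational ansatz.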
The variational parameters within the ML-MCTDH theoretical framework are dynamically
optimized through the use of the Dirac-Frenkel variational principle\cite{Frenkel34}
\begin{equation}
\langle \delta\Psi(t) | i \frac{\partial} {\partial t} - \hat{H} |
\Psi(t) \rangle = 0,
\end{equation}
which results in a set of coupled, nonlinear differential equations for the expansion
coefficients for all layers.\cite{wan03:1289,wan10:78,wan09:024114} For a $N$-layer version
of the ML-MCTDH theory there are $N+1$ levels of expansion coefficients. In this sense the
conventional wave packet propagation method is a ``zero-layer'' MCTDH approach.
The introduction of this recursive, dynamically optimized layering scheme in the ML-MCTDH
wavefunction provides more flexibility in the variational functional, which results in
a tremendous gain in our ability to study large many-body quantum systems. During the past
few years, significant progress has been made in further development of the theory to simulate
quantum dynamics and nonlinear spectroscopy of ultrafast electron transfer reactions in
condensed phases.\cite{tho06:210,wan06:034114,kon06:1364,wan07:10369,kon07:11970,tho07:153313,wan08:139,wan08:115005,ego08:214303,vel09:094109,wan10:78,Vendrell11}
The theory has also been generalized to study heat transport in molecular
junctions\cite{vel08:325} and to calculate rate constants for model proton transfer reactions
in molecules in solution.\cite{wan06:174502,cra07:144503}
Recent work of Manthe has introduced an even more adaptive formulation based on
a layered correlation discrete variable representation (CDVR).\cite{man08:164116,man09:054109}
\subsection{Treating identical particles using the second quantization representation of Fock space}
Extending the original ML-MCTDH approach to systems of identical quantum
particles requires a method that incorporates the exchange symmetry explicitly.
This is because an ordinary Hartree product
within the first quantized picture is only suitable to describe a configuration for a system of
distinguishable particles. One strategy is to employ a properly symmetrized wave function in
the first quantized framework, i.e., permanents in a bosonic case or Slater determinants in a
fermionic case. This led to the MCTDHF approach\cite{kat04:533,cai05:012712,nes05:124102} for
treating identical fermions and the MCTDHB approach\cite{alo08:033613} for treating identical bosons
as well as combinations thereof.\cite{alo07:154103} However, this wave function-based symmetrization
is only applicable to the single layer MCTDH theory but is incompatible with the ML-MCTDH theory
with more layers --- there is no obvious analog of a multilayer Hartree configuration if
permanents/determinants are used to represent the wave function. As a result, the ability to treat
much larger quantum systems numerically exactly was severely limited.
To overcome this limitation we proposed a novel approach\cite{wan09:024114} that follows a
fundamentally different route to tackle many-body quantum dynamics of indistinguishable particles ---
an operator-based method that employs the second quantization formalism of many-particle quantum
theory. This differs from many previous methods where the second quantization formalism is only
used as a convenient tool to derive intermediate expressions for the first quantized form. In the
new approach the variation is carried out entirely in the abstract Fock space using the occupation
number representation. Therefore, the burden of handling symmetries of identical particles in a
numerical variational calculation is shifted completely from wave functions to the algebraic properties
of operators.
The major difference between the ML-MCTDH-SQR theory for identical fermions and the previous
ML-MCTDH theory for distinguishable particles is the way operators act. For example, in the second
quantized form the fermionic creation/annihilation operators fulfill the
anti-commutation relations
\begin{equation}\label{anticomm}
\{ \hat{a}_P, \hat{a}_Q^+ \} \equiv \hat{a}_P \hat{a}_Q^+ + \hat{a}_Q^+ \hat{a}_P = \delta_{PQ},
\hspace{1cm}
\{ \hat{a}_P^+, \hat{a}_Q^+ \} = \{ \hat{a}_P, \hat{a}_Q \} = 0.
\end{equation}
The symmetry of identical particles is thus realized by enforcing such algebraic
properties of the operators. This can be accomplished by introducing a permutation
sign operator associated with each fermionic creation/annihilation operator, which incorporates
the sign changes of the remaining spin orbitals in all the SPFs whose subspaces are prior to
it.\cite{wan09:024114} For example, if a purely electronic problem is
considered and only one layer is present, the overall wave
function and the SPFs have the form
\begin{subequations}
\begin{equation}
\label{mcsqr}
|\Psi (t) \rangle
= \sum_{j_1} \sum_{j_2} ... \sum_{j_L}
A_{j_1j_2...j_L}(t) \prod_{\kappa=1}^{L} |\varphi_{j_\kappa}^{(\kappa)} (t) \rangle,
\end{equation}
\begin{equation}\label{spsqr}
|\varphi_{j_\kappa}^{(\kappa)}(t)\rangle = \sum_{I_\kappa=1}^{2^{m_\kappa}}
B_{I_\kappa}^{\kappa,j_\kappa}(t) |\phi^{(\kappa)}_{I_\kappa} \rangle \equiv
\sum_{n_1=0}^1 \sum_{n_2=0}^1 ... \sum_{n_{m_\kappa}=0}^1
B_{n_1n_2...n_{m_\kappa}}^{\kappa,j_\kappa}(t)\; |n_1\rangle |n_2\rangle ...
|n_{m_\kappa} \rangle,
\end{equation}
\end{subequations}
where $n_i=0,1$ are the occupation numbers. A fermionic creation operator is actually
implemented as
\begin{equation}\label{fermcreat}
({a}_{\nu}^{(\kappa)})^+ = \left( \prod_{\mu=1}^{\kappa-1}\; \hat{S}_\mu \right)
\; ({\tilde{a}}_{\nu}^{(\kappa)})^+,
\end{equation}
where $\hat{S}_\mu$ ($\mu=1,2,...,\kappa-1$) is the permutation sign operator that accounts
for permuting $({a}_{\nu}^{(\kappa)})^+$ from the first subspace all the way through to
the $\kappa$th subspace, and
$({\tilde{a}}_{\nu}^{(\kappa)})^+$ is the reduced creation operator that only takes care of
the fermionic anti-commutation relation in the $\kappa$th subspace. The operator-based
anti-commutation constraint (\ref{anticomm}) results in the following operations
\begin{subequations}\label{permutesign}
\begin{equation}
({\tilde{a}}_{\nu}^{(\kappa)})^+ |\varphi_{j_\kappa}^{(\kappa)}(t)\rangle =
\sum_{n_1=0}^1 \sum_{n_2=0}^1 ... \sum_{n_{m_\kappa}=0}^1 \; \delta_{n_\nu,0}
\left[\prod_{q=1}^{\nu-1} (-1)^{n_q}\right]\; B_{n_1n_2...n_{m_\kappa}}^{\kappa,j_\kappa}(t)\;
|n_1\rangle |n_2\rangle ... |1_\nu\rangle ... |n_{m_\kappa} \rangle,
\end{equation}
\begin{equation}
\hat{S}_\mu |\varphi_{j_\mu}^{(\mu)}(t)\rangle =
\sum_{n_1=0}^1 \sum_{n_2=0}^1 ... \sum_{n_{m_\mu}=0}^1 \;
\left[\prod_{q=1}^{m_\mu} (-1)^{n_q}\right]\; B_{n_1n_2...n_{m_\mu}}^{\mu,j_\mu}(t)\;
|n_1\rangle |n_2\rangle ... |n_{m_\mu} \rangle.
\end{equation}
\end{subequations}
That is, $({\tilde{a}}_{\nu}^{(\kappa)})^+$ not only creates a particle in the
$\nu$th spin orbital if it is vacant, but also adjusts the sign of each term in
this SPF according to where
$\nu$ is located and what the occupations prior to it are. Furthermore, the permutation sign
operators $\hat{S}_\mu$, $\mu=1,2,...,\kappa-1$, incorporate the sign changes of the remaining
spin orbitals in all the SPFs whose subspaces are prior to that of
$(\tilde{a}_{\nu}^{(\kappa)})^+$.
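A minimal occupation-number implementation of these sign rules (a hypothetical single-layer sketch for a handful of spin orbitals, not the actual ML-MCTDH-SQR code) stores a state as a dictionary over occupation tuples and applies the factor $(-1)^{n_1+\ldots+n_{\nu-1}}$ explicitly:

```python
def create(state, nu):
    """Apply a_nu^+ in the occupation-number basis; `state` maps occupation
    tuples to amplitudes. The prefactor (-1)^(n_1+...+n_{nu-1}) plays the role
    of the permutation sign factors discussed above."""
    out = {}
    for occ, amp in state.items():
        if occ[nu] == 0:
            sign = (-1) ** sum(occ[:nu])
            new = occ[:nu] + (1,) + occ[nu + 1:]
            out[new] = out.get(new, 0.0) + sign * amp
    return out

def annihilate(state, nu):
    """Apply a_nu with the same sign convention."""
    out = {}
    for occ, amp in state.items():
        if occ[nu] == 1:
            sign = (-1) ** sum(occ[:nu])
            new = occ[:nu] + (0,) + occ[nu + 1:]
            out[new] = out.get(new, 0.0) + sign * amp
    return out
```

Applying the two creation operators in opposite order to the vacuum, for example, produces amplitudes of opposite sign, so the anti-commutation relations (\ref{anticomm}) are realized purely through the operator algebra rather than through explicitly antisymmetrized wave functions.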
Thus, the occupation number states in the ML-MCTDH-SQR theory are treated in the same way as the degrees
of freedom in the original ML-MCTDH theory, except that the orderings of all the SP groups in all
layers need to be recorded and maintained in later manipulations. More importantly, the equations
of motion have the same form as in the original ML-MCTDH theory. The only difference is that
for identical fermions each creation/annihilation operator of the
Hamiltonian is effectively a product of operators: a reduced creation/annihilation operator
that only acts on the bottom-layer SPFs for the Fock subspace it belongs to, and a series of
permutation sign operators that accounts for the fermionic anti-commutation relations of all
the spin orbitals prior to it.
In the multilayer case the implementation is more involved but can still be reduced to handling
(many and complicated) basic building blocks of the MCTDH or ML-MCTDH theory --- products of operators.
Thereby, the action of each Hamiltonian term (product of creation/annihilation operators) can be
split into a series of operations on individual Fock subspaces.\cite{wan09:024114}
On the other hand, for identical bosons the implementation
is much simpler because there is no sign change upon permutation.
In the second quantized form, the wave function is represented in the abstract Fock space
employing the occupation number basis. As a result, it can be expanded in the same multilayer
form as that for systems of distinguishable particles. It is thus possible to extend the
numerically exact treatment to much larger systems. The symmetry of the wave function in
the first quantized form is shifted to the operator algebra in the second quantized form.
The key point is that, for both phenomenological models and more fundamental theories,
there are only a limited number of combinations of fundamental operators.
For example, in electronic structure theory only one- and two-electron operators are present.
This means that one never needs to handle all the redundant possibilities of operator combinations
offered by the determinant form in the first quantized framework. It is exactly this property
that provides the flexibility of representing the wave functions in multilayer form and treating
them accurately and efficiently within the ML-MCTDH-SQR theory. It is also noted that
the ML-MCTDH-SQR approach outlined above for fermions has
also been formulated for bosons and for combinations of fermions, bosons, and
distinguishable particles.\cite{wan09:024114}
\section{Results and Discussion}\label{results}
In this section, we present applications of the ML-MCTDH-SQR methodology to
the study of correlated electron transport employing the model described in
Sec.~\ref{modeltight}. In particular, we discuss the influence of
electron-electron and electronic-vibrational interaction on the transport
characteristics for selected examples. Unlike the noninteracting transport
model ($U_d = 0, c_j = 0$), these results represent nontrivial
solutions to a many-body quantum problem and are often beyond the reach of perturbative
treatments. All calculations presented in this paper are for zero temperature,
which corresponds to the deep quantum regime, and is often the most challenging
physical regime of the problem. Meanwhile, this regime is relatively easy
for our approach since only one initial wave function is required.
An investigation of systems at finite temperature as well as an analysis of
the physical mechanisms in a broader
parameter range will be the topic of future work.
\subsection{Effect of electron-electron interaction on transport characteristics for fixed nuclei}
We first focus on the influence of electron-electron interaction and
consider models without electronic-vibrational coupling ($c_j=0$),
i.e.\ for fixed nuclei.
Fig.~\ref{fig1} shows the time-dependent current and the corresponding
bridge state population for a model with the following set of electronic
parameters: The tight-binding parameters for the
function $\Gamma (E)$ are $\alpha_e = 0.2$ eV, $\beta_e = 1$ eV,
corresponding to a moderate molecule-lead coupling and a bandwidth of 4 eV.
The energy of the bridge states $E_d$ is located 0.5 eV above the Fermi energy and the
source-drain voltage is $V=0.1$ V, i.e.\ the model is in the off-resonant
transport regime. The results for both the current and the population show
pronounced transient oscillations that decay on a time scale
of $\approx \Gamma^{-1}$ and approach a stationary plateau at longer times,
which represents the steady state. The overall values of the current and population are
rather small because the transport takes place in the off-resonant regime.
The comparison of the results obtained for different parameters $U_d$
shows that for this model electron-electron
interaction has no significant influence on the population and the current,
and this includes both the transient behavior and the long-time stationary value.
Qualitatively this can be understood from the fact that the model is in the off-resonant transport
regime. At zero coupling strength $U_d$, the bare energies of the electronic
bridge states are the same and are outside the conductance window defined by the chemical potentials
of the two electrodes. Including the on-site repulsion term removes the degeneracy of these
two bridge states if one state is occupied. That is, when one of the bridge states is populated,
the electronic energy of the other state is increased by the value of $U_d$. However, due to the fact
that the initial bridge states are relatively far away from the conductance window, their populations
are small. As a result, the overall electronic correlation effect is small for this
set of model parameters. At a finer scale
it can be seen that, with increasing $U_d$, both the
stationary current and the bridge state population decrease. This is consistent with the fact that upon
increasing $U_d$ the energy of the doubly occupied state ($E_d + U_d$) moves to higher
energies and thus even further away from the conductance window.
Figure~\ref{fig2} shows the time-dependent current and the corresponding bridge state population for
another model, where the parameters are the same as in Fig.~\ref{fig1} except for $E_d-E_f=0$, i.e.,
the energy of the bridge states is located at the Fermi energy of the leads.
For zero on-site coupling strength ($U_d=0$), this set of parameters
corresponds to the resonant tunneling regime and involves
mixed electron/hole transport. This results in a significantly larger
stationary current and a population of approximately one, because each bridge
state has an approximately 50\% probability of being occupied.
In this parameter regime, electron-electron interaction has a pronounced
influence on the transport characteristics. Upon increase of $U_d$ the steady
state value of both the current and the population decreases
significantly. This is due to the fact that for increasing $U_d$ the energy of
the doubly occupied state moves out of the conductance window. For interaction strengths
$U_d\gg 0.1$~eV, the bridge state can only be singly occupied, resulting in an overall
population of $n_d = 1/2$, and the doubly occupied state does not contribute to
the current.
We next consider in Figs.~\ref{fig3}, \ref{fig4} a model with the same
parameters as in the previous
two cases except that the energy
of the bridge state is below the Fermi energy, $E_d - E_f= -0.5$ eV. For
vanishing electron-electron interaction, $U_d=0$, this is again a
non-resonant case. However, because the bridge state is, in contrast to Fig.~\ref{fig1}, located
below the Fermi energy, it is almost doubly
occupied when $U_d=0$. While the stationary current for $U_d=0$ (full black line in
Fig.~\ref{fig3}(a)) is, due to particle-hole
symmetry, in fact identical to that in Fig.~\ref{fig1}(a), the dependence of
the transport characteristics on the electron-electron interaction is more
complex than in the above two cases. Upon moderate increase of $U_d$, the energy of
the doubly occupied state, $E_d + U_d$, moves closer to the Fermi energy and
enters the conductance window. As a result, the population of this state
decreases as shown in Fig.~\ref{fig3}(b). The current (Fig.~\ref{fig3}(a)), on the other hand, increases because the doubly
occupied state provides a channel for resonant transport. It is also observed that
upon moderate increase of $U_d$ the transient dynamics undergoes a coherent to incoherent transition.
When $U_d$ is further increased ($U_d > 0.6$ eV), the
energy of the doubly occupied state becomes higher than the chemical potentials
of both electrodes. As shown in Fig.~\ref{fig4}, this causes a decrease of the
current and the population. For large values of $U_d$, the population in the
steady state approaches a value of unity, because the bridge state can only be
singly occupied.
It is interesting to note that for large Coulomb repulsion, e.g.\ $U_d=2$~eV, the initial transient current
is negative. This is because in the simulation the bridge state is initially fully occupied. During the
early transient time, electrons flow from the bridge states to both the left and the right electrodes,
resulting in a negative transient current. As the bridge states approach their steady state population,
electrons move continuously from the left electrode to the right electrode, establishing a steady-state
current.
\subsection{Aspects of Coulomb blockade}
An interesting many-body
nonequilibrium effect in charge transport in mesoscopic and nanosystems is
Coulomb blockade.\cite{Glazman89,Beenakker91,Grabert92} This phenomenon involves the suppression of the electrical
current due to electron-electron interaction. Within the single-site Anderson
impurity model, the underlying mechanism is that the Coulomb repulsion
with an electron that already occupies the bridge state prevents a second
electron from transferring onto the bridge and thus reduces the current compared to
a noninteracting model.
This basic aspect of Coulomb blockade is demonstrated in Fig.~\ref{fig5},
which shows simulated current-voltage characteristics for a resonant transport
model, where the energy of the bridge states $E_d$ is at the Fermi energy of
the leads, $E_d-E_f=0$. The tight-binding parameters
for the function $\Gamma (E)$ are $\alpha_e = 0.1$ eV, $\beta_e = 1$ eV, corresponding to a smaller
molecule-lead coupling and a bandwidth of 4 eV. Besides the noninteracting model ($U_d=0$),
three values of the electron-electron coupling strengths are considered:
$U_d=0.5,$ 1, and 4 eV. To obtain the current-voltage characteristics, the
stationary plateau value from the time-dependent simulation of the current was
taken for each given voltage.
The results show that upon inclusion of electron-electron interaction,
the currents are suppressed at all voltages. The ratio
between the blocked and unblocked currents attains a value of approximately 2/3 in the
plateau region (within the convergence range of less than 10\% relative error) and is nearly
independent of the electron-electron coupling strength $U_d$. Within a zeroth-order picture, this result
can be rationalized as follows.
For the model without electron-electron interaction ($U_d=0$), there are three channels for electron
transport through the two bridge states: (i) Electron transport through an
initially unoccupied state. There are two such channels
corresponding to the two spin polarizations of the bridge state.
(ii) Electron transport through an initially singly occupied bridge resulting
in the third channel which involves double occupation of the bridge state.
When the source-drain voltage $V$ is small, roughly $|eV|< 2U_d$ in the zeroth-order picture, the
third two-electron transport channel is essentially closed, resulting in a
current value 2/3 of that for
the unblocked case. At approximately $|eV|= 2U_d$, e.g., $V=1$ V for the case $U_d=0.5$ eV in Fig.~\ref{fig5},
the two-electron transport channel becomes available and the current begins to increase with the
source-drain voltage. For finite molecule-lead coupling, the transition is broadened as shown in
Fig.~\ref{fig5}. For larger values of $U_d$, the energy of the doubly occupied
state is outside the conductance window of the bias voltages considered and thus the current is suppressed.
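The zeroth-order channel-counting argument above can be summarized in a few lines; the following is a purely illustrative sketch (level broadening by the molecule-lead coupling and all vibrational effects are neglected, and the function names are our own):

```python
def open_channels(eV, U_d):
    """Zeroth-order count of open transport channels for a spin-degenerate
    bridge level: two single-occupation channels are always available,
    while the third channel, which requires double occupation, costs the
    charging energy U_d and opens only for |eV| >= 2*U_d (symmetric bias,
    mu_{L,R} = E_f +/- eV/2).  Illustrative only; broadening is neglected.
    """
    return 2 + (1 if abs(eV) >= 2 * U_d else 0)


def blockade_ratio(eV, U_d):
    """Ratio of the blocked current to the unblocked (U_d = 0) current,
    assuming each open channel contributes equally."""
    return open_channels(eV, U_d) / 3.0
```

In this crude picture, a bias of $V=0.5$ V with $U_d=0.5$ eV gives the ratio 2/3, while at $V=1$ V the two-electron channel opens and the ratio returns to unity, consistent with the trend seen in Fig.~\ref{fig5}.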
While Fig.~\ref{fig5} demonstrates the phenomenon of Coulomb blockade for
varying the source-drain voltage, it is also instructive to study the phenomenon
for varying the gate voltage. Assuming that an additional gate voltage $V_g$
predominantly shifts the energy of the bridge states $E_d$, we can investigate the
influence of the gate voltage by varying $E_d$ relative to the Fermi energy of
the leads. The result depicted in Fig.~\ref{fig6} exhibits the well known peak
structures of Coulomb blockade, with maxima at
energies $E_d=0$ and $E_d=-U_d$, where the singly occupied levels and the
doubly occupied level are in the conductance window, respectively.
The parameters here are the same as in Fig.~\ref{fig5} except for a fixed
$U_d=0.5$eV and a source-drain voltage $V = 0.1$V. It is noted that
the value of the voltage considered is already beyond the linear response regime.
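The expected peak positions can be illustrated with a toy conductance profile in which each resonance ($E_d=0$ and $E_d=-U_d$) is represented by a Lorentzian of width set by the molecule-lead coupling; this schematic sketch (not the model actually simulated) merely encodes the peak structure described above:

```python
import math


def toy_conductance(E_d, U_d, Gamma):
    """Schematic gate-voltage trace of Coulomb-blockade peaks: Lorentzian
    resonances at E_d = 0 (singly occupied level at the Fermi energy) and
    E_d = -U_d (doubly occupied level), each broadened by Gamma.
    Arbitrary units; purely illustrative."""
    lorentz = lambda x: (Gamma / (2.0 * math.pi)) / (x**2 + (Gamma / 2.0)**2)
    return lorentz(E_d) + lorentz(E_d + U_d)
```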
\subsection{Effect of electron-electron interaction on transport characteristics in the presence of electron-vibrational coupling}
We finally consider a model which includes both electron-electron and electron-vibrational
interaction. The presence of both interactions increases the complexity
significantly.
To the best of our knowledge, the results presented here are the first
numerical exact simulations for this type of models.
Fig.~\ref{fig7} shows results for a model, where the electronic
parameters are the same as in Fig.~\ref{fig2}, i.e., $\alpha_e = 0.2$ eV, $\beta_e = 1$ eV,
$E_d - E_f= 0$, and a source-drain voltage of $V=0.1$ V. The electronic
degrees of freedom are coupled to a vibrational bath
modeled by an Ohmic spectral density, as described in Sec.\ \ref{modeltight}. The characteristic frequency
and the reorganization energy of the vibrational bath are $\omega_c = 500$~cm$^{-1}$ and
$\lambda = 2\alpha\omega_c = 0.25$ eV, respectively. These values are typical
for larger molecular systems.
Without Coulomb repulsion and coupling to
the vibrational bath, this model corresponds to the resonant transport regime. Including the couplings to
the vibrational modes has a significant impact on the electrical current. After a short transient time
the coupling to the vibrations becomes effective and results in a suppression of the current. As illustrated
by the solid black line in Fig.~\ref{fig7}, the effect is very pronounced and the stationary current is
essentially blocked. The underlying mechanism can be qualitatively rationalized by considering the
energy level of the bridge states. For any finite bias voltage, the bare energy of the
bridge states ($E_d - E_f = 0$) is located between the chemical potentials of the leads and thus, within
a purely electronic model, current can flow. The coupling to the vibrations results in a polaron shift
of the energy of the bridge state given by the reorganization energy
$\lambda$. For electronic-vibrational coupling strengths with $\lambda > |V|/2$ the
polaron-shifted energy of the bridge state is below the chemical potentials of both leads and thus
current is blocked. This effect, referred to as phonon blockade of the current, has been observed e.g. in
quantum dots\cite{Weig04} and has been analyzed previously.\cite{Wang11}
As shown in Fig.~\ref{fig7}(b), the bridge states are almost fully occupied in
this case.
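The energetic criterion for phonon blockade described above can be expressed compactly; the sketch below (with a function name of our own choosing) checks only the static polaron-shift condition and ignores the correlated many-body dynamics:

```python
def phonon_blockade(E_d, E_f, reorg, V):
    """Zeroth-order phonon-blockade criterion: the polaron shift lowers the
    bridge level by the reorganization energy reorg; if the shifted level
    falls below the chemical potentials of BOTH leads,
    mu_{L,R} = E_f +/- |V|/2, resonant current is blocked.  For
    E_d = E_f this reduces to the condition reorg > |V|/2.
    Energies in eV; illustrative estimate only."""
    return (E_d - reorg) < (E_f - abs(V) / 2.0)
```

For the parameters of Fig.~\ref{fig7} ($E_d-E_f=0$, $\lambda=0.25$ eV, $V=0.1$ V) the condition is fulfilled and the current is essentially blocked.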
When the Coulomb repulsion term is included in the simulation (in addition to the vibrational
bath), the energy level of the doubly occupied bridge state is shifted to
higher energies as discussed for the previous models. For smaller
values of $U_d$, this brings the polaron-shifted bridge state back into the conductance window and thus
increases the stationary current. This can be seen from the currents for $U_d=0.5$eV and $U_d=1$eV in
Fig.~\ref{fig7}(a) and in Fig.~\ref{fig8}, which shows the current-voltage characteristics for a
few selected values of $U_d$. It is evident that for small $U_d$ the stationary current increases
versus $U_d$.
However, if $U_d$ becomes too large, e.g., $U_d=2$ eV in Fig.~\ref{fig7}(a), the
energy of the doubly occupied bridge state is too high and lies above the
conductance window,
which again results in a suppression of the
stationary current. On the other hand, the overall population of the bridge state decreases monotonically
upon increase of $U_d$ and reaches a value of unity for large
$U_d$ because then the bridge state can only be singly occupied.
Due to the strongly correlated dynamics in this parameter regime, including both electron-electron and
electronic-vibrational coupling, convergence for larger values of $U_d$ and
larger voltages than those depicted in Figs.~\ref{fig7} and~\ref{fig8} is
difficult within our present implementation of the ML-MCTDH-SQR methodology.
Experience shows that convergence in this regime can be facilitated by
transforming the current Hamiltonian to another form in order
to reduce correlation effects. This will be the subject of future work.
Although the interpretation of the above vibronic and electronic correlated transport properties
is appealing in terms of the energetics of the bridge states, it should be emphasized that the mechanism
involves the formation of correlated many-body states that are significantly more complex than this
noninteracting electronic picture, and cannot be fully described by just considering the static shift
of the energy of the bridge states. This is evident by examining the strength
of the interaction parameters $\lambda$ and $U_d$
in Fig.~\ref{fig7}. Thus, an accurate description of the vibrational and electronic dynamics as well
as their couplings is essential to obtain a quantitative description of the many-body quantum dynamics and
the transport characteristics.
\section{Concluding Remarks}\label{conclusions}
In this paper we have employed the ML-MCTDH-SQR method to study correlated electron transport
through model single-molecule junctions. Extending our previous
work,\cite{wan09:024114,Wang11} we have considered
models which include both electron-electron and electron-vibrational
interaction. The ML-MCTDH-SQR method allows an accurate, in principle
numerically exact treatment of this many-body quantum transport problem
including both the transient dynamics and the steady state.
The results obtained for selected model systems demonstrate the complex
interplay of electronic and vibrational
dynamics. For example, strong electron-vibrational coupling may result in a
pronounced suppression of the electrical current (phonon blockade), which is accompanied
by the formation of a polaron-like state. Including electron-electron
interaction, this suppression of the current can be partially lifted because the transport
channel provided by the doubly occupied bridge state shifts into the
conductance window.
In the present work we have considered a model with a single electronic state at the
molecular bridge. It should be noted, however, that the ML-MCTDH-SQR method can
also be applied to more complex models with various electronic states and
interacting electrons in the leads. In addition to transport problems it may also be used to
describe photoinduced dynamics in molecular adsorbates at metal or semiconductor
surfaces including a proper description of correlation effects.
Another important phenomenon in correlated electron transport is the Kondo
effect.\cite{Hewson93,Wiel00,lia02:725} The application of the methodology to simulate transport in the
Kondo regime, in particular for very small voltage, requires special
discretization techniques (e.g., the scheme pioneered by Wilson\cite{Wilson75}) and can
be facilitated by the use of correlated initial states. This will be considered in future work.
\section*{Acknowledgments}
This work has been supported by the National Science Foundation
CHE-1012479 (HW), the
German-Israeli Foundation for Scientific Development (GIF) (MT), and the
Deutsche Forschungsgemeinschaft (DFG) through SFB 953 and the Cluster of Excellence `Engineering of Advanced
Materials' (MT), and used resources
of the National Energy Research Scientific Computing Center, which is supported by the
Office of Science of the U.S. Department of Energy under Contract
No. DE-AC02-05CH11231.
MT gratefully acknowledges the hospitality of the IAS at the Hebrew University
Jerusalem within the workshop on molecular electronics.
\pagebreak
\section{Introduction}
RadioAstron is an international collaborative mission with a 10-m radio telescope onboard the SPEKTR-R spacecraft launched in July 2011 \cite[]{Kovalev2012,Sokolovsky2013, Kovalev2014}. The~space telescope observes at wavelengths of 92\,cm (324\,MHz, P-band), 18\,cm (1.7\,GHz, L-band), 6\,cm (4.8\,GHz, C-band), and 1.3\,cm (22.2\,GHz, K-band), forming space very long baseline interferometry (space VLBI, or SVLBI) together with ground-based radio telescopes and providing the highest angular resolution achieved so far (e.g., $\sim$7\,$\mu$as at the K-band).
One of the Key Science Projects (KSPs) of the space mission is to investigate
the extremely high brightness temperature of active galactic nuclei (AGNs) with the unprecedented long baselines of up to 28 Earth diameters, which will significantly improve our understanding of the mechanisms of AGN radio emission close to the central engine. Coordinated ground-based flux density monitoring of the RadioAstron targets at centimeter wavelengths is essential to estimate the effect of interstellar scintillation (ISS) on the SVLBI visibilities. For an ISS `screen' within a few hundred parsecs of the Sun, the characteristic scale of the scintillation pattern is comparable to the length of the RadioAstron VLBI baseline.
In order to measure the magnitudes and timescales of ISS of blazars observed by RadioAstron, in 2014 we started a RadioAstron target-triggered flux monitoring program with the Effelsberg 100-m radio telescope at 4.8\,GHz. The~single dish ISS monitoring and space VLBI with RadioAstron offer independent probes of the structure of blazar `cores' at microarcsecond angular scales. Direct measurements of the sizes of scintillating sources with RadioAstron help to determine properties of the interstellar scattering screens, such as their distance and scattering strength. In turn, the focusing and defocusing effects of ISS may have a significant influence on the measured space VLBI visibilities, and it is essential to understand these effects for a complete analysis of the RadioAstron data. Thus,~these two probes of microarcsecond-scale structure are highly complementary.
\textls[-15]{In the cm regime, rapid flux density variations at timescales of a day or less, known as intra-day variability (IDV, \cite[][]{Witzel1986, Heeschen1987}), are frequently observed in compact flat-spectrum radio sources. IDV is present in a significant fraction ($\sim$20--50\%) of flat-spectrum radio sources (e.g., quasars and BL Lac objects) \cite{Quirrenbach1992, Lovell2008}. The~physical mechanism responsible for such variability remains open for debate, with models involving both source-extrinsic and -intrinsic explanations. In many cases the IDV phenomenon is explained by scattering of radio waves by turbulent ionized structures in the Milky Way (e.g.,~\citep[][]{Kedziora-Chudczer1997,Jauncey2001, Dennett-Thorpe2002, Bignall2003, Bignall2006, Liu2013}). On the other hand, some observational evidence, such as large polarization swing, frequency dependence of IDV amplitude, multi-frequency correlation/anti-correlation, and intermittent IDV with structural change, etc., demands a source-intrinsic origin (e.g., \citep[][]{Qian2004, Krichbaum2002, Wagner1996, Liu2015a}).}
However, if interpreted as being source-intrinsic, the size of the IDV emitting region---through causality and light-travel time arguments---should be less than tens of $\mu$as. This leads to very high apparent brightness temperatures ($T_B$) that are near or, in many cases, several orders of magnitude in excess of the inverse-Compton (IC) limit of $10^{12}$~K \citep{Kellermann1969}. Thanks to the unprecedented long baseline of the RadioAstron space VLBI, we are able to study the $T_B$ of blazar cores up to 10$^{15}$--10$^{16}$ K. In fact, the~RadioAstron AGN survey program has already discovered $T_B$ well in excess of the inverse-Compton limit in some sources, e.g.,~$T_B\sim10^{14}$ K for 3C273 \cite{Kovalev2016}. The~cause of the excess---high Doppler boosting or a violation of the inverse-Compton limit---remains essentially undetermined. A recent study suggests it may arise from refractive substructure introduced by scattering in the ISM \cite{Johnson2016}.
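The order of magnitude of such variability brightness temperatures follows from the textbook Rayleigh--Jeans relation combined with the causality size limit. The sketch below neglects redshift and Doppler corrections, and the numbers in the example (1 Jy, one day, 1 Gpc) are illustrative assumptions, not values from this program:

```python
import math

C = 2.998e8      # speed of light [m/s]
K_B = 1.381e-23  # Boltzmann constant [J/K]
JY = 1.0e-26     # 1 Jansky [W m^-2 Hz^-1]


def variability_tb(S_jy, nu_hz, dt_s, D_m):
    """Lower bound on the apparent brightness temperature implied by
    variability: causality limits the source size to l <~ c*dt, i.e. an
    angular size theta ~ c*dt/D for a source at distance D.  Inserting
    the solid angle Omega = pi*theta**2/4 into the Rayleigh-Jeans law
    T_B = S*c**2 / (2*k*nu**2*Omega) gives T_B in Kelvin.
    Redshift and relativistic beaming are neglected."""
    theta = C * dt_s / D_m
    omega = math.pi * theta**2 / 4.0
    return S_jy * JY * C**2 / (2.0 * K_B * nu_hz**2 * omega)
```

A 1 Jy source varying on a one-day timescale at 4.8 GHz at a distance of order 1 Gpc yields $T_B\sim10^{18}$ K, far above the IC limit, which is why a source-intrinsic interpretation of IDV is so demanding.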
In this paper, we present a statistical analysis of the IDV of our sample as the first results of the program. We test the dependence of IDV on various source properties and discuss the implications on the origin of IDV.
\section{Sample Selection, Observations and Data Reduction}
So far, five observing sessions have been carried out at 4.8\,GHz. For each session, the main targets were chosen from the RadioAstron block schedule\footnote{\url{http://www.asc.rssi.ru/radioastron/schedule/sched.html}}. In order to enable high-precision flux density measurements, a nearby non-variable calibrator was selected for each target based on the result of an IDV survey with the Urumqi 25-m radio telescope (\cite[][]{Liu2018}, in preparation) as well as a variability survey conducted with the Very Large Array (MASIV survey,~\cite{Lovell2003}). Note that both surveys were performed at frequencies close to 4.8\,GHz. A few sources of particular interest were occasionally added to the list as well. With this procedure of source selection, the final source number in each session is $\sim$40, and the total number of sources observed in the whole campaign is 112. However, for the statistical analysis in the present work, we remove five~steep-spectrum calibrators (0836 + 710, 0951 + 699, 3C286, 3C48 and NGC7027) which were observed in all epochs only for calibration purposes. This leaves us with a sample of 107 sources.
Since all the sources are point-like to the beam of the Effelsberg radio telescope at 4.8\,GHz, the observations were performed in cross-scan mode, where the antenna beam pattern was driven orthogonally over the source position, stacking four sub-scans for reaching the desired sensitivity. A duty cycle consisted of the observation of the target sources as well as their nearby non-variable secondary calibrators. The~average duty cycle is 0.36 h$^{-1}$ in the campaign, which translates into an average time sampling of $\sim$2.8 h for each source.
The~basic observing information is summarized in Table~\ref{tab:obs_info}. In column 1 to 7 we report the epoch designation, starting and ending date, duration, number of sources observed, mean number of flux density measurements per hour, average number of measurements per hour for each source (duty cycle, which represents the shortest time scale on which we can search for IDV), and the average raw modulation index of calibrators which characterizes the systematic uncertainty (see definition in Section~\ref{sec:def_m}), respectively.
\begin{table}[H]
\centering
\caption{Basic information for the five epochs of observing sessions.}
\label{tab:obs_info}
\scalebox{0.85}[0.85]{
\begin{tabular}{C{1.5cm}C{2cm}C{2cm}C{3cm}C{3cm}C{2cm}C{1.5cm} }
\toprule
\textbf{Epoch} & \textbf{Date} & \textbf{Duration [\boldmath {$h$}]} & \textbf{Number of Observed Sources} & \textbf{Average Sampling [\boldmath {$h^{-1}$]}} & \textbf{Duty Cycle [\boldmath {$h^{-1}$}]} & \boldmath {$m_c$ \textbf{ [\boldmath {\%}]}} \\
\midrule
A & 18.07--20.07.2014 & 62.0 & 37 & 14.8 & 0.40 & 0.50 \\
B & 12.09--15.09.2014 & 66.6 & 45 & 15.9 & 0.35 & 0.40 \\
C & 31.07--06.08.2015 & 73.6 & 42 & 14.3 & 0.34 & 0.40 \\
D & 17.12--21.12.2015 & 82.4 & 39 & 14.5 & 0.37 & 0.35 \\
E & 20.12--24.12.2016 & 84.4 & 41 & 14.0 & 0.34 & 0.60 \\
\bottomrule
\end{tabular}}
\end{table}
Frequent switching between targets and calibrators allows us to monitor the antenna gain variations with elevation and time, thus improving the subsequent flux density calibration. The~data calibration was done in the well-established standard manner, and enabled us to achieve a high precision of flux density measurements (see, e.g.,~\cite[][]{Kraus2003}). As the first step of the data calibration, a Gaussian profile is fitted to each sub-scan. The~amplitude of the Gaussian is a measure of the source strength, expressed in units of antenna temperature. After applying a correction for small pointing offsets, the amplitudes of all individual sub-scans in one cross-scan are averaged. Subsequently we correct the measurements for the elevation-dependent gain of the antenna and systematic time-dependent effects, using the secondary calibrators close to target sources. Finally, the measured antenna temperature is converted to absolute flux density by a scaling factor determined by utilizing the frequently observed primary calibrators 3C286, 3C48, and NGC7027 \citep{Baars1977, Ott1994}.
The~overall error on a single measurement is derived from the formal statistical uncertainty and error propagation in each step of data calibration. The~resulting uncertainties usually lie in the range of 0.3--0.7\% of total flux density. The~result of this calibration procedure can be evaluated by measuring the residual scatter, $m_c$, of the calibrators (see definition in Section \ref{sec:def_m}). For most of the observing sessions, the residual scatter is 0.3--0.5\% of total flux density.
\textls[-15]{With the data calibration procedure described above, the lightcurves of all the observed sources were obtained. In Figure~\ref{fig:lc_example} we present an example of lightcurve for the target source 1125 + 596 observed in epoch D. The~result of calibrator source 0836 + 710 in the same epoch is superimposed, to demonstrate the stability of the observing system and accuracy of data calibration.}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{lc_example}
\caption{The~intra-day variability (IDV) lightcurve (normalized flux density $S/\langle S \rangle$ versus time) of 1125 + 596 (blue line), and~the calibrator 0836 + 710 (red line) in epoch D.}
\label{fig:lc_example}
\end{figure}
\section{Variability Parameters}
\label{sec:var_pars}
\textls[-15]{A number of parameters are defined to characterize the IDV. For each light curve the `raw' modulation index $m$, the `intrinsic' modulation index $\overline{m}$, the $\chi^2$, and the reduced $\chi^2$ are derived. Here we give a brief definition and description of these quantities; the reader is referred to~\cite[][]{Fuhrmann2008, Richards2011} for more details.}
\subsection{Raw Modulation Index} \label{sec:def_m}
The~raw modulation index is related to the standard deviation of flux density $\sigma$ and the mean value of flux density $\langle S \rangle$ in the time series by
\begin{equation}
\label{eq:m}
m[\%]=100\cdot\frac{\sigma}{\langle S \rangle}
\end{equation}
and yields a measure for the strength of observed variations. The~average value of raw modulation index for all the observed calibrators ($m_c$) usually represents the calibration accuracy and characterizes the systematic uncertainty.
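Equation~(\ref{eq:m}) translates directly into code; the sketch below uses the sample standard deviation (for well-sampled lightcurves the choice between sample and population estimator makes little difference):

```python
import statistics


def raw_modulation_index(flux):
    """Raw modulation index m [%] = 100 * sigma / <S> (Eq. (1)).
    Uses the sample standard deviation of the flux-density time series."""
    return 100.0 * statistics.stdev(flux) / statistics.mean(flux)
```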
\subsection{Intrinsic Modulation Index} \label{sec:def_im}
The~intrinsic modulation index \cite[][]{Richards2011} is an alternative estimator to quantify the variability that would be observed in the absence of measurement errors with ideal time sampling. Note that this use of `intrinsic' is not referring to source-intrinsic variability, but includes any intrinsic and extrinsic (ISS-induced) variations in the received flux density.
With the assumption that the `true' flux densities for each source are normally distributed with mean $S_0$, standard deviation $\sigma_0$, and intrinsic modulation index $\overline{m}=\sigma_0/S_0$, the probability density for the true flux density $S_t$ is
\begin{equation}
p(S_t, S_0, \sigma_0)=\frac{1}{\sigma_0\sqrt{2\pi}} \exp \left[-\frac{(S_t-S_0)^2}{2\sigma_0^2}\right] \,.
\end{equation}
Furthermore, we assume that the observation process for the $j$th data point adds normally distributed error with mean $S_t$ and standard deviation $\sigma_j$. Then the likelihood for a single observation is given by
\vspace{30pt}
\begin{equation}
\ell_j(S_0, \sigma_0) = \int_{\,\rm all \,\, S_t} \!\!\!\!\!\!\!\!dS_t \frac{\exp \left[
-\frac{(S_t-S_j)^2}{2\sigma_j^2}\right] }{\sigma_j \sqrt{2\pi}} \frac{\exp \left[
-\frac{(S_t-S_0)^2}{2\sigma_0^2}\right]}{\sigma_0 \sqrt{2\pi}} \,,
\end{equation}
which after combining $j=1,...N$ measurements and substituting $\overline{m}S_0=\sigma_0$, gives
\begin{equation} \label{eq:likelihood}
\mathcal{L}(S_0, \overline{m}) = S_0\left(\prod_{j=1}^N{\frac {1}{\sqrt{ 2\pi \left(
\overline{m}^2{S}_0^2+\sigma_j^2 \right)}}} \right) \exp \left(-\frac {1}{2} \sum
_{j=1}^N{\frac {\left( S_j-S_0 \right)^2}{\overline{m}^2{S}_0^2+\sigma_j^2}}\right) \,.
\end{equation}
By maximizing the joint likelihood given by Equation~(\ref{eq:likelihood}), we find our estimates of $S_0$ and $\overline{m}$.
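A minimal numerical version of this estimate is a brute-force scan of the joint likelihood of Equation~(\ref{eq:likelihood}); the grid ranges below are illustrative assumptions, and no uncertainty contours are computed:

```python
import math


def log_likelihood(S0, mbar, S, sigma):
    """Logarithm of Eq. (4) for flux measurements S with uncertainties sigma."""
    ll = math.log(S0)
    for Sj, sj in zip(S, sigma):
        var = mbar**2 * S0**2 + sj**2
        ll += -0.5 * math.log(2.0 * math.pi * var) - (Sj - S0)**2 / (2.0 * var)
    return ll


def fit_intrinsic_m(S, sigma, n_grid=120):
    """Grid maximization of L(S0, mbar): S0 is scanned within +/-10% of the
    sample mean and mbar from 0 to 10% (assumed illustrative ranges).
    Returns the most likely (S0, mbar)."""
    s_mean = sum(S) / len(S)
    best = (s_mean, 0.0, -math.inf)
    for i in range(n_grid):
        S0 = s_mean * (0.9 + 0.2 * i / (n_grid - 1))
        for j in range(n_grid):
            mbar = 0.10 * j / (n_grid - 1)
            ll = log_likelihood(S0, mbar, S, sigma)
            if ll > best[2]:
                best = (S0, mbar, ll)
    return best[0], best[1]
```

For a synthetic lightcurve with 4\% rms intrinsic scatter and 0.5\% measurement errors, the recovered $\overline{m}$ is close to 4\%, i.e., the estimator correctly removes the (here small) contribution of the measurement noise.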
The~maximum-likelihood approach we applied assumes that the flux densities from a source are normally distributed. For many sources, this is a good description of the data. As an example, in the left panel of Figure~\ref{fig:mle_example}, we plot the histogram of epoch D data set flux densities from 1125 + 596 (for which the IDV curve is shown in Figure~\ref{fig:lc_example}). It is clear that the histogram approximately forms a Gaussian profile. In the right panel of Figure~\ref{fig:mle_example}, we plot the most likely values and the $1\sigma$, $2\sigma$, and $3\sigma$ isolikelihood contours for the same source. The~contours were computed to contain 68.26\%, 95.45\%, and 99.73\% of the volume beneath the likelihood surface. In this way, we obtained the most likely values of $\overline{m}$ and $S_0$, as well as their $1\sigma$ uncertainties. We note that a rigorous estimate of the uncertainty in each individual $\overline{m}$ is essential for evaluating the significance of differences in $\overline{m}$ between populations, as will be demonstrated in Section~\ref{sec:pop_comp}.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{hist_example}
\includegraphics[width=0.45\textwidth]{MLE_lc_1125+596_s4}
\caption{Maximum likelihood estimation for blazar 1125 + 596 observed in epoch D. (\textbf{Left}) Distribution of measured flux density. (\textbf{Right}) 1$\sigma$, 2$\sigma$, and 3$\sigma$ contours of the joint likelihood $\mathcal{L}(\overline{m}, S_0)$.}
\label{fig:mle_example}
\end{figure}
\subsection{\texorpdfstring{$\chi^2$}{chi2} and Reduced \texorpdfstring{$\chi^2$}{chi2_r}} \label{sec:def_chi}
Finally, as a criterion to identify the presence of variability, the null-hypothesis of a constant function is examined via a $\chi^2$-test
\begin{equation}
\chi^2 = \sum _{ j=1 }^{ N }{ { \left( \frac {S_j - \left< S \right> }{ \sigma_j }
\right) }^{ 2 } }
\end{equation}
and the reduced value of $\chi^2$
\begin{equation} \chi_r^2 = \frac { 1 }{ N-1 }\sum _{ j=1 }^{ N }{ { \left( \frac {S_j - \left< S \right> }{ \sigma_j } \right) }^{ 2 } } \end{equation}
A source is classified as variable if the $\chi^2$-test gives a probability of $<0.01\%$ for the assumption of constant flux density (99.99\% significance level for variability).
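The test can be sketched as follows; for simplicity the closed-form survival function for even degrees of freedom is used, and odd degrees of freedom are conservatively bracketed with the next even value (an implementation choice of this sketch, not of the analysis itself):

```python
import math


def chi2_stats(S, sigma):
    """Chi-square against the constant-flux hypothesis (Eqs. (9)-(10)):
    returns (chi2, reduced chi2) with N-1 degrees of freedom."""
    mean = sum(S) / len(S)
    chi2 = sum(((Sj - mean) / sj)**2 for Sj, sj in zip(S, sigma))
    return chi2, chi2 / (len(S) - 1)


def chi2_sf_even(x, dof):
    """P(chi^2 > x) for EVEN dof: exp(-x/2) * sum_{n<dof/2} (x/2)^n / n!."""
    return math.exp(-x / 2.0) * sum((x / 2.0)**n / math.factorial(n)
                                    for n in range(dof // 2))


def is_variable(S, sigma, p_thresh=1e-4):
    """Classify as variable if the constant-flux probability is < 0.01%."""
    chi2, _ = chi2_stats(S, sigma)
    dof = len(S) - 1
    dof_even = dof if dof % 2 == 0 else dof + 1  # conservative for odd dof
    return chi2_sf_even(chi2, dof_even) < p_thresh
```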
\section{Statistical Results} \label{sec:stat_overall}
Table~\ref{tab:all_result} summarizes the basic properties and statistical results of sources in the sample. Columns 1 and 2 give the source name and epoch designation. Observation results are presented in columns 3, 4, and 5. Column 6 gives the variability classification. A `+' is given if the source is identified as variable. Basic source properties, i.e., galactic latitude, spectral index, redshift, VLBI core size at 5\,GHz, and core-dominance, are shown in columns 7--11, respectively. Column 12 lists the $\gamma$-ray loudness. A `+' is marked if the source is included in the Fermi Large Area Telescope Third Source Catalog \cite[3FGL,][]{Acero2015}. The galactic latitudes and redshifts were taken from the NASA/IPAC Extragalactic Database (NED\footnote{\url{https://ned.ipac.caltech.edu/}}) and spectral indexes (defined by $S\propto\nu^{\alpha}$, where $S$ is the flux density and $\nu$ the observing frequency) from the Radio Fundamental Catalog\footnote{\url{http://astrogeo.org/rfc/}}. The~VLBI core sizes were extracted from~\citet{Pushkarev2015}. The~core-dominance, defined by $f_{\rm c}=S_{\rm core}/S_{\rm total}$, was derived by use of data from the Radio Fundamental Catalog.
In the following we overview the statistical properties of the sample. We present the distributions of source flux density, spectral index, galactic latitude, and intrinsic modulation index. For the purpose of population study, the distribution of $\overline{m}$ is analytically modeled.
\subsection{Variability Classification}
To verify the presence of IDV, a $\chi^2$-test is applied to each observed lightcurve. Of the 107 targets observed, 31 sources are found to exhibit IDV in at least one observing epoch, while the rest of the sample does not reveal evident IDV in any epoch. This leads to an IDV detection rate of $\sim$30\% for the current sample, comparable to earlier studies~\cite{Quirrenbach1992, Lovell2008}.
In order to minimize the chance of misclassification, a cross-check based on the intrinsic modulation index is performed as an alternative. If the maximum-likelihood $\overline{m}$ is less than $3\sigma$ away from $\overline{m}$ = 0, we consider that significant variability cannot be established in the source. As a result, the new approach classifies three extra sources as variable, besides the 31 IDV sources previously identified by $\chi^2$ tests. This demonstrates that the variability classifications obtained with the two approaches are mostly consistent with each other. In Figure~\ref{fig:idv_class} we show a scatter plot of the intrinsic modulation index $\overline{m}$ of all sources against their mean flux density, using different symbols for variables and non-variables identified with both approaches. A dashed horizontal line at $\overline{m}$ = 0.75\% roughly separates the variable from non-variable classifications, confirming the result of the $\chi^2$-test.
\subsection{Sample Properties}
\textls[-15]{An overview of the sample properties is presented in Figure~\ref{fig:sample_dist}. In each panel, IDV sources are plotted in red and non-IDVs in blue. The~distribution of flux density is plotted in panel (a), showing a bimodal profile for non-IDVs. It is obvious that IDV sources are mostly clustered at the lower flux density peak, while the higher flux density peak is predominantly occupied by the non-IDV population. Panel (b) shows the unimodal distribution of spectral index. The~variables are well distributed at $\alpha > -0.1$, indicating a deviation from the non-variable population. In panel (c) the source galactic latitude shows a distribution peaked at $|b|\sim35^{\circ}$. The~occurrence of IDV and non-IDV sources reveals a similar trend, and no clear difference can be visually observed between these two populations. We note that sources with $|b|<10^{\circ}$ are very rare in our sample. The~occurrence of IDV/non-IDV among $\gamma$-ray loud and $\gamma$-ray quiet subsamples is compared in panel (d). Of the 63 sources with a GeV detection by Fermi, 25~exhibit IDV, indicating that $\sim$40\% of the $\gamma$-ray loud sources are variable at cm-wavelengths. By contrast, the ratio is as low as $\sim$14\% for $\gamma$-ray quiet sources, indicating a higher occurrence rate of IDV in the $\gamma$-ray loud population.}
\begin{figure}[H]
\centering
\includegraphics[width=0.55\textwidth]{im_flux}
\caption{Intrinsic modulation index $\overline{m}$ plotted against mean flux density. Sources classified as variables are plotted in red while non-variables are in blue. Classification with $\chi^2$-test ($\chi^2$) plotted as circles while maximum likelihood estimation (MLE) is shown as an error bar.}
\label{fig:idv_class}
\end{figure}
\unskip
\begin{figure}[H]
\centering
\includegraphics[width=0.96\textwidth]{dist_all}
\caption{Sample properties. Panels (\textbf{a}--\textbf{d}) present the distribution of flux density, spectral index, galactic latitude, and $\gamma$-ray loudness, respectively. In each panel, IDV sources are plotted in red and non-IDVs in blue.}
\label{fig:sample_dist}
\end{figure}
\subsection{Intrinsic Modulation Index \texorpdfstring{$\overline{m}$}{m}}
\label{sec:stat_im}
The~probability density of $\overline{m}$ for our monitoring sample is plotted in Figure~\ref{fig:dist_im}. As in \cite{Richards2011, Richards2014}, we~use a monoparametric exponential family of distributions
\begin{equation}
\label{eq:psd_im}
f(\overline{m})d\overline{m} = \frac{1}{m_0}\,\exp(-\frac {\overline{m}}{m_0})d\overline{m}
\end{equation}
with mean $m_0$ to model the observed probability density of $\overline{m}$. The~red line represents the best fit with mean $m_0$ = 0.63\% in the form of Equation~(\ref{eq:psd_im}). The~model, which provides an excellent description of the data, will be used to characterize various subgroups of our sample, as we will see in the next section. We note that a median value of $\overline{m}$ is adopted if the source was observed in multiple epochs. The~robustness of the population comparisons presented in Section~\ref{sec:pop_comp} will be discussed in Section~\ref{sec:discuss}.
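Since the maximum-likelihood estimate of the mean of an exponential distribution is simply the sample mean, fitting Equation~(\ref{eq:psd_im}) is essentially a one-liner; the sketch below ignores the measurement uncertainties on the individual $\overline{m}$ (the full treatment is described in Section~\ref{sec:pop_method}):

```python
import math


def exp_density(m, m0):
    """Eq. (7): exponential probability density f(m) = exp(-m/m0) / m0."""
    return math.exp(-m / m0) / m0


def fit_m0(mbars):
    """MLE of m0: setting d(log L)/d m0 = -N/m0 + sum(m_i)/m0**2 = 0
    gives m0 = sample mean of the intrinsic modulation indexes."""
    return sum(mbars) / len(mbars)
```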
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{dist_im}
\caption{Probability density of the maximum-likelihood intrinsic modulation indexes $\overline{m}$. The~red dashed line represents an exponential distribution with $m_0$ = 0.63\%.}
\label{fig:dist_im}
\end{figure}
\section{Population Comparisons} \label{sec:pop_comp}
We now investigate how the variability amplitude, as quantified by $\overline{m}$, depends on source properties, i.e., flux densities, spectral indexes, $\gamma$-ray loudness, and galactic latitudes. To this end, we determine the distribution of $\overline{m}$ for various subsets of our sample with, again, a likelihood maximization method as in \cite{Richards2011}.
\textls[-15]{The~likelihood analysis requires a parent distribution for $\overline{m}$. As shown in Section~\ref{sec:stat_im}, an~exponential distribution, {as given in Equation~(\ref{eq:psd_im})}, is a qualitatively reasonable fit to the observed distribution of modulation indexes in our sample. {Under the assumption that the exponential distribution (Equation~(\ref{eq:psd_im})) is the correct underlying distribution for $\overline{m}$ in any given subset (i.e., population) of our source sample, we~can use the parameter $m_0$ as a measure of the tendency of that population to show strong IDV. To find $m_0$, we perform a likelihood analysis, as described in Section~\ref{sec:pop_method}. We~hereby use the maximum of the normalized likelihood function to determine $m_0$ for each population, or rather, we investigate the probability distribution of all possible $m_0$ values to estimate the statistical significance of differences in $m_0$ found for different~populations.}}
We also applied {$k$-sample} Anderson--Darling (A-D) tests \cite{Scholz1987} to each pair of subsamples for crosschecking. The~A-D test is a nonparametric statistical procedure testing the hypothesis that the populations from which two or more samples of data were drawn are identical. It is a modification of the Kolmogorov--Smirnov (K-S) test and gives more weight to the tails than the latter, thus~allowing a more sensitive test for {exponential-like} distributions and offering a better option for the statistics in the current study.
\subsection{Likelihood Analysis}
\label{sec:pop_method}
Here we briefly introduce the maximum-likelihood methodology used for the population studies in this work. More details can be found in Section 6.3.3 of \cite{Richards2011}, where the formalism is demonstrated in detail.
For a source $i$, the likelihood of observing a modulation index $\overline{m_i}$ with a Gaussian uncertainty $\sigma_i$ drawn from an exponential distribution with mean $m_0$ is
\begin{equation}
\begin{aligned}
\ell_i(m_0) & = \int_{\overline{m}=0}^\infty\,d\overline{m} \frac{1}{m_0} \exp\left(-\frac{\overline{m}}{m_0}\right) \frac{1}{\sigma_i\sqrt{2\pi}} \exp \left[-\frac{(\overline{m}-\overline{m_i})^2}{2\sigma_i^2}\right] \\
& = \frac{1}{m_0\sigma_i\sqrt{2\pi}} \exp \left[ -\frac{\overline{m_i}}{m_0} \left( 1-\frac{\sigma_i^2}{2m_0\overline{m_i}} \right) \right] \times
\int_{\overline{m}=0}^\infty \exp \left[ -\frac{[\overline{m}-(\overline{m_i}-\sigma_i^2/m_0)]^2}{2\sigma_i^2} \right]\,d\overline{m} \, ,
\end{aligned}
\end{equation}
where, to obtain the second expression, we have completed the square in the exponent of the integrand. The~last integral can be calculated analytically, yielding
\begin{equation}
\ell_i(m_0) = \frac{1}{2m_0} \exp \left[ -\frac{\overline{m_i}}{m_0} \left(1-\frac{\sigma_i^2}{2m_0\overline{m_i}}\right) \right] \times
\left\{1+\mathrm{erf} \left[\frac{\overline{m_i}}{\sigma_i\sqrt{2}} \left(1-\frac{\sigma_i^2}{m_0\overline{m_i}}\right) \right] \right\} \, .
\end{equation}
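As a sanity check, the closed-form expression above can be compared against direct numerical integration of the defining integral. The following Python sketch (the numerical values are illustrative, not taken from the sample) evaluates both forms of $\ell_i(m_0)$:

```python
import math

def ell_analytic(m_i, sigma_i, m0):
    """Closed-form per-source likelihood (the erf expression above)."""
    pref = 1.0 / (2.0 * m0)
    expo = math.exp(-(m_i / m0) * (1.0 - sigma_i**2 / (2.0 * m0 * m_i)))
    arg = (m_i / (sigma_i * math.sqrt(2.0))) * (1.0 - sigma_i**2 / (m0 * m_i))
    return pref * expo * (1.0 + math.erf(arg))

def ell_numeric(m_i, sigma_i, m0, n=120000, upper=6.0):
    """Brute-force trapezoidal integration of the defining integral."""
    h = upper / n
    total = 0.0
    for k in range(n + 1):
        m = k * h
        f = (math.exp(-m / m0) / m0) * \
            math.exp(-(m - m_i)**2 / (2.0 * sigma_i**2)) / \
            (sigma_i * math.sqrt(2.0 * math.pi))
        total += f if 0 < k < n else 0.5 * f
    return total * h

# illustrative values: m_i = 0.8%, sigma_i = 0.3%, m0 = 0.63% (in percent)
print(ell_analytic(0.8, 0.3, 0.63))
print(ell_numeric(0.8, 0.3, 0.63))
```

The two evaluations agree to high precision, confirming the completion of the square carried out above.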
The~likelihood of $N$ observations of this type is
\begin{equation} \label{eq:pop_likelihood}
\mathcal{L}(m_0) = \prod_{i=1}^N \ell_i(m_0).
\end{equation}
{
The~probability density function (PDF) of $m_0$ is the normalization of $\mathcal{L}(m_0)$,
\begin{equation} \label{eq:pop_pdf}
\mathrm{pdf}(m_0)=\frac{\mathcal{L}(m_0)}{\int_{0}^{\infty}\mathcal{L}(m_0)\,d m_0} \, .
\end{equation}
From the maximization of Equation~(\ref{eq:pop_pdf}) we obtain the maximum-likelihood value of $m_0$. The~statistical uncertainty (1$\sigma$ error) on this value can also be obtained by locating the isolikelihood $m_0$-values $m_{01}$ and $m_{02}$ for which
\begin{equation}
\mathrm{pdf}(m_{01})=\mathrm{pdf}(m_{02})
\end{equation}
and
\begin{equation} \label{eq:conf_interval}
\frac{\int_{m_{01}}^{m_{02}}\mathrm{pdf}(m_0)dm_0}{\int_{0}^{\infty}\mathrm{pdf}(m_0)dm_0}=0.6826.
\end{equation}
The~confidence intervals are derived in a similar way by substituting the right-hand side of Equation~(\ref{eq:conf_interval}) by, e.g., 0.9545 and 0.9973 for the cases of 2$\sigma$ and 3$\sigma$, respectively. We note that the 1$\sigma$ errors and confidence intervals for the difference of $m_0$ are calculated in the same way.
}\mdseries
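The full procedure of multiplying the per-source likelihoods, normalizing to a probability density, and extracting the 68.26\% isolikelihood interval can be sketched in Python as follows. The toy data and the grid spacing below are illustrative assumptions, not values from this paper:

```python
import math

def ell(m_i, s_i, m0):
    """Closed-form per-source likelihood (erf expression above)."""
    e = math.exp(-(m_i / m0) * (1.0 - s_i**2 / (2.0 * m0 * m_i)))
    a = (m_i / (s_i * math.sqrt(2.0))) * (1.0 - s_i**2 / (m0 * m_i))
    return e * (1.0 + math.erf(a)) / (2.0 * m0)

def m0_posterior(data, grid):
    """Normalized pdf(m0) on a grid; data = [(m_i, sigma_i), ...]."""
    logL = [sum(math.log(ell(m, s, m0)) for m, s in data) for m0 in grid]
    peak = max(logL)
    pdf = [math.exp(v - peak) for v in logL]   # subtract peak to avoid underflow
    h = grid[1] - grid[0]
    norm = sum(pdf) * h                        # simple Riemann normalization
    return [p / norm for p in pdf]

def interval(grid, pdf, frac=0.6826):
    """Isolikelihood (highest-density) interval containing `frac` of the pdf."""
    h = grid[1] - grid[0]
    order = sorted(range(len(pdf)), key=lambda i: -pdf[i])
    acc, chosen = 0.0, []
    for i in order:
        chosen.append(i)
        acc += pdf[i] * h
        if acc >= frac:
            break
    return grid[min(chosen)], grid[max(chosen)]

# toy data: intrinsic modulation indexes (%) with Gaussian errors
data = [(0.5, 0.1), (1.2, 0.2), (0.8, 0.15), (0.3, 0.1), (1.5, 0.3)]
grid = [0.05 + 0.005 * k for k in range(600)]  # m0 from 0.05% to ~3%
pdf = m0_posterior(data, grid)
m0_best = grid[max(range(len(pdf)), key=lambda i: pdf[i])]
lo, hi = interval(grid, pdf)
print(m0_best, lo, hi)
```

For a unimodal posterior, collecting grid points in order of decreasing density until the requested fraction is enclosed reproduces the isolikelihood construction of Equation~(\ref{eq:conf_interval}).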
With the formalism introduced above, we are able to examine whether the intrinsic modulation index $\overline{m}$ correlates with the properties of the sources in our sample. In the following sections, we will study the distributions of $\overline{m}$-values in subgroups of our monitoring sample defined by source properties, i.e., flux densities, spectral indexes, $\gamma$-ray loudness, and Galactic latitudes.
A summary of population comparisons between various subsamples is tabulated in Table~\ref{tab:pop_result}. For each subsample, in column 1 we list the criteria used for subsample division; in column 2 we present the number of sources in the subsample; in column 3 we give the most likely value of $m_0$ obtained by maximizing the likelihood function given in Equation~(\ref{eq:pop_pdf}), along with the associated 1$\sigma$ uncertainties. For each pair of subsamples compared, in column 4 we report the most likely value and the corresponding 1$\sigma$ uncertainties for the difference in $m_0$; in column 5 we list the significance of the difference. Finally, the $p$-value and significance as estimated from the A-D tests are reported in columns 6 and 7, respectively. As shown in the table, the results from the two statistical approaches are highly consistent with each other.
\begin{table}[H]
\centering
\caption{Results of population comparisons.}
\label{tab:pop_result}
\begin{tabular}{ccccrcc}
\toprule
& & \multicolumn{3}{c}{\textbf{Likelihood Analysis}} & \multicolumn{2}{c}{\textbf{Anderson--Darling Test}} \\
\midrule
\textbf{Subsample} & \textbf{Source Num.} & \boldmath{$m_0$}\textbf{ [\boldmath{\%}]} & \boldmath{$\Delta\,m_0$} \textbf{[\boldmath{\%}] } & \textbf{Significance} & \boldmath{$p$} & \textbf{Significance}\\
\midrule
$S_{4.8}<$\,1\,Jy &53 & $1.24_{-0.165}^{+0.198}$ & \multirow{2}{1.6cm}{$+0.78_{-0.187}^{+0.205}$} & \multirow{2}{1cm}{$5\sigma$} & \multirow{2}{1.6cm}{$1.32\times10^{-5}$} & \multirow{2}{0.4cm}{$4\sigma$} \\
$S_{4.8}\geq$\,1\,Jy &54 & $0.46_{-0.064}^{+0.077}$ & & \\
\midrule
$\alpha<$\,$-$0.1 &51 & $0.59_{-0.084}^{+0.102}$ & \multirow{2}{1.6cm}{$-0.47_{-0.189}^{+0.172}$} & \multirow{2}{1cm}{$3\sigma$} & \multirow{2}{1.6cm}{$5.23\times10^{-4}$} & \multirow{2}{0.4cm}{$3\sigma$} \\
$\alpha\geq$\,$-$0.1 &56 & $1.07_{-0.139}^{+0.167}$ & & \\
\midrule
$\gamma$-ray quiet &44 & $0.43_{-0.067}^{+0.082}$ & \multirow{2}{1.6cm}{$-0.69_{-0.179}^{+0.162}$} & \multirow{2}{1cm}{$4\sigma$} & \multirow{2}{1.6cm}{$4.37\times10^{-4}$} & \multirow{2}{0.4cm}{$3\sigma$} \\
$\gamma$-ray loud &63 & $1.13_{-0.139}^{+0.164}$ & & \\
\midrule
$|b|<\,20^{\circ}$ &34 & $0.98_{-0.163}^{+0.206}$ & \multirow{2}{1.6cm}{$+0.18_{-0.194}^{+0.229}$} & \multirow{2}{1.2cm}{<1$\sigma$} & \multirow{2}{1.6cm}{$9.67\times10^{-1}$} & \multirow{2}{0.9cm}{<1$\sigma$} \\
$|b|\geq\,20^{\circ}$ &73 & $0.79_{-0.091}^{+0.107}$ & & \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Flux Density and Redshift} \label{sec:pop_flux}
\textls[-15]{In order to exhibit ISS, a source must contain a component that is sufficiently compact, with~angular diameter comparable to or smaller than the first Fresnel zone of the scattering screen, i.e., on~the order of tens of $\mu$as near a few GHz~(e.g.,~\cite[][]{Bignall2006, Bignall2004, Dennett-Thorpe2000}).}
We start by testing the dependence of IDV on source flux density, which is closely tied to the source angular size if the brightness temperature is inverse-Compton limited. A population study is performed on the subsets defined by whether the source flux density is higher or lower than 1\,Jy. The~results of this test are displayed in Figure \ref{fig:pop_flux}. In the left panel, it is obvious that the curves for the two subsamples are not consistent with each other -- weaker sources have, on average, higher IDV amplitudes. The~significance of this result is verified by the right panel of Figure \ref{fig:pop_flux}, where we plot the probability density of the difference between the $m_0$ of the two subsets {(which is formally equal to the cross-correlation of their individual distributions).
With the formalism introduced in Section~\ref{sec:pop_method},} the most likely difference is 0.84 percentage points, which is more than 5$\sigma$ away from zero.
Our~result can be understood in terms of source angular scale. The~angular size of a source can be modeled as a function of its flux density, $S$, and the brightness temperature $T_B$ in source frame, as follows:
\begin{equation}
\label{eq:theta}
\theta=\sqrt{\dfrac{\lambda^2(1+z)S}{2\pi k \delta T_B}},
\end{equation}
where $\lambda$ is the observing wavelength, $z$ is the source redshift, $k$ is the Boltzmann constant, and $\delta$ is the Doppler boosting factor. Therefore, if these sources are limited in brightness temperature, either due to the inverse-Compton catastrophe \cite{Kellermann1969} or energy equipartition (between particles and the magnetic fields \cite{Readhead1994}), the source angular size scales as $\theta\propto S^{0.5}$.
In that case, the brighter sources have larger angular sizes, and thus suppress the ISS. In other words, our finding indicates a source compactness related IDV.
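Equation~(\ref{eq:theta}) is straightforward to evaluate. A small Python sketch (the brightness temperature and Doppler factor below are illustrative assumptions) confirms that an inverse-Compton-limited 1\,Jy source near 5\,GHz has an angular size of order tens of $\mu$as, and that $\theta$ scales as $S^{0.5}$:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant [J/K]
JY = 1.0e-26              # 1 Jansky [W m^-2 Hz^-1]
RAD_TO_MUAS = 180.0 / math.pi * 3600.0 * 1.0e6   # radians -> micro-arcsec

def angular_size_muas(S_jy, z, nu_hz, T_B, delta):
    """Equation (eq:theta): angular size of a T_B-limited source."""
    lam = 2.99792458e8 / nu_hz
    theta = math.sqrt(lam**2 * (1.0 + z) * S_jy * JY /
                      (2.0 * math.pi * K_B * delta * T_B))
    return theta * RAD_TO_MUAS

# 1 Jy source at z = 1 observed at 4.8 GHz, with T_B = 5e11 K, delta = 10
print(angular_size_muas(1.0, 1.0, 4.8e9, 5.0e11, 10.0))  # ~90 micro-arcsec
# theta scales as sqrt(S): a 4x brighter source is twice as large
ratio = angular_size_muas(4.0, 1.0, 4.8e9, 5.0e11, 10.0) / \
        angular_size_muas(1.0, 1.0, 4.8e9, 5.0e11, 10.0)
print(ratio)  # close to 2: theta proportional to S**0.5
```

The resulting size is comparable to the Fresnel scale quoted in Section~\ref{sec:pop_flux}, which is why brighter (hence larger) sources scintillate less.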
Moreover, with the availability of VLBI data on the core size (column 10 of Table~\ref{tab:all_result}), it is possible to verify this argument directly by testing the correlation between the intrinsic modulation index and the VLBI core size at 5\,GHz. The~two-tailed, nonparametric Spearman rank correlation test gives a correlation coefficient $r_s=-0.219$ and a $p$-value of $p=6.44\times10^{-2}$. Though less significant, a negative trend of IDV strength with source angular size is implied.
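The Spearman rank tests quoted throughout this section are standard (e.g., \texttt{scipy.stats.spearmanr}); for completeness, a minimal pure-Python version -- Pearson correlation applied to average ranks -- is sketched below with toy data (not from the sample) illustrating an anticorrelation between modulation index and core size:

```python
import math

def rank(values):
    """Average ranks (ties share the mean rank), as in the Spearman test."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_r(x, y):
    """Spearman's r: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# toy example: core size (mas) vs. modulation index (%) -- anticorrelated
core_size = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40]
mod_index = [1.8, 1.2, 1.4, 0.7, 0.5, 0.3]
print(spearman_r(core_size, mod_index))  # about -0.94
```

Being rank-based, the statistic is insensitive to the exact (nonlinear) form of the $\theta$--$\overline{m}$ relation, which is why it is used here in preference to a linear Pearson test.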
Furthermore, the observed IDV may also depend on the source redshift, provided that the sample of sources is both flux density- and brightness temperature-limited \cite{Lovell2008}. In this case the redshift dependence of the angular size is $\theta\propto(1+z)^{0.5}$ due to cosmological expansion. However, we do not find a statistically significant relation between redshift and IDV amplitudes in our sample. We note that the scatter in flux density is larger than the scatter in $(1+z)$, and there is no clear relationship seen between source flux density and redshift in this sample. Moreover, a suppression of ISS was observed in the MASIV survey only for sources with $z>2.5$ \cite{Lovell2008}. Our sample has only four sources with $z>2.5$.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{pop_flux1}
\includegraphics[width=0.46\textwidth]{pop_flux2}
\caption{(\textbf{Left panel}) Probability density of $m_0$ for sources with flux density lower (red solid line, maximum-likelihood value and $1\sigma$ error $m_0=1.24^{+0.198}_{-0.165}\%$) and higher (blue solid line, maximum-likelihood value and $1\sigma$ error $m_0=0.46^{+0.077}_{-0.064}\%$) than 1\,Jy in our monitoring sample. The~dashed vertical lines locate the peaks of probability density for the two subsamples. (\textbf{Right panel}) Probability density of the difference between the $m_0$ for the two sets considered in the left panel. The~dashed vertical line shows the peak of the probability density, while the dotted vertical lines represent the 1, 2, and 3$\sigma$ confidence interval. The~peak of the distribution ($0.78^{+0.205}_{-0.187}$) is over $5\sigma$ away from zero.}
\label{fig:pop_flux}
\end{figure}
\subsection{Spectral Index} \label{sec:pop_alpha}
Early observations showed that scintillating sources tend to have flat or inverted spectra, while~the steep-spectrum radio sources do not scintillate \cite{Heeschen1984}. This can be understood by considering that the flat-spectrum sources are dominated by optically thick, synchrotron self-absorbed components with very high-brightness temperature, and thus most of their flux density is confined to the ultra-compact core region. In contrast, the steep-spectrum sources are dominated by optically thin, less compact components with lower brightness temperatures, often related to an extended VLBI jet.
To test this argument, we split the sample at $\alpha=-0.1$. This criterion roughly splits our sample into flat and inverted spectra, and produces subsamples of similar numbers of objects. Figure \ref{fig:pop_alpha} depicts the probability distributions of $m_0$ as well as the difference between $m_0$ for the two subsamples. A Spearman rank test between the intrinsic modulation index and source core-dominance (column 11 of Table~\ref{tab:all_result}) confirms the result ($r_s=0.284$ and $p=9.72\times10^{-3}$). The~finding, as anticipated, suggests that sources with inverted spectra are significantly stronger in short-term variability. It has to be noted, however, that the sources in the present sample are mostly compact, core-dominated sources with flat spectrum, unlike the classical steep-spectrum sources reported by \citet{Heeschen1984} which are dominated by their extended emission. Our findings indicate that even within the flat spectrum sources the presence of additional less compact components could in principle reduce their core-dominance, thus~reducing the scintillation.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{pop_alpha1}
\includegraphics[width=0.46\textwidth]{pop_alpha2}
\caption{{Similar} to Figure \ref{fig:pop_flux} but for sources with spectral index lower (red solid line) and higher (blue solid line) than $-$0.1. In the left panel the maximum-likelihood value and the associated 1$\sigma$ error are indicated in the legend. In the right panel the peak of the distribution ($-0.47_{-0.189}^{+0.172}$) is over 3$\sigma$ away from zero.}
\label{fig:pop_alpha}
\end{figure}
\subsection{\texorpdfstring{$\gamma$}{gamma}-Ray Loudness} \label{sec:pop_gamma}
The~source of high energy emission is believed to be compact and located close to the central engine of AGNs (e.g., \cite[][]{Kovalev2009, Pushkarev2010}). Relations between the parsec-scale radio properties and $\gamma$-rays of blazars have been intensively investigated since the Fermi $\gamma$-ray Space Telescope was launched (e.g.,~\cite[][]{Lister2009, Ramakrishnan2014, Casadio2015, Hada2016}). In this study we test, through a statistical approach, whether there is a correlation between the $\gamma$-ray loudness and the 4.8\,GHz IDV properties.
We thus divide our sample into two subsets, based on whether the source has been detected by Fermi LAT at a significance level high enough to warrant inclusion in the 3FGL catalog. As shown in Figure \ref{fig:pop_gamma}, these two subsamples reveal different properties: the $\gamma$-ray loud sources have, on average, an IDV amplitude almost a factor of four higher than the $\gamma$-ray quiet ones. The~result is very significant statistically, with the maximum-likelihood difference being 4$\sigma$ away from 0, as indicated in the right panel of Figure \ref{fig:pop_gamma}.
We further investigated the possible relation between the integrated $\gamma$-ray photon flux and the radio flux density at 4.8\,GHz. The~$\gamma$-ray fluxes have been extracted from the 3FGL catalog \citep{Acero2015} and are averaged over the entire operational time of the Fermi satellite. The~4.8\,GHz measurements are averaged over a few days and hence are most likely free of long-term variability. Our analysis showed that the two are likely correlated. In the case of photon fluxes in the range 100--300 MeV, the Spearman correlation coefficient turns out to be $r_s=0.46$ with $p=1.36\times10^{-4}$. In the case of fluxes in the range 1--100 GeV the correlation weakens, with $r_s$ being around 0.37 and a $p$-value of $3.15\times10^{-3}$. {These~numbers show that the radio flux density (which is an indicator of source compactness and brightness temperature) and $\gamma$-ray photon flux are significantly correlated.}
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{pop_gamma1}
\includegraphics[width=0.46\textwidth]{pop_gamma2}
\caption{Similar to Figure \ref{fig:pop_flux} but for $\gamma$-ray quiet (red solid line) and $\gamma$-ray loud (blue solid line) sources. In the left panel the maximum-likelihood value and the associated 1$\sigma$ error are indicated in the legend. In the right panel the peak of the distribution ($-0.69^{+0.162}_{-0.179}$) is $\sim$4$\sigma$ away from zero.}
\label{fig:pop_gamma}
\end{figure}
\subsection{Galactic Latitude} \label{sec:pop_b}
\textls[-15]{We have also investigated the dependence of the IDV on the galactic latitude. In the case of ISS, a~galactic latitude dependence of IDV is anticipated, since the diffuse interstellar medium (ISM) is mostly distributed near the Galactic plane.
A contingency test by dividing the sample into low and high galactic latitude subsamples at $|b|=20^{\circ}$ reveals that although the low galactic latitude subsample on average exhibits marginally higher IDV amplitudes than that at high galactic latitude, the two distributions of $m_0$ are rather consistent with each other statistically. The~probability density for the difference between $m_0$ for the two subsamples is consistent with zero to within 1$\sigma$ (see Figure \ref{fig:pop_b}).}
In order to further verify this result, we then compare the IDV strength with the Galactic foreground emission measure (the column density of the square of the electron density) as estimated from observations of H$\alpha$ emission (i.e., the Wisconsin H$\alpha$ Mapper Northern sky survey, WHAM \cite{Haffner2003}). The~H$\alpha$ intensity (in rayleighs) integrated over all velocities is believed to be proportional to the ISM emission measure in the line of sight, provided that the temperature of the emitting gas does not vary by a large percentage. By taking the integrated H$\alpha$ intensity from the WHAM survey nearest to each source, we are able to test the correlation between the IDV strength and the emission measure along the line of sight. We find no significant correlation between H$\alpha$ intensity and modulation index; the~Spearman correlation gives $r_s=0.180$ and $p=6.43\times10^{-2}$.
\textls[-15]{Previous studies with larger source samples suggest a significant variability dependence on Galactic latitude~(e.g.,~\cite[][]{Rickett2006, Lovell2008, Lazio2008}). A recent statistical study on AGN cores reveals that the effect of angular broadening by ISM scattering is significant only for sources at low galactic latitude (i.e.,~$|b|<10^{\circ}$)~\citep{Pushkarev2015}. Given the fact that only 12 of our sources are at low galactic latitudes $|b|<10^{\circ}$, the lack of such a dependence in the current study might be simply ascribed to this small number. }
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{pop_b1}
\includegraphics[width=0.46\textwidth]{pop_b2}
\caption{Similar to Figure \ref{fig:pop_flux} but for low (red solid line) and high (blue solid line) galactic latitude sources in our sample. In the left panel the maximum-likelihood value and the associated 1$\sigma$ error are indicated in the legend. In the right panel the peak of the distribution ($0.18^{+0.229}_{-0.194}$) is consistent with zero within $1\sigma$.}
\label{fig:pop_b}
\end{figure}
\section{Discussion} \label{sec:discuss}
\vspace{-6pt}
\subsection{Robustness of the Statistics}
A considerable portion of the sample was observed at multiple epochs. In such cases, median values of the source flux density and intrinsic modulation index are adopted for the statistical analysis presented in the current study. To evaluate a possible bias introduced by the median-value selection approach for duplicate observations, both minimum-value and maximum-value selections are tested. For~simplicity we hereafter refer to these two selection approaches as TMIN and TMAX, respectively.
\textls[-15]{The~distributions of intrinsic modulation index, for both TMIN and TMAX, show similar trends to that of the median-$\overline{m}$ selection, and can be characterized by an exponential distribution given in Equation~(\ref{eq:psd_im}), with $m_0$ = 0.62\% and 0.66\%, respectively. The~variability strengths for various subsamples are re-analyzed and compared by using the methodology demonstrated in Section~\ref{sec:pop_comp}. The~results are listed in Table~\ref{tab:pop_result_robust}. Though the estimated $m_0$ and $\Delta\,m_0$ values are systematically lower for TMIN and higher for TMAX, the significances are comparable. The~consistency between these results indicates that our previous findings on IDV dependence hold true even with `extreme' selection approaches on $\overline{m}$.}
\begin{table}[H]
\centering
\caption{Results of population comparisons with minimum- and maximum-$\overline{m}$ selection.}
\label{tab:pop_result_robust}
\begin{tabular}{ccccccc}
\toprule
& \multicolumn{3}{c}{\textbf{TMIN}} & \multicolumn{3}{c}{\textbf{TMAX}} \\
\midrule
\textbf{Subsample} & \boldmath {$m_0$} \textbf{[\boldmath{\%}] } & \boldmath {$\Delta\,m_0$ } \textbf{[\boldmath{\%}] } & \textbf{Significance} & \boldmath {$m_0$} \textbf{[\boldmath{\%}] } & \boldmath {$\Delta\,m_0$} \textbf{[\boldmath{\%}] } & \textbf{Significance} \\
\midrule
$S_{4.8}<$\,1\,Jy & $1.11_{-0.148}^{+0.178}$ & \multirow{2}{1.6cm}{$+0.65_{-0.172}^{+0.186}$} & \multirow{2}{0.8cm}{$4\sigma$}
& $1.48_{-0.197}^{+0.237}$ & \multirow{2}{1.6cm}{$+0.99_{-0.236}^{+0.220}$} & \multirow{2}{0.8cm}{$5\sigma$} \\
$S_{4.8}\geq$\,1\,Jy & $0.46_{-0.065}^{+0.078}$ & &
& $0.48_{-0.066}^{+0.080}$ & & \\
\midrule
$\alpha<$\,$-$0.1 & $0.58_{-0.082}^{+0.099}$ & \multirow{2}{1.6cm}{$-0.39_{-0.173}^{+0.160}$} & \multirow{2}{0.8cm}{$3\sigma$}
& $0.61_{-0.086}^{+0.104}$ & \multirow{2}{1.6cm}{$-0.68_{-0.222}^{+0.198}$} & \multirow{2}{0.8cm}{$4\sigma$} \\
$\alpha\geq$\,$-$0.1 & $0.97_{-0.126}^{+0.151}$ & &
& $1.30_{-0.169}^{+0.202}$ & & \\
\midrule
$\gamma$-ray quiet & $0.41_{-0.064}^{+0.079}$ & \multirow{2}{1.6cm}{$-0.62_{-0.168}^{+0.150}$} & \multirow{2}{0.8cm}{$4\sigma$}
& $0.45_{-0.069}^{+0.085}$ & \multirow{2}{1.6cm}{$-0.88_{-0.209}^{+0.184}$} & \multirow{2}{0.8cm}{$5\sigma$} \\
$\gamma$-ray loud & $1.05_{-0.128}^{+0.152}$ & &
& $1.34_{-0.164}^{+0.194}$ & & \\
\midrule
$|b|<\,20^{\circ}$ & $0.89_{-0.148}^{+0.188}$ & \multirow{2}{1.6cm}{$+0.14_{-0.178}^{+0.212}$} & \multirow{2}{1cm}{<1$\sigma$}
& $1.17_{-0.195}^{+0.248}$ & \multirow{2}{1.6cm}{$+0.28_{-0.233}^{+0.267}$} & \multirow{2}{1cm}{$1\sigma$} \\
$|b|\geq\,20^{\circ}$ & $0.74_{-0.086}^{+0.101}$ & &
& $0.88_{-0.102}^{+0.119}$ & & \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Variability Dependencies}
A strong dependence of IDV on source flux density is found in our sample. A similar effect was also observed in the MASIV survey \cite{Lovell2003}, where an increase in fractional amplitude of IDV with decreasing flux density was observed. The~result raised the possibility that the milliarcsecond-scale structures of the IDV sources may differ from those of non-IDVs, in the sense that the weaker sources are more compact. Consistent with this result, a direct comparison of milliarcsecond source structures between IDV/non-IDV sources showed that the former typically have smaller size than the latter \cite{Ojha2004d}. Moreover, the ISS-induced variability leads us to expect a flux density-dependent IDV, being common in compact objects but rare in objects with bright VLBI-scale jets~(e.g.,~\cite[][]{Quirrenbach1992, Lovell2003}). This is also the case for our sample, in which the fractional occurrence of IDV is $\sim$40$\%$ and $\sim$18$\%$ for the weak and strong subsamples, respectively. The~finding that IDV amplitude depends on the total flux density implies that the low flux-density sources identified in our sample are more compact.
It has long been suggested that flat-spectrum sources are more likely to show IDV \cite{Heeschen1987} than steep-spectrum sources. We narrow that conclusion by finding that in our selection of flat-spectrum sources, those with inverted spectra, on average, show stronger variability than sources with other spectral shapes. Since the flux density of inverted spectrum sources is more `core dominated', our~results strongly support the model that it is the most compact and core-dominant component of the blazars that causes IDV due to scintillation through the ISM.
We found a significant IDV dependence on $\gamma$-ray loudness in our sample. Prior to discussing the physical implications of this dependence, it is essential to discern a possible correlation between spectral indexes and $\gamma$-ray loudness, e.g., to test whether the $\gamma$-ray loud population is also dominated by inverted-spectrum sources.
The~K-S and A-D tests reveal that the distribution of the spectral index for $\gamma$-ray loud and $\gamma$-ray quiet sources is significantly different ($p_{K\text{-}S}=5.49\times10^{-4}$ and $p_{A\text{-}D}=1.34\times10^{-3}$ for K-S and A-D tests, respectively), with the $\gamma$-ray loud sample being dominated by the more inverted-spectrum sources. {As~mentioned above, the inverted-spectrum sources are dominated by optically thick compact components with higher brightness temperatures than their flat-spectrum counterparts.} For such sources the energy of radiative particles dominates over that of the magnetic field, and inverse-Compton (IC) scattering is therefore more efficient. This leads to an increased production of $\gamma$-rays {if IC
scattering is the dominant $\gamma$-ray emission process}. Another possibility is that the inverted spectrum (thus more core-dominated) sources are more strongly beamed~(e.g.,~\cite[][]{Hovatta2009}). In that case, as their apparent flux density is more strongly Doppler-boosted, for a given intrinsic brightness temperature they will have smaller angular sizes. Thus, they are more likely to scintillate and they are also more likely to have detectable $\gamma$-ray emission, which is dependent on beaming~(e.g.,~\cite[][]{Savolainen2010}). Furthermore, a positive correlation between the 4.8\,GHz radio flux density and the $\gamma$-ray photon flux is found, {which supports the view that it is the radio photons from the synchrotron branch of the spectral energy distribution (SED) that are up-scattered by the inverse-Compton process to the higher energies.}
{Our findings indicate a strong connection between the origin of radio and $\gamma$-ray emission. While~the radio emission of blazars is believed to be produced by synchrotron emission of relativistic electrons, which is Doppler-boosted, the origin of the $\gamma$-ray emission is still controversial, especially with regard to the target photon field and the location of the emission site~(see, e.g.,~\cite[][]{Maraschi1992, Dermer1992, Sikora1994}).
Besides~these leptonic models there are also models where the $\gamma$-rays originate from hadronic processes, i.e.,~relativistic protons co-accelerated with the electrons~(e.g.,~\cite[][]{Mannheim1993}). In this case, rather strong magnetic fields would be required. Following \cite{Marscher1983}, the~typical magnetic field strength for the sources in our sample can be calculated. Due~to the lack of detailed spectral information for the scintillating component, we adopt a typical spectral turnover frequency near 15\,GHz from a recent statistical study of AGN jet compactness and brightness temperatures~(\cite[][]{Lee2016}, see Figure~1). For a source with an angular size of the order of the scattering size \mbox{$\theta$ = 0.05$\sim$0.25\,mas}, a~typical peak flux density \mbox{$\mathrm{S_m}$ = 1 Jy}, and Doppler factor $\delta$ = 10, one obtains a magnetic field strength of 1.7\,mG$\sim$1.1\,G, which would favor leptonic models, unless the turnover frequency or the Doppler factor were much higher. Dedicated future studies will be necessary to further explore this~topic.}
The~presence of a relationship between ISS and galactic latitude has long been suggested from previous surveys~(e.g.,~\cite[][]{Rickett2006, Lovell2008, Lazio2008}). However, in this study only a marginal dependence was observed. Besides the smaller size of the sample as discussed in Section~\ref{sec:pop_b}, the lack of a clear correlation may indicate that {it is predominantly the intrinsic properties (e.g., angular size, core-dominance) of the blazars that determine how they scintillate, rather than properties of the distributed ISM. In other words, a sufficiently compact source is likely to show IDV, no matter through which part of the Galaxy it is observed. This statement excepts intra-hour variability~(e.g.,~\cite[][]{Kedziora-Chudczer1997, Dennett-Thorpe2000, Bignall2003}), an extremely rare ($\ll$1\%) phenomenon that requires an unusually nearby screen.}
\textls[-15]{{The~ISM is known to be highly inhomogeneous with small-scale discrete structures, such as photon-dominant regions (PDRs, e.g.,~\cite[][]{Hollenbach1997}), high-latitude clouds (HLCs, e.g.,~\cite[][]{Magnani1996}), local interstellar clouds (LICs, e.g.,~\cite[][]{Redfield2008}), and ionized flows driven by nearby hot stars~(e.g.,~\cite[][]{Walker2017}), etc. Such structures may dominate over the more diffuse ISM on the galactic latitude dependence of IDV: local structures within $\sim$100\,pc of the Sun can produce the strongest and most rapid IDV, due to the angular size of the first Fresnel zone scaling with the inverse square root of screen distance. Scintillation from more distant scattering screens is averaged out over the finite angular diameter of the source. The~high occurrence rate of IDV in compact radio sources indicates that for many AGN, the line-of-sight intersects such~inhomogeneities.}}
\subsection{Influence on SVLBI $T_B$ Measurements}
RadioAstron blazars that are strongly core-dominated should show significant IDV due to ISS, with the largest modulations expected near 5\,GHz (e.g.,~\cite[][]{Lovell2008}). To account for the effects of ISS during the RadioAstron observations, coordinated flux density monitoring over an extended period is necessary. To estimate the uncertainties of the $T_B$ measurements obtained from SVLBI and influenced by rapid flux variations, we assume that the source has a VLBI core component containing all the variable flux. We approximate this component with a circular Gaussian. The~relation between visibility and angular diameter of this Gaussian is expressed as \cite{Pearson1999}
\begin{equation}
F(\rho)=\exp{\left(\dfrac{-(\pi \theta \rho)^2}{4\ln 2}\right)}
\end{equation}
where $\theta$ is the angular diameter and $\rho$ the baseline length. The~uncertainty of $\theta$ is given by
\begin{equation}
\Delta\theta=-\frac{\overline{m}}{2}\cdot\frac{\sqrt{1+f_c^{-2}}}{\ln f_c}\cdot\theta
\end{equation}
in which $\overline{m}$ is the intrinsic modulation index and $f_c=\mathrm{S_{core}}/\mathrm{S_{total}}$ is the core-dominance. Applying error propagation analytically, one obtains for the uncertainty of $T_B$
\begin{equation}
\Delta T_B=\overline{m}\cdot\left(1+\dfrac{1+f_c^{-2}}{\ln^2f_c}\right)^{1/2}\cdot T_B
\end{equation}
which leads to an uncertainty of $\sim$$70\%$ in $T_B$ for a source which varies with $\overline{m}=10\%$ and whose core dominance is $f_c=0.8$.
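The error-propagation result is easy to check numerically; the short Python sketch below reproduces the $\sim$70\% figure quoted for $\overline{m}=10\%$ and $f_c=0.8$:

```python
import math

def delta_theta_frac(m_bar, f_c):
    """Fractional size uncertainty from the Delta-theta expression above."""
    return -(m_bar / 2.0) * math.sqrt(1.0 + f_c**-2) / math.log(f_c)

def delta_tb_frac(m_bar, f_c):
    """Fractional T_B uncertainty from the Delta-T_B expression above."""
    return m_bar * math.sqrt(1.0 + (1.0 + f_c**-2) / math.log(f_c)**2)

print(delta_theta_frac(0.10, 0.8))   # ~0.36
print(delta_tb_frac(0.10, 0.8))      # ~0.72, the ~70% quoted in the text
```

Note that the uncertainty grows rapidly as $f_c\to1$ (the $\ln f_c$ factor in the denominator), so strongly core-dominated scintillating sources carry the largest $T_B$ errors.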
\section{Summary and Conclusions} \label{sec:summary}
\textls[-15]{We presented statistical results based on five observing sessions of an IDV monitoring program with the Effelsberg 100-m radio telescope at 4.8\,GHz. The~overall statistics of the observed AGN showed that 31 out of 107 sources exhibited IDV, leading to an IDV detection rate of $\sim$30\%. The~IDV occurrence for $\gamma$-ray loud sources is $\sim$40\%, which is significantly higher than that for the $\gamma$-ray quiet ones.}
Moreover, with a maximum-likelihood approach we investigated the IDV dependence on various source properties and on the galactic latitude. We found significant differences in the strength of IDV, dependent on the source flux density, spectral index, and $\gamma$-ray loudness. The~results show that weak (S\,<\,1\,Jy), inverted-spectrum ($\alpha>-0.1$), or $\gamma$-ray loud sources, on average, exhibit significantly stronger IDV (significance $> 3 \sigma$). On the other hand, we did not find a significant dependence of IDV on the galactic latitude, which may suggest that it is predominantly the intrinsic properties (e.g., angular size, core-dominance) of the blazars that determine how they scintillate, rather than the directional dependence in the ISM. We estimate that for the blazars which show strong IDV, the~uncertainty in the observed VLBI brightness temperature can be as high as $\sim $$70\%$. A better physical understanding of these findings should become possible from direct size measurements through the ongoing RadioAstron space-VLBI observations.
\vspace{6pt}
\acknowledgments{The~authors thank the anonymous referee for his comments, which helped to improve the paper. We also thank B. Boccardi for discussion and comments. This paper made use of data obtained with the 100-m telescope of the MPIfR (Max-Planck-Institut f\"ur Radioastronomie) at Effelsberg. This research was partially supported by the Light in China's Western Region program (Grant No. 2015-XBQN-B-01, YBXM-2014-02, XBBS201324), the National Natural Science Foundation of China (NSFC, Grant No. 11503071, 11503072), the~National Basic Research Program of China (973 program, Grant No. 2015CB857100), Xinjiang Key Laboratory of Radio Astrophysics (Grant No. 2016D03020), and the China Scholarship Council (CSC, Grant No. 201704910392). Yuri Y. Kovalev acknowledges support by the government of the Russian Federation (agreement 05.Y09.21.0018) and the Alexander von Humboldt Foundation.}
\authorcontributions{J.L, H.B, T.P.K, X.L, A.K, Y.Y.K proposed the observations. Y.Y.K and K.V.S provided the RadioAstron schedule for this campaign. J.L, T.P.K and A.K performed the observations and conducted data calibration. H.B, T.P.K, X.L, E.A and J.A.Z contributed in the discussion and interpretation of the data. J.L wrote the paper.}
\conflictsofinterest{The~authors declare no conflict of interest.}
\section{Introduction}
Let $G$ be a connected reductive group over $\mathbb{Q}$ and let $X$ be a $G(\mathbb{R})$-conjugacy class of homomorphisms $\mathbb{S}\to G_{\mathbb{R}}$ such that the pair $(G,X)$ is a Shimura datum. Then for any sufficiently small open and compact subgroup $K\subseteq G(\mathbb{A}_f)$ the associated Shimura variety $\Sh_K(G,X):=G(\mathbb{Q})\setminus X\times G(\mathbb{A}_f)/K$ is a smooth, quasi-projective complex variety and admits a canonical model over the reflex field $E$ of the Shimura datum $(G,X)$.
Let $p$ be a prime number. Suppose that $K_p$ is a hyperspecial subgroup of $G(\mathbb{Q}_p)$, and let $\Sh_{K_p}(G,X):=\varprojlim_{K^p}\Sh_{K_pK^p}(G,X)$, where the limit is taken over open compact subgroups $K^p\subseteq G(\mathbb{A}_f^p)$. In \cite{La} Langlands suggested that $\Sh_{K_p}(G,X)$ should have an integral canonical model $\mathscr{S}_{K_p}(G,X)$ over the local ring $o_{E,(v)}$ at any place $v$ of $E$ lying above $p$; this conjecture was later refined by Milne in \cite{Mi1} (see Def. \ref{CanModelDef} for the precise notion of an integral canonical model). In particular, if $\mathscr{S}_{K_p}(G,X)$ exists, then for any open compact $K=K_pK^p\subseteq G(\mathbb{A}_f)$ (with $K^p$ sufficiently small) the quotient $\mathscr{S}_K(G,X):=\mathscr{S}_{K_p}(G,X)/K^p$ is a smooth model for the Shimura variety $\Sh_K(G,X)$ over $o_{E,(v)}$.
Consider the classical example of a symplectic Shimura datum, associated to a symplectic rational vector space $(V,\psi)$.
In this case $G=\GSp(V,\psi)$ and the models $\mathscr{S}_K(G,S^{\pm})$ are given as moduli spaces over $\Spec(\mathbb{Z}_{(p)})$ of principally polarized abelian schemes of relative dimension $n:=\dim_{\mathbb{Q}}(V)/2$ together with a mod-$K$ level structure, so the special fibers of such models can be studied via this moduli interpretation: There is a universal abelian scheme $\mathcal{A}\to\mathscr{S}_K(G,S^{\pm})$, and thus for any algebraically closed field $k$ of characteristic $p$ a point $x\in\mathscr{S}_K(G,S^{\pm})(k)$ defines an abelian variety $\mathcal{A}_x$ of dimension $n$ over $k$. The classification of the associated $p$-divisible groups $\mathcal{A}_x[p^{\infty}]$ up to isogeny then gives the Newton polygon stratification of $\mathscr{S}_K(G,S^{\pm})\otimes\mathbb{F}_p$, which has been studied by Oort (\cite{Oo1}), de Jong-Oort (\cite{dJO}), and many others. On the other hand, the Ekedahl-Oort stratification (or EO-stratification) on $\mathscr{S}_K(G,S^{\pm})\otimes\overline{\mathbb{F}_p}$ is obtained by classifying the $p$-torsion subgroups $\mathcal{A}_x[p]$ up to isomorphism (as $BT_1$-groups), see \cite{Oo2}. Other examples of stratifications arise from considering $\mathcal{A}_x[p^{\infty}]$ up to isomorphism (\cite{Oo3}), and from the $p$-rank of $\mathcal{A}_x[p]$ (\cite{Kob}). The \emph{ordinary locus} in $\mathscr{S}_K(G,S^{\pm})\otimes\mathbb{F}_p$ is defined as the set of those points $x$ such that for a geometric point $\hat{x}$ lying over $x$ the $p$-divisible group $\mathcal{A}_{\hat{x}}[p^{\infty}]$ is isogenous to a product of {\'e}tale and multiplicative groups. It has been known for a long time that the ordinary locus is a dense subset of $\mathscr{S}_K(G,S^{\pm})\otimes\mathbb{F}_p$, see for example Koblitz' proof in \cite{Kob}.
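As a concrete illustration (the case $n=1$ of modular curves, a standard fact not elaborated in the text): for an elliptic curve $E$ over $\overline{\mathbb{F}}_p$ there are exactly two possibilities,

```latex
% n = 1: the two Newton polygons of an elliptic curve E over \overline{F}_p.
% Ordinary case: E[p^\infty] is isogenous to an etale times a multiplicative part.
\[
E \text{ ordinary: } E[p^{\infty}] \sim \mathbb{Q}_p/\mathbb{Z}_p \times \mu_{p^{\infty}},
\quad \text{Newton slopes } (0,1);
\qquad
E \text{ supersingular: Newton slopes } \left(\tfrac{1}{2},\tfrac{1}{2}\right).
\]
```

In this case the density statement specializes to the classical fact that the ordinary locus of the modular curve mod $p$ is open and dense, its complement (the supersingular locus) being a finite set of points.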
In the case of a PEL-type Shimura variety integral canonical models have been constructed by Kottwitz in \cite{Ko2}. They have an explicit interpretation as moduli spaces over $\Spec(o_{E,(v)})$ of abelian schemes with additional structures, so again one has means to study the special fibers $\mathscr{S}_K(G,X)\otimes\kappa(v)$ (where $\kappa(v)$ is the residue class field of $E$ at $v$) via the universal object. However, the naive definitions lead to undesirable results in this context. For example, the density theorem for the ordinary locus fails, for the simple reason that the ordinary locus may be empty. This has led to refined stratifications which pay respect to the additional structures on the abelian varieties in question (see e.g. \cite{RR} for the Newton stratification, \cite{VW} for the EO-stratification). The $\mu$-\emph{ordinary locus} of $\mathscr{S}_K(G,X)\otimes\kappa(v)$ is here defined to be the most general Newton stratum; it was shown to be open and dense by Wedhorn in \cite{We1} by a deformation theoretic argument. This theorem was later reproven by Moonen in \cite{Mo2}, who showed that the $\mu$-ordinary Newton stratum coincides with the unique open Ekedahl-Oort stratum, which was by then known to be dense (see \cite{We2}). This is one instance of the fruitful interaction of these two stratifications, another being the recent work of Viehmann and Wedhorn (see \cite{VW}), who deduce the nonemptiness of all Newton strata from the fact that all Ekedahl-Oort strata are nonempty.
In this article we prove the density of the $\mu$-ordinary locus in the case of a Shimura variety of Hodge type. Although in general we no longer have a moduli interpretation of the models in this case, there is still a natural abelian scheme $\mathcal{A}\to\mathscr{S}_K(G,X)$ together with a tensor structure on $H_{\mathrm{dR}}^1(\mathcal{A}/\mathscr{S}_K(G,X))$, which allows us to define Newton strata and EO-strata in analogy to the PEL-case. In order to show the density of the $\mu$-ordinary locus we generalize the technique used in \cite{VW} to compare Newton strata and Ekedahl-Oort strata with the aid of group theoretic methods. In particular this allows us to show that the $\mu$-ordinary locus once again agrees with the unique open Ekedahl-Oort stratum, thereby generalizing Moonen's result from \cite{Mo2}. We remark that the known proofs of these results for PEL-type Shimura varieties often rely on an explicit case-by-case analysis, so that the group theoretic approach also provides a new point of view in this case.\\
Let us explain in more detail the contents of this paper. We fix an embedding $(G,X)\hookrightarrow(\GSp(V,\psi),S^{\pm})$ of Shimura data for some symplectic vector space $(V,\psi)$. There is a connected reductive group scheme $\mathcal{G}$ over $\mathbb{Z}_p$ such that $K_p=\mathcal{G}(\mathbb{Z}_p)\subseteq G(\mathbb{Q}_p)$. The existence of integral canonical models for $\Sh_K(G,X)$ (where $K=K_pK^p$ for $K^p$ sufficiently small) was shown by Kisin in \cite{Ki1} (with some restrictions for $p=2$). The starting point of the construction is the observation that one may choose a lattice $\Lambda\subseteq V$ and a finite set of tensors $s$ over $\Lambda_{\mathbb{Z}_{(p)}}$ such that $\mathcal{G}\subseteq\GL(\Lambda_{\mathbb{Z}_p})$ is the stabilizer subgroup of $s_{\mathbb{Z}_p}$. The model $\mathscr{S}_K(G,X)$ is then defined as the normalization of the closure of $\Sh_K(G,X)$ in a suitable moduli space of abelian schemes over $o_{E,(v)}$, thus by construction $\mathscr{S}_K(G,X)$ naturally comes along with an abelian scheme $\mathcal{A}$ on it. The tensors $s$ can be shown to give rise to tensors $s_{\mathrm{dR}}$ over $H_{\mathrm{dR}}^1(\mathcal{A}\otimes E/\Sh_K(G,X))$. The key step in the proof that the schemes $\mathscr{S}_K(G,X)$ indeed give an integral canonical model for $\Sh_{K_p}(G,X)$ is now to show that $\mathscr{S}_K(G,X)$ is \emph{smooth}. The technique used in \cite{Ki1} to achieve this shows at the same time that the $s_{\mathrm{dR}}$ extend to tensors $s_{\mathrm{dR}}^{\circ}$ over $H_{\mathrm{dR}}^1(\mathcal{A}/\mathscr{S}_K(G,X))$, in a way such that for any closed or geometric point $x$ of the special fiber of $\mathscr{S}_K(G,X)$ one gets induced tensors over the contravariant Dieudonn{\'e} module $\mathbb{D}(\mathcal{A}_x[p^{\infty}])$ which are Frobenius invariant and define a subgroup isomorphic to $\mathcal{G}_{W(\kappa(x))}$.
To make this more precise, let us introduce some more notation: Let $\overline{\mathbb{F}}$ be an algebraic closure of $\mathbb{F}_p$. For simplicity, throughout the rest of the introduction we restrict ourselves to $\overline{\mathbb{F}}$-valued points. In fact, to introduce the stratifications we also need similar descriptions for points over more general algebraically closed fields and points with finite residue fields. However, since all strata will be locally closed subsets, the $\overline{\mathbb{F}}$-valued points contain all the topological information on the stratifications, once they are defined.\\
Let $\mathcal{O}:=W(\overline{\mathbb{F}})$ and $L:=\mathrm{Frac}(\mathcal{O})$, and let $\sigma$ be the Frobenius isomorphism of $\overline{\mathbb{F}}$ resp. of $\mathcal{O}$ and $L$. We fix a Borel pair $(\mathcal{B},\mathcal{T})$ of $\mathcal{G}$ over $\mathcal{O}$ which is $\sigma$-invariant. This exists since $\mathcal{G}$ is quasisplit. Our Shimura datum defines in the usual way a conjugacy class $[\nu]$ of cocharacters for $G$, and hence for $\mathcal{G}$. We define $\mu$ as the unique dominant cocharacter in $X_*(\mathcal{T})$ with respect to $\mathcal{B}$ such that $\sigma^{-1}(\mu)^{-1}$ lies in $[\nu]$.\\
Let $\Lambda^*$ be the dual $\mathbb{Z}$-module of $\Lambda$. The tensors $s$ can also be viewed as tensors over $\Lambda^*_{\mathbb{Z}_{(p)}}$ in a canonical way. We let $\mathcal{G}$ act on $\Lambda^*_{\mathbb{Z}_p}$ via the contragredient representation $\GL(\Lambda)\to\GL(\Lambda^*),\ g\mapsto g^{\vee}:=(g^{-1})^*$.
Now let $x$ be an $\overline{\mathbb{F}}$-valued point of $\mathscr{S}_K(G,X)$, and let $(\mathbb{D}_x,F,V):=\mathbb{D}(\mathcal{A}_x[p^{\infty}])$ be the associated contravariant Dieudonn{\'e} module over $\mathcal{O}$, then the following hold (Corollary \ref{CrysTensors}, Lemma \ref{LinearizationLem}):
\begin{enumerate}[(1)]
\item The tensors $s_{\mathrm{dR}}^{\circ}$ induce $F$-invariant tensors $s_{\mathrm{cris},x}$ on $\mathbb{D}_x$, and there is an isomorphism of $\mathcal{O}$-modules $\Lambda^*_{\mathcal{O}}\simeq\mathbb{D}_x$ which identifies $s_{\mathcal{O}}$ with $s_{\mathrm{cris},x}$.
\item If we identify $\mathbb{D}_x$ with $\Lambda^*_{\mathcal{O}}$ using an isomorphism as in (1), then $F=g^{\vee}(1\otimes\sigma)$ for some $g\in\mathcal{G}(\mathcal{O})\mu(p)\mathcal{G}(\mathcal{O})$, and this element is independent of the choice of the isomorphism up to $\sigma$-conjugation by an element of $\mathcal{G}(\mathcal{O})$.
\end{enumerate}
Thus, writing $\mathcal{C}(\mathcal{G},\mu)$ for the set of $\mathcal{G}(\mathcal{O})$-$\sigma$-conjugacy classes of the double coset $\mathcal{G}(\mathcal{O})\mu(p)\mathcal{G}(\mathcal{O})\subseteq G(L)$, we obtain a well-defined map
\[
\gamma\colon \mathscr{S}_K(G,X)(\overline{\mathbb{F}})\longrightarrow\mathcal{C}(\mathcal{G},\mu)
\]
by sending $x$ to the $\mathcal{G}(\mathcal{O})$-$\sigma$-conjugacy class of the element $g$ from (2).
Properties (1) and (2) are direct consequences of the results in \cite{Ki1}, though they are not explicitly stated there. We refer the reader also to \S 1 of the recent preprint \cite{Ki2}. The dual lattice appears here due to the fact that we use contravariant Dieudonn{\'e} theory, which is also the reason for our definition of $\mu$.\\
The Newton stratification is easily described in this context: We have a natural map $\tilde{\theta}\colon\mathcal{C}(\mathcal{G},\mu)\to B(G)$, where $B(G)$ denotes the set of $\sigma$-conjugacy classes in $G(L)$. The Newton strata are then given as the fibers $\mathcal{N}^b=\theta^{-1}(\{b\})$ of the composite map
\[
\theta\colon\mathscr{S}_K(G,X)(\overline{\mathbb{F}})\stackrel{\gamma}{\longrightarrow}\mathcal{C}(\mathcal{G},\mu)\stackrel{\tilde{\theta}}{\longrightarrow} B(G).
\]
Equivalently, two points $x,x'\in\mathscr{S}_K(G,X)(\overline{\mathbb{F}})$ lie in the same Newton stratum if and only if there is an isomorphism of isocrystals $\mathbb{D}_x\otimes_{\mathcal{O}}L\simeq\mathbb{D}_{x'}\otimes_{\mathcal{O}}L$ which respects the tensors $s_{\mathrm{cris},x}$ and $s_{\mathrm{cris},x'}$. These strata $\mathcal{N}^b$ are in fact already defined over $\kappa(v)$ (see Section \ref{NewtStratSec} for the precise definition), and a result of Vasiu (\cite{Va1}, 5.3.1.) shows that they are locally closed subsets of $\mathscr{S}_K(G,X)\otimes\kappa(v)$. The image of $\tilde{\theta}$ is the subset $B(G,\mu)\subseteq B(G)$ which has already been considered in the context of PEL-type Shimura varieties and affine Deligne-Lusztig sets. It is endowed with a partial order $\preceq$, and contains a unique maximal element $b_{\mathrm{max}}\in B(G,\mu)$ with respect to this order. We define the $\mu$-\emph{ordinary locus} in $\mathscr{S}_K(G,X)\otimes\kappa(v)$ as the Newton stratum $\mathcal{N}^{b_{\mathrm{max}}}$.\\
We can now state our first main theorem:
\begin{theorem}
\label{MainThm1}
The $\mu$-ordinary locus is open and dense in $\mathscr{S}_K(G,X)\otimes\kappa(v)$.
\end{theorem}
To prove this, we relate the Newton stratification to the Ekedahl-Oort stratification on $\mathscr{S}_K(G,X)\otimes\overline{\mathbb{F}}$, which has been defined by C. Zhang in \cite{Zh1}. Just as in the case of a PEL-type Shimura variety the definition of this stratification relies on the theory of $\mathcal{G}_{\mathbb{F}_p}$-zips. As the precise construction is somewhat involved, we only state the main results here, see Section \ref{EOStratSec} for details.
Let $(W,S)$ be the Weyl group of $\mathcal{G}$ with respect to $(\mathcal{B},\mathcal{T})$, and let $J\subseteq S$ be the type of the cocharacter $\sigma(\mu)$. In analogy to the PEL-case the Ekedahl-Oort stratification is then parametrized by the set ${^J}W$ of shortest left-coset representatives for $W_J$ in $W$. This set carries a partial order $\preceq$, which refines the Bruhat order, and there is again a unique maximal element $w_{\mathrm{max}}\in{^J}W$ with respect to $\preceq$. By the main results of \cite{Zh1}, each stratum $\mathcal{S}^w\subseteq\mathscr{S}_K(G,X)\otimes\overline{\mathbb{F}}$ is locally closed, and the closure of $\mathcal{S}^w$ is precisely the union of the strata $\mathcal{S}^{w'}$ with $w'\preceq w$. Since this in particular implies that $\mathcal{S}^{w_{\mathrm{max}}}$ is open and dense in $\mathscr{S}_K(G,X)\otimes\overline{\mathbb{F}}$, Theorem \ref{MainThm1} follows directly from our second main result, which generalizes Moonen's description of the $\mu$-ordinary locus in the PEL-case:
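As a toy illustration of the index set ${}^JW$ (our own sketch, independent of \cite{Zh1}): for $W=S_3$ with simple reflections $s_1,s_2$ and $J=\{s_1\}$, the shortest left-coset representatives can be enumerated by brute force, using that the Coxeter length of a permutation equals its number of inversions.

```python
from itertools import permutations

def length(w):
    # Coxeter length of a permutation of {0,...,n-1} = number of inversions
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

def compose(u, v):
    # (u*v)(i) = u(v(i))
    return tuple(u[v[i]] for i in range(len(u)))

n = 3
W = list(permutations(range(n)))      # the Weyl group W = S_3
s1 = (1, 0, 2)                        # simple reflection s_1
W_J = [tuple(range(n)), s1]           # parabolic subgroup W_J = {e, s_1}

# ^J W: the minimal-length representative of each left coset W_J * w
reps, seen = [], set()
for w in W:
    coset = frozenset(compose(u, w) for u in W_J)
    if coset not in seen:
        seen.add(coset)
        reps.append(min(coset, key=length))
reps.sort(key=length)
print(reps)  # [(0, 1, 2), (0, 2, 1), (2, 0, 1)]
```

Here $(0,2,1)=s_2$ and $(2,0,1)=s_2s_1$, so ${}^JW=\{e,\,s_2,\,s_2s_1\}$, of cardinality $|W|/|W_J|=3$.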
\begin{theorem}
\label{MainThm2}
The strata $\mathcal{N}^{b_{\mathrm{max}}}$ and $\mathcal{S}^{w_{\mathrm{max}}}$ are equal as subsets of $\mathscr{S}_K(G,X)\otimes\overline{\mathbb{F}}$. Furthermore, for any two $\overline{\mathbb{F}}$-valued points $x,x'$ in this set there is an isomorphism of Dieudonn{\'e} modules $\mathbb{D}_x\simeq\mathbb{D}_{x'}$ which identifies $s_{\mathrm{cris},x}$ with $s_{\mathrm{cris},x'}$.
\end{theorem}
To show this, we work with $\overline{\mathbb{F}}$-valued points and make use of the map $\gamma\colon\mathscr{S}_K(G,X)(\overline{\mathbb{F}})\to\mathcal{C}(\mathcal{G},\mu)$ considered above: By definition of the Ekedahl-Oort stratification, the $\overline{\mathbb{F}}$-valued points of the strata $\mathcal{S}^w$ are the fibers of a map $\zeta\colon\mathscr{S}_K(G,X)(\overline{\mathbb{F}})\to{^J}W$. We show that this map factors via $\gamma$, giving rise to a commutative diagram
\[
\begin{xy}
\xymatrix{
& & & B(G,\mu)\\
\mathscr{S}_K(G,X)(\overline{\mathbb{F}})\ar[rrru]^{\theta}\ar[rr]^{\gamma}\ar[rrrd]_{\zeta} & & \mathcal{C}(\mathcal{G},\mu) \ar[ru]_{\tilde{\theta}}\ar[rd]^{\tilde{\zeta}} & \\
& & & {^JW}
}
\end{xy}
\]
which allows us, to some extent, to compare the stratifications inside the set $\mathcal{C}(\mathcal{G},\mu)$. This is in analogy to the method already used in \cite{VW} for the PEL-case. In particular, the diagram shows that Theorem \ref{MainThm2} follows once we know that $\tilde{\theta}^{-1}(\{b_{\mathrm{max}}\})=\tilde{\zeta}^{-1}(\{w_{\mathrm{max}}\})$ in $\mathcal{C}(\mathcal{G},\mu)$, and that this set consists of a single element. Since we can give a precise description of the fibers of $\tilde{\theta}$ and $\tilde{\zeta}$, we are reduced to a purely group theoretic result for the group $\mathcal{G}$.
We finally prove this to be true, in a more general context, in the final section of this article, which is logically independent of the rest of the paper. \\
\noindent
{\bf Acknowledgements.}
I wish to thank my advisor T. Wedhorn deeply for his continuous encouragement and interest in this work. Further I am grateful to E. Lau for helpful discussions on Dieudonn{\'e} theory and much useful advice, and to J.-S. Koskivirta for many helpful comments.
\section{General notations and conventions}
\begin{blank}
\label{sigmaPrep}
For a perfect field $k$ of positive characteristic $p$ we write $W(k)$ for the Witt ring over $k$, and $L(k)$ for its quotient field. We generally denote by $\sigma$ the Frobenius automorphism $a\mapsto a^p$ of $k$ (with the exception of the last section, where $\sigma$ will denote a finite power of this map), and also its lift to $W(k)$ and $L(k)$.
Let $k$ be a perfect field of characteristic $p$. Let $R$ be either $k$ or $W(k)$, and let $R_0\subseteq R$ be the subring of elements which are fixed by $\sigma$ (i.e., either $R_0=\mathbb{F}_p$ or $R_0=\mathbb{Z}_p$). For any $R$-module $M$ let $M^{(\sigma)}:=M\otimes_{R,\sigma}R$, and for a homomorphism $\beta\colon M\to N$ of $R$-modules write $\beta^{(\sigma)}:=\beta\otimes 1\colon M^{(\sigma)}\to N^{(\sigma)}$. If $f\colon M\to N$ is a $\sigma$-linear map of $R$-modules then
\[
M^{(\sigma)}\longrightarrow N,\quad m\otimes a\longmapsto af(m)
\]
is $R$-linear, and if $f$ is $\sigma^{-1}$-linear then
\[
M\longrightarrow N^{(\sigma)},\quad m\longmapsto f(m)\otimes 1
\]
is $R$-linear. In both cases we call the resulting homomorphism the \emph{linearization} of $f$ and denote it by $f^{\mathrm{lin}}$.
Now let $M_0$ be an $R_0$-module, and let $M=M_0\otimes_{R_0}R$. Then $\sigma$ and $\sigma^{-1}$ act on $M$ via $1\otimes\sigma$ and $1\otimes\sigma^{-1}$ respectively. Further, there is a canonical isomorphism
\[
M=M_0\otimes_{R_0}R\stackrel{\sim}{\longrightarrow} M\otimes_{R_0}R\otimes_{R,\sigma}R=M^{(\sigma)},\quad m\otimes a\mapsto m\otimes 1\otimes a.
\]
We will often use this isomorphism to identify $M$ with $M^{(\sigma)}$. For example, if $f\colon M\to N$ is $\sigma$-linear, we also write $f^{\mathrm{lin}}\colon M\cong M^{(\sigma)}\to N$; with this notation we then have $f=f^{\mathrm{lin}}\circ(1\otimes \sigma)$.
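A minimal worked example of the linearization (our own illustration): take $M=M_0\otimes_{R_0}R$ with $M_0=R_0^2$, basis $(e_1,e_2)$, and let $f$ be the $\sigma$-linear map with $f(e_1)=a\,e_1$ and $f(e_2)=b\,e_1+e_2$ for some $a,b\in R$.

```latex
% For m = x e_1 + y e_2 one has f(m) = sigma(x) f(e_1) + sigma(y) f(e_2), i.e.
\[
f\begin{pmatrix} x \\ y \end{pmatrix}
=\begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} \sigma(x) \\ \sigma(y) \end{pmatrix},
\qquad\text{so}\qquad
f=f^{\mathrm{lin}}\circ(1\otimes\sigma),
\]
% where f^lin is the R-linear map with matrix (a b; 0 1), viewed as a map
% M -> N = M under the identification M \cong M^{(\sigma)} above.
```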
If $M_0$ is a finitely generated free $R_0$-module, then $\sigma$ also acts on $\GL(M)$ and on the group of cocharacters $\Hom_R(\mathbb{G}_{m,R},\GL(M))$. For $g\in\GL(M)$ we have $\sigma(g)=(1\otimes\sigma)\circ g\circ (1\otimes\sigma^{-1})$, and for a cocharacter $\lambda\colon\mathbb{G}_{m,R}\to\GL(M)$ we find that $\sigma(\lambda)(a)=\sigma(\lambda(a))$ for all $a\in R$.
\end{blank}
\begin{blank}
\label{TensorPrep}
Let $R$ be any ring. If $M$ is a finitely generated free module over $R$, we denote by $M^{\otimes}$ the direct sum of all $R$-modules that arise from $M$ by applying the operations of taking duals, tensor products, symmetric powers and exterior powers a finite number of times. An element of $M^{\otimes}$ will be called a \emph{tensor} over $M$. We have an obvious notion of base change for tensors. Let $M^*$ be the dual $R$-module of $M$. Since there is a canonical identification of $M^{\otimes}$ with $(M^*)^{\otimes}$ we can view tensors over $M$ as tensors over $M^*$ as well.
Let $M$ and $M'$ be finitely generated free $R$-modules and let $s=(s_i)_{i\in I}$ and $s'=(s_i')_{i\in I}$ be families of tensors over $M$ and $M'$ respectively. Every isomorphism $f\colon M\to M'$ gives an isomorphism $(f^{-1})^*\colon (M)^*\to (M')^*$ and thus $f^{\otimes}\colon M^{\otimes}\to(M')^{\otimes}$. We will write $f\colon (M,s)\to (M',s')$ if and only if $f^{\otimes}$ takes $s_i$ to $s_i'$ for all $i\in I$. We say that a family of tensors $(s_i)_{i\in I}$ over $M$ \emph{defines} the subgroup $G\subseteq \GL(M)$ if
\[
G(R')=\{g\in\GL(M_{R'})\mid g^{\otimes}((s_i)_{R'})=(s_i)_{R'}\text{ for all }i\in I\}
\]
for every $R$-algebra $R'$. We have the contragredient representation
\[
(\cdot)^{\vee}\colon\GL(M)\longrightarrow\GL(M^*),\quad g\longmapsto g^{\vee}:=(g^{-1})^*,
\]
which is in fact an isomorphism of group schemes over $R$. Let $(s_i)_{i\in I}$ be a family of tensors over $M$, defining a subgroup $G\subseteq\GL(M)$. Then these tensors $(s_i)_{i\in I}$, when we consider them as tensors over $M^*$, define the subgroup $\{g^{\vee}\mid g\in G\}\subseteq\GL(M^*)$.
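As a basic example of a family of tensors defining a subgroup (standard, and only meant as an illustration): let $M=R^2$ with basis $(e_1,e_2)$ and take the single tensor $s:=e_1\wedge e_2\in\Lambda^2M\subseteq M^{\otimes}$.

```latex
% For g in GL(M_{R'}) one computes
\[
g^{\otimes}(e_1\wedge e_2)=g(e_1)\wedge g(e_2)=\det(g)\,(e_1\wedge e_2),
\]
% so g fixes s_{R'} if and only if det(g) = 1; hence the single tensor s
% defines the subgroup SL(M) \subseteq GL(M).
```

Similarly, a perfect alternating pairing $\psi\in(M\otimes M)^*$ defines the symplectic group $\mathrm{Sp}(M,\psi)\subseteq\GL(M)$; the tensors $s$ used below to cut out $\mathcal{G}\subseteq\GL(\Lambda_{\mathbb{Z}_p})$ play the same role.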
\end{blank}
\section{Shimura data of Hodge type and the tower of Shimura varieties}
\label{ShimVarSec}
Let $G$ be a connected reductive group over $\mathbb{Q}$ and let $X$ be a $G(\mathbb{R})$-conjugacy class of algebraic morphisms $\mathbb{S}\to G_{\mathbb{R}}$ such that $(G,X)$ is a Shimura datum of Hodge type. By definition, this means that there is an embedding $(G,X)\hookrightarrow(\GSp(V,\psi),S^{\pm})$ into a symplectic Shimura datum, which we fix once and for all. We will often simply write $\GSp(V)$ for $\GSp(V,\psi)$ with the symplectic pairing implied.
The datum $(G,X)$ defines conjugacy classes of cocharacters for $G$ as follows: Every element $h\in X$ defines a Hodge decomposition $V_{\mathbb{C}}=V^{(-1,0)}\oplus V^{(0,-1)}$ via the embedding $X\hookrightarrow S^{\pm}$.
\begin{definition}
\label{CocharDef}
\begin{enumerate}[(i)]
\item We define $\nu_h$ to be the cocharacter of $G_{\mathbb{C}}$ such that $\nu_h(z)$ acts on $V^{(-1,0)}$ through multiplication by $z$ and on $V^{(0,-1)}$ as the identity.
\item We denote by $[\nu]$ the unique $G(\mathbb{C})$-conjugacy class which contains all the cocharacters $\nu_h$, and by $[\nu^{-1}]$ the conjugacy class which contains the $\nu_h^{-1}$.
\end{enumerate}
\end{definition}
The reflex field $E$ of $(G,X)$ is defined as the field of definition of $[\nu]$ (or equivalently of $[\nu^{-1}]$); this is known to be a finite extension of $\mathbb{Q}$.
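For example (the classical Siegel case, included only as an illustration): for the datum $(\GSp(V,\psi),S^{\pm})$ the cocharacters $\nu_h$ are all conjugate to $z\mapsto\mathrm{diag}(z,\dots,z,1,\dots,1)$ (with $n$ entries of each kind) inside $\GSp(V)_{\mathbb{C}}$, and this conjugacy class is defined over $\mathbb{Q}$, so that

```latex
% The reflex field of the Siegel datum is the smallest possible one:
\[
E(\GSp(V,\psi),S^{\pm})=\mathbb{Q},
\]
% consistent with the fact that Siegel modular varieties have canonical
% models over Q.
```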
We fix a prime number $p$ such that $G$ is of good reduction at $p$. Let $K_p\subseteq G(\mathbb{Q}_p)$ be a hyperspecial subgroup. Consider subgroups of the type $K=K_pK^p\subseteq G(\mathbb{A}_f)$, where $K^p\subseteq G(\mathbb{A}_f^p)$ is open and compact. If $K^p$ is sufficiently small, then the double quotient
\[
\Sh_K(G,X):=G(\mathbb{Q})\setminus X\times G(\mathbb{A}_f)/K
\]
(where $G(\mathbb{Q})$ acts diagonally and $K$ acts on the right factor) has a natural structure as a smooth quasi-projective variety over $\mathbb{C}$, and further this variety has a canonical model over $E$. In the sequel we will always view $\Sh_K(G,X)$ as an algebraic variety over $E$.\\
The projective limit
\[
\Sh_{K_p}(G,X):=\varprojlim_{K^p} \Sh_{K_pK^p}(G,X),
\]
taken over the set of open and compact subgroups of $G(\mathbb{A}_f^p)$, carries a continuous right action of $G(\mathbb{A}_f^p)$ in the sense of Deligne (see \cite{Mi1}, 2.1.): Elements $g\in G(\mathbb{A}_f^p)$ act by isomorphisms $\Sh_{K_p(gK^pg^{-1})}(G,X)\to\Sh_{K_pK^p}(G,X)$ such that every $g\in K^p$ gives the identity map on $\Sh_{K_pK^p}(G,X)$, and such that for every normal subgroup $K'^p\subseteq K^p$ the natural covering map induces an isomorphism $\Sh_{K_pK'^p}(G,X)/(K^p/K'^p)\simeq\Sh_{K_pK^p}(G,X)$. In particular, we have an equality $\Sh_{K_pK^p}(G,X)=\Sh_{K_p}(G,X)/K^p$ for every open and compact $K^p\subseteq G(\mathbb{A}_f^p)$.\\
We fix a place $v$ of $E$ over $p$. The existence of the hyperspecial subgroup $K_p$ implies that $E$ is unramified at $p$ (\cite{Mi2}, 4.7.). Let $o_E$ be the ring of integers in $E$, and let $o_{E,(v)}$ be its localization at $v$.
\begin{definition}[\cite{Mi1}, \S 2]
\label{CanModelDef}
An \emph{integral canonical model} of $\Sh_{K_p}(G,X)$ over $o_{E,(v)}$ is a projective system $\mathscr{S}_{K_p}(G,X)=\varprojlim_{K^p}\mathscr{S}_{K_pK^p}(G,X)$ of schemes over $o_{E,(v)}$, indexed by the set of open and compact subgroups of $G(\mathbb{A}_f^p)$, together with a continuous right action of $G(\mathbb{A}_f^p)$ such that:
\begin{enumerate}[(i)]
\item If $K^p$ is sufficiently small, then $\mathscr{S}_{K_pK^p}(G,X)$ is smooth over $o_{E,(v)}$ and $\mathscr{S}_{K_pK'^p}(G,X)\to \mathscr{S}_{K_pK^p}(G,X)$ is {\'e}tale for every $K'^p\subseteq K^p$.
\item $\mathscr{S}_{K_p}(G,X)\otimes_{o_{E,(v)}} E$ is $G(\mathbb{A}_f^p)$-equivariantly isomorphic to $\Sh_{K_p}(G,X)$.
\item Let $Y$ be a regular, formally smooth $o_{E,(v)}$-scheme. Then every morphism $Y\otimes_{o_{E,(v)}} E\to \mathscr{S}_{K_p}(G,X)\otimes_{o_{E,(v)}} E$ extends to a morphism $Y\to \mathscr{S}_{K_p}(G,X)$.
\end{enumerate}
\end{definition}
Note that in the situation of (iii) the extension $Y\to \mathscr{S}_{K_p}(G,X)$ is automatically unique, since $Y$ is reduced and $Y\otimes_{o_{E,(v)}} E$ is dense in $Y$. Hence a model in the sense of Def. \ref{CanModelDef} is unique up to canonical isomorphism, which justifies the name ``canonical model''. In \cite{Mi1} Milne conjectured that an integral canonical model of $\Sh_{K_p}(G,X)$ always exists (for a general Shimura datum, not necessarily of Hodge type), see also the treatment in (\cite{Mo1}, \S 3).
\begin{example}
\label{ExPEL1}
Consider a Shimura datum $(G,X)\hookrightarrow(\GSp(V,\psi),S^{\pm})$ of PEL-type (here $G$ is not connected in general): Let $(B,*)$ be a finite dimensional semi-simple $\mathbb{Q}$-algebra with involution which acts on $V$ such that $\psi(bv,w)=\psi(v,b^*w)$ for all $b\in B$ and all $v,w\in V$ and such that
\[
G(R)=\{g\in\GSp(V_R,\psi_R)\mid g(bv)=bg(v)\text{ for }b\in B_R, v\in V_R\}
\]
for any $\mathbb{Q}$-algebra $R$. Then $p$ is a prime of good reduction for $G$ if and only if $B_{\mathbb{Q}_p}$ is unramified. In this case, it is shown in (\cite{Ko2}, \S 5 and \S 6) that a canonical integral model exists if $p\geq3$ or if $p=2$ and $G^{\mathrm{ad}}$ has no factor of Dynkin type $D$. The schemes $\mathscr{S}_{K_pK^p}(G,X)$ then have an explicit description as a moduli space of abelian schemes with additional structures over $o_{E,(v)}$.
\end{example}
\section{Integral canonical models for Shimura varieties of Hodge type}
In this section we briefly describe the construction of the canonical integral model for $\Sh_{K_p}(G,X)$, following Kisin's proof in \cite{Ki1}, and introduce the objects which are fundamental for the study of the closed fiber which follows. In the case $p=2$ two restrictions arise in order for the construction to work.
\subsection{Construction of the integral models}
\label{IntModConstrSec}
Let $\mathcal{G}$ be a reductive model of $G$ over $\mathbb{Z}_p$ such that $K_p=\mathcal{G}(\mathbb{Z}_p)$. If $p=2$, we assume that $G^{\mathrm{ad}}$ has no factor of Dynkin type $B$. Then there is a lattice $\Lambda\subseteq V$ and a finite set of tensors $s:=(s_i)\subset\Lambda_{\mathbb{Z}_{(p)}}^{\otimes}$ such that $\mathcal{G}$ is identified with the subgroup of $\GL(\Lambda_{\mathbb{Z}_p})$ defined by $s_{\mathbb{Z}_p}\subset\Lambda_{\mathbb{Z}_p}^{\otimes}$ (\cite{Ki1}, 2.3.1., 2.3.2.) via our chosen embedding $G\hookrightarrow\GSp(V)$. Possibly passing to a homothetic lattice, we may and will further assume that the symplectic pairing $\psi$ on $V$ restricts to a pairing $\Lambda\times\Lambda\to\mathbb{Z}$. Note however that $\Lambda$ will not be self-dual with respect to $\psi$ in general.
Let $\tilde{K}_p$ be the stabilizer of $\Lambda_{\mathbb{Z}_p}$ in $\GSp(V)(\mathbb{Q}_p)$. Then $K_p=\tilde{K}_p\cap G(\mathbb{Q}_p)$. Let $K^p\subseteq G(\mathbb{A}_f^p)$ be an open and compact subgroup such that $K:=K_pK^p$ leaves $\Lambda_{\hat{\mathbb{Z}}}$ stable (which is the case for all sufficiently small $K^p$). It can be shown that there is an open and compact subgroup $\tilde{K}^p\subseteq\GSp(V)(\mathbb{A}_f^p)$ which contains $K^p$, such that $\tilde{K}:=\tilde{K}_p\tilde{K}^p$ also leaves $\Lambda_{\hat{\mathbb{Z}}}$ stable and such that the natural map $\Sh_K(G,X)\rightarrow\Sh_{\tilde{K}}(\GSp(V),S^{\pm})\otimes_{\mathbb{Q}}E$ is a closed embedding (\cite{Ki1}, 2.1.2., 2.3.2.). We call a subgroup $\tilde{K}^p$ with these properties \emph{admissible} for $K^p$. If $K'^p\subseteq K^p$ then there is an open and compact subgroup $\tilde{K}'^p\subseteq\tilde{K}^p$ which is admissible for $K'^p$ and we obtain a commutative diagram
\[
\label{CoupleCD}
\begin{xy}
\xymatrix{
\Sh_{K'}(G,X)\ar@^{(->}[r]\ar[d] & \Sh_{\tilde{K}'}(\GSp(V),S^{\pm})\otimes_{\mathbb{Q}}E\ar[d]\\
\Sh_K(G,X)\ar@^{(->}[r] & \Sh_{\tilde{K}}(\GSp(V), S^{\pm})\otimes_{\mathbb{Q}}E
}
\end{xy}
\]
where the horizontal arrows are closed embeddings.
\begin{construction}
\label{ModuliConstr}
We denote by $\Lambda'$ the dual lattice of $\Lambda$ with respect to $\psi$. Let $|\Lambda'/\Lambda|=d$, and let $\dim(V)=2n$. Let $K^p\subseteq G(\mathbb{A}_f^p)$ be an open and compact subgroup, and let $\tilde{K}^p$ be admissible for $K^p$. With respect to $\Lambda$, we consider the moduli space $\mathscr{M}_{n,d,\tilde{K}^p}$ over $\mathbb{Z}_{(p)}$ which parametrizes abelian schemes with a polarization of degree $d$ and a mod-$\tilde{K}^p$ level structure up to isomorphism (see \cite{Ki1}, 2.3.3.). By the classical result of Mumford, $\mathscr{M}_{n,d,\tilde{K}^p}$ is representable by a quasi-projective scheme over $\mathbb{Z}_{(p)}$ if $\tilde{K}^p$ is sufficiently small.
Let again $\tilde{K}=\tilde{K}_p\tilde{K}^p$. Due to the moduli interpretation of Shimura varieties of Siegel type, there is an embedding
\[
\label{ModuliMap}
\Sh_{\tilde{K}}(\GSp(V),S^{\pm})\hookrightarrow \mathscr{M}_{n,d,\tilde{K}^p}
\]
of $\mathbb{Z}_{(p)}$-schemes. We give a description of this map on $\mathbb{C}$-valued points, cf. (\cite{Va1}, 4.1.): Let
\[
[h,g]\in\Sh_{\tilde{K}}(\GSp(V),S^{\pm})(\mathbb{C})=\GSp(V)(\mathbb{Q})\setminus S^{\pm}\times\GSp(V)(\mathbb{A}_f)/\tilde{K}.
\]
Let $V_{\mathbb{C}}=V^{(-1,0)}\oplus V^{(0,-1)}$ be the Hodge decomposition induced by $h$. There is a unique $\mathbb{Z}$-lattice $\Lambda_g\subset V$ such that $(\Lambda_g)_{\hat{\mathbb{Z}}}=g(\Lambda_{\hat{\mathbb{Z}}})$ and a unique $\mathbb{Q}^{\times}$-multiple $\psi_{h,g}$ of $\psi$ such that $g(\Lambda'_{\hat{\mathbb{Z}}})$ is the dual lattice of $g(\Lambda_{\hat{\mathbb{Z}}})$ with respect to $\psi_{h,g}$ and such that the form $(v,w)\mapsto\psi_{h,g}(v,h(i)w)$ is positive definite on $V_{\mathbb{R}}$. Then $[h,g]$ is mapped to the isomorphism class of $(A, \lambda,\eta)$, where $A:=V^{(-1,0)}/\Lambda_g$, endowed with the polarization $\lambda$ induced by $\psi_{h,g}$, is the polarized complex abelian variety associated to $(V,\psi_{h,g},\Lambda_g,h)$ via Riemann's theorem (see \cite{De1}, 4.7.), and $\eta$ is the right $\tilde{K}^p$-coset of
\[
\Lambda_{\hat{\mathbb{Z}}^p}\stackrel{g^p}{\longrightarrow}g^p(\Lambda_{\hat{\mathbb{Z}}^p})=(\Lambda_g)_{\hat{\mathbb{Z}}^p}\cong H_1(A,\mathbb{Z})_{\hat{\mathbb{Z}}^p}\cong\prod_{l\neq p}T_l(A).
\]
\end{construction}
Recall that $v$ denotes a place of $E$ over $p$, and $o_{E,(v)}$ the localization of $o_E$ at $v$.
\begin{definition}
\label{IntModelDef}
Let $K^p\subseteq G(\mathbb{A}_f^p)$ be an open and compact subgroup, and let $\tilde{K}^p$ be admissible for $K^p$ such that $\mathscr{M}_{n,d,\tilde{K}^p}$ exists as a scheme. Let $K=K_pK^p$ and $\tilde{K}=\tilde{K}_p\tilde{K}^p$. We define $\mathscr{S}_K(G,X)$ as the normalization of the closure of $\Sh_K(G,X)$ in $\mathscr{M}_{n,d,\tilde{K}^p}\otimes_{\mathbb{Z}_{(p)}} o_{E,(v)}$ with respect to the embedding
\[
\Sh_K(G,X)\hookrightarrow\Sh_{\tilde{K}}(\GSp(V),S^{\pm})\otimes_{\mathbb{Q}}E\hookrightarrow \mathscr{M}_{n,d,\tilde{K}^p}\otimes_{\mathbb{Z}_{(p)}} o_{E,(v)}.
\]
\end{definition}
\begin{remark}
\label{ModelIndepRem}
This definition is indeed independent of the choice of $\tilde{K}^p$: Let $\tilde{K}'^p\subseteq\tilde{K}^p$ be an open and compact subgroup which contains $K^p$ (it is then automatically admissible for $K^p$), then the natural map $\mathscr{M}_{n,d,\tilde{K}'^p}\to\mathscr{M}_{n,d,\tilde{K}^p}$ is finite and there is a commutative diagram
\[
\begin{xy}
\xymatrix{
\Sh_K(G,X)\ar@^{(->}[r] \ar@^{(->}[rd] & \mathscr{M}_{n,d,\tilde{K}'^p}\otimes_{\mathbb{Z}_{(p)}}o_{E,(v)} \ar[d]\\
& \mathscr{M}_{n,d,\tilde{K}^p}\otimes_{\mathbb{Z}_{(p)}}o_{E,(v)}\ .
}
\end{xy}
\]
Let $Z$ be a component of $\Sh_K(G,X)$, and denote by $\overline{Z}'$ and $\overline{Z}$ the closures in the $o_{E,(v)}$-schemes on the right hand side of the diagram respectively. The induced map $\overline{Z}'\to\overline{Z}$ is finite and dominant, and is an isomorphism at the generic points. Hence the corresponding map of the respective normalizations is an isomorphism.
\end{remark}
By definition, for every $K^p$ the choice of an admissible $\tilde{K}^p$ gives a natural map $\mathscr{S}_K(G,X)\to \mathscr{M}_{n,d,\tilde{K}^p}\otimes_{\mathbb{Z}_{(p)}} o_{E,(v)}$. This defines an abelian scheme over $\mathscr{S}_K(G,X)$, which by the preceding remark is independent of the choice of $\tilde{K}^p$ up to isomorphism. If $K'^p\subseteq K^p$, then we have a natural map $\mathscr{S}_{K'^p,\tilde{K}'^p}(G,X)\to\mathscr{S}_{K^p,\tilde{K}^p}(G,X)$, obtained by choosing suitable admissible subgroups $\tilde{K}'^p\subseteq\tilde{K}^p$.
\begin{theorem}[Kisin, \cite{Ki1} Thm. 2.3.8.]
\label{CanModelThm}
If $p=2$, assume that $G^{\mathrm{ad}}$ has no factor of Dynkin type $B$, and that, for each $K^p$, the dual of each abelian variety associated to a point on the special fiber of $\mathscr{S}_K(G,X)$ has a connected $p$-divisible group.\\
Then the following hold:
\begin{enumerate}[(i)]
\item $\mathscr{S}_K(G,X)$ is a smooth $o_{E,(v)}$-scheme for each $K^p$.
\item The projective limit $\mathscr{S}_{K_p}(G,X):=\varprojlim_{K^p}\mathscr{S}_K(G,X)$ is an integral canonical model of $\Sh_{K_p}(G,X)$ over $o_{E,(v)}$ in the sense of Def. \ref{CanModelDef}.
\end{enumerate}
\end{theorem}
In particular, $\mathscr{S}_{K_p}(G,X)$ and hence also $\mathscr{S}_K(G,X)=\mathscr{S}_{K_p}(G,X)/K^p$ (for $K^p$ sufficiently small) depends neither on the choice of the embedding $(G,X)\hookrightarrow(\GSp(V),S^{\pm})$ nor on the choices made during the construction.
\subsection{Tensors on the de Rham cohomology}
\label{dRTensorSec}
Although in general we do not have an interpretation of the integral models $\mathscr{S}_K(G,X)$ as moduli spaces of abelian schemes with additional structures, each model is by construction naturally endowed with an abelian scheme on it, and the tensors $s$ which define the group $\mathcal{G}$ induce tensors on the de Rham cohomology of this abelian scheme. In this subsection we describe the construction of these tensors and their relation to $s$, still following \cite{Ki1}.
We will systematically consider the tensors $s\subset\Lambda_{\mathbb{Z}_{(p)}}^{\otimes}$ chosen in the last subsection as tensors over $\Lambda^*_{\mathbb{Z}_{(p)}}$ and use the contragredient representation
\[
(\cdot)^{\vee}\colon\GL(\Lambda)\stackrel{\sim}{\longrightarrow}\GL(\Lambda^*),
\]
as discussed in Section \ref{TensorPrep}.
\begin{notation}
In the sequel we will work with a fixed model $\mathscr{S}:=\mathscr{S}_K(G,X)$ associated to some sufficiently small subgroup $K^p\subseteq G(\mathbb{A}_f^p)$ as in Def. \ref{IntModelDef}. We fix an open compact $\tilde{K}^p\subseteq \GSp(V)(\mathbb{A}_f^p)$ which is admissible for $K^p$ in the sense of the last subsection. Note that all the constructions below are in fact \emph{independent} of the choice of $\tilde{K}^p$. In the case $p=2$ we assume that the assumptions of Theorem \ref{CanModelThm} hold, so that $\mathscr{S}$ is smooth.
Let $\mathcal{A}\stackrel{\pi}{\longrightarrow}\mathscr{S}$ be the abelian scheme defined by the natural map $\mathscr{S}\to\mathscr{M}_{n,d,\tilde{K}^p}\otimes_{\mathbb{Z}_{(p)}}o_{E,(v)}$. Let
\[
\mathcal{V}^{\circ}:=H_{\mathrm{dR}}^1(\mathcal{A}/\mathscr{S})\quad \text{and}\quad \mathcal{V}:=H_{\mathrm{dR}}^1(\mathcal{A}\otimes E/\Sh_K(G,X)).
\]
Then $\mathcal{V}^{\circ}$ and $\mathcal{V}$ are locally free modules over $\mathscr{S}$ and $\mathscr{S}\otimes E=\Sh_K(G,X)$ respectively, and $\mathcal{V}=\mathcal{V}^{\circ}\otimes E$. Let $\nabla$ denote the Gau{\ss}-Manin connection on $\mathcal{V}^{\circ}$ resp. $\mathcal{V}$. It is known that the Hodge spectral sequence $E_1^{p,q}=R^q\pi_*(\Omega^p_{\mathcal{A}/\mathscr{S}})\ \Longrightarrow\ H^{p+q}_{\mathrm{dR}}(\mathcal{A}/\mathscr{S})$ degenerates at $E_1$ (\cite{BBM}, 2.5.2.), giving rise to a filtration
\[
\mathcal{V}^{\circ}=H_{\mathrm{dR}}^1(\mathcal{A}/\mathscr{S})\supset\pi_*\Omega^1_{\mathcal{A}/\mathscr{S}}=:\Fil^1\mathcal{V}^{\circ},
\]
the \emph{Hodge filtration} on $\mathcal{V}^{\circ}$.
\end{notation}
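Concretely, the degeneration at $E_1$ means that the filtration has a single nontrivial step: there is a short exact sequence of locally free $\mathcal{O}_{\mathscr{S}}$-modules
\[
0\longrightarrow \pi_*\Omega^1_{\mathcal{A}/\mathscr{S}}\longrightarrow H^1_{\mathrm{dR}}(\mathcal{A}/\mathscr{S})\longrightarrow R^1\pi_*\mathcal{O}_{\mathcal{A}}\longrightarrow 0,
\]
in which $\Fil^1\mathcal{V}^{\circ}$ is of rank equal to the relative dimension of $\mathcal{A}$ over $\mathscr{S}$, and $\mathcal{V}^{\circ}$ is of twice that rank.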
Let $E'|E$ be any field extension which admits an embedding into $\mathbb{C}$, and let $\xi\in\mathscr{S}(E')$. Let $\overline{E'}$ be an algebraic closure of $E'$, choose an embedding $\overline{E'}\hookrightarrow\mathbb{C}$. We denote by $\bar{\xi}$ and $\xi_{\mathbb{C}}$ the $\overline{E'}$-valued and $\mathbb{C}$-valued points corresponding to $\xi$. From the embedding $\Sh_K(G,X)\hookrightarrow\mathscr{M}_{n,d,\tilde{K}^p}$ used in Constr. \ref{ModuliConstr} we get a natural isomorphism $V\simeq H_1(\mathcal{A}_{\xi_{\mathbb{C}}},\mathbb{Q})$. The dual of this isomorphism maps $s_{\mathbb{Q}}\subset(V^*)^{\otimes}$ to a set of tensors over $H^1(\mathcal{A}_{\xi_{\mathbb{C}}},\mathbb{Q})$, and using the comparison isomorphisms
\[
H^1(\mathcal{A}_{\xi_{\mathbb{C}}},\mathbb{Q})_{\mathbb{C}}\cong H^1_{\mathrm{dR}}(\mathcal{A}_{\xi_{\mathbb{C}}}/\mathbb{C}),\quad H^1(\mathcal{A}_{\xi_{\mathbb{C}}},\mathbb{Q})_{\mathbb{Q}_l}\cong H^1_{\acute{e}t}(\mathcal{A}_{\xi_{\mathbb{C}}},\mathbb{Q}_l)\cong H^1_{\acute{e}t}(\mathcal{A}_{\bar{\xi}},\mathbb{Q}_l)
\]
we obtain tensors $s_{\mathrm{dR},\xi}$ on the algebraic de Rham cohomology of $\mathcal{A}_{\xi_{\mathbb{C}}}$ and $s_{\acute{e}t,l,\xi}$ on the $l$-adic {\'e}tale cohomology of $\mathcal{A}_{\bar{\xi}}$ for every prime number $l$. By a result of Deligne (\cite{De2}, 2.11.), the family $(s_{\mathrm{dR},\xi}, (s_{\acute{e}t,l,\xi}))$ is an absolute Hodge cycle (see loc.cit. \S 2 for the definition of Hodge cycles and absolute Hodge cycles).
\begin{proposition}[\cite{Ki1}, 2.2.1., 2.2.2.]
\label{dRgeneric}
\begin{enumerate}[(i)]
\item For every $\xi\in\mathscr{S}(E')$ as above the tensors $s_{\mathrm{dR},\xi}$ are defined over $H^1_{\mathrm{dR}}(\mathcal{A}_{\xi}/E')$ and the tensors $s_{\acute{e}t,l,\xi}$ are $\mathrm{Gal}(\overline{E'}|E')$-invariant for each $l$.
\item There exist global sections $s_{\mathrm{dR}}\subset\mathcal{V}^{\otimes}$ defined over $E$, which are horizontal with respect to the Gau{\ss}-Manin connection $\nabla$, such that the pullback of $s_{\mathrm{dR}}$ to any $\xi\in\mathscr{S}(E')$ as above equals the tensors $s_{\mathrm{dR},\xi}\subset(H^1_{\mathrm{dR}}(\mathcal{A}_{\xi}/E'))^{\otimes}$.
\end{enumerate}
\end{proposition}
The extension of the tensors $s_{\mathrm{dR}}$ to sections of $(\mathcal{V}^{\circ})^{\otimes}$ relies on the following pointwise construction: Let $k$ be a perfect field of finite transcendence degree over $\mathbb{F}_p$. Let $W(k)$ be the Witt ring over $k$ and let $L(k):=\mathrm{Frac}(W(k))$. We consider a triple $(\tilde{x},\xi,x)$, where $\tilde{x}$ is a $W(k)$-valued point of $\mathscr{S}$ and $\xi\in\mathscr{S}(L(k))$, $x\in\mathscr{S}(k)$ are the corresponding induced points.
Let $\mathbb{D}_x$ be the contravariant Dieudonn{\'e} module of the $p$-divisible group of $\mathcal{A}_x$. Recall that $\mathbb{D}_x$ is a free $W(k)$-module together with a $\sigma$-linear map $F$ and a $\sigma^{-1}$-linear map $V$ such that $FV=p=VF$. We have canonical isomorphisms
\[
H^1_{\mathrm{dR}}(\mathcal{A}_{\tilde{x}}/W(k))\cong H^1_{\mathrm{cris}}(\mathcal{A}_x/W(k))\cong\mathbb{D}_x.
\]
By our assumption on $k$, the field $L(k)$ can be embedded into $\mathbb{C}$. The choice of an embedding $\overline{L(k)}\hookrightarrow\mathbb{C}$ hence yields an absolute Hodge cycle $(s_{\mathrm{dR},\xi}, (s_{\acute{e}t,l,\xi}))$ as above.
On the other hand, there is also an isomorphism
\[
\Lambda_{\mathbb{Z}_p}\stackrel{\sim}{\longrightarrow}H_1(\mathcal{A}_{\xi_{\mathbb{C}}},\mathbb{Z})_{\mathbb{Z}_p}\cong T_p(\mathcal{A}_{\xi_{\mathbb{C}}})\cong T_p(\mathcal{A}_{\bar{\xi}}):
\]
With the notations of Constr. \ref{ModuliConstr}, if the $\mathbb{C}$-valued point $\xi_{\mathbb{C}}$ corresponds to the element $[h,g]\in\Sh_{\tilde{K}}(\GSp(V),S^{\pm})(\mathbb{C})$, then the first arrow is given by $g_p$. Dualizing this isomorphism, and keeping track of the $\mathrm{Gal}(\overline{L(k)}|L(k))$-action on the right hand side, yields
\[
\Lambda^*_{\mathbb{Z}_p}\simeq T(\mathcal{A}_{\bar{\xi}})^*(-1)\cong H_{\acute{e}t}^1(\mathcal{A}_{\bar{\xi}},\mathbb{Z}_p),
\]
which sends the tensors $s_{\mathbb{Z}_p}\subset(\Lambda^*_{\mathbb{Z}_p})^{\otimes}$ to tensors $s_{\acute{e}t,\xi}^{\circ}$ over $H_{\acute{e}t}^1(\mathcal{A}_{\bar{\xi}},\mathbb{Z}_p)$. Since all the isomorphisms involved are compatible, the base change of $s_{\acute{e}t,\xi}^{\circ}$ to tensors over $H_{\acute{e}t}^1(\mathcal{A}_{\bar{\xi}},\mathbb{Q}_p)$ is exactly the $p$-adic component $s_{\acute{e}t,p,\xi}$ of the absolute Hodge cycle defined above. So Prop. \ref{dRgeneric}(i) implies that the $s_{\acute{e}t,\xi}^{\circ}$ are invariant under the action of $\mathrm{Gal}(\overline{L(k)}|L(k))$. Now it follows from Kisin's theory of crystalline representations and $\mathfrak{S}$-modules that the images of these tensors under the $p$-adic comparison isomorphism
\[
H_{\acute{e}t}^1(\mathcal{A}_{\bar{\xi}},\mathbb{Z}_p)\otimes_{\mathbb{Z}_p}B_{\mathrm{cris}}\stackrel{\sim}{\longrightarrow} H^1_{\mathrm{cris}}(\mathcal{A}_x/W(k))\otimes_{W(k)}B_{\mathrm{cris}} \cong \mathbb{D}_x\otimes_{W(k)}B_{\mathrm{cris}}
\]
are $F$-invariant and are already defined over $\mathbb{D}_x$ (\cite{Ki1}, 1.3.6.(1), 1.4.3.(1)). Using the identification $\mathbb{D}_x\cong H_{\mathrm{dR}}^1(\mathcal{A}_{\tilde{x}}/W(k))$ we thus obtain tensors $s_{\mathrm{dR},\tilde{x}}^{\circ}$ over $\mathcal{V}^{\circ}_{\tilde{x}}$.
\begin{proposition}
\label{dRExtProp}
\begin{enumerate}[(i)]
\item The tensors $s_{\mathrm{dR}}$ of Prop. \ref{dRgeneric} extend (uniquely) to global sections $s_{\mathrm{dR}}^{\circ}\subset(\mathcal{V}^{\circ})^{\otimes}$ which are horizontal with respect to $\nabla$.
\item Let $(\tilde{x},\xi,x)$ be a triple as considered above. Then the tensors $s_{\mathrm{dR},\tilde{x}}^{\circ}\subset(\mathcal{V}^{\circ}_{\tilde{x}})^{\otimes}$ which we obtained via the $p$-adic comparison isomorphism in the above construction are equal to the pullback of $s_{\mathrm{dR}}^{\circ}$ to $\tilde{x}$.
\item In the situation of (ii), assume in addition that $k$ is finite or algebraically closed. Then there is a $W(k)$-linear isomorphism
\[
(\Lambda^*_{W(k)},s_{W(k)})\stackrel{\sim}{\longrightarrow}(\mathcal{V}_{\tilde{x}}^{\circ},s_{\mathrm{dR},\tilde{x}}^{\circ}).
\]
Further, if $\beta$ is any such isomorphism, then there is a cocharacter $\lambda$ of $\mathcal{G}_{W(k)}$ such that the filtration $\Lambda^*_{W(k)}\supset\beta^{-1}(\Fil^1\mathcal{V}_{\tilde{x}}^{\circ})$ is induced by $(\cdot)^{\vee}\circ\lambda$.
\end{enumerate}
\end{proposition}
\begin{proof}
(i) The existence of $s_{\mathrm{dR}}^{\circ}$ is shown in the proof of (\cite{Ki1}, 2.3.9.). These extensions are automatically unique, since $\mathscr{S}$ is in particular an integral scheme and $\mathcal{V}^{\circ}$ is locally free. By the same reasoning it follows that the $s_{\mathrm{dR}}^{\circ}$ are horizontal with respect to $\nabla$, as they are so over $\mathscr{S}\otimes E$.\\
(ii) If $x$ is a closed point of $\mathscr{S}$, then this is immediately clear from the definition of $s_{\mathrm{dR}}^{\circ}$ in (\cite{Ki1}, 2.3.9.). In general, as the equality of tensors in $(\mathcal{V}^{\circ}_{\tilde{x}})^{\otimes}$ may be tested over $\xi$, the statement amounts to the fact that the $p$-adic comparison isomorphism $H_{\acute{e}t}^1(\mathcal{A}_{\bar{\xi}},\mathbb{Q}_p)\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}}\simeq H^1_{\mathrm{dR}}(\mathcal{A}_{\xi}/L(k)) \otimes_{L(k)}B_{\mathrm{dR}}$ maps the $p$-adic {\'e}tale component of the absolute Hodge cycle $(s_{\mathrm{dR},\xi}, (s_{\acute{e}t,l,\xi}))$ to its de Rham component. If $\mathcal{A}_{\xi}$ can be defined over a number field, this is a theorem of Blasius and Wintenberger (\cite{Bl1}, 0.3.), and Vasiu (\cite{Va1}, 5.2.16.) observed that their result can also be extended to our more general situation.\\
(iii) Let $\widetilde{\mathbb{D}}$ be the contravariant crystal of the $p$-divisible group $\mathcal{A}_{\tilde{x}}[p^{\infty}]$ over $W(k)$. Then we have the natural identification
\[
\widetilde{\mathbb{D}}(W(k))\cong\mathbb{D}_x\cong \mathcal{V}_{\tilde{x}}^{\circ}
\]
which is compatible with the Hodge filtrations on both sides, and by (ii) the tensors $s_{\mathrm{dR},\tilde{x}}^{\circ}$ get identified with the images of $s_{\acute{e}t,\xi}^{\circ}$ under the $p$-adic comparison isomorphism. So the first statement of (iii) follows directly from (\cite{Ki1}, 1.4.3. (2)+(3)), applied to the $p$-divisible group $\mathcal{A}_{\tilde{x}}[p^{\infty}]$ and the tensors $s_{\acute{e}t,\xi}^{\circ}$. Likewise, the proof of (4) in loc.cit. (which proves more than what is claimed) shows that the filtration $\Lambda^*_{W(k)}\supset\beta^{-1}(\Fil^1\mathcal{V}_{\tilde{x}}^{\circ})$ is induced by a cocharacter of the subgroup of $\GL(\Lambda^*_{W(k)})$ which is defined by the tensors $s_{W(k)}\subseteq(\Lambda^*_{W(k)})^{\otimes}$. As this subgroup is exactly the image of $\mathcal{G}_{W(k)}$ under $(\cdot)^{\vee}$, the last claim follows.
\end{proof}
We remark that the existence of the tensors $s_{\mathrm{dR}}^{\circ}$ is closely related to the proof of Theorem \ref{CanModelThm}: In fact, the proof of the smoothness of $\mathscr{S}$ in (\cite{Ki1}, 2.3.5.) uses a variant of Prop. \ref{dRExtProp}(iii) as a main ingredient, and in turn the arguments given in that proof allow the construction of $s_{\mathrm{dR}}^{\circ}$ in (\cite{Ki1}, 2.3.9.).
\begin{corollary}
\label{CrysTensors}
Let $x\in\mathscr{S}(k)$, where $k$ is either a finite extension of $\mathbb{F}_p$ or algebraically closed of finite transcendence degree over $\mathbb{F}_p$, and let $\mathbb{D}_x$ be the contravariant Dieudonn{\'e} module of the $p$-divisible group $\mathcal{A}_x[p^{\infty}]$. Let $\tilde{x}\in\mathscr{S}(W(k))$ be a lift of $x$.\\
Then the images $s_{\mathrm{cris},x}\subset(\mathbb{D}_x)^{\otimes}$ of $s_{\mathrm{dR},\tilde{x}}^{\circ}$ via the identification $H_{\mathrm{dR}}^1(\mathcal{A}_{\tilde{x}}/W(k))\cong\mathbb{D}_x$ are independent of the choice of $\tilde{x}$. Further, the tensors $s_{\mathrm{cris},x}$ are $F$-invariant, and there is a $W(k)$-linear isomorphism $(\Lambda_{W(k)}^*,s_{W(k)})\simeq(\mathbb{D}_x,s_{\mathrm{cris},x})$.
\end{corollary}
\begin{proof}
This follows immediately from (i) and (ii) of the last proposition.
\end{proof}
\begin{remark}
\label{PELRem2}
In the special case of a PEL-type Shimura variety it is known that one can choose the lattice $\Lambda\subset V$ in such a way that $\Lambda_{\mathbb{Z}_p}$ is self-dual with respect to $\psi$ and such that there is a maximal order $o_B$ of $(B_{\mathbb{Q}_p},*)$ which acts on $\Lambda_{\mathbb{Z}_p}$; the tensors $s\subset\Lambda_{\mathbb{Z}_p}^{\otimes}$ then encode the action of $o_B$ on $\Lambda_{\mathbb{Z}_p}$. Due to the interpretation of $\mathscr{S}$ as a moduli space of abelian schemes with additional structure, for every $x\in\mathscr{S}(k)$ as in Cor. \ref{CrysTensors} there is an action of $o_B$ on the $p$-divisible group $\mathcal{A}_x[p^{\infty}]$. So in this case Cor. \ref{CrysTensors} is an analogue of the results in (\cite{VW}, \S2) on $p$-divisible groups with PEL-structure. Note however that the authors of that article use covariant Dieudonn{\'e} theory.
\end{remark}
\section{Stratifications of the special fiber}
Let $\kappa(v)$ be the residue class field of $o_{E,(v)}$, and let $\overline{\mathbb{F}}$ be a fixed algebraic closure of $\mathbb{F}_p$. In this section we define the Newton stratification and the Ekedahl-Oort stratification on the special fiber $\mathscr{S}\otimes\kappa(v)$ of $\mathscr{S}$ resp. on $\mathscr{S}\otimes\overline{\mathbb{F}}$. These stratifications arise by considering the isocrystals resp. Dieudonn{\'e} spaces associated to $\mathcal{A}_x$ for points $x$ as in Cor. \ref{CrysTensors}, while keeping track of the tensor structure. Just as in the Siegel case and the PEL case, the stratifications are parametrized by combinatorial data which only depend on the Shimura datum $(G,X)$.\\
We start by introducing some group theoretic notions: The reductive group scheme $\mathcal{G}$ over $\mathbb{Z}_p$ is quasisplit and split over a finite {\'e}tale extension of $\mathbb{Z}_p$. We fix a Borel subgroup $\mathcal{B}\subseteq \mathcal{G}$ and a maximal torus $\mathcal{T}\subseteq \mathcal{B}$ which are both defined over $\mathbb{Z}_p$. Let $(X^*(\mathcal{T}),\Phi,X_*(\mathcal{T}),\Phi^{\vee})$ be the root datum associated to $(\mathcal{G},\mathcal{T})$ over $\mathcal{O}$, and let $W$ be the associated Weyl group. The choice of $\mathcal{B}$ determines a set $\Phi^+\subset\Phi$ of positive roots and a set $S\subset W$ of simple reflections which give $(W,S)$ the structure of a finite Coxeter group. As usual, we call a cocharacter $\lambda\in X_*(\mathcal{T})$ dominant, if $\langle \alpha,\lambda\rangle\geq0$ for all $\alpha\in\Phi^+$ (here $\langle\cdot,\cdot\rangle$ is the natural pairing between $X^*(\mathcal{T})$ and $X_*(\mathcal{T})$). The group $W$ naturally acts on $X_*(\mathcal{T})$, and the dominant cocharacters form a full set of representatives for the orbits $W\setminus X_*(\mathcal{T})$.
For any local, strictly henselian $\mathbb{Z}_p$-algebra $R$ we have a realization of this data with respect to $\mathcal{G}_R$. In particular $W\cong N_{\mathcal{G}}(\mathcal{T})(R)/\mathcal{T}(R)$ and the inclusion $X_*(\mathcal{T})\cong\Hom_R(\mathbb{G}_{m,R},\mathcal{T}_R)\subseteq\Hom_R(\mathbb{G}_{m,R},\mathcal{G}_R)$ induces a bijection between the quotient $W\setminus X_*(\mathcal{T})$ and the set of conjugacy classes of cocharacters for $\mathcal{G}_R$. If $R\to R'$ is a homomorphism of local and strictly henselian $\mathbb{Z}_p$-algebras, then base change to $R'$ yields a bijection between the sets of conjugacy classes of cocharacters for $\mathcal{G}_R$ and $\mathcal{G}_{R'}$.\\
Putting $R=\mathbb{C}$, we see that the conjugacy class $[\nu^{-1}]$ from Def. \ref{CocharDef} determines an element of $W\setminus X_*(\mathcal{T})$. On the other hand, putting $R=W(k)$ for some algebraically closed field $k$ of characteristic $p$, we obtain an action of the Frobenius $\sigma$ on $X_*(\mathcal{T})$, $W$ and $\Phi$. Since $\mathcal{B}$ and $\mathcal{T}$ are defined over $\mathbb{Z}_p$, this action leaves $S$, $\Phi^+$ and the set of dominant cocharacters stable.
\begin{definition}
\label{muDef}
We define $\mu\in X_*(\mathcal{T})$ as the unique dominant element such that $\sigma^{-1}(\mu)\in[\nu^{-1}]$.
\end{definition}
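For orientation, here is what this definition gives in the Siegel case; the identification of cocharacters of the diagonal torus with integer tuples is a notational assumption. For $(G,X)=(\GSp(V),S^{\pm})$ with $\dim V=2n$ the reflex field is $\mathbb{Q}$, the class $[\nu^{-1}]$ is minuscule, and one may take
\[
\mu=(\underbrace{1,\dots,1}_{n},\underbrace{0,\dots,0}_{n})\in X_*(\mathcal{T}),
\]
which is fixed by $\sigma$, so that the twist by $\sigma^{-1}$ is invisible here; for a general Shimura datum $\mu$ need not be $\sigma$-stable.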
\subsection{The Dieudonn{\'e} module at a geometric point}
\label{StratPrepSec}
We will use the notations and considerations of Section \ref{sigmaPrep}, and especially apply them to the case that $M_0=\Lambda_{\mathbb{Z}_p}$ or $M_0=\Lambda_{\mathbb{F}_p}$. Recall that we use the contragredient representation $(\cdot)^{\vee}\colon \GL(\Lambda)\to\GL(\Lambda^*)$ to let $\mathcal{G}(R)$ act on $\Lambda^*_R$.
\begin{lemma}
\label{CocharLem1}
Let $k$ be a perfect field of finite transcendence degree over $\mathbb{F}_p$, let $\bar{k}$ be an algebraic closure of $k$. Let $\tilde{x}\in\mathscr{S}(W(k))$, and suppose that there exists an isomorphism
\[
\beta\colon(\Lambda^*_{W(k)},s_{W(k)})\stackrel{\sim}{\longrightarrow}(\mathcal{V}_{\tilde{x}}^{\circ},s_{\mathrm{dR},\tilde{x}}^{\circ}).
\]
If $\lambda$ is a cocharacter of $\mathcal{G}_{W(k)}$ such that $(\cdot)^{\vee}\circ\lambda$ induces on $\Lambda^*_{W(k)}$ the filtration $\Lambda^*_{W(k)}\supset\beta^{-1}(\Fil^1\mathcal{V}_{\tilde{x}}^{\circ})$, then $\lambda_{W(\bar{k})}\in[\nu^{-1}]$.
\end{lemma}
\begin{proof}
Let $\xi\in\mathscr{S}(L(k))$ be the generic point of $\tilde{x}$. Let $\overline{L(k)}$ be an algebraic closure of $L(k)$, then we have $W(\bar{k})\hookrightarrow\overline{L(k)}$. So if we choose an embedding $\overline{L(k)}\hookrightarrow\mathbb{C}$, it suffices to show that $\lambda_{\mathbb{C}}\in[\nu^{-1}]$.
Let $A:=\mathcal{A}_{\xi_{\mathbb{C}}}$. There is a pair $(h,g)\in X\times G(\mathbb{A}_f)$ such that
\[
\xi_{\mathbb{C}}=[h,g]\in\Sh_K(G,X)=G(\mathbb{Q})\setminus X\times G(\mathbb{A}_f)/K.
\]
If $V_{\mathbb{C}}=V^{(-1,0)}\oplus V^{(0,-1)}$ is the Hodge decomposition given by $h$ then, using the notation from Constr. \ref{ModuliConstr}, we have $A\simeq V^{(-1,0)}/\Lambda_g$, and in turn there is an isomorphism $H_1(A,\mathbb{C})\simeq(\Lambda_g)_{\mathbb{C}} =V_{\mathbb{C}}$. It follows from the construction of the Riemann correspondence for complex abelian varieties that the dual isomorphism
\[
\alpha_{\mathbb{C}}\colon V^*_{\mathbb{C}}\stackrel{\sim}{\longrightarrow}H^1(A,\mathbb{C})\cong H^1_{\mathrm{dR}}(A/\mathbb{C})=\mathcal{V}_{\xi_{\mathbb{C}}}
\]
identifies the Hodge decomposition $H^1_{\mathrm{dR}}(A/\mathbb{C})=H^{(1,0)}\oplus H^{(0,1)}$ with the decomposition $V^*_{\mathbb{C}}=(V^{(-1,0)})^*\oplus (V^{(0,-1)})^*$ (see e.g. \cite{Mi3}, 6.10., 7.5.), and a direct computation shows that the cocharacter $(\cdot)^{\vee}\circ\nu_h^{-1}$ (with $\nu_h$ as in Def. \ref{CocharDef}) acts on $(V^{(-1,0)})^*$ with weight $1$ and on $(V^{(0,-1)})^*$ with weight $0$, in other words,
\[
(\nu_h(z)^{-1})^{\vee}|_{(V^{(-1,0)})^*}=z,\qquad (\nu_h(z)^{-1})^{\vee}|_{(V^{(0,-1)})^*}=1.
\]
This means that $(\cdot)^{\vee}\circ\nu_h^{-1}$ induces on $V^*_{\mathbb{C}}$ the filtration
\[
V^*_{\mathbb{C}}\supset(V^{(-1,0)})^*=\alpha_{\mathbb{C}}^{-1}(H^{(1,0)})=\alpha_{\mathbb{C}}^{-1}(\Fil^1\mathcal{V}_{\xi_{\mathbb{C}}}),
\]
and further by construction of $s_{\mathrm{dR}}$ the isomorphism $\alpha_{\mathbb{C}}$ identifies $s_{\mathbb{C}}$ with $s_{\mathrm{dR},\xi_{\mathbb{C}}}$.
Now the isomorphism $\alpha_{\mathbb{C}}^{-1}\circ\beta_{\mathbb{C}}\colon V^*_{\mathbb{C}}\to V^*_{\mathbb{C}}$ fixes the tensors $s_{\mathbb{C}}$, which means that $\alpha_{\mathbb{C}}^{-1}\circ\beta_{\mathbb{C}}=g_{\mathbb{C}}^{\vee}$ for some $g_{\mathbb{C}}\in G(\mathbb{C})$. Note that we have $g_{\mathbb{C}}^{\vee}(\beta_{\mathbb{C}}^{-1}(\Fil^1\mathcal{V}_{\xi_{\mathbb{C}}}))=\alpha_{\mathbb{C}}^{-1}(\Fil^1\mathcal{V}_{\xi_{\mathbb{C}}})$. Conjugating $\lambda_{\mathbb{C}}$ with $g_{\mathbb{C}}$, we may therefore assume that $(\cdot)^{\vee}\circ\lambda_{\mathbb{C}}$ and $(\cdot)^{\vee}\circ\nu_h^{-1}$ both induce the same filtration on $V^*_{\mathbb{C}}$. Let $P$ be the stabilizer of this filtration in $G_{\mathbb{C}}$, that is, the subgroup of all $\tilde{g}\in G_{\mathbb{C}}$ such that $\tilde{g}^{\vee}$ leaves the filtration stable. Then $P\subseteq G_{\mathbb{C}}$ is a parabolic subgroup, and both $\lambda_{\mathbb{C}}$ and $\nu_h^{-1}$ factor via $P$. Since all maximal tori of $P$ are conjugate over $\mathbb{C}$, after conjugation by an element of $P(\mathbb{C})$ we may further assume that both cocharacters factor via the same (automatically split) maximal torus of $P$. But this implies that $\lambda_{\mathbb{C}}$ and $\nu_h^{-1}$ also induce (via $(\cdot)^{\vee}$) the same grading on $V^*_{\mathbb{C}}$, and hence that they are equal.
\end{proof}
\begin{construction}
\label{LinConstr}
Let $k$ be algebraically closed of finite transcendence degree over $\mathbb{F}_p$. Let $x\in\mathscr{S}(k)$. By Cor. \ref{CrysTensors} the tensors $s_{\mathrm{dR}}^{\circ}$ induce $F$-invariant tensors $s_{\mathrm{cris},x}\subset\mathbb{D}_x^{\otimes}$, and we find an isomorphism $\beta\colon (\Lambda^*_{W(k)},s_{W(k)})\stackrel{\sim}{\rightarrow}(\mathbb{D}_x,s_{\mathrm{cris},x})$. To $\beta$ we attach an element $g_{\beta}\in G(L(k))$ as follows:
Transporting $F$ via $\beta$, we obtain an injective $\sigma$-linear map $F_{\beta}:=\beta^{-1}\circ F\circ\beta$ on $\Lambda^*_{W(k)}=(\Lambda^*_{\mathbb{Z}_p})\otimes_{\mathbb{Z}_p}W(k)$. We can write it uniquely as $F_{\beta}=F_{\beta}^{\mathrm{lin}}\circ (1\otimes\sigma)$, where $F_{\beta}^{\mathrm{lin}}$ is by definition an automorphism of $\Lambda^*_{L(k)}$ which fixes the tensors $s_{L(k)}$. Therefore we have $F_{\beta}^{\mathrm{lin}}=g_{\beta}^{\vee}$ for a unique element $g_{\beta}$ of $G(L(k))$.\\
We can also summarize the construction differently: For each $\beta$ the associated $g_{\beta}\in G(L(k))$ is the unique element such that the diagram
\[
\begin{xy}
\xymatrix{
\mathbb{D}_x^{(\sigma)}\ar[rr]^{F^{\mathrm{lin}}} & & \mathbb{D}_x\\
\Lambda^*_{W(k)}\ar[rr]^{g_{\beta}^{\vee}} \ar[u]^{\beta^{(\sigma)}} & & \Lambda^*_{W(k)}\ar[u]_{\beta}
}
\end{xy}
\]
commutes (here we make the usual identification $(\Lambda^*_{W(k)})^{(\sigma)}\cong\Lambda^*_{W(k)}$).
\end{construction}
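In concrete terms the decomposition $F_{\beta}=F_{\beta}^{\mathrm{lin}}\circ (1\otimes\sigma)$ is the usual passage from a $\sigma$-linear map to its matrix: if $(e_i)$ is a $\mathbb{Z}_p$-basis of $\Lambda^*_{\mathbb{Z}_p}$ and $F_{\beta}(e_j)=\sum_i a_{ij}e_i$ with $a_{ij}\in W(k)$, then
\[
F_{\beta}\Big(\sum_j c_je_j\Big)=\sum_{i,j}a_{ij}\,\sigma(c_j)\,e_i,\qquad F_{\beta}^{\mathrm{lin}}=(a_{ij}),
\]
and $(a_{ij})$ is invertible over $L(k)$ since $FV=VF=p$ implies $p\cdot\Lambda^*_{W(k)}\subseteq\im(F_{\beta})$.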
\begin{lemma}
\label{LinearizationLem}
Let $k$ be as in Constr. \ref{LinConstr}.
\begin{enumerate}[(i)]
\item Let $x\in\mathscr{S}(k)$, and let $\beta\colon(\Lambda^*_{W(k)},s_{W(k)})\stackrel{\sim}{\longrightarrow}(\mathbb{D}_x,s_{\mathrm{cris},x})$ be an isomorphism. Then the isomorphisms between $(\Lambda^*_{W(k)},s_{W(k)})$ and $(\mathbb{D}_x,s_{\mathrm{cris},x})$ are exactly those of the form $\beta'=\beta\circ h^{\vee}$ for $h\in\mathcal{G}(W(k))$; in this case $h$ is uniquely determined, and we have $g_{\beta'}=h^{-1}g_{\beta}\sigma(h)$.
\item For every $x\in\mathscr{S}(k)$ and for every isomorphism $\beta\colon (\Lambda^*_{W(k)},s_{W(k)})\stackrel{\sim}{\longrightarrow}(\mathbb{D}_x,s_{\mathrm{cris},x})$ we have that $g_{\beta}\in\mathcal{G}(W(k))\mu(p)\mathcal{G}(W(k))$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) This is clear.
(ii) By the Cartan decomposition for $G(L(k))$ (see \cite{Ti1}, 3.3.3.) we know that $g_{\beta}$ lies in a double coset $\mathcal{G}(W(k))\eta(p)\mathcal{G}(W(k))$ for a unique dominant cocharacter $\eta\in X_*(\mathcal{T})$. By (i) we may further w.l.o.g. replace $\beta$ by $\beta\circ h^{\vee}$ for a suitable $h\in\mathcal{G}(W(k))$ to achieve that $g_{\beta}\in\mathcal{G}(W(k))\eta(p)$. In order to show that $\eta=\mu$, it suffices to check that the base change of $\sigma^{-1}(\eta)$ to $k$ lies in $[\nu^{-1}]$.
Let $\tilde{x}\in\mathscr{S}(W(k))$ be a lift of $x$, and identify $\mathbb{D}_x\cong\mathcal{V}_{\tilde{x}}^{\circ}$. Let
\[
\Lambda^*_{W(k)}\supset\beta^{-1}(\Fil^1\mathcal{V}_{\tilde{x}}^{\circ})
\]
be the pullback of the Hodge filtration on $\mathcal{V}_{\tilde{x}}^{\circ}$. By Prop. \ref{dRExtProp}(iii) there is a cocharacter $\lambda$ of $\mathcal{G}_{W(k)}$ which induces this filtration, and by Lemma \ref{CocharLem1} we know that $\lambda\in[\nu^{-1}]$.\\
Reducing the whole situation modulo $p$ we obtain the contravariant Dieudonn{\'e} space $(\overline{\mathbb{D}_x}, \overline{F}, \overline{V})$ associated to the $p$-torsion $\mathcal{A}_x[p]$ and the isomorphism
\[
\bar{\beta}\colon\Lambda^*_k\stackrel{\sim}{\longrightarrow}\overline{\mathbb{D}_x}\cong\overline{\mathcal{V}_{\tilde{x}}^{\circ}}=\mathcal{V}_x^{\circ}=H_{\mathrm{dR}}^1(\mathcal{A}_x/k).
\]
By a result of Oda (\cite{Od1}, 5.11.), we have the equality $\Fil^1\mathcal{V}_x^{\circ}=\ker(\overline{F})$, which implies that
\[
\overline{\beta^{-1}(\Fil^1\mathcal{V}_{\tilde{x}}^{\circ})}=\bar{\beta}^{-1}(\ker(\overline{F}))=\ker(\bar{\beta}^{-1}\circ\overline{F}\circ\bar{\beta})=\ker(\overline{F_{\beta}}),
\]
and this filtration is induced via $(\cdot)^{\vee}$ by the reduction $\bar{\lambda}$ of $\lambda$.\\
On the other hand, we may write $g_{\beta}=g_0\eta(p)$ for some $g_0\in\mathcal{G}(W(k))$, therefore
\[
F_{\beta}=g_0^{\vee}\circ\eta(p)^{\vee}\circ(1\otimes\sigma)=(1\otimes\sigma)\circ\sigma^{-1}(g_0)^{\vee}\circ\sigma^{-1}(\eta)(p)^{\vee}.
\]
Let $\Lambda^*_{W(k)}=\bigoplus_{m\in\mathbb{Z}}\Lambda_m^*$ be the grading which is induced by the cocharacter $(\cdot)^{\vee}\circ\sigma^{-1}(\eta)$ on $\Lambda^*_{W(k)}$. The inclusions $p\cdot\Lambda^*_{W(k)}\subseteq\im(F_{\beta})\subseteq\Lambda^*_{W(k)}$ show that we must have $\Lambda^*_m=(0)$ for $m\neq 0,1$, and thus reducing modulo $p$ we find that
\[
\ker(\overline{F_{\beta}})=\ker(\overline{\sigma^{-1}(\eta)(p)^{\vee}})=\overline{\Lambda^*_1}.
\]
This implies that the two cocharacters $(\cdot)^{\vee}\circ\overline{\sigma^{-1}(\eta)}$ and $(\cdot)^{\vee}\circ\bar{\lambda}$ induce the same filtration on $\Lambda^*_k$. Now it follows by the same argument as in the proof of Lemma \ref{CocharLem1} that $\overline{\sigma^{-1}(\eta)}$ and $\bar{\lambda}$ are $\mathcal{G}(k)$-conjugate, which concludes the proof.
\end{proof}
\begin{corollary}
\label{LinearizationCor}
Let $k$ be as in Constr. \ref{LinConstr}, and let $x,x'\in\mathscr{S}(k)$. Let $g_{\beta}, g_{\beta'}\in G(L(k))$ be associated to isomorphisms $\beta$ and $\beta'$ between $(\Lambda^*_{W(k)},s_{W(k)})$ and $(\mathbb{D}_x,s_{\mathrm{cris},x})$ resp. $(\mathbb{D}_{x'},s_{\mathrm{cris},x'})$. Then there is an isomorphism of Dieudonn{\'e} modules $\mathbb{D}_x\simeq\mathbb{D}_{x'}$ which identifies $s_{\mathrm{cris},x}$ with $s_{\mathrm{cris},x'}$ if and only if $g_{\beta'}=hg_{\beta}\sigma(h)^{-1}$ for some $h\in\mathcal{G}(W(k))$.
\end{corollary}
\begin{proof}
By Constr. \ref{LinConstr} the existence of an isomorphism $\mathbb{D}_x\simeq\mathbb{D}_{x'}$ which respects the tensors on both sides is equivalent to the existence of an automorphism $\delta$ of $(\Lambda^*_{W(k)},s_{W(k)})$ such that
\[
g_{\beta'}^{\vee}\circ(1\otimes\sigma)\circ\delta=\delta\circ g_{\beta}^{\vee}\circ(1\otimes\sigma)\tag{*}
\]
Every such automorphism must be of the form $\delta=h^{\vee}$ for a unique $h\in\mathcal{G}(W(k))$, and an easy calculation shows that the property (*) is equivalent to $g_{\beta'}=hg_{\beta}\sigma(h)^{-1}$.
\end{proof}
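Explicitly, the calculation at the end of the proof runs as follows: since $(\cdot)^{\vee}$ is a group isomorphism and $(1\otimes\sigma)\circ h^{\vee}=\sigma(h)^{\vee}\circ(1\otimes\sigma)$ for every $h\in\mathcal{G}(W(k))$, condition (*) with $\delta=h^{\vee}$ becomes
\[
g_{\beta'}^{\vee}\circ\sigma(h)^{\vee}\circ(1\otimes\sigma)=h^{\vee}\circ g_{\beta}^{\vee}\circ(1\otimes\sigma).
\]
Cancelling $(1\otimes\sigma)$ and applying the inverse of $(\cdot)^{\vee}$ yields $g_{\beta'}\sigma(h)=hg_{\beta}$, that is, $g_{\beta'}=hg_{\beta}\sigma(h)^{-1}$.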
\subsection{The Newton stratification}
\label{NewtStratSec}
In order to define the Newton stratification on $\mathscr{S}\otimes\kappa(v)$ we recall some facts on $\sigma$-conjugacy classes:
Let $k$ be algebraically closed of characteristic $p$. We denote by $[g]$ the $\sigma$-conjugacy class of an element $g\in G(L(k))$, and by $B(G)$ the set of all $\sigma$-conjugacy classes in $G(L(k))$. This definition is in fact independent of $k$ in the following sense: If $k'$ is any algebraically closed field of characteristic $p$, then every inclusion $k\subseteq k'$ induces a bijection between the $\sigma$-conjugacy classes of $G(L(k))$ and those of $G(L(k'))$ (see \cite{RR}, 1.3.).
To describe the set $B(G)$ one uses the two maps
\[
\nu_G\colon B(G)\longrightarrow \left(W\setminus X_*(\mathcal{T})_{\mathbb{Q}}\right)^{\langle\sigma\rangle},\qquad \kappa_G\colon B(G)\longrightarrow\pi_1(G)_{\langle\sigma\rangle},
\]
usually called the Newton map and the Kottwitz map of $G$ (see \cite{Ko1}, \cite{RR}). As explained in (\cite{RR}, \S2), the set $(W\setminus X_*(\mathcal{T})_{\mathbb{Q}})^{\langle\sigma\rangle}$ is endowed with a partial order $\preceq$ which generalizes the ``lying above'' order for Newton polygons.
We define
\[
\bar{\mu}:=\frac{1}{r}\sum_{i=0}^{r-1}\sigma^i(\mu)\in (X_*(\mathcal{T})_{\mathbb{Q}})^{\langle\sigma\rangle},
\]
where $r$ is some integer such that $\sigma^r(\mu)=\mu$ (obviously this does not depend on the choice of $r$), and also identify $\bar{\mu}$ with its image in $(W\setminus X_*(\mathcal{T})_{\mathbb{Q}})^{\langle\sigma\rangle}$. Let $\mu^{\natural}\in\pi_1(G)_{\langle\sigma\rangle}$ be the image of $\mu$ under the natural projection
\[
X_*(\mathcal{T})\to\pi_1(G)_{\langle\sigma\rangle}=(X_*(\mathcal{T})/\langle \alpha^{\vee}\mid\alpha^{\vee}\in\Phi^{\vee}\rangle)_{\langle\sigma\rangle}.
\]
\begin{definition}
Let $B(G,\mu):=\big\{b\in B(G)\mid \kappa_G(b)=\mu^{\natural},\ \nu_G(b)\preceq\bar{\mu}\big\}$. We endow $B(G,\mu)\subset B(G)$ with the induced partial order $\preceq$.
\end{definition}
By work of Kottwitz-Rapoport (\cite{KR}), Lucarelli (\cite{Lu}) and Gashi (\cite{Ga}) we know that $B(G,\mu)$ is exactly the image of the double coset $\mathcal{G}(W(k))\mu(p)\mathcal{G}(W(k))$ in $B(G)$ (for any algebraically closed field $k$ of char. $p$). We summarize some combinatorial properties of this set in the following remark:
\begin{remark}
\label{NewtParRem}
\begin{enumerate}[(1)]
\item $B(G,\mu)$ is a finite set (see \cite{RR}, 2.4.).
\item The set $B(G,\mu)$ contains a unique maximal element $b_{\mathrm{max}}$ with respect to $\preceq$, namely the $\sigma$-conjugacy class $[\mu(p)]$: In fact, the characterization of the Newton map in (\cite{Ko1}, 4.3.) shows that $\bar{\mu}$ is nothing but the image of $[\mu(p)]$ under $\nu_G$. On the other hand, it follows directly from the definition of $\kappa_G$ that $\mu^{\natural}$ is the image of $[\mu(p)]$ under $\kappa_G$. Therefore we have $[\mu(p)]\in B(G,\mu)$, and clearly the inequality $b\preceq [\mu(p)]$ holds for all $b\in B(G,\mu)$.
\item $B(G,\mu)$ contains a unique basic element $b_{\mathrm{bas}}$ (it corresponds to $\mu^{\natural}$ under the bijection between basic $\sigma$-conjugacy classes and $\pi_1(G)_{\langle\sigma\rangle}$, see \cite{RR}, 1.15.), which is also the unique minimal element with respect to $\preceq$ in $B(G,\mu)$.
\end{enumerate}
\end{remark}
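In the split example $G=\mathrm{GL}_n$ with minuscule $\mu=(1^{(d)},0^{(n-d)})$, the set $B(G,\mu)$ is classically in bijection with the concave lattice polygons from $(0,0)$ to $(n,d)$ lying on or below the Hodge polygon of $\mu$, ordered by ``lying above''. The following sketch (illustrative only; all function names are ours) enumerates these polygons:

```python
from itertools import combinations
from fractions import Fraction

def newton_polygons(n, d):
    """Enumerate B(GL_n, mu) for minuscule mu = (1^d, 0^(n-d)) as concave
    lattice polygons from (0,0) to (n,d) lying on or below the Hodge polygon.
    Vertices are lattice points; slopes strictly decrease between segments."""
    hodge = lambda x: min(x, d)  # concave Hodge polygon of mu
    # candidate break points: lattice points between the chord and the Hodge polygon
    interior = [(x, y) for x in range(1, n) for y in range(0, d + 1)
                if Fraction(d * x, n) <= y <= hodge(x)]
    polygons = set()
    for k in range(len(interior) + 1):
        for mids in combinations(interior, k):
            pts = [(0, 0), *mids, (n, d)]
            if any(b[0] <= a[0] for a, b in zip(pts, pts[1:])):
                continue  # x-coordinates of vertices must strictly increase
            slopes = [Fraction(b[1] - a[1], b[0] - a[0])
                      for a, b in zip(pts, pts[1:])]
            if all(s > t for s, t in zip(slopes, slopes[1:])):  # strict concavity
                # record the polygon by its slopes, repeated with multiplicity
                expanded = tuple(s for s, (a, b) in zip(slopes, zip(pts, pts[1:]))
                                 for _ in range(b[0] - a[0]))
                polygons.add(expanded)
    return sorted(polygons, reverse=True)

# GL_2, mu = (1,0): only the ordinary and the supersingular polygon
print(len(newton_polygons(2, 1)))  # -> 2
# GL_4, mu = (1,1,0,0): five sigma-conjugacy classes
print(len(newton_polygons(4, 2)))  # -> 5
```

Here the maximal element (the first in the sorted list) is the Hodge polygon itself, matching $b_{\mathrm{max}}=[\mu(p)]$, and the minimal one is the straight line, matching the basic element.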
Let us now define the Newton stratification on $\mathscr{S}\otimes\kappa(v)$: Consider a point $x\in\mathscr{S}\otimes\kappa(v)$ and let $k(x)$ be the residue class field of $\mathscr{S}$ at $x$. Let $k$ be some algebraic closure of $k(x)$, and let $\hat{x}$ be the associated geometric point. Constr. \ref{LinConstr} associates to each isomorphism $\beta\colon (\Lambda^*_{W(k)},s_{W(k)})\simeq(\mathbb{D}_{\hat{x}},s_{\mathrm{cris},\hat{x}})$ an element $g_{\beta}\in G(L(k))$, and Lemma \ref{LinearizationLem} shows that the $\sigma$-conjugacy class $[g_{\beta}]$ is independent of the choice of $\beta$ and lies in $B(G,\mu)$. Further, this element only depends on $x$ and not on the choice of the algebraic closure of $k(x)$ in the sense explained at the beginning of this subsection. Thus the assignment $x\mapsto[g_{\beta}]$ gives a well-defined map
\[
\mathrm{Newt}\colon\mathscr{S}\otimes\kappa(v)\longrightarrow B(G,\mu).
\]
\begin{definition}
\label{NewtonStratDef}
\begin{enumerate}[(i)]
\item For an element $b\in B(G,\mu)$ we set $\mathcal{N}^b:=\mathrm{Newt}^{-1}(\{b\})\subseteq\mathscr{S}\otimes\kappa(v)$. We call $\mathcal{N}^b$ the \emph{Newton stratum} of $b$.
\item We call the stratum $\mathcal{N}^{b_{\mathrm{max}}}$ the $\mu$-\emph{ordinary locus} in $\mathscr{S}\otimes\kappa(v)$.
\end{enumerate}
\end{definition}
A priori, the $\mathcal{N}^b$ are just subsets of $\mathscr{S}\otimes\kappa(v)$, but we will see below that they are in fact locally closed, which justifies the name ``strata''.
\begin{remark}
\begin{enumerate}[(1)]
\item In view of Cor. \ref{LinearizationCor} we see that two points $x_1,x_2\in\mathscr{S}\otimes\kappa(v)$ lie in the same Newton stratum if and only if the following holds: If $k$ is any algebraically closed field such that $k(x_1)$ and $k(x_2)$ both embed into $k$, with associated points $\hat{x}_1,\hat{x}_2\in\mathscr{S}(k)$, then there is an isomorphism of isocrystals $(\mathbb{D}_{\hat{x}_1})_{\mathbb{Q}}\simeq(\mathbb{D}_{\hat{x}_2})_{\mathbb{Q}}$ which identifies the tensors $s_{\mathrm{cris},\hat{x}_1}$ with $s_{\mathrm{cris},\hat{x}_2}$.
\item In the case of a PEL-type Shimura datum, at each geometric point $\hat{x}$ of $\mathscr{S}\otimes\kappa(v)$ the tensors $s_{\mathrm{cris},\hat{x}}$ describe the additional structure on $\mathbb{D}_{\hat{x}}$ (cf. Rem. \ref{PELRem2}). Hence in this case the Newton strata from Def. \ref{NewtonStratDef} agree with those considered in \cite{RR}.
\end{enumerate}
\end{remark}
It is natural to conjecture that the Grothendieck specialization theorem holds for the $\mathcal{N}^b$. That is, if $x_1,x_2\in\mathscr{S}\otimes\kappa(v)$ such that $x_2$ is a specialization of $x_1$, then we expect that $\mathrm{Newt}(x_2)\preceq\mathrm{Newt}(x_1)$. To our knowledge, this has not yet been established for a general Shimura variety of Hodge type. In the PEL-case, it follows from the fact that the isocrystal over $\mathscr{S}\otimes\kappa(v)$ associated to the $p$-divisible group $\mathcal{A}\otimes\kappa(v)[p^{\infty}]$, with induced additional structure, can be understood as an isocrystal with $G$-structure in the sense of (\cite{RR}, \S3).
There is, however, the following result of Vasiu. Since $\mathcal{A}\otimes\kappa(v)$ is a polarized abelian scheme over $\mathscr{S}\otimes\kappa(v)$, we have the classical stratification of $\mathscr{S}\otimes\kappa(v)$ by Newton polygons, as defined by Oort. For a symmetric Newton polygon $\Delta\in B(\GSp(V))$, denote the corresponding stratum by $\mathcal{N}_{\mathrm{NP}}^{\Delta}$. Then every $\mathcal{N}^b$ lies in a unique stratum $\mathcal{N}_{\mathrm{NP}}^{\Delta(b)}$; this defines a map $B(G,\mu)\to B(\GSp(V)),\ b\mapsto\Delta(b)$, which should be thought of as ``forgetting the tensor structure''.
\begin{proposition}[\cite{Va2}, 5.3.1.(ii)]
\label{NewtonStratProp}
For every $b\in B(G,\mu)$ the stratum $\mathcal{N}^b$ is an open and closed subset of $\mathcal{N}_{\mathrm{NP}}^{\Delta(b)}$.
\end{proposition}
As a consequence, since the strata $\mathcal{N}_{\mathrm{NP}}^{\Delta}$ are locally closed subsets of $\mathscr{S}\otimes\kappa(v)$, the same holds true for the strata $\mathcal{N}^b$. We endow them with the structure of a reduced subscheme of $\mathscr{S}\otimes\kappa(v)$.
\subsection{The Ekedahl-Oort stratification}
\label{EOStratSec}
Recall that $\overline{\mathbb{F}}$ denotes a fixed algebraic closure of $\mathbb{F}_p$. We now describe the Ekedahl-Oort stratification on $\mathscr{S}\otimes\overline{\mathbb{F}}$ which has been constructed and studied by Zhang in \cite{Zh1}. However, we give a slightly different definition, as we feel that one should work with $\Lambda^*$ rather than with $\Lambda$; furthermore, we make the definition independent of the choice of a cocharacter. The main results of \cite{Zh1} remain true with the obvious changes.\\
The definition of the Ekedahl-Oort stratification is based on the theory of $\mathcal{G}_{\mathbb{F}_p}$-Zips which has been developed in \cite{PWZ1}, \cite{PWZ2}, and which we will apply to a cocharacter in the conjugacy class $[\nu^{-1}]$. The stratification is parametrized by the subset $^JW$ of the Weyl group $(W,S)$, where $J\subseteq S$ is the type of the conjugacy class $[\nu^{-1}]$. Let us recall how this data is defined: We view $[\nu^{-1}]$ as an element of $W\setminus X_*(\mathcal{T})$, as explained at the beginning of this section. Let $\chi_{\mathrm{dom}}\in X_*(\mathcal{T})$ be the unique dominant element lying in $[\nu^{-1}]$, then $J:=\{s\in S\mid s(\chi_{\mathrm{dom}})=\chi_{\mathrm{dom}}\}$. The subset $J$ generates a subgroup $W_J$ of $W$, and $(W_J,J)$ is again a Coxeter group. Every left coset of $W_J$ in $W$ contains a unique minimal element with respect to the length function on $(W,S)$, and the set $^JW$ of these elements forms a full set of representatives for the left cosets of $W_J$ in $W$ (see for example \cite{BB}, \S 2.4.).
Let $w_0$ and $w_{0,J}$ be the longest elements of $W$ and $W_J$ respectively and let $x_J:=w_0w_{0,J}$. We have a partial order $\preceq$ on ${^JW}$ (\cite{PWZ1}, 6.3.) given as
\[
w'\preceq w\ :\Longleftrightarrow\ yw'\sigma(x_Jyx_J^{-1})\leq w\ \text{ for some }y\in W_J.
\]
This partial order induces a topology on $^JW$ such that a subset $U\subset{^JW}$ is open if and only if for any $w'\in U$ and any $w\in{^JW}$ with $w'\preceq w$ one also has $w\in U$.\\
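As a toy illustration of the combinatorics of $^JW$ (illustrative only; we take $W=S_n$ with its simple transpositions, and all names below are ours): the minimal coset representatives can be computed via the standard criterion $w\in{^JW}\Leftrightarrow \ell(s_iw)>\ell(w)$ for all $i\in J$.

```python
from itertools import permutations

def inversions(w):
    """Coxeter length of a permutation = number of inversions."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def simple(i, n):
    """One-line notation of the simple transposition s_i in S_n (1-indexed)."""
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(u, v):
    """(u v)(k) = u(v(k))."""
    return tuple(u[v[k] - 1] for k in range(len(v)))

def min_coset_reps(n, J):
    """^JW for W = S_n: elements w with l(s_i w) > l(w) for all i in J,
    i.e. the minimal-length representatives of the cosets of W_J."""
    W = list(permutations(range(1, n + 1)))
    return [w for w in W
            if all(inversions(compose(simple(i, n), w)) > inversions(w) for i in J)]

# Toy example: W = S_3, J = {1}, so W_J = {e, s_1} has order 2
reps = min_coset_reps(3, {1})
print(len(reps))  # -> 3 = |W| / |W_J|
```

As expected, the identity is always among the representatives, and the number of representatives is the index of $W_J$ in $W$.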
If $X$ is a scheme or a sheaf over an $\mathbb{F}_p$-scheme $S$, denote by $X^{(\sigma)}$ its pullback by the absolute Frobenius $x\mapsto x^p$ on $S$, and likewise for morphisms of objects over $S$. Let $\kappa$ be an algebraic extension of $\mathbb{F}_p$. Since $\mathcal{G}_{\kappa}=\mathcal{G}\otimes\kappa$ is defined over $\mathbb{F}_p$, we have $\mathcal{G}_{\kappa}^{(\sigma)}\cong\mathcal{G}_{\kappa}$ canonically (compare Section \ref{sigmaPrep}). In particular, for every subgroup $H\subseteq \mathcal{G}_{\kappa}$ the pullback $H^{(\sigma)}$ is again a subgroup of $\mathcal{G}_{\kappa}$. Further, the composition $\mathcal{G}_{\kappa}\stackrel{\mathrm{Frob}_p}{\longrightarrow}\mathcal{G}_{\kappa}^{(\sigma)}\cong\mathcal{G}_{\kappa}$ of the relative Frobenius morphism of $\mathcal{G}_{\kappa}$ with this canonical isomorphism is an isogeny of the algebraic group $\mathcal{G}_{\kappa}$. By abuse of notation we denote this isogeny again by $\sigma$.
Let $\chi$ be a cocharacter of $\mathcal{G}_{\kappa}$ such that $\chi_{\overline{\mathbb{F}}}\in[\nu^{-1}]$. Using the identification $\mathcal{G}_{\kappa}^{(\sigma)}\cong\mathcal{G}_{\kappa}$ we can view $\chi^{(\sigma)}$ as a cocharacter of $\mathcal{G}_{\kappa}$ as well. Let $P_+, P_-\subseteq\mathcal{G}_{\kappa}$ be the parabolic subgroups which are characterized by the property that $\Lie(P_{\pm})$ is the sum of the non-negative (non-positive) weight spaces with respect to the adjoint action of $\chi$ on $\Lie(\mathcal{G}_{\kappa})$. Denote by $U_+$ and $U_-$ the corresponding unipotent radicals and by $M$ the common Levi subgroup of $P_+$ and $P_-$.\\
\begin{definition}[\cite{PWZ2}, 3.1.]
\label{GZipDef}
Let $S$ be a scheme over $\kappa$. A $\mathcal{G}_{\mathbb{F}_p}$\emph{-zip} of type $\chi$ over $S$ is a quadruple $\underline{I}=(I,I_+,I_-,\iota)$, where
\begin{enumerate}[(1)]
\item $I$ is a right $\mathcal{G}_{\kappa}$-torsor for the fpqc-topology on $S$,
\item $I_+\subseteq I$ and $I_-\subseteq I$ are subsheaves such that $I_+$ is a $P_+$-torsor and $I_-$ is a $P_-^{(\sigma)}$-torsor,
\item $\iota\colon I_+^{(\sigma)}/U_+^{(\sigma)}\stackrel{\sim}{\longrightarrow}I_-/U_-^{(\sigma)}$ is an isomorphism of $M^{(\sigma)}$-torsors.
\end{enumerate}
A morphism $\underline{I}\longrightarrow\underline{I}'$ of $\mathcal{G}_{\mathbb{F}_p}$-zips over $S$ is a $\mathcal{G}_{\kappa}$-equivariant map $I\to I'$ which maps $I_+$ to $I_+'$ and $I_-$ to $I_-'$ and is compatible with $\iota$ and $\iota'$.
\end{definition}
With the natural notion of pullback the $\mathcal{G}_{\mathbb{F}_p}$-zips of type $\chi$ form a stack $\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa}^{\chi}$ over $(\mathbf{Sch}/\kappa)$ (\cite{PWZ2}, 3.2.).
\begin{proposition}[\cite{PWZ2}, 3.12., 3.20., 3.21.]
\label{StackTopSpace}
$\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa}^{\chi}$ is a smooth algebraic stack of dimension $0$ over $\kappa$, and there is a homeomorphism of topological spaces
\[
\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa}^{\chi}(\overline{\mathbb{F}})\simeq{^J}W
\]
where $^JW$ is endowed with the topology given by $\preceq$.
\end{proposition}
So in particular, there is a bijection between the set of isomorphism classes of $\mathcal{G}_{\mathbb{F}_p}$-zips of type $\chi$ over $\overline{\mathbb{F}}$ and the set $^JW$. We will give a precise description of this bijection in the following section.
\begin{construction}
\label{EOConstr}
Let $\overline{\mathcal{V}^{\circ}}:=H_{\mathrm{dR}}^1(\mathcal{A}\otimes\kappa(v)/\mathscr{S}\otimes\kappa(v))$, which is the reduction mod $p$ of $\mathcal{V}^{\circ}$. Let $\mathcal{C}:=\overline{\Fil^1\mathcal{V}^{\circ}}=\Fil^1\overline{\mathcal{V}^{\circ}}$ be the Hodge filtration; this is a locally direct summand of $\overline{\mathcal{V}^{\circ}}$. As explained in (\cite{MW}, \S7), the conjugate Hodge spectral sequence also gives rise to a locally direct summand $\mathcal{D}:=R^1\pi_*(\mathscr{H}^0(\Omega^{\bullet}_{\mathcal{A}\otimes\kappa(v)/\mathscr{S}\otimes\kappa(v)}))$ of $\overline{\mathcal{V}^{\circ}}$, and the (inverse) Cartier homomorphism provides isomorphisms
\[
\phi_0\colon (\overline{\mathcal{V}^{\circ}}/\mathcal{C})^{(\sigma)}\stackrel{\sim}{\longrightarrow}\mathcal{D},\qquad \phi_1\colon \mathcal{C}^{(\sigma)}\stackrel{\sim}{\longrightarrow}\overline{\mathcal{V}^{\circ}}/\mathcal{D}.
\]
We now fix a cocharacter $\chi$ and a \emph{finite} extension $\kappa$ of $\kappa(v)$ such that $\chi$ is defined over $\kappa$ and such that $\chi_{\overline{\mathbb{F}}}\in[\nu^{-1}]$. Recall that $\mathcal{G}_{\kappa}$ and thus $\chi$ and $\chi^{(\sigma)}$ act on $\Lambda^*_{\kappa}$ via the contragredient representation $(\cdot)^{\vee}$. Let
\[
\Lambda^*_{\kappa}=\Fil^0_{\chi}\supset \Fil^1_{\chi}\supset(0),\qquad (0)\subset\Fil_0^{\chi^{(\sigma)}}\subset\Fil_1^{\chi^{(\sigma)}}=\Lambda^*_{\kappa}
\]
be the descending resp. ascending filtration given in this way by $\chi$ and by $\chi^{(\sigma)}$. Then $P_+$ is nothing but the stabilizer of $\Fil^{\bullet}_{\chi}$ in $\mathcal{G}_{\kappa}$, that is,
\[
P_+=\{g\in\mathcal{G}_{\kappa}\mid g^{\vee}(\Fil^1_{\chi})=\Fil^1_{\chi}\},
\]
and in the same fashion $P_-^{(\sigma)}$ is the stabilizer of $\Fil_{\bullet}^{\chi^{(\sigma)}}$.
We denote by $\bar{s}_{\mathrm{dR}}$ the reduction of the tensors $s_{\mathrm{dR}}^{\circ}$ to $\overline{\mathcal{V}^{\circ}}$, and by $\bar{s}$ the base change of $s\subset(\Lambda^*_{\mathbb{Z}_p})^{\otimes}$ to $\Lambda^*_{\kappa}$. Define
\begin{align*}
I:= & \mathbf{Isom}_{\mathscr{S}\otimes\kappa}\big((\Lambda^*_{\kappa},\bar{s})\otimes\mathcal{O}_{\mathscr{S}\otimes\kappa},\, (\overline{\mathcal{V}^{\circ}},\bar{s}_{\mathrm{dR}})\otimes\mathcal{O}_{\mathscr{S}\otimes\kappa}\big),\\
I_+:= & \mathbf{Isom}_{\mathscr{S}\otimes\kappa}\big((\Lambda^*_{\kappa},\bar{s},\Fil^{\bullet}_{\chi})\otimes\mathcal{O}_{\mathscr{S}\otimes\kappa},\, (\overline{\mathcal{V}^{\circ}},\bar{s}_{\mathrm{dR}},\overline{\mathcal{V}^{\circ}}\supset\mathcal{C})\otimes\mathcal{O}_{\mathscr{S}\otimes\kappa}\big),\\
I_-:= & \mathbf{Isom}_{\mathscr{S}\otimes\kappa}\big((\Lambda^*_{\kappa},\bar{s},\Fil_{\bullet}^{\chi^{(\sigma)}})\otimes\mathcal{O}_{\mathscr{S}\otimes\kappa},\, (\overline{\mathcal{V}^{\circ}},\bar{s}_{\mathrm{dR}},\mathcal{D}\subset\overline{\mathcal{V}^{\circ}})\otimes\mathcal{O}_{\mathscr{S}\otimes\kappa}\big).
\end{align*}
We have a natural right action of $\mathcal{G}_{\kappa}$ on $I$ given by $\beta\cdot g:=\beta\circ g^{\vee}$, and $I_+$ and $I_-$ inherit actions of $P_+$ and $P_-^{(\sigma)}$.
\end{construction}
\begin{proposition}[\cite{Zh1}, 2.4.1.]
\label{GzipProp}
The Cartier isomorphisms induce an isomorphism $\iota\colon I_+^{(\sigma)}/U_+^{(\sigma)}\stackrel{\sim}{\longrightarrow}I_-/U_-^{(\sigma)}$ such that the tuple $\underline{I}=(I,I_+,I_-,\iota)$ is a $\mathcal{G}_{\mathbb{F}_p}$-zip of type $\chi$ over $\mathscr{S}\otimes\kappa$.
\end{proposition}
\begin{proof}
Let us show that, with our definitions, for every closed point $x$ of $\mathscr{S}\otimes\kappa$ the fibers $I_x$ and $(I_+)_x$ are trivial torsors for $\mathcal{G}_{\kappa}$ resp. $P_+$: Let $k(x)$ be the residue class field of $\mathscr{S}$ at $x$, which is a finite extension of $\kappa(v)$, and let $\tilde{x}\in\mathscr{S}(W(k(x)))$ be a lift of $x$. By Prop. \ref{dRExtProp}(iii) and Lemma \ref{CocharLem1} we find an isomorphism $\bar{\beta}\colon(\Lambda^*_{k(x)},s_{k(x)})\stackrel{\sim}{\longrightarrow}(\overline{\mathcal{V}^{\circ}}_x, \bar{s}_{\mathrm{dR},x})$ and a cocharacter $\bar{\lambda}$ of $\mathcal{G}_{k(x)}$ which induces the filtration $\Lambda^*_{k(x)}\supset\bar{\beta}^{-1}(\mathcal{C}_x)$, and further we have $\bar{\lambda}_{\overline{\mathbb{F}}}\in[\nu^{-1}]$. Then $\bar{\beta}$ lies in $I_x(k(x))$, which shows that $I_x$ is trivial. Moreover, $\bar{\lambda}$ is conjugate to $\chi$ over some finite extension of $k(x)$. This implies that $(I_+)_x$ is a $P_+$-torsor, and as $P_+$ is connected and $k(x)$ is finite this torsor must be trivial.
Now all the arguments in the sections 2.2. - 2.4. and in the proof of 2.4.1. of \cite{Zh1} carry over to our definition of $I$, $I_+$ and $I_-$ with the necessary adjustments.
\end{proof}
For every scheme $S$ over $\mathscr{S}\otimes\kappa$ one now obtains a $\mathcal{G}_{\mathbb{F}_p}$-zip over $S$ by pulling back the $\mathcal{G}_{\mathbb{F}_p}$-zip $\underline{I}$, in other words, $\underline{I}$ defines a morphism of algebraic stacks
\[
\zeta\colon\mathscr{S}\otimes\kappa\to\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa}^{\chi}.
\]
\begin{theorem}[\cite{Zh1}, 3.1.2.]
\label{ZhangThm}
The morphism $\zeta$ is smooth. In particular it induces a continuous and open map of topological spaces
\[
\zeta(\overline{\mathbb{F}})\colon\mathscr{S}(\overline{\mathbb{F}})\longrightarrow \mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa}^{\chi}(\overline{\mathbb{F}})\simeq{^JW}.
\]
\end{theorem}
\begin{proof}
Again, the proof of (\cite{Zh1}, 3.1.2.) goes through with the obvious changes.
\end{proof}
\begin{remark}
\label{EOIndepRem}
Though the definition of $\zeta$ depends on the choice of a cocharacter $\chi$, the resulting map $\zeta(\overline{\mathbb{F}})\colon\mathscr{S}(\overline{\mathbb{F}})\to{^JW}$ is in fact independent of $\chi$. This is a consequence of the following two observations:
\begin{enumerate}[(1)]
\item Let $\kappa'$ be a finite extension of $\kappa$, let $\chi'=\chi_{\kappa'}$, and let $\underline{I}'$ be the $\mathcal{G}_{\mathbb{F}_p}$-zip of type $\chi'$ over $\mathscr{S}\otimes\kappa'$ given by Constr. \ref{EOConstr}. Then we have $\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa'}^{\chi'}=\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa}^{\chi}\otimes\kappa'$ and the equality $\underline{I}'=\underline{I}\otimes\kappa'$, which means that $\chi$ and $\chi'$ induce the same map $\zeta(\overline{\mathbb{F}})$.
\item Let $\chi'$ be a cocharacter of $\mathcal{G}_{\kappa}$ which is conjugate to $\chi$ over $\kappa$, say $\chi'=\mathrm{int}(g)\circ\chi$ for some $g\in\mathcal{G}(\kappa)$. Let $P'_{\pm}\subseteq\mathcal{G}_{\kappa}$ be the associated parabolic subgroups, with common Levi subgroup $M'$, and again denote by $\underline{I}'$ the $\mathcal{G}_{\mathbb{F}_p}$-zip associated to $\chi'$ (over $\kappa$). Applying the propositions \ref{GzipProp} and \ref{StackTopSpace} to $\kappa$ and $\chi'$ one obtains a map
\[
\zeta'(\overline{\mathbb{F}})\colon\mathscr{S}(\overline{\mathbb{F}})\to\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa'}^{\chi'}(\overline{\mathbb{F}})\simeq{^JW}.
\]
As $P'_{\pm}=g(P_{\pm})g^{-1}$ and $M'=gMg^{-1}$, the element $g$ defines an isomorphism of algebraic stacks
\begin{align*}
\Xi\colon\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa}^{\chi} & \stackrel{\sim}{\longrightarrow}\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}_{\kappa}^{\chi'}\\
\big(I,I_+,I_-,\iota\big) & \longmapsto\big(I,\;(I_+)\cdot g^{-1},\;(I_-)\cdot \sigma(g)^{-1},\; r_{\sigma(g)^{-1}}\circ\iota\circ r_{\sigma(g)}\big).
\end{align*}
(Here $r_{\sigma(g)}$ and $r_{\sigma(g)^{-1}}$ are the obvious isomorphisms $(I'_+)^{(\sigma)}/(U'_+)^{(\sigma)}\simeq I_+^{(\sigma)}/U_+^{(\sigma)}$ and $I_-/U_-^{(\sigma)}\simeq I'_-/(U'_-)^{(\sigma)}$ given by multiplication with $\sigma(g)$ resp. $\sigma(g)^{-1}$ on the right.)\\
It is easy to see that $\Xi(\underline{I})=\underline{I}'$. Further, going through the classification of $\mathcal{G}_{\mathbb{F}_p}$-zips in \cite{PWZ1}, \cite{PWZ2} (see also \ref{ZipClass}), a straightforward but tedious computation shows that $\Xi$ is compatible with the homeomorphisms from Prop. \ref{StackTopSpace}, which implies that $\zeta'(\overline{\mathbb{F}})=\zeta(\overline{\mathbb{F}})$.
\end{enumerate}
\end{remark}
Due to Theorem \ref{ZhangThm} and the definition of the topology on $^JW$, the inverse images of elements $w\in{^J}W$ under $\zeta(\overline{\mathbb{F}})$ are the $\overline{\mathbb{F}}$-valued points of locally closed subsets $\mathcal{S}^w\subseteq\mathscr{S}\otimes\overline{\mathbb{F}}$.
\begin{definition}
\label{EODef}
For $w\in{^JW}$ we call $\mathcal{S}^w\subseteq\mathscr{S}\otimes\overline{\mathbb{F}}$ the \emph{Ekedahl-Oort stratum} associated to $w$. We endow the strata $\mathcal{S}^w$ with the reduced subscheme structure.
\end{definition}
Let us collect some information on these strata:
\begin{remark}
\label{EOPropRem}
\begin{enumerate}[(1)]
\item Each $\mathcal{S}^w$ is either empty or equidimensional of dimension $l(w)$ (see \cite{Zh1}, 3.1.6.).
\item The $\mathcal{S}^w$ form a stratification of $\mathscr{S}\otimes\overline{\mathbb{F}}$ in the strict sense: For every $w\in{^JW}$ we have
\[
\overline{\mathcal{S}^w}=\bigcup_{w'\preceq w}\mathcal{S}^{w'}.
\]
This follows from the fact that $\zeta$ is an open map and the structure of the topological space $^JW$.
\item The set $^JW$ contains a unique maximal element with respect to $\preceq$, namely $w_{\mathrm{max}}:=w_{0,J}w_0$, and a unique minimal element $w_{\mathrm{min}}:=1$. By (2), $\mathcal{S}^{w_{\mathrm{max}}}$ is the unique open EO-stratum and is dense in $\mathscr{S}\otimes\overline{\mathbb{F}}$, and $\mathcal{S}^{w_{\mathrm{min}}}$ is closed and contained in the closure of each stratum $\mathcal{S}^w$.
\item We do not know whether all Ekedahl-Oort strata are nonempty. In view of (2) and (3) this is equivalent to the question whether $\mathcal{S}^{w_{\mathrm{min}}}$ is nonempty.
\end{enumerate}
\end{remark}
\section{Comparing the stratifications}
\label{StratCompSec}
We will now restrict our attention to the geometric fiber $\mathscr{S}\otimes\overline{\mathbb{F}}$, where the Newton stratification and the Ekedahl-Oort stratification are both defined. The question as to how these stratifications are related to each other can be studied by looking at their $\overline{\mathbb{F}}$-valued points, since all strata are locally closed subvarieties of $\mathscr{S}\otimes\overline{\mathbb{F}}$. Let us fix some new notations:
\begin{notation}
We still denote by $\overline{\mathbb{F}}$ a fixed algebraic closure of $\mathbb{F}_p$. Let $\mathcal{O}:=W(\overline{\mathbb{F}})$ and $L:=L(\overline{\mathbb{F}})$. We write $\mathcal{S}:=\mathscr{S}(\overline{\mathbb{F}})$. By abuse of notation we will frequently identify geometric objects over $\overline{\mathbb{F}}$ with their $\overline{\mathbb{F}}$-valued points. For example we denote the $\overline{\mathbb{F}}$-valued points of $\mathcal{N}^b$ resp. of $\mathcal{S}^w$ by the same symbols. With these notations we have the decompositions
\[
\mathcal{S}=\bigcup_{b\in B(G,\mu)}^{\circ}\mathcal{N}^b,\qquad \mathcal{S}=\bigcup_{w\in{^JW}}^{\circ}\mathcal{S}^w.
\]
We write again $(\mathcal{B},\mathcal{T})$ for the base change to $\mathcal{O}$ of our fixed Borel pair, and denote by $(B,T)$ its base change to $\overline{\mathbb{F}}$. For every $w\in W$ we choose a representative $\dot{w}\in N_{\mathcal{G}}(\mathcal{T})(\mathcal{O})$.
Let $K:=\mathcal{G}(\mathcal{O})\subseteq G(L)$. The projection $\mathcal{O}\twoheadrightarrow\mathcal{O}/(p)=\overline{\mathbb{F}}$ induces a surjective homomorphism $K\to\mathcal{G}(\overline{\mathbb{F}}),\ g\mapsto\bar{g}$. For any subgroup $H\subseteq K$ we will denote its image in $\mathcal{G}(\overline{\mathbb{F}})$ by $\overline{H}$. The Frobenius $\sigma$ acts on $K$ and on $\mathcal{G}(\overline{\mathbb{F}})$, and these operations are compatible in the sense that $\sigma(\bar{g})=\overline{\sigma(g)}$. Let $K_1:=\{g\in K\mid \bar{g}=1\}$, this is a normal subgroup of $K$.
\end{notation}
\begin{definition}
\label{ConjClNotDef}
\begin{enumerate}[(i)]
\item We write $\langle g\rangle:=\{hg\sigma(h)^{-1}\mid h\in K\}$ for the $K$-$\sigma$-conjugacy class of an element $g\in G(L)$.
\item We set $\mathcal{C}(\mathcal{G},\mu):=\{\langle g\rangle\mid g\in K\mu(p)K\}$.
\end{enumerate}
\end{definition}
\subsection{A factorization lemma}
\label{FactorSubsec}
It follows from Constr. \ref{LinConstr} and Lemma \ref{LinearizationLem} that there is a well-defined map
\[
\gamma\colon\mathcal{S}\longrightarrow \mathcal{C}(\mathcal{G},\mu)
\]
which is given as follows: If $x\in\mathcal{S}$ and $\beta\colon(\Lambda_{\mathcal{O}}^*,s_{\mathcal{O}})\simeq(\mathbb{D}_x,s_{\mathrm{cris},x})$ is an isomorphism as in Cor. \ref{CrysTensors}, then $\gamma(x):=\langle g_{\beta}\rangle$, where $g_{\beta}\in G(L)$ such that $\beta^{-1}\circ F\circ\beta=:F_{\beta}=g_{\beta}^{\vee}\circ(1\otimes\sigma)$, here as usual $(\cdot)^{\vee}$ is the contragredient representation.
\begin{remark}
\label{gammaRem}
In the case of a PEL-type Shimura variety the map $\gamma$ can be shown to be \emph{surjective} (\cite{VW}, Thm. 11.2.). We do not know whether the surjectivity of $\gamma$ holds in general, though we expect this to be true.
\end{remark}
Let
\[
\tilde{\theta}\colon\mathcal{C}(\mathcal{G},\mu)\longrightarrow B(G,\mu),\quad\langle g\rangle\longmapsto[g]
\]
be the natural map and let $\theta:=\tilde{\theta}\circ\gamma\colon\mathcal{S}\to B(G,\mu)$. Then by Def. \ref{NewtonStratDef} we have $\mathcal{N}^b=\theta^{-1}(\{b\})$ for each $b\in B(G,\mu)$, and further we can describe the fibers of $\tilde{\theta}$ as follows:
\[
\tilde{\theta}^{-1}(\{b\})=\{\langle g\rangle\mid g\in K\mu(p)K\cap b\}=:\mathcal{C}(\mathcal{G},\mu)\cap b,\quad b\in B(G,\mu).
\]
On the other hand for every $w\in{^JW}$ the associated Ekedahl-Oort stratum is by definition given as $\mathcal{S}^w=\zeta^{-1}(\{w\})$. Here we simply write $\zeta$ for the map $\zeta(\overline{\mathbb{F}})\colon \mathcal{S}\to{^JW}$ from Theorem \ref{ZhangThm}. We will now explain that one also has a factorization $\tilde{\zeta}\colon\mathcal{C}(\mathcal{G},\mu)\to{^JW}$ such that $\zeta=\tilde{\zeta}\circ\gamma$, and give a precise description of the inverse image $\tilde{\zeta}^{-1}(\{w\})$ for $w\in{^JW}$.\\
As we have seen in Remark \ref{EOIndepRem}, the map $\zeta$ does not depend on the choice of the cocharacter $\chi$, nor on the choice of $\kappa$. We may and will therefore suppose without loss of generality that $\chi$ is the unique dominant cocharacter contained in $[\nu^{-1}]$, in other words that $\chi=\sigma^{-1}(\mu)$. To $\chi$ we have the associated subgroups $P_{\pm}$, $U_{\pm}$ and $M$ of $\mathcal{G}_{\overline{\mathbb{F}}}$ as in Section \ref{EOStratSec}. By (\cite{SGA3}, Exp. XXVI) we may extend these groups to $\mathcal{G}_{\mathcal{O}}$ as follows: Let $\mathcal{M}\subseteq\mathcal{G}_{\mathcal{O}}$ be the centralizer of $\chi$ in $\mathcal{G}_{\mathcal{O}}$. Let $\mathcal{P}_+$ be the (unique) parabolic subgroup of $\mathcal{G}_{\mathcal{O}}$ with Levi subgroup $\mathcal{M}$ which contains $\mathcal{B}$, let $\mathcal{P}_-$ be its opposite parabolic, and denote their unipotent radicals by $\mathcal{U}_{\pm}$. Then we have for example $P_+=\overline{\mathcal{P}_+(\mathcal{O})}$ and $P_-^{(\sigma)}=\overline{\sigma(\mathcal{P}_-(\mathcal{O}))}$.
\begin{blank}
\label{ZipClass}
Let us review in detail the homeomorphism $\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}^{\chi}(\overline{\mathbb{F}})\simeq{^JW}$ from Prop. \ref{StackTopSpace} in this situation: The Frobenius isogeny gives a morphism
\[
\sigma\colon P_+/U_+\cong M\longrightarrow M^{(\sigma)}\cong P_-^{(\sigma)}/U_-^{(\sigma)},
\]
such that the tuple $(\mathcal{G}_{\overline{\mathbb{F}}},P_+, P_-^{(\sigma)},\sigma)$ is an algebraic zip datum in the sense of (\cite{PWZ1}, 3.1.). The associated zip group is defined as
\[
E_{\chi}:=\big\{(mu_+,\sigma(m)u_-)\mid u_+\in U_+, u_-\in U_-^{(\sigma)}, m\in M\big\}\subseteq P_+\times P_-^{(\sigma)}.
\]
It acts on $\mathcal{G}_{\overline{\mathbb{F}}}$ on the left via $(p_+,p_-)\cdot g=p_+gp_-^{-1}$.
The isomorphism classes of $\mathcal{G}_{\mathbb{F}_p}$-zips of type $\chi$ over $\overline{\mathbb{F}}$ are identified with the orbits under this action by the following construction: Every $a\in\mathcal{G}(\overline{\mathbb{F}})$ defines a $\mathcal{G}_{\mathbb{F}_p}$-zip $\underline{I}_a$ over $\overline{\mathbb{F}}$ by setting
\[
\underline{I}_a:=(\mathcal{G}_{\overline{\mathbb{F}}},\, P_+,\, a\cdot P_-^{(\sigma)},\, \iota_a),
\]
where $\iota_a$ is given by multiplication with $a$ on the left, more precisely,
\[
\iota_a\colon P_+^{(\sigma)}/U_+^{(\sigma)}\cong M^{(\sigma)}\stackrel{a\cdot}\longrightarrow a\cdot M^{(\sigma)}\cong a\cdot P_-^{(\sigma)}/U_-^{(\sigma)}.
\]
A zip of this form is called a \emph{standard} $\mathcal{G}_{\mathbb{F}_p}$-zip over $\overline{\mathbb{F}}$. By (\cite{PWZ2}, 3.5.), every $\mathcal{G}_{\mathbb{F}_p}$-zip $\underline{I}$ of type $\chi$ over $\overline{\mathbb{F}}$ is isomorphic to a standard zip: If $i_+\in I_+$ and $i_-\in I_-$ such that $\iota(i_+^{(\sigma)}U_+^{(\sigma)})=i_-U_-^{(\sigma)}$, and if $a\in \mathcal{G}(\overline{\mathbb{F}})$ such that $i_-=i_+\cdot a$, then $\underline{I}\simeq\underline{I}_a$. Further the assignment $\underline{I}_a\mapsto E_{\chi}\cdot a$ is well-defined and $\underline{I}_a\simeq\underline{I}_{a'}$ if and only if $E_{\chi}\cdot a = E_{\chi}\cdot a'$ (\cite{PWZ2}, 3.10). This construction can be made functorial and induces an isomorphism of stacks $\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}^{\chi}\otimes \overline{\mathbb{F}} \simeq \left[ E_{\chi}\setminus \mathcal{G}_{\overline{\mathbb{F}}}\right]$. (\cite{PWZ2}, 3.11.)
Let $\mathcal{B}_-$ be the Borel subgroup of $\mathcal{G}_{\mathcal{O}}$ which is opposite to $\mathcal{B}$ with respect to $\mathcal{T}$, let $B_-$ be its reduction to $\mathcal{G}_{\overline{\mathbb{F}}}$, and let $y:=\dot{w}_{0,J}\dot{w}_0$. Then the triple $(B_-,T,\bar{y})$ is a frame for the zip datum $(\mathcal{G}_{\overline{\mathbb{F}}},P_+,P_-^{(\sigma)},\sigma)$ in the sense of (\cite{PWZ1}, 3.6.).
Let $(W,S_-)$ be the Weyl group with respect to the pair $(B_-,T)$. We need to compare it to $(W,S)$, as for example explained in (\cite{PWZ1}, \S 2.3.): There is a unique isomorphism $\delta\colon (W,S)\to (W,S_-)$ of Coxeter groups which is induced from an inner automorphism $\mathrm{int}(g)$, where $g\in\mathcal{G}(\overline{\mathbb{F}})$ such that $gBg^{-1}=B_-$ and $gTg^{-1}=T$. Since in our case we may choose $g=\bar{\dot{w}}_0$, we see that $\delta(w)=w_0ww_0$ for $w\in W$. Applying the results of (\cite{PWZ1}, \S6, \S7), which are formulated for the Weyl group $(W,S_-)$, we see that the assignment
\[
^JW\longrightarrow E_{\chi}\setminus\mathcal{G}_{\overline{\mathbb{F}}},\quad w\longmapsto O^w:= E_{\chi}\cdot(\bar{y}\overline{\dot{\delta(w)}})=E_{\chi}\cdot(\bar{\dot{w}}_{0,J}\bar{\dot{w}}\bar{\dot{w}}_0)
\]
is bijective, and that $O^{w'}$ is contained in the closure of $O^w$ if and only if $w'\preceq w$ (we defined $\preceq$ in section \ref{EOStratSec}).
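In type $A_{n-1}$ the isomorphism $\delta$ is completely explicit: conjugation by the longest element sends $s_i$ to $s_{n-i}$, so in particular $\delta$ maps $S$ to $S$. A quick numerical check (toy code for $W=S_4$; all names are ours):

```python
def simple(i, n):
    """One-line notation of the simple transposition s_i in S_n (1-indexed)."""
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(u, v):
    """(u v)(k) = u(v(k))."""
    return tuple(u[v[k] - 1] for k in range(len(v)))

n = 4
w0 = tuple(range(n, 0, -1))  # longest element of S_4 in one-line notation
for i in range(1, n):
    # delta(s_i) = w0 s_i w0 should again be a simple reflection, namely s_{n-i}
    assert compose(compose(w0, simple(i, n)), w0) == simple(n - i, n)
print("delta(s_i) = s_{n-i} for all i")
```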
Altogether we obtain the homeomorphism $\mathcal{G}_{\mathbb{F}_p}\mathtt{-Zip}^{\chi}(\overline{\mathbb{F}})\simeq{^JW}$ from Prop. \ref{StackTopSpace}: It maps the isomorphism class of $\underline{I}_a$ to the unique $w\in{^JW}$ such that $a\in O^w$.
\end{blank}
\begin{remark}
\label{OrbitRepRem}
\begin{enumerate}[(1)]
\item We have already used the fact that for every $n\in N_{\mathcal{G}}(\mathcal{T})(\overline{\mathbb{F}})$ and $t\in \mathcal{T}(\overline{\mathbb{F}})=T$ we have $E_{\chi}\cdot nt=E_{\chi}\cdot n$. This is a consequence of Lang's Theorem, since $\{(t',\sigma(t'))\mid t'\in T\}\subseteq E_{\chi}$.\\
We will make liberal use of this property; for example, we also use the set of representatives $\{\overline{\dot{w_{0,J}ww_0}}\mid w\in{^JW}\}$ for the set of $E_{\chi}$-orbits.
\item We have $\dot{w}_{0,J}\in\mathcal{M}(\mathcal{O})$ and therefore $(\bar{\dot{w}}_{0,J},\overline{\sigma(\dot{w}_{0,J})})\in E_{\chi}$ (independent of the choice of the lift for $w_{0,J}$). So $\{\overline{\dot{ww_0\sigma(w_{0,J})}}\mid w\in{^JW}\}$ is also a set of representatives for the $E_{\chi}$-orbits in $\mathcal{G}_{\overline{\mathbb{F}}}$.
\end{enumerate}
\end{remark}
\begin{definition}
\label{TrClassDef}
For an element $g\in G(L)$ let $[[g]]$ be the set of all $K$-$\sigma$-conjugates of elements in $K_1gK_1$, i.e.
\[
[[g]]:=\{hg'\sigma(h)^{-1}\mid g'\in K_1gK_1\}.
\]
If $g\in K\mu(p)K$, we will also identify $[[g]]$ with its image in $\mathcal{C}(\mathcal{G},\mu)$.
\end{definition}
Since $K_1$ is a normal subgroup of $K$, the sets $[[g]]$ form a decomposition of $G(L)$ (resp. $\mathcal{C}(\mathcal{G},\mu)$) into equivalence classes. These classes have been defined and studied in greater generality by Viehmann in \cite{Vi1}.
\begin{proposition}
\label{ZetaFacProp}
Define $\tilde{\zeta}\colon\mathcal{C}(\mathcal{G},\mu)\to{^JW}$ as the composition
\begin{align*}
\mathcal{C}(\mathcal{G},\mu) & \longrightarrow E_{\chi}\setminus\mathcal{G}_{\overline{\mathbb{F}}}\qquad\simeq\quad{^JW}.\\
\langle h_1\mu(p)h_2\rangle & \longmapsto E_{\chi}\cdot (\sigma^{-1}(\bar{h}_2)\bar{h}_1)
\end{align*}
The following hold:
\begin{enumerate}[(i)]
\item The map $\tilde{\zeta}$ is well-defined.
\item We have the identity $\zeta=\tilde{\zeta}\circ\gamma$.
\item For $g,g'\in K\mu(p)K$ we have $\tilde{\zeta}(\langle g\rangle)=\tilde{\zeta}(\langle g'\rangle)\Longleftrightarrow [[g]]=[[g']]$.
\end{enumerate}
\end{proposition}
\begin{proof}
For every root $\alpha\in\Phi$ let $\mathcal{U}_{\alpha}\colon\mathbb{G}_{a,\mathcal{O}}\to\mathcal{G}_{\mathcal{O}}$ be the associated root group. For every $\alpha\in\Phi$ and $\lambda\in X_*(\mathcal{T})$ we have the relation
\[
\lambda(p)\mathcal{U}_{\alpha}(x)\lambda(p)^{-1}=\mathcal{U}_{\alpha}(p^{\langle\alpha,\lambda\rangle}x)\quad\text{for all } x\in L. \tag{*}
\]
In particular, if $\langle\alpha,\lambda\rangle>0$ then $\lambda(p)\mathcal{U}_{\alpha}(\mathcal{O})\lambda(p)^{-1}\subseteq K_1$.
\begin{enumerate}
\item[(i)] To show that $\tilde{\zeta}$ is well-defined it is clearly enough to show that the orbit $E_{\chi}\cdot (\sigma^{-1}(\bar{h}_2)\bar{h}_1)$ is independent of the choice of $h_1$ and $h_2$ (see also \cite{HL}, where a similar result is proved in Lemma 4.1.). So let $h_1', h_2'\in K$ such that $h_1\mu(p)h_2=h_1'\mu(p)h_2'$. We define
\[
c_1:=h_1^{-1}h_1',\quad \tilde{c}_2:=h_2(h_2')^{-1}, \quad c_2:=\sigma^{-1}(\tilde{c}_2).
\]
Then $\bar{c}_2(\sigma^{-1}(\bar{h}_2')\bar{h}_1')\bar{c}_1^{-1}=\sigma^{-1}(\bar{h}_2)\bar{h}_1$, so it suffices to show that $(\bar{c}_2,\bar{c}_1)\in E_{\chi}$.
Since $c_1=\mu(p)\tilde{c}_2\mu(p)^{-1}$, we have $c_2\in K_{\chi}:= K\cap \chi(p)^{-1}K\chi(p)$. This is the stabilizer of two points in the Bruhat-Tits building of $\mathcal{G}_L$ (in fact, it is even a parahoric subgroup of $\mathcal{G}(L)$, since $\chi$ is minuscule), so we may apply the structure theory for these groups to $K_{\chi}$:\\
For all $\alpha\in\Phi$ define $\mathcal{U}_{\alpha}^{\chi}:=\mathcal{U}_{\alpha}(L)\cap K_{\chi}$. From $(*)$ we see that $\mathcal{U}_{\alpha}^{\chi} = \mathcal{U}_{\alpha}(p^a\mathcal{O})$ with $a=\max\{0,-\langle\alpha,\chi\rangle\}$. Note that $\mathcal{U}_{\alpha}^{\chi}\subseteq K_1$ if $\langle\alpha,\chi\rangle<0$. Now it follows from (\cite{Ti1}, 3.1.) that we may write $c_2=u_-u_+m$, where
\[
u_+\in\prod_{\langle\alpha,\chi\rangle>0}\mathcal{U}_{\alpha}^{\chi}\subseteq\mathcal{U}_+(\mathcal{O}),\quad u_-\in\prod_{\langle\alpha,\chi\rangle<0}\mathcal{U}_{\alpha}^{\chi}\subseteq K_1,\quad m\in\mathcal{M}(\mathcal{O})
\]
(here we use that $N_{\mathcal{G}}\mathcal{T}(L)\cap K_{\chi}\subseteq \mathcal{M}(\mathcal{O})$ which can be easily checked). Thus we have $\bar{c}_2\in P_+$, with Levi component $\bar{m}$. Using the equation $\sigma^{-1}(c_1)=\chi(p)c_2\chi(p)^{-1}$ we now see that $c_1=u_+'u_-'\sigma(m)$ with $u_-'\in\sigma(\mathcal{U}_-(\mathcal{O}))$ and
\[
u_+'\in\sigma\Big(\prod_{\langle\alpha,\chi\rangle>0}\chi(p)\mathcal{U}_{\alpha}^{\chi}\chi(p)^{-1}\Big)\subseteq K_1.
\]
Hence $\bar{c}_1\in P_-^{(\sigma)}$ and has Levi component $\overline{\sigma(m)}=\sigma(\bar{m})$, which shows that $(\bar{c}_2,\bar{c}_1)\in E_{\chi}$.
\item[(iii)] Now we investigate the fibers of $\tilde{\zeta}$ (cf. the proof of Thm. 1.1.(1) in \cite{Vi1}). Let $\langle g\rangle, \langle g'\rangle\in\mathcal{C}(\mathcal{G},\mu)$. Since everything only depends on the $K$-$\sigma$-conjugacy classes we may suppose that $g=h\mu(p)$ and $g'=h'\mu(p)$ for $h,h'\in K$. By definition of $\tilde{\zeta}$ we then have to show that
\[
E_{\chi}\cdot \bar{h}=E_{\chi}\cdot \bar{h}'\ \Longleftrightarrow\ [[h\mu(p)]]=[[h'\mu(p)]].
\]
The implication ``$\Longleftarrow$'' follows directly from the proof of (i). Conversely, let $(p_+,p_-)\in E_{\chi}$ such that $p_+\bar{h}p_-^{-1}=\bar{h}'$. We may choose
\[
m\in\mathcal{M}(\mathcal{O}),\quad u_+\in\mathcal{U}_+(\mathcal{O}),\quad u_-\in\sigma(\mathcal{U}_-(\mathcal{O}))
\]
such that $p_+=\bar{u}_+\bar{m}$ and $p_-=\bar{u}_-\sigma(\bar{m})$. By $(*)$ we have $\mu(p)^{-1}u_-^{-1}\mu(p)\in K_1$ and $\mu(p)\sigma(u_+)\mu(p)^{-1}\in K_1$. Thus, using the fact that $\sigma(m)^{-1}$ commutes with $\mu(p)$, we find that
\begin{align*}
[[h'\mu(p)]] & =[[u_+mh\sigma(m)^{-1}u_-^{-1}\mu(p)]]=[[u_+mh\sigma(m)^{-1}\mu(p)]]\\
& = [[mh\sigma(m)^{-1}\mu(p)\sigma(u_+)]]=[[mh\sigma(m)^{-1}\mu(p)]] \\
& =[[mh\mu(p)\sigma(m)^{-1}]]=[[h\mu(p)]].
\end{align*}
\item[(ii)] Finally we check that $\zeta=\tilde{\zeta}\circ\gamma$. Let $\underline{I}$ be the $\mathcal{G}_{\mathbb{F}_p}$-zip associated to $\chi$ in Constr. \ref{EOConstr}, which defines $\zeta$. Consider a point $x\in\mathcal{S}$. Let $\beta\colon (\Lambda^*_{\mathcal{O}},s_{\mathcal{O}})\to(\mathbb{D}_x,s_{\mathrm{cris},x})$ be an isomorphism, and let $F_{\beta}=g_{\beta}^{\vee}\circ(1\otimes\sigma)$ for $g_{\beta}\in K\mu(p)K$, then $\gamma(x)=\langle g_{\beta}\rangle$. We may suppose that $g_{\beta}=h\mu(p)$ for some $h\in K$ (compare Lemma \ref{LinearizationLem}). In view of the classification of $\mathcal{G}_{\mathbb{F}_p}$-zips over $\overline{\mathbb{F}}$ and the definitions of $\zeta$ and $\tilde{\zeta}$ we then have to show that the pullback $\underline{I}_x$ is isomorphic to the standard zip $\underline{I}_{\bar{h}}$.
Let $\Lambda^*_{\overline{\mathbb{F}}}=\Lambda^*_{\chi,0}\oplus\Lambda^*_{\chi,1}$ be the weight decomposition with respect to the action of $\chi$ on $\Lambda^*_{\overline{\mathbb{F}}}$ (using, as always, the representation $(\cdot)^{\vee}$), and likewise $\Lambda^*_{\overline{\mathbb{F}}}=\Lambda^*_{\mu,0}\oplus\Lambda^*_{\mu,1}$ for $\mu$. Keeping the notations of \ref{EOConstr}, by definition we have $\underline{I}_x=(I_x,I_{+,x},I_{-,x},\iota_x)$, where
\begin{align*}
I_x:= & \mathbf{Isom}_{\overline{\mathbb{F}}}\big((\Lambda^*_{\overline{\mathbb{F}}},\bar{s}),\, (\overline{\mathcal{V}^{\circ}}_x,\bar{s}_{\mathrm{dR},x})\big),\\
I_{+,x}:= & \mathbf{Isom}_{\overline{\mathbb{F}}}\big((\Lambda^*_{\overline{\mathbb{F}}},\bar{s},\Lambda^*_{\overline{\mathbb{F}}}\supset\Lambda^*_{\chi,1}),\, (\overline{\mathcal{V}^{\circ}}_x,\bar{s}_{\mathrm{dR},x},\overline{\mathcal{V}^{\circ}}_x\supset\mathcal{C}_x)\big),\\
I_{-,x}:= & \mathbf{Isom}_{\overline{\mathbb{F}}}\big((\Lambda^*_{\overline{\mathbb{F}}},\bar{s},\Lambda^*_{\mu,0}\subset\Lambda^*_{\overline{\mathbb{F}}}),\, (\overline{\mathcal{V}^{\circ}}_x,\bar{s}_{\mathrm{dR},x},\mathcal{D}_x\subset\overline{\mathcal{V}^{\circ}}_x)\big),
\end{align*}
and $\iota_x\colon I_{+,x}^{(\sigma)}/U_+^{(\sigma)}\to I_{-,x}/U_-^{(\sigma)}$ is given as follows: Fix an element $\eta_-\in I_{-,x}$. Then for each $\eta_+\in I_{+,x}$ the image $\iota_x(\eta_+^{(\sigma)}U_+^{(\sigma)})$ is the $U_-^{(\sigma)}$-coset of the isomorphism
\begin{align*}
\Lambda^*_{\overline{\mathbb{F}}}=\Lambda^*_{\mu,0}\oplus\Lambda^*_{\mu,1} & \stackrel{\eta_+^{(\sigma)}}{\longrightarrow}\eta_+^{(\sigma)}(\Lambda^*_{\mu,0})\oplus\mathcal{C}^{(\sigma)}
\simeq(\overline{\mathcal{V}^{\circ}}^{(\sigma)}/\mathcal{C}^{(\sigma)})\oplus \mathcal{C}^{(\sigma)}\\
& \stackrel{\phi_0\oplus\phi_1}{\longrightarrow} \mathcal{D}\oplus (\overline{\mathcal{V}^{\circ}}/\mathcal{D})
\simeq\mathcal{D}\oplus\eta_-(\Lambda^*_{\mu,1})=\overline{\mathcal{V}^{\circ}}
\end{align*}
(here we have omitted the subscripts), and this is in fact independent of the choice of $\eta_-$.\\
Let $(\overline{\mathbb{D}_x},\overline{F},\overline{V})$ be the reduction mod $p$ of $\mathbb{D}_x$, this is the contravariant Dieudonn{\'e} space of $\mathcal{A}_x[p]$ (compare the proof of Lemma \ref{LinearizationLem}). We use the canonical isomorphism $(\overline{\mathcal{V}^{\circ}}_x,\bar{s}_{\mathrm{dR},x})\cong(\overline{\mathbb{D}_x},s_{\mathrm{cris},x})$. By the result of Oda (\cite{Od1}, 5.11.) we know that $\mathcal{C}_x$ and $\mathcal{D}_x$ correspond to the subspaces $\ker(\overline{F})=\im(\overline{V})$ and $\ker(\overline{V})=\im(\overline{F})$ of $\overline{\mathbb{D}_x}$ respectively, and the isomorphisms $\phi_0$ and $\phi_1$ get identified with the maps
\[
\left(\overline{\mathbb{D}_x}/\ker(\overline{F})\right)^{(\sigma)}\ \stackrel{\overline{F}^{\mathrm{lin}}}{\longrightarrow}\ \im(\overline{F}),\quad \im(\overline{V})^{(\sigma)}\ \stackrel{(\overline{V}^{-1})^{\mathrm{lin}}}{\longrightarrow}\ \overline{\mathbb{D}_x}/\ker(\overline{V}).
\]
Note that $V^{\mathrm{lin}}\colon\mathbb{D}_x\to\mathbb{D}_x^{(\sigma)}$ is given by $p\cdot (F^{\mathrm{lin}})^{-1}$, so we have the identity $(\beta^{(\sigma)})^{-1}\circ V^{\mathrm{lin}}\circ\beta=p\cdot (g_{\beta}^{\vee})^{-1}$. Denote by $\mathrm{pr}_i$ the projections on the factors of the decomposition $\Lambda^*_{\overline{\mathbb{F}}}=\Lambda^*_{\mu,0}\oplus\Lambda^*_{\mu,1}$. The reduction mod $p$ of the map on $\Lambda^*_{\mathcal{O}}$ given by $\mu(p)^{\vee}$ is exactly $\mathrm{pr}_0$, and the reduction of the map $p\cdot(\mu(p)^{\vee})^{-1}$ is $\mathrm{pr}_1$, so we have the commutative diagrams
\[
\begin{xy}
\xymatrix{
\overline{\mathbb{D}_x}^{(\sigma)}\ar[rr]^{\overline{F}^{\mathrm{lin}}} & & \overline{\mathbb{D}_x}\\
\Lambda^*_{\overline{\mathbb{F}}}\ar[rr]^{\bar{h}^{\vee}\circ\mathrm{pr}_0} \ar[u]^{\bar{\beta}^{(\sigma)}} & & \Lambda^*_{\overline{\mathbb{F}}}\ar[u]_{\bar{\beta}}
}
\end{xy}
\qquad
\quad
\begin{xy}
\xymatrix{
\overline{\mathbb{D}_x}\ar[rr]^{\overline{V}^{\mathrm{lin}}} & & \overline{\mathbb{D}_x}^{(\sigma)}\\
\Lambda^*_{\overline{\mathbb{F}}}\ar[rr]^{\mathrm{pr}_1\circ(\bar{h}^{\vee})^{-1}} \ar[u]^{\bar{\beta}} & & \Lambda^*_{\overline{\mathbb{F}}}\ar[u]_{\bar{\beta}^{(\sigma)}}
}
\end{xy},
\]
and in particular $\bar{\beta}^{-1}(\ker(\overline{F}))=(1\otimes\sigma^{-1})(\Lambda^*_{\mu,1})=\Lambda^*_{\chi,1}$ and $\bar{\beta}^{-1}(\im(\overline{F}))=\bar{h}^{\vee}(\Lambda^*_{\mu,0})$. This implies that $\bar{\beta}\in I_{+,x}$ and $\bar{\beta}\cdot\bar{h}=\bar{\beta}\circ\bar{h}^{\vee}\in I_{-,x}$. Putting things together and setting $\mathcal{C}_0:=\bar{\beta}^{(\sigma)}(\Lambda^*_{\mu,0})$ and $\mathcal{D}_1:=\bar{\beta}(\bar{h}^{\vee}(\Lambda^*_{\mu,1}))$, from the diagrams above we obtain a commutative diagram
\[
\begin{xy}
\xymatrix{
**[r] \overline{\mathcal{V}^{\circ}}^{(\sigma)}\simeq\mathcal{C}_0\oplus\mathcal{C}^{(\sigma)}\ar[rrrr]^{\phi_0\oplus\phi_1} & & & & **[l] \mathcal{D}\oplus\mathcal{D}_1\simeq\overline{\mathcal{V}^{\circ}}\\
\Lambda^*_{\overline{\mathbb{F}}}\ar[rrrr]^{\bar{h}^{\vee}} \ar[u]^{\bar{\beta}^{(\sigma)}} & & & & \Lambda^*_{\overline{\mathbb{F}}}\ar[u]_{\bar{\beta}}
}
\end{xy}
\]
which shows that $\iota_x(\bar{\beta}^{(\sigma)}U_+^{(\sigma)})=(\bar{\beta}\circ\bar{h}^{\vee})U_-^{(\sigma)}$. So we have indeed $\underline{I}_x\sim\underline{I}_{\bar{h}}$, which concludes the proof.
\end{enumerate}
\end{proof}
\subsection{Group theoretic criteria and the $\mu$-ordinary locus}
\label{GrThCritSubsec}
Let us summarize the results of the last subsection: The Newton strata $\mathcal{N}^b$ and Ekedahl-Oort strata $\mathcal{S}^w$ are given as the fibers of the maps $\theta$ and $\zeta$ respectively. We have a commutative diagram
\[
\begin{xy}
\xymatrix{
& & & B(G,\mu)\\
\mathcal{S}\ar[rrru]^{\theta}\ar[rr]^{\gamma}\ar[rrrd]_{\zeta} & & \mathcal{C}(\mathcal{G},\mu)\ar[ru]_{\tilde{\theta}}\ar[rd]^{\tilde{\zeta}} & \\
& & & {^JW}
}
\end{xy}
\]
For $b\in B(G,\mu)$ we have $\tilde{\theta}^{-1}(\{b\})=\mathcal{C}(\mathcal{G},\mu)\cap b$ and, in view of \ref{ZipClass}, Remark \ref{OrbitRepRem} and Prop. \ref{ZetaFacProp}, for $w\in{^JW}$ we have $\tilde{\zeta}^{-1}(\{w\})=[[\dot{w}_{0,J}\dot{w}\dot{w}_0\mu(p)]]=[[\dot{w}\dot{w}_0\sigma(\dot{w}_{0,J})\mu(p)]]$, compare (\cite{Vi1}, Thm. 1.1.(1)).
\begin{definition}
\label{TrRepDef}
For $w\in{^JW}$ we define $\tilde{w}:=\dot{w}\dot{w}_0\sigma(\dot{w}_{0,J})\mu(p)$.
\end{definition}
We note a few consequences for the comparison of the two stratifications:
\begin{enumerate}[(1)]
\item For $b\in B(G,\mu)$ and $w\in{^JW}$ we have the following necessary criterion for the corresponding strata to intersect:
\begin{center}
If $\mathcal{N}^b\cap\mathcal{S}^w\neq\emptyset$, then $(\mathcal{C}(\mathcal{G},\mu)\cap b)\cap [[\tilde{w}]]=b\cap [[\tilde{w}]]\neq\emptyset$.
\end{center}
If the map $\gamma$ is surjective, then this criterion is also sufficient.
\item Let $b\in B(G,\mu), w\in{^JW}$. If $[[\tilde{w}]]\subseteq \mathcal{C}(\mathcal{G},\mu)\cap b$ (resp. $\supseteq$, resp. $=$), then $\mathcal{S}^w\subseteq \mathcal{N}^b$ (resp. $\supseteq$, resp. $=$).
\item Let $I:=\{g\in K\mid \bar{g}\in B\}\subseteq K$, this is an Iwahori subgroup of $\mathcal{G}(L)$. By (\cite{Vi1}, Thm. 1.1.(2)), for every $w\in{^JW}$ any element of $I \tilde{w}I$ is $I$-$\sigma$-conjugate to an element in $K_1 \tilde{w} K_1$, which means that in particular the sets $I \tilde{w}I$ and $[[\tilde{w}]]$ have the same image in $\mathcal{C}(\mathcal{G},\mu)$. Hence we may replace $[[\tilde{w}]]$ by $I \tilde{w} I$ in (1) and (2).\\
In particular the question whether Newton strata and Ekedahl-Oort strata intersect is closely related to the study of affine Deligne-Lusztig varieties: Let $w\in{^JW}$ and $b\in B(G,\mu)$, and choose a $b_0\in b$. The affine Deligne-Lusztig variety of $\tilde{w}$ and $b_0$ is defined as $X_{\tilde{w}}(b_0):=\{gI\in G(L)/I\mid g^{-1}b_0\sigma(g)\in I\tilde{w}I\}$. It is nonempty if and only if $b$ and $I\tilde{w}I$ intersect. So if $\mathcal{N}^b\cap\mathcal{S}^w\neq\emptyset$ then $X_{\tilde{w}}(b_0)\neq\emptyset$, and the converse is true if $\gamma$ is surjective.
\end{enumerate}
As an application of these principles we can now prove our main results on the $\mu$-ordinary locus:
\begin{proposition}
\label{OrdEqProp1}
We have the equalities $K\mu(p)K\cap[\mu(p)]=\langle\mu(p)\rangle=[[\mu(p)]]$.
\end{proposition}
In fact, this statement holds true in a more general setup. We will restate and prove this proposition in the next section. It implies the main theorems already stated in the introduction:
\begin{theorem}[cf. Thm. \ref{MainThm2}]
\label{OrdLocThm}
The $\mu$-ordinary locus in $\mathcal{S}$ is equal to the open Ekedahl-Oort stratum $\mathcal{S}^{w_{\mathrm{max}}}$, in particular it is open and dense. Further, for any two $\overline{\mathbb{F}}$-valued points $x,x'$ in the $\mu$-ordinary locus there is an isomorphism $(\mathbb{D}_x,s_{\mathrm{cris},x})\simeq(\mathbb{D}_{x'},s_{\mathrm{cris},x'})$.
\end{theorem}
\begin{proof}
The $\mu$-ordinary locus is by definition equal to $\mathcal{N}^{b_{\mathrm{max}}}$, and we have $b_{\mathrm{max}}=[\mu(p)]$, see Remark \ref{NewtParRem}. On the other hand, as $w_{\mathrm{max}}=w_{0,J}w_0$, we have $[[\tilde{w}_{\mathrm{max}}]]=[[\mu(p)]]$. So the equality of the strata follows from Prop. \ref{OrdEqProp1} and the observation (2) above. By Remark \ref{EOPropRem}(3), $\mathcal{S}^{w_{\mathrm{max}}}$ is open and dense in $\mathcal{S}$, so the same holds for the $\mu$-ordinary locus. Since Prop. \ref{OrdEqProp1} also asserts that we have the equality $\mathcal{N}^{b_{\mathrm{max}}}=\gamma^{-1}(\{\langle\mu(p)\rangle\})$, the last statement follows from Corollary \ref{LinearizationCor}.
\end{proof}
\begin{corollary}[cf. Thm. \ref{MainThm1}]
\label{OrdLocCor}
The $\mu$-ordinary locus in $\mathscr{S}\otimes\kappa(v)$ is open and dense.
\end{corollary}
\section{Ordinary loci for reductive unramified groups}
\label{GroupResSec}
In the final section of this paper we prove Prop. \ref{OrdEqProp1} in a more general setup which also includes the function field case. We modify our notation accordingly:
\begin{notation}
Let $\mathbb{F}_q$ be a finite extension of $\mathbb{F}_p$, and let $F$ be either $\mathbb{Q}_q$ or the field $\mathbb{F}_q((t))$ of Laurent series.
We denote by $L$ the completion of the maximal unramified extension of $F$, i.e. either $L=\Frac(W(\overline{\mathbb{F}}))$ or $L=\overline{\mathbb{F}}((t))$. Here as before $\overline{\mathbb{F}}$ is an algebraic closure of $\mathbb{F}_p$. Let $\mathcal{O}\subseteq L$ and $\mathcal{O}_F\subseteq F$ be the valuation rings respectively, let $\epsilon$ be a uniformizing element of $\mathcal{O}_F$ (and hence of $\mathcal{O}$), e.g. $\epsilon=p$ in the case of mixed characteristics or $\epsilon=t$ in the equicharacteristic case. In this section we denote by $\sigma$ the Frobenius map $a\mapsto a^q$ of $\overline{\mathbb{F}}$ over $\mathbb{F}_q$, and also the corresponding maps on $\mathcal{O}$ and on $L$.
Let $\mathcal{G}$ be a connected, reductive group over $\mathcal{O}_F$ (it is then quasisplit and split over a finite unramified extension of $\mathcal{O}_F$). Fix a Borel pair $(\mathcal{B},\mathcal{T})$ defined over $\mathcal{O}_F$. As before, we obtain the groups $X_*(\mathcal{T}), X^*(\mathcal{T})$ of cocharacters and characters of $\mathcal{T}$ over $\mathcal{O}$, the positive roots resp. roots $\Phi^+\subset\Phi\subset X^*(\mathcal{T})$ with respect to $\mathcal{T}$, and the Weyl group $(W,S)$, all equipped with an action of $\sigma$.
As in section \ref{StratCompSec} let $K:=\mathcal{G}(\mathcal{O})$ and $K_1:=\{g\in K\mid \bar{g}=1\}$, and for $g\in\mathcal{G}(L)$ define the sets $[g]$, $\langle g\rangle$ and $[[g]]$ as before. Let $I:=\{g\in K\mid \bar{g}\in \mathcal{B}(\overline{\mathbb{F}})\}$, this is an Iwahori subgroup of $\mathcal{G}(L)$.
\end{notation}
The formulation of Prop. \ref{OrdEqProp1} in this context reads as follows:
\begin{proposition}
\label{OrdEqProp2}
Let $\mu\in X_*(\mathcal{T})$ be a dominant cocharacter. Then for $g\in \mathcal{G}(L)$ the following are equivalent:
\begin{enumerate}[(i)]
\item $g\in [\mu(\epsilon)]\cap K\mu(\epsilon)K$,
\item $g\in \langle\mu(\epsilon)\rangle$,
\item $g\in [[\mu(\epsilon)]]$.
\end{enumerate}
\end{proposition}
Note that $\mu$ is not assumed to be minuscule.
\begin{remark}
\label{MoonenCompRem}
This proposition should be seen as a generalization of (\cite{Mo2}, Thm. 1.3.7. resp. Thm. 3.2.7.): If $\mathcal{G}$ arises from a PEL-type Shimura datum, then the element $\mu(\epsilon)\in\mathcal{G}(L)$ takes the place of the $p$-divisible group $\underline{X}^{\mathrm{ord}}$ defined in \cite{Mo2}.
\end{remark}
We will prove Prop. \ref{OrdEqProp2} throughout the rest of the section. Of course, the implications $(ii)\Rightarrow(i)$ and $(ii)\Rightarrow(iii)$ are trivial.\\
The implication $(iii)\Rightarrow(ii)$ follows from the property that every element in the double coset $I\mu(\epsilon) I$ is $\sigma$-conjugate to $\mu(\epsilon)$ by an element of $I$. This is well-known, for example it is a consequence of the fact that, with the conventions of \cite{GHN}, the element $\mu\in \widetilde{W}=X_*(\mathcal{T})\rtimes W$ is a fundamental $(\emptyset,1,\sigma)$-alcove (see loc.cit., Thm. 3.3.1., Prop. 3.4.3.). So if $g\in [[\mu(\epsilon)]]$, say $g=hh_1\mu(\epsilon)h_1'\sigma(h)^{-1}$ for some $h\in K$ and $h_1,h_1'\in K_1$, then in particular $h_1\mu(\epsilon)h_1'\in I\mu(\epsilon)I$ and so there is an $i\in I$ such that $h_1\mu(\epsilon)h_1'=i\mu(\epsilon)\sigma(i)^{-1}$ and thus $g=(hi)\mu(\epsilon)\sigma(hi)^{-1}\in\langle\mu(\epsilon)\rangle$.\\
To show the remaining implication we will use the Hodge-Newton decomposition for affine Deligne-Lusztig sets in the affine Grassmannian, which was first formulated for unramified groups by Kottwitz and later generalized by Mantovan and Viehmann. We briefly recall the main statement: Let $\lambda\in X_*(\mathcal{T})$ be a cocharacter (which is usually taken to be dominant) and let $b_0\in\mathcal{G}(L)$, then the affine Deligne-Lusztig set associated to these elements is defined as
\[
X_{\lambda}^{\mathcal{G}}(b_0):=\{g\in\mathcal{G}(L)/K\mid g^{-1}b_0\sigma(g)\in K\lambda(\epsilon)K\}.
\]
Let $\mathcal{M}\subseteq\mathcal{G}$ be a Levi subgroup defined over $\mathcal{O}_F$ such that $\mathcal{T}\subseteq\mathcal{M}$ (note that this differs from the $\mathcal{M}$ considered in section \ref{StratCompSec}), then $\mathcal{M}$ is again a connected reductive group. Consider the Borel pair $(\mathcal{B}\cap\mathcal{M}, \mathcal{T})$. Then $\Phi_{\mathcal{M}}\subseteq\Phi$, $\Phi_{\mathcal{M}}^+=\Phi^+\cap\Phi_{\mathcal{M}}$ and the Weyl group of $\mathcal{M}$ is of the form $(W_{S_{\mathcal{M}}},S_{\mathcal{M}})$ for some subset $S_{\mathcal{M}}\subseteq S$. We have the Newton map and Kottwitz map for $\mathcal{M}$
\[
\nu_{\mathcal{M}}\colon B(\mathcal{M})\longrightarrow(W_{\mathcal{M}}\setminus X_*(\mathcal{T})_{\mathbb{Q}})^{\langle\sigma\rangle}, \qquad \kappa_{\mathcal{M}}\colon B(\mathcal{M})\longrightarrow \pi_1(\mathcal{M})_{\langle\sigma\rangle}.
\]
For every $\lambda\in X_*(\mathcal{T})$ and $b_0\in\mathcal{M}(L)$ there is the analogous Deligne-Lusztig set
\[
X_{\lambda}^{\mathcal{M}}(b_0)=\{m\in\mathcal{M}(L)/\mathcal{M}(\mathcal{O})\mid m^{-1}b_0\sigma(m)\in \mathcal{M}(\mathcal{O})\lambda(\epsilon)\mathcal{M}(\mathcal{O})\},
\]
and a natural map $X_{\lambda}^{\mathcal{M}}(b_0)\to X_{\lambda}^{\mathcal{G}}(b_0)$, which is clearly injective.
Let $V:=X_*(\mathcal{T})\otimes_{\mathbb{Z}}\mathbb{R}$, it carries an action of $W$ and of $\sigma$. Let $V_{\mathcal{M}}\subseteq V$ be the subspace of elements which are invariant under the action of $W_{\mathcal{M}}$ and $\sigma$, and let
\[
V_{\mathcal{M}}^+:=\{v\in V_{\mathcal{M}}\mid \langle\alpha,v\rangle >0\text{ for all }\alpha\in\Phi^+\setminus\Phi_{\mathcal{M}}^+\}.
\]
Fix an $n\in\mathbb{N}$ such that $\sigma^n$ acts as the identity on $V$. The composition of the two maps
\[
v\longmapsto \frac{1}{n}\sum_{i=0}^{n-1}\sigma^i(v),\quad \quad v\longmapsto \frac{1}{|W_{\mathcal{M}}|}\sum_{w\in W_{\mathcal{M}}}w(v)
\]
gives a projection map $V\twoheadrightarrow V_{\mathcal{M}}$, which induces an isomorphism $\pi_1(\mathcal{M})_{\langle\sigma\rangle}\otimes_{\mathbb{Z}}\mathbb{R}\simeq V_{\mathcal{M}}$. Let $\pi_1(\mathcal{M})_{\langle\sigma\rangle}^+$ be the subset of elements whose image under the resulting map $\pi_1(\mathcal{M})_{\langle\sigma\rangle}\to V_{\mathcal{M}}$ lies in $V_{\mathcal{M}}^+$.
\begin{proposition}[\cite{MV}, Thm. 6]
\label{HNDecProp}
Let $\lambda\in X_*(\mathcal{T})$ be $\mathcal{G}$-dominant, let $b_0\in\mathcal{M}(L)$ such that $\kappa_{\mathcal{M}}([b_0]_{\mathcal{M}})$ equals the image of $\lambda$ in $\pi_1(\mathcal{M})_{\langle\sigma\rangle}$. If $\kappa_{\mathcal{M}}([b_0]_{\mathcal{M}})\in\pi_1(\mathcal{M})_{\langle\sigma\rangle}^+$ and the $\mathcal{M}$-dominant representative of $\nu_{\mathcal{M}}([b_0]_{\mathcal{M}})\in (W_{\mathcal{M}}\setminus X_*(\mathcal{T})_{\mathbb{Q}})^{\langle\sigma\rangle}$ is also $\mathcal{G}$-dominant, then the natural map $X_{\lambda}^{\mathcal{M}}(b_0)\hookrightarrow X_{\lambda}^{\mathcal{G}}(b_0)$ is an isomorphism.
\end{proposition}
Now we show the remaining implication $(i)\Rightarrow (ii)$ of Prop. \ref{OrdEqProp2}: Consider an element $g\in [\mu(\epsilon)]\cap K\mu(\epsilon)K$. Then $g=h^{-1}\mu(\epsilon)\sigma(h)$ for some $h\in\mathcal{G}(L)$, and we need to show that we may replace $h$ by some element of $K$. By definition, $h$ lies in the affine Deligne-Lusztig set $X_{\mu}^{\mathcal{G}}(\mu(\epsilon))$. Let $\bar{\mu}:=\frac{1}{n}\sum_{i=0}^{n-1}\sigma^i(\mu)$, where as before $n\in\mathbb{N}$ is chosen such that $\sigma^n$ acts as the identity. Consider the subgroup
\[
\mathcal{M}:=\Cent_{\mathcal{G}}(\bar{\mu}):=\Cent_{\mathcal{G}}(n\cdot\bar{\mu})\subseteq\mathcal{G},
\]
this is a Levi subgroup (cf. \cite{SGA3}, Exp. XXVI, Cor. 6.10.) which is defined over $\mathcal{O}_F$, as $n\cdot\bar{\mu}$ is $\sigma$-invariant.
We claim that $\mu(\epsilon)$ is central in $\mathcal{M}$: Indeed, we have
\[
\mathrm{Z}(\mathcal{M})=\bigcap_{\alpha\in\Phi_{\mathcal{M}}}\ker(\alpha)\subseteq \mathcal{T}
\]
(see \cite{SGA3}, Exp. XXII, Cor. 4.1.6.). Let $\alpha\in\Phi_{\mathcal{M}}^+$. By definition of $\mathcal{M}$ the cocharacter $n\cdot\bar{\mu}$ maps to the center of $\mathcal{M}$, so we find that
\[
0=\langle\alpha,n\cdot\bar{\mu}\rangle=\sum_{i=0}^{n-1}\langle\alpha,\sigma^i(\mu)\rangle.
\]
Since $\sigma$ acts on the set of dominant cocharacters, every summand in the upper equation is nonnegative, so they are all equal to zero. In particular, $\langle\alpha,\mu\rangle=0$ (for every $\alpha\in\Phi_{\mathcal{M}}^+$), which implies that $\mu$ is also central in $\mathcal{M}$.
Next, note that the pair $(\mu,\mu(\epsilon))$ satisfies the conditions of Prop. \ref{HNDecProp}: The $\mathcal{M}$-dominant Newton vector of $[\mu(\epsilon)]_{\mathcal{M}}$ is exactly $\bar{\mu}$ (cf. Remark \ref{NewtParRem}), which is $\mathcal{G}$-dominant as well. Further, the image of $\kappa_{\mathcal{M}}([\mu(\epsilon)]_{\mathcal{M}})$ in $V_{\mathcal{M}}$ is the projection of $\mu \in V$ to $V_{\mathcal{M}}$, which is also equal to $\bar{\mu}$, and this lies in $V_{\mathcal{M}}^+$ by definition of $\mathcal{M}$. We have therefore the Hodge-Newton decomposition $X_{\mu}^{\mathcal{M}}(\mu(\epsilon))\cong X_{\mu}^{\mathcal{G}}(\mu(\epsilon))$, so there is an element $m\in\mathcal{M}(L)$ such that
\[
mK=hK\quad \text{and} \quad m^{-1}\mu(\epsilon)\sigma(m)\in\mathcal{M}(\mathcal{O})\mu(\epsilon)\mathcal{M}(\mathcal{O}).
\]
Since $\mu(\epsilon)$ commutes with every element of $\mathcal{M}(L)$, the last equation implies that $m^{-1}\sigma(m)\in\mathcal{M}(\mathcal{O})$. As $\mathcal{M}$ is a connected reductive group over $\mathcal{O}$, a variant of Lang's theorem holds for $\mathcal{M}(\mathcal{O})$ (see \cite{Vi1}, Lemma 2.1.), so we obtain an element $m'\in\mathcal{M}(\mathcal{O})$ such that $(m')^{-1}\sigma(m')=m^{-1}\sigma(m)$. Let $c\in K$ such that $h=mc$, then altogether we have
\begin{align*}
g=h^{-1}\mu(\epsilon)\sigma(h) & =c^{-1}(m^{-1}\mu(\epsilon)\sigma(m))\sigma(c)\\
& =c^{-1}(m^{-1}\sigma(m)\mu(\epsilon))\sigma(c)\\
& =c^{-1}((m')^{-1}\sigma(m')\mu(\epsilon))\sigma(c)\\
& =c^{-1}(m')^{-1}\mu(\epsilon)\sigma(m')\sigma(c) \in\langle\mu(\epsilon)\rangle,
\end{align*}
which was to be shown. This concludes the proof of Prop. \ref{OrdEqProp2}. \hfill{$\Box$}
\section{Introduction}
\label{sec:introduction}
In this work, we propose a novel computational approach for optimal
reconstruction of biomedical images based on any available measurements
usually obtained with some noise. In particular, this approach is useful
in medical applications dealing with models characterized by parameters
with binary or near-binary distributions, e.g.~heat or electrical
conductivity.
is built around a derivative-free optimization algorithm and supported
by a set of sample solutions. These samples are generated synthetically
with a geometry based on any available prior knowledge of the simulated
phenomena and the expected structure of obtained images. The ease of
parallelization allows operations on very large sample sets which
enables the best approximations for the initial guess, and, as a result,
close proximity to the local/global solution of the optimal control
problem. The controls are effectively defined to utilize individual
contributions from samples selected to represent the compound image
and the efficient parameterization obtained from the description of
the samples' geometry.
The proposed computational framework has an easy-to-follow design and
is tuned by a small number of computational parameters, making the
approach simple to implement in practice for various applications
beyond biomedical imaging.
achieved by applying the coordinate descent method, customized to work
with individual controls in a predefined custom order.
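For illustration purposes, such a derivative-free coordinate-descent loop may be sketched as follows (a simplified sketch only, not the actual implementation; all function and parameter names are hypothetical):

```python
import numpy as np

def coordinate_descent(cost, x0, step=0.5, order=None, sweeps=50, shrink=0.5):
    """Derivative-free coordinate descent: sweep through the controls in a
    predefined order, probe the cost at x[i] +/- step, keep any improvement,
    and shrink the probe step whenever a full sweep yields none."""
    x = np.asarray(x0, dtype=float).copy()
    best = cost(x)
    idx = list(range(len(x))) if order is None else list(order)
    for _ in range(sweeps):
        improved = False
        for i in idx:
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
                    break
        if not improved:
            step *= shrink  # refine the search once no coordinate improves
    return x, best

# toy quadratic cost with minimum at (1, -2)
x_opt, j_opt = coordinate_descent(lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2,
                                  x0=[0.0, 0.0])
```

Here the \texttt{order} argument realizes the predefined custom order of the controls mentioned above.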
As is known from practical applications, fine-scale optimization performed
on fine meshes provides high-resolution images. On the other hand, such
solutions require increased computational time due to the size of the
fine mesh and the associated control space. In addition, the obtained
images may not provide clear boundaries between regions identified by
different physical properties in space. As a result, a smooth transition
cannot provide accurate recognition of shapes, e.g.~of cancer-affected
regions, while solving an inverse problem of cancer detection~(IPCD).
In our computations, the fine mesh is used only to assess the measurement
fit in terms of evaluated cost functionals.
Without loss of generality, in the current paper, we keep the main
focus on applying our new computational approach to IPCD by the
Electrical Impedance Tomography~(EIT) technique, however, this
methodology could be easily extended to a broad range of problems
in biomedical sciences, also in physics, geology, chemistry, etc.
EIT is a rapidly developing non-invasive imaging technique gaining
popularity in various medical applications, such as screening for
cancer detection \cite{Brown2003,Holder2004,Lionheart2004,Abascal2011}.
It is a well-known fact that the electrical properties, e.g.~electrical
conductivity or permittivity, of tissues differ depending on whether
they are healthy or affected by cancer. This phenomenon is used
in EIT to produce images of biological tissues by interpreting their
response to applied voltages or injected currents
\cite{Brown2003,Holder2004}. The inverse EIT problem
deals with reconstructing the electrical conductivity by measuring
voltages or currents at electrodes placed on the surface of a test
volume. This so-called Calderon type inverse problem \cite{Calderon1980}
is highly ill-posed, refer to topical review \cite{Borcea2002}.
Since the 1980 various techniques have been suggested to solve it
computationally. We refer to the recent papers
\cite{Adler2015,Bera2018,Wang2020} with review on the current state
of the art and the existing open problems associated with EIT and
its applications.
This paper proceeds as follows. In Section~\ref{sec:math} we present
a very general mathematical description of the inverse EIT problem
formulated as an optimal control problem. Procedures for solving this
optimization problem with the proposed sample-based parameterization
are discussed in Section~\ref{sec:solution}. Model descriptions and
detailed computational results including discussion on chosen methods
are presented in Section~\ref{sec:results}. Concluding remarks are
provided in Section~\ref{sec:remarks}.
\section{Mathematical Model for Inverse EIT Problem}
\label{sec:math}
As discussed at length in \cite{AbdullaBukshtynovSeif} and
\cite{KoolmanBukshtynov}, the inverse EIT problem is formulated as a
PDE-constrained optimal control problem for an open and bounded set
$\Omega \subset \mathbb{R}^n, \ n = 2, 3$, representing body with electrical
conductivity at point $x \in \Omega$ given by function
$\sigma(x): \, \Omega \rightarrow \mathbb{R}_+$. In this paper we use
the so-called ``voltage--to--current" model where voltages (electrical
potentials) $U = (U_{\ell})_{\ell=1}^m \in \mathbb{R}^m$ are applied to $m$
electrodes $(E_{\ell})_{\ell=1}^m$ with contact impedances
$(Z_{\ell})_{\ell=1}^m \in \mathbb{R}^m_+$ subject to the ground
(zero potential) condition
\begin{equation}
\sum_{\ell=1}^m U_{\ell}=0.
\label{eq:ground_conds}
\end{equation}
These voltages initiate electrical currents
$(I_{\ell})_{\ell=1}^m\in \mathbb{R}^m$ through the same electrodes $E_{\ell}$
placed at the periphery of the body $\partial \Omega$.
The electrical currents may be computed as
\begin{equation}
I_{\ell} = \int_{E_{\ell}} \sigma(x) \Dpartial{u(x)}{n} \, ds,
\quad \ell = 1, \ldots, m
\label{eq:el_current}
\end{equation}
based on the conductivity field $\sigma(x)$ and the distribution of electrical
potential $u(x): \, \Omega \rightarrow \mathbb{R}$ obtained as a solution of the
following elliptic problem
\begin{subequations}
\begin{alignat}{3}
\boldsymbol{\nabla} \cdot \left[ \sigma(x) \boldsymbol{\nabla} u(x) \right] &= 0,
&& \quad x \in \Omega \label{eq:forward_1}\\
\Dpartial{u(x)}{n} &= 0, && \quad x \in \partial \Omega -
\bigcup\limits_{\ell=1}^{m} E_{\ell} \label{eq:forward_2}\\
u(x) + Z_{\ell} \sigma(x) \Dpartial{u(x)}{n} &= U_{\ell},
&& \quad x \in E_{\ell}, \ \ell= 1, \ldots, m
\label{eq:forward_3}
\end{alignat}
\label{eq:forward}
\end{subequations}
in which $n$ denotes the external unit normal vector on $\partial \Omega$.
We set the conductivity $\sigma(x)$ as the control variable and
formulate the inverse EIT (conductivity) problem \cite{Calderon1980}
as a PDE-constrained optimal control problem \cite{AbdullaBukshtynovSeif}
by considering least-squares minimization of the mismatches
$\left( I_{\ell} - I_{\ell}^* \right)^2$, where
$(I_{\ell}^*)_{\ell=1}^m \in \mathbb{R}^m$ are measurements of the electrical
currents $I_{\ell}$. We also note the well-known fact that this
inverse EIT problem, when solved in a discretized domain $\Omega$, is
highly ill-posed. Therefore, we enlarge the data up to size $m^2$ by
adding new measurements following the
``rotation scheme'' described in detail in \cite{AbdullaBukshtynovSeif}
while keeping the number of unknown parameters, i.e.~elements in the
discretized description of $\sigma(x)$, fixed. Having a new set of
data $(I_{\ell}^{k*})_{\ell,k=1}^m$ and in light of the Robin
condition \eqref{eq:forward_3} used together with \eqref{eq:el_current},
we define a complete form of the cost functional
\begin{equation}
\mathcal{J} (\sigma) = \sum_{k=1}^{m} \sum_{\ell=1}^m
\left[ \int_{E_{\ell}} \dfrac{U^k_{\ell}-u^k(x;\sigma)}{Z_{\ell}}
\, ds - I^{k*}_{\ell} \right]^2
\label{eq:cost_functional}
\end{equation}
for the optimal control problem
\begin{equation}
\hat \sigma(x) = \underset{\sigma}{\operatorname{argmin}} \ \mathcal{J}(\sigma)
\label{eq:minJ_sigma}
\end{equation}
subject to PDE constraint \eqref{eq:forward} where each function
$u^k(\cdot; \sigma), \ k = 1, \ldots, m$, solves elliptic PDE problem
\eqref{eq:forward_1}--\eqref{eq:forward_3}. We also note that the
solutions of the forward EIT problem, after applying \eqref{eq:el_current}
and adding some noise, may be used to generate various model examples
(synthetic data) for inverse EIT problems that adequately mimic
cancer-related diagnoses seen in practice.
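As an illustration, the least-squares structure of the cost functional \eqref{eq:cost_functional} reduces to a double sum of squared current mismatches. The sketch below assumes the boundary integrals have already been evaluated, so `simulated` and `measured` are hypothetical $m \times m$ arrays of currents $I_{\ell}^k$; it is not the paper's implementation.

```python
def eit_cost(simulated, measured):
    """Least-squares mismatch summed over all m^2 electrode currents,
    mirroring the double sum over k and l in the cost functional."""
    return sum((s - m) ** 2
               for row_s, row_m in zip(simulated, measured)
               for s, m in zip(row_s, row_m))

# toy m = 2 example with hypothetical current values I_l^k
I_sim = [[0.1, 0.2], [0.3, 0.4]]
I_meas = [[0.2, 0.2], [0.3, 0.2]]
cost = eit_cost(I_sim, I_meas)
```

The cost vanishes exactly when the simulated currents match the measured ones, which is the target of the minimization \eqref{eq:minJ_sigma}.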
\section{Solution by Sample-based Parameterization}
\label{sec:solution}
\subsection{Preliminaries and Main Notations}
\label{sec:step_0}
Without loss of generality, here we discuss our new algorithm for
solving problem \eqref{eq:minJ_sigma} in 2D ($n = 2$) domain
\begin{equation}
\Omega = \left\{ x \in \mathbb{R}^2 : \ | x |^2 < R^2 \right\}
\label{eq:domain}
\end{equation}
which is a disc of radius $R$. However, the same analysis is easily
extended to domains $\Omega$ of arbitrary complexity and to 3D ($n = 3$)
settings. In addition, we assume that the actual (true) electrical
conductivity $\sigma_{true}(x)$ we seek to reconstruct can be
represented by
\begin{equation}
\sigma_{true}(x) = \left\{
\begin{aligned}
\sigma_c, & \quad x \in \Omega_c,\\
\sigma_h, & \quad x \in \Omega_h,
\end{aligned}
\right.
\quad \Omega_c \cap \Omega_h = \emptyset
\label{eq:sigma_true}
\end{equation}
where $\sigma_c$ and $\sigma_h$ are known constants for the
respective cancer-affected region $\Omega_c$ and the
healthy tissue part $\Omega_h$.
We seek the solution of \eqref{eq:minJ_sigma} in the form
\begin{equation}
\sigma(x) = \sum_{i=1}^{N_s} \alpha_i \bar \sigma_i(x),
\qquad 0 \leq \alpha_i \leq 1,
\label{eq:sigma_main}
\end{equation}
where $\bar \sigma_i(x)$, $i = 1, \ldots, N_s$, are sample solutions
generated synthetically and convexly weighted by coefficients
$\alpha_i$
\begin{equation}
\sum_{i=1}^{N_s} \alpha_i = 1.
\label{eq:sigma_alpha}
\end{equation}
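A minimal sketch of the convex sample combination \eqref{eq:sigma_main}--\eqref{eq:sigma_alpha}. Samples are represented here as flat lists of cell-wise conductivity values (a hypothetical P0-style discretization is assumed), and raw weights are normalized so that the constraint $\sum_i \alpha_i = 1$ holds.

```python
def combine_samples(samples, weights):
    """Convex combination sigma = sum_i alpha_i * sigma_i over the basis,
    with the weights normalized so that sum_i alpha_i = 1."""
    total = sum(weights)
    alphas = [w / total for w in weights]
    n_cells = len(samples[0])
    sigma = [sum(a, ) if False else sum(a * s[c] for a, s in zip(alphas, samples))
             for c in range(n_cells)]
    return sigma, alphas

# two hypothetical 4-cell samples with sigma_h = 0.2 and sigma_c = 0.4
s1 = [0.4, 0.2, 0.2, 0.2]
s2 = [0.2, 0.2, 0.4, 0.2]
sigma, alphas = combine_samples([s1, s2], [1.0, 3.0])
```

Note that the combined field is no longer binary; the optimization in Step 2 is what drives it back toward the binary distribution.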
The entire collection of $N$ samples
\begin{equation}
\mathcal{C}(N) = (\bar \sigma_i(x))_{i=1}^{N}, \qquad N \gg N_s
\label{eq:smpl_collect}
\end{equation}
in fact could be generated based on any assumptions made for the
(geometrical) structure of the reconstructed images. Here we
assume that clear shapes for binary images could be obtained by
combining simple convex geometric shapes (elements) in 2D such
as triangles, squares, circles, etc. For example, in the current
research the $i$-th sample in our $N$-collection consists of $N_c^i$
circles of various radii $r \in \mathbb{R}_+$ and centers
$x^0 = (x^{01}, x^{02}) \in \mathbb{R}^2$ located inside domain $\Omega$,
i.e.
\begin{equation}
\bar \sigma_i(x) = \left\{
\begin{aligned}
\sigma_c, &\quad | x - x^0_j |^2 \leq r^2_j, \ j = 1,
\ldots, N_c^i\\
\sigma_h, &\quad {\rm otherwise}
\end{aligned}
\right.
\label{eq:smpl_par}
\end{equation}
In \eqref{eq:smpl_par} all $N_c^i$ circles parameterized by the set
of triplets
\begin{equation}
\mathcal{P}_i = (\{ x^{01}_j, x^{02}_j, r_j\})_{j=1}^{N_c^i},
\qquad i = 1, \ldots, N
\label{eq:smpl_triplet}
\end{equation}
are generated randomly subject to the following restrictions:
\begin{subequations}
\begin{alignat}{3}
&&|x^0_j| < R+r_j, \qquad j &= 1, \ldots, N_c^i,
\label{eq:smpl_restr_1}\\
&&1 \leq N_c^i \leq N_{c,\max}, \qquad i &= 1, \ldots, N.
\label{eq:smpl_restr_2}
\end{alignat}
\label{eq:smpl_restr}
\end{subequations}
Parameter $N_{c,\max}$ in \eqref{eq:smpl_restr_2} defines the
maximum number of circles in the samples and, in fact, sets the
highest level of complexity (resolution) for the reconstructed
image $\hat \sigma(x)$. Figure~\ref{fig:sample_param} shows
different scenarios of the $j$-th circle's appearance in the
$i$-th sample: regular case $C_j$ (fully inside $\Omega$) and
a few special cases
\begin{itemize}
\item[(a)] $S_1$ for circles which are partially outside the
domain $\Omega$;
\item[(b)] $S_2$ and $S_3$ for circles with respective partially
and fully overlapped regions; and
\item[(c)] degenerate cases $S_4$ and $S_5$ of, respectively,
zero radius or location fully outside of domain $\Omega$.
\end{itemize}
We note that all circles of the special cases mentioned in (c) are
rejected when the samples are generated.
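The sample generation with rejection of the degenerate cases $S_4$ and $S_5$ can be sketched as follows; the sampling ranges and names are illustrative assumptions, not the exact generator used in the paper.

```python
import random

def generate_sample(R, nc_max, r_max, rng):
    """Draw 1..nc_max random circles (x01, x02, r) for one sample,
    rejecting degenerate cases: zero radius (S4) or a circle lying
    fully outside the disc of radius R (S5), i.e. |x0| >= R + r."""
    n_circles = rng.randint(1, nc_max)
    triplets = []
    while len(triplets) < n_circles:
        x01 = rng.uniform(-R, R)
        x02 = rng.uniform(-R, R)
        r = rng.uniform(0.0, r_max)
        # keep only circles intersecting the domain (restriction |x0| < R + r)
        if r > 0.0 and (x01 ** 2 + x02 ** 2) ** 0.5 < R + r:
            triplets.append((x01, x02, r))
    return triplets

rng = random.Random(0)
sample = generate_sample(R=0.1, nc_max=8, r_max=0.03, rng=rng)
```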
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.33\textwidth]{sample_param.pdf}
\end{center}
\caption{Different scenarios of $j$-th circle appearance in the
$i$-th sample: regular case $C_j$ and special cases $S_1$
through $S_5$.}
\label{fig:sample_param}
\end{figure}
After completing the collection of $N$ sample solutions following
the description above, the proposed computational algorithm for
solving problem \eqref{eq:minJ_sigma} is executed in two steps:
\begin{itemize}
\item[{\bf 1:}] Define the initial basis of samples
%
\begin{equation}
\mathcal{B}^0 = (\bar \sigma_i(x))_{i=1}^{N_s}
\label{eq:smpl_basis}
\end{equation}
%
by choosing the $N_s$ best samples out of collection $\mathcal{C}(N)$,
i.e.~those providing the best measurement fit in terms of cost
functional \eqref{eq:cost_functional}.
\item[{\bf 2:}] Set all parameters $(\mathcal{P}_i)_{i=1}^{N_s}$ in the
description of sample basis $\mathcal{B}$ as controls to perform
optimization for solving problem \eqref{eq:minJ_sigma} numerically
to find the optimal basis $\hat \mathcal{B}$.
\end{itemize}
\subsection{Step 1: Defining Initial Basis}
\label{sec:step_1}
This step requires solving forward problem \eqref{eq:forward} and
evaluating cost functional \eqref{eq:cost_functional} $N$ times for
all samples in the $N$-collection. For a fixed scheme of potentials
$U$ the data $\mathcal{D}_i = I^{k}_{\ell}(\bar \sigma_i) \in \mathbb{R}^{m^2}, \ i = 1,
\ldots, N$, can be precomputed by \eqref{eq:forward} and
\eqref{eq:el_current}, and then stored for multiple uses with different
models. In addition, this task may be performed in parallel with minimal
computational overhead, which allows easy switching between various
schemes for electrical potentials. Easy parallelization enables taking
$N$ quite large, which helps better approximate the solution by the
initial state of basis $\mathcal{B}$ before proceeding to Step 2.
The number of samples $N_s$ in basis $\mathcal{B}$ may be chosen experimentally
based on the model complexity. We suggest $N_s$ be sufficiently
large to properly support a local/global search for the optimal solution
$\hat \sigma(x)$ during Step 2. At the same time, this number should
keep the total number of controls in problem
\eqref{eq:minJ_sigma} comparable with the data dimension, namely
$m^2$, to satisfy the well-posedness requirement.
Working with models of complicated structure may require
increasing the current number of elements (circles) in every sample
within the chosen basis $\mathcal{B}$. In such a case, one could re-set
parameters $N_c^i, \ i = 1, \ldots, N_s$, to higher values and add the
missing elements, for example, by generating new circles randomly
as degenerate cases $S_4$. This projects the initial basis
$\mathcal{B}^0$ onto a new control space of a higher dimension without
any loss in the quality of the initial solution.
Step 1 is completed by ranking the samples in ascending order of the
computed cost functionals \eqref{eq:cost_functional}, obtained by
comparing the data $(\mathcal{D}_i)_{i=1}^{N}$ with the true data
$(I_{\ell}^{k*})_{\ell,k=1}^m$ available from the actual measurements.
After ranking, the first $N_s$ samples form the initial basis
$\mathcal{B}^0$ to be used as the initial guess for the optimization
in Step 2.
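Since the costs $\mathcal{J}(\bar \sigma_i)$ are precomputed, the ranking itself is a simple sort; a minimal sketch with hypothetical cost values:

```python
def select_initial_basis(costs, n_s):
    """Rank samples in ascending order of cost and return the indices
    of the n_s best samples forming the initial basis B^0."""
    ranked = sorted(range(len(costs)), key=lambda i: costs[i])
    return ranked[:n_s]

# hypothetical cost functional values J(sigma_i) for N = 6 samples
costs = [0.9, 0.1, 0.5, 0.05, 0.7, 0.2]
basis_idx = select_initial_basis(costs, n_s=3)
```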
\subsection{Step 2: Solving Optimal Control Problem}
\label{sec:step_2}
As discussed in Section~\ref{sec:step_0}, all elements (circles) in
all samples of basis $\mathcal{B}$ obtained during the Step 1 ranking procedure
are represented by a finite number of ``sample-based'' parameters
$(\mathcal{P}_i)_{i=1}^{N_s}$. In general, solution $\sigma(x)$ can be
uniquely represented as a function of $\mathcal{P}_i, \ i = 1, \ldots, N_s$.
The continuous form of optimal control problem \eqref{eq:minJ_sigma}
may thus be replaced with an equivalent form defined over the
finite set of controls $\mathcal{P} = (\mathcal{P}_i)_{i=1}^{N_s}$. In addition,
problem \eqref{eq:minJ_sigma} can be further extended by
adding the weights $\alpha = (\alpha_i)_{i=1}^{N_s}$
in \eqref{eq:sigma_main} to the set of new controls. After this we
arrive at the final form of the optimization problem to be
solved numerically:
\begin{equation}
(\hat \mathcal{P}, \hat \alpha) = \underset{\mathcal{P},\alpha}{\operatorname{argmin}}
\ \mathcal{J}(\mathcal{P},\alpha)
\label{eq:minJ_ext}
\end{equation}
subject to PDE constraint \eqref{eq:forward}, linear constraint
\eqref{eq:sigma_alpha}, and properly established bounds for all
components of control $(\mathcal{P}, \alpha)$.
As easily follows from the structure of this new control, the
dimension of the parameterized solution space is bounded by
\begin{equation}
\max(\dim(\mathcal{P}, \alpha)) = N_s \cdot [ N_{c,\max} (n+1) + 1].
\end{equation}
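A quick check of this bound with the settings used later in Section~\ref{sec:results} ($N_s = 10$, $N_{c,\max} = 8$, $n = 2$): each circle contributes $n + 1 = 3$ parameters and each sample adds one weight, giving $10 \cdot (8 \cdot 3 + 1) = 250$ controls, the same number as the principal components used for the PCA comparison.

```python
def max_control_dim(n_s, nc_max, n):
    """Upper bound N_s * [N_c,max * (n + 1) + 1] on the number of
    controls: (n + 1) parameters per circle plus one weight per sample."""
    return n_s * (nc_max * (n + 1) + 1)

dim = max_control_dim(n_s=10, nc_max=8, n=2)
```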
When solving \eqref{eq:minJ_ext} iteratively one may choose
to terminate the optimization run at the $k$-th (major) iteration
once the following criterion is satisfied
\begin{equation}
\left| \dfrac{\mathcal{J}^k - \mathcal{J}^{k-1}}{\mathcal{J}^k}
\right| < \epsilon
\label{eq:termination}
\end{equation}
subject to chosen tolerance $\epsilon \in \mathbb{R}_+$. Although both
\eqref{eq:minJ_sigma} and \eqref{eq:minJ_ext} are obviously
not separable optimization problems, the coordinate descent (CD)
method is used to solve \eqref{eq:minJ_ext}. This choice is
motivated by several reasons, namely
\begin{itemize}
\item simplicity of the form for establishing the equivalence
between controls $\sigma(x)$ and $(\mathcal{P}, \alpha)$ provided by
\eqref{eq:sigma_main} and
\eqref{eq:smpl_par}--\eqref{eq:smpl_triplet},
\item close proximity of samples in the initial basis $\mathcal{B}^0$
to the local/global solutions after completing Step 1, and
\item straightforward computational implementation.
\end{itemize}
The efficiency of the entire optimization framework is confirmed
by extensive computational results for multiple models of different
complexity presented in Section~\ref{sec:results}. A summary of the
complete computational framework to perform our new optimization with
sample-based parameterization is provided in Algorithm~\ref{alg:main_opt}.
We also note that, in order to improve the computational efficiency,
the applied CD method is modified by specifying the order in which all
controls are perturbed while solving problem \eqref{eq:minJ_ext}.
Briefly, instead of following sensitivity feedback, the controls are
chosen in sample--by--sample order, and within each
sample $\bar \sigma_i$ we optimize over all circles' triplets
$\{ x^{01}_j, x^{02}_j, r_j\}$ and then over the sample's weight
$\alpha_i$; see Step 2 in Algorithm~\ref{alg:main_opt} for clarity.
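The CD sweep together with the relative-change test \eqref{eq:termination} can be sketched as below. The forward solve and cost evaluation are replaced by a hypothetical smooth stand-in function so the sketch stays runnable; the fixed step size and the absence of bound handling are simplifications of the actual method.

```python
def coordinate_descent(cost, x, step=0.1, eps=1e-4, max_sweeps=100):
    """Derivative-free CD: perturb one control at a time in a fixed
    order, keep improving moves, and stop once the relative change
    of the cost between major iterations falls below eps."""
    j_prev = cost(x)
    for _ in range(max_sweeps):
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if cost(trial) < cost(x):
                    x = trial
        j_curr = cost(x)
        if abs((j_curr - j_prev) / j_curr) < eps:
            break
        j_prev = j_curr
    return x, cost(x)

# hypothetical smooth stand-in for the forward solve + cost evaluation
quadratic = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2 + 0.01
x_opt, j_opt = coordinate_descent(quadratic, [0.0, 0.0])
```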
\begin{algorithm}[htb!]
\begin{algorithmic}
\STATE set parameters: $N$, $N_{c,\max}$, $N_s$
\FOR{$i \leftarrow 1$ to $N$}
\STATE generate $\bar \sigma_i(x)$ by
\eqref{eq:smpl_par}--\eqref{eq:smpl_restr}
\STATE obtain data $\mathcal{D}_i = I^{k}_{\ell}(\bar \sigma_i)$ by
\eqref{eq:forward} and \eqref{eq:el_current}
\ENDFOR
\STATE \begin{center} \mybox{Step 1} \end{center}
\STATE select model and obtain true data
$(I_{\ell}^{k*})_{\ell,k=1}^m$
\FOR{$i \leftarrow 1$ to $N$}
\STATE compute $\mathcal{J}(\bar \sigma_i)$ by \eqref{eq:cost_functional}
\ENDFOR
\STATE choose $N_s$ best samples from $\mathcal{C}(N)$ by values
$\mathcal{J}(\bar \sigma_i)$
\STATE form initial basis $\mathcal{B}^0$ subject to $N_{c,\max}$
\STATE set initial weights $\alpha^0$
\STATE compute $\sigma^0(x)$ using $\mathcal{B}^0$ and $\alpha^0$ by
\eqref{eq:sigma_main}
\STATE \begin{center} \mybox{Step 2} \end{center}
\STATE $k \leftarrow 0$
\REPEAT
\FOR{$i \leftarrow 1$ to $N_s$}
\FOR{$j \leftarrow 1$ to $N^{i}_c$}
\STATE {\bf optimize} over $j$-th circle triplet
$\{ x^{01}_j, x^{02}_j, r_j\}$ in $\bar \sigma_i$
\ENDFOR
\STATE {\bf optimize} over weight $\alpha_i$
\ENDFOR
\STATE $k \leftarrow k + 1$
\STATE update $\sigma^k(x)$ using new basis $\mathcal{B}^k$ and
weights $\alpha^k$ by \eqref{eq:sigma_main}
\UNTIL termination criterion \eqref{eq:termination} is satisfied
to given tolerance
\end{algorithmic}
\caption{Computational workflow for optimization with sample-based
parameterized controls}
\label{alg:main_opt}
\end{algorithm}
\section{Computational Results}
\label{sec:results}
\subsection{Computational Model in 2D}
\label{sec:comp_model}
Our optimization framework integrates computational facilities for
solving forward PDE problem \eqref{eq:forward} and evaluating
cost functionals by \eqref{eq:cost_functional}. These facilities
are incorporated by using {\tt FreeFem++}, see \cite{FreeFem2012}
for details, an open-source, high-level integrated development
environment for obtaining numerical solutions of PDEs based on the
Finite Element Method (FEM). For solving forward PDE
problem \eqref{eq:forward} numerically, spatial discretization is
carried out using 7730 triangular finite elements: a P2 piecewise
quadratic (continuous) representation for electrical potential $u(x)$
and a P0 piecewise constant representation for conductivity field
$\sigma(x)$. The systems of algebraic equations obtained after such
discretization are solved with {\tt UMFPACK}, a solver for
nonsymmetric sparse linear systems \cite{UMFPACK}.
All computations are performed using a 2D domain \eqref{eq:domain}
which is a disc of radius $R = 0.1$ with $m = 16$ equidistant electrodes
$E_{\ell}$ with half-width $w = 0.12$ rad covering approximately 61\%
of boundary $\partial \Omega$ as shown in Figure~\ref{fig:model}(a).
Electrical potentials $U_{\ell}$, see Figure~\ref{fig:model}(b), are
applied to electrodes $E_{\ell}$ following the ``rotation scheme''
discussed in Section~\ref{sec:math} and chosen to be consistent with
the ground potential condition \eqref{eq:ground_conds}. For
the Robin part of the boundary conditions in \eqref{eq:forward_3},
all electrode contact impedances are set equally to $Z_{\ell} = 0.1$.
\begin{figure}[htb!]
\begin{center}
\mbox{
\subfigure[]{\includegraphics[width=0.5\textwidth]{geometry_I.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{el_potentials.pdf}}}
\end{center}
\caption{(a)~Equispaced geometry of electrodes $(E_{\ell})_{\ell=1}^{16}$
and electrical currents $I_{\ell}$ (positive in red, negative in blue)
measured at $E_{\ell}$. Black arrows show the distribution of flux
$\sigma(x) \boldsymbol{\nabla} u(x)$ in the interior of $\Omega$.
(b)~Electrical potentials $U_{\ell}$.}
\label{fig:model}
\end{figure}
To solve optimal control problem \eqref{eq:minJ_ext} iteratively, our
framework utilizes the non-derivative coordinate descent~(CD), or
alternating variables, approach; see \cite{Nocedal2006} for more details.
The actual (true) electrical conductivity $\sigma_{true}(x)$ we seek to
reconstruct is defined analytically for each model in \eqref{eq:sigma_true}
by setting $\sigma_c = 0.4$ and $\sigma_h = 0.2$. The initial guess for
control $\mathcal{P}$ at Step 2 is provided by the parameterization of initial
basis $\mathcal{B}^0$ obtained after completion of Step 1. For control $\alpha$
the initial values are set to be equal, i.e.~$\alpha^0_i = 1/N_s$.
Termination is controlled by the tolerance $\epsilon = 10^{-4}$ in
\eqref{eq:termination} and by a cap of 50{,}000 cost functional
evaluations, whichever is reached first.
For generating the $N$-collection of samples discussed in
Section~\ref{sec:step_0} we use $N = 10000$ and $N_{c,\max} = 8$.
This set of sample solutions $\mathcal{C}(10000)$ is created using a
generator of uniformly distributed random numbers. Therefore, each
sample $\bar \sigma_i(x)$ ``contains'' from one to eight
``cancer-affected'' areas with $\sigma_c = 0.4$.
Each area is located randomly within domain $\Omega$ and represented
by a circle of randomly chosen radius $0 < r \leq 0.3 R$. Also, we
fix the number of samples $N_s$ at 10 for all numerical experiments
shown in this paper.
\begin{figure*}[!htb]
\begin{center}
\mbox{
\subfigure[model \#1: $\sigma_{true}(x)$]
{\includegraphics[width=0.33\textwidth]{model_8_true.pdf}}
\subfigure[effect of re-setting $N_c^i$]
{\includegraphics[width=0.33\textwidth]{model_8_obj_comp.pdf}}
\subfigure[solution error]{\includegraphics[width=0.33\textwidth]
{model_8_sol_comp.pdf}}}
\mbox{
\subfigure[$\hat \sigma(x)$ by SNOPT]
{\includegraphics[width=0.33\textwidth]{model_8_SNOPT.pdf}}
\subfigure[$\hat \sigma(x)$ by BFO]
{\includegraphics[width=0.33\textwidth]{model_8_BFO.pdf}}
\subfigure[$\hat \sigma(x)$ distributions]
{\includegraphics[width=0.33\textwidth]{model_8_hist.pdf}}}
\end{center}
\caption{Model \#1. (a)~True electrical conductivity $\sigma_{true}(x)$.
(b)~Cost functionals $\mathcal{J}^k$ as a function of major iteration count
$k$ evaluated at Step 1 (pink dots) and Step 2 with no circles added
to samples (blue dots) and 8 circles per sample (red dots).
(c)~Solution error $\| \hat \sigma - \sigma_{true} \|_{L_2}$
as a function of number of cost functional evaluations for results
obtained by gradient-based SNOPT with PCA (blue dots) and
non-derivative BFO (pink dots) and CD (red dots) methods.
(d,e)~Solution images obtained respectively by SNOPT and BFO.
(f)~Histograms for solutions obtained by SNOPT (blue bars) and
CD (red bars).}
\label{fig:model_8a}
\end{figure*}
\subsection{Framework Validation}
\label{sec:model_valid}
To begin checking the performance of the proposed optimization
framework, we created our (benchmark) model~\#1 to mimic a case where
a biological tissue contains three circular areas of different sizes
suspected to be affected by cancer, as seen in
Figure~\ref{fig:model_8a}(a). First, we investigate the
effect of re-setting parameters $N_c^i$ as discussed in
Section~\ref{sec:step_1}.
Figure~\ref{fig:model_8a}(b) shows the progress during Step 1 (first
10 major iterations, in pink) and Step 2 with no circles added (in blue)
and after adding new circles to each sample (in red) so that
$N_c^i = 8, \ i = 1, \ldots, 10$. Due to the clearly better performance
in the latter case, as expected, we use the same strategy, setting
$N_c^i = N_{c,\max}$, in all cases presented in this paper unless
otherwise stated.
\begin{figure*}[!htb]
\begin{center}
\mbox{
\subfigure[best sample $\bar \sigma_1(x)$]
{\includegraphics[width=0.33\textwidth]{model_8_best.pdf}}
\subfigure[solution $\sigma^0(x)$ after Step 1]
{\includegraphics[width=0.33\textwidth]{model_8_phase1.pdf}}
\subfigure[solution $\hat \sigma(x)$ after Step 2]
{\includegraphics[width=0.33\textwidth]{model_8_phase2.pdf}}}
\end{center}
\caption{Model \#1: solutions by (a) best sample $\bar \sigma_1(x)$
from initial basis $\mathcal{B}^0$, (b) complete initial basis $\mathcal{B}^0$
approximation $\sigma^0(x)$ obtained after Step 1, and (c) optimal
basis $\hat \mathcal{B}$ approximation $\hat \sigma(x)$ as a result of
Step 2 optimization.}
\label{fig:model_8b}
\end{figure*}
We also use model \#1 to compare the performance of the proposed
framework observed after applying two non-derivative approaches, namely
the brute force method from the {\tt BFO 2.0} package \cite{BFO2017},
and our CD method customized to a
predefined order of controls as described in Section~\ref{sec:step_2}.
To compare the quality of the obtained images we also apply a
gradient-based approach by {\tt SNOPT} \cite{SNOPTManual} with added
PCA techniques for control space reduction as described in detail in
\cite{AbdullaBukshtynovSeif}. The PCA is performed on the same collection
of samples $\mathcal{C}(10000)$ and with the same number of principal components
250 (preserving about 90\% of the ``energy'' in the full set of basis
vectors) as the total number of controls used in BFO and CD methods.
Figures~\ref{fig:model_8a}(d,e) and \ref{fig:model_8b}(c) show the
images obtained respectively by SNOPT, BFO, and CD approaches. As seen
in Figure~\ref{fig:model_8a}(c), the gradient-based PCA approach (in blue)
terminates much faster, even with the termination tolerance
$\epsilon = 10^{-9}$ set in SNOPT. However, the non-derivative BFO and CD
methods provide solutions of better quality, and only the CD method
results in quality close to the desired binary distribution; refer to
Figure~\ref{fig:model_8a}(f) for an analysis by histograms.
To conclude on the superior performance of the proposed algorithm
supplied with the customized CD method for optimization, we refer
to Figure~\ref{fig:model_8b}. The left image shows the best, in terms
of the measurement fit, sample solution found in the collection
$\mathcal{C}(10000)$ for model \#1 and placed into the initial basis $\mathcal{B}^0$
as $\bar \sigma_1(x)$. The center and right images show the solutions
$\sigma^0(x)$ and $\hat \sigma(x)$ obtained respectively after Step~1
and Step~2. While the solution $\sigma^0(x)$ based on the initial basis
$\mathcal{B}^0$ poorly approximates model \#1, the proposed sample-based
parameterization enables the framework to accurately locate all
cancer-affected regions, including the smallest one.
\subsection{Effect of Noise in Data}
\label{sec:noise}
\begin{figure*}[!htb]
\begin{center}
\mbox{
\subfigure[CD: 0.5\% noise]{\includegraphics[width=0.25\textwidth]
{model_8_noise_05.pdf}}
\subfigure[1\% noise]{\includegraphics[width=0.25\textwidth]
{model_8_noise_1.pdf}}
\subfigure[2\% noise]{\includegraphics[width=0.25\textwidth]
{model_8_noise_2.pdf}}
\subfigure[5\% noise]{\includegraphics[width=0.25\textwidth]
{model_8_noise_5.pdf}}}
\mbox{
\subfigure[SNOPT: 0.5\% noise]{\includegraphics[width=0.25\textwidth]
{model_8_SNOPT_noise_05.pdf}}
\subfigure[1\% noise]{\includegraphics[width=0.25\textwidth]
{model_8_SNOPT_noise_1.pdf}}
\subfigure[2\% noise]{\includegraphics[width=0.25\textwidth]
{model_8_SNOPT_noise_2.pdf}}
\subfigure[5\% noise]{\includegraphics[width=0.25\textwidth]
{model_8_SNOPT_noise_5.pdf}}}
\end{center}
\caption{Model \#1: solution images obtained by (a-d)~the
proposed non-derivative customized CD and (e-h)~gradient-based
SNOPT with PCA methods when measurements are contaminated with
(a,e) 0.5\%, (b,f) 1\%, (c,g) 2\%, and (d,h) 5\% noise.}
\label{fig:model_8_noise_comp}
\end{figure*}
Now we address the well-known issue of noise present
in the measurements due to improper electrode--tissue contacts,
possible electrode misplacement, wire interference, etc. The effect
of noise has already been investigated by many researchers, both
theoretically and within practical applications, with suggested
approaches to mitigate its negative impact on the quality of images.
In this section we compare the effect of noise on reconstructions
obtained by the gradient-based SNOPT with PCA and our proposed
non-derivative customized CD methods.
In Figure~\ref{fig:model_8_noise_comp} we revisit model \#1 presented
first in Section~\ref{sec:model_valid} now with measurements
contaminated with 0.5\%, 1\%, 2\% and 5\% normally distributed noise.
As expected, we see that various levels of noise lead to oscillatory
instabilities in the images reconstructed by the gradient-based
approach utilizing parameterization via PCA. This will obviously
result in multiple cases of false positive screening. On the other
hand, our new approach with sample-based parameterization stably
provides clear and accurate images, with false negative results
appearing for small regions only at noise levels above 2\%.
Figure~\ref{fig:model_8_noise_CD} provides
the complete comparison of the solution error for results obtained
by our framework for various levels of noise between 0\% and 5\%.
We close this section by fixing the noise level at 0.5\% for the
rest of the numerical experiments shown in this paper.
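A sketch of how synthetic measurements can be contaminated with a given relative level of normally distributed noise (0.5\% corresponds to `level = 0.005`); the relative scaling convention is an assumption, as the exact noise model is not spelled out here.

```python
import random

def add_noise(currents, level, rng):
    """Contaminate each measured current with zero-mean Gaussian noise
    whose standard deviation is `level` times the current's magnitude
    (relative-noise convention, assumed for this sketch)."""
    return [i + rng.gauss(0.0, level * abs(i)) for i in currents]

rng = random.Random(42)
clean = [0.1, -0.2, 0.3, -0.4]
noisy = add_noise(clean, level=0.005, rng=rng)
```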
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.75\textwidth]{model_8_noise_comp.pdf}
\end{center}
\caption{Model \#1: solution error
$\| \hat \sigma - \sigma_{true} \|_{L_2}$ as a function of
the number of cost functional evaluations for results obtained
by the proposed framework for various levels of noise.}
\label{fig:model_8_noise_CD}
\end{figure}
\subsection{Validation with Complicated Models}
\label{sec:models_complex}
In this section we present results obtained using our new optimization
framework with sample-based parameterization applied to models with a
significantly increased level of complexity. The new algorithm has
already confirmed its ability to accurately reconstruct cancer-affected
regions of various sizes and at multiple locations. Therefore, the added
complications we focus on here are small sizes and non-circular
shapes of those regions.
\begin{figure*}[!htb]
\begin{center}
\mbox{
\subfigure[model \#2: $\sigma_{true}(x)$]
{\includegraphics[width=0.33\textwidth]{model_3_true.pdf}}
\subfigure[CD: no noise]
{\includegraphics[width=0.33\textwidth]{model_3_phase2.pdf}}
\subfigure[CD: 0.5\% noise]
{\includegraphics[width=0.33\textwidth]{model_3_noise_05.pdf}}}
\mbox{
\subfigure[solution error]{\includegraphics[width=0.33\textwidth]
{model_3_error.pdf}}
\subfigure[SNOPT: no noise]{\includegraphics[width=0.33\textwidth]
{model_3_SNOPT.pdf}}
\subfigure[SNOPT: 0.5\% noise]{\includegraphics[width=0.33\textwidth]
{model_3_SNOPT_noise_05.pdf}}}
\end{center}
\caption{Model \#2. (a)~True electrical conductivity $\sigma_{true}(x)$.
(b,c,e,f)~Solution images obtained by (b,c)~the proposed framework
and (e,f)~the gradient-based SNOPT with PCA with (b,e)~no noise added
and (c,f) 0.5\% noise in measurements. (d)~Solution error
$\| \hat \sigma - \sigma_{true} \|_{L_2}$ as a function of the number
of cost functional evaluations for images shown in (b,c,e,f).}
\label{fig:model_3}
\end{figure*}
Our model \#2 is created to mimic the use of EIT techniques in medical
practice for recognizing cancer at early stages. The electrical
conductivity $\sigma_{true}(x)$ is shown in Figure~\ref{fig:model_3}(a).
This model contains four circular-shaped cancer-affected regions, all of
the same size as the smallest region in model \#1. The known complication
comes from the fact that the order of the difference between measurements
generated by this model and by ``healthy tissue''
($\sigma(x) = \sigma_h, \ \forall x \in \Omega$) is very close to the
order of noise that appears naturally in the provided data. In addition,
small regions have a lower chance of being detected if they are
located closer to the center of the domain.
Figures~\ref{fig:model_3}(b-f) compare the results obtained by the
gradient-based SNOPT with PCA and our proposed non-derivative customized
CD methods without noise and with 0.5\% noise added to the measurements.
By analyzing the images in Figure~\ref{fig:model_3} we see that our
approach provides more assistance in concluding on possible abnormal
changes in tissues and in guiding the surgeons. When noise is
negligible, Figure~\ref{fig:model_3}(b), all four small spots are
distinguishable with accurately reconstructed shapes. Although adding
noise, as seen in Figure~\ref{fig:model_3}(c), brings more complexity to
image interpretation, it still allows identification of
cancer-affected regions. In fact, we cannot claim the same for images
obtained by means of PCA and gradients; see Figures~\ref{fig:model_3}(e)
and \ref{fig:model_3}(f). Figure~\ref{fig:model_3}(d) also demonstrates
the computational efficiency of our framework in comparison with the
gradient-based method.
Our last model \#3, the hardest one, is created to check the method's
performance when the reconstructed region is not of a circular shape;
see Figure~\ref{fig:model_15}(a) with a C-shaped region. As seen in
Figure~\ref{fig:model_15}(d), the gradient-based method with PCA is
unable to produce a clear image even without noise, as the PCA
transform honors the structures of samples in the
$\mathcal{C}(10000)$ collection. On the other hand, our new framework
provides a good quality image with no noise in the data. This
performance may be further enhanced, even in the presence of noise,
once we re-set the number of circles condition to
$N_c^i = N_{c,\max} = 20$. This demonstrates the potential of the
proposed algorithm in applications with rather complex models.
\begin{figure*}[!htb]
\begin{center}
\mbox{
\subfigure[model \#3: $\sigma_{true}(x)$]
{\includegraphics[width=0.33\textwidth]{model_15_true.pdf}}
\subfigure[CD: 8 circles, no noise]
{\includegraphics[width=0.33\textwidth]{model_15_phase2.pdf}}
\subfigure[CD: 8 circles, 0.5\% noise]
{\includegraphics[width=0.33\textwidth]{model_15_noise_05.pdf}}}
\mbox{
\subfigure[SNOPT: no noise]{\includegraphics[width=0.33\textwidth]
{model_15_SNOPT.pdf}}
\subfigure[CD: 20 circles, no noise]
{\includegraphics[width=0.33\textwidth]{model_15_20circ.pdf}}
\subfigure[CD: 20 circles, 0.5\% noise]
{\includegraphics[width=0.33\textwidth]
{model_15_20circ_noise_05.pdf}}}
\end{center}
\caption{Model \#3. (a)~True electrical conductivity $\sigma_{true}(x)$.
(b-f)~Solution images obtained by (b,c,e,f)~the
proposed framework and (d)~the gradient-based SNOPT with PCA with
(b,e)~no noise added and (c,f) 0.5\% noise in measurements. Images
in (b,c,e,f) are obtained by utilizing (b,c)~$N_c^i = N_{c,max} = 8$
and (e,f)~$N_c^i = N_{c,max} = 20$ conditions for the number of circles.}
\label{fig:model_15}
\end{figure*}
\section{Concluding Remarks}
\label{sec:remarks}
In this work, we presented a novel computational approach for optimal
reconstruction of binary-type images useful in various applications
in (bio)medical practice. The proposed computational framework
uses a derivative-free optimization algorithm supported by a set
of sample solutions generated synthetically based on prior knowledge
of the simulated phenomena. This framework has an easy-to-follow design
tuned by a nominal number of computational parameters. High computational
efficiency is achieved by applying the coordinate descent method
customized to work with individual controls in a predefined custom order.
We investigated the performance of the complete framework in applications
to the 2D IPCD by the EIT technique. We claim, based upon our results,
that the proposed methodology of determining whether a certain region of
the tissue contains a cancerous growth has superior efficiency in
comparison with gradient-based techniques utilizing control space
parameterization via PCA. This is due primarily to the predominantly
geometric nature of our approach, wherein we perturb known solutions
to similar related problems in order to converge to the best available
local/global minima.
\bibliographystyle{spmpsci}
\section{Introduction}
We consider bounded vector fields $b \in L^\infty \big((0,T)\times \mathbb{R}^d; \mathbb{R}^d\big)$.
Although the analysis of this paper is limited to the case of autonomous vector fields with $d=2$, we introduce the relevant notions and the related results in the general setting. The following notion of \emph{regular Lagrangian flow} is an appropriate extension, to merely locally integrable vector fields, of the classical flow associated with Lipschitz vector fields.
\begin{definition}
Given $b \in L^1_\text{\rm loc}((0,T)\times \mathbb{R}^d;\mathbb{R}^d)$, we say that $X:[0,T)\times \mathbb{R}^d\to \mathbb{R}^d$ is a \emph{regular Lagrangian flow} of the vector field $b$ if
\begin{enumerate}
\item for $\mathscr L^d$-a.e. $x\in \mathbb{R}^d$ the map $t\mapsto X(t,x)$ is absolutely continuous, $X(0,x)=x$, and for $\mathscr L^1$-a.e. $t\in (0,T)$ it holds $\partial_tX(t,x)=b(t,X(t,x))$;
\item for every $t \in [0,T)$ it holds
\begin{equation*}
X(t,\cdot)_\sharp \mathscr L^d \le L\mathscr L^d,
\end{equation*}
for some $L>0$.
\end{enumerate}
\end{definition}
Regular Lagrangian flows have been introduced in a different form in \cite{DL_transport}, where the authors proved their existence and uniqueness for vector fields
$b \in L^1_tW^{1,p}_x$ with $p\ge 1$ and bounded divergence.
The theory has been extended to vector fields $b \in L^1_t \BV_x$ with bounded divergence in \cite{Ambrosio_transport}.
Uniqueness of regular Lagrangian flows was finally achieved in the more general class of nearly incompressible vector fields with bounded variation in \cite{BB_Bressan}; this class was introduced in the study of the hyperbolic system of conservation laws named after Keyfitz and Kranzer (see \cite{DL_notes}).
\begin{definition}\label{D_NI}
A vector field $b \in L^1_\text{\rm loc} ((0,T)\times \mathbb{R}^d;\mathbb{R}^d)$ is called \emph{nearly incompressible} if there exist $C>0$ and $\rho \in C^0([0,T); L^\infty_w(\mathbb{R}^d))$ solving the continuity equation
\begin{equation}\label{E_CE}
\partial_t \rho + \div_x(\rho b)=0
\end{equation}
with $\rho(t,x) \in [C^{-1},C]$ for $\L^{d+1}$-a.e. $(t,x)\in (0,T)\times \mathbb{R}^d$.
\end{definition}
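For instance, every bounded divergence-free vector field is nearly incompressible: the constant density $\rho\equiv 1$ satisfies
\begin{equation*}
\partial_t \rho + \div_x(\rho b)= \div_x b = 0 \qquad \mbox{in }\mathcal{D}'((0,T)\times \mathbb{R}^d),
\end{equation*}
so that Definition \ref{D_NI} holds with $C=1$.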
Several results about the differentiability properties of regular Lagrangian flows are available now.
By the contributions in \cite{LL_differentiability, AM_differentiability}, it follows that regular Lagrangian flows associated to vector fields $b \in L^1_t W^{1,1}_x$ are \emph{differentiable in measure}
(see \cite{AM_differentiability} for the definition of this notion). The same regularity property has been obtained recently in \cite{BD_differentiability} for nearly incompressible vector fields with bounded variation.
The stronger property of \emph{approximate differentiability} was obtained in \cite{ALM_differentiability} for regular Lagrangian flows associated to vector fields $b \in L^1_t W^{1,p}_x$ with $p>1$. A quantitative version of the same regularity property was provided in \cite{CDL_DiPerna-Lions}, where the authors proved a quantitative Lusin-Lipschitz regularity of the flow.
The optimality of the regularity estimates obtained in \cite{CDL_DiPerna-Lions} is discussed in \cite{Jabin_example}.
In particular, the author provided, through a random construction, an example of a time-dependent divergence-free Sobolev vector field in $\mathbb{R}^2$ whose regular Lagrangian flow does not have bounded variation.
\subsection{2d autonomous vector fields}
The analysis in the setting of 2d autonomous vector fields is facilitated by the following Hamiltonian structure: if $b \in L^\infty(\mathbb{R}^2;\mathbb{R}^2)$ with $\div \,b=0$, then there exists a Lipschitz Hamiltonian $H:\mathbb{R}^2 \to \mathbb{R}$ such that
\begin{equation}\label{E_Hamiltonian}
b = \nabla^\perp H = (-\partial_2 H, \partial_1 H).
\end{equation}
At least formally the Hamiltonian is preserved by the flow, so that the trajectories of the flow are contained in the level sets of $H$.
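As an elementary illustration (disregarding the boundedness of $b$, which plays no role in this computation), for the rotational vector field $b(x,y)=(-y,x)$ one can take
\begin{equation*}
H(x,y)=\frac{x^2+y^2}{2}, \qquad \nabla^\perp H = (-\partial_2 H, \partial_1 H) = (-y,x) = b,
\end{equation*}
and indeed the trajectories $t \mapsto (x_0\cos t - y_0 \sin t,\, x_0 \sin t + y_0 \cos t)$ are circles centered at the origin, namely the level sets of $H$.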
In the series of papers \cite{ABC1,ABC2,ABC3}, the authors reduced the uniqueness problem for the continuity equation to a family of one-dimensional problems on the level sets of $H$.
With this approach they were able to characterize the Hamiltonians for which the uniqueness for \eqref{E_CE} holds in the class of $L^\infty$ solutions,
and therefore the uniqueness for the regular Lagrangian flow, including in particular the case of BV vector fields.
It is worth mentioning that, before the general result in \cite{BB_Bressan} was available, the approach introduced above allowed the authors of \cite{BBG_NI2d} to obtain a simpler and more direct proof of
the uniqueness of the regular Lagrangian flow for nearly incompressible vector fields with bounded variation; see also \cite{BG_steadyNI} for the intermediate step of steady nearly incompressible vector fields, namely vector fields satisfying Definition \ref{D_NI} with $\rho$ constant in time.
The approximate differentiability of the flow has been obtained for autonomous divergence-free vector fields $b \in \BV(\mathbb{R}^2;\mathbb{R}^2)$ in \cite{BM_Lusin-Lip}, as a consequence of a suitable Lusin-Lipschitz property.
In the present paper we investigate under which assumptions the regular Lagrangian flow inherits the Sobolev or BV regularity of
the vector field.
The first result is a local estimate for nearly incompressible vector fields.
\begin{proposition}\label{P_local}
Let $b\in \BV(\mathbb{R}^2;\mathbb{R}^2)$ be a bounded nearly incompressible vector field and let $\Omega\subset \mathbb{R}^2$ be an open ball of radius $R>0$ such that there exist $\delta>0$ and $e\in \S^1$ for which $b\cdot e >\delta$ a.e. in $\Omega$.
Let $\Omega' \subset \Omega$ be an open set and $\bar t>0$ be such that $\dist(\Omega', \partial \Omega)>\|b\|_{L^\infty}\bar t$.
Then
\begin{equation*}
X(\bar t) \in \BV(\Omega').
\end{equation*}
Moreover, if $b \in W^{1,p}(\mathbb{R}^2;\mathbb{R}^2)$ for some $p\ge 1$, then
\begin{equation*}
X(\bar t) \in W^{1,p}(\Omega').
\end{equation*}
\end{proposition}
The following global result is stated for divergence-free vector fields and we additionally assume that the vector field $b \in \BV(\mathbb{R}^2;\mathbb{R}^2)$ is continuous. Since we are going to consider bounded vector fields, by finite speed of propagation, it is not restrictive to assume that $b$ has compact support.
In particular there exists a unique Hamiltonian $H \in C^1_c(\mathbb{R}^2)$ satisfying \eqref{E_Hamiltonian} and it is straightforward to check that the set of critical values
\begin{equation*}
\mathcal S:= \{ h \in \mathbb{R}: \exists x \in \mathbb{R}^2 \left( H(x)=h \mbox{ and }b(x)=0 \right)\}
\end{equation*}
is closed.
Therefore the set of regular values $\mathcal R := H(\mathbb{R}^2)\setminus \mathcal S$ and the set $\Omega = H^{-1}(\mathbb{R}\setminus \mathcal S)= H^{-1}(\mathcal R)$ are open.
\begin{theorem}\label{T_global}
Let $b \in \BV(\mathbb{R}^2;\mathbb{R}^2)$ be a continuous divergence-free vector field with bounded support and let $\Omega$ be defined as above.
Then for every $t>0$ the regular Lagrangian flow has a representative
\begin{equation*}
X(t) \in C^0(\Omega) \cap \BV(\Omega).
\end{equation*}
If moreover $b \in W^{1,p}(\mathbb{R}^2;\mathbb{R}^2)$, then $X(t) \in W^{1,p}(\Omega)$.
\end{theorem}
The last result is an example showing that the assumption on the existence of $\delta>0$ in Proposition \ref{P_local} cannot be dropped, and likewise that the restriction to $\Omega$ in Theorem \ref{T_global} cannot be removed.
\begin{proposition}\label{P_example}
There exists a divergence-free vector field $b:\mathbb{R}^2\to \mathbb{R}^2$ such that $b\in W^{1,p}_\text{\rm loc}(\mathbb{R}^2;\mathbb{R}^2)$ for every $p\in [1,\infty)$, $b(z)\cdot e_1>0$ for $\L^2$-a.e. $z\in \mathbb{R}^2$, and for every time $t>0$
the regular Lagrangian flow
\begin{equation*}
X(t) \notin \BV_\text{\rm loc}(\mathbb{R}^2;\mathbb{R}^2).
\end{equation*}
\end{proposition}
The construction of the Hamiltonian $H$ associated to $b$ in Proposition \ref{P_example} is a suitable modification of the construction in
\cite{ABC2} of a Lipschitz Hamiltonian for which the uniqueness of the corresponding regular Lagrangian flow fails.
As opposed to the already mentioned result in \cite{Jabin_example}, the proposed construction is deterministic and it rules out the Sobolev
regularity of the regular Lagrangian flow also for autonomous vector fields.
We finally mention that the question about the Sobolev or BV regularity of the regular Lagrangian flow associated to autonomous planar vector
fields was posed to the author by M. Colombo and R. Tione, motivated by the study of the commutativity property of the flows
associated to vector fields with vanishing Lie bracket \cite{CT_Lie}.
\section{Local estimate for nearly incompressible vector fields}
In this section we prove Proposition \ref{P_local}.
We begin with two preliminary lemmas about autonomous nearly incompressible vector fields in $\mathbb{R}^d$.
In the first lemma we show that in the case of autonomous nearly incompressible vector fields we can assume without loss of generality that the existence
time $T$ of $\rho$ in Definition \ref{D_NI} is arbitrarily large.
\begin{lemma}\label{L_all_T}
Let $b:\mathbb{R}^d \to \mathbb{R}^d$ be an autonomous nearly incompressible vector field and let $\rho:[0,T]\times \mathbb{R}^d\to \mathbb{R}$, $C>0$ be as in Definition \ref{D_NI}.
Then there exists $\tilde \rho \in C^0([0,+\infty); L^\infty_w(\mathbb{R}^d))$ solving \eqref{E_CE} such that for $\L^{d+1}$-a.e. $(t,x) \in \mathbb{R}^+\times \mathbb{R}^d$ it holds
\begin{equation}\label{E_allT}
\tilde C^{-1} \le \tilde \rho(t,x) \le \tilde C, \qquad \mbox{with} \qquad \tilde C = \tilde C(t):= C^{\frac{2t}{T}+1}.
\end{equation}
\end{lemma}
\begin{proof}
By Ambrosio's superposition principle (see \cite{AC_bologna}), there exists a Radon measure $\eta$ on $\Gamma_T:=C([0,T];\mathbb{R}^d)$ such that for every $t \in [0,T]$ it holds
\begin{equation*}
(e_t)_\sharp \eta = \rho (t,\cdot)\L^d,
\end{equation*}
where $e_t(\gamma):=\gamma(t)$ denotes the evaluation map at time $t$ defined on $\Gamma_T$.
We denote by $\{\eta_x\}_{x\in \mathbb{R}^d} \subset \mathcal P(\Gamma_T)$ its disintegration with respect to the evaluation map at time 0, so that
\begin{equation*}
\eta= \int_{\mathbb{R}^d}\rho(0,x)\eta_x dx
\end{equation*}
and we define
\begin{equation*}
\eta'= \int_{\mathbb{R}^d}\rho(T,x)\eta_xdx.
\end{equation*}
Since $\rho \in [C^{-1},C]$, it holds $C^{-2}\rho(0,x)\le \rho(T,x) \le C^2\rho(0,x)$ for $\L^d$-a.e. $x \in \mathbb{R}^d$.
In particular $C^{-2}(e_t)_\sharp \eta \le (e_t)_\sharp \eta' \le C^2 (e_t)_\sharp \eta$ for every $t\in [0,T]$, therefore
\begin{equation}\label{E_est_rho'}
(e_t)_\sharp \eta' = \rho'(t,\cdot) \L^d, \qquad \mbox{with}\qquad C^{-2}\rho(t,\cdot) \le \rho'(t,\cdot) \le C^2\rho(t,\cdot).
\end{equation}
Let $\tilde \rho : (0,2T)\times \mathbb{R}^d \to \mathbb{R}$ be defined by
\begin{equation*}
\tilde \rho (t,z) = \begin{cases}
\rho(t,z) & \mbox{if }t \in (0,T], \\
\rho'(t-T,z) & \mbox{if }t \in (T,2T).
\end{cases}
\end{equation*}
Since $\tilde \rho$ solves \eqref{E_CE} in $\mathcal{D}'((0,T)\times \mathbb{R}^d)$ and in $\mathcal{D}'((T,2T)\times \mathbb{R}^d)$ separately, and
$t\mapsto \tilde \rho(t)$ is continuous with respect to the weak* topology in $L^\infty(\mathbb{R}^d)$, it follows that $\tilde \rho$ solves \eqref{E_CE} in $\mathcal{D}'((0,2T)\times \mathbb{R}^d)$.
By \eqref{E_est_rho'} it follows that for every $t \in [T,2T]$ it holds
\begin{equation}\label{E_-T}
C^{-2}\tilde \rho (t - T, \cdot)\L^d \le \tilde \rho(t,\cdot)\L^d \le C^2 \tilde \rho (t-T,\cdot)\L^d.
\end{equation}
Iterating the construction above we obtain a solution $\tilde \rho:\mathbb{R}^+\times \mathbb{R}^d \to \mathbb{R}$ of \eqref{E_CE} such that \eqref{E_-T} holds for every $t\ge T$.
In particular for every $N\in \mathbb{N}$ and for every $t \in [NT, (N+1)T]$ it holds
\begin{equation*}
C^{-2N}\rho (t-NT,\cdot) \L^d \le \tilde \rho(t,\cdot)\L^d \le C^{2N}\rho(t-NT,\cdot)\L^d,
\end{equation*}
which immediately implies \eqref{E_allT} since $\rho \in [C^{-1},C]$.
\end{proof}
The vector fields for which the function $\rho$ in Definition \ref{D_NI} can be chosen independent of $t$ are called \emph{steady nearly incompressible}.
Although not every nearly incompressible autonomous vector field is steady nearly incompressible, we can reduce to the latter case under the assumptions of Proposition \ref{P_local}.
The proof of the following lemma is an adaptation of the argument in \cite{BBG_NI2d}.
\begin{lemma}\label{L_steady}
Let $b:\mathbb{R}^d \to \mathbb{R}^d$ be an autonomous, bounded, nearly incompressible vector field and let $\Omega\subset \mathbb{R}^d$ be an open ball of radius $R>0$.
Assume that there exist $\delta>0$ and $e \in \S^{d-1}$ for which for $\L^d$-a.e. $x\in \Omega$ it holds $b(x)\cdot e \ge \delta$.
Then $b\llcorner \Omega$ is steady nearly incompressible, namely there exist $r:\Omega \to \mathbb{R}$ and $\tilde C>0$ such that
\begin{equation*}
\tilde C^{-1}\le r \le \tilde C \qquad \mbox{and} \qquad \div (rb)=0 \quad \mbox{in } \Omega.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\rho:(0,T)\times \mathbb{R}^d \to \mathbb{R}$ and $C>0$ be as in Definition \ref{D_NI}.
Let $\eta \in \mathcal{M}(\Gamma_T)$ be the Radon measure provided by Ambrosio's superposition principle.
In particular if we denote by
\begin{equation*}
\begin{split}
\tilde e: \Gamma_T\times [0,T] & \to [0,T]\times \mathbb{R}^d \\
(\gamma,t) & \mapsto (t,\gamma(t))
\end{split}
\end{equation*}
it holds
\begin{equation*}
\tilde e_\sharp (\eta \times \L^1) = \rho \left(\L^1 \times \L^d\right).
\end{equation*}
For every $\gamma\in \Gamma_T$ we set
\begin{equation*}
I_\gamma := \{t \in [0,T]: \gamma(t)\in \Omega\}.
\end{equation*}
Let $I_{\gamma,0}=[0,t^-_\gamma)$ be the (possibly empty) connected component of $I_\gamma$ containing $0$ and similarly let $I_{\gamma,T}=(t^+_\gamma,T]$ be the connected component of $I_\gamma$ containing $T$.
We denote by
\begin{equation*}
\tilde I_\gamma = [0,T] \setminus (I_{\gamma,0} \cup I_{\gamma,T})
\end{equation*}
and
\begin{equation*}
\Gamma^-:= \{ \gamma \in \Gamma_T : \gamma(0)\in \Omega\}, \qquad \Gamma^+:= \{ \gamma \in \Gamma_T : \gamma(T)\in \Omega\}.
\end{equation*}
Moreover we consider
\begin{equation*}
\tilde \eta = \eta \otimes (\L^1\llcorner \tilde I_\gamma).
\end{equation*}
By definition $\tilde \eta \le \eta \times \L^1$ therefore there exists $\tilde \rho \in L^\infty((0,T)\times \mathbb{R}^d)$ such that
\begin{equation*}
\tilde e_\sharp \tilde \eta = \tilde \rho (\L^1 \times \L^d).
\end{equation*}
with $0\le \tilde \rho \le \rho$.
The following standard computation shows that the density $\tilde \rho$ satisfies the continuity equation
\begin{equation*}
\partial_t \tilde \rho + \div_x(\tilde \rho b) = \mu \qquad \mbox{in }\mathcal{D}'((0,T)\times\mathbb{R}^d),
\end{equation*}
where
\begin{equation*}
\mu = \int_{\Gamma^-}\delta_{t^-_\gamma, \gamma(t^-_\gamma)} d\eta(\gamma) - \int_{\Gamma^+}\delta_{t^+_\gamma, \gamma(t^+_\gamma)} d \eta (\gamma).
\end{equation*}
Given $\varphi \in C^\infty_c((0,T)\times \mathbb{R}^d)$ it holds
\begin{equation*}
\begin{split}
\langle \partial_t \tilde \rho + \div_x(\tilde \rho b), \varphi \rangle = &~ - \int \tilde \rho(t,x) (\varphi_t(t,x) + b(t,x)\cdot \nabla_x\varphi(t,x))dxdt \\
=&~ - \int (\varphi_t(t,\gamma(t)) + b(t,\gamma(t))\cdot \nabla_x\varphi(t,\gamma(t))) \chi_{\tilde I_\gamma}(t)dt d\eta (\gamma) \\
=&~ - \int (\varphi_t(t,\gamma(t)) + \dot\gamma(t)\cdot \nabla_x\varphi(t,\gamma(t))) \chi_{\tilde I_\gamma}(t)dt d\eta (\gamma) \\
= &~ - \int \frac{d}{dt}\big(\varphi(t,\gamma(t))\big)\chi_{\tilde I_\gamma}(t)dt d\eta (\gamma) \\
= &~ \int \left( \varphi(t^-_\gamma,\gamma(t^-_\gamma))- \varphi(t^+_\gamma,\gamma(t^+_\gamma))\right) d\eta(\gamma) \\
= &~ \int \varphi d \mu,
\end{split}
\end{equation*}
where in the last equality we used that $\varphi (0,\cdot)=\varphi(T,\cdot)\equiv 0$.
In particular $\mu$ is concentrated on $[0,T]\times \partial \Omega$, so that
\begin{equation}\label{E_CE_Omega}
\partial_t \tilde \rho + \div_x(\tilde \rho b) = 0 \qquad \mbox{in }\mathcal{D}'((0,T)\times\Omega).
\end{equation}
Since $b\cdot e >\delta$ in $\Omega$, every connected component of $I_\gamma$ has length at most $2R/\delta$.
Up to changing the constant $C>0$, by Lemma \ref{L_all_T} we can assume that
\begin{equation*}
T\ge \frac{6R}{\delta},
\end{equation*}
therefore it follows that $\tilde \rho(t,z) = \rho(t,z)$ for $\L^1\times \L^d$-a.e. $(t,z) \in [T/3,2T/3] \times \Omega$, in particular
\begin{equation}\label{E_lower}
\tilde \rho (t,z) \ge C^{-1} \qquad \mbox{in }[T/3, 2T/3] \times \Omega.
\end{equation}
Since $\tilde \rho(0,x)=\tilde \rho (T,x)=0$ for $\L^d$-a.e. $x \in \Omega$, by integrating \eqref{E_CE_Omega} with respect to $t$, it follows that
\begin{equation*}
r(x) := \frac{1}{T}\int_0^T \tilde \rho(t,x)dt
\end{equation*}
satisfies $\div(rb)=0$ in $\mathcal{D}'(\Omega)$.
From \eqref{E_lower} and the definition of $r$, it follows that for $\L^d$-a.e. $x \in \Omega$ it holds
\begin{equation*}
\frac{1}{3 C} \le r(x) \le \|\rho\|_{L^\infty} \le C
\end{equation*}
and this proves the claim with $\tilde C = 3C$.
\end{proof}
In the rest of this paper we restrict to the case $d=2$, and in the remaining part of this section we always assume that the hypotheses of Proposition \ref{P_local} are satisfied.
In particular there exists a Lipschitz Hamiltonian $H:\Omega \to \mathbb{R}$ such that
\begin{equation}\label{E_Hamilton}
rb=\nabla^\perp H \qquad \L^2\mbox{-a.e. in }\Omega.
\end{equation}
The generic point in $\mathbb{R}^2$ will be denoted by $z=(x,y)$ and we assume without loss of generality that $e=e_1$. Since $b\cdot e_1>\delta$ and $r \in [\tilde C^{-1},\tilde C]$,
for every $h \in H(\Omega)$ there exist an open set $O_h \subset \mathbb{R}$ and a Lipschitz function $f_h:O_h \to \mathbb{R}$ such that
\begin{equation*}
\{z\in \Omega: H(z)=h\} = \{ (x,y) : x \in O_h, y = f_h(x)\}.
\end{equation*}
We will also denote by
\begin{equation}\label{E_tildef}
\tilde f_h(x) := (x,f_h(x))
\end{equation}
for every $x\in O_h$.
The Lipschitz constant $L$ of $f_h$ can be estimated by
\begin{equation}\label{E_L}
L \le \frac{\|rb\|_{L^\infty}}{\inf_{\Omega} (rb\cdot e_1)}\le \frac{C^2\|b\|_{L^\infty}}{\delta}.
\end{equation}
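Estimate \eqref{E_L} follows from a direct computation: differentiating (a.e.) the identity $H(x,f_h(x))=h$ gives, for a.e. $x \in O_h$,
\begin{equation*}
\partial_1 H + \partial_2 H \, f_h'(x) = 0, \qquad \mbox{so that} \qquad |f_h'(x)| = \frac{|\partial_1 H|}{|\partial_2 H|} = \frac{|rb_2|}{rb_1} \le \frac{\|rb\|_{L^\infty}}{\inf_{\Omega} (rb\cdot e_1)},
\end{equation*}
where we used that $rb=\nabla^\perp H$, namely $\partial_1 H = rb_2$ and $\partial_2 H = -rb_1$.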
In the following we consider vector fields $b$ with bounded variation.
As already mentioned in the introduction, the uniqueness problem for the regular Lagrangian flow associated to $rb$ was solved in \cite{BBG_NI2d}, where in particular it is proven that the local Hamiltonian is preserved by the flow, namely
\begin{equation*}
H(X(t,z))=H(z) \qquad \forall z \in \Omega \mbox{ and } \|b\|_{L^\infty}t< \dist(z,\partial\Omega).
\end{equation*}
We will consider the representative of the regular Lagrangian flow defined as follows:
for every $z = (x,y) \in \Omega$ and every $t$ with $\|b\|_{L^\infty}t< \dist(z,\partial\Omega)$, we set $X(t,z)=(X_1(t,z),f_{H(z)}(X_1(t,z)))$, where $X_1(t,z)$ is uniquely determined by
\begin{equation}\label{E_precise}
\int_{x}^{X_1(t,z)}\frac{1}{\tilde b_1(s, f_{H(z)}(s))}ds = t,
\end{equation}
and where $\tilde b$ denotes the precise representative of $b$, defined at $\H^1$-a.e. $z \in \mathbb{R}^2$ (see for example \cite{AFP_book}).
In particular, if we denote by
\begin{equation*}
R:=\{h\in H(\Omega): |Db|(H^{-1}(h)\cap \Omega)=0\},
\end{equation*}
it holds that for every $h \in R$, $\H^1$-a.e. $z \in H^{-1}(h)\cap \Omega$ is a Lebesgue point of $b$ with value $\tilde b(z)$. In the following we will still denote by $b$ the precise representative $\tilde b$.
\begin{proposition}\label{P_Lip_NI}
Let $b \in \BV(\mathbb{R}^2;\mathbb{R}^2)$ be a bounded autonomous nearly incompressible vector field.
Let $\Omega\subset \mathbb{R}^2$ be an open ball of radius $R>0$ such that there exist $\delta>0$ and $e\in \S^1$ for which $b\cdot e >\delta$ a.e. in $\Omega$.
Then there exist $g \in \BV_\text{\rm loc}(\mathbb{R})$ and a constant $C'=C'(R,\|b\|_{L^\infty},\delta, C)>0$ such that for every $z,z' \in \Omega$ with $H(z),H(z')\in R$ and every $\bar t >0$ for which
\begin{equation*}
\dist (z,\partial \Omega), \dist (z',\partial \Omega) > \|b\|_{L^\infty} \bar t,
\end{equation*}
it holds
\begin{equation}\label{E_Lip_loc}
|X(\bar t,z)-X(\bar t,z')|\le C' \big(|z-z'| + |g(H(z))-g(H(z'))|\big),
\end{equation}
where $C$ is the compressibility constant in Definition \ref{D_NI} and $H$ is the Hamiltonian introduced in \eqref{E_Hamilton}.
If moreover
$b\in W^{1,p}(\mathbb{R}^2;\mathbb{R}^2)$ for some $p\in [1,\infty)$, then \eqref{E_Lip_loc} holds with $g \in W^{1,p}_\text{\rm loc}(\mathbb{R})$.
\end{proposition}
\begin{proof}
We denote by $h=H(z)$ and $h'=H(z')$. By \eqref{E_precise} it follows that
\begin{equation}\label{E_period}
\int_{x}^{X_1(\bar t,z)}\frac{1}{b_1(\tilde f_{h}(s))}ds = \bar t = \int_{x'}^{X_1(\bar t,z')}\frac{1}{b_1(\tilde f_{h'}(s))}ds,
\end{equation}
where $\tilde f_h$ and $\tilde f_{h'}$ are defined in \eqref{E_tildef}.
Without loss of generality we assume $x\le x'$ and we also suppose that $X_1(\bar t, z)\le X_1(\bar t, z')$, the opposite case being analogous.
We first estimate the distance of the horizontal components of the flows.
We denote by
\begin{equation*}
I_1=(x,x'), \quad I_2 = (x', X_1(\bar t, z)), \quad I_3=(X_1(\bar t, z), X_1(\bar t,z')).
\end{equation*}
If $I_2 = \emptyset$, then, since $b \cdot e_1>\delta$ in $\Omega$,
\begin{equation*}
|z'-z|\ge |x'-x|\ge |X_1(\bar t,z) - x| \ge \bar t \delta,
\end{equation*}
therefore
\begin{equation*}
\begin{split}
|X_1(\bar t, z')-X_1(\bar t, z)| & \le ~ |X_1(\bar t, z')-x'| + |x'-x| + |x- X_1(\bar t, z)| \\
& \le ~ \|b\|_{\infty} \bar t + |x'-x| + \|b\|_{\infty} \bar t \\
& \le ~ \left( \frac{2\|b\|_{\infty}}{\delta} + 1\right) |x'-x|.
\end{split}
\end{equation*}
If $I_2\ne \emptyset$, it follows by \eqref{E_period} that
\begin{equation*}
\begin{split}
|X_1(\bar t, z') - X_1(\bar t, z)| & \le ~ \|b\|_{\infty}\int_{I_3}\frac{1}{b_1(\tilde f_{h'}(s))}ds \\
& = ~ \|b\|_{\infty}\left( \int_{I_1}\frac{1}{b_1(\tilde f_{h}(s))}ds + \int_{I_2}\frac{1}{b_1(\tilde f_{h}(s))}ds - \int_{I_2} \frac{1}{b_1(\tilde f_{h'}(s))}ds\right) \\
& \le ~ \frac{\|b\|_{\infty}}{\delta}|x'-x| + \|b\|_{\infty}\int_{I_2}\left|\frac{1}{b_1(\tilde f_{h}(s))}-\frac{1}{b_1(\tilde f_{h'}(s))}\right| ds\\
& \le ~ \frac{\|b\|_{\infty}}{\delta}|x'-x| + \frac{\|b\|_{\infty}}{\delta^2}|Db| (H^{-1}(I(h,h'))),
\end{split}
\end{equation*}
where $I(h,h')$ denotes the closed interval with endpoints $h$ and $h'$; in the last inequality we used that the function $v\mapsto 1/v$ is $\delta^{-2}$-Lipschitz on $(\delta,+\infty)$.
Then we estimate the difference of the vertical components:
\begin{equation}\label{E_vert}
|X_2(\bar t, z')-X_2(\bar t, z)| \le ~ |X_2(\bar t, z') - f_{h'}(X_1(\bar t, z))| + |f_{h'}(X_1(\bar t, z)) - X_2(\bar t, z)|.
\end{equation}
By definition of $f_{h'}$ it holds
\begin{equation}\label{E_vert1}
|X_2(\bar t, z') - f_{h'}(X_1(\bar t, z))| = |f_{h'}(X_1(\bar t, z')) - f_{h'}(X_1(\bar t, z))| \le L |X_1(\bar t, z') - X_1(\bar t, z)|,
\end{equation}
where $L$ denotes the Lipschitz constant of the function $f_{h'}$ and is bounded by $\frac{C^2\|b\|_{L^\infty}}{\delta}$ as in \eqref{E_L}.
By definition of $f_h$ we have that
\begin{equation}\label{E_vert2}
\begin{split}
|f_{h'}(X_1(\bar t, z)) - X_2(\bar t, z)| = & ~ |f_{h'}(X_1(\bar t, z)) - f_{h}(X_1(\bar t, z))| \\
\le & ~ \frac{|h'-h|}{\inf_\Omega |\partial_2 H|} \\
\le & ~ \frac{C|h'-h|}{\delta} \\
\le &~ \frac{C^2\|b\|_{L^\infty}|z'-z|}{\delta}
\end{split}
\end{equation}
where we used $b_1\ge \delta$, $\partial_y H = -rb_1$ and $\|\nabla H\|_{L^\infty}\le C\|b\|_{L^\infty}$.
Plugging \eqref{E_vert1} and \eqref{E_vert2} in \eqref{E_vert}, we finally obtain
\begin{equation*}
|X_2(\bar t, z')-X_2(\bar t, z)| \le
\frac{C^2 \|b\|_{L^\infty}}{\delta} \left[ \left(1 + \frac{2\|b\|_{L^\infty}}{\delta}\right)|z'-z|
+ \frac{\|b\|_{L^\infty}}{\delta^2} |Db| (H^{-1}(I(h,h')))
\right]
\end{equation*}
so that \eqref{E_Lip_loc} holds with
\begin{equation*}
C'=\frac{C^2\|b\|_{\infty}}{\delta^2}\left(1+\frac{2\|b\|_{L^\infty}}{\delta}+\frac{\|b\|_{L^\infty}}{\delta^2}\right), \qquad g(h)= |Db| (\{H\le h\}).
\end{equation*}
Notice that $g \in \BV_\text{\rm loc}(\mathbb{R})$ by construction, since $Dg = H_\sharp |Db|$ is a finite Radon measure.
If $b\in W^{1,p}(\mathbb{R}^2;\mathbb{R}^2)$ the same computation leads to \eqref{E_Lip_loc} with
\begin{equation*}
g(h) = \int_{\{H\le h\}}|Db|(z)dz.
\end{equation*}
It only remains to check that $g \in W^{1,p}_\text{\rm loc}(\mathbb{R})$.
Denoting by $E_h=\{z\in \Omega: H(z)=h\}$, by the coarea formula we have that
\begin{equation*}
|\nabla H| \L^2\llcorner \Omega = \int_\mathbb{R} \H^1\llcorner E_h dh \qquad \mbox{so that} \qquad \L^2 = \int_\mathbb{R} \frac{1}{|\nabla H|}\H^1\llcorner E_h dh
\end{equation*}
and therefore
\begin{equation*}
g'(h) = \int_{ E_h } \frac{|Db|}{|\nabla H|} d\H^1.
\end{equation*}
Since $|\nabla H| = |r b | >\delta/C$, by Jensen's inequality and the coarea formula we get
\begin{equation}\label{E_gp}
\begin{split}
\int |g'|^p = &~ \int_\mathbb{R} \left| \int_{E_h} \frac{|Db|}{|\nabla H|} d\H^1\right|^p dh \\
\le &~ \int_\mathbb{R} \frac{(C\H^1 (E_h))^{p-1}}{\delta^{p-1}}\int_{ E_h} \frac{|Db|^p}{|\nabla H|} d\H^1 dh \\
\le &~ \left(\frac{2C\sqrt{1+L^2}R}{\delta}\right)^{p-1} \int_\Omega |Db|^p,
\end{split}
\end{equation}
where $L$ is as above. This concludes the proof of the proposition.
\end{proof}
In order to conclude the proof of Proposition \ref{P_local}, we deduce in the following two lemmas the BV and Sobolev regularity of the flow from the pointwise estimate obtained in Proposition \ref{P_Lip_NI}.
\begin{corollary}\label{C_BV}
In the same setting as in Proposition \ref{P_Lip_NI} let $\Omega' \subset \Omega$ be an open set and $\bar t>0$ be such that $\dist(\Omega', \partial \Omega)>\|b\|_{L^\infty}\bar t$.
Then
\begin{equation*}
X(\bar t) \in \BV(\Omega').
\end{equation*}
\end{corollary}
\begin{proof}
From Proposition \ref{P_Lip_NI} it is sufficient to check that $g\circ H\in \BV(\Omega')$. Let $g_n$ be a sequence of smooth functions converging to $g$ in $L^1_\text{\rm loc}(\mathbb{R})$ with $\text{\rm Tot.Var.}_\mathbb{R} (g_n) \le \text{\rm Tot.Var.}_\mathbb{R} (g)$.
By coarea formula
\begin{equation*}
H_\sharp (\L^2\llcorner \Omega') = \rho \L^1, \qquad \mbox{with} \quad\rho(h) = \int_{E_h}\frac{1}{|\nabla H|}d\H^1.
\end{equation*}
In particular
\begin{equation*}
\rho(h) \le \frac{C\H^1(E_h)}{\delta} \le \frac{2RC\sqrt{1+L^2}}{\delta}
\end{equation*}
is uniformly bounded.
Hence $g_n\circ H$ converges in $L^1(\Omega')$ to $g\circ H$ and
\begin{equation*}
\begin{split}
\text{\rm Tot.Var.}_{\Omega'} (g \circ H) \le &~ \liminf_{n\to \infty} \text{\rm Tot.Var.}_{\Omega'}(g_n\circ H) \\
= &~ \liminf_{n\to \infty} \int_{\Omega'} |g_n'( H(z))\nabla H(z)| dz \\
\le &~ \liminf_{n\to \infty} \|\nabla H\|_{L^\infty} \int |g_n'(h)|\rho(h)dh \\
\le &~ \|\nabla H\|_{L^\infty} \|\rho\|_{L^\infty} \text{\rm Tot.Var.}_\mathbb{R}(g). \qedhere
\end{split}
\end{equation*}
\end{proof}
\begin{corollary}\label{C_W1p}
Let us consider the same setting as in Proposition \ref{P_Lip_NI} with $b\in W^{1,p}(\mathbb{R}^2;\mathbb{R}^2)$. Let $\Omega' \subset \Omega$ be an open set and $\bar t>0$ be such that $\dist(\Omega', \partial \Omega)>\|b\|_{L^\infty}\bar t$. Then
\begin{equation*}
X(\bar t) \in W^{1,p}(\Omega').
\end{equation*}
\end{corollary}
\begin{proof}
From Proposition \ref{P_Lip_NI} it is sufficient to check that $g\circ H \in W^{1,p}(\Omega')$. By chain rule and coarea formula we have
\begin{equation*}
\begin{split}
\int_{\Omega'} |D (g\circ H)|^p dz \le &~ \int_{\Omega'}|g' \circ H|^p |\nabla H|^p dz \\
\le &~ (C\|b\|_{\infty})^{p-1} \int_{\Omega'}|g' \circ H|^p |\nabla H| dz \\
\le &~ (C\|b\|_{\infty})^{p-1} \int_\mathbb{R} \int_{E_h}|g'\circ H|^pd \H^1dh \\
= &~ (C\|b\|_{\infty})^{p-1} \int_\mathbb{R} |g'|^p(h) \H^1(E_h)dh \\
\le &~ 2\sqrt{1+L^2}R(C\|b\|_{\infty})^{p-1}\int_\mathbb{R} |g'|^p(h)dh. \qedhere
\end{split}
\end{equation*}
\end{proof}
\section{Global estimate for divergence free vector fields}
In this section we prove Theorem \ref{T_global}.
In the next lemma we show that we can cover $\Omega$ with countably many open sets which are invariant under the flow and on which $|b|$ is uniformly bounded away from $0$.
\begin{lemma}\label{L_Omega_k}
Let $b$ and $\Omega$ as in Theorem \ref{T_global} and let $H \in C^1_c(\mathbb{R}^2)$ be the Hamiltonian associated to $b$ as in \eqref{E_Hamiltonian}.
For every $k\in \mathbb{N}$ let $\Omega_k := H^{-1}(\{ h: \min_{H^{-1}(h)}|b|>1/k \})$. Then $\Omega_k$ is open,
\begin{equation*}
\overline \Omega_k \subset \Omega_{k+1}, \qquad \mbox{and} \qquad \Omega = \bigcup_{k\in \mathbb{N}} \Omega_k.
\end{equation*}
\end{lemma}
\begin{proof}
The last equality follows immediately from the definition of $\Omega$ and the continuity of $b$. In order to complete the proof it is sufficient to check that
the map
\begin{equation*}
h \mapsto \min_{H^{-1}(h)}|b|
\end{equation*}
is continuous on the set $\mathcal R$ of regular values of $H$.
Both the lower semicontinuity and the upper semicontinuity are straightforward consequences of the continuity of $b$ and the compactness of the level sets $H^{-1}(h)$ with regular value $h$.
\end{proof}
The main estimate in the proof of Theorem \ref{T_global} is proven in the following lemma.
\begin{lemma}\label{L_main}
Let $H \in C^1_c(\mathbb{R}^2)$ be such that $b := \nabla ^\perp H \in \BV(\mathbb{R}^2)$ and let $k\in \mathbb{N}$ and $\Omega_k\subset \mathbb{R}^2$ be as above.
Then there exist a representative of the regular Lagrangian flow $X$, $g \in \BV_\text{\rm loc} \cap C^0(\mathbb{R})$ and $r>0$ such that for every $t>0$ the following holds:
there exist $c_1>0$ and $c_2>0$ such that $\forall \bar z \in \Omega _k$ and every $z \in B_r(\bar z)$ there exists $s>0$ such that
\begin{enumerate}
\item $|X(t,\bar z) -X(s,z)| \le c_1 |H(\bar z) - H(z)|$ \\
\item $ |t-s| \le c_2 \left( |g(H(\bar z)) - g(H(z))| + |\bar z -z|\right)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is divided in several steps.
\noindent
\emph{Step 1}.
By Lemma \ref{L_Omega_k}, $\Omega_k$ is compactly contained in the open set $\Omega_{k+1}$. Since $b$ is continuous and $|b|$ is uniformly bounded from below on $\Omega_{k+1}$, for every $L>0$ there exist $\bar r>0$ and a finite covering $(B_{\bar r}(z_i))_{i=1}^N$ of $\Omega_k$ such that
\begin{enumerate}
\item for every $i=1,\ldots, N$ it holds $B_{4\bar r}(z_i)\subset \Omega_{k+1}$;
\item for every $i=1,\ldots, N$ there exists $e_i\in \S^1$ such that
\begin{equation}\label{E_flat}
b(z) \cdot e_i \ge |b(z)| \cos (\tan^{-1}(L)) \qquad \forall z \in B_{4\bar r}(z_i).
\end{equation}
\end{enumerate}
We take $L>0$ sufficiently small so that $\cos(\tan^{-1}(L))> 1/2$ and such that
for every $i=1,\ldots, N$ and for every $h \in H(B_{3\bar r}(z_i))$ there exist an open interval $I_{i,h}\subset \mathbb{R}$
and a $L$-Lipschitz function $f_{i,h}:I_{i,h}\to \mathbb{R}$ such that
\begin{equation*}
H^{-1}(h) \cap B_{4\bar r}(z_i) = \left\{ z \in \mathbb{R}^2: z\cdot e_i \in I_{i,h}, \quad z\cdot e_i^\perp = f_{i,h}(z\cdot e_i) \right\}.
\end{equation*}
\noindent
\emph{Step 2}.
We show that the function $g:\mathbb{R}\to \mathbb{R}$ defined by
\begin{equation*}
g(h):= |Db|(\{H\le h\}\cap \Omega_{k+1})
\end{equation*}
is continuous and with bounded variation.
Since $Dg= H_\sharp |Db|\llcorner \Omega_{k+1}$ is a finite measure, the function $g$ has bounded variation.
In order to prove that $g$ is continuous it is sufficient to check that for every $h\in \mathbb{R}$ it holds
\begin{equation*}
|Db|(H^{-1}(h)\cap \Omega_{k+1})=0.
\end{equation*}
By Step 1, the set $H^{-1}(h)\cap \Omega_{k+1}$ is the union of finitely many Lipschitz curves of finite length. Since $b$ is continuous, the measure $|Db|$ vanishes on all sets with finite $\mathcal H^1$ measure (see for example \cite{AFP_book}), and this proves the continuity of $g$.
\noindent
\emph{Step 3}. Given $T>0$ and $\bar z \in \Omega_k$, we denote by
\begin{equation*}
\tilde N := \left\lceil \frac{T\|b\|_{L^\infty}}{\bar r} \right\rceil, \qquad \mbox{and} \qquad I_j:=\left[ \frac{j-1}{\tilde N}T, \frac{j}{\tilde N}T\right]=:[t_{j-1},t_j] \quad \mbox{for }j=1,\ldots, \tilde N.
\end{equation*}
Moreover for every $j \in 1,\ldots, \tilde N$ we consider $i=i(j) \in 1,\ldots, N$ such that
\begin{equation*}
X(t_j,\bar z) \in B_{\bar r}(z_{i(j)}).
\end{equation*}
We set
\begin{equation}\label{E_def_r}
r:= \min \left\{ \bar r, \frac{\bar r}{2(k+1)\|b\|_{L^\infty}}, \frac{T}{2\tilde N (k+1)} \right\}.
\end{equation}
For every $z\in \mathbb{R}^2$ with $|z-\bar z |\le r$ we prove that for every $j=1,\ldots, \tilde N$ there exists $s_j >0$ such that
\begin{equation*}
|X(t_j,\bar z)-X(s_j,z)|\le 2(k+1) |\bar h - h|
\end{equation*}
and
\begin{equation}\label{E_recursion}
|t_j-s_j| \le 2(k+1)|z-\bar z| + j(k+1)^2|g(\bar h) - g(h)| + 2(j-1)(k+1)^2|\bar h -h|.
\end{equation}
By \eqref{E_def_r} and the definition of $(z_{i})_{i=1}^N$, for every $j=1,\ldots, \tilde N$ there exists a unique point $\tilde z_j \in \Omega_{k+1}$ in $z \in B_{2\bar r}(z_{i(j)})$ such that
\begin{equation*}
H(\tilde z_j) = h \qquad \mbox{and} \qquad \tilde z_j \cdot e_i = X(t_j, \bar z) \cdot e_i.
\end{equation*}
We immediately have
\begin{equation*}
|X(t_j,\bar z) - \tilde z_j| = | ( X(t_j,\bar z) - \tilde z_j)\cdot e_i^\perp| \le \frac{|\bar h - h|}{\min_{B_{2\bar r}(z_{i(j)})}b\cdot e_j}\le 2(k+1)|\bar h - h|.
\end{equation*}
Notice in particular that $|X(t_j,\bar z) - \tilde z_j|\le \bar r$ by \eqref{E_def_r}.
In order to prove the claim, it is sufficient to show that for every $j=1,\ldots, \tilde N$, there exists $s_j\ge 0$ satisfying \eqref{E_recursion} and such that $X(s_j)=\tilde z_j$. We prove this by induction on $j$.
\noindent \emph{Case $j=1$}.
First we observe that
\begin{equation*}
\left\{ \bar z \cdot e_{i(1)}, z \cdot e_{i(1)}, X(t_1,\bar z) \cdot e_{i(1)}\right\} \subset I_{i(1),h} \cap I_{i(1),\bar h}.
\end{equation*}
In particular $z$ and $\tilde z_1$ belong to the same connected component of $H^{-1}(h)$.
Since $|z-\bar z|< r$ it trivially holds
\begin{equation*}
\bar z \cdot e_{i(1)} - r< z \cdot e_{i(1)}< \bar z \cdot e_{i(1)} + r.
\end{equation*}
Moreover
\begin{equation*}
X(t_1, \bar z) \cdot e_{i(1)} \ge \bar z \cdot e_{i(1)} + t_1 \min_{B_{4\bar r}(z_{i(1)})}b\cdot e_{i(1)} \ge \bar z \cdot e_{i(1)} + \frac{t_1}{2(k+1)} > \bar z \cdot e_{i(1)} + r.
\end{equation*}
We assume $ z \cdot e_{i(1)} \ge \bar z \cdot e_{i(1)}$, being the opposite case analogous. We denote by $\tilde t_0\ge 0$ the unique
$t \in [0,t_1)$ such that $X(t,\bar z) \cdot e_{i(1)}= z \cdot e_{i(1)}$.
We have
\begin{equation*}
\tilde t_0 \le \frac{|z-\bar z|}{\min_{B_{4\bar r}(z_{i(1)})} b\cdot e_{i(1)}} \le 2(k+1)|z-\bar z|.
\end{equation*}
Moreover
\begin{equation*}
t_1-\tilde t_0= \int_{z\cdot e_{i(1)}}^{\tilde z_1 \cdot e_{i(1)}} \frac{1}{b\cdot e_{i(1)}}(\bar f_{i,\bar h}(x))dx
\end{equation*}
and similarly
\begin{equation*}
s_1 = \int_{z\cdot e_{i(1)}}^{\tilde z_1 \cdot e_{i(1)}}\frac{1}{b\cdot e_{i(1)}}(\bar f_{i, h}(x))dx.
\end{equation*}
We denote by
\begin{equation*}
S:= \{ z' \in \mathbb{R}^2 : z'\cdot e_{i(1)}\in (z\cdot e_{i(1)}, \tilde z_1 \cdot e_{i(1)}), z'\cdot e_{i(1)} \in
(f_{i(1),h}(z'\cdot e_{i(1)}), f_{i(1),\bar h}(z'\cdot e_{i(1)}) ) \}.
\end{equation*}
Therefore
\begin{equation*}
\begin{split}
|t_1 - s_1| \le &~ |\tilde t_0| + |t_1-\tilde t_0 - s_1| \\
\le &~ 2(k+1)|z-\bar z|+ \int_{z\cdot e_{i(1)}}^{\tilde z_1 \cdot e_{i(1)}} \left| \frac{1}{b\cdot e_{i(1)}}(\bar f_{i,\bar h}(x)) - \frac{1}{b\cdot e_{i(1)}}(\bar f_{i, h}(x)) \right| dx \\
\le &~ 2(k+1)|z-\bar z| + \left(\frac{1}{\inf_{S} b \cdot e_{i(1)}}\right)^2 |Db|(S) \\
\le &~ 2(k+1) |z-\bar z| + 4(k+1)^2 |g(\bar h)- g(h)|,
\end{split}
\end{equation*}
where, in the third inequality we used that the maps $v \mapsto 1/v$ is $1/\delta^2$-Lipschitz on $[\delta, +\infty)$ with $\delta= \inf_{S} b\cdot e_{i(1)}$.
This proves \eqref{E_recursion} for $j=1$.
\noindent \emph{Case $j>1$}. We assume
\begin{equation}\label{E_hyp_rec}
|t_{j-1}-s_{j-1}| \le 2(k+1)|z-\bar z| + (j-1)(k+1)^2|g(\bar h) - g(h)| + 2(j-2)(k+1)^2|\bar h -h|
\end{equation}
and we prove \eqref{E_recursion}.
We observe that $|X(t_j,\bar z)-X(t_{j-1},\bar z)|\le \bar r$, therefore \eqref{E_flat} implies that $b \cdot e_{i(j)}> 1/(k+1)$ in $B_{2\bar r}(X(t_{j-1},\bar z))$.
In particular we can define $\tilde w_{j-1}$ as the unique $z\in B_{2\bar r}(X(t_{j-1},\bar z))$ such that
\begin{equation*}
H(z) = h \qquad \mbox{and} \qquad z \cdot e_{i(j)} = X(t_{j-1},\bar z) \cdot e_{i(j)}.
\end{equation*}
We have
\begin{equation*}
|\tilde w_{j-1}- \tilde z_{j-1}| \le |\tilde w_{j-1}- X(t_{j-1},\bar z)| + |X(t_{j-1},\bar z)- \tilde z_{j-1}| \le 2(k+1)|h-\bar h|.
\end{equation*}
Since $\tilde w_{j-1}\cdot e_{i(j)}, \tilde z_{j-1}\cdot e_{i(j)} \in I_{i(j-1),h}$, then there exists $s_{j-1}>0$ such that
$\tilde w_{j-1}= X(\tilde s_{j-1}, z)$ with
\begin{equation}\label{E_est1}
|\tilde s_{j-1} - s_{j-1}| \le \frac{2(k+1)}{\min_{B_{4\bar r}(z_{i(j-1)}))} b\cdot e_{i(j-1)}}|\bar h - h| \le 2(k+1)^2 |\bar h -h|.
\end{equation}
By the triangular inequality
\begin{equation}\label{E_triangular}
\begin{split}
|t_j - s_j| \le &~ | t_j - t_{j-1} + t_{j-1} - s_{j-1} + s_{j-1} - \tilde s_{j-1} + \tilde s_{j-1} - s_j| \\
\le &~ |t_{j-1}-s_{j-1}| + |t_j - t_{j-1}- s_j + \tilde s_{j-1}| + | \tilde s_{j-1}- s_{j-1}|.
\end{split}
\end{equation}
The same computation as in the case $j=1$ gives
\begin{equation}\label{E_est2}
\begin{split}
|t_j - t_{j-1}- s_j + \tilde s_{j-1}| = &~ \left| \int_{X(t_{j-1},\bar z)\cdot e_{i(j)}}^{X(t_{j},\bar z)\cdot e_{i(j)}}
\left(\frac{1}{b\cdot e_{i(j)}}(\bar f_{i,\bar h}(x)) - \frac{1}{b\cdot e_{i(j)}}(\bar f_{i, h}(x)) \right) dx \right| \\
\le &~ (k+1)^2 |g(\bar h)-g(h)|.
\end{split}
\end{equation}
By plugging \eqref{E_hyp_rec}, \eqref{E_est1} and \eqref{E_est2} into \eqref{E_triangular}, we finally get \eqref{E_recursion}.
The statement is therefore proven with $s=s_{\tilde N}$, $c_1=2(k+1)$ and $c_2= \tilde N(k+1)^2 (1+2 \|b\|_{L^\infty}) + 2(k+1)$.
\end{proof}
\begin{remark}\label{R_W1p}
If we additionally assume that $b \in W^{1,p}(\mathbb{R}^2)$ in Lemma \ref{L_main}, then the statement holds true with
\begin{equation*}
g(h):= \int_{\{H\le h\}\cap \Omega_{k+1}}|\nabla H| dz.
\end{equation*}
In particular we showed in the proof of Proposition \ref{P_local} that $g \in W^{1,p}_\text{\rm loc}(\mathbb{R})$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{T_global}.]
By Lemma \ref{L_Omega_k}, it is sufficient to prove that $X(t)\in \BV(\Omega_k)$ for every $k \in \mathbb{N}$.
In the same setting as in Lemma \ref{L_main}, if $\bar z \in \Omega_k$ and $z\in B_r(\bar z)$, then
\begin{equation}\label{E_est_global}
\begin{split}
|X(t,\bar z) - X(t,z)| \le &~ |X(t,\bar z) - X(s, z)| + |X(s,z)- X(t,z)| \\
\le &~ c_1 |H(\bar z) - H(z)| + \|b\|_{L^\infty} |t-s| \\
\le &~ c_1 |H(\bar z) - H(z)| + c_2 \|b\|_{L^\infty} \left( |g(H(\bar z)) - g(H(z))| + |\bar z -z|\right) \\
\le &~ \|b\|_{L^\infty}(c_1+c_2) |\bar z -z| + c_2 \|b\|_{L^\infty} |g(H(\bar z)) - g(H(z))|.
\end{split}
\end{equation}
The argument in the proof of Corollary \ref{C_BV} shows that $g\circ H \in \BV(\Omega_k)$ therefore it follows from \eqref{E_est_global} that
$X(t) \in \BV(\Omega_k \cap B_r(z))$ for every $z \in \Omega_k$. Being $\Omega_k$ bounded this proves that $X(t) \in \BV(\Omega_k)$.
Finally the continuity of $X(t)$ follows immediately from \eqref{E_est_global} and the continuity of $g$.
If moreover we assume that $b \in W^{1,p}(\mathbb{R}^2)$, then the same argument proves that $X(t) \in W^{1,p}(\Omega_k)$ thanks to Remark \ref{R_W1p} and Corollary \ref{C_W1p}.
\end{proof}
\begin{remark}
By inspection in the argument used to prove Lemma \ref{L_main}, we observe that $\|X(t)\|_{\BV}(\Omega)$ (or $\|X(t)\|_{W^{1,p}}(\Omega)$) is locally bounded for $t\in [0,+\infty)$ and it diverges at most linearly in $t$ as $t \to \infty$.
\end{remark}
\section{Example}
In this section we prove Proposition \ref{P_example}.
\subsection{Construction of $C_n$, $D_n$, $E_n$ and $F_n$}
We consider the following parameters:
\begin{equation}\label{E_parameters}
c_n = \frac{1}{n^22^n}, \qquad a_n = \frac{n-1}{2n}\left( \frac{c_n}{2}-c_{n+1}\right)\sim \frac{1}{2^{n-1}n^3}, \qquad r_n = \frac{1}{2n}\left( \frac{c_n}{2}-c_{n+1}\right)\sim \frac{1}{2^{n-1}n^4}.
\end{equation}
We set $C_1= [0,1/2]^2\subset \mathbb{R}^2$ and we inductively define $C_{n+1}$ for $n\ge 1$ as follows:
$C_{n+1}\subset C_{n}$ and every connected component $R'$ of $C_{n}$ contains two connected components of $C_{n+1}$, which are squares of side $c_n$ as in Figure \ref{F_Cn}.
For every $n\in \mathbb{N}$ we also consider the sets $D_n, E_n, F_n \subset C_n$ as in Figure \ref{F_Cn}.
\begin{figure}
\centering
\def0.6\columnwidth{\columnwidth}
\input{C_n.pdf_tex}
\caption{Construction of the set $C_{n+1}, D_n, E_n, F_n \subset C_n$. The two pictures represent a connected component of $C_n$.}\label{F_Cn}.
\end{figure}
We observe that for every $n\ge 1$ it holds
\begin{equation*}
C_{n+1}= \{ z \in D_n : \dist (z,\partial D_n) \ge r_n\}.
\end{equation*}
\subsection{Construction of $f_n$ and $h_n$}
The function $f_0:\mathbb{R}^2 \to \mathbb{R}$ is defined by $f_0(x,y)=y$. The function $f_n$ coincides with $f_{n-1}$ on $\mathbb{R}^2 \setminus C_n$ and its level lines in $C_n$ are as in Figure \ref{F_lines}.
In particular $f_n$ coincides with $f_{n-1}$ on $F_n$.
\begin{figure}
\centering
\def0.6\columnwidth{0.6\columnwidth}
\input{lines.pdf_tex}
\caption{Level sets of $f_n$ on a connected component $R'$ of $C_n$. The set $D_n\cap R'$ is colored in gray.}\label{F_lines}.
\end{figure}
Let $R$ be a connected component of $D_n$, then $f_n$ is affine on $R$ and depends only on $y$, therefore $\nabla f_n=(0,v_n)$.
Let $R'$ be a connected component of $C_n$ and denote by $s_n=\mathrm Osc(f_n,R')$.
\begin{equation*}
v_n = \frac{\mathrm Osc(f_n,R)}{\mathrm height(R)} = \frac{\mathrm Osc(f_n,R')}{\mathrm height(R')},
\end{equation*}
where $\mathrm height(R)=c_{n+1}+2r_n$, $\mathrm height(R')=c_{n+1}$, $\mathrm Osc(f_n,R) = s_n/4$ and $ \mathrm Osc(f_n,R') = s_{n+1}$ so that
\begin{equation*}
4s_{n+1}= \frac{c_{n+1}}{c_{n+1}+2r_n}s_n.
\end{equation*}
In particular
\begin{equation*}
4^ns_n = 4 c_1 \prod_{l=2}^n\frac{c_l}{c_l+2r_{l-1}} \searrow 4c_1 \prod_{l=2}^\infty \frac{c_l}{c_l+2r_{l-1}}.
\end{equation*}
From the choice \eqref{E_parameters} it follows that $r_{l-1} c_l^{-1}=O(l^{-2})$ therefore
\begin{equation*}
\log\left(\frac{c_l}{c_l+2r_{l-1}}\right) = O(l^{-2}).
\end{equation*}
In particular the infinite product is strictly positive and we denote it by $\sigma$. We finally get
\begin{equation*}
v_n = \frac{s_{n+1}}{c_{n+1}} \sim c_1\sigma\frac{n^2}{2^n}.
\end{equation*}
Similarly we compute the speed $\nabla f_n = (0,v_n')$ in the region $E_n$ as in the picture.
Denoting by $R''$ one of its components, we have
\begin{equation*}
v'_n = \frac{\mathrm Osc(f_n,R'')}{\mathrm height(R'')} = \frac{s_n}{8a_n} \sim \frac{c_1\sigma}{4} \frac{n^3}{2^n}.
\end{equation*}
\subsection{Estimates on the norms of $\nabla f_n$ and $\nabla h_n$}
We first estimate $\|\nabla f_n\|_{L^\infty(C_n)}$. From Figure \ref{F_lines} we observe that $ \|\partial_2 f_n\|_{L^\infty(C_n)} = v'_n$
and the maximal slope of the level sets of $f_n$ in $C_n$ is $\frac{c_n - 8 a_n}{4a_n}$.
Therefore
\begin{equation*}
\|\nabla f_n\|_{L^\infty}(C_n) \le v'_n\left( \frac{c_n - 8 a_n}{4a_n} + 1\right) \sim \frac{c_1\sigma}{16}\frac{n^4}{2^n}.
\end{equation*}
Since $f_n$ and $f_{n-1}$ coincide outside $C_n$, it holds
\begin{equation*}
\|\nabla h_n\|_{L^\infty}\le \|\nabla f_n\|_{L^\infty(C_n)} + \|\nabla f_{n-1}\|_{L^\infty(C_{n-1})} = O \left(\frac{n^4}{2^n}\right).
\end{equation*}
This proves that
\begin{equation*}
f = \lim_{n\to \infty}f_n = f_0 + \sum_{l=1}^\infty h_l
\end{equation*}
is a Lipschitz function.
\subsection{Estimate on crossing time}\label{Ss_crossing}
Let $T^1_n$ be the amount of time needed by an integral curve of the vector field $-\nabla^\perp f_{n-1}$ to cross a connected component of $C_n$.
Then
\begin{equation*}
T^1_n= \frac{c_n}{v_{n-1}}\sim \frac{1}{2c_1\sigma n^4}.
\end{equation*}
Let moreover $T^s_n$ be the amount of time needed by an integral curve of the vector field $-\nabla^\perp f_n$ intersecting $D_n$ to cross a connected component of $C_n$.
Since $a_n+r_n=o(c_n)$, then $T^s_n$ is asymptotically equivalent at the sum of the amounts of time needed to cross a connected component of $D_n$ and a connected component of $F_n$, namely
\begin{equation*}
T^s_n \sim \frac{1}{2c_1\sigma n^4} + \frac{1}{4c_1\sigma n^4} .
\end{equation*}
Finally let $T^f_n$ be the amount of time needed by an integral curve of the vector field $-\nabla^\perp f_n$ intersecting $E_n$ to cross a connected component of $C_n$:
similarly as above we have the $T^f_n$ is asymptotically equivalent to the amount of time needed to cross a connected component of $F_n$, namely
\begin{equation*}
T^f_n \sim \frac{1}{4c_1\sigma n^4}.
\end{equation*}
Let us denote by $X_n$ the flow of $-\nabla^\perp f_n$ and by $X$ the flow of $-\nabla^\perp f$.
For every $z=(x,y) \in \mathbb{R}^2$ with $x<0$ we define $t_1(z)$ as the unique $t>0$ such that $X(t_1(x))\cdot e_1 = 0$ and
$t_2(z)$ as the unique $t>0$ such that $X(t_1(x))\cdot e_1 = 1$. Since $f(x,y)=y$ for every $(x,y)\in \mathbb{R}^2$ with $x<0$, then the function $z\mapsto t_2(z)-t_1(z)$ depends only on $y$. We therefore set
\begin{equation*}
T(y):= t_2(-1,y)-t_1(-1,y)
\end{equation*}
and we observe that for every $x<0$ it holds $T(y)=t_2(x,y)-t_1(x,y)$.
Let $z=(x,y) \in \mathbb{R}^2$ be such that $x <0$ and there exists $t>0$ for which $X_n(t,z) \in E_n$. Then, by construction, $X(t,z)=X_n(t,z)$ for every $t>0$ and therefore
\begin{equation*}
T(y)= T_1 + \sum_{l=2}^{n-1} ( T^s_l - T^1_l) + T^f_n - T^1_n= : T_n
\end{equation*}
By construction there exist $0<y_1<y_2<\ldots < y_{2^{n-1}}<1$ such that for every $x<0$ and every $k \in [1, 2^{n-1}] \cap \mathbb{N}$
there exists $t=t(k)>0$ for which
\begin{equation*}
X(t,-1,y_{k}) \in E_n \quad \mbox{if $k$ is even}\qquad \mbox{and} \qquad X(t,-1,y_{2k-1})\in E_{n-1} \quad \mbox{if $k$ is odd}.
\end{equation*}
In particular
\begin{equation*}
\text{\rm Tot.Var.}_{(0,1)} T \ge 2^{n-1} (T_n- T_{n-1}) = 2^{n-1}(T_n^f - T^1_n + T^s_{n-1}-T^f_{n-1}) \sim \frac{2^n}{8c_1 \sigma n^4}.
\end{equation*}
This shows that $T$ has not bounded variation.
\subsection{Regularity of the flow}\label{Ss_regularity}
We observe that the function $T$ constructed in the previous section is bounded:
indeed
\begin{equation*}
\sup T = \sup_n T_n\le T_0 + \sum_{n=2}^\infty |T_n-T_{n-1}| \le 1 + C \sum_{n=1}^\infty \frac{1}{4c_1 \sigma n^4} < \infty
\end{equation*}
for some universal constant $C>0$. Since $f(x,y)= y$ for every $(x,y) \in \mathbb{R}^2 \setminus [0,1/2]^2$, for every $t> \sup T$ it holds
\begin{equation*}
X(t,x,y)\cdot e_1= x + 1 + t -T(y) \qquad \forall (x,y) \in (\sup T - t, 0) \times \mathbb{R}.
\end{equation*}
Since $T$ has not bounded variation, then $X(t) \notin \BV((-\varepsilon,0)\times (0,1/2))$ for every $\varepsilon>0$ and every $t> \sup T$.
If $R=[a,b]\times [c,d]$ denotes a connected component of $C_n$, the same argument as above shows that $X(t)\notin \BV((a-\varepsilon,a)\times (c,d))$ for every $\varepsilon>0$ and every $t> t_n$ for some $t_n\to 0$ as $n\to \infty$.
In particular $X(t)\notin \BV_\text{\rm loc}(\mathbb{R}^2;\mathbb{R}^2)$ for every $t>0$.
\subsection{More regular vector field}
The example constructed above does not prove Proposition \ref{P_example} since the vector field $b=-\nabla^\perp f$ has no Sobolev regularity.
In order to make the vector field more regular, we consider
\begin{equation*}
\tilde f = f_0 + \sum_{l=1}^\infty h_l \ast \rho_l,
\end{equation*}
where
\begin{equation*}
\rho_l(z)=r_l^{-2} \rho(z/r_l)
\end{equation*}
and $\rho:\mathbb{R}^2 \to \mathbb{R}$ is a positive smooth function such that
\begin{enumerate}
\item $\supp \rho \subset B_{1/2}(0)$;
\item $\int_{\mathbb{R}^2}\rho = 1$ and $\int_{\mathbb{R}^2}z\rho(z)dz=0$.
\end{enumerate}
Let us first check that $\tilde f\in W^{1,p}_\text{\rm loc}(\mathbb{R}^2,\mathbb{R}^2)$ for every $p \in [1,\infty)$: indeed
\begin{equation*}
\begin{split}
\|\nabla^2 (h_l\ast \rho_l)\|_{L^p}\le & ~ \|\nabla h_l \ast \nabla \rho_l\|_{L^p} \\
\le &~ \|\nabla h_l\|_{L^p}\|\nabla \rho_l\|_{L^1} \\
\le &~ \|\nabla h_l\|_{L\infty} \left(\L^2(\supp (\nabla h_l))\right)^{1/p}\|\nabla \rho_l\|_{L^1} \\
\le &~ O(l^4 2^{-l}) \left(\L^2(C_{l-1})\right)^{1/p} O(r_l^{-1}) \\
= &~ O(l^4 2^{-l}) O(2^{-l/p}l^{-4/p}) O(2^ll^4) \\
= &~ O(2^{-l/p}l^{8-\frac{4}{p}}).
\end{split}
\end{equation*}
Being $\|\nabla^2 (h_l\ast \rho_l)\|_{L^p}$ summable, the sequence
\begin{equation*}
\tilde f_n:= f_0 + \sum_{l=1}^n h_l \ast \rho_l
\end{equation*}
converges to $\tilde f$ in $W^{2,p}_\text{\rm loc}(\mathbb{R}^2)$ for every $p\in [1,+\infty)$.
We now prove that the same argument of Sections \ref{Ss_crossing} and \ref{Ss_regularity} for $f$ can be applied to $\tilde f$.
Being $f_n$ affine on each connected component of $D_n$, it follows from the properties of the convolution kernel that
$h_n\ast \rho_n (z)= h_n(z)$ for every $z \in D_n$ such that $\dist (z,\partial D_n)> r_n$.
We denote by
\begin{equation*}
\begin{split}
\tilde D_n&:= \{x \in D_n: \dist (x,\partial D_n)>r_n\}, \\
\tilde E_n&:= \{x \in E_n: \dist (x,\partial E_n)>r_n\}, \\
\tilde F_n&:= \{x \in D_n: \dist (x,\partial F_n)>r_n\}.
\end{split}
\end{equation*}
Observe that all the sets above are non-empty by the choice of the parameters \eqref{E_parameters}.
Since $\tilde D_n=C_{n+1}$, we have in particular that $f_n= \tilde f_n$ on the set $C_{n+1}$.
Similarly $h_n\ast \rho_n = h_n$ on $\tilde E_n \cup \tilde F_n$.
As in Section \ref{Ss_crossing}, we denote by $\tilde T^1_n$ the total amount of time needed by an integral curve of the vector field
$-\nabla^\perp \tilde f_{n-1}$ to cross a connected component of $C_n$. Since $f_{n-1}=\tilde f_{n-1}$ on $C_n$, then $\tilde T^1_n=T^1_n$.
We moreover denote by $\tilde T^s_n$ the amount of time needed by an integral curve of the vector field $-\nabla^\perp \tilde f_n$ intersecting $\tilde D_n$ to cross a connected component of $C_n$.
Since $\tilde f_n = f_n$ in $\tilde D_n \cup \tilde F_n$ it is straightforward to check that $\tilde T^s_n \sim T^s_n$.
Similarly, we denote $\tilde T^f_n$ the amount of time needed by an integral curve of the vector field $-\nabla^\perp\tilde f_n$ intersecting $\tilde E_n$ to cross a connected component of $C_n$.
Since $\tilde f_n = f_n$ in $\tilde E_n \cup \tilde F_n$ it is straightforward to check that $\tilde T^f_n \sim T^f_n$.
We are now in position to repeat the argument in Sections \ref{Ss_crossing} and \ref{Ss_regularity} and this proves that for every $t>0$ the regular Lagrangian flow $\tilde X$ of the vector field $-\nabla^\perp \tilde f$ satisfies
\begin{equation*}
\tilde X(t) \notin \BV_\text{\rm loc}(\mathbb{R}^2,\mathbb{R}^2).
\end{equation*}
This concludes the proof of Proposition \ref{P_example}.
\bibliographystyle{alpha}
|
1,108,101,564,648 | arxiv |
\subsection{Selection as a Function of Treatment Alone}
Consider the most basic scenario of IV selection bias in Figure \ref{fig:IVDAGBaseline}.
As stated above, $ Z $ in this model is a valid instrumental variable for the causal effect of $ T $ on $ Y $, $ \beta $ , if the analysis does not condition on $ S $.
Conditioning on $ S $, however, invalidates $ Z $ as an instrumental variable, because $ S $ is a descendant of $ T $, and $ T $ is a collider on the path $ Z\rightarrow T\leftarrow U\rightarrow Y $.
Conditioning on $ S $ opens this path, which induces an association between $ Z $ and $ Y $ via $ U $ and hence violates the exclusion condition.
Proposition \ref{prop:IVAdjBias} gives the selection bias in the standard IV estimator when the analysis adjusts for S.
\begin{proposition}\label{prop:IVAdjBias}
In a linear and homogeneous model with normal errors represented by Figure \ref{fig:IVDAGBaseline} and covariate adjustment on $ S $, the standard instrumental variables estimator converges in probability to
\[
\beta_{IV|Adj}=\beta -\delta_1 \delta_2 \frac{\gamma^2}{1-\gamma ^2}.
\]
\end{proposition}
The proof follows from regression algebra and Wright's rule (\citealt{Wright1934}).
The magnitude of selection bias due to covariate adjustment in the IV estimator depends on two components.
First, selection bias increases with the strength of unobserved confounding between $ T $ and $ Y $ via $ U $, $ \delta_1 \delta_2 $ (which corresponds to the path $ Z\rightarrow T\leftarrow U\rightarrow Y $ that is opened by conditioning on $ S $, less the first stage $ Z\rightarrow T $).
Second, selection bias increases with the effect of the treatment $ T $ on the selection variable, $ S $ , $ \gamma $ .
When $ \gamma =0 $, $ S $ contains no information about the collider $ T $, conditioning on $ S $ does not open the path $ Z\rightarrow T\leftarrow U\rightarrow Y $, and selection bias is zero.
By contrast, as $ |\gamma |\rightarrow 1 $, the magnitude of the bias increases without bound because adjusting for $ S $ increasingly amounts to adjusting for the collider $ T $ itself, while at the same time reducing the first stage.
(If the analysis directly adjusted for $ T $, then the first stage would go to zero and the IV estimator would not be defined.)
Proposition \ref{prop:IVTruncBias} derives the IV selection bias due to interval truncation on S.
\begin{proposition}\label{prop:IVTruncBias}
In a linear and homogeneous model with normal errors represented by Figure \ref{fig:IVDAGBaseline} and truncation on $ S $, $ R=\mathbf{1}(S\geq s_0) $, the standard instrumental variables estimator converges in probability to
\[
\beta_{IV|Tr}=\beta -\delta_1 \delta_2 \frac{\psi \gamma ^2}{1-\psi \gamma^2}, \quad \text{where } \psi =\frac{\phi (s_0)}{1-\Phi(s_0)} \left( \frac{\phi (s_0 )}{1-\Phi(s_0 )} - s_0 \right),
\]
and $ \phi (\cdot) $ and $ \Phi(\cdot ) $ are the standard normal pdf and cdf, respectively.
\end{proposition}
Proposition \ref{prop:IVTruncBias} (proved in Appendix \ref{sec:TruncBiasProof}) illustrates that IV selection bias due to truncation (Proposition \ref{prop:IVTruncBias}) differs from IV selection bias due to adjustment (Proposition \ref{prop:IVAdjBias}) only in that truncation deflates the contribution of the effect of $ T $ on $ S $, $ \gamma $, by the factor $ \psi \in (0,1) $.
Since $ \psi\ $ is the derivative of the standard normal hazard function, it monotonically increases with the \emph{severity of truncation}, $Pr(R=0)=\Phi(s_0)$, as shown in Figure \ref{fig:PsiVsSelection}. Hence, interval truncation leads to less IV selection bias than covariate adjustment in Figure \ref{fig:IVDAGBaseline},
\begin{figure}[t!]
\centering
\begin{subfigure}{0.48\linewidth}
\centering
\textbf{\qquad Truncation Severity versus $ \psi $}
\includegraphics[width=\linewidth]{figures/PsiVsProbRSQ.png}
\caption{}
\label{fig:PsiVsSelection}
\end{subfigure}
\begin{subfigure}{0.50\linewidth}
\centering
\textbf{\qquad Least Biased Estimator}\\
\includegraphics[width=\linewidth]{figures/IVvOLSBiasPreferenceProbRSQ.png}
\caption{}
\label{fig:LeastBiasEstPreference}
\end{subfigure}
\caption{(\subref{fig:PsiVsSelection}) $\psi$ monotonically increases with truncation severity. (\subref{fig:LeastBiasEstPreference}) Whether OLS or IV is less biased under selection depends on truncation severity and the effect of $ T $ on $ S $, $ |\gamma| $.}
\label{fig:PsiPlots}
\end{figure}
\begin{corollary}\label{cor:IVTruncIVAdj}
In a linear and homogeneous model with normal errors represented by Figure \ref{fig:IVDAGBaseline}, the magnitude of IV-adjustment bias is weakly larger than that of IV-truncation bias:
$
\left| \beta_{IV|Adj} - \beta \right| \geq \left| \beta_{IV|Tr} - \beta \right|.
$
\end{corollary}
Corollary \ref{cor:IVTruncIVAdj} makes intuitive sense.
Adjustment involves first exactly stratifying and then averaging across strata defined by $ S=s $.
Exact stratification on $ S $ uses all information about $ T $ that is contained in $ S $, hence opening the biasing path as much as conditioning on $ S $ possibly can.
By contrast, interval truncation amounts to imprecise stratification on $ S $ (retaining observations across a range of values on $ S $, but not exactly stratifying on any particular value), hence ``less opening" the biasing path.
Of some methodological interest, we further note, in Figure \ref{fig:IVDAGBaseline}, that IV selection bias by truncation converges on IV selection bias by covariate adjustment as the severity of truncation increases to shrink the remaining sample to a single point.
Proposition \ref{prop:AdjIsPtTrunc} states that this observation is true for all models, not only for Figure \ref{fig:IVDAGBaseline}.
\begin{proposition}\label{prop:AdjIsPtTrunc}
In a linear and homogeneous model with normal errors, selection bias in the standard instrumental variables estimator due to covariate adjustment is the limiting case of selection bias due to point truncation,
\[
\lim_{s_0\rightarrow \infty}\beta_{IV|Tr} = \beta_{IV|Adj}.
\]
\end{proposition}
This proposition makes intuitive sense.
Covariate adjustment involves exact stratification on $ S=s $, which defines point truncation.
Since the probability limits of all $s$-stratum specific estimators are identical in linear Gaussian models, selection bias by adjustment equals selection bias by point truncation.
The proof in Appendix \ref{sec:TruncAdjProof} formalizes this intuition.
Proposition \ref{prop:IVTruncBias} helps inform empirical choices in practice. When selection is unavoidable (e.g. because the data were truncated during data collection), should analysts choose IV or OLS?
Figure \ref{fig:LeastBiasEstPreference} shows that the IV estimator is preferred to OLS, with respect to bias, for most combinations of $ \gamma $ and truncation severity.
Since OLS bias (with or without truncation) only depends on unobserved confounding, i.e. $ \beta _{OLS|Tr} - \beta=\delta_1 \delta_2 $, the difference in magnitude between the OLS and IV biases with truncation is given by
\[
\left|\beta_{OLS|Tr}-\beta\right| - \left|\beta_{IV|Tr} - \beta \right|=\left|\delta_1 \delta_2 \right| \frac{1-2\psi \gamma^2}{1-\psi \gamma^2}.
\]
Hence, the IV estimator is preferred when $ \psi \gamma^2 \leq \frac{1}{2} $.
Specifically, when fewer than 29.1\% of observations are truncated (corresponding to $ \psi \leq 0.5 $), IV is preferred regardless of the effect of $ T $ on $ S $, $ \gamma $.
Conversely, when $ |\gamma |<\sqrt{0.5}\approx 0.707 $, no amount of truncation makes OLS preferable over IV.
Recalling that $ \gamma $ cannot exceed 1 in magnitude, the selection variable $ S $ would have to be an extraordinarily strong proxy for $ T $ to make IV more biased than OLS at any level of truncation.
Perhaps most useful for practice, we note that selection bias (by truncation or adjustment) in Figure \ref{fig:IVDAGBaseline} is proportional to the negative of OLS confounding bias.
Therefore, the OLS and IV estimators under selection bound the true causal effect.
\begin{corollary}
In a linear and homogeneous model with normal errors represented by Figure \ref{fig:IVDAGBaseline}, the OLS estimator and the instrumental variables estimator with selection bound the causal effect of $ T $ on $ Y $, $ \beta $ ,
\begin{align*}
\beta_{IV|Tr}\leq & \beta \leq \beta_{OLS}, \quad \text{when } \quad \delta_1 \delta_2>0, \\
\beta_{IV|Tr}\geq & \beta \geq \beta_{OLS}, \quad \text{when } \quad \delta_1 \delta_2<0.
\end{align*}
\end{corollary}
The fact that the IV selection bias has the opposite sign of the OLS selection bias in Figure \ref{fig:IVDAGBaseline} is owed to linearity and homogeneity: in linear and homogeneous models, conditioning on a collider or its descendant reverses the sign of the product of the path parameters for the associated path.
For example if all path parameters along the biasing path $ Z\rightarrow T\leftarrow U\rightarrow Y $ are positive, then conditioning on $ S\in desc(T) $ will induce a negative association along this path.
Since the IV bias hinges on conditioning on $ S $, the selection bias would be negative.
By contrast, OLS bias in Figure \ref{fig:IVDAGBaseline} does not hinge on conditioning on $ S $ and instead results from confounding along $ T\leftarrow U\rightarrow Y $.
Therefore, OLS bias would be positive.
\subsection{Selection as a Function of a Mediator}\label{sec:medselection}
Next, consider models in which the selection variable, $ S $, is a mediator of the effect of treatment on the outcome, as in the causal graphs in Figures \ref{fig:DAGTandY} and \ref{fig:DAGTandYandV}.
These situations are worth investigating for two reasons: first, often empiricists are interested in the direct causal effect of $ T $ on $ Y $, which necessitates conditioning on $ S $;
second, they result in qualitatively different bias representations.
\begin{figure}[b!]
\centering
\begin{subfigure}{0.48\linewidth}
\input{figures/IVDAG_T_and_Y.tex}
\caption{}
\label{fig:DAGTandY}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\input{figures/IVDAG_T_and_Y_and_V.tex}
\caption{}
\label{fig:DAGTandYandV}
\end{subfigure}
\caption{IV scenarios where the selection variable is both a descendant of treatment and a mediator.}
\label{fig:IVDAGMediated}
\end{figure}
Suppose that the analyst is interested in the direct causal effect of $ T $ on $ Y $, $ \beta $, in the model of Figure \ref{fig:DAGTandY}.
The bias in the IV and OLS estimators under interval truncation and adjustment for $ S $ is given in Proposition \ref{prop:IVMedBias}.
\begin{proposition}\label{prop:IVMedBias}
In a linear and homogeneous model with normal errors represented by Figure \ref{fig:DAGTandY}. The standard instrumental variables estimator with selection on $ S $, converges in probability to
\[
\beta_{IV|S}= \beta -\delta_1 \delta_2 \frac{\psi \gamma ^2}{1-\psi \gamma ^2} + \gamma\tau \frac{1-\psi}{1-\psi \gamma ^2},
\]
and the OLS estimator with selection on $ S $ converges in probability to
\[
\beta_{OLS|S}= \beta +\delta_1 \delta_2+\gamma \tau \frac{1-\psi}{1-\psi \gamma ^2},
\]
where
\[
\psi =\begin{cases}
\frac{\phi (s_0)}{1-\Phi(s_0)} \left( \frac{\phi (s_0 )}{1-\Phi(s_0 )} - s_0 \right) & \text{with truncation on }S, R=\mathbf{1}(S\geq s_0) \\
1 & \text{with adjustment on }S
\end{cases}.
\]
\end{proposition}
All bias expressions in Proposition \ref{prop:IVMedBias} have a straightforward graphical interpretation.
With \emph{adjustment} on $ S $, the indirect causal path $ T\rightarrow S\rightarrow Y $ is completely blocked, because $ S $ is a non-collider on this path.
Hence, the bias in the IV and OLS estimators with adjustment on $ S $ equals the IV and OLS adjustment biases in Figure \ref{fig:IVDAGBaseline}, where $ S $ was not a mediator.
With adjustment on $ S $, IV is biased by selection, whereas OLS is biased by confounding;
IV selection bias will generally be smaller in magnitude than OLS confounding bias (unless the effect of $ T $ on $ S $ is very large); and IV and OLS with adjustment bound the true direct causal effect.
With \emph{truncation} on $ S $, however, the indirect path $ T\rightarrow S\rightarrow Y $ is not completely blocked and hence contributes a new term to both IV and OLS bias. For both IV and OLS, this term equals the strength of the partially blocked indirect path, $ \gamma \tau $ , deflated by the multiplier $ 0\leq(1-\psi)/(1-\psi \gamma^2 )\leq 1 $.
The size of the multiplier depends both on the truncation severity, $ \psi $ , and on the effect of $ T $ on $ S $, $ \gamma $ , but in opposite directions.
As $ \gamma $ is fixed and truncation increases, $ \psi \rightarrow 1 $, the analysis conditions ever more precisely on an ever smaller range of values of $ S $; hence the indirect path is increasingly blocked, and both the multiplier and the bias term tend to 0.
By contrast, when $ \psi $ is fixed and the effect of $ T $ on $ S $ increases, $ |\gamma| \rightarrow 1 $, the information about $ T $ contained in $ S $ increases, the multiplier tends to 1, and the path is increasingly opened.
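The opposing roles of $ \psi $ and $ \gamma $ in the deflation multiplier are easy to check numerically. The following sketch (ours, not part of the original derivation) evaluates $ (1-\psi)/(1-\psi\gamma^2) $ at a few illustrative values:

```python
def multiplier(psi, gamma):
    # Deflation factor on the indirect-path bias term, gamma*tau,
    # under truncation on the mediator S.
    return (1 - psi) / (1 - psi * gamma**2)

# Severe truncation (psi -> 1) closes the indirect path:
assert multiplier(0.99, 0.5) < multiplier(0.50, 0.5) < multiplier(0.10, 0.5)

# A stronger T -> S effect (|gamma| -> 1) re-opens it:
assert multiplier(0.5, 0.9) > multiplier(0.5, 0.5) > multiplier(0.5, 0.1)
```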
By Proposition \ref{prop:AdjIsPtTrunc}, it remains true in Figure \ref{fig:DAGTandY} that IV selection bias due to adjustment is the limiting case of IV selection bias due to point truncation.
However, it is no longer necessarily true that IV with adjustment is more biased than IV with truncation.
The bias ordering now depends on the signs and relative sizes of the two additive bias terms (representing the biasing paths $ T\leftarrow U\rightarrow Y $ and $ T\rightarrow S\rightarrow Y $), and on how well the indirect path $ T\rightarrow S\rightarrow Y $ is closed by truncation.
Hence, when selection is made on a mediator of the treatment effect, selection bias by adjustment could be larger or smaller in magnitude than selection bias by truncation.
Bounding the true causal effect also becomes more difficult.
With truncation on $ S $, IV and OLS with selection do not necessarily bound the true direct causal effect.
The analysis is further complicated when the effect of $ S $ on $ Y $ is confounded by some unobserved variable, $ W $, as in Figure \ref{fig:DAGTandYandV}.
This situation is arguably more realistic than the model in Figure \ref{fig:DAGTandY}, because mediators in observational studies are expected to be confounded.
Here, conditioning on $ S $ (by adjustment or truncation) in IV analysis opens a new path, $ Z\rightarrow T\rightarrow S\leftarrow W\rightarrow Y $, which violates the exclusion assumption; and in OLS it opens $ T\rightarrow S\leftarrow W\rightarrow Y $, which biases OLS regression.
The resulting bias expressions are the same as those in Proposition \ref{prop:IVMedBias} with an additional bias term, $ -\gamma \delta_3 \delta_4 \frac{\psi}{1-\psi \gamma^2} $.
Once more, IV selection bias due to adjustment is the limiting case of IV selection bias due to point truncation.
However, no pair of estimators (among $ \beta_{IV|Tr}, \beta_{IV|Adj},\beta_{OLS|Tr},\beta_{OLS|Adj} $) can be relied on to bound the true direct causal effect in the model of Figure \ref{fig:DAGTandYandV}.
\subsection{Selection on Treatment and the Unobserved Confounder}\label{sec:medandconfselection}
Finally, we consider situations where the selection variable, $ S $, is also a descendant of the unobserved $ U $ that confounds the effect of treatment on the outcome (Figure \ref{fig:DAGTandU}).
\begin{figure}[t!]
\centering
\input{figures/IVDAG_T_and_U.tex}
\caption{IV scenario where the selection variable is both a descendant of the treatment and the unobserved confounder.}
\label{fig:DAGTandU}
\end{figure}
\begin{proposition}\label{prop:IVUnobsBias}
In a linear and homogeneous model with normal errors represented by Figure \ref{fig:DAGTandU}, the standard instrumental variables estimator with selection on $ S $ converges in probability to
\[
\beta_{IV|S} = \beta -\delta_1 \delta_2 \frac{\psi \gamma ^2}{1-\psi \gamma (\gamma +\delta_1 \delta_3 )} - \gamma\delta_3\delta_2\frac{\psi}{1-\psi \gamma (\gamma +\delta_1 \delta_3 )},
\]
and the OLS estimator with selection on $ S $ converges in probability to
\[
\beta_{OLS|S} = \beta +\delta_1\delta_2 \frac{1-\psi (\gamma^2+\gamma \delta_1 \delta_3+\delta_3^2)}{1-\psi (\gamma+\delta_1 \delta_3 )^2} - \gamma \delta_3 \delta_2\frac{\psi}{1-\psi (\gamma +\delta_1 \delta_3 )^2},
\]
where
\[
\psi =\begin{cases}
\frac{\phi (s_0)}{1-\Phi(s_0)} \left( \frac{\phi (s_0 )}{1-\Phi(s_0 )} - s_0 \right) & \text{with truncation on }S,R=\mathbf{1}(S\geq s_0) \\
1 & \text{with adjustment on }S
\end{cases}.
\]
\end{proposition}
Three points stand out about selection bias in Figure \ref{fig:DAGTandU}.
First, when $ S $ is a descendant of both $ T $ and $ U $, conditioning on $ S $ opens a new path, $ T\rightarrow S\leftarrow U\rightarrow Y $, which biases IV and OLS with adjustment or truncation on $ S $.
Second, in contrast to models considered previously, the bias term associated with each biasing path ($ T\leftarrow U\rightarrow Y $ and $ T\rightarrow S\leftarrow U\rightarrow Y $) is now a function of the path parameters of both paths. In other words, the path-specific biases interact. Pearl's graphical causal models provide intuition for this interaction.
Consider, for example, the second bias term. First, conditioning on $ S $ opens the path $ T\rightarrow S\leftarrow U\rightarrow Y $. Hence, the bias term depends on $ \gamma \delta_3 \delta_2 $. Second, conditioning on $ S $ also absorbs variance from $ U $ (a non-collider on $ T\rightarrow S\leftarrow U\rightarrow Y $), because $S$ is a descendant of $U$ along the path $U\rightarrow T\rightarrow S$. Hence, the bias term also depends on $ \delta_1 $.
Third, the direction of the interaction, and hence the overall bias, depends on the specific parameter values. This makes the bias order of these estimators fairly unpredictable and prevents generic recommendations for or against any one estimator. This ambiguity provides additional motivation for using exact bias formulas for sensitivity analysis.
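To see such a sign interaction concretely, the following sketch evaluates the two additive IV bias terms of Proposition \ref{prop:IVUnobsBias} at illustrative (hypothetical) parameter values of our choosing:

```python
def iv_selection_bias(psi, gamma, d1, d2, d3):
    # The two additive bias terms of the IV estimator with selection on S,
    # when S descends from both T and U; parameter values are illustrative.
    denom = 1 - psi * gamma * (gamma + d1 * d3)
    term_confound = -d1 * d2 * psi * gamma**2 / denom   # path T <- U -> Y
    term_collider = -gamma * d3 * d2 * psi / denom      # path T -> S <- U -> Y
    return term_confound + term_collider

# With these (hypothetical) values the two terms reinforce each other...
b1 = iv_selection_bias(psi=0.8, gamma=0.4, d1=0.3, d2=0.5, d3=0.2)
# ...while flipping the sign of delta_3 makes them partially cancel:
b2 = iv_selection_bias(psi=0.8, gamma=0.4, d1=0.3, d2=0.5, d3=-0.2)
assert abs(b2) < abs(b1)
```

Evaluating the exact formulas in this way is precisely the kind of sensitivity analysis suggested above.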
\subsection{Proof of Truncation Bias Expressions}\label{sec:TruncBiasProof}
We derive the bias under truncation by leveraging a result from \citet{Tallis1965}.
\begin{lemma} \label{lem:Tallis}
Let $V\in \mathbb{R}^k$ follow a multivariate normal distribution, $ V \sim N \left( 0, \Sigma \right) $, and define the truncated random vector $ \widetilde{V} = V \mid c'V \geq p $ with $ p\in \mathbb{R} $, $ c\in\mathbb{R}^k $, and $ |c|=1 $.
Then the expectation and variance of the truncated random vector are given by
\begin{align*}
E\left[ \widetilde{V} \right] & = \Sigma c \kappa^{-1} \lambda \left( \frac{p}{\kappa}\right)\\
Var\left( \widetilde{V} \right) & = \Sigma - \Sigma cc' \Sigma \kappa^{-2}\psi
\end{align*}
where $ \kappa = \left(c'\Sigma c \right)^{-1/2} $, $ \lambda(x) = \frac{\phi(x)}{1-\Phi(x)}$ is the hazard function of the standard normal distribution, and
\[
\psi = \lambda \left( \frac{p}{\kappa}\right)\left( \lambda \left( \frac{p}{\kappa}\right) - \frac{p}{\kappa} \right).
\]
\end{lemma}
Using properties of the standard normal hazard function, it can be shown that $\psi$ is in fact the derivative of the hazard function evaluated at $p/\kappa$.
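This identity, $ \lambda'(x) = \lambda(x)\left(\lambda(x)-x\right) $, can be checked by finite differences using only standard-library functions:

```python
import math

def Phi(x):  # standard normal CDF, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):  # standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def hazard(x):  # lambda(x) = phi(x) / (1 - Phi(x))
    return phi(x) / (1.0 - Phi(x))

def psi(x):  # claimed closed form of the hazard's derivative
    return hazard(x) * (hazard(x) - x)

for x in (-1.0, 0.0, 0.5, 2.0):
    h = 1e-6
    numeric = (hazard(x + h) - hazard(x - h)) / (2.0 * h)
    assert abs(psi(x) - numeric) < 1e-5
```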
\begin{proof}[Proof of Proposition \ref{prop:IVTruncBias}]
Consider the model described by Figure \ref{fig:IVDAGBaseline}.
Since the idiosyncratic shocks are all normally distributed, all variables in the model are normally distributed.
Specifically for vectors $ V = \begin{bmatrix} Z & U & T & S & Y \end{bmatrix}' $ and $ \varepsilon = \begin{bmatrix} \varepsilon_Z & \varepsilon_U & \varepsilon_T & \varepsilon_S & \varepsilon_Y \end{bmatrix}' $,
the standardized\footnote{Standardization implies non-unit variance for some of the shocks. For example, when $ Var(T)=1 $, the variance of $ \varepsilon_T $ is $ Var(\varepsilon_T) = 1-\pi^2-\delta_1^2 $.} model has the reduced form $ V = \Gamma \varepsilon $, where $ \varepsilon \sim N(0,\Sigma_{\varepsilon}) $ and
\[
\Gamma = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
\pi & \delta_{1} & 1 & 0 & 0 \\
\gamma \pi & \gamma \delta_{1} & \gamma & 1 & 0 \\
\beta \pi & \beta \delta_{1} + \delta_{2}& \beta & 0 & 1 \\
\end{bmatrix} \qquad
\Sigma_{\varepsilon}= \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1-\pi^2-\delta_1^2 & 0 & 0 \\
0 & 0 & 0 & 1-\gamma^2 & 0 \\
0 & 0 & 0 & 0 & 1-\beta^2-\delta_2^2-2\beta\delta_1\delta_2 \\
\end{bmatrix}.
\]
Since this implies that $ V \sim N\left(0, \Gamma \Sigma_{\varepsilon}\Gamma' \right) $, our truncation scenario, $ R = \mathbf{1}(S\geq s_0) $, allows for direct application of Lemma \ref{lem:Tallis} to derive the covariance matrix of the truncated distribution, $ \widetilde{V} = V|S\geq s_0 $.
For Lemma \ref{lem:Tallis}, $ c= \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \end{bmatrix}' $, $ p=s_0 $, and $ \Sigma = \Gamma \Sigma_{\varepsilon}\Gamma' $.
This implies $ \kappa = 1 $ and thus
\[
Var\left( \widetilde{V} \right) = \Gamma \Sigma_{\varepsilon}\Gamma' - \Gamma \Sigma_{\varepsilon}\Gamma' cc' \Gamma \Sigma_{\varepsilon}\Gamma'\psi
\quad \text{where} \quad \psi = \lambda( s_0 )\left( \lambda( s_0 ) - s_0 \right).
\]
Finally, the IV estimand under truncation is given by the ratio of the truncated covariance between instrument and outcome to the truncated covariance between instrument and treatment. After some enjoyable algebra, we evaluate $ Var(\widetilde{V}) $, extract the relevant covariances, and obtain
\begin{align*}
\beta_{IV|Tr} & = \frac{Cov(Z,Y|S\geq s_0)}{Cov(Z,T|S\geq s_0)} = \frac{\beta\pi - \psi \gamma\pi\left( \beta\gamma + \gamma\delta_1\delta_2 \right)}{\pi - \psi\gamma^2\pi} = \beta - \delta_1\delta_2\frac{\psi\gamma^2}{1-\psi\gamma^2}.
\end{align*}
\end{proof}
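As a numerical sanity check (with parameter values of our own choosing, not from the text), the computation in the proof can be reproduced directly: build $ \Sigma = \Gamma \Sigma_{\varepsilon}\Gamma' $, apply the Tallis correction, and compare the covariance ratio with the closed form:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# Illustrative (hypothetical) standardized path coefficients.
beta, pi, gamma, d1, d2 = 0.3, 0.5, 0.6, 0.4, 0.5
s0 = 0.0  # truncation threshold, R = 1(S >= s0)

lam = phi(s0) / (1.0 - Phi(s0))
psi = lam * (lam - s0)

# Reduced form V = Gamma * eps for V = (Z, U, T, S, Y)'.
G = [[1, 0, 0, 0, 0],
     [0, 1, 0, 0, 0],
     [pi, d1, 1, 0, 0],
     [gamma * pi, gamma * d1, gamma, 1, 0],
     [beta * pi, beta * d1 + d2, beta, 0, 1]]
Se = [1, 1, 1 - pi**2 - d1**2, 1 - gamma**2,
      1 - beta**2 - d2**2 - 2 * beta * d1 * d2]

# Sigma = Gamma * diag(Se) * Gamma'
Sigma = [[sum(G[i][k] * Se[k] * G[j][k] for k in range(5))
          for j in range(5)] for i in range(5)]

# Tallis correction Var(V | S >= s0) = Sigma - psi * (Sigma c)(Sigma c)',
# with c selecting S (index 3) and kappa = Var(S)^(-1/2) = 1.
Z, T, S, Y = 0, 2, 3, 4
cov_ZY = Sigma[Z][Y] - psi * Sigma[Z][S] * Sigma[S][Y]
cov_ZT = Sigma[Z][T] - psi * Sigma[Z][S] * Sigma[S][T]

closed_form = beta - d1 * d2 * psi * gamma**2 / (1 - psi * gamma**2)
assert abs(cov_ZY / cov_ZT - closed_form) < 1e-12
```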
The proofs of Propositions \ref{prop:IVMedBias} and \ref{prop:IVUnobsBias} proceed analogously, using the appropriate reduced form matrix, $ \Gamma $, for each scenario.
\subsection{Proof of Adjustment as Point Truncation (Proposition \ref{prop:AdjIsPtTrunc})}\label{sec:TruncAdjProof}
\begin{proof}
Define the stratum-specific IV estimator when $S=s$ as
\[
\beta_{IV|S}\left(s\right)=\frac{Cov\left(Z,Y|S=s\right)}{Cov\left(Z,T|S=s\right)}.
\]
Notice that $\beta_{IV|S}\left(s\right)$ is the IV estimator under point truncation (i.e., the limit of the interval-truncated estimator as the interval collapses to a point).
In a homogeneous linear model with normal errors, $ V = \begin{bmatrix} Z & U & T & S & Y \end{bmatrix}' $ will follow a multivariate normal distribution.
Multivariate normal distributions have the useful property that their conditional distributions have constant covariances across the conditioning level.
Hence, for all $ V_1, V_2, V_3 \in \{Z, U, T, S, Y\} $ and $ v_0, v_1 \in \mathbb{R} $, we have that
\[
Cov(V_1, V_2| V_3 = v_0) = Cov(V_1, V_2| V_3 = v_1).
\]
It follows that $\beta_{IV|S}\left(s_{0}\right)=\beta_{IV|S}\left(s_{1}\right)$
for any $s_{0},s_{1}\in\mathbb{R}$.
Since the stratum-specific IV estimator is constant across strata of $ S $, the IV estimator under adjustment on $ S $ equals any stratum-specific IV estimator.
\end{proof}
\chapter*{Instrumental Variables with Treatment-Induced Selection: Exact Bias Results}
\begin{LARGE}Felix Elwert\end{LARGE}\\
\begin{large}University of Wisconsin--Madison\end{large}\vspace{0.3cm}\\
\begin{LARGE}Elan Segarra\end{LARGE}\\
\begin{large}University of Wisconsin--Madison\end{large}\\
\begin{quote} \begin{small}
Instrumental variables (IV) estimation suffers selection bias when the analysis conditions on the treatment. Judea Pearl's [2000:248] early graphical definition of instrumental variables explicitly prohibited conditioning on the treatment. Nonetheless, the practice remains common.
In this paper, we derive exact analytic expressions for IV selection bias across a range of data-generating models, and for various selection-inducing procedures.
We present four sets of results for linear models.
First, IV selection bias depends on the conditioning procedure (covariate adjustment vs. sample truncation). Second, IV selection bias due to covariate adjustment is the limiting case of IV selection bias due to sample truncation.
Third, in certain models, the IV and OLS estimators under selection bound the true causal effect in large samples.
Fourth, we characterize situations where IV remains preferred to OLS despite selection on the treatment.
These results broaden the notion of IV selection bias beyond sample truncation, replace prior simulation findings with exact analytic formulas, and enable formal sensitivity analyses.
\end{small} \end{quote}
\section{Introduction}
\input{01_intro.tex}
\section{Causal Graphs}\label{sec:causal graphs}
\input{02_causal_graphs.tex}
\section{Instrumental Variables}\label{sec:IV}
\input{03_IV.tex}
\section{Selection Bias in IV: Qualitative Analysis}\label{sec:qualitative}
\input{04_qualitative.tex}
\section{Selection Bias in IV: Quantitative Analysis}\label{sec:quantitative}
\input{05_quantitative.tex}
\section{Conclusion}\label{sec:conclusion}
\input{06_conclusion.tex}
\bibliographystyle{mcp-acm}
\section{Hardness of Euclidean All-Nearest Neighbors}
Here we prove hardness for All-Nearest Neighbors in $\omega(\log n)$ dimensions:
\begin{reminder}{Theorem~\ref{all-nn}} Under OVC, the All-Nearest Neighbors problem in $\ell^{\omega(\log n)}_2$ requires $n^{2-o(1)}$ time, even restricted to vectors with entries from $\{-1,0,1\}$.
\end{reminder}
It seems plausible that there is a sub-quadratic-time reduction from All-Nearest Neighbors to $\ell_2$-Closest Pair (even in high dimensions), so we think of Theorem~\ref{all-nn} as good evidence that $\ell_2$-Closest Pair is also hard for $\omega(\log n)$ dimensions.
\begin{proof} Let $d = O(\log n)$. We begin with the Subset Containment problem: \emph{Given $n$ red subsets of $[d]$ and $n$ blue subsets of $[d]$, is there some red subset that is contained in some blue subset?} It is well-known that this problem is equivalent to \text{\bf OV}{} on $n$ vectors in $d$ dimensions~\cite{Williams05} (imagine you have a red/blue version of \text{\bf OV}{} with vectors in $\{0,1\}^d$, and flip all the bits of the blue vectors; this converts the \text{\bf OV}{} instance to a Subset Containment instance).
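The \text{\bf OV}{}-to-Subset-Containment conversion is literally bit-flipping; a minimal sketch (with toy vectors of our choosing):

```python
# Flip the blue vectors' bits: <r, b> = 0 over {0,1}^d iff the support of r
# is disjoint from that of b, iff support(r) is contained in the support of
# the complemented blue vector.
def ov_to_containment(reds, blues):
    return reds, [[1 - bit for bit in b] for b in blues]

reds  = [[1, 0, 1, 0], [0, 1, 1, 0]]
blues = [[0, 1, 0, 1]]                 # complement has support {0, 2}
_, flipped = ov_to_containment(reds, blues)

# reds[0] is orthogonal to blues[0], so it is contained in flipped[0]:
assert sum(r * b for r, b in zip(reds[0], blues[0])) == 0
assert all(f >= r for r, f in zip(reds[0], flipped[0]))
# reds[1] is not orthogonal, so it is not contained:
assert not all(f >= r for r, f in zip(reds[1], flipped[0]))
```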
The main idea of the proof is to use error correcting codes over $\{-1,1\}$ to keep the red points ``far apart'' from each other, so that the nearest neighbor of each red point $x$ is a blue point $y$ which is as close to being a superset of $x$ as possible.
Let $R$ be the collection of red sets and let $B$ be the blue sets. We will think of them as vectors in $\{0,1\}^d$ in the natural way. First, we do a trick which will help control the vector norms. Try all pairs of integers $d_1, d_2 \in [d]$ with $d_1 < d_2$. Take the subset $R_{d_1}$ of $R$ which only contains vectors having exactly $d_1$ ones, and take the subset $B_{d_2}$ of $B$ which only contains vectors having exactly $d_2$ ones. We will work with the collections $R' := R_{d_1}$ and $B' := B_{d_2}$ in the following. (The benefit is that we now may assume that all red vectors have the same norm value $v_A$, and all blue vectors have the same norm value $v_B$, and it only costs $O(d^2)$ extra calls.)
Let $\varepsilon \in (0,1/2)$. We say that a \emph{code with distance at least $(1/2-\varepsilon)$} is a collection of vectors $S \subseteq \{-1,1\}^k$ such that for all $u, v \in S$ with $u \neq v$, $\ip{u}{v} \leq 2\varepsilon k$. Note this condition is equivalent to saying that the Hamming distance between each pair of $k$-dimensional vectors is at least $(1/2-\varepsilon)k$. Such codes are known to have polynomial-time constructions. In particular, it was recently shown how to efficiently construct a set $S$ of at least $n$ such vectors, with dimension only $k = O((\log n)/\varepsilon^{2+o(1)})$~\cite{TaShma17}. In the following, let $S$ be such a code with $\varepsilon = 1/8$ and the dimension $k$ as a parameter to be set later.
We will add $k$ dimensions to all vectors in $R'$ and $B'$. For each vector $v_i \in R'$, for $i=1,\ldots,n$, we concatenate the $i$th codeword from $S$ to the end of it, obtaining a $(d+k)$-dimensional vector $v'_i$. For each vector $w_i \in B'$, we concatenate $k$ zeroes to the end, obtaining a $(d+k)$-dimensional $w'_i$.
Observe that for all vectors $v'_i$ from $R'$, their $\ell_2$-norm squared is
\begin{align}\label{red-norm-squared}||v'_i||^2_2 = d_1 + \sum_{j=1}^{k} 1^2 = d_1 + k\end{align} For all vectors $w'_i$ from $B'$, we have $||w'_i||^2_2 = d_2$.
Furthermore, observe that for every two vectors $v'_i,v'_j$ from $R'$, their inner product is at most $(d_1-1)+2\varepsilon k$, because the original vectors $v_i$ and $v_j$ were distinct vectors with exactly $d_1$ ones (so their inner product is at most $d_1-1$), and the inner product of any two distinct codewords is at most $2\varepsilon k$. Therefore we have
\begin{align*}
||v'_i-v'_j||^2_2 &= ||v'_i||^2_2 + ||v'_j||^2_2 - 2\ip{v'_i}{v'_j}\\
& = 2(d_1 + k) - 2\ip{v'_i}{v'_j} \text{~~~~~~(by \eqref{red-norm-squared})}\\
& \geq 2(d_1 + k) - 2(d_1-1 + 2\varepsilon k) = 2k + 2 - 4\varepsilon k = (3/2)k + 2.\end{align*}
On the other hand, for a vector $v'_i$ from $R'$ and a vector $w'_j$ from $B'$,
\[||v'_i-w'_j||^2_2 = ||v'_i||^2_2 + ||w'_j||^2_2 - 2\ip{v'_i}{w'_j} = d_1 + k + d_2 - 2\ip{v'_i}{w'_j}.\]
Note that the inner product $\ip{v'_i}{w'_j}$ is maximized when the original subset $v_i$ (of cardinality $d_1$) is contained in the subset $w_j$ (of cardinality $d_2$), in which case $\ip{v'_i}{w'_j} = d_1$. So the minimum possible distance between $v'_i$ and $w'_j$ is
\[||v'_i-w'_j||^2_2 = d_1 + k + d_2 - 2\ip{v'_i}{w'_j} = (d_2 - d_1) + k.\]
Putting it all together, suppose we set $k$ large enough that \[(3/2)k + 2 > d+k\] (e.g. $k \geq 2d$ will do). From there, if there is some red set (of cardinality $d_1$) in $R$ contained in a blue set (of cardinality $d_2$) in $B$, then the nearest neighbor of the corresponding point in $R'$ will be a point in $B'$ at squared distance precisely $(d_2 - d_1) + k$ from it. Set $k = \Theta(\log n)$ so that it is at least $2d$, and it is large enough to support at least $n$ distinct codewords with $\varepsilon = 1/8$.
We have reduced \text{\bf OV}{} with $n$ vectors in $\{0,1\}^{c \log n}$ to $2n$ points in $\{-1,0,1\}^{\Theta(\log n)}$, such that computing all-nearest neighbors in $\ell_2$ will determine if the original instance had a red set contained in a blue set. In particular, we can check for every point whether its nearest neighbor corresponds to a set containing it in the original instance, or a set it contains. By the above, there is a red set contained in a blue set if and only if for the cardinalities $d_1$ and $d_2$ of these respective sets, the nearest neighbor to some point $v$ in $R_{d_1}$ is a point in $B_{d_2}$ at squared distance only $(d_2 - d_1) + k$ from $v$.
\end{proof}
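A toy instantiation of the construction (sizes and sets of our own choosing), using pairwise-orthogonal Hadamard rows as the $\{-1,1\}$ codewords: with unit-magnitude codeword entries, distinct red points end up at squared distance at least $(3/2)k+2$, while a red set contained in a blue set sits at squared distance exactly $(d_2 - d_1) + k$, making that blue point the red point's nearest neighbor.

```python
# Sylvester construction: rows of H_k are pairwise-orthogonal {-1,1} codewords,
# so distinct codewords have inner product 0 <= 2*eps*k.
def hadamard(k):
    H = [[1]]
    while len(H) < k:
        H = [r + r for r in H] + [r + [-x for x in r] for r in H]
    return H

d, k = 4, 8                                          # toy sizes, with k >= 2d
d1, d2 = 2, 3
reds  = [[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1]]   # d1 ones each
blues = [[1, 1, 1, 0], [0, 1, 1, 1]]                 # d2 ones each

code = hadamard(k)
R = [r + code[i] for i, r in enumerate(reds)]        # append distinct codewords
B = [b + [0] * k for b in blues]                     # pad blues with zeros

def dist2(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

# Distinct red points stay far apart: squared distance at least (3/2)k + 2.
assert all(dist2(R[i], R[j]) >= 3 * k // 2 + 2
           for i in range(len(R)) for j in range(i))

# A red set contained in a blue set sits at exactly (d2 - d1) + k ...
assert dist2(R[0], B[0]) == (d2 - d1) + k            # {1,2} inside {1,2,3}
# ... which therefore identifies R[0]'s nearest neighbor:
assert min(dist2(R[0], q) for q in R[1:] + B) == (d2 - d1) + k
```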
\section{Introduction}
Point proximity and location problems have been core to computer science and computational geometry since Minsky and Papert~\cite{Minsky-Papert69} and Knuth's post office problem~\cite{knuth1973art}. In this paper, we study the problems of finding the closest pair or furthest pair in a point set (i.e., the \emph{diameter}) in moderate dimensions under the most natural norms, and incidence problems such as Hopcroft's problem~\cite{Matousek93,Erickson95,Erickson96}: given $n$ points in ${\mathbb R}^d$ and $n$ hyperplanes through the origin, does any point lie on any hyperplane? (Note this is equivalent to asking whether there are two vectors which are orthogonal, i.e., have inner product $0$.) For closest and furthest pair problems, we also consider their \emph{bichromatic} versions where there are $n$ red points, $n$ blue points, and we wish to find a closest (or furthest) red/blue pair.
\footnote{Note we do not consider $\ell_1$ and $\ell_2$ bichromatic furthest pair explicitly, since it is easy to efficiently reduce between the bichromatic version and the uncolored version. For example, we can reduce from bichromatic to non-bichromatic by adding one extra dimension with large (positive if red, negative if blue) coordinates.}
We consider these problems under the $\ell_p$ metric for $p \in \{1,2\}$, as well as $\ell_{\infty}$. As is standard, we use $\ell_p^d$ to denote the metric space $({\mathbb R}^d,\ell_p)$, with the distance functions $||x-y||_p = (\sum_{i=1}^d |x_i - y_i|^p)^{1/p}$ and $||x-y||_{\infty} = \max_{i} |x_i - y_i|$.
For the case of very large $n$ and modest $d$, some of these problems appear to be far more difficult to solve than others, for reasons which are still not well-understood (beyond the fact that known techniques do not work). As early as 1976, Bentley and Shamos~\cite{bentley1976divide} noticed an apparent difference in the difficulties of solving furthest pair and closest pair in $\ell_2$ in higher dimensions, and raised it as an important issue to study. The following table gives a rough classification of key problems which are known to be ``easy'' and which seem to be ``hard'' for large $n$ and modest $d$.\footnote{In this paper, we assume a machine model that allows basic arithmetic on entries of vectors and comparisons of points in ${\mathbb Z}^d$ in $\text{poly}(d,\log M)$ time, where $M$ is the largest magnitude of an integer in an input. Such a concrete model is necessary for our hardness results, which are concerned with discrete tasks such as SAT-solving in typical Turing models of computation.}
\begin{center}
\begin{tabular}{l||l}
{\bf Nearly-Linear} ($d^{\text{poly}(d)} \cdot n \log^{O(d)} n$ time) & {\bf Barely-Subquadratic} ($f(d) \cdot n^{2-1/\Theta(d)}$ time) \\
\hline
(Bichrom.) $\ell_{\infty}^d$-Furthest Pair~\cite{Yao82,Gabow-Bentley-Tarjan} &
$\ell_2^d$-Furthest Pair~\cite{Yao82,Agarwal1990euclidean} \\
$\ell_2^d$-Closest Pair~\cite{bentley1976divide,khuller1995simple,dietzfelbinger1997reliable}
& Bichrom.~$\ell_2^d$-Closest Pair~\cite{Agarwal1990euclidean}\\
$\ell_1^d$-Furthest Pair~\cite{Yao82,Gabow-Bentley-Tarjan}
& $d$-dim. Hopcroft's Problem~\cite{Chazelle93,Matousek93}\\
(Bichrom.) $\ell_1^d$ and $\ell_{\infty}^d$-Closest Pair \\
~~~~~~\cite{Gabow-Bentley-Tarjan,Preparata1985introduction,dietzfelbinger1997reliable,Chan17}
\end{tabular}
\end{center}
Note that there are many other core geometry problems with one of the two above runtime types; the above are just some of the core bottlenecks. For example, Hopcroft's problem is a special case of problems such as (batch) point location and ray shooting, which also suffer from the same $n^{2-1/\Theta(d)}$ dependency~(see Erickson's work on hardness from Hopcroft's problem~\cite{Erickson95} for more).
Why do some problems fall on the right side of the table, and can they be moved to the left side? Besides the natural question of understanding the difference between furthest and closest pair, here is another motivating example. In 1984, Gabow, Bentley, and Tarjan~\cite{Gabow-Bentley-Tarjan} showed that the $\ell_{\infty}$-furthest pair problem (and its bichromatic version) in ${\mathbb R}^d$ is \emph{very} easy, solvable in $\tilde{O}(d \cdot n)$ time. Using this fast algorithm, along with an isometric embedding of $\ell_1^d$ into $\ell_{\infty}^{2^d}$, they then solve the (bichromatic or not) furthest pair problem for $\ell_1$ in $\tilde{O}(2^d \cdot n)$ time.
So computing the $\ell_{\infty}$-diameter and $\ell_1$-diameter are both ``nearly-linear'' time problems in low dimensions.
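The $\ell_1^d \hookrightarrow \ell_{\infty}^{2^d}$ embedding underlying this reduction is short enough to state in code: map $x$ to the vector of inner products $\ip{s}{x}$ over all sign patterns $s \in \{-1,1\}^d$; then $\max_s \ip{s}{x-y} = ||x-y||_1$. A sketch (with a toy example of our choosing):

```python
from itertools import product

# One output coordinate per sign pattern s in {-1,1}^d; the embedding is
# linear, and the linf distance of the images equals the l1 distance,
# since max over s of <s, x - y> picks the sign of each coordinate of x - y.
def embed(x):
    return [sum(s * xi for s, xi in zip(signs, x))
            for signs in product((-1, 1), repeat=len(x))]

x, y = [3, -1, 2], [0, 4, -2]
l1 = sum(abs(a - b) for a, b in zip(x, y))
linf = max(abs(a - b) for a, b in zip(embed(x), embed(y)))
assert l1 == linf == 12
```

The exponential blow-up in dimension (from $d$ to $2^d$) is exactly why this trick is useful only for small $d$.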
{\bf Can similar bounds be achieved for $\ell_2$-furthest pair?} As the above table indicates, the best known algorithms for furthest pair in $\ell_2$ (bichromatic or not) still have running time bounds of the form $O(n^{2-1/\Theta(d)})$, which is ``barely subquadratic.'' Is there a fundamental reason why this problem is so much harder in $\ell_2$ than in $\ell_1$ or in $\ell_{\infty}$?
The situation is arguably counter-intuitive, because $\ell_1$ and $\ell_{\infty}$ are technically more ``universal'' metrics than $\ell_2$, so one might think that problems should be more difficult under the former than the latter. For instance, efficient isometric embeddings of $n$-point sets from $\ell_2$ into $\ell_1$ and into $\ell_{\infty}$ \emph{are} known in the literature on metric embeddings (see the book~\cite{Deza-Laurent97} for references), whereas the converse is not true (see for example~\cite[Chapter 2]{Wells-Williams75}). However, these isometric embeddings need $\Omega(n)$ dimensions in the most general cases. There may still be embeddings (perhaps randomized) which map low-dimensional $n$-point sets in $\ell_2$ into sub-exponential-dimensional $\ell_1$ (or $\ell_{\infty}$). Indeed, in the case of low distortion (where the distances in an embedding are allowed to shrink or grow by small multiplicative amounts) these are well-known, even deterministically in some regimes~\cite{linial1994geometry,indyk2007uncertainty,guruswami2010almost}. The results of this paper show that ``nice'' isometric embeddings of $\ell_2$ into $\ell_1$ would have major implications in fine-grained complexity.
\subsection{Strong Difficulty of Proximity Problems in the Euclidean metric}
We offer good reasons why furthest pair in $\ell_2$ and other barely-subquadratic problems will be difficult to solve as fast as closest pair, even in very low dimensions. We do this by relating $\ell_2$-furthest pair and other ``barely subquadratic'' problems to the Orthogonal Vectors Conjecture~\cite{Williams05,DBLP:conf/icalp/AbboudVW14} and the Strong Exponential Time Hypothesis~\cite{IP01,CIP09} in a novel way.
The \text{\bf OV}{} problem is: \emph{given $n$ vectors $v_1,\ldots,v_n \in \{0,1\}^d$, are there $i,j$ such that $\langle v_i,v_j\rangle = 0$?} Clearly $O(n^2 d)$ time suffices for solving \text{\bf OV}{}, and slightly subquadratic-time algorithms are known in the case of small $d$~\cite{AbboudWY15,DBLP:conf/soda/ChanW16}. It is conjectured that there is no \text{\bf OV}{} algorithm running in (say) $n^{1.99}$ time for dimensionality $d = \omega(\log n)$.
\begin{conjecture}[Orthogonal Vectors Conjecture (OVC) \cite{Williams05,DBLP:conf/icalp/AbboudVW14}]
For every $\varepsilon > 0$, there is a $c \geq 1$ such that \text{\bf OV}{} cannot be solved in $n^{2-\varepsilon}$ time on instances with $d = c\log n$.
\end{conjecture}
In other words, OVC states that \text{\bf OV}{} requires $n^{2-o(1)}$ time on instances of dimension $\omega(\log n)$. OVC is plausible because it is implied by (and looks much more likely than) the popular Strong Exponential Time Hypothesis~\cite{IP01,CIP09} on the time complexity of solving $k$-SAT~\cite{Williams05,Williams-Yu14}.
Straightforward transformations show that OVC implies that both furthest and bichromatic closest pair in $\ell_1^{\omega(\log n)}$ and $\ell_2^{\omega(\log n)}$ require $n^{2-o(1)}$ time~\cite{Williams05,AlmanW15}. Also assuming OVC, David, Karthik, and Laekhanukit~\cite{David16} show that (non-bichromatic) closest pair in $\ell_p^{\omega(\log n)}$ for $p > 2$ and $\ell_{\infty}^{\omega(\log n)}$ also require $n^{2-o(1)}$ time. It is not so surprising that some proximity search problems in super-log dimensions are hard under OVC, because OVC is a hardness conjecture about a problem in super-log dimensions.
In this paper, we show that OVC implies bichromatic closest pair and furthest pair in $\ell_2$ require essentially quadratic time for even \emph{poly-loglog} dimensions, in stark contrast with bichromatic closest pair and furthest pair in both $\ell_1$ and $\ell_{\infty}$ (which both have $n^{1+o(1)}$-time solutions in this case). Our main technical tool is the following dimensionality reduction for Orthogonal Vectors:
\begin{lemma}[Dimensionality Reduction for OV] \label{dim-red} Let $\ell \in [1,d]$. There is an $n \cdot d^{O(d/\ell)}$-time reduction from \text{\bf OV}{} for $n$ points in $\{0,1\}^d$ to $d^{O(d/\ell)}$ instances of \text{\bf OV}{} for $n$ points in ${\mathbb Z}^{\ell+1}$, with vectors of $O((d \log d)/\ell)$-bit entries.
\end{lemma}
Applying this lemma, we establish quadratic-time hardness for the barely-subquadratic Hopcroft's problem, $\ell_2$-Furthest Pair, and Bichromatic $\ell_2$-Closest Pair in small (poly-log-log) dimensions. It follows that if any one of these three problems became ``nearly-linear'', then there would be many interesting algorithmic consequences, including new SAT-solving algorithms. For example:
\begin{theorem}[Hardness of Hopcroft's Problem] \label{hopcroft} Under SETH (or OVC), Hopcroft's problem in $\omega(\log \log n)$ dimensions requires $n^{2-o(1)}$ time, with vectors of $O(\log n)$-bit entries.
\end{theorem}
\begin{theorem}[Hardness of $\ell_2$-Furthest Pair] \label{furthest-pair} Under SETH (or OVC), finding a furthest pair in $\omega(\log \log n)^2$ dimensions under the $\ell_2$ norm requires $n^{2-o(1)}$ time, with vectors of $O(\log n)$-bit entries.
\end{theorem}
Therefore, computing the diameter of an $n$-point set in low-dimensional $\ell_2$ is surprisingly more difficult to solve than in the $\ell_1$ metric, or in the $\ell_{\infty}$ metric. By Gabow-Bentley-Tarjan~\cite{Gabow-Bentley-Tarjan}, there are $n^{2-\varepsilon}$-time algorithms for furthest pair under $\ell_1$ up to ${\varepsilon \log n}$ dimensions, and under $\ell_{\infty}$ up to $n^{1-\varepsilon}$ dimensions. There seems to be an exponential curse of dimensionality in computing the diameter of a point set, going from $\ell_{\infty}$ to $\ell_1$, and \emph{also} going from $\ell_1$ to $\ell_2$. The following table summarizes the consequences for barely-subquadratic problems.
\begin{center}
\begin{tabular}{l|l}
{\bf Barely-Subquadratic Problem} & {\bf Lower Bound (Under SETH or OVC)}\\
\hline
$\ell_2^d$-Furthest Pair~\cite{Yao82,Agarwal1990euclidean} & $n^{2-o(1)}$ time for $d=\omega(\log \log n)^2$ \\
Bichrom.~$\ell_2^d$-Closest Pair~\cite{Agarwal1990euclidean}
& $n^{2-o(1)}$ time for $d=\omega(\log \log n)^2$
\\
$d$-dim. Hopcroft's Problem~\cite{Chazelle93,Matousek93}
& $n^{2-o(1)}$ time for $d=\omega(\log \log n)$
\\
\end{tabular}
\end{center}
Under the present landscape of fine-grained complexity conjectures, it follows that none of the barely-subquadratic problems we have identified can be made nearly-linear:
\begin{corollary} Under SETH (or OVC), {\bf none} of $\ell_2$-Furthest Pair, Bichromatic $\ell_2$-Closest Pair, or Hopcroft's problem are solvable in $n^{2-\varepsilon} \cdot \log^{2^{o(\sqrt{d})}} n$ time, for all $\varepsilon > 0$.
\end{corollary}
Since the above barely-subquadratic problems have closely-related nearly-linear problems, these results also show that OVC and SETH have consequences for the theory of metric embeddings. For example, since $\ell^d_{\infty}$-Furthest Pair can be solved in $\tilde{O}(d \cdot n)$ time, every $n^{1.99}$-time isometric embedding from $n$ points in $\ell^d_2$ into $\ell_{\infty}$ with $d=\omega(\log \log n)^2$ must blow up the dimension doubly-exponentially to $n^{1-o(1)}$ --- unless OVC and SETH are false. This is striking when one remembers that \emph{every $n$-point metric} can be (efficiently) isometrically embedded into $\ell_{\infty}$ with $n-1$ dimensions (by the classical Fr\'echet embedding).
Unfortunately, the above conditional lower bounds only hold for exact solutions to the problems. Our reductions from \text{\bf OV}{} to closest/furthest pair no longer work if we only have $(1+\varepsilon)$-approximations to the closest/furthest pair (if they did, this paper would be about how OVC is false, thanks to many fast approximation algorithms for these problems~\cite{AndoniIndyk17}).
\paragraph{Hardness for All-Nearest Neighbors.} The best known algorithms for the $\ell_2$-Closest Pair problem are nearly-linear, running in $2^{O(d)} n \log^{O(1)}n$ time. A prominent open problem is whether the exponential dependence on $d$ is necessary: \emph{Does $\ell_2$-Closest Pair require $n^{2-o(1)}$ time in $\omega(\log n)$ dimensions?} Could we show hardness under (for example) OVC or SETH?
The question is rather subtle. As mentioned earlier, the related problems of Bichromatic $\ell_2$-Closest Pair and $\ell_2$-Furthest Pair are easily shown to be \text{\bf OV}{}-hard in $\omega(\log n)$ dimensions~\cite{Williams05,AlmanW15}. Intuitively speaking, in both of the latter problems, our reductions can ``control'' the distances between points in such a way that it is easy to encode \text{\bf OV}{}. But for $\ell_2$-Closest Pair (with no colors), we have much less control, and it is difficult to keep large sets of points far enough apart to successfully encode an \text{\bf OV}{} instance~\cite{David16}.
Here we report some progress on this open problem. In the closely-related \emph{All-Nearest Neighbors} problem, the task is to report the $\ell_2$-closest pair for all points in the given set. Nearly-linear algorithms are also known for All-Nearest Neighbors, which have essentially the same complexity as $\ell_2$-Closest Pair~\cite{Clarkson83,Vaidya89}. We can show \text{\bf OV}{}-hardness for All-Nearest Neighbors:
\begin{theorem} \label{all-nn} Under OVC, the All-Nearest Neighbors problem in $\ell^{\omega(\log n)}_2$ requires $n^{2-o(1)}$ time, even restricted to vectors with entries from $\{-1,0,1\}$.
\end{theorem}
The reduction goes through the Set Containment problem (equivalent to \text{\bf OV}{}), and uses error-correcting codes to keep one half of the vectors ``distant'' from each other, and the other half relatively ``close'' to the first half.
\section{A Dimensionality Self-Reduction for Orthogonal Vectors}
In this section, we set up the framework for proving hardness for the aforementioned ``barely-subquadratic'' problems. We begin with the following more general theorem, which will imply the dimensionality reduction lemma.
\begin{theorem}\label{REDDIM}
For every $d$ and integer $\ell \in [1,d]$, given two sets of vectors $U,V\subseteq\{0,1\}^d$, there is a deterministic algorithm running in $n \cdot d^{O(d/\ell)}$ time which outputs a list of $t = d^{O(d/\ell)}$ integers $\{k_1,\ldots,k_t\} \subseteq [0,t]$, along with sets $U',V'\subseteq{\mathbb Z}^{\ell}$ such that $|U'| = |U|$, $|V'|=|V|$, and all entries of the vectors in $U',V'$ are $O((d\log d)/\ell)$-bit integers. There is an orthogonal pair $u\in U,v\in V$ if and only if there is a pair $u'\in U',v'\in V'$ such that $\langle u',v'\rangle = k_i$ for some $i$.
\end{theorem}
Although it may be difficult to see in hindsight, the proof of Theorem~\ref{REDDIM} is inspired by the Merlin-Arthur communication protocol with $\tilde{O}(\sqrt{d})$ communication for Inner Product, due to Aaronson and Wigderson~\cite{Aaronson-Wigderson09}. In that protocol, two parties each hold a $d$-bit vector, and they wish to determine if their vectors are orthogonal. The protocol shows how a prover can send an $\tilde{O}(\sqrt{d})$-bit message to the two parties, such that the two parties only need to exchange $\tilde{O}(\sqrt{d})$ bits (with $O(\log d)$ public randomness) to determine orthogonality with high probability. They do this by encoding $d$-bit vectors with $O(\sqrt{d})$-degree bivariate polynomials, and their protocol uses the key property of low-degree polynomials that they have few roots: evaluating two distinct polynomials at a random point will yield two distinct values, with decent probability.
In the below proof of Theorem~\ref{REDDIM}, there are several major differences. First, we forget one of the variables, and encode our $d$-bit vectors with $\ell$-dimensional vectors whose entries are $d/\ell$-degree univariate polynomials. Second, the parameter $\ell$ allows for a trade-off between the length of the vector and the degrees of the polynomials. (This corresponds to a trade-off between the length of the prover's message and the length of the others' messages, in the Merlin-Arthur protocol.) Third, we do \emph{not} pick random points to evaluate the polynomial on, but rather a single deterministic value. This actually suffices for our purposes.
\begin{proof} Without loss of generality, assume $d$ is a multiple of $\ell$; otherwise we can pad each vector with zeroes at the end to satisfy this assumption.
Consider two vectors $u \in U,v \in V$. Divide the $d$ dimensions of both vectors into $\ell$ contiguous blocks, each of which contains $d/\ell$ dimensions.
Suppose the $i$th block of $u$ is $[u_{i,1},\ldots,u_{i,d/\ell}]$ and the $i$th block of $v$ is $[v_{i,1},\ldots,v_{i,d/\ell}]$, where all $u_{i,j}, v_{i,j} \in \{0,1\}$. Construct the polynomials
\[P_{u,i}(x) =\sum_{j=1}^{d/\ell} u_{i,j}\cdot x^{j-1}\] and
\[Q_{v,i}(x) = \sum_{j=1}^{d/\ell} v_{i,j} \cdot x^{d/\ell-j}.\]
Let $P_u(x)$ be the $\ell$-dimensional vector $[P_{u,1},\ldots,P_{u,\ell}]$ and
$Q_v(x)$ be the $\ell$-dimensional vector $[Q_{v,1},\ldots,Q_{v,\ell}]$. Observe that the coefficient of $x^{d/\ell-1}$ in the polynomial $R_{u,v}(x) = \ip{P_u(x)}{Q_v(x)}$ is exactly
\[\sum_{i=1}^{\ell} \sum_{j=1}^{d/\ell} u_{i,j}\cdot v_{i,j} = \ip{u}{v}.\] Furthermore, note that for any $u \in U$ and $v \in V$, the polynomial $R_{u,v}(x)$ has degree at most $2d/\ell$, and each of its coefficients is an integer in $[0,d]$.
Now we are ready to describe the reduction. First, enumerate all $t = d^{O(d/\ell)}$ polynomials $R(x)$ of degree at most $2d/\ell$ with coefficients in $[0,d] \cap {\mathbb Z}$ such that the coefficient of $x^{d/\ell-1}$ equals $0$.
Set $x_0 := d+1$. Note that, given the integer value $k = R(x_0)$, the polynomial $R(x)$ is uniquely determined: since all of its coefficients are integers in $[0,d]$ and $x_0 = d+1$, they are precisely the base-$(d+1)$ digits of $k$. The same holds for the value $R_{u,v}(x_0) = \sum_{i=1}^{\ell}P_{u,i}(x_0) \cdot Q_{v,i}(x_0)$.
For all $u \in U$ and $v \in V$, compute $u' := P_u(x_0)$ and $v' := Q_v(x_0)$, creating two sets of vectors $U'$ and $V'$ where all vectors have $\ell$ dimensions, with entries of bit length at most $O((d \log d)/\ell)$.
By enumerating over all such polynomials $R(x)$, we obtain sets $U'$, $V'$, and a collection of $t$ integers $\{R(x_0)\}$, each of $O((d\log d)/\ell)$ bits, satisfying the conclusion of the theorem. In particular, the vectors $u \in U$, $v \in V$ satisfy $\ip{u}{v} = 0$ if and only if there is \emph{some} polynomial $R(x)$ of degree at most $2d/\ell$ with coefficients in $[0,d] \cap {\mathbb Z}$ such that $\ip{P_u(x_0)}{Q_v(x_0)} = R(x_0)$. \end{proof}
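The heart of the proof is that evaluating the block polynomials at the single point $x_0 = d+1$ packs all coefficients of $R_{u,v}$ into the base-$(d+1)$ digits of one integer, so $\langle u,v\rangle$ can be read off as the digit at position $d/\ell-1$. Below is a minimal sketch of this encoding (the function names are illustrative, not from the paper):

```python
def encode_P(u, ell, x0):
    """Block-encode u: the i-th entry is P_{u,i}(x0) = sum_j u[i*b+j] * x0**j."""
    b = len(u) // ell
    return [sum(u[i * b + j] * x0 ** j for j in range(b)) for i in range(ell)]

def encode_Q(v, ell, x0):
    """Reverse block-encode v: Q_{v,i}(x0) = sum_j v[i*b+j] * x0**(b-1-j)."""
    b = len(v) // ell
    return [sum(v[i * b + j] * x0 ** (b - 1 - j) for j in range(b)) for i in range(ell)]

def inner_product_digit(u, v, ell):
    """Recover <u,v> as a base-(d+1) digit of <P_u(x0), Q_v(x0)>."""
    d = len(u)
    b = d // ell
    x0 = d + 1  # strictly larger than any coefficient of R_{u,v}
    val = sum(p * q for p, q in zip(encode_P(u, ell, x0), encode_Q(v, ell, x0)))
    # The digit at position b-1 is the coefficient of x^{b-1} in R_{u,v}.
    return (val // x0 ** (b - 1)) % x0
```

Each coefficient of $R_{u,v}$ is a sum of at most $d$ products of bits, hence at most $d < x_0$, so no carries occur between digits and the extraction is exact.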
Now we prove the dimensionality reduction lemma:
\begin{reminder}{Lemma~\ref{dim-red}}[Dimensionality Reduction for OV] Let $\ell \in [1,d]$. There is an $n \cdot d^{O(d/\ell)}$-time reduction from \text{\bf OV}{} for $n$ points in $\{0,1\}^d$ to $d^{O(d/\ell)}$ instances of \text{\bf OV}{} for $n$ points in ${\mathbb Z}^{\ell+1}$, with vectors of $O(\log n)$-bit entries.
\end{reminder}
\begin{proof} Given a set $S$ of $n$ (non-zero) vectors in $\{0,1\}^d$, set $U := S$ and $V := S$ in Theorem~\ref{REDDIM}, which produces sets $U'$ and $V'$ of $n$ vectors in ${\mathbb Z}^{\ell}$ along with a set of $d^{O(d/\ell)}$ numbers $T$ such that $S$ has an orthogonal pair if and only if there is some $u \in U'$, $v \in V'$, and $k \in T$ such that $\ip{u}{v}=k$.
For every $k \in T$, create new sets of vectors $U_k,V_k$, where every $u \in U'$ is replaced by $u_k := [u, 1]$ in $U_k$, and every $v \in V'$ is replaced by $v_k := [v, -k]$ in $V_k$. Since all entries of the vectors in $U'$ and $V'$ are non-negative, we observe:
\begin{enumerate}
\item for all $u \in U'$ and $v \in V'$, $\ip{u}{v}=k$ if and only if $\ip{u_k}{v_k}=0$,
\item for every pair $u_k, u'_k \in U_k$, $\ip{u_k}{u'_k}\geq 1$, and
\item for every pair $v_k, v'_k \in V_k$, $\ip{v_k}{v'_k}\geq k^2$.
\end{enumerate}
Consider the set $S_k := U_k \cup V_k \subset {\mathbb Z}^{\ell+1}$. By the above three facts, we could only obtain an orthogonal pair of vectors in $S_k$ by taking one vector from $U_k$ and one vector from $V_k$, and $S_k$ contains an orthogonal pair if and only if there is some $u \in U'$ and $v \in V'$ such that $\ip{u}{v}=k$. Our reduction calls $\text{\bf OV}{}$ on $S_k$ for every $k \in T$, and outputs the relevant orthogonal pair for $S$ if any of the calls return an orthogonal pair for some $S_k$.
\end{proof}
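The padding trick in the lemma --- turning the question ``is $\ip{u}{v}=k$?'' back into an orthogonality question --- can be checked directly. A sketch (the function name is illustrative):

```python
def pad_for_target(U, V, k):
    """Append 1 to each u and -k to each v, so <u,v> = k iff the padded pair is orthogonal."""
    Uk = [u + [1] for u in U]
    Vk = [v + [-k] for v in V]
    return Uk, Vk
```

Since all original entries are non-negative, inner products within $U_k$ stay at least $1$ and within $V_k$ at least $k^2$, so only cross pairs can be orthogonal.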
\subsection{Consequences}
Here we show how the above Dimensionality Reduction for OV implies hardness for the barely-subquadratic problems mentioned in the introduction.
\begin{reminder}{Theorem~\ref{hopcroft}}[Hardness of Hopcroft's Problem] Under SETH (or OVC), Hopcroft's problem in $\omega(\log \log n)$ dimensions requires $n^{2-o(1)}$ time, with vectors of $O(\log n)$-bit entries.
\end{reminder}
\begin{proof} Let $c \geq 1$ be an arbitrary constant and let $d := c\log n$. We show how an oracle for Hopcroft's problem in $\omega(\log \log n)$ dimensions, running in $O(n^{2-\delta})$ time for some universal $\delta > 0$, can be used to solve \text{\bf OV}{} for $n$ vectors in $d$ dimensions in $n^{2-\delta+\varepsilon}$ time (regardless of $c$) for every $\varepsilon > 0$, which would refute the OVC.
Set $\ell := c(\log d)/\alpha = c \log(c \log n)/\alpha$, for a small parameter $\alpha > 0$ to be set later. Applying Lemma~\ref{dim-red} to a given subset $S \subseteq \{0,1\}^d$, the reduction runs in time \[n \cdot (c \log n)^{O((c\log n)/\ell)} \leq n \cdot c^{O(\alpha \log n)} \leq n^{1+O(\alpha \log(c))},\]and produces $n^{O(\alpha \log(c))}$ instances of \text{\bf OV}{} with $n$ points in ${\mathbb Z}^{(c \log \log n)/\alpha + O(1)}$, with vectors of $O(\log n)$-bit entries. Setting $\alpha \ll \varepsilon/\log(c)$, the reduction generates $O(n^{\varepsilon})$ instances in $\Omega(1/\varepsilon \cdot c \log(c) \cdot \log \log n)$ dimensions, each of which our Hopcroft oracle solves in $n^{2-\delta}$ time, by assumption. This concludes the proof.
\end{proof}
\begin{reminder}{Theorem~\ref{furthest-pair}}[Hardness of $\ell_2$-Furthest Pair] Under SETH (or OVC), finding an $\ell_2$-furthest pair in $\omega(\log \log n)^2$ dimensions requires $n^{2-o(1)}$ time, with vectors of $O(\log n)$-bit entries.
\end{reminder}
\begin{proof} Given a fast algorithm for $\ell_2$-furthest pair in $\omega(\log \log n)^2$ dimensions, we show how to quickly solve Hopcroft's problem on $n$ points in $\omega(\log \log n)$ dimensions, and appeal to Theorem~\ref{hopcroft}.
Let $S$ be a set of $n$ vectors in ${\mathbb Z}^{\ell}$ with $\ell = \omega(\log \log n)$ and with $O(\log n)$-bit entries. Let $k > 1$ be such that every entry of every vector has magnitude less than $n^k$. In the following, let $v[i]$ denote the $i$th component of a vector $v$.
For every vector $u \in S$, define the $(\ell^2+2)$-dimensional vector
\[u' := [u[1]\cdot u[1], u[1]\cdot u[2], \ldots, u[i]\cdot u[j], \ldots, u[\ell]\cdot u[\ell],0, n^{2k+1}].\] That is, the first $\ell^2$ components of $u'$ are all possible products of two components of $u$, followed by the entries $0$ and $n^{2k+1}$. Put each $u'$ in a set $U'$. Also for every vector $v \in S$, define the $(\ell^2+2)$-dimensional vector
\[v' := [v[1]\cdot v[1], v[1] \cdot v[2], \ldots, v[i]\cdot v[j], \ldots, v[\ell]\cdot v[\ell],n^{2k+1},0],\] and put $v'$ in a set $V'$. Now observe that:
\begin{itemize}
\item for $u'_1,u'_2 \in U'$ coming from some $u_1,u_2 \in S$, $\ip{u'_1}{u'_2} = \sum_{i,j\in[\ell]} u_1[i]u_1[j] u_2[i] u_2[j] + n^{4k+2}$.
\item for $v'_1,v'_2 \in V'$ coming from some $v_1,v_2 \in S$, $\ip{v'_1}{v'_2} = \sum_{i,j\in[\ell]} v_1[i]v_1[j] v_2[i] v_2[j] + n^{4k+2}$.
\end{itemize} Note that by our choice of $k$, $|u_1[i]u_1[j] u_2[i] u_2[j]| \leq n^{4k}$ for all $i,j$. So all inner products are positive, in both of the above cases. In contrast, for $u' \in U'$ and $v' \in V'$, \[\langle u',v'\rangle = \sum_{i,j \in [\ell]} u[i]u[j]v[i]v[j] = \sum_{i,j} u[i]v[i]\cdot u[j]v[j] = (\langle u,v\rangle)^2.\] Now, all possible inner products between every $u' \in U'$ and $v' \in V'$ are non-negative, and $\ip{u'}{v'}=0$ if and only if $\ip{u}{v}=0$.
Suppose we normalize all vectors in $U'$ and $V'$, replacing each vector $u'$ and $v'$ by $u'' := u'/||u'||_2$ and $v'' := v'/||v'||_2$. Since \[\langle u'',v''\rangle = \frac{1}{||u'||_2\cdot ||v'||_2}\langle u',v'\rangle,\]
the vector pairs in $U',V'$ with zero inner product are exactly preserved, and all inner products of pairs within $U'$ (and of pairs within $V'$) are still positive. By the law of cosines, for all $u'' \in U'$ and $v'' \in V'$ we have
\[||u''-v''||^2_2 = ||u''||^2_2 + ||v''||^2_2 - 2\langle u'',v''\rangle = 2 - 2\langle u'',v''\rangle.\]
Therefore, taking $S := U' \cup V'$, solving Hopcroft's problem on $S$ is equivalent to finding two vectors with $\ell_2$-distance at least $\sqrt{2}$, and this is the maximum possible distance between two vectors in the instance. It follows that solving $\ell_2$-furthest pair on these instances will solve Hopcroft's problem on them as well.
\end{proof}
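The tensoring step can be sanity-checked numerically. The sketch below keeps only the $\ell^2$ product coordinates and the normalization (it omits the $0$ and $n^{2k+1}$ padding coordinates that the proof uses to keep same-color inner products positive; function names are illustrative):

```python
import math

def tensor(u):
    """Flatten the outer product u (x) u into an ell^2-dimensional vector."""
    return [a * b for a in u for b in u]

def normalize(w):
    """Scale w to a unit vector in the Euclidean norm."""
    n = math.sqrt(sum(x * x for x in w))
    return [x / n for x in w]

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))
```

For unit vectors, $\|a-b\|_2^2 = 2 - 2\langle a,b\rangle$, so maximizing distance means minimizing the inner product; since $\langle \text{tensor}(u),\text{tensor}(v)\rangle = \langle u,v\rangle^2 \ge 0$, the maximum distance $\sqrt 2$ is attained exactly on orthogonal pairs.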
\begin{corollary}[Hardness of Bichromatic $\ell_2$-Closest Pair] \label{closest-pair} Under SETH (or OVC), finding a bichromatic $\ell_2$-closest pair in $\omega(\log \log n)^2$ dimensions requires $n^{2-o(1)}$ time, with vectors of $O(\log n)$-bit entries.
\end{corollary}
\begin{proof}
As before, we begin from the proof of hardness for Hopcroft's problem (Theorem~\ref{hopcroft}). The reduction there computes $O(n^{\varepsilon})$ instances of Hopcroft's problem on $n$ points in $\Omega(1/\varepsilon \cdot c \log(c) \cdot \log \log n)$ dimensions, for any desired $\varepsilon > 0$. We will slightly modify the proof of Theorem~\ref{furthest-pair} for furthest pair to work for bichromatic closest pair.
Let $S$ be a set of $n$ vectors in ${\mathbb Z}^{\ell}$ with $\ell = \omega(\log \log n)$ and $O(\log n)$-bit entries. We wish to know if two vectors in $S$ are orthogonal.
Let $v[i]$ denote the $i$th component of a vector $v$. We define the vectors in $U'$ very similarly to the proof of Theorem~\ref{furthest-pair}: for all $u \in S$, make the $\ell^2$-dimensional vector
\[u' := [u[1]\cdot u[1], u[1]\cdot u[2], \ldots, u[i]\cdot u[j], \ldots, u[\ell]\cdot u[\ell]].\] That is, each component of $u'$ is a product of two components of $u$. Put each $u'$ in a set $U'$ of \emph{red points}. For every vector $v \in S$, define the $\ell^2$-dimensional vector
\[v' := [-v[1]\cdot v[1], -v[1] \cdot v[2], \ldots, -v[i]\cdot v[j], \ldots, -v[\ell]\cdot v[\ell]],\] and put $v'$ in a set $V'$ of \emph{blue points}.
Now observe that for every red $u' \in U'$ and every blue $v' \in V'$,
\[\ip{u'}{v'} = -(\langle u,v\rangle)^2.\] Thus the inner product between a red $u'$ and a blue $v'$ is zero when $\ip{u}{v} = 0$, and is otherwise negative. If we normalize all vectors in $U'$ and $V'$, those red-blue pairs with zero inner product are preserved, and the rest of the red-blue pairs still have negative inner product. Analogously to the proof of Theorem~\ref{furthest-pair}, this means that the red-blue pairs with zero inner product have Euclidean distance $\sqrt{2}$, and all other red-blue pairs have distance strictly greater than $\sqrt{2}$. Therefore finding the closest red-blue pair in this $\ell^2$-dimensional instance will solve the original instance $S$ of Hopcroft's problem.
\end{proof}
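The bichromatic variant only flips the sign on the blue side. A quick check (a sketch with illustrative names) that zero-inner-product red-blue pairs land exactly at distance $\sqrt2$ while all others land strictly farther:

```python
import math

def red(u):
    """Red embedding: flattened outer product of u with itself."""
    return [a * b for a in u for b in u]

def blue(v):
    """Blue embedding: negated flattened outer product of v with itself."""
    return [-a * b for a in v for b in v]

def unit(w):
    n = math.sqrt(sum(x * x for x in w))
    return [x / n for x in w]

def rb_dist(u, v):
    """Euclidean distance between the normalized red and blue embeddings."""
    a, b = unit(red(u)), unit(blue(v))
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Here $\langle \text{red}(u),\text{blue}(v)\rangle = -\langle u,v\rangle^2 \le 0$, so after normalization every red-blue distance is at least $\sqrt2$, with equality precisely on orthogonal pairs.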
\input{all-nearest-hardness}
\section{Conclusion}
We have given some rigorous explanation for why certain point-location and proximity problems only admit barely-subquadratic time algorithms: they can encode difficult high-dimensional Boolean problems in surprisingly low dimensions. In contrast, the nearly-linear proximity problems seem incapable of such an encoding; moreover, if any of them \emph{were} found to be capable, we would be refuting some major conjectures in fine-grained complexity.
It is likely that many more consequences can be derived than what we have shown here.
\begin{itemize}
\item For one example, Backurs and Indyk (personal communication) have noticed that our lower bound for bichromatic $\ell_2$-Closest Pair implies an inapproximability result for the \emph{fast Gauss transform}~\cite{Greengard-Strain91}, where we are given a set of $n$ red vectors $R$ and $n$ blue vectors $B$ in ${\mathbb R}^d$, and are asked to compute \[F(r) = \sum_{b \in B} e^{-||r-b||^2}\] for every $r \in R$. In particular, they have observed that (under OVC) $F$ cannot be approximated with an additive $\varepsilon$-error in $n^{2-\delta} \cdot \text{poly}(\log(1/\varepsilon),2^d)$ time, for any fixed $\delta > 0$.
\item For another example, a variant of the reduction in Lemma~\ref{dim-red} (where instead of setting $x := d+1$ in the polynomials, we imagine trying \emph{all} choices for $x$ from a large-enough field, and we build larger-dimensional Boolean vectors whose inner products model the process of computing inner products among all values of $x$) was used in recent work with Abboud and Rubenstein~\cite{ARW17} to show that finding a vector pair of maximum inner product among a set of $n$ Boolean $n^{o(1)}$-dimensional vectors is hard to non-trivially approximate in sub-quadratic time.
\end{itemize}
There are many interesting questions to pursue further; here are some particularly compelling ones.
\begin{enumerate}
\item Can the $\omega(\log \log n)$ and $\omega(\log \log n)^2$ dimensionality in our hardness reductions be reduced, all the way down to $\omega(1)$ dimensions? This would demonstrate very tight hardness for solving these problems. The main bottleneck is that in the main reduction (Theorem~\ref{REDDIM} and Lemma~\ref{dim-red}) it seems we have to compute $(\log n)^{O((\log n)/\ell)}$ different instances to go from $O(\log n)$ dimensions down to $\ell$ dimensions; perhaps there is a more efficient reduction method.
\item All of the nearly-linear problems discussed in this paper actually have $2^{O(d)} \cdot n \log^{O(1)} n$-time algorithms, except for bichromatic $\ell_1$ and $\ell_{\infty}$ closest pair, whose best known algorithms have the running time bound $n \cdot \log^{O(d)} n$. Could stronger hardness be established for these two problems, or can their dependence on $d$ be improved? So far, prior work~\cite{Williams05,David16} has only established quadratic-time hardness for these problems when $d =\omega(\log n)$, so it is quite possible that they are in fact solvable in $2^{O(d)} \cdot n \log^{O(1)} n$ time, like the other nearly-linear problems.
\item The All-Nearest Neighbors problem is solvable in $2^{O(d)} \cdot n\log^{O(1)} n$ time in the general case, not just when all vectors are in $\{-1,0,1\}$. Is there a dimensionality reduction for the special case of $\{-1,0,1\}$, similar to Lemma~\ref{dim-red}? (Please note that this would likely refute OVC and SETH.)
\item Do any of the ``popular conjectures'' in fine-grained complexity imply that $\ell_2$-Closest Pair requires $n^{2-o(1)}$ time in $\omega(\log n)$ dimensions?
\end{enumerate}
\section*{Acknowledgements}
I am grateful to Amir Abboud, Arturs Backurs, Piotr Indyk, Aviad Rubenstein, and Huacheng Yu for useful comments and discussions. In particular, several years ago Huacheng took patient notes from one of our meetings, and wrote up a version of the main lemma presented here. Unfortunately he declined to be an author on this paper.
\bibliographystyle{alpha}
\renewcommand{\thethm}{\Alph{thm}}
Recently, Du and Ni \cite{DN21} considered the following monostable cooperative systems with nonlocal diffusion and free boundaries
\bes\label{1.1}\left\{\begin{aligned}
&\partial_t u_i=d_i\mathcal{L}[u_i](t,x)+f_i(u_1,u_2,\cdots,u_m),&&t>0,~x\in(g(t),h(t)),1\le i\le m_0,\\
&\partial_t u_i=f_i(u_1,u_2,\cdots,u_m), &&t>0,~x\in(g(t),h(t)),~m_0+1\le i\le m,\\
&u_i(t,g(t))=u_i(t,h(t))=0,& &t>0, ~1\le i\le m,\\
&g'(t)=-\sum_{i=1}^{m_0}\mu_i\int_{g(t)}^{h(t)}\int_{-\yy}^{g(t)}J_i(x-y)u_i(t,x){\rm d}y{\rm d}x, & & t>0,\\
&h'(t)=\sum_{i=1}^{m_0}\mu_i\int_{g(t)}^{h(t)}\int_{h(t)}^{\yy}J_i(x-y)u_i(t,x){\rm d}y{\rm d}x, & & t>0,\\
&h(0)=-g(0)=h_0>0,\;\; u_i(0,x)=u_{i0}(x),& &|x|\le h_0, ~1\le i\le m,
\end{aligned}\right.
\ees
where $1\le m_0\le m$, $d_i>0$, $\mu_i\ge0$, $\sum_{i=1}^{m_0}\mu_i>0$, and
\bes\label{1.dn}
\mathcal{L}[u_i](t,x):=\int_{g(t)}^{h(t)}J_i(x-y)u_i(t,y){\rm d}y-u_i(t,x).
\ees
For $1\le i\le m_0$, kernel functions $J_i$ satisfy
\begin{enumerate}[leftmargin=4em]
\item[{\bf(J)}]$J\in C(\mathbb{R})\cap L^{\yy}(\mathbb{R})$, $J\ge 0$, $J(0)>0,~\dd\int_{\mathbb{R}}J(x)\dx=1$, \ $J$\; is even,
\end{enumerate}
and initial functions $u_{i0}(x)$ satisfy
\[u_{i0}\in C([-h_0,h_0]), ~ u_{i0}(\pm h_0)=0<u_{i0}(x), ~ ~ \forall ~ x\in(-h_0,h_0).\]
This model can be used to describe the spreading of some epidemics and the interactions of various species, for example, see \cite{ZZLD} and \cite{DNwn}, where the spatial movements of agents are approximated by the nonlocal diffusion operator \eqref{1.dn} instead of random diffusion (also known as local diffusion). Such free boundary problems were first proposed in \cite{CDLL} and \cite{CQW}. Especially, it can be seen from \cite{CDLL} that the introduction of nonlocal diffusion brings about some dynamical behaviors different from those of the local version in \cite{DL2010}, and also gives rise to some technical difficulties. Since these two works appeared, some related research has emerged. For example, one can refer to \cite{DLZ} for the first study of the spreading speed of \cite{CDLL}, \cite{LWW20,DWZ,CLWZ} for the Lotka-Volterra competition and prey-predator models, \cite{DN212} for the high-dimensional and radially symmetric version of the Fisher-KPP equation, and \cite{LW21} for the model with a fixed boundary and a moving boundary.
Before introducing our results for \eqref{1.1}, let us briefly review some conclusions obtained by Du and Ni \cite{DN21}. The following notations and assumptions are necessary.
{\bf Notations:}
(i)\ $\mathbb{R}^m_+:=\{x=(x_1,x_2,\cdots,x_m)\in\mathbb{R}^m:x_i\ge0, ~ 1\le i\le m\}$.
(ii)\ For any $x=(x_1,x_2,\cdots,x_m)\in\mathbb{R}^m$, simply write $x=(x_i)$. For $x=(x_i),\ y=(y_i)\in\mathbb{R}^m$,
\bess
&&x\preceq(\succeq) y ~ ~ \text{means} ~ ~ x_i\le(\ge) y_i ~ \text{for} ~ 1\le i\le m,\\
&&x\precneqq(\succneqq) y ~ ~ \text{means} ~ ~ x\preceq(\succeq) y ~ \text{but} ~ x\neq y,\\
&&x\prec(\succ) y ~ ~ \text{means} ~ ~ x_i<(>) y_i ~ \text{for} ~ 1\le i\le m.
\eess
(iii)\ If $x\preceq y$, $[x,y]$ represents the set of $\{z\in\mathbb{R}^m: x\preceq z\preceq y\}$.
(iv)\ Hadamard product: For any $x=(x_i),\ y=(y_i)\in\mathbb{R}^m$, $x\circ y:=(x_iy_i)\in\mathbb{R}^m$.
{\bf Assumptions on reaction term $f_i$ :}\\
{\bf(f1)}\, (i)\ Let $F=(f_1,f_2,\cdots, f_m)\in [C^1(\mathbb{R}^m_+)]^m$. $F(u)={\bf 0}$ has only two roots in $\mathbb{R}^m_+$: ${\bf 0}=(0,0,\cdots,0)$ and ${\bf u^*}=(u^*_1,u^*_2,\cdots, u^*_m)\succ {\bf 0}$.
(ii)\, $\partial_jf_i(u)\ge0$ for $i\neq j$ and $u\in[{\bf 0},{\bf \hat{u}}]$, where either ${\bf \hat{u}}=\yy$ meaning $[{\bf 0},{\bf \hat{u}}]=\mathbb{R}^m_+$, or ${\bf u^*}\prec {\bf \hat{u}}\in\mathbb{R}^m_+$, which implies that system \eqref{1.1} is cooperative in $[{\bf 0},{\bf \hat{u}}]$.
(iii)\, The matrix $\nabla F({\bf0})=(\partial_jf_i({\bf 0}))_{m\times m}$ is irreducible with positive principal eigenvalue.
(iv)\, If $m_0<m$, then $\partial_jf_i(u)>0$ for $1\le j\le m_0<i\le m$ and $u\in[{\bf 0},{\bf u^*}]$.\\
{\bf(f2)}\, $F(ku)\ge kF(u)$ for any $0\le k\le 1$ and $u\in[{\bf 0},{\bf \hat{u}}]$.\\
{\bf(f3)}\, The matrix $\nabla F({\bf u^*})$ is invertible, ${\bf u^*}\nabla F({\bf u^*})\preceq {\bf 0}$ and for every $i\in\{1,2,\cdots, m\}$, either
(i)\, $\sum_{j=1}^{m}\partial_jf_i({\bf u^*})u^*_j<0$, or
(ii)\, $\sum_{j=1}^{m}\partial_jf_i({\bf u^*})u^*_j=0$ and $f_i(u)$ is linear in $[{\bf u^*}-\ep_0{\bf 1},{\bf u^*}]$ for some small $\ep_0>0$, where ${\bf 1}=(1,1,\cdots,1)\in\mathbb{R}^m$.\\
{\bf(f4)}\, The ODE system
\[\bar{u}_t=F(\bar{u}), ~ ~ \bar{u}(0)\succ{\bf 0}\]
has a unique global solution $\bar{u}$ and $\lim_{t\to\yy}\bar{u}(t)={\bf u^*}$.\\
{\bf(f5)}\, The problem
\bes\label{1.2}
U_t=D\circ\int_{\mathbb{R}}{\bf J}(x-y)\circ U(t,y)\dy-D\circ U+F(U) ~ ~ \text{for} ~ t>0, ~ x\in\mathbb{R}
\ees
has the invariant set $[{\bf 0},{\bf \hat{u}}]$, and every nontrivial solution of it is attracted by the equilibrium ${\bf u^*}$. That is, if the initial datum $U(0,x)\in[{\bf 0},{\bf \hat{u}}]$, then $U(t,x)\in[{\bf 0},{\bf \hat{u}}]$ for all $t>0$ and $x\in\mathbb{R}$; if further $U(0,x)\not\equiv{\bf0}$, then $\lim_{t\to\yy}U(t,x)={\bf u^*}$ locally uniformly in $\mathbb{R}$.
Throughout this paper, we always make the following assumptions:
(i)\, $J_i$ satisfy condition {\bf (J)} for $i\in\{1,2,\cdots,m_0\}$.
(ii)\, $d_i=0$ and $J_i(x)\equiv0$ for $x\in\mathbb{R}$, $i\in\{m_0+1,m_0+2,\cdots, m\}$, $D=(d_i)$ and ${\bf J}=(J_i(x))$.
(iii)\, {\bf(f1)}-{\bf(f5)} hold true.
(iv)\, initial value $(u_{i0}(x))\in[{\bf 0},{\bf \hat{u}}]$.
Under the above assumptions, one easily proves that \eqref{1.1} has a unique global solution $(u,g,h)$. Here we suppose that its longtime behavior is governed by a spreading-vanishing dichotomy, namely, one of the following alternatives must happen for \eqref{1.1}:
\sk{\rm(i)}\, \underline{Spreading:} $\lim_{t\to\yy}-g(t)=\lim_{t\to\yy}h(t)=\yy$ and $\lim_{t\to\yy}u(t,x)={\bf u^*}$ locally uniformly in $\mathbb{R}$.
\sk{\rm(ii)}\, \underline{Vanishing:} $\lim_{t\to\yy}h(t)-g(t)<\yy$ and $\lim_{t\to\yy}\sum_{i=1}^{m}\|u_i(t,\cdot)\|_{C([g(t),h(t)])}=0$.
The authors of \cite{DN21} discussed the spreading speeds of $g(t)$ and $h(t)$ when spreading happens for \eqref{1.1}, and proved that there is a finite spreading speed for \eqref{1.1} if and only if the following condition holds for $J_i$
\begin{enumerate}[leftmargin=4em]
\item[{\bf(J1)}]$\dd\int_{0}^{\yy}xJ_i(x)\dx<\yy$ for any $i\in\{1,2,\cdots, m_0\}$ such that $\mu_i>0$.
\end{enumerate}
Exactly, they obtained the following conclusion.
\begin{thm}{\rm \cite[Theorem 1.3]{DN21} }\label{A} Let $(u,g,h)$ be a solution of \eqref{1.1}, and spreading happen. Then
\begin{align*}
\lim_{t\to\yy}\frac{-g(t)}{t}=\lim_{t\to\yy}\frac{h(t)}{t}=\left\{\begin{aligned}
&c_0& &{\rm if~{\bf(J1)}~holds},\\
&\yy& &{\rm if~{\bf(J1)}~does ~ not ~ hold},
\end{aligned}\right.
\end{align*}
where $c_0$ is uniquely determined by the semi-wave problem
\bes\label{1.3}\left\{\begin{array}{lll}
D\circ\dd\int_{-\yy}^{0}{\bf J}(x-y)\circ \Phi(y)\dy-D\circ \Phi+c\Phi'(x)+F(\Phi)=0, ~ ~ -\yy<x<0,\\
\Phi(-\yy)={\bf u^*}, ~ ~ \Phi(0)={\bf0}, ~ ~ \Phi(x)=(\phi_i(x)),
\end{array}\right.
\ees
and
\bes\label{1.4}
c=\sum_{i=1}^{m_0}\mu_i\int_{-\yy}^{0}\int_{0}^{\yy}J_i(x-y)\phi_i(x)\dy\dx.
\ees
\end{thm}
When {\bf(J1)} does not hold, the resulting phenomenon is usually called accelerated spreading.
For problem \eqref{1.3}, they obtained a dichotomy result between it and the traveling wave problem
\bes\label{1.5}\left\{\begin{array}{lll}
D\circ\dd\int_{\mathbb{R}}{\bf J}(x-y)\circ \Psi(y)\dy-D\circ \Psi+c\Psi'(x)+F(\Psi)=0, ~ ~ -\yy<x<\yy,\\
\Psi(-\yy)={\bf u^*}, ~ ~ \Psi(\yy)={\bf0}, ~ ~ \Psi(x)=(\psi_i(x)).
\end{array}\right.
\ees
To state the dichotomy, we firstly show a new condition on $J_i$, namely,
\begin{enumerate}[leftmargin=4em]
\item[{\bf(J2)}]$\dd\int_{0}^{\yy}e^{\lambda x}J_i(x)\dx<\yy$ for some $\lambda>0$ and any $i\in\{1,2,\cdots, m_0\}$.
\end{enumerate}
Clearly, condition {\bf (J2)} implies {\bf (J1)}, but not conversely.
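To see that the converse fails, consider algebraically decaying kernels (this family is our illustration, not taken from \cite{DN21}):

```latex
J(x) = \frac{C_\gamma}{(1+|x|)^{\gamma}}, \qquad \gamma>1, \qquad
C_\gamma \ \text{chosen so that} \ \int_{\mathbb{R}} J(x)\,{\rm d}x = 1.
```

Such a $J$ satisfies {\bf(J)} for every $\gamma>1$; it satisfies {\bf(J1)} precisely when $\gamma>2$, since $\int_{0}^{\yy}xJ(x){\rm d}x<\yy$ requires $\gamma>2$; and it never satisfies {\bf(J2)}, because $e^{\lambda x}$ outgrows any polynomial decay. In particular, $\gamma\in(1,2]$ is exactly the accelerated regime singled out by condition ${\bf(J^\gamma)}$ below.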
\begin{thm}{\rm \cite[Theorem 1.1 and Theorem 1.2]{DN21} }\label{B} The followings hold:
{\rm(i)} There exists a $C_*\in(0,\yy]$ such that problem \eqref{1.3} has a unique monotone solution if and only if $c\in(0,C_*)$, and \eqref{1.5} has a monotone solution if and only if $c\ge C_*$.
{\rm(ii)} $C_*\neq\yy$ if and only if {\bf(J2)} holds.
{\rm(iii)} The semi-wave problem \eqref{1.3}-\eqref{1.4} has a unique solution pair $(c_0,\Phi_0)$ with $c_0>0$ and $\Phi_0$ nonincreasing in $(0,\yy]$ if and only if {\bf(J1)} holds.
\end{thm}
Some more accurate estimates on free boundaries were also derived if $J_i$ additionally satisfy
\begin{enumerate}[leftmargin=4em]
\item[${\bf(J^\gamma)}$] there exist $C_1,C_2>0$ such that $C_1|x|^{-\gamma}\le J_i(x)\le C_2|x|^{-\gamma}$ for $|x|\gg 1$ and $i\in\{1,\cdots, m_0\}$.
\end{enumerate}
\begin{thm}{\rm \cite[Theorem 1.5]{DN21}} Suppose that ${\bf(J^\gamma)}$ holds with $\gamma\in(1,2]$, then
\bess\left\{\begin{array}{lll}
-g(t), ~ h(t)\approx t\ln t ~ ~ &&{\rm if} ~ \gamma=2,\\
-g(t), ~ h(t)\approx t^{1/(\gamma-1)} ~ ~ &&{\rm if} ~ \gamma\in(1,2).
\end{array}\right.
\eess
\end{thm}
Inspired by the above interesting results, we here focus on the following four aspects:
(i) When spreading happens for \eqref{1.1}, we give more accurate longtime behaviors of the solution component $u$ than those stated in the spreading alternative above. Particularly, if ${\bf(J^\gamma)}$ holds with $\gamma\in(1,2]$ and $m_0=m$, then we obtain some sharp estimates on the solution component $u$ which are closely related to the behaviors of the free boundaries near infinity.
(ii) Assume that {\bf(J1)} holds. Choose $\mu_i>0$ and fix the other $\mu_j$ with $j\neq i$. Letting $\mu_i\to\yy$, we obtain the limiting profile of the solution pair $(c_0,\Phi_0)$ of \eqref{1.3}-\eqref{1.4}.
(iii) We obtain the dynamics of \eqref{1.2} with $U(0,x)=(u_{i0}(x))$, namely, if {\bf(J2)} holds, then $C_*$ is asymptotic spreading speed of \eqref{1.2}; if {\bf(J2)} does not hold, then accelerated spreading happens for \eqref{1.2}. Moreover, if ${\bf(J^\gamma)}$ holds with $\gamma\in(1,2]$ and $m_0=m$, which implies that the accelerated spreading occurs, then more accurate longtime behaviors are obtained.
(iv) Choose any $\mu_i>0$ and fix the other $\mu_j$. We prove that the limiting problem of \eqref{1.1} is problem \eqref{1.2} as $\mu_i\to\yy$.
Now we introduce our first main result.
\begin{theorem}\label{t1.1}Let $(u,g,h)$ be a solution of \eqref{1.1} and spreading happen. Then
\bess\left\{\begin{aligned}
&\lim_{t\to\yy}\max_{|x|\le ct}\sum_{i=1}^{m}|u_i(t,x)-u^*_i|=0 ~ {\rm for ~ any ~ } c\in(0,c_0) ~ ~ {\rm if ~ {\bf(J1)} ~ holds},\\
&\lim_{t\to\yy}\max_{|x|\le ct}\sum_{i=1}^{m}|u_i(t,x)-u^*_i|=0 ~ {\rm for ~ any ~ } c>0 ~ ~ {\rm if ~ {\bf(J1)} ~ does ~ not ~ hold},
\end{aligned}\right.\eess
where $c_0$ is uniquely determined by \eqref{1.3}-\eqref{1.4}.
\end{theorem}
\begin{remark}\label{r1.1}By Theorem \ref{A} and Theorem \ref{t1.1}, we know that if one of the kernel functions $J_i$ with $\mu_i>0$ violates
\[\dd\int_{0}^{\yy}xJ_i(x)\dx<\yy,\]
then accelerated spreading will happen, which means that the species $u_i$ will accelerate the propagation of the other species. This phenomenon is also captured by Xu et al \cite{XLL} for the Cauchy problem, and is called the transferability of accelerated propagation.
\end{remark}
Before giving our next main result, we further introduce an assumption on $F$, i.e.,\\
{\bf(f6)}\, For $i\in\{1,\cdots,m\}$, $\sum_{j=1}^{m}\partial_jf_i({\bf0})u^*_j>0$, $\sum_{j=1}^{m}\partial_jf_i({\bf u^*})u^*_j<0$ and $f_i(\eta {\bf u^*})>0$ with $\eta\in(0,1)$.
\begin{theorem}\label{t1.2}Assume that {\bf(f6)} holds, $m_0=m$ and ${\bf(J^\gamma)}$ holds with $\gamma\in(1,2]$. Let $(u,g,h)$ be a solution of \eqref{1.1} and spreading happen. Then
\bess\left\{\begin{aligned}
&\lim_{t\to\yy}\max_{|x|\le s(t)}\sum_{i=1}^{m}|u_i(t,x)-u^*_i|=0 ~ {\rm for ~ any } ~ 0\le s(t)=t^{\frac{1}{\gamma-1}}o(1) ~ ~ {\rm ~ if ~}\gamma\in(1,2),\\
&\lim_{t\to\yy}\max_{|x|\le s(t)}\sum_{i=1}^{m}|u_i(t,x)-u^*_i|=0 ~ {\rm for ~ any } ~ 0\le s(t)=(t\ln t) o(1) ~ ~ {\rm ~ if ~ }\gamma=2.
\end{aligned}\right.\eess
\end{theorem}
\begin{remark}\label{r1.2}It can be seen from Theorem \ref{A}, Theorem \ref{t1.1}, Theorem \ref{t1.2}, \cite[Theorem 3.15]{WHD} and \cite[Theorem 1.2]{ZLN} that the free boundary problem with nonlocal diffusion has richer dynamics than its counterpart with random diffusion. That is because the kernel function plays an important role in the dynamics of nonlocal diffusion problems, and accelerated spreading may happen if the kernel function violates the so-called ``thin-tailed'' condition; see \cite{XLL} and the references therein.
\end{remark}
Now we assume that {\bf (J1)} holds, choose any $\mu_i>0$ and fix the other $\mu_j$. Denote the unique solution pair of \eqref{1.3}-\eqref{1.4} by $(c_{\mu_i},\Phi^{c_{\mu_i}})$. By the monotonicity of $\Phi^{c_{\mu_i}}$, there is a unique $l_{\mu_i}>0$ such that $\phi^{c_{\mu_i}}_i(-l_{\mu_i})=u^*_i/2$. Define $\hat{\Phi}^{c_{\mu_i}}(x)=\Phi^{c_{\mu_i}}(x-l_{\mu_i})$. Our next result concerns the limit of $(c_{\mu_i},l_{\mu_i},\Phi^{c_{\mu_i}},\hat{\Phi}^{c_{\mu_i}})$ as $\mu_i\to\yy$.
\begin{theorem}\label{t1.3}If {\bf(J2)} holds, then $c_{\mu_i}\to C_*$, $l_{\mu_i}\to\yy$, $\Phi^{c_{\mu_i}}(x)\to{\bf0}$ and $\hat{\Phi}^{c_{\mu_i}}(x)\to\Psi(x)$ as $\mu_i\to\yy$,
where $(C_*,\Psi(x))$ is the minimal-speed solution pair of the traveling wave problem \eqref{1.5} with $\psi_i(0)=u^*_i/2$. If {\bf(J2)} does not hold, then $c_{\mu_i}\to \yy$ as $\mu_i\to\yy$.
\end{theorem}
We next study the dynamics for \eqref{1.2}. For every $1\le i\le m$, we denote the level set of solution component $U_i$ of \eqref{1.2} by $E^{i}_{\lambda}=\{x\in\mathbb{R}: U_i(t,x)=\lambda, ~ \lambda\in(0,u^*_i)\}$,
$x^{+}_{i,\lambda}=\sup E^i_{\lambda}$ and $x^{-}_{i,\lambda}=\inf E^i_{\lambda}$.
\begin{theorem}\label{t1.4} Let $U$ be a solution of \eqref{1.2} with $U(0,x)=(u_{i0}(x))$ and ${\bf \hat{u}}=\yy$. Then the following results hold:
{\rm(i)} If {\bf (J2)} holds, then
\bes\label{1.6}
\lim_{t\to\yy}\frac{|x^{\pm}_{i,\lambda}|}{t}=C_*, ~ ~ \lim_{|x|\to\yy}U(t,x)={\bf0} ~ ~ {\rm for ~ any ~ } t\ge0,
\ees
and
\bes\label{1.7}\left\{\begin{array}{lll}
\lim_{t\to\yy}\max_{|x|\le ct}\dd\sum_{i=1}^{m}|U_i(t,x)-u^*_i|=0~ ~ {\rm for ~ any ~ }c\in(0,C_*),\\
\lim_{t\to\yy}\max_{|x|\ge ct}\dd\sum_{i=1}^{m}|U_i(t,x)|=0~ ~ {\rm for ~ any ~ }c>C_*.
\end{array}\right.
\ees
{\rm(ii)} If {\bf (J2)} does not hold, then
\bes\label{1.8}\lim_{t\to\yy}\frac{|x^{\pm}_{i,\lambda}|}{t}=\yy.
\ees
Moreover,
\bes\label{1.9}
\lim_{t\to\yy}\max_{|x|\le ct}\sum_{i=1}^{m}|U_i(t,x)-u^*_i|=0~ ~ {\rm for ~ any ~ }c>0.
\ees
{\rm(iii)} Assume that {\bf(f6)} holds, $m_0=m$, and ${\bf(J^\gamma)}$ holds with $\gamma\in(1,2]$. Then
\bess\left\{\begin{aligned}
&\lim_{t\to\yy}\max_{|x|\le s(t)}\sum_{i=1}^{m}|U_i(t,x)-u^*_i|=0 ~ {\rm for ~ any } ~ 0\le s(t)=t^{\frac{1}{\gamma-1}}o(1) ~ ~ {\rm ~ if ~}\gamma\in(1,2),\\
&\lim_{t\to\yy}\max_{|x|\le s(t)}\sum_{i=1}^{m}|U_i(t,x)-u^*_i|=0 ~ {\rm for ~ any } ~ 0\le s(t)=(t\ln t) o(1) ~ ~ {\rm ~ if ~ }\gamma=2.
\end{aligned}\right.\eess
\end{theorem}
As before, we choose any $\mu_i>0$ and fix the other $\mu_j$. Our last main result concerns the limiting problem of \eqref{1.1} as $\mu_i\to\yy$.
\begin{theorem}\label{t1.5}
Problem \eqref{1.2} with $U(0,x)=(u_{i0}(x))$ for $|x|\le h_0$ and $U(0,x)={\bf 0}$ for $x\in\mathbb{R}\setminus [-h_0,h_0]$ is the limiting problem of \eqref{1.1} as $\mu_i\to\yy$. More precisely, denoting the unique solution of \eqref{1.1} by $(u_{\mu_i},g_{\mu_i},h_{\mu_i})$ and letting $\mu_i\to\yy$, we have $u_{\mu_i}(t,x)\to U(t,x)$ locally uniformly in $[0,\yy)\times\mathbb{R}$ and $-g_{\mu_i}(t), ~ h_{\mu_i}(t)\to\yy$ locally uniformly in $(0,\yy)$.
\end{theorem}
This paper is arranged as follows. In Section 2, we prove Theorem \ref{t1.1} and Theorem \ref{t1.2}. Section 3 is devoted to the proofs of Theorem \ref{t1.3}, Theorem \ref{t1.4} and Theorem \ref{t1.5}. In Section 4, we give two epidemic models as examples to illustrate our results.
\section{Proofs of Theorem \ref{t1.1} and Theorem \ref{t1.2}}
\setcounter{equation}{0} {\setlength\arraycolsep{2pt}
In this section, we prove Theorem \ref{t1.1} and Theorem \ref{t1.2} by constructing suitable upper and lower solutions.
\begin{proof}[{\bf Proof of Theorem \ref{t1.1}:}]
Assume that {\bf(J1)} holds. For small $\ep>0$ and $K>0$, we define $\underline{h}(t)=c_0(1-2\ep)t+K$
and $\underline{U}(t,x)=(1-\ep)\left[\Phi(x-\underline{h}(t))+\Phi(-x-\underline{h}(t))-{\bf u^*}\right]$ for
$t\ge0$ and $x\in[-\underline{h}(t),\underline{h}(t)]$. It then follows from \cite[Lemma 3.4]{DN21} that for any small $\ep>0$,
there exist suitable $T>0$ and $K>0$ such that
\[
g(t+T)\le-\underline{h}(t), ~ ~ h(t+T)\ge\underline{h}(t), ~ ~ u(t+T,x)\succeq\underline{U}(t,x), ~ ~ ~ ~ t\ge0, ~ x\in[-\underline{h}(t),\underline{h}(t)].
\]
Direct calculations show
\bess
&&\max_{|x|\le c_0(1-3\ep)t}|\underline{U}(t,x)-(1-\ep){\bf u^*}|\\
&&\qquad=(1-\ep)\max_{|x|\le c_0(1-3\ep)t}\left\{\sum_{i=1}^{m}|\phi_i(x-\underline{h}(t))+\phi_i(-x-\underline{h}(t))-2u_i^*|^2\right\}^{\frac{1}{2}}\\
&&\qquad\le(1-\ep)\max_{|x|\le c_0(1-3\ep)t}\left\{\sum_{i=1}^{m}\bigg(|\phi_i(x-\underline{h}(t))-u_i^*|+|\phi_i(-x-\underline{h}(t))-u_i^*|\bigg)^2\right\}^{\frac{1}{2}}\\
&&\qquad=2(1-\ep)\left\{\sum_{i=1}^{m}|\phi_i(-c_0\ep t-K)-u_i^*|^2\right\}^{\frac{1}{2}}\to0 ~ ~ {\rm as} ~ t\to\yy.
\eess
Therefore, $\liminf_{t\to\yy}u(t,x)\succeq (1-\ep){\bf u^*}$ uniformly in $|x|\le c_0(1-3\ep)t$.
Then for any $c\in(0,c_0)$, by choosing $\ep>0$ sufficiently small so that $c<c_0(1-3\ep)$,
we have $\liminf_{t\to\yy}u(t,x)\succeq (1-\ep){\bf u^*}$ uniformly in $|x|\le ct$.
In view of the arbitrariness of $\ep>0$, we obtain $\liminf_{t\to\yy}u(t,x)\succeq {\bf u^*}$ uniformly in $|x|\le ct$.
On the other hand, consider the following ODE system
\[\bar{u}_t=F(\bar{u}), ~ ~ ~ ~ \bar{u}(0)=(\|u_{i0}(x)\|_{C([-h_0,h_0])}).\]
By condition {\bf(f4)} and a comparison argument, we have $\limsup_{t\to\yy}u(t,x)\preceq{\bf u^*}$ uniformly in $\mathbb{R}$.
Therefore, the proof in the case where {\bf(J1)} holds is complete.
Next we consider the case where {\bf(J1)} is violated. As in the proof of \cite[Theorem 1.3]{DN21}, for any integer $n\ge1$ and $i\in\{1,2,\cdots,m_0\}$, define
\begin{align*}
J^n_i(x)=\left\{\begin{aligned}
&J_i(x)& &{\rm if} ~ |x|\le n,\\
&\frac{n}{|x|}J_i(x)& &{\rm if} ~ |x|\ge n,
\end{aligned}\right. ~ ~ \tilde{J}^n_i=\frac{J^n_i(x)}{\|J^n_i\|_{L^1(\mathbb{R})}}, ~ ~ {\bf J}^n=(J^n_i), ~ ~ {\rm and} ~ {\bf \tilde{J}}^n=(\tilde{J}^n_i)
\end{align*}
with $J^n_i(x)\equiv\tilde{J}^n_i\equiv0$ for $i\in\{m_0+1,\cdots,m\}$.
Clearly, the following results about $J^n_i$ and $\tilde{J}^n_i$ hold:
(1) $J^n_i(x)\le J_i(x)$, $|x|J^n_i(x)\le nJ_i(x)$, and for any $\alpha>0$,
there is $c>0$ depending only on $n,\alpha,J_i$ such that $e^{\alpha|x|}J^n_i(x)\ge c e^{\frac{\alpha}{2}|x|}J_i(x)$ for $x\in\mathbb{R}$,
which directly implies that
${\bf \tilde{J}}^n$ satisfies {\bf (J)} and {\bf(J1)}, but not {\bf(J2)}.
(2) ${\bf J}^n$ is nondecreasing in $n$, and $\lim_{n\to\yy}{\bf J}^n(x)=\lim_{n\to\yy}{\bf \tilde J}^n(x)={\bf J}(x)$ in $[L^1(\mathbb{R})]^m$
and locally uniformly in $\mathbb{R}$.\\
Let $(u^n,g^n,h^n)$ be the solution of the following problem
\bess
\left\{\begin{aligned}
&u^n_{t}=D\circ\dd\int_{g^n(t)}^{h^{n}(t)}{\bf J}^n(x-y)\circ u^n(t,y)\dy-D\circ u^n+F(u^n), && t>0,~x\in(g^n(t),h^n(t)),\\
&u^n(t,x)=0, && t>0, x\notin(g^n(t),h^n(t)),\\
&{g^n}'(t)=-\sum_{i=1}^{m_0}\mu_i\dd\int_{g^n(t)}^{h^n(t)}\!\!\int_{h^n(t)}^{\infty}
J^n_i(x-y)u^n_i(t,x)\dy\dx, && t>0,\\
&{h^n}'(t)=\sum_{i=1}^{m_0}\mu_i\dd\int_{g^n(t)}^{h^n(t)}\!\!\int_{-\yy}^{g^n(t)}
J^n_i(x-y)u^n_i(t,x)\dy\dx, && t>0,\\
&u^n(0,x)=u(T,x),~g^n(0)=g(T), ~ h^n(0)=h(T), \ \ x\in[g(T),h(T)],
\end{aligned}\right.
\eess
where $T>0$, $u^n=(u^n_i)$, and $(u,g,h)$ is the solution of \eqref{1.1}. For any integer $n\ge1$, it follows from \cite[Lemma 3.5]{DN21} that
\[g^n(t)\ge g(t+T), ~ h^n(t)\le h(t+T) ~ {\rm and } ~ u^n(t,x)\preceq u(t+T,x) ~ ~ {\rm for } ~ t\ge0,~ g^n(t)\le x\le h^n(t).\]
Since $F$ satisfies {\bf (f1)}-{\bf (f3)}, $F(w)+(D^n-D)w$ still satisfies {\bf (f1)}-{\bf (f3)}
with $D^n=(d_i\|J^n_i\|_{L^1(\mathbb{R})})$ and $n\gg1$. Denote the unique positive root of $F(w)+(D^n-D)\circ w=0$ by ${\bf u^*_n}$.
It is easy to see that $\lim_{n\to\yy}{\bf u^*_n}={\bf u^*}$. By \cite[Lemmas 3.6 and 3.8]{DN21}, the following problem
\bess\left\{\begin{array}{lll}
D\circ\dd\int_{-\yy}^{0}{\bf J}^n(x-y)\circ \Phi(y)\dy-D\circ \Phi+c\Phi'(x)+F(\Phi)=0, ~ ~ -\yy<x<0,\\
\Phi(-\yy)={\bf u^*_n}, ~ ~ \Phi(0)={\bf0}, ~ ~ c=\sum_{i=1}^{m_0}\mu_i\dd\int_{-\yy}^{0}\int_{0}^{\yy}J^n_i(x-y)\phi_i(x)\dy\dx
\end{array}\right.
\eess
has a solution pair $(c^n,\Phi^n)$ with $\Phi^n(x)=(\phi^n_i(x))$ and $\lim_{n\to\yy}c^n=\yy$.
As before, for small $\ep>0$ and $K>0$ we define
\[\underline{h}^n(t)=c^n(1-2\ep)t+K,~ ~ \underline{u}^n(t,x)=(1-\ep)\left[\Phi^n(x-\underline{h}^n(t))+\Phi^n(-x-\underline{h}^n(t))-{\bf u^*_n}\right],\]
with $t\ge0$ and $x\in[-\underline{h}^n(t),\underline{h}^n(t)]$. By \cite[Lemma 3.7]{DN21}, for small $\ep>0$ and all large $n\gg1$ there exist $K>0$ and $T>0$
such that
\[
g(t+T)\le-\underline{h}^n(t), ~ ~ h(t+T)\ge\underline{h}^n(t), ~ ~ u(t+T,x)\succeq\underline{u}^n(t,x), ~ ~ ~ ~ t\ge0, ~ x\in[-\underline{h}^n(t),\underline{h}^n(t)].
\]
Similarly, we have that $\liminf_{t\to\yy}u(t,x)\succeq\liminf_{t\to\yy}\underline u^n(t,x)\succeq (1-\ep){\bf u^*_n}$ uniformly in $|x|\le c^n(1-3\ep)t$.
Since $\lim_{n\to\yy}c^n=\yy$, for any $c>0$ there are large $N\gg1$ and small $\ep_0>0$ such that $c<c^n(1-3\ep)$ for $n\ge N$ and $\ep\in(0,\ep_0)$.
Thus $\liminf_{t\to\yy}u(t,x)\succeq (1-\ep){\bf u^*_n}$ uniformly in $|x|\le ct$.
Letting $n\to\yy$ and $\ep\to0$, we derive $\liminf_{t\to\yy}u(t,x)\succeq {\bf u^*}$ uniformly in $|x|\le ct$.
Together with our earlier conclusion, the desired result is obtained. The proof is complete.
\end{proof}
To prove Theorem \ref{t1.2}, the following two technical lemmas are crucial; their proofs can be found in \cite{DN21} and \cite{DNwn}, respectively.
\begin{lemma}\label{l2.1} Let $P(x)$ satisfy {\bf (J)} and $\varphi_l(x)=l-|x|$ with $l>0$. Then for any $\ep>0$, there exists $L_{\ep}>0$ such that for any $l>L_{\ep}$,
\[\dd\int_{-l}^{l}P(x-y)\varphi_l(y)\dy\ge(1-\ep)\varphi_l(x) ~ ~ {\rm in ~ }[-l,l].\]
\end{lemma}
\begin{lemma}\label{l2.2} Let $P(x)$ satisfy {\bf (J)} and $l_2>l_1>0$. Define
\[\varphi(x)=\min\{1,\frac{l_2-|x|}{l_1}\}.\]
Then for any $\ep>0$, there is $L_{\ep}>0$ such that for all $l_1>L_{\ep}$ and $l_2-l_1>L_{\ep}$,
\[\dd\int_{-l_2}^{l_2}P(x-y)\varphi(y)\dy\ge(1-\ep)\varphi(x) ~ ~{\rm in ~ }[-l_2,l_2].\]
\end{lemma}
\begin{proof}[{\bf Proof of Theorem \ref{t1.2}:}]
Firstly, a simple comparison argument yields that $\limsup_{t\to\yy}u(t,x)\preceq {\bf u^*}$ uniformly in $\mathbb{R}$. Thus it remains to establish the lower bounds for $u$. The discussion is divided into two steps.
{\bf Step 1:} In this step, we prove $\liminf_{t\to\yy}u(t,x)\succeq {\bf u^*}$ uniformly in $[-s(t),s(t)]$ for any $0\le s(t)=t^{\frac{1}{\gamma-1}}o(1)$.
For small $\ep>0$, we define
\[\underline{h}(t)=(Kt+\theta)^{\frac{1}{\gamma-1}} ~ ~ {\rm and } ~ ~ \underline{u}(t,x)={\bf K_{\ep}}(1-\frac{|x|}{\underline{h}(t)}) ~ ~ ~ ~ {\rm for} ~ t\ge0, ~ x\in[-\underline{h}(t),\underline{h}(t)],\]
where ${\bf K_{\ep}}={\bf u^*}(1-\ep)$, and $K,\theta>0$ are to be determined later.
We now verify that for any small $\ep>0$, there exist suitable $K$, $\theta$ and $T$ such that
\bes\label{2.1}
\left\{\begin{aligned}
&\underline u_t\preceq D\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy-D\circ\underline u+ F(\underline u), && t>0,~x\in(-\underline h(t),\underline h(t)),\\
&\underline u(t,\pm\underline h(t))\preceq {\bf0},&& t>0,\\
&\underline h'(t)\le\sum_{i=1}^{m_0}\mu_i\dd\int_{-\underline h(t)}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)\underline u_i(t,x)\dy\dx,&& t>0,\\
&-\underline h'(t)\ge-\sum_{i=1}^{m_0}\mu_i\dd\int_{-\underline h(t)}^{\underline h(t)}\int_{-\yy}^{-\underline h(t)}
J_i(x-y)\underline u_i(t,x)\dy\dx,&& t>0,\\
&\underline h(0)\le h(T),\;\;\underline u(0,x)\preceq u(T,x),&& x\in[-\underline h(0),\underline h(0)].
\end{aligned}\right.
\ees
Once this is done, the comparison principle gives
\[g(t+T)\le -\underline{h}(t), ~ ~ h(t+T)\ge\underline{h}(t) ~ ~ {\rm and } ~ ~ u(t+T,x)\succeq \underline{u}(t,x) ~ ~ {\rm for} ~ t\ge0, ~ x\in[-\underline{h}(t),\underline{h}(t)].\]
Moreover, for any $s(t)=t^{\frac{1}{\gamma-1}}o(1)$, direct computations show
\bess
\lim_{t\to\yy}\max_{|x|\le s(t)}\sum_{i=1}^{m}|\underline{u}_i(t,x)-(1-\ep)u^*_i|
=\lim_{t\to\yy}(1-\ep)\sum_{i=1}^{m}u^*_i\frac{s(t)}{\underline{h}(t)}=0,
\eess
which, together with our earlier conclusion and the arbitrariness of $\ep$, yields
\[\liminf_{t\to\yy}u(t,x)\succeq {\bf u^*} ~ ~ {\rm uniformly ~ in ~ }[-s(t),s(t)].\]
Hence the case $\gamma\in(1,2)$ is proved. The second inequality of \eqref{2.1} is obvious. We next show the third one. Simple calculations yield
\bess
&&\sum_{i=1}^{m_0}\mu_i\dd\int_{-\underline h(t)}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)\underline u_i(t,x)\dy\dx\\
&&=\sum_{i=1}^{m_0}(1-\ep)\mu_iu^*_i\dd\int_{-\underline h(t)}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)(1-\frac{|x|}{\underline{h}(t)})\dy\dx\\
&&\ge\sum_{i=1}^{m_0}(1-\ep)\mu_iu^*_i\dd\int_{0}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)(1-\frac{x}{\underline{h}(t)})\dy\dx\\
&&=\sum_{i=1}^{m_0}\frac{(1-\ep)\mu_iu^*_i}{\underline{h}(t)}\dd\int_{-\underline h(t)}^{0}\int_{0}^{\infty}
J_i(x-y)(-x)\dy\dx\\
&&=\sum_{i=1}^{m_0}\frac{(1-\ep)\mu_iu^*_i}{\underline{h}(t)}\dd\int_{0}^{\underline h(t)}\int_{x}^{\infty}
J_i(y)x\dy\dx\\
&&=\sum_{i=1}^{m_0}\frac{(1-\ep)\mu_iu^*_i}{\underline{h}(t)}\bigg(\dd\int_{0}^{\underline h(t)}\int_{0}^{y}+\int_{\underline{h}(t)}^{\yy}\int_{0}^{\underline{h}(t)}\bigg)J_i(y)x\dx\dy\\
&&\ge\sum_{i=1}^{m_0}\frac{(1-\ep)\mu_iu^*_i}{2\underline{h}(t)}\int_{0}^{\underline h(t)}J_i(y)y^2\dy\ge\sum_{i=1}^{m_0}\frac{C_1(1-\ep)\mu_iu^*_i}{2\underline{h}(t)}\int_{\underline h(t)/2}^{\underline h(t)}y^{2-\gamma}\dy\\
&&\ge\tilde{C}_1(Kt+\theta)^{(2-\gamma)/(\gamma-1)}\ge\frac{K(Kt+\theta)^{(2-\gamma)/(\gamma-1)}}{\gamma-1}=\underline{h}'(t)
~ ~ {\rm if }~ K\le \tilde{C}_1(\gamma-1).
\eess
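Indeed, the last two inequalities above follow from the elementary computations
\bess
\int_{\underline h(t)/2}^{\underline h(t)}y^{2-\gamma}\dy=\frac{1-2^{\gamma-3}}{3-\gamma}\,\underline h^{3-\gamma}(t), ~ ~ ~ ~
\underline{h}'(t)=\frac{K}{\gamma-1}(Kt+\theta)^{\frac{2-\gamma}{\gamma-1}}=\frac{K}{\gamma-1}\,\underline h^{2-\gamma}(t),
\eess
where $3-\gamma\in[1,2)$ since $\gamma\in(1,2]$; thus one may take, for instance, $\tilde{C}_1=\sum_{i=1}^{m_0}\frac{(1-2^{\gamma-3})C_1(1-\ep)\mu_iu^*_i}{2(3-\gamma)}$.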
Since $J_i$ and $\underline{u}$ are both symmetric in $x$, the fourth inequality of \eqref{2.1} also holds. Next we focus on the first one in \eqref{2.1}. We first claim that there is a constant $\hat{C}_1>0$ such that
\[\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy\succeq\hat{C}_1{\bf K_{\ep}}\underline h^{1-\gamma}(t) ~ ~ {\rm for } ~ t>0, ~ x\in[-\underline h(t),\underline h(t)].\]
Firstly, for $x\in[\underline{h}(t)/4,\underline{h}(t)]$ we have
\bess
&&\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy=\int_{-\underline h(t)-x}^{\underline h(t)-x}{\bf J}(y)\circ\underline u(t,x+y)\dy\\
&&\succeq {\bf K_{\ep}}\circ\int_{-\underline h(t)/4}^{-\underline h(t)/8}{\bf J}(y)(1-\frac{x+y}{\underline{h}(t)})\dy\succeq{\bf K_{\ep}}\circ\int_{-\underline h(t)/4}^{-\underline h(t)/8}{\bf J}(y)\frac{-y}{\underline{h}(t)}\dy\\
&&\succeq\frac{{\bf K_{\ep}}C_1}{\underline{h}(t)}\int_{\underline h(t)/8}^{\underline h(t)/4}y^{1-\gamma}\dy={\bf K_{\ep}}\hat C_1\underline{h}^{1-\gamma}(t).
\eess
For $x\in[0,\underline{h}(t)/4]$, we have
\bess
&&\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy=\int_{-\underline h(t)-x}^{\underline h(t)-x}{\bf J}(y)\circ\underline u(t,x+y)\dy\\
&&\succeq {\bf K_{\ep}}\circ\int_{\underline h(t)/8}^{\underline h(t)/4}{\bf J}(y)(1-\frac{x+y}{\underline{h}(t)})\dy\succeq \frac{{\bf K_{\ep}}}{\underline{h}(t)}\circ\int_{\underline h(t)/8}^{\underline h(t)/4}{\bf J}(y)y\dy\\
&&\succeq\frac{{\bf K_{\ep}}C_1}{\underline{h}(t)}\int_{\underline h(t)/8}^{\underline h(t)/4}y^{1-\gamma}\dy={\bf K_{\ep}}\hat C_1\underline{h}^{1-\gamma}(t).
\eess
Then our claim is verified since $J_i$ and $\underline{u}$ are both even in $x$.
Define $\tilde{F}(\eta)=F(u^*_1\eta,\cdots,u^*_m\eta)$ for $\eta\in[0,1]$. By the assumptions on $F$, we easily show that there is a ${\bf C}\succ0$ such that
\[\tilde F(\eta)\succeq {\bf C}\min\{\eta,1-\eta\} ~ ~ {\rm for ~ any ~ } \eta\in[0,1].\]
Hence there is a positive constant $\overline{C}$, depending only on $F$, such that
\bess
&&F(u^*_1(1-\ep)(1-\frac{|x|}{\underline{h}(t)}),\cdots,u^*_m(1-\ep)(1-\frac{|x|}{\underline{h}(t)}))\\
&&\succeq
(1-\frac{|x|}{\underline{h}(t)})F(u^*_1(1-\ep),\cdots,u^*_m(1-\ep))\\
&&\succeq (1-\frac{|x|}{\underline{h}(t)})\ep{\bf C}\succeq \overline{C}\ep\underline{u}.
\eess
By Lemma \ref{l2.1}, one can choose $\theta$ sufficiently large such that
\[\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy\succeq (1-\ep^2)\underline{u}(t,x) ~ ~ {\rm for }~ t>0, ~ x\in[-\underline{h}(t),\underline{h}(t)].\]
Therefore
\bess
&&D\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy-D\circ\underline u+ F(\underline u)\\
&&=\frac{\overline{C}\ep{\bf1}}{2}\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy+\bigg(D-\frac{\overline{C}\ep{\bf1}}{2}\bigg)\circ\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy-D\circ\underline u+ F(\underline u)\\
&&\succeq \frac{\overline{C}\ep{\bf1}}{2}\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy+(1-\ep^2)\bigg(D-\frac{\overline{C}\ep{\bf1}}{2}\bigg)\circ\underline{u}(t,x)-D\circ\underline u+ \overline{C}\ep\underline{u}\\
&&\succeq \frac{\overline{C}\ep{\bf1}}{2}\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy\succeq \frac{\overline{C}\ep}{2}{\bf K_{\ep}}\hat C_1\underline{h}^{1-\gamma}(t)\succeq \frac{K\underline{h}^{1-\gamma}(t)}{\gamma-1}{\bf K_{\ep}}\succeq \underline{u}_t
\eess
provided that $\ep$ and $K$ are suitably small. So the first inequality in \eqref{2.1} holds. For $K,\theta$ and $\ep$ as chosen above, noting that spreading happens, we can find some $T>0$ such that
\[-\underline h(0)\ge g(T), ~ ~ \underline{h}(0)\le h(T), ~ ~ {\rm and } ~ ~ \underline{u}(0,x)\preceq{\bf K_{\ep}}\preceq u(T,x) ~ {\rm for ~ }x\in[-\underline{h}(0),\underline{h}(0)].\]
Therefore, \eqref{2.1} holds. So we finish Step 1.
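We also remark that the final comparison with $\underline{u}_t$ in the above chain of inequalities relies on the direct bound
\bess
\underline{u}_t={\bf K_{\ep}}\frac{|x|\,\underline{h}'(t)}{\underline{h}^2(t)}\preceq{\bf K_{\ep}}\frac{\underline{h}'(t)}{\underline{h}(t)}=\frac{K\underline{h}^{1-\gamma}(t)}{\gamma-1}{\bf K_{\ep}}, ~ ~ ~ ~ t>0,~ x\in[-\underline{h}(t),\underline{h}(t)].
\eess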
{\bf Step 2:} We next show that $\liminf_{t\to\yy}u(t,x)\succeq {\bf u^*}$ uniformly in $[-s(t),s(t)]$ for any $0\le s(t)=(t\ln t)o(1)$.
For small $\ep>0$, we define
\[\underline{h}(t)=K(t+\theta)\ln(t+\theta) ~ ~ {\rm and } ~ ~ \underline{u}(t,x)={\bf K_{\ep}}\min\{1,\frac{\underline{h}(t)-|x|}{(t+\theta)^{1/2}}\} ~ ~ ~ ~ {\rm for} ~ t\ge0, ~ x\in[-\underline{h}(t),\underline{h}(t)],\]
where ${\bf K_{\ep}}={\bf u^*}(1-\ep)$, and $K,\theta>0$ are to be determined later.
Now we are ready to prove
\bes\label{2.2}
\left\{\begin{aligned}
&\underline u_t\preceq D\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy-D\circ\underline u+ F(\underline u), &&\hspace{-7mm} t>0,x\in(-\underline h(t),\underline h(t))\setminus\{\underline{h}(t)-(t+\theta)^{1/2}\},\\
&\underline u(t,\pm\underline h(t))\preceq{\bf0},&&\hspace{-7mm} t>0,\\
&\underline h'(t)\le\sum_{i=1}^{m_0}\mu_i\dd\int_{-\underline h(t)}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)\underline u_i(t,x)\dy\dx,&&\hspace{-7mm} t>0,\\
&-\underline h'(t)\ge-\sum_{i=1}^{m_0}\mu_i\dd\int_{-\underline h(t)}^{\underline h(t)}\int_{-\yy}^{-\underline h(t)}
J_i(x-y)\underline u_i(t,x)\dy\dx,&&\hspace{-4mm}~ t>0,\\
&\underline h(0)\le h(T),\;\;\underline u(0,x)\preceq u(T,x),&& \hspace{-7mm}x\in[-\underline h(0),\underline h(0)].
\end{aligned}\right.
\ees
If \eqref{2.2} holds, then a comparison argument gives
\[g(t+T)\le -\underline{h}(t), ~ ~ h(t+T)\ge\underline{h}(t) ~ ~ {\rm and } ~ ~ u(t+T,x)\succeq \underline{u}(t,x) ~ ~ {\rm for} ~ t\ge0, ~ x\in[-\underline{h}(t),\underline{h}(t)].\]
We first deal with the third inequality in \eqref{2.2}. Careful computations show
\bess
&&\sum_{i=1}^{m_0}\mu_i\dd\int_{-\underline h(t)}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)\underline u_i(t,x)\dy\dx\\
&&\ge\sum_{i=1}^{m_0}\mu_iu^*_i(1-\ep)\dd\int_{0}^{\underline h(t)-(t+\theta)^{1/2}}\int_{\underline h(t)}^{\infty}
J_i(x-y)\dy\dx\\
&&=\sum_{i=1}^{m_0}\mu_iu^*_i(1-\ep)\dd\int_{-\underline h(t)}^{-(t+\theta)^{1/2}}\int_{0}^{\infty}
J_i(x-y)\dy\dx\\
&&=\sum_{i=1}^{m_0}\mu_iu^*_i(1-\ep)\dd\int_{(t+\theta)^{1/2}}^{\underline h(t)}\int_{x}^{\infty}
J_i(y)\dy\dx\\
&&=\sum_{i=1}^{m_0}\mu_iu^*_i(1-\ep)\dd\bigg(\int_{(t+\theta)^{1/2}}^{\underline h(t)}\int_{(t+\theta)^{1/2}}^{y}+\int_{\underline{h}(t)}^{\yy}\int_{(t+\theta)^{1/2}}^{\underline{h}(t)}\bigg)
J_i(y)\dx\dy\\
&&\ge\sum_{i=1}^{m_0}\mu_iu^*_i(1-\ep)\int_{(t+\theta)^{1/2}}^{\underline h(t)}\int_{(t+\theta)^{1/2}}^{y}J_i(y)\dx\dy\\
&&\ge\sum_{i=1}^{m_0}\mu_iu^*_i(1-\ep)C_1\int_{(t+\theta)^{1/2}}^{\underline h(t)}\frac{y-(t+\theta)^{1/2}}{y^2}\dy\\
&&\ge \sum_{i=1}^{m_0}C_1\mu_iu^*_i(1-\ep)\bigg(\ln \underline{h}(t)-\frac{\ln(t+\theta)}{2}+\frac{(t+\theta)^{1/2}}{\underline{h}(t)}-1\bigg)\\
&&\ge \sum_{i=1}^{m_0}C_1\mu_iu^*_i(1-\ep)\bigg(\frac{\ln(t+\theta)}{2}+\frac{1}{2}+\ln K+\ln(\ln (t+\theta))-\frac{3}{2}\bigg)\\
&&\ge \sum_{i=1}^{m_0}\frac{C_1\mu_iu^*_i(1-\ep)}{2}\big(\ln(t+\theta)+1\big)\ge K\big(\ln(t+\theta)+1\big)=\underline{h}'(t)
\eess
provided that $\theta$ is large enough and $K$ is suitably small. By the symmetry of $J_i$ and $\underline{u}$ in $x$, it is easy to show that the fourth inequality in \eqref{2.2} also holds.
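For the reader's convenience, the computation above uses the elementary identities
\bess
\underline{h}'(t)=K\big(\ln(t+\theta)+1\big), ~ ~ ~ ~ \ln\underline{h}(t)=\ln K+\ln(t+\theta)+\ln\big(\ln(t+\theta)\big),
\eess
together with
\bess
\int_{a}^{b}\frac{y-a}{y^2}\dy=\ln\frac{b}{a}+\frac{a}{b}-1, ~ ~ ~ ~ a=(t+\theta)^{1/2}, ~ ~ b=\underline{h}(t).
\eess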
Now we verify the first inequality of \eqref{2.2}, and first claim that for $x\in[-\underline{h}(t),-\underline{h}(t)+(t+\theta)^{1/2}]\cup[\underline{h}(t)-(t+\theta)^{1/2},\underline{h}(t)]$,
there is a positive constant $\tilde{C}_1$ such that
\[\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy\succeq\frac{\tilde{C}_1\ln(t+\theta)}{4(t+\theta)^{1/2}}{\bf K_{\ep}}.\]
Direct calculations show
\bess
&&\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy\succeq{\bf K_{\ep}}\circ\int_{\underline h(t)-(t+\theta)^{1/2}}^{\underline h(t)}{\bf J}(x-y)\frac{\underline{h}(t)-y}{(t+\theta)^{1/2}}\dy\\
&&=\frac{{\bf K_{\ep}}}{{(t+\theta)^{1/2}}}\circ\int_{\underline h(t)-(t+\theta)^{1/2}-x}^{\underline h(t)-x}{\bf J}(y)[\underline{h}(t)-x-y]\dy.
\eess
Then for $x\in[\underline{h}(t)-\frac{3}{4}(t+\theta)^{1/2},\underline{h}(t)]$, we have
\bess
&&\frac{{\bf K_{\ep}}}{{(t+\theta)^{1/2}}}\circ\int_{\underline h(t)-(t+\theta)^{1/2}-x}^{\underline h(t)-x}{\bf J}(y)[\underline{h}(t)-x-y]\dy\succeq \frac{{\bf K_{\ep}}}{{(t+\theta)^{1/2}}}\circ\int_{-(t+\theta)^{1/2}/4}^{-(t+\theta)^{1/4}/4}{\bf J}(y)(-y)\dy\\
&&\succeq\frac{C_1{\bf K_{\ep}}}{{(t+\theta)^{1/2}}}\int_{-(t+\theta)^{1/2}/4}^{-(t+\theta)^{1/4}/4}(-y)^{-1}\dy=
\frac{C_1{\bf K_{\ep}}}{(t+\theta)^{1/2}}\int_{(t+\theta)^{1/4}/4}^{(t+\theta)^{1/2}/4}y^{-1}\dy\\
&&=\frac{C_1{\bf K_{\ep}}\ln(t+\theta)}{4(t+\theta)^{1/2}}.
\eess
For $x\in[\underline{h}(t)-(t+\theta)^{1/2},\underline{h}(t)-\frac{3}{4}(t+\theta)^{1/2}]$, we obtain
\bess
&&\frac{{\bf K_{\ep}}}{{(t+\theta)^{1/2}}}\circ\int_{\underline h(t)-(t+\theta)^{1/2}-x}^{\underline h(t)-x}{\bf J}(y)[\underline{h}(t)-x-y]\dy\succeq \frac{{\bf K_{\ep}}}{(t+\theta)^{1/2}}\circ\int_{(t+\theta)^{1/4}/4}^{(t+\theta)^{1/2}/4}{\bf J}(y)[\underline{h}(t)-x-y]\dy\\
&&\succeq\frac{{\bf K_{\ep}}C_1}{(t+\theta)^{1/2}}\int_{(t+\theta)^{1/4}/4}^{(t+\theta)^{1/2}/4}y^{-1}\dy=\frac{C_1{\bf K_{\ep}}\ln(t+\theta)}{4(t+\theta)^{1/2}}.
\eess
By the symmetry of $J_i$ and $\underline{u}$ again, we immediately complete the proof of our claim.
By Lemma \ref{l2.2} with $l_2=\underline{h}(t)$ and $l_1=(t+\theta)^{1/2}$, we have
\[\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy\succeq(1-\ep^2)\underline{u}(t,x) ~ ~ {\rm for ~ }t>0,~ x\in[-\underline{h}(t),\underline{h}(t)].\]
Similarly to Step 1, there exists a positive constant $\overline{C}$ such that
\[F(\underline{u})\succeq \overline{C}\ep \underline{u}~ ~ {\rm for ~ }t>0,~ x\in[-\underline{h}(t),\underline{h}(t)].\]
Therefore, for $x\in[-\underline{h}(t),-\underline{h}(t)+(t+\theta)^{1/2}]\cup[\underline{h}(t)-(t+\theta)^{1/2},\underline{h}(t)]$, we similarly derive
\bess
&&D\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy-D\circ\underline u+ F(\underline u)\\
&&\succeq \frac{\overline{C}\ep{\bf1}}{2}\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy\succeq\frac{\tilde{C}_1\overline{C}\ep\ln(t+\theta)}{8(t+\theta)^{1/2}}{\bf K_{\ep}}\\
&&\succeq\frac{2K\ln(t+\theta)}{(t+\theta)^{1/2}}{\bf K_{\ep}}\succeq \underline{u}_t
\eess
provided that $\ep$ and $K$ are small enough and $\theta$ is suitably large.
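Here the final comparison with $\underline{u}_t$ uses that, for $\underline{h}(t)-(t+\theta)^{1/2}\le|x|\le\underline{h}(t)$,
\bess
\underline{u}_t={\bf K_{\ep}}\bigg[\frac{\underline{h}'(t)}{(t+\theta)^{1/2}}-\frac{\underline{h}(t)-|x|}{2(t+\theta)^{3/2}}\bigg]\preceq{\bf K_{\ep}}\frac{K\big(\ln(t+\theta)+1\big)}{(t+\theta)^{1/2}}\preceq\frac{2K\ln(t+\theta)}{(t+\theta)^{1/2}}{\bf K_{\ep}}
\eess
once $\theta\ge e$.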
For $|x|\le \underline{h}(t)-(t+\theta)^{1/2}$, we have
\bess
D\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy-D\circ\underline u+ F(\underline u)\succeq \frac{\overline{C}\ep{\bf1}}{2}\circ\dd\int_{-\underline h(t)}^{\underline h(t)}{\bf J}(x-y)\circ\underline u(t,y)\dy\succeq{\bf0}\succeq\underline{u}_t.
\eess
Thus the first inequality of \eqref{2.2} is obtained.
Since spreading happens for \eqref{1.1}, for $\ep$, $\theta$ and $K$ as chosen above, we can choose $T>0$ such that $-\underline{h}(0)\ge g(T)$, $\underline{h}(0)\le h(T)$ and $\underline{u}(0,x)\preceq {\bf u^*}(1-\ep)\preceq u(T,x)$ for $x\in[-\underline{h}(0),\underline{h}(0)]$. So \eqref{2.2} is proved, and Step 2 is complete. The desired result follows directly from our earlier conclusions.
\end{proof}
\section{Proofs of Theorem \ref{t1.3}, Theorem \ref{t1.4} and Theorem \ref{t1.5}}
\setcounter{equation}{0} {\setlength\arraycolsep{2pt}
We first establish the limits of the solutions of the semi-wave problem \eqref{1.3}-\eqref{1.4}, namely Theorem \ref{t1.3}.
\begin{proof}[{\bf Proof of Theorem \ref{t1.3}:}]
We first prove the result when {\bf (J2)} holds. By a comparison argument, $c_{\mu_i}$ is nondecreasing in $\mu_i>0$.
Noticing $c_{\mu_i}<C_*$, we can define $C_{\yy}=\lim_{\mu_i\to\yy}c_{\mu_i}\le C_*$. Next we show that $\lim_{\mu_i\to\yy}l_{\mu_i}=\yy$. Clearly,
\bes\label{3.1}
0\le\int_{-\yy}^{0}\int_{0}^{\yy}J_i(x-y)\phi^{c_{\mu_i}}_i(x)\dy\dx\le \frac{c_{\mu_i}}{\mu_i}\le\frac{C_*}{\mu_i}.
\ees
Assume first that $J_i$ does not have compact support. Then for any $n>0$, by \eqref{1.4} we see
\bess
\frac{C_*}{\mu_i}&\ge&\int_{-\yy}^{0}\int_{0}^{\yy}J_i(x-y)\phi^{c_{\mu_i}}_i(x)\dy\dx\ge\int_{-n-1}^{-n}\phi^{c_{\mu_i}}_i(x)\int_{n+1}^{\yy}J_i(y)\dy\dx\\
&\ge&\phi^{c_{\mu_i}}_i(-n)\int_{n+1}^{\yy}J_i(y)\dy\ge0,
\eess
which implies that $\lim_{\mu_i\to\yy}\phi^{c_{\mu_i}}_i(-n)=0$. Noting that $\phi^{c_{\mu_i}}_i(x)$ is decreasing in $x\le0$,
we have that $\lim_{\mu_i\to\yy}\phi^{c_{\mu_i}}_i(x)=0$ locally uniformly in $(-\yy,0]$, which yields $\lim_{\mu_i\to\yy}l_{\mu_i}=\yy$.
Assume now that $J_i$ is compactly supported, and let $[-L,L]$ be the smallest interval containing the support of $J_i$.
Combining \eqref{3.1} and the uniform boundedness of ${\phi^{c_{\mu_i}}_i}'(x)$,
one easily deduces that $\lim_{\mu_i\to\yy}\phi^{c_{\mu_i}}_i(x)=0$ locally uniformly in $[-L,0]$. Since ${\Phi^{c_{\mu_i}}}'$ is uniformly bounded for $\mu_i>1$,
it follows from a compactness argument that there is a sequence $\mu^n_i\to\yy$ and a nonincreasing function $\Phi_{\yy}=(\phi^{\yy}_i)\in [C((-\yy,0])]^m$
such that $\Phi^{c_{\mu^n_i}}\to\Phi_{\yy}$ locally uniformly in $(-\yy,0]$ as $n\to\yy$. Clearly, $\Phi_{\yy}\in[{\bf0},{\bf u^*}]$. By the dominated convergence theorem,
\[D\circ\dd\int_{-\yy}^{0}{\bf J}(x-y)\circ \Phi_{\yy}(y)\dy-D\circ \Phi_{\yy}+C_{\yy}\Phi_{\yy}'(x)+F(\Phi_{\yy})=0, ~ ~ -\yy<x<0.\]
Thus
\bes\label{3.2}
d_i\dd\int_{-\yy}^{0}J_i(x-y)\phi^{\yy}_i(y)\dy-d_i\phi^{\yy}_i+C_{\yy}{\phi^{\yy}_i}'+f_i(\phi^{\yy}_1,\phi^{\yy}_2,\cdots,\phi^{\yy}_m)=0, \quad -\yy<x<0.
\ees
Moreover, $\phi^{\yy}_i(x)=0$ in $[-L,0]$. If $\phi^{\yy}_i(x)\not\equiv0$ for $x\le0$, there exists $L_1\le-L$ such that $\phi^{\yy}_i(L_1)=0<\phi^{\yy}_i(x)$ in $(-\yy,L_1)$.
By \eqref{3.2}, {\bf(J)} and the assumptions on $F$, we have
\[0=d_i\int_{-\yy}^{0}J_i(L_1-y)\phi^{\yy}_i(y)\dy+f_i(\underbrace{\phi^{\yy}_1(L_1),\cdots,0}_{i},\cdots,\phi^{\yy}_m(L_1))>0,\]
which is impossible. Hence $\phi^{\yy}_i(x)\equiv0$ for $x\le0$, and consequently $\lim_{\mu_i\to\yy}l_{\mu_i}=\yy$.
Notice that $\hat{\Phi}^{c_{\mu_i}}$ and $(\hat{\Phi}^{c_{\mu_i}})'$ are uniformly bounded for $\mu_i>1$ and $x\le l_{\mu_i}$.
By a compactness argument again, for any sequence $\mu^n_i\to\yy$, there exists a subsequence, still denoted by itself, such that
$\lim_{n\to\yy}\hat{\Phi}^{c_{\mu^n_i}}=\hat{\Phi}^{\yy}$ locally uniformly in $\mathbb{R}$
for some nonincreasing and continuous function $\hat{\Phi}^{\yy}\in[{\bf0},{\bf u^*}]$.
Moreover, $\hat{\Phi}^{\yy}(0)=(u^*_1/2,\hat{\phi}^{\yy}_2(0),\cdots, \hat{\phi}^{\yy}_m(0))$. Using the dominated convergence theorem again, we have
\[D\circ\dd\int_{\mathbb{R}}{\bf J}(x-y)\circ \hat\Phi^{\yy}(y)\dy-D\circ \hat\Phi^{\yy}+C_{\yy}(\hat{\Phi}^{\yy})'+F(\hat\Phi^{\yy})=0, ~ ~ -\yy<x<\yy.\]
Together with the properties of $\hat{\Phi}^{\yy}$ and the assumptions on $F$,
we easily derive that $\hat{\Phi}^{\yy}(-\yy)={\bf u^*}$ and $\hat{\Phi}^{\yy}(\yy)={\bf 0}$. Thus, $(C_{\yy},\hat{\Phi}^{\yy})$ is a solution pair of \eqref{1.5}. By Theorem \ref{B},
$C_*$ is the minimal speed of \eqref{1.5}. Noticing that $C_{\yy}\le C_*$, we derive that $C_{\yy}=C_*$ and $\hat{\Phi}^{\yy}=\Psi$.
By the arbitrariness of the sequence $\mu^n_{i}$, we conclude that $\hat{\Phi}^{c_{\mu_i}}(x)\to\Psi(x)$ locally uniformly in $\mathbb{R}$ as $\mu_i\to\yy$.
We now show that if {\bf(J2)} does not hold, then $c_{\mu_i}\to\yy$ as $\mu_i\to\yy$.
Since $c_{\mu_i}$ is nondecreasing in $\mu_i>0$, we have that $\lim_{\mu_i\to\yy}c_{\mu_i}:=C_{\yy}\in(0,\yy]$.
Arguing indirectly, assume that $C_{\yy}\in(0,\yy)$. Then, following lines similar to the previous arguments,
we can prove that \eqref{1.5} has a solution pair $(C_{\yy},\Phi_{\yy})$ with $\Phi_{\yy}$ nonincreasing,
$\Phi_{\yy}(-\yy)={\bf u^*}$ and $\Phi_{\yy}(\yy)={\bf 0}$. This contradicts Theorem \ref{B}. So $C_{\yy}=\yy$. The proof is complete.
\end{proof}
We next give the proof of Theorem \ref{t1.4}.
\begin{proof}[{\bf Proof of Theorem \ref{t1.4}:}]
(i) Since {\bf (J2)} holds, problem \eqref{1.5} has a solution pair $(C_*,\Psi_{C_*})$ with $C_*>0$ and $\Psi_{C_*}$ nonincreasing in $\mathbb{R}$.
We first claim that $\Psi_{C_*}=(\psi_i)\succ{\bf 0}$ and $\Psi_{C_*}$ is monotonically decreasing in $\mathbb{R}$. For any $i\in\{1,2,\cdots,m_0\}$ and $l>0$,
define $\tilde{\psi}_i(x)=\psi_i(x-l)$. Then applying \cite[Lemma 2.2]{DN21} to $\tilde\psi_i$, we derive that $\psi_i(x)>0$ for $x<l$. By the arbitrariness
of $l>0$, we have $\psi_i(x)>0$ for $x\in\mathbb{R}$. Then for $i\in\{m_0+1,\cdots,m\}$, it follows from the assumptions on $F$ that $\psi'_i(x)<0$ in $\mathbb{R}$.
Thus $\psi_i(x)>0$ in $\mathbb{R}$ for $i\in\{m_0+1,\cdots,m\}$. To show the monotonicity of $\Psi_{C_*}$, it remains to verify that $\psi_i$ is decreasing in
$\mathbb{R}$ for every $i\in\{1,\cdots,m_0\}$. For any $\delta>0$, define $w(x)=\psi_i(x-\delta)-\psi_i(x)$ for any $i\in\{1,\cdots,m_0\}$.
Clearly, $w(x)\ge0$ in $\mathbb{R}$ and $w(x)\not\equiv0$ for $x<0$. By \eqref{1.5}, $w(x)$ satisfies
\[d_i\int_{-\yy}^{\yy}J_i(x-y)w(y)\dy-d_iw(x)+C_*w'(x)+q(x)w(x)\le0, ~ ~ x\in\mathbb{R}.\]
By \cite[Lemma 2.5]{DLZ}, $w(x)>0$ in $x<0$, and so $\psi_i(x)$ is decreasing in $x<0$. As before, for any $l>0$,
define $\tilde{\psi}_i(x)=\psi_i(x-l)$. Similarly, we can show that $\psi_i(x)$ is decreasing in $x<l$. Thus, our claim is verified.
Define $\bar{U}=K\Psi_{C_*}(x-C_*t)$ with $K\gg1$. We next show that $\bar{U}$ is an upper solution of \eqref{1.2}. In view of the assumptions on $U(0,x)$ and our above analysis,
there is $K\gg1$ such that $\bar{U}(0,x)=K\Psi_{C_*}(x)\succeq U(0,x)$ in $\mathbb{R}$. Moreover, by {\bf (f1)},
we have $KF(\Psi_{C_*}(x-C_*t))\succeq F(K\Psi_{C_*}(x-C_*t))$, and thus
\bess
\bar{U}_t=-C_*K\Psi'_{C_*}(x-C_*t)\succeq D\circ\int_{-\yy}^{\yy}{\bf J}(x-y)\circ \bar{U}(t,y)\dy-D\circ \bar{U}+F(\bar{U}).
\eess
It then follows from a comparison argument that $U(t,x)\preceq\bar{U}(t,x)$ for $t\ge0$ and $x\in\mathbb{R}$.
Noticing the properties of $\psi_i$, for any $\lambda\in(0,u^*_i)$ there is a unique $y_*\in\mathbb{R}$ such that $K\psi_i(y_*)=\lambda$. Therefore, it is easy to
see that
\bes \label{3.3}
x^{-}_{i,\lambda}(t)\le x^{+}_{i,\lambda}(t)\le y_*+C_*t.
\ees
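Indeed, for any $x\in E^i_{\lambda}$ we have
\bess
K\psi_i(x-C_*t)\ge U_i(t,x)=\lambda=K\psi_i(y_*),
\eess
and the monotonicity of $\psi_i$ forces $x-C_*t\le y_*$; taking the supremum over $E^i_{\lambda}$ gives \eqref{3.3}.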
Similarly, we can prove that for suitable $K_1\gg1$, $K_1\Psi(-x-C_*t)$ is also an upper solution of \eqref{1.2},
and there is a unique $\tilde y_*\in\mathbb{R}$ such that $K_1\psi_i(\tilde y_*)=\lambda$.
Then one easily derives $-\tilde y_*-C_*t\le x^{-}_{i,\lambda}(t)\le x^{+}_{i,\lambda}(t)$. This together with \eqref{3.3} leads to
\[\limsup_{t\to\yy}\frac{|x^{-}_{i,\lambda}(t)|}{t}\le \limsup_{t\to\yy}\frac{|x^{+}_{i,\lambda}(t)|}{t}\le C_*.\]
Next we claim that
\[\liminf_{t\to\yy}\frac{|x^{+}_{i,\lambda}(t)|}{t}\ge \liminf_{t\to\yy}\frac{|x^{-}_{i,\lambda}(t)|}{t}\ge C_*.\]
Choose any $\mu_1>0$ and fix the other $\mu_i$. Denote the unique solution of \eqref{1.1} by $(u^{\mu_1},g^{\mu_1},h^{\mu_1})$. By a comparison argument, we have
$U(t,x)\succeq u^{\mu_1}$ in $[0,\yy)\times[g^{\mu_1}(t),h^{\mu_1}(t)]$ for any $\mu_1>0$.
Moreover, we can choose $\mu_1$ sufficiently large, say $\mu_1>\tilde{\mu}>0$,
so that spreading happens for $(u^{\mu_1},g^{\mu_1},h^{\mu_1})$. Then by Theorem \ref{A}, we have
To stress the dependence of $c_0$ on $\mu_1$, we rewrite $c_0$ as $c^{\mu_1}$. By Theorem \ref{t1.3}, $\lim_{\mu_1\to\yy}c^{\mu_1}=C_*$.
For any $\lambda\in(0,u^*_i)$, we choose $\delta$ small enough such that $\lambda<u^*_i-\delta$. Then by virtue of Theorem \ref{t1.1}, for any $0<\ep\ll1$, there is $T>0$ such that
\[\lambda<u^*_i-\delta\le u^{\mu_1}_i\le u^*_i+\delta, ~ ~ {\rm for} ~ t\ge T, ~ ~ |x|\le (c^{\mu_1}-\ep)t,\]
which obviously implies that $x^{-}_{i,\lambda}(t)\le-(c^{\mu_1}-\ep)t$ and $x^{+}_{i,\lambda}(t)\ge (c^{\mu_1}-\ep)t$.
Due to the arbitrariness of $\ep$ and $\mu_1$, our claim is proved. Since $K\Psi_{C_*}(x-C_*t)$ and $K_1\Psi(-x-C_*t)$ are upper solutions of \eqref{1.5},
it is easy to prove the second limit in \eqref{1.6}. Thus \eqref{1.6} is obtained.
Now we prove \eqref{1.7}. Let $\bar{u}$ be the solution of
\[\bar{u}_t=F(\bar{u}), ~ ~ ~ ~ \bar{u}(0)=(\max\{\|u_{i0}(x)\|_{C([-h_0,h_0])},u^*_i\}).\]
By {\bf (f4)} and the comparison principle, we derive that $\limsup_{t\to\yy}U(t,x)\preceq {\bf u^*}$ uniformly in $\mathbb{R}$. As before, for any $c\in (0,C_*)$, choose $\mu_1>\tilde{\mu}$ large enough that $c<c^{\mu_1}$. Using Theorem \ref{t1.1} and the comparison principle, we see
$\liminf_{t\to\yy}U(t,x)\succeq {\bf u^*}$ uniformly in $x\in[-ct,ct]$, which, combined with our earlier conclusion, yields the desired result.
Moreover, since $K\Psi_{C_*}(x-C_*t)\succeq U(t,x)$ and $K_1\Psi(-x-C_*t)\succeq U(t,x)$ for $t\ge0$ and $x\in\mathbb{R}$, we have for any $c>C_*$,
\bess
0&\le&\sup_{|x|\ge ct}\dd\sum_{i=1}^{m}U_i(t,x)\le\sup_{x\ge ct}\dd\sum_{i=1}^{m}U_i(t,x)+\sup_{x\le -ct}\dd\sum_{i=1}^{m}U_i(t,x)\\
&\le&\sup_{x\ge ct}\dd\sum_{i=1}^{m}K\psi_i(x-C_*t)+\sup_{x\le -ct}\dd\sum_{i=1}^{m}K_1\psi_i(-x-C_*t)\\
&=&(K+K_1)\dd\sum_{i=1}^{m}\psi_i(ct-C_*t)\to0 ~ {\rm as } ~ t\to\yy.
\eess
Therefore, conclusion (i) is proved.
(ii) We now assume that {\bf (J2)} does not hold, but {\bf (J1)} holds. By Theorem \ref{t1.3}, $\lim_{\mu_1\to\yy}c^{\mu_1}=\yy$.
Thanks to the above arguments, we have $x^{-}_{i,\lambda}(t)\le-(c^{\mu_1}-\ep)t$ and $x^{+}_{i,\lambda}(t)\ge (c^{\mu_1}-\ep)t$.
Letting $\mu_1\to\yy$ and $\ep\to0$, we have $\lim_{t\to\yy}\frac{|x^{\pm}_{i,\lambda}|}{t}=\yy$. We next prove \eqref{1.9}. For any $c>0$, let $\mu_1$ be large
enough that $c^{\mu_1}>c$ and spreading happens for $(u^{\mu_1},g^{\mu_1},h^{\mu_1})$.
By a comparison argument and Theorem \ref{t1.1}, we see $\liminf_{t\to\yy}U(t,x)\succeq {\bf u^*}$ uniformly in $|x|\le ct$. Together with our previous result,
we obtain \eqref{1.9}.
We now suppose that {\bf (J1)} does not hold. It then follows from Theorem \ref{t1.1} that for any $c>0$, there is $T>0$ such that
\[\lambda<u^*_i-\delta\le u^{\mu_1}_i\le u^*_i+\delta, ~ ~ {\rm for} ~ t\ge T, ~ ~ |x|\le ct,\]
which clearly indicates \eqref{1.8}. As for \eqref{1.9}, by Theorem \ref{t1.1} again, we see $\liminf_{t\to\yy}U(t,x)\succeq {\bf u^*}$ uniformly in $|x|\le ct$.
(iii) As above, $(u^{\mu_1},g^{\mu_1},h^{\mu_1})$ is a lower solution to \eqref{1.2}. By Theorem \ref{t1.2} and our earlier conclusion, we immediately derive the desired result. Thus the proof is complete.
\end{proof}
Finally, we give the proof of Theorem \ref{t1.5}.
\begin{proof}[{\bf Proof of Theorem \ref{t1.5}:}] It clearly follows from comparison principles (see \cite[Lemmas 3.1 and 3.2]{DN21}) that $(u_{\mu_i},g_{\mu_i},h_{\mu_i})$ is monotonically nondecreasing in $\mu_i>0$. Hence we can define $G(t)=\lim_{\mu_i\to\yy}g_{\mu_i}(t)\in[-\yy,-h_0]$, $H(t)=\lim_{\mu_i\to\yy}h_{\mu_i}(t)\in[h_0,\yy]$ and $\hat{U}(t,x)=\lim_{\mu_i\to\yy}u_{\mu_i}(t,x)\le U(t,x)$ for $t>0$ and $G(t)<x<H(t)$. Here $u_{\mu_i}=(u^j_{\mu_i})$ and $\hat{U}=(\hat{U}_j)$ with $j\in\{1,2,\cdots,m\}$.
We now claim that $G(t)=-\yy$ and $H(t)=\yy$ for all $t>0$. We only prove the former since the latter can be handled by similar arguments. Arguing indirectly, assume that there is $t_0>0$ such that $G(t_0)>-\yy$. Then $-h_0\ge g_{\mu_i}(t)\ge G(t)\ge G(t_0)>-\yy$ for $t\in(0,t_0]$. By {\bf(J)}, there are small $\ep_1,\delta>0$ such that $J_i(|x|)>\ep_1$ for $|x|\le2\delta$. Therefore, for $t\in(0,t_0]$, we have
\bess
g'_{\mu_i}(t)&=&-\sum_{j=1}^{m_0}\mu_j\int_{g_{\mu_i}(t)}^{h_{\mu_i}(t)}\int_{-\yy}^{g_{\mu_i}(t)}J_j(x-y)u^j_{\mu_i}(t,x){\rm d}y{\rm d}x\\
&\le&-\mu_i\int_{g_{\mu_i}(t)}^{h_{\mu_i}(t)}\int_{-\yy}^{g_{\mu_i}(t)}J_i(x-y)u^i_{\mu_i}(t,x){\rm d}y{\rm d}x\le-\mu_i\int_{g_{\mu_i}(t)}^{g_{\mu_i}(t)+\delta}\int_{g_{\mu_i}(t)-\delta}^{g_{\mu_i}(t)}J_i(x-y)u^i_{\mu_i}(t,x){\rm d}y{\rm d}x\\
&\le&-\mu_i\ep_1\delta\int_{g_{\mu_i}(t)}^{g_{\mu_i}(t)+\delta}u^i_{\mu_i}(t,x){\rm d}x.
\eess
By the dominated convergence theorem, we see
\[\lim_{\mu_i\to\yy}\int_{g_{\mu_i}(t)}^{g_{\mu_i}(t)+\delta}u^i_{\mu_i}(t,x){\rm d}x=\int_{G(t)}^{G(t)+\delta}\hat{U}_i(t,x){\rm d}x>0 ~ ~ {\rm for } ~ t\in(0,t_0].\]
Then
\[-\frac{g_{\mu_i}(t_0)+h_0}{\mu_i}\ge\ep_1\delta\int_{0}^{t_0}\int_{g_{\mu_i}(t)}^{g_{\mu_i}(t)+\delta}u^i_{\mu_i}(t,x){\rm d}x\dt\to \ep_1\delta\int_{0}^{t_0}\int_{G(t)}^{G(t)+\delta}\hat{U}_i(t,x){\rm d}x\dt>0 ~ ~ {\rm as } ~ \mu_i\to\yy,\]
which clearly implies that $G(t_0)=-\yy$, a contradiction. So $G(t)=-\yy$ for all $t>0$, and our claim is verified. Combining this with the monotonicity of $g_{\mu_i}(t)$ and $h_{\mu_i}(t)$ in $t$, one easily shows that $\lim_{\mu_i\to\yy}-g_{\mu_i}(t)=\lim_{\mu_i\to\yy}h_{\mu_i}(t)=\yy$ locally uniformly in $(0,\yy)$.
Next we prove that $\hat{U}$ satisfies \eqref{1.2}. For any $(t,x)\in(0,\yy)\times\mathbb{R}$, there are large $\hat\mu_i>0$ and $t_1<t$ such that $x\in(g_{\mu_i}(s),h_{\mu_i}(s))$ for all $\mu_i\ge\hat\mu_i$ and $s\in[t_1,t]$. Integrating the first $m$ equations in \eqref{1.1} over $t_1$ to $s\in(t_1,t]$ yields that for $j\in\{1,2,\cdots,m_0\}$,
\[u^j_{\mu_i}(s,x)-u^j_{\mu_i}(t_1,x)=\int_{t_1}^{s}\bigg(d_j\mathcal{L}[u^j_{\mu_i}](\tau,x)+f_j(u^1_{\mu_i}(\tau,x),u^2_{\mu_i}(\tau,x),\cdots,u^m_{\mu_i}(\tau,x))\bigg){\rm d}\tau,\]
and
\[u^j_{\mu_i}(s,x)-u^j_{\mu_i}(t_1,x)=\int_{t_1}^{s}f_j(u^1_{\mu_i}(\tau,x),u^2_{\mu_i}(\tau,x),\cdots,u^m_{\mu_i}(\tau,x)){\rm d}\tau ~ ~ {\rm for } ~ j\in\{m_0+1,\cdots,m\}.\]
Letting $\mu_i\to\yy$ and using the dominated convergence theorem, we have that for $j\in\{1,2,\cdots,m_0\}$,
\[\hat{U}_j(s,x)-\hat{U}_j(t_1,x)=\int_{t_1}^{s}\bigg(d_j\mathcal{L}[\hat{U}_j](\tau,x)+f_j(\hat{U}_1(\tau,x),\hat{U}_2(\tau,x),\cdots,\hat{U}_m(\tau,x))\bigg){\rm d}\tau,\]
and
\[\hat{U}_j(s,x)-\hat{U}_j(t_1,x)=\int_{t_1}^{s}f_j(\hat{U}_1(\tau,x),\hat{U}_2(\tau,x),\cdots,\hat{U}_m(\tau,x)){\rm d}\tau ~ ~ {\rm for } ~ j\in\{m_0+1,\cdots,m\}.\]
Then differentiating the above equations by $s$, we see that $\hat{U}$ solves \eqref{1.2} for any $(t,x)\in(0,\yy)\times\mathbb{R}$. Moreover,
since $0\le\hat{U}(t,x)\le U(t,x)$ in $(0,\yy)\times\mathbb{R}$, it is easy to see that $\lim_{t\to0}\hat{U}(t,x)={\bf 0}$. By the uniqueness of solution to \eqref{1.2}, $\hat{U}(t,x)\equiv U(t,x)$ in $[0,\yy)\times\mathbb{R}$.
Using Dini's theorem, we directly derive our desired results.
\end{proof}
\section{Examples}
In this section, we introduce two epidemic models to illustrate our conclusions.
{\bf Example 1.} To investigate the spreading of some infectious diseases, such as cholera, Capasso and Maddalena \cite{CM} studied the following problem
\bes\label{4.1}
\left\{\begin{aligned}
&\partial_tu_1=d_1\Delta u_1-au_1+cu_2, & &t>0,~x\in\Omega,\\
&\partial_tu_2=d_2\Delta u_2-bu_2+G(u_1), & &t>0,~x\in\Omega,\\
&\frac{\partial u_1}{\partial \nu}+\lambda_1u_1=0, ~ ~ \frac{\partial u_2}{\partial \nu}+\lambda_2u_2=0, & & t>0,~ x\in\partial \Omega,
\end{aligned}\right.
\ees
where $u_1$ and $u_2$ represent the concentration of the infective agents, such as bacteria, and the infective human population, respectively. Both adopt the random diffusion (also called local diffusion) strategy, and $d_1$ and $d_2$ are their
respective diffusion rates. The term $-au_1$ describes the death of the infective agents, $cu_2$ is the
growth rate of the agents contributed by the infective humans, and $-bu_2$ describes the death of the infective human population. The function $G(u_1)$ describes the infection rate of humans; its assumptions will be given later.
Recently, much research on model \eqref{4.1} and its variations has been carried out. For example, one can refer to \cite{ABL,LZW} for the free boundary problem with local diffusion, and to \cite{ZLN} for the spreading speed. In particular, Zhao et al.~\cite{ZZLD} recently replaced the local diffusion term of $u_1$ with the nonlocal diffusion operator as in \eqref{1.1}, and assumed $d_2=0$. They found that the dynamics of their model differ from those of \cite{ABL}.
Here we assume that the dispersal of the infective human population is also approximated by the nonlocal diffusion as in \eqref{1.1}, and thus propose the following model
\bes\left\{\begin{aligned}\label{4.2}
&\partial_tu_1=d_1\int_{g(t)}^{h(t)}J_1(x-y)u_1(t,y)\dy-d_1u_1-au_1+cu_2, & &t>0,~x\in(g(t),h(t)),\\
&\partial_tu_2=d_2\int_{g(t)}^{h(t)}J_2(x-y)u_2(t,y)\dy-d_2u_2-bu_2+G(u_1), & &t>0,~x\in(g(t),h(t)),\\
&u_i(t,g(t))=u_i(t,h(t))=0,& &t>0, ~ i=1,2,\\
&g'(t)=-\sum_{i=1}^{2}\mu_i\int_{g(t)}^{h(t)}\int_{-\yy}^{g(t)}
J_i(x-y)u_i(t,x)\dy\dx,& &t>0,\\
&h'(t)=\sum_{i=1}^{2}\mu_i\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}
J_i(x-y)u_i(t,x)\dy\dx,& &t>0,\\
&-g(0)=h(0)=h_0>0;\;\;u_1(0,x)=u_{10}(x),\;\;u_2(0,x)=u_{20}(x),&&x\in[-h_0,h_0],
\end{aligned}\right.
\ees
where $J_i$ satisfy {\bf(J)}, $d_i,a,b,c$ are positive constants, $\mu_i\ge0$ and $\sum_{i=1}^{2}\mu_i>0$. $G(z)$ satisfies
(i) $G\in C^1([0,\yy))$, $G(0)=0$, $G'(z)>0$ for $z\ge0$ and $G'(0)>\frac{ab}{c}$;
(ii) $\bigg(\frac{G(z)}{z}\bigg)'<0$ for $z>0$ and $\lim_{z\to\yy}\frac{G(z)}{z}<\frac{ab}{c}$. Assumptions (i) and (ii) clearly imply that there is a unique positive constant $u^*_1$ such that $\frac{G(u^*_1)}{u^*_1}=\frac{ab}{c}$. Define $u^*_2=\frac{G(u^*_1)}{b}$.
(iii) $\bigg(\frac{G(u^*_1)}{u^*_1}\bigg)'<-\frac{ab}{c}$;\\
An example for $G$ is $G(z)=\frac{\alpha z}{1+\beta z}$ with $\alpha>\frac{ab}{c}$ and $\beta>\frac{\alpha c}{ab}$. Obviously, we have
\[F(u_1,u_2)=(f_1(u_1,u_2),f_2(u_1,u_2))=(-au_1+cu_2, -bu_2+G(u_1)) ~ ~{\rm and ~ ~ }{\bf u^*}=(u^*_1,u^*_2).\]
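As a quick numerical sanity check of assumptions (i)--(iii) for this illustrative $G$, the sketch below verifies $G'(0)>ab/c$, the monotone decay of $G(z)/z$, and the closed form $u^*_1=\frac{\alpha c-ab}{ab\beta}$, $u^*_2=\frac{a}{c}u^*_1$. All concrete parameter values here are hypothetical choices of ours, not taken from the text.

```python
# Numerical check of (i)-(iii) for G(z) = alpha*z/(1+beta*z).
# The parameter values are illustrative only.
a, b, c = 1.0, 1.0, 1.0
alpha, beta = 2.0, 3.0          # alpha > ab/c and beta > alpha*c/(ab)

def G(z):
    return alpha * z / (1.0 + beta * z)

# (i): G'(0) = alpha > ab/c (a forward difference suffices here)
h = 1e-7
Gp0 = (G(h) - G(0.0)) / h
assert Gp0 > a * b / c

# (ii): G(z)/z = alpha/(1+beta*z) is strictly decreasing with limit 0 < ab/c
zs = [0.1 * k for k in range(1, 50)]
ratios = [G(z) / z for z in zs]
assert all(r1 > r2 for r1, r2 in zip(ratios, ratios[1:]))

# unique u1* with G(u1*)/u1* = ab/c, and u2* = G(u1*)/b
u1_star = (alpha * c - a * b) / (a * b * beta)
u2_star = G(u1_star) / b
assert abs(G(u1_star) / u1_star - a * b / c) < 1e-12
print(round(u1_star, 6), round(u2_star, 6))  # 0.333333 0.333333 for these values
```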
By methods similar to those in \cite{DNwn}, one can easily show that the dynamics of \eqref{4.2} are governed by the following spreading-vanishing dichotomy:
\sk{\rm(i)}\, \underline{Spreading:} $\lim_{t\to\yy}-g(t)=\lim_{t\to\yy}h(t)=\yy$ (necessarily $\mathcal{R}_0=\frac{G'(0)c}{ab}>1$) and
\[\lim_{t\to\yy}(u_1(t,x),u_2(t,x))=(u^*_1,u^*_2) ~ ~ {\rm locally ~ uniformly ~ in~} \mathbb{R}.\]
\sk{\rm(ii)}\, \underline{Vanishing:} $\lim_{t\to\yy}\big(h(t)-g(t)\big)<\yy$ and \[\lim_{t\to\yy}\bigg(\|u_1(t,\cdot)\|_{C([g(t),h(t)])}+\|u_2(t,\cdot)\|_{C([g(t),h(t)])}\bigg)=0.\]
It is easy to show that conditions {\bf(f1)}-{\bf(f5)} hold for $F$. Hence Theorem \ref{t1.1} is valid for \eqref{4.2}.
\begin{theorem}\label{t4.1}Let $(u_1,u_2,g,h)$ be a solution of \eqref{4.2} and spreading happen. Then
\bess\left\{\begin{aligned}
&\lim_{t\to\yy}\max_{x\in[0,\,ct]}\sum_{i=1}^{2}|u_i(t,x)-u^*_i|=0 ~ {\rm for ~ any ~ } c\in(0,c_0) ~ ~ {\rm if ~ {\bf(J1)} ~ holds ~ with ~ }m_0=m=2,\\
&\lim_{t\to\yy}\max_{x\in[0,\,ct]}\sum_{i=1}^{2}|u_i(t,x)-u^*_i|=0 ~ {\rm for ~ any ~ } c>0 ~ ~ {\rm if ~ {\bf(J1)} ~ does ~ not ~ hold},
\end{aligned}\right.\eess
where $c_0$ is uniquely determined by the semi-wave problem \eqref{1.3}-\eqref{1.4} with $m_0=m=2$.
\end{theorem}
However, one easily checks that $F$ does not satisfy {\bf(f6)}. Thus Theorem \ref{t1.2} cannot be applied directly to \eqref{4.2}. Nevertheless, by using a new lower solution we can still prove that results similar to those in Theorem \ref{t1.2} hold for problem \eqref{4.2}.
\begin{theorem}\label{t2.5} Assume that $J_i$ satisfy ${\bf (J^{\gamma})}$ with $\gamma\in(1,2]$ and $m_0=m=2$. Let spreading happen for \eqref{4.2}. Then
\bess\left\{\begin{aligned}
&\lim_{t\to\yy}\max_{|x|\le s(t)}\sum_{i=1}^{2}|u_i(t,x)-u^*_i|=0 ~ {\rm for ~ any } ~ 0\le s(t)=t^{\frac{1}{\gamma-1}}o(1) ~ ~ {\rm ~ if ~}\gamma\in(1,2),\\
&\lim_{t\to\yy}\max_{|x|\le s(t)}\sum_{i=1}^{2}|u_i(t,x)-u^*_i|=0 ~ {\rm for ~ any } ~ 0\le s(t)=(t\ln t) o(1) ~ ~ {\rm ~ if ~ }\gamma=2.
\end{aligned}\right.\eess
\end{theorem}
\begin{proof}{\bf Step 1:} Consider problem
\bess\left\{\begin{aligned}
&\bar{u}_{1t}=-a\bar{u}_1+c\bar{u}_2,\\
&\bar{u}_{2t}=-b\bar{u}_2+G(\bar{u}_1),\\
&\bar{u}_1(0)=\|u_{10}(x)\|_{C([-h_0,h_0])}, ~ \bar{u}_2(0)=\|u_{20}(x)\|_{C([-h_0,h_0])}.
\end{aligned}\right.\eess
It follows from simple phase-plane analysis that $\lim_{t\to\yy}\bar{u}_1(t)=u^*_1$ and $\lim_{t\to\yy}\bar{u}_2(t)=u^*_2$. By a comparison argument, we have $u_1(t,x)\le \bar{u}_1(t)$ and $u_2(t,x)\le \bar{u}_2(t)$ for $t\ge0$ and $x\in\mathbb{R}$. Thus
\[\limsup_{t\to\yy}u_1(t,x)\le u^*_1 ~ {\rm and } ~ \limsup_{t\to\yy}u_2(t,x)\le u^*_2 ~ ~ {\rm uniformly ~ in ~ }\mathbb{R}.\]
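The convergence $(\bar{u}_1,\bar{u}_2)\to(u^*_1,u^*_2)$ claimed by the phase-plane analysis can be illustrated numerically. The forward-Euler sketch below again uses the illustrative nonlinearity $G(z)=\alpha z/(1+\beta z)$ with hypothetical parameters; it is a plausibility check only, not part of the proof.

```python
# Forward-Euler integration of u1' = -a*u1 + c*u2, u2' = -b*u2 + G(u1).
# All parameter values are illustrative; the equilibrium is (1/3, 1/3) here.
a, b, c = 1.0, 1.0, 1.0
alpha, beta = 2.0, 3.0

def G(z):
    return alpha * z / (1.0 + beta * z)

u1, u2 = 1.0, 1.0                # initial data dominating the solution
dt = 0.01
for _ in range(20000):           # integrate up to t = 200
    u1, u2 = u1 + dt * (-a * u1 + c * u2), u2 + dt * (-b * u2 + G(u1))

u1_star = (alpha * c - a * b) / (a * b * beta)
u2_star = G(u1_star) / b
print(abs(u1 - u1_star) < 1e-6, abs(u2 - u2_star) < 1e-6)  # True True
```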
Thus it remains to establish the corresponding lower limits for $u$. We will carry this out in two steps.
{\bf Step 2:} This step concerns the case $\gamma\in(1,2)$. We will construct a suitable lower solution, different from that in Step 2 of the proof of Theorem \ref{t1.2}, to show that for any $0\le s(t)=t^{\frac{1}{\gamma-1}}o(1)$,
\[\liminf_{t\to\yy}u_1(t,x)\ge u^*_1 ~ {\rm and } ~ \liminf_{t\to\yy}u_2(t,x)\ge u^*_2 ~ ~ {\rm uniformly ~ in ~ }|x|\le s(t).\]
For small $\ep>0$ and $0<\frac{\alpha_2}{2}<\alpha_1<\alpha_2<1$, define
\bess
&&\underline{h}(t)=(Kt+\theta)^{\frac{1}{\gamma-1}} ~ {\rm and}\\ &&\underline{u}(t,x)=(\underline{u}_1(t,x),\underline{u}_2(t,x))=\bigg(u^*_1(1-\ep^{\alpha_1})(1-\frac{|x|}{\underline{h}(t)}),
u^*_2(1-\ep^{\alpha_2})(1-\frac{|x|}{\underline{h}(t)})\bigg)
\eess
with $t\ge0$, $x\in[-\underline{h}(t),\underline{h}(t)]$, and $K,\theta>0$ to be determined later.
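The exponent bookkeeping for $\underline{h}$ used repeatedly below, namely $\underline{h}'(t)=\frac{K}{\gamma-1}(Kt+\theta)^{\frac{2-\gamma}{\gamma-1}}$, can be sanity-checked by a finite difference; the values of $K,\theta,\gamma$ below are hypothetical.

```python
# Finite-difference check of d/dt (K*t+theta)^(1/(gamma-1))
#   = K/(gamma-1) * (K*t+theta)^((2-gamma)/(gamma-1)).
K, theta, gamma = 0.5, 2.0, 1.4   # illustrative, with gamma in (1,2)

def h(t):
    return (K * t + theta) ** (1.0 / (gamma - 1.0))

t, dt = 3.0, 1e-6
fd = (h(t + dt) - h(t - dt)) / (2 * dt)
exact = K / (gamma - 1.0) * (K * t + theta) ** ((2.0 - gamma) / (gamma - 1.0))
print(abs(fd - exact) < 1e-5)  # True
```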
We next prove that for small $\ep>0$, there exist proper $T,K$ and $\theta>0$ such that
\bes\left\{\begin{aligned}\label{4.3}
&\partial_t\underline u_1\le d_1\int_{-\underline{h}(t)}^{\underline{h}(t)}J_1(x-y)\underline u_1(t,y)\dy-d_1\underline u_1-a\underline u_1+c\underline u_2, &&\hspace{-15mm}t>0,~x\in(-\underline h(t),\underline h(t)),\\
&\partial_t\underline u_2\le d_2\int_{-\underline{h}(t)}^{\underline{h}(t)}J_2(x-y)\underline u_2(t,y)\dy-d_2\underline u_2-b\underline u_2+G(\underline u_1), & &\hspace{-15mm}t>0,~x\in(-\underline h(t),\underline h(t)),\\
&\underline u_i(t,\pm\underline h(t))\le0,& &\hspace{-15mm}t>0, ~ i=1,2,\\
&-\underline{h}'(t)\ge-\sum_{i=1}^{2}\mu_i\int_{-\underline{h}(t)}^{\underline h(t)}\int_{-\yy}^{-\underline h(t)}
J_i(x-y)\underline u_i(t,x)\dy\dx,& &\hspace{-15mm}t>0,\\
&\underline h'(t)\le\sum_{i=1}^{2}\mu_i\int_{-\underline h(t)}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)\underline u_i(t,x)\dy\dx,& &\hspace{-15mm}t>0,\\
&-\underline{h}(0)\ge g(T),~ \underline h(0)\le h(T);\;\;\underline u_1(0,x)\le u_1(T,x),\;\;\underline u_2(0,x)\le u_{2}(T,x),&&x\in[-\underline{h}(0),\underline{h}(0)].
\end{aligned}\right.
\ees
As before, if \eqref{4.3} holds, then by comparison principle and our definition of the lower solution $(\underline{u},-\underline{h},\underline{h})$, we easily derive
\[\liminf_{t\to\yy}(u_1(t,x),u_2(t,x))\succeq(u^*_1(1-\ep^{\alpha_1}),u^*_2(1-\ep^{\alpha_2})) ~ {\rm uniformly ~ in ~} |x|\le s(t),\]
which, combined with the arbitrariness of $\ep$, yields
\[\liminf_{t\to\yy}(u_1(t,x),u_2(t,x))\succeq(u^*_1,u^*_2) ~ {\rm uniformly ~ in ~} |x|\le s(t).\]
Clearly, $\underline{u}_i(t,\pm\underline{h}(t))=0$ for $t\ge0$. Then we prove that the fourth and fifth inequalities of \eqref{4.3} hold. Similarly to the proof of Theorem \ref{t1.2}, for large $\theta>0$ and small $K>0$, we have
\bess
&&\sum_{i=1}^{2}\mu_i\dd\int_{-\underline h(t)}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)\underline u_i(t,x)\dy\dx\\
&&=\sum_{i=1}^{2}(1-\ep^{\alpha_i})\mu_iu^*_i\dd\int_{-\underline h(t)}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)(1-\frac{|x|}{\underline{h}(t)})\dy\dx\\
&&\ge\sum_{i=1}^{2}\frac{(1-\ep^{\alpha_i})\mu_iu^*_i}{\underline{h}(t)}\bigg(\dd\int_{0}^{\underline h(t)}\int_{0}^{y}+\int_{\underline{h}(t)}^{\yy}\int_{0}^{\underline{h}(t)}\bigg)J_i(y)x\dx\dy\\
&&\ge\sum_{i=1}^{2}\frac{(1-\ep^{\alpha_i})\mu_iu^*_i}{2\underline{h}(t)}\int_{0}^{\underline h(t)}J_i(y)y^2\dy\ge\sum_{i=1}^{2}\frac{C_1(1-\ep^{\alpha_i})\mu_iu^*_i}{2\underline{h}(t)}\int_{\underline h(t)/2}^{\underline h(t)}y^{2-\gamma}\dy\\
&&\ge\tilde{C}_1(Kt+\theta)^{(2-\gamma)/(\gamma-1)}\ge\frac{K(Kt+\theta)^{(2-\gamma)/(\gamma-1)}}{\gamma-1}=\underline{h}'(t)
.
\eess
Thus the fifth inequality in \eqref{4.3} holds, and the fourth obviously follows from the symmetry of $J_i$ and $\underline{u}$ in $x$. Now we deal with the first two inequalities in \eqref{4.3}. As in Step 2 of the proof of Theorem \ref{t1.2}, we can show that there exists $\hat{C}>0$ such that
\bes\label{4.4}\int_{-\underline h(t)}^{\underline h(t)}J_i(x-y)\underline u_i(t,y)\dy\ge\hat{C}u^*_i(1-\ep^{\alpha_i})\underline h^{1-\gamma}(t) ~ ~ {\rm for } ~ t>0, ~ x\in[-\underline h(t),\underline h(t)].
\ees
So the details are omitted here. Then we claim that for small $\ep>0$,
\bes\label{4.5}-a\underline{u}_1(t,x)+c\underline{u}_2(t,x)\ge\ep\underline{u}_1(t,x) ~ ~ {\rm for ~ }t\ge0, ~ x\in[-\underline h(t),\underline h(t)].\ees
Clearly, it suffices to show that if $\ep$ is small enough, then
\[-au^*_1(1-\ep^{\alpha_1})+cu^*_2(1-\ep^{\alpha_2})\ge\ep u^*_1(1-\ep^{\alpha_1}).\]
Direct computations show
\bess
&&-au^*_1(1-\ep^{\alpha_1})+cu^*_2(1-\ep^{\alpha_2})-\ep u^*_1(1-\ep^{\alpha_1})\\
&&=-au^*_1+au^*_1\ep^{\alpha_1}+cu^*_2-cu^*_2\ep^{\alpha_2}-\ep u^*_1(1-\ep^{\alpha_1})\\
&&=au^*_1\ep^{\alpha_1}-cu^*_2\ep^{\alpha_2}-\ep u^*_1(1-\ep^{\alpha_1})\\
&&=\ep^{\alpha_1}\left[au^*_1-cu^*_2\ep^{\alpha_2-\alpha_1}-\ep^{1-\alpha_1} u^*_1(1-\ep^{\alpha_1})\right]>0 ~ ~ {\rm if ~ } \ep ~ {\rm is ~ sufficiently ~ small}.
\eess
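The sign computation above can be spot-checked numerically. The sketch uses $cu^*_2=au^*_1$ (which is what makes the leading terms cancel in the second equality); all numeric values are illustrative.

```python
# Spot-check of -a*u1*(1-eps^alp1) + c*u2*(1-eps^alp2) - eps*u1*(1-eps^alp1) > 0
# for small eps, with c*u2 == a*u1. Values illustrative only.
a, c = 1.0, 1.0
u1 = 1.0 / 3.0
u2 = a * u1 / c
alp1, alp2 = 0.4, 0.6            # 0 < alp2/2 < alp1 < alp2 < 1

def margin(eps):
    return (-a * u1 * (1 - eps ** alp1)
            + c * u2 * (1 - eps ** alp2)
            - eps * u1 * (1 - eps ** alp1))

assert all(margin(eps) > 0 for eps in (1e-2, 1e-3, 1e-4))
print("inequality holds for the sampled eps")
```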
Furthermore, we claim that for small $\ep>0$,
\bes\label{4.6}-b\underline{u}_2(t,x)+G(\underline{u}_1(t,x))\ge\ep\underline{u}_2(t,x)~ ~ {\rm for ~ }t\ge0, ~ x\in[-\underline h(t),\underline h(t)].
\ees
We first prove that for small $\ep>0$,
\bes\label{4.7}
\frac{G(u^*_1(1-\ep^{\alpha_1})(1-\frac{|x|}{\underline{h}(t)}))}
{u^*_1(1-\ep^{\alpha_1})(1-\frac{|x|}{\underline{h}(t)})}\ge\frac{ab}{c}(1+\ep^{\alpha_1})~ ~ {\rm for ~ }t\ge0, ~ x\in(-\underline h(t),\underline h(t)).
\ees
By the assumptions on $G$, we see
\[\frac{G(u^*_1(1-\ep^{\alpha_1})(1-\frac{|x|}{\underline{h}(t)}))}
{u^*_1(1-\ep^{\alpha_1})(1-\frac{|x|}{\underline{h}(t)})}\ge\frac{G(u^*_1(1-\ep^{\alpha_1}))}
{u^*_1(1-\ep^{\alpha_1})}.\]
Thus we only need to prove that for $t\ge0$ and $x\in(-\underline h(t),\underline h(t))$,
\[\frac{G(u^*_1(1-\ep^{\alpha_1}))}
{u^*_1(1-\ep^{\alpha_1})}\ge\frac{ab}{c}(1+\ep^{\alpha_1}).\]
Define
\[\Gamma(\ep)=\frac{G(u^*_1(1-\ep^{\alpha_1}))}
{u^*_1(1-\ep^{\alpha_1})}-\frac{ab}{c}(1+\ep^{\alpha_1}) ~ ~ {\rm for ~ }0<\ep\ll1.\]
Obviously, $\Gamma(0)=0$, and by our assumptions on $G$ we obtain that for $0<\ep\ll1$,
\bess
&&\Gamma'(\ep)=-\bigg(\frac{G(u^*_1(1-\ep^{\alpha_1}))}
{u^*_1(1-\ep^{\alpha_1})}\bigg)'\alpha_1\ep^{\alpha_1-1}-\frac{ab}{c}\alpha_1\ep^{\alpha_1-1}\\
&&=\alpha_1\ep^{\alpha_1-1}\left[-\bigg(\frac{G(u^*_1(1-\ep^{\alpha_1}))}
{u^*_1(1-\ep^{\alpha_1})}\bigg)'-\frac{ab}{c}\right]>0.
\eess
So \eqref{4.7} holds. Now we continue to prove \eqref{4.6}. Obviously, it holds for $x=\pm\underline{h}(t)$. For $x\in(-\underline h(t),\underline h(t))$, we have
\bess
&&-b\underline{u}_2(t,x)+G(\underline{u}_1(t,x))-\ep\underline{u}_2(t,x)\\
&&=(1-\frac{|x|}{\underline{h}(t)})\left[-bu^*_2(1-\ep^{\alpha_2})+u^*_1(1-\ep^{\alpha_1})
\frac{G(u^*_1(1-\ep^{\alpha_1})(1-\frac{|x|}{\underline{h}(t)}))}
{u^*_1(1-\ep^{\alpha_1})(1-\frac{|x|}{\underline{h}(t)})}-\ep u^*_2(1-\ep^{\alpha_2})\right]\\
&&\ge(1-\frac{|x|}{\underline{h}(t)})\left[-bu^*_2(1-\ep^{\alpha_2})+u^*_1(1-\ep^{\alpha_1})
\frac{ab}{c}(1+\ep^{\alpha_1})-\ep u^*_2(1-\ep^{\alpha_2})\right]\\
&&=(1-\frac{|x|}{\underline{h}(t)})\left[bu^*_2\ep^{\alpha_2}-
\frac{abu^*_1}{c}\ep^{2\alpha_1}-\ep u^*_2(1-\ep^{\alpha_2})\right]\\
&&=(1-\frac{|x|}{\underline{h}(t)})\ep^{\alpha_2}\left[bu^*_2-
\frac{abu^*_1}{c}\ep^{2\alpha_1-\alpha_2}-\ep^{1-\alpha_2} u^*_2(1-\ep^{\alpha_2})\right]>0
\eess
provided that $\ep$ is suitably small. With \eqref{4.4}, \eqref{4.5} and \eqref{4.6} in hand, we can similarly obtain that for small $K>0$,
\bess
&&d_1\int_{-\underline{h}(t)}^{\underline{h}(t)}J_1(x-y)\underline u_1(t,y)\dy-d_1\underline u_1-a\underline u_1+c\underline u_2\\
&&\ge\frac{\ep}{2}\int_{-\underline{h}(t)}^{\underline{h}(t)}J_1(x-y)\underline u_1(t,y)\dy\ge\hat{C}u^*_1(1-\ep^{\alpha_1})\underline h^{1-\gamma}(t)\ge\frac{u^*_1(1-\ep^{\alpha_1})K\underline{h}^{1-\gamma}}{\gamma-1}\ge\underline{u}_{1t},
\eess
and
\bess
&&d_2\int_{-\underline{h}(t)}^{\underline{h}(t)}J_2(x-y)\underline u_2(t,y)\dy-d_2\underline u_2-b\underline u_2+G(\underline u_1)\\
&&\ge\frac{\ep}{2}\int_{-\underline{h}(t)}^{\underline{h}(t)}J_2(x-y)\underline u_2(t,y)\dy
\ge\hat{C}u^*_2(1-\ep^{\alpha_2})\underline h^{1-\gamma}(t)\ge\frac{u^*_2(1-\ep^{\alpha_2})K\underline{h}^{1-\gamma}}{\gamma-1}\ge\underline{u}_{2t}.
\eess
Therefore the first two inequalities in \eqref{4.3} hold.
Since spreading happens, for such $K,\theta$ and $\ep$ as chosen above, there is a $T>0$ such that $-\underline{h}(0)\ge g(T)$, $\underline{h}(0)\le h(T)$ and $\underline{u}(0,x)\preceq (u^*_1(1-\ep^{\alpha_1}),u^*_2(1-\ep^{\alpha_2}))\preceq(u_1(T,x),u_2(T,x))$ in $[-\underline{h}(0),\underline{h}(0)]$. Hence \eqref{4.3} holds, and this step is finished.
{\bf Step 3:} We now prove that for any $0\le s(t)=(t\ln t)\,o(1)$,
\[\liminf_{t\to\yy}u_1(t,x)\ge u^*_1 ~ {\rm and } ~ \liminf_{t\to\yy}u_2(t,x)\ge u^*_2 ~ ~ {\rm uniformly ~ in ~ }|x|\le s(t).\]
For fixed $0<\frac{\alpha_2}{2}<\alpha_1<\alpha_2<1$ and small $\ep>0$, define
\bess
&&\underline{h}(t)=K(t+\theta)\ln(t+\theta) ~ {\rm and}\\ &&\underline{u}(t,x)=(\underline{u}_1(t,x),\underline{u}_2(t,x))
=\bigg(u^*_1(1-\ep^{\alpha_1})\min\{1,\frac{\underline{h}(t)-|x|}{(t+\theta)^{1/2}}\},
u^*_2(1-\ep^{\alpha_2})\min\{1,\frac{\underline{h}(t)-|x|}{(t+\theta)^{1/2}}\}\bigg)
\eess
with $t\ge0$, $x\in[-\underline{h}(t),\underline{h}(t)]$, and $K,\theta>0$ to be determined later.
We next prove that for small $\ep>0$, there exist proper $T,K$ and $\theta>0$ such that
\bes\left\{\begin{aligned}\label{4.8}
&\partial_t\underline u_1\le d_1\int_{-\underline{h}(t)}^{\underline{h}(t)}J_1(x-y)\underline u_1(t,y)\dy-d_1\underline u_1-a\underline u_1+c\underline u_2,\\
&\hspace{25mm}t>0,~x\in(-\underline h(t),\underline h(t))\setminus\{\underline{h}(t)-(t+\theta)^{1/2}\},\\
&\partial_t\underline u_2\le d_2\int_{-\underline{h}(t)}^{\underline{h}(t)}J_2(x-y)\underline u_2(t,y)\dy-d_2\underline u_2-b\underline u_2+G(\underline u_1),\\
&\hspace{25mm}t>0,~x\in(-\underline h(t),\underline h(t))\setminus\{\underline{h}(t)-(t+\theta)^{1/2}\},\\
&\underline u_i(t,\pm\underline h(t))\le0,& &\hspace{-25mm}t>0, ~ i=1,2,\\
&-\underline{h}'(t)\ge-\sum_{i=1}^{2}\mu_i\int_{-\underline{h}(t)}^{\underline h(t)}\int_{-\yy}^{-\underline h(t)}
J_i(x-y)\underline u_i(t,x)\dy\dx,& &\hspace{-25mm}t>0,\\
&\underline h'(t)\le\sum_{i=1}^{2}\mu_i\int_{-\underline h(t)}^{\underline h(t)}\int_{\underline h(t)}^{\infty}
J_i(x-y)\underline u_i(t,x)\dy\dx,& &\hspace{-25mm}t>0,\\
&-\underline{h}(0)\ge g(T),~ \underline h(0)\le h(T);\;\;\underline u_1(0,x)\le u_1(T,x),\;\;\underline u_2(0,x)\le u_{2}(T,x),&&x\in[-\underline{h}(0),\underline{h}(0)].
\end{aligned}\right.
\ees
Once \eqref{4.8} is derived, this step can be completed similarly. It is not hard to verify that \eqref{4.5} and \eqref{4.6} remain valid for small $\ep>0$. Then, by following lines similar to the proof of Theorem \ref{t1.2}, we immediately verify \eqref{4.8}. The details are omitted. The proof is complete.
\end{proof}
On the other hand, noticing that the growth rate of the infectious agents may be given by a concave nonlinearity, Hsu and Yang \cite{HY} recently proposed the following variation of model \eqref{4.1}:
\bes\label{4.a}
\left\{\begin{aligned}
&\partial_tu_1=d_1\Delta u_1-au_1+H(u_2), & &t>0,~x\in\Omega,\\
&\partial_tu_2=d_2\Delta u_2-bu_2+G(u_1), & &t>0,~x\in\Omega,
\end{aligned}\right.
\ees
where $H(u_2)$ and $G(u_1)$ satisfy $H,G\in C^2([0,\yy))$, $H(0)=G(0)=0$, $H',G'>0$ in $[0,\yy)$, $H^{''},G^{''}<0$ in $(0,\yy)$, and $G(H(\hat z)/a)<b\hat{z}$ for some $\hat{z}>0$.
Examples of such $H$ and $G$ are $H(z)=\alpha z/(1+z)$ and $G(z)=\beta \ln(z+1)$ with $\alpha,\beta>0$ and $\alpha\beta>ab$. Based on the above assumptions, it is easy to show that if $0<H'(0)G'(0)/(ab)\le1$, the unique nonnegative constant equilibrium is $(0,0)$, while if $H'(0)G'(0)/(ab)>1$, there are exactly two nonnegative constant equilibria, namely $(0,0)$ and $(u^*_1,u^*_2)\succ{\bf0}$.
Some further results about \eqref{4.a} can be found in \cite{HY} and \cite{WH}.
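For the illustrative choices $H(z)=\alpha z/(1+z)$ and $G(z)=\beta\ln(z+1)$, the positive equilibrium $(u^*_1,u^*_2)$ can be located by a simple fixed-point iteration; the parameters below are hypothetical, chosen so that $H'(0)G'(0)/(ab)=\alpha\beta/(ab)>1$.

```python
import math

# Fixed-point iteration for u1 = H(u2)/a, u2 = G(u1)/b with the
# illustrative H, G; all parameter values are hypothetical.
a, b = 1.0, 1.0
alpha, beta = 2.0, 2.0           # alpha*beta > a*b

H = lambda z: alpha * z / (1.0 + z)
G = lambda z: beta * math.log(z + 1.0)

u1, u2 = 1.0, 1.0
for _ in range(200):
    u1, u2 = H(u2) / a, G(u1) / b

# residuals of the equilibrium equations -a*u1 + H(u2) = 0, -b*u2 + G(u1) = 0
r1 = -a * u1 + H(u2)
r2 = -b * u2 + G(u1)
print(abs(r1) < 1e-8 and abs(r2) < 1e-8, u1 > 0 and u2 > 0)  # True True
```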
Motivated by the above works, Nguyen and Vo \cite{NV} very recently incorporated nonlocal diffusion and free boundary into model \eqref{4.a}, and thus obtained the following problem
\bes\left\{\begin{aligned}\label{4.b}
&\partial_tu_1=d_1\int_{g(t)}^{h(t)}J_1(x-y)u_1(t,y)\dy-d_1u_1-au_1+H(u_2), & &t>0,~x\in(g(t),h(t)),\\
&\partial_tu_2=d_2\int_{g(t)}^{h(t)}J_2(x-y)u_2(t,y)\dy-d_2u_2-bu_2+G(u_1), & &t>0,~x\in(g(t),h(t)),\\
&u_i(t,g(t))=u_i(t,h(t))=0,& &t>0, ~ i=1,2,\\
&g'(t)=-\sum_{i=1}^{2}\mu_i\int_{g(t)}^{h(t)}\int_{-\yy}^{g(t)}
J_i(x-y)u_i(t,x)\dy\dx,& &t>0,\\
&h'(t)=\sum_{i=1}^{2}\mu_i\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}
J_i(x-y)u_i(t,x)\dy\dx,& &t>0,\\
&-g(0)=h(0)=h_0>0;\;\;u_1(0,x)=u_{10}(x),\;\;u_2(0,x)=u_{20}(x),&&x\in[-h_0,h_0].
\end{aligned}\right.
\ees
They proved that problem \eqref{4.b} has a unique global solution and that its dynamics are also governed by a spreading-vanishing dichotomy. We now give more accurate estimates on the long-time behavior of the solution to \eqref{4.b}. Assume $H'(0)G'(0)/(ab)>1$ and define
\[F(u_1,u_2)=(f_1(u_1,u_2),f_2(u_1,u_2))=(-au_1+H(u_2),-bu_2+G(u_1)) ~ ~ {\rm and } ~ ~ {\bf u^*}=(u^*_1,u^*_2).\]
One can easily check that {\bf (f1)}-{\bf(f6)} hold with ${\bf \hat{u}}={\bf\yy}$. Thus Theorems \ref{t1.1} and \ref{t1.2} are valid for the solution of problem \eqref{4.b}. For the readers' convenience, we list the results below.
\begin{theorem}\label{t4.a}Let $(u_1,u_2,g,h)$ be a solution of \eqref{4.b} and $m_0=m=2$ in conditions {\bf (J1)} and ${\bf(J^\gamma)}$. If spreading happens, then
\bess\left\{\begin{aligned}
&\hspace{-2mm}\lim_{t\to\yy}\max_{|x|\le ct}[|u_1(t,x)-u^*_1|+|u_2(t,x)-u^*_2|]=0 ~ {\rm for ~ any ~ } c\in(0,c_0) ~ ~ {\rm if ~ {\bf(J1)} ~ holds},\\
&\hspace{-2mm}\lim_{t\to\yy}\max_{|x|\le ct}[|u_1(t,x)-u^*_1|+|u_2(t,x)-u^*_2|]=0 ~ {\rm for ~ any ~ } c>0 ~ ~ {\rm if ~ {\bf(J1)} ~ does ~ not ~ hold},\\
&\hspace{-2mm}\lim_{t\to\yy}\max_{|x|\le s(t)}[|u_1(t,x)-u^*_1|+|u_2(t,x)-u^*_2|]=0 ~ {\rm for ~ any } ~s(t)=t^{\frac{1}{\gamma-1}}o(1) \; {\rm if ~ {\bf(J^\gamma)} ~ holds ~ for } ~\gamma\in(1,2),\\
&\hspace{-2mm}\lim_{t\to\yy}\max_{|x|\le s(t)}[|u_1(t,x)-u^*_1|+|u_2(t,x)-u^*_2|]=0 \; {\rm for ~ any \; } s(t)=(t\ln t) o(1) \; {\rm if ~ {\bf(J^\gamma)} ~ holds ~ for } ~\gamma=2.
\end{aligned}\right.\eess
where $c_0$ is uniquely determined by the semi-wave problem \eqref{1.3}-\eqref{1.4} with $m_0=m=2$.
\end{theorem}
{\bf Example 2.} Our second example is the following West Nile virus model with nonlocal diffusion and free boundaries
\bes\left\{\begin{aligned}\label{4.9}
&H_t=d_1\int_{g(t)}^{h(t)}J_1(x-y)H(t,y)\dy-d_1H+a_1(e_1-H)V-b_1H, & &t>0,~x\in(g(t),h(t)),\\
&V_t=d_2\int_{g(t)}^{h(t)}J_2(x-y)V(t,y)\dy-d_2V+a_2(e_2-V)H-b_2V, & &t>0,~x\in(g(t),h(t)),\\
&H(t,x)=V(t,x)=0,& &t>0, ~ x\in\{g(t),h(t)\},\\
&g'(t)=-\mu\int_{g(t)}^{h(t)}\int_{-\yy}^{g(t)}
J_2(x-y)H(t,x)\dy\dx,& &t>0,\\
&h'(t)=\mu\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}
J_2(x-y)H(t,x)\dy\dx,& &t>0,\\
&-g(0)=h(0)=h_0>0;\;\;H(0,x)=u_{10}(x),\;\;V(0,x)=u_{20}(x),&&x\in[-h_0,h_0],
\end{aligned}\right.
\ees
where $J_i$ satisfy {\bf(J)}, and $d_i,a_i,b_i,e_i$ and $\mu$ are positive constants. $H(t,x)$ and $V(t,x)$ are the densities of the infected bird (host) and mosquito (vector)
populations, respectively. Since \eqref{4.9} is a simplified model, we refer to \cite{ALZ,LRD,LH,WHD} for its biological interpretation. Moreover, the dynamics of this model have recently been studied in \cite{DNwn}, where the authors proved that they are governed by the following spreading-vanishing dichotomy:
\sk{\rm(i)}\, \underline{Spreading:} $\lim_{t\to\yy}-g(t)=\lim_{t\to\yy}h(t)=\yy$ (necessarily $\mathcal{R}_0=\sqrt{\frac{a_1a_2e_1e_2}{b_1b_2}}>1$) and
\[\lim_{t\to\yy}(H(t,x),V(t,x))=(\frac{a_1a_2e_1e_2-b_1b_2}{a_1a_2e_2+b_1a_2},\frac{a_1a_2e_1e_2-b_1b_2}{a_1a_2e_1+a_1b_2}) ~ ~ {\rm locally ~ uniformly ~ in~} \mathbb{R}.\]
\sk{\rm(ii)}\, \underline{Vanishing:} $\lim_{t\to\yy}\big(h(t)-g(t)\big)<\yy$ and \[\lim_{t\to\yy}\bigg(\|H(t,\cdot)\|_{C([g(t),h(t)])}+\|V(t,\cdot)\|_{C([g(t),h(t)])}\bigg)=0.\]
Assume that spreading happens (necessarily $a_1a_2e_1e_2>b_1b_2$). Then we have
\bess
&F(u)=(f_1(u_1,u_2),f_2(u_1,u_2))=(a_1(e_1-u_1)u_2-b_1u_1,\ a_2(e_2-u_2)u_1-b_2u_2),\\
&{\bf u^*}=(u^*_1,u^*_2)=(\frac{a_1a_2e_1e_2-b_1b_2}{a_1a_2e_2+b_1a_2},\frac{a_1a_2e_1e_2-b_1b_2}{a_1a_2e_1+a_1b_2}).
\eess
One easily checks that conditions {\bf (f1)}-{\bf(f6)} hold for $F$ with ${\bf \hat{u}}=(e_1,e_2)$. Hence the more accurate long-time behavior of the solution to \eqref{4.9} can be summarized as follows.
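Before stating the theorem, one can verify directly that ${\bf u^*}$ annihilates $F$; the sketch below does so for one hypothetical parameter set satisfying $a_1a_2e_1e_2>b_1b_2$.

```python
# Direct check that F(u*) = (0, 0) for sample parameters; values illustrative.
a1, a2, e1, e2, b1, b2 = 2.0, 1.0, 1.0, 1.0, 0.5, 0.5   # a1*a2*e1*e2 > b1*b2

u1 = (a1 * a2 * e1 * e2 - b1 * b2) / (a1 * a2 * e2 + b1 * a2)
u2 = (a1 * a2 * e1 * e2 - b1 * b2) / (a1 * a2 * e1 + a1 * b2)

f1 = a1 * (e1 - u1) * u2 - b1 * u1
f2 = a2 * (e2 - u2) * u1 - b2 * u2
print(abs(f1) < 1e-12, abs(f2) < 1e-12)  # True True
```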
\begin{theorem}\label{t4.2}Let $(H,V,g,h)$ be a solution of \eqref{4.9} and $m_0=m=2$ in conditions {\bf (J1)} and ${\bf(J^\gamma)}$. If spreading happens, then
\bess\left\{\begin{aligned}
&\hspace{-2mm}\lim_{t\to\yy}\max_{|x|\le ct}[|H(t,x)-u^*_1|+|V(t,x)-u^*_2|]=0 ~ {\rm for ~ any ~ } c\in(0,c_0) ~ ~ {\rm if ~ {\bf(J1)} ~ holds},\\
&\hspace{-2mm}\lim_{t\to\yy}\max_{|x|\le ct}[|H(t,x)-u^*_1|+|V(t,x)-u^*_2|]=0 ~ {\rm for ~ any ~ } c>0 ~ ~ {\rm if ~ {\bf(J1)} ~ does ~ not ~ hold},\\
&\hspace{-2mm}\lim_{t\to\yy}\max_{|x|\le s(t)}[|H(t,x)-u^*_1|+|V(t,x)-u^*_2|]=0 ~ {\rm for ~ any } ~s(t)=t^{\frac{1}{\gamma-1}}o(1) \; {\rm if ~ {\bf(J^\gamma)} ~ holds ~ for } ~\gamma\in(1,2),\\
&\hspace{-2mm}\lim_{t\to\yy}\max_{|x|\le s(t)}[|H(t,x)-u^*_1|+|V(t,x)-u^*_2|]=0 \; {\rm for ~ any \; } s(t)=(t\ln t) o(1) \; {\rm if ~ {\bf(J^\gamma)} ~ holds ~ for } ~\gamma=2.
\end{aligned}\right.\eess
where $c_0$ is uniquely determined by the semi-wave problem \eqref{1.3}-\eqref{1.4} with $m_0=m=2$.
\end{theorem}
\section{Introduction}
We give an algorithm which is at the heart of a type
diagnosis system for a higher-order concurrent
constraint language, viz.~the $\gamma$ calculus
\cite{Smolka:GammaCalculus:94} which is the underlying
operational model of the programming language Oz \cite{ozdoc}.
The algorithm decides satisfiability of
constraints
containing equations $x{=} y$ and $x{=} f(\ol{y})$,
and weak subsumption\ constraints $\MS{x}{y}$ over infinite constructor
trees with free variables.
The algorithm is given fully in terms of constraint simplification.
On the one hand, this attests to the close
relationship between type inference and constraint solving
(e.g.,
\cite{Wand:Simple:87,AikenWimmers:TypeInclusion:93,KozenPalsbergSchwartzbach:JCSS:94}
and many others).
On the other hand it establishes yet another correspondence
between unification problems arising from polymorphic type inference
and unification based grammar formalisms:
The most prominent one is the equivalence of
type checking polymorphic recursion \cite{Mycroft:84,Henglein:88}
with semi-unification \cite{KfouryTiurynUrz:93,DoerreRounds:LICS:90}
both of which are
undecidable in general. To avoid this undecidability,
we chose a weaker instance relation to give semantics to
$\MS{x}{y}$. For example, we allow $f(a\: b)$ as an instance
of $f(x\: x)$ even if $a\neq b$. On the type
side, this type of constraints maintains some of the polymorphic
flavour, but abandons full parametric
polymorphism~\cite{MuellerNiehren:Member:94}.
We start out from the set of
infinite constructor trees with holes (free variables).
We give a semantics which interprets the tree assigned
to a variable dually: as itself
and as the set of its ``weak'' instances.
Our algorithm terminates, and can be shown to
be correct and complete under this semantics.
The decidability problem for our constraints turned out
to be equivalent to weak subsumption over feature graphs
solved by D\"orre \cite{Doerre:WeakSubsumption:94}
for
feature graphs with feature (but no arity) constraints.
However, only half of D\"orre's two-step
solution is a constraint solving
algorithm. The second step relies on the
equivalence of non-deterministic and deterministic
finite state automata. In contrast, our algorithm
decides satisfiability in a completely incremental
manner and is thus amenable to being integrated
into a concurrent constraint language like Oz~\cite{ozdoc}
or AKL~\cite{JansonHaridi:91}.
The extension of our algorithm towards
feature trees is easily possible
(see \cite{MuellerNiehren:Member:94}).
This allows type diagnosis for
records~\cite{SmolkaTreinen:92} and objects.
An entirely set-based semantics allows
to naturally extend the algorithm
to a full-fledged type diagnosis system,
covering -- among other aspects -- sorts, disjunctive types,
and recursive data type declarations
\cite{NiehrenPodelskiTreinen:93}.
\paragraph{Type Diagnosis.}
As an illustrating example for the form of type diagnosis
we have in mind, consider the following $\gamma$ program:
\[
\exists x \exists y\exists z\exists p\ \
\abstr{p}{u}{v}{v{=} cons(x\: u)} \apc
\appl{p}{y}{y} \apc x{=} f(y\: z)
\]
This program declares four variables $x,y,z$, and $p$. It defines
a relational abstraction $p$, which states that its two arguments
$u$ and $v$ are related through the equation $v=cons(x\: u)$.\footnote{
Note that
$\abstr{p}{u}{v}{v{=} cons(x\: u)}$ is different from a
named $\lambda$ abstraction $p = \alam{u}{cons(x\: u)}$
because it is relational rather than functional,
and also different to the Prolog program
$p(u,v)\ \mbox{:--}\ v=cons(x\: u).$,
because Prolog does not allow
variables to be global wrt.~a predicate
but rather existentially quantifies $x$.
}
Furthermore, it states the equality $x{=} f(y\: z)$ and
applies $p$ to $y\:y$.
This application $\appl{p}{y}{y}$ reduces to a copy of
the abstraction $p$, with
the formal arguments $u\:v$ replaced by the actual ones $y\:y$:
\[\begin{array}{lcl}
& \exists x \exists y\exists z\exists p\ \
\abstr{p}{u}{v}{v{=} cons(x\: u)} \apc
\appl{p}{y}{y} \apc x{=} f(y\: z) \\
\to &
\exists x \exists y\exists z\exists p\ \
\abstr{p}{u}{v}{v{=} cons(x\: u)} \apc
y{=} cons(x\: y) \apc x{=} f(y\: z)
\end{array}\]
Observe how the abstraction $p$ is defined by
reference to the global variable $x$, while
the value of $x$ is defined through an application
of $p$: $\appl{p}{y}{y} \apc x{=} f(y\: z)$.
Such a cycle is specific to the $\gamma$ calculus
since no other language offers
explicit declaration of logic variables global to an abstraction
(be it logic, functional, or concurrent
languages, e.g., Prolog, ML \cite{HarperMacQueenMilner:86},
or Pict \cite{PierceTurner:Pict:95}).
The types of the variables involved are described by
the following constraint.\footnote{
The formal account of the derivation of type constraints from programs
will be given in \cite{Mueller:96}.
}
For ease of reading, we slightly abuse notation and pick
the type variables identical to the
corresponding object variables:
\[
p {=} \langle u\: v\rangle \apc
v{=} cons(x\: u) \apc
\MS{y}{u} \apc \MS{y}{v} \apc
x{=} f(y\:z)
\]
$\langle u\: v\rangle$ is the relational type of $p$,
and the application gives rise to the constraint
$\MS{y}{u} \apc \MS{y}{v}$,
which says that $y$ is constrained by both formal arguments
of the procedure $p$.
The subconstraint
$x{=} f(y\: z) \apc \MS{y}{v} \apc v{=} cons(x\: u)$
reflects the cyclic dependency between $x$ and $p$. It says
that $y$ be in the set of instances of
$v$ which depends through $v{=} cons(x\: u)$
on $x$, and at the same time that $x$ should be exactly
$f(y\: z)$.
Type diagnosis along this line
is discussed in depth in \cite{MuellerNiehren:Member:94}.
\paragraph{Related Work.}
Apart from the already mentioned work,
related work includes investigations about membership
constraints (e.g., \cite{NiehrenPodelskiTreinen:93}),
type analysis for untyped languages (Soft Typing)
\cite{AikenWimmers:TypeInclusion:93,%
CartwrightFagan:91,WrightCartwright:Scheme:93},
constraint-based program analysis
\cite{KozenPalsbergSchwartzbach:JCSS:94}
and the derivation of recursive sets from programs~\cite{Fruehwirth:91}.
For proofs and a detailed discussion of related work
see~\cite{MuellerNiehren:Member:94}.
\paragraph{Plan of the Paper.}
This paper is structured as follows. In Section \ref{sec:notation}
below we present our constraints along with their semantics and
give necessary notation. Section \ref{sec:problem} gives a simple
algorithm which is correct but non-terminating.
Section \ref{sec:algorithm} gives the rules of the full algorithm.
Section \ref{sec:outlook} concludes and gives a brief outlook.
\section{Constraints and Semantics}
\label{sec:notation}
We assume a signature $\Sigma$ of function
symbols with at least two elements ranged over by
$f,g,h,a,b,c$ and an infinite
set of {\em base variables\/} $\B\V$ ranged over by
$\Base{\chi}$. If $V$ is a further set of variables then
${\IT{V}}$ stands for the set of all finite or infinite \Def{trees}
over signature $\Sigma$ and variables $V$. Trees of $\IT{V}$
are always ranged over by $s$ and $t$. The set of variables
occurring in a tree $t$ is denoted by ${\cal{V}}(t)$.
Sequences of variables are written as $\ol{x}$, or $\ol{\chi}$.
We build constraints over a set of
\Def{constraint variables} ranged over by $x$, $y$,
$z$, $u$, $v$, $w$. Constraint variables must contain at least
base variables. The syntax of
our \Def{constraints} $\phi$, $\psi$ is as follows:
\[\begin{array}{rcl@{\hskip2em\relax \mbox{and} \hskip2em\relax}rcl}
x,y & ::= & \chi &
\phi,\psi & ::=& x{=} y \Space{\mid}{.15}
x{=} f(\ol{y}) \Space{\mid}{.15} \MS{x}{y} \Space{\mid}{.15}
\phi \apc \psi
\end{array}\]
As {\em atomic constraints\/} we consider equations
$x{=} y$ or $x{=} f(\ol{y})$ and weak subsumption\
constraints $\MS{x}{y}$. Constraints are atomic
constraints closed under conjunction. \Def{First-order
formulae} build over constraints $\phi$ are denoted by
$\Phi$. We define $\congr{}$ to be the least binary
relation on $\phi$ such that $\apc$ is associative
and commutative. For convenience, we shall use the
following notation:
\[
\begin{array}{r@{\hskip2em\relax}c@{\hskip1em\relax}l}
\mbox{$\phi$ in $\psi$}& \mbox{iff} &\mbox{there exists $\phi'$ with
$\phi\apc\phi' \congr{} \psi$}
\end{array}
\]
As semantic structures we pick \Def{tree-structures}
which we also call $\IT{V}$ for some set $V$. The domain of a
tree-structure $\IT{V}$ is the set of trees $\IT{V}$.
Its interpretation is defined by
$\: f^{\IT{V}}(\ol{t}) = f(\ol{t})$. We define the application
$f(\ol{T})$ of $f$ to a sequence of sets of trees $\ol{T}$
elementwise, $f(\ol{T})= \{f(\ol{t}) \mid \ol{t}\in \ol{T}\}$.
Given a tree $s\in \IT{V}$, the set $\inst{V}{s}$
of \Def{weak instances of $s$} is defined as the greatest fixed point of:
\[
\inst{V}{s} = \left\{\begin{array}{ll}
\IT{V} & \mbox{if $s = x$ for some $x$} \\
f(\ol{\inst{V}{s'}}) & \mbox{if $s = f(\ol{s'})$ for some $\ol{s'}$}
\end{array}\right.
\]
Notice that this definition implies
$f(a\ b) \in \inst{V}{f(x\ x)}$, even if $a\not= b$.
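As an illustrative aside (our own sketch, not part of the paper's formal development), the fixed-point definition of $\inst{V}{s}$ can be unfolded into a recursive membership test when both trees are finite. The tuple encoding of trees below is an assumption made only for this sketch:

```python
# Sketch only: finite constructor trees are encoded as nested tuples
# ("f", child1, ..., childn); a variable x is encoded as ("var", "x").

def is_var(s):
    return s[0] == "var"

def is_weak_instance(t, s):
    """Check t in Inst(s) by unfolding the definition of weak instances."""
    if is_var(s):
        return True                    # Inst(x) is the set of all trees
    if is_var(t):
        return False                   # a variable is not of the form f(...)
    if t[0] != s[0] or len(t) != len(s):
        return False                   # label or arity mismatch
    return all(is_weak_instance(tc, sc) for tc, sc in zip(t[1:], s[1:]))

a, b, x = ("a",), ("b",), ("var", "x")
print(is_weak_instance(("f", a, b), ("f", x, x)))   # True: sharing of x is ignored
```

Note that the two occurrences of $x$ are matched independently, which is exactly what distinguishes weak from strong instances.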
Let $V_1$, $V_2$ be two sets whose elements we call
variables. A \Def{$V_1$-$V_2$-substitution} $\sigma$ is a
mapping from $V_1$ to $\IT{V_2}$. By homomorphic extension, every
substitution can be extended to a mapping from $\IT{V_1}$
to $\IT{V_2}$.
The set of \Def{strong instances of $s$}
is defined by
$
\ainst{V}{s} =
\{\sigma(s) \:|\: \mbox{$\sigma$ is a ${\cal{V}}(s)$-$V$-substitution}\}
$.
Note that $\ainst{V}{s} \subseteq \inst{V}{s}$, and that
$f(a\: b)\not\in \ainst{V}{f(x\: x)}$ if $a\neq b$.
Using $\ainst{V}{s}$ instead of $\inst{V}{s}$ would
make satisfiability of our constraints equivalent to
semi-unification and undecidable
\cite{Kfoury:SemiUnification:90,DoerreRounds:LICS:90}.
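For contrast, strong instances can be sketched as ordinary one-way matching, where a substitution is accumulated and shared variables must agree (again our own encoding and function names, chosen only for illustration):

```python
# Sketch only: same tuple encoding as before; ("var", "x") is a variable.

def strong_match(t, s, sub):
    """Try to extend sub so that applying sub to s yields t; None on failure."""
    if s[0] == "var":
        if s[1] in sub:
            return sub if sub[s[1]] == t else None   # shared variables must agree
        return {**sub, s[1]: t}
    if t[0] == "var" or t[0] != s[0] or len(t) != len(s):
        return None                                  # label or arity mismatch
    for tc, sc in zip(t[1:], s[1:]):
        sub = strong_match(tc, sc, sub)
        if sub is None:
            return None
    return sub

def is_strong_instance(t, s):
    return strong_match(t, s, {}) is not None

a, b, x = ("a",), ("b",), ("var", "x")
print(is_strong_instance(("f", a, b), ("f", x, x)))   # False: x cannot be both a and b
print(is_strong_instance(("f", a, a), ("f", x, x)))   # True
```

The failing first call is precisely the example from the text: $f(a\: b)$ is a weak but not a strong instance of $f(x\: x)$.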
Let $\sigma$ be a $V_1$-$V_2$-substitution,
$\{x,y,\ol{z}\}\subset V_1$, and $\phi,\psi$
constraints such that ${\cal{V}}(\phi)\subseteq V_1$, ${\cal{V}}(\psi)\subseteq V_1$.
Then we define:\label{semantics}
\[\begin{array}{lcl@{\hskip2em\relax\qquad} lcl}
\models_{\sigma} x{=} y &
\mbox{iff} & \sigma(x) {=} \sigma(y) &
\models_{\sigma} x{=} f(\ol{z})&
\mbox{iff} & \sigma(x) {=} f^{\IT{V_2}} (\ol{\sigma(z)}) \\
\models_{\sigma} \MS{x}{y} &
\mbox{iff} & \inst{V_2}{\sigma(x)}\subseteq \inst{V_2}{\sigma(y)} &
\models_{\sigma} \phi \wedge \psi & \mbox{iff} &
\models_{\sigma} \phi \mbox{ and } \models_{\sigma} \psi
\end{array}\]
A \Def{$V_1$-$V_2$-solution} of $\phi$ is a $V_1$-$V_2$-substitution
satisfying $\models_{\sigma} \phi$. A constraint $\phi$ is called
\Def{satisfiable}, if there exists a $V_1$-$V_2$-solution for
$\phi$. The notion of $\models_\sigma$ extends to arbitrary first-order
formulae $\Phi$ in the usual way.
We say that a formula $\Phi$ is \Def{valid}, if $\models_\sigma
\Phi$ holds for all $V_1$-$V_2$-substitutions $\sigma$ with
${\cal{V}}(\Phi)\subseteq V_1$. In symbols, $\models \Phi$.
Our setting is a conservative extension of
the usual rational unification problem. This means
that free variables in the semantic domain do not affect
equality constraints. A constraint $\phi$ is \Def{satisfiable in the
tree-model $\IT{V}$}, if there
exists a $\B\V$-$V$-solution of $\phi$. The trees of
$\IT{\emptyset}$ are called \Def{ground trees}.
\begin{proposition}
\label{as usual}
Suppose $\phi$ not to contain weak subsumption\ constraints.
Then $\phi$ is satisfiable if and only if it is satisfiable in
the model of ground trees.
\end{proposition}
The statement would be wrong for $\phi$'s
containing weak subsumption constraints. For instance,
consider the following $\phi$ with $a\not= b$:
\[
\phi \Space{\con}{.3} \MS{x}{z} \apc \MS{y}{z} \apc x{=} a \apc y{=} b\:
\]
This $\phi$ is not satisfiable in the model of ground trees,
since the set $\inst{\emptyset}{t}$ is a singleton for
all ground trees $t$, whereas any $V_1$-$V_2$-solution $\sigma$ of
$\phi$ has to satisfy
$
\{a,b\} \subseteq \inst{V_2}{\sigma(z)}
$.
However, there exists a $\{x,y,z\}$-$\{v\}$-solution
$\sigma$ of $\phi$, where $\{v\}$ is a singleton:
$
\sigma(x) \Sp{=} a\:,
\sigma(y) \Sp{=} b\:,
\sigma(z) \Sp{=} v\:
$.
\begin{proposition}\label{lemma-eq-implies-in}
For all $x$, $y$, $z$, $u$, $v$ the following statements hold:
\[ \begin{array}{ll@{\hskip2em\relax}ll@{\hskip2em\relax}ll}
1) &\models\: x{=} y \rel \MS{x}{y}\:, &
2) &\models\: \MS{x}{y}\apc \MS{y}{z} \rel \MS{x}{z}\:, &
3) &\models\: x{=} f(\ol{y}) \rel \MS{x}{f(\ol{y})}\:, \\
4) &\not\models\: \MS{x}{y} \apc \MS{y}{x} \rel x{=} y &
5) &
\multicolumn{3}{l}{
\not\models\: x{=} f(u\ v) \apc \MS{x}{y} \apc y{=} f(z\ z)
\rel u{=} v\:.
}
\end{array}\]
\end{proposition}
\paragraph{Weak Subsumption vs.~Sets of Weak Instances.}
In the remainder of this section we
compare our sets of weak instances with D\"orre's
notion of weak subsumption.
Let us consider constructor
trees as special feature trees with integer-valued features,
a distinguished feature
{\sf label}~(e.g., \cite{NiehrenPodelski:93,Backofen:94}),
and a distinguished feature {\sf arity}.
Given feature constraints $x[f]y$ saying that $x$
has direct subtree $y$ at feature $f$, the equation
$x{=} f(y_1\dots y_n)$ can be considered
equivalent to:\footnote{
This simpler encoding of constructor trees not using arity
constraints has been suggested by one of the referees.
}
\[
x[{\sf arity}]n
\apc x[{\sf label}]f \apc x[1]y_1 \apc \dots \apc x[n]y_n.
\]
Let us write $s[f]\!\!\downarrow$ to say that
the tree $s$ has some direct subtree at $f$.
A \Def{simulation} between $\IT{V_1}$ and $\IT{V_2}$ is a relation
$\Delta \subseteq \IT{{V_1}} \times \IT{{V_2}}$ satisfying:
If $(t,s) \in \Delta$ then
\begin{center}
\begin{tabular}{lp{10cm}}
\Ax{Arity Simulation} & If $t[{\sf label}]\!\!\downarrow$
and there is an $n$ such that $t[{\sf arity}]n$,
then $s[{\sf arity}]n$.\\
\Ax{Feature Simulation} &
If $t[f]\!\!\downarrow$, i.e., there is a tree $t'$
such that $t[f]t'$,
then $s[f]\!\!\downarrow$, i.e., there is a tree $s'$
such that $s[f]s'$ and
$(t',s') \in \Delta$.
\end{tabular}
\end{center}
Now, the weak subsumption preorder $\protect{\mbox{$\sqsupset\hspace{-1.07em}\raisebox{-0.5em}{$\sim$}$}}^V$ is defined by:
\[
\mbox{$t \protect{\mbox{$\sqsupset\hspace{-1.07em}\raisebox{-0.5em}{$\sim$}$}}^V s$\hskip2em\relax iff\hskip2em\relax
there is a simulation $\Delta\subseteq \IT{V}\times\IT{V}$ such that
$(s,t)\in\Delta$}
\]
We have the following lemma:
\begin{lemma}
For all constructor trees $s,t$ it holds that:
$\inst{V}{s} \subseteq \inst{V}{t}$ iff $s \protect{\mbox{$\sqsupset\hspace{-1.07em}\raisebox{-0.5em}{$\sim$}$}}^V t$.
\end{lemma}
A similar statement can be derived for the set of strong
instances
and a strong subsumption preorder
following~\cite{Doerre:WeakSubsumption:94}.
The difference between $\protect{\mbox{$\sqsupset\hspace{-1.07em}\raisebox{-0.5em}{$\sim$}$}}^V$ and D\"orre's
notion of weak subsumption is that he does not require
\AxText{Arity Simulation}, while we naturally do
since we start
from constructor trees. For type checking, constructor
trees seem more natural: For illustration note that
the arity of a procedure is essential type information.
\section{A Non-terminating Solution}
\label{sec:problem}
\begin{figure}[htb]
\framebox[\textwidth]{
$\begin{array}{lll}
\Ax{Decom}&
\Rule{x{=} f(\ol{u}) \apc \phi}
{\ol{u}{=} \ol{v} \apc \phi}
& \Pack{x{=} f(\ol{v}) \:\mbox{in}\: \phi.}
\vspace{4mm}\\
\Ax{Clash}&
\Rule{x{=} f(\ol{u}) \apc \phi}
{\bot}
& \Pack{x{=} g(\ol{v}) \:\mbox{in}\: \phi,\: and\: f\not= g.}
\vspace{4mm}\\
\Ax{Elim}&
\Rule{x{=} y \apc \phi}
{x{=} y \apc \phi\repl{y}{x}}
& \Pack{ x\in{\cal{V}}(\phi),\mbox{ and } x\not= y.}
\vspace{4mm}\\
\Ax{Descend} &
\Rule{\MS{x}{y}\apc \phi} {x{=} f(\ol{u}) \apc
\MS{\ol{u}}{\ol{z}}\apc \phi}
& \begin{array}{l}
\ol{u} \mbox{ fresh}, y{=} f(\ol{z}) \:\mbox{in}\: \phi.
\end{array}
\end{array}$
}
\caption{\label{fig:loop-algorithm}A Non-Terminating Algorithm}
\end{figure}
In order to solve our constraints one could come up
with the system
given in Figure \ref{fig:loop-algorithm}.
Besides the three usual unification rules for
rational trees, the only additional rule is (Descend).
This algorithm is correct and very likely to be complete
in that for an unsatisfiable constraint $\phi$ there is a derivation
from $\phi$ to $\bot$.
However, this intuitive algorithm loops due to the introduction
of new variables.
\[
\begin{array}{c@{\hskip2em\relax}l@{\hskip2em\relax}c}
\underline{\MS{x}{y} \apc y {=} f(x)}
& \raisebox{-0.5em}{Descend} &
\underline{\MS{x}{y} \apc y {=} f(y)}
\\
\underline{ x{=} f(x_1)\apc \MS{x_1}{x} \apc y {=} f(x)}
& \raisebox{-0.5em}{Descend} &
\underline{ x{=} f(x_1)\apc \MS{x_1}{y} \apc y {=} f(y)}
\\
\dots &&\ldots
\end{array}
\]
Note that some form of descending is necessary in order to derive the clash
from the inconsistent constraint
\mbox{$
y {=} f(u) \apc u{=} a \apc z {=} f(x) \apc
\MS{x}{y} \apc \MS{x}{z} \apc \phi
$}.
\section{Algorithm}
\label{sec:algorithm}
To consider trees with free variables as sets of instances
means that we need to compute intersections of such sets
and to decide their emptiness.
When we simplify $\MS{x}{y}\apc \MS{x}{z}$
in a context $\phi$, we have to compute the intersection of the
sets of instances of $y$ and $z$.
In order to avoid the introduction of new variables we
add a new class of variables to represent such intersections,
and one new constraint.
\Def{Intersection variables} are defined as nonempty
finite subsets of base variables. In order
capture the intended semantics, we
write $\chi_1{\cap}\ldots{\cap} \chi_n$ instead of
$\{\chi_1\}\cup \ldots \cup\{\chi_n\}$.
The equality $\congr{}$ on intersection variables is the equality
on powersets, which satisfies:
\[
x{\cap} y \congr{} y {\cap} x, \hskip1em\relax
(x{\cap} y){\cap} z \congr{} x{\cap} (y{\cap} z),\hskip1em\relax
x{\cap} x \congr{} x.
\]
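These are exactly the laws of finite nonempty sets, so a sketch that models intersection variables as frozensets of base-variable names (our own encoding, used only for this illustration) gets the equalities for free:

```python
# Sketch only: an intersection variable chi1 ∩ ... ∩ chin is modelled as
# the set {chi1, ..., chin}; building x ∩ y is then union of the name sets.

def inter(x, y):
    return x | y

x, y, z = frozenset({"x"}), frozenset({"y"}), frozenset({"z"})
print(inter(x, y) == inter(y, x))                      # commutativity
print(inter(inter(x, y), z) == inter(x, inter(y, z)))  # associativity
print(inter(x, x) == x)                                # idempotence
```

All three comparisons print `True`, mirroring the displayed equalities on $\congr{}$.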
We call $x$ a \Def{component} of $y$, if
$y\equiv \IS{x}{z}$ for some $z$.
The set of components of a variable $x$ is denoted by
$\VarComp{x}$. Note that \mbox{$\IS{x}{y}\in {\cal{V}}(\phi)$} implies
$x\in\VarComp{{\cal{V}}(\phi)}$ but in general not $x\in{\cal{V}}(\phi)$.
As additional constraint we introduce $\MS{x}{f(\ol{y})}$,
with the semantics:
\[
\models \: \MS{x}{f(\ol{y})} \leftrightarrow
\aexwb{u}{\MS{x}{u}\apc u{=} f(\ol{y})}\:.
\]
Complete semantics has to take care of
intersection variables such as $y{\cap} z$.
Constraint solving will propagate intersection variables
into most constraint positions. That is, our algorithm
actually operates on the following constraints:
\[\begin{array}{rcl@{\hskip2em\relax \mbox{and} \hskip2em\relax}rcl}
x,y & ::= & \chi \Space{\mid}{.15} x{\cap} y &
\phi,\psi & ::=& x{=} y \Space{\mid}{.15}
x{=} f(\ol{y}) \Space{\mid}{.15} \MS{x}{y} \Space{\mid}{.15}
\MS{x}{f(\ol{y})} \Space{\mid}{.15}
\phi \apc \psi
\end{array}
\]
However, if started with a constraint containing
only base variables, our algorithm maintains
this invariant for the equational constraints.
Let us call a variable $x$ \Def{immediately determined by $f$}
in $\phi$, written $x {\circ} f(\ol{y})$, if one of
$ x {=} f(\ol{y}) $ or $\MS{x}{f(\ol{y})} $
is in $\phi$ for some $f(\ol{y})$.
We say that $x$ is \Def{immediately determined} in $\phi$
if it is immediately determined by some $f$ in $\phi$.
Call $x$ \Def{determined}, written
$x \Less{\phi} f(\ol{u})$ if $x$ is immediately determined
in $\phi$, or
$ \MS{x}{\IS{y}{z}}$ and $y {\circ} f(\ol{u})$ are in $\phi$.
Obviously, if $x\Less{\phi} f(\ol{y})$, then the top-level
constructor of $x$ must be $f$.
We define the application of an operator
$\repl{y}{x}$ to intersection variables
only if $x$ is a base variable. If
$z \congr{} (\chi_1{\cap} \ldots {\cap} \chi_n)$, then
we define:
\[
z\repl{y}{x} \congr{} \chi_1\repl{y}{x}{\cap}\ldots{\cap}\chi_n\repl{y}{x}\:.
\]
We say that $\repl{y}{x}$ applied to
intersection variables performs \Def{deep substitution}.
The following property holds for deep substitution:
\[
\C({\cal{V}}(x{=} y \apc \phi)) \Sp{=} \C({\cal{V}}(x{=} y \apc \phi[y/x]))\:.
\]
Note however that $\:{\cal{V}}(x{=} y \apc \phi) \not=
{\cal{V}}(x{=} y \apc \phi[y/x])$ if $\phi \congr{} \MS{z}{x{\cap} y}$.
The variable $\IS{x}{y}$ is contained in the first but not in
the second set.
We can now specify our algorithm for constraint simplification.
It is given by the rules in Figure \ref{fig:unification} and
Figure \ref{fig:algorithm}.
\begin{figure}[hbt]
\framebox[\textwidth]{
$\begin{array}{lll}
\Ax{Decom}&
\Rule{x{=} f(\ol{u}) \apc \phi}
{\ol{u}{=} \ol{v} \apc \phi}
& \mbox{$x{=} f(\ol{v})$ in $\phi$.}
%
\vspace{4mm}\\
\Ax{Clash}&
\Rule{\phi}
{\bot}
& x\Less{\phi} f(\ol{u}),\: x{\cap} y \Less{\phi} g(\ol{v}),\:
\mbox{and }f\not= g.
\vspace{4mm}\\
\Ax{Elim}&
\Rule{x{=} y \apc \phi}
{x{=} y \apc \phi\repl{y}{x}}
& x\in\VarComp{{\cal{V}}(\phi)}\cap\B\V,\mbox{ and } x\not\equiv y.
\end{array}$
}
\caption{\label{fig:unification}Rational Tree Unification}
\end{figure}
The Rule \AxText{Decom} is known from usual unification
for rational trees. Up to the application condition
$x\in\VarComp{{\cal{V}}(\phi)}\cap\B\V$, this also applies to rule
\AxText{Elim}. This side condition accounts for
deep substitution. The \AxText{Clash} rule contains
as special cases:
\[
\Rule{x{=} f(\ol{y}) \apc x{=} g(\ol{z}) \apc \phi}{\bot}
f\not= g
\hskip2em\relax\mbox{and}\hskip2em\relax
\Rule{\MS{x}{f(\ol{y})} \apc \MS{x}{g(\ol{z})} \apc \phi}{\bot}
f\not= g\:.
\]
Its full power comes in interaction with the rules in
Figure \ref{fig:algorithm}. There it allows us to derive a clash
if a constructor is known for a variable $x$, and
a distinct constructor is derivable for
some variable $\IS{x}{y}$.
\begin{figure}[htb]
\framebox[\textwidth]{
$\begin{array}{lll}
\Ax{Propagate1}&
\Rule{\MS{\IS{x}{y}}{z} \apc \phi}
{\MS{\IS{x}{y}}{z{\cap} u} \apc\phi}
& \begin{array}{l}
\MS{x}{u}\:\mbox{in}\:\phi,\:z{\cap} u\not\equiv z.
\end{array}
\vspace{4mm}\\
\Ax{Propagate2}&
\Rule{\MS{\IS{x}{y}}{f(\ol{u})} \apc \phi}
{\MS{\IS{x}{y}}{f(\ol{u}{\cap}\ol{v})} \apc \phi}
& \begin{array}{l}
x\Less{\phi}f(\ol{v}),\:\ol{u}{\cap}\ol{v}\not\equiv \ol{u}.
\end{array}
\vspace{4mm}\\
\Ax{Collapse}&
\Rule{\MS{x}{\IS{y}{u}} \apc \phi}
{\MS{x}{\IS{y}{\IS{z}{u}}} \apc \phi}
&\begin{array}{l}
\MS{y}{z} \:\mbox{in}\: \phi,
\mbox{ and $\IS{y}{\IS{z}{u}} \not\equiv \IS{y}{z}$.}
\end{array}
\ignore{
\vspace{4mm}\\
\Ax{Intersect1}&
\Rule{\MS{x}{y}\apc \MS{x}{z}\apc \phi}
{\MS{x}{\IS{y}{z}} \apc\phi}
\vspace{4mm}\\
\Ax{Intersect2}&
\Rule{\MS{x}{f(\ol{y})}\apc \MS{x}{f(\ol{z})}\apc \phi}
{\MS{x}{f(\IS{\ol{y}}{\ol{z}})} \apc\phi}
}
\vspace{4mm}\\
\Ax{Descend1}&
\Rule{x {=} f(\ol{u}) \apc \phi}{x {=} f(\ol{u}) \apc
\MS{\ol{u}}{\ol{v}} \apc \phi}
& \begin{array}{l}
x \Less{\phi} f(\ol{v}), \\
\MS{\ol{u}}{\IS{\ol{v}}{\ol{w}}} \mbox{ not in } \phi
\end{array}
\vspace{4mm}\\
\Ax{Descend2}&
\Rule{\phi}
{\MS{\IS{x}{y}}{f(\ol{u})} \apc \phi}
&\begin{array}{l}
\IS{x}{y}\in{\cal{V}}(\phi),\: x\Less{\phi} f(\ol{u}),\\
\mbox{and not }
\IS{x}{y}{\circ} g(\ol{v}) \:\mbox{in}\: \phi.
\end{array}
\end{array}$
}
\caption{\label{fig:algorithm}Simplifying Membership Constraints}
\end{figure}
Rules \AxText{Propagate1} and \AxText{Propagate2}
propagate intersection variables into the right
hand side of weak subsumption\ constraints. The \AxText{Collapse}
rule collapses chains of variables related via
weak subsumption\ constraints. In other words, these rules propagate
lower bounds with respect to the weak subsumption\ relation.
The rules \AxText{Descend1} and \AxText{Descend2} replace
\AxText{Descend} from the non-terminating algorithm
in Figure \ref{fig:loop-algorithm}. The Descend rules
are the only rules introducing new weak subsumption\ constraints.
The rule \AxText{Descend2} introduces a constructor
for intersection-variables $x{\cap} y$ by adding a
constraint of the form $\MS{x{\cap} y}{f(\ol{u})}$.
If the rule is applied, then the intersection
of $x$ and $y$ is forced to be nonempty. Nonemptiness
is implied by $\phi$, if $\IS{x}{y}$ occurs in $\phi$
($\IS{x}{y}\in{\cal{V}}(\phi)$).
Note that \AxText{Descend1} and \AxText{Descend2} are
carefully equipped with side conditions for termination.
For example, the following derivations are {\bf not}
possible:
\[
\Rule{x{=} f(u)}
{x{=} f(u) \apc \MS{x}{f(u)}}
\hskip2em\relax
\Rule{\MS{x}{y} \apc x{=} f(x) \apc \MS{x}{f(y)}}
{\MS{x}{y} \apc \MS{x}{y} \apc x{=} f(x) \apc \MS{x}{f(y)}}
\hskip2em\relax
\Rule{x{=} f(y)}
{\MS{y}{y} \apc x{=} f(y)}\:.
\]
We can prove that our algorithm
performs equivalence transformations with respect to
substitutions $\sigma$ which meet the intended
semantics of intersection variables, i.e.,
\Def{intersection-correct} substitutions:
\begin{definition}[Intersection Correct]
We say that a substitution $\sigma$ is
\Def{intersection-correct for $x$ and $y$},
if it satisfies:
\[
\sigma(\IS{x}{y}) = \sigma(x) \cap \sigma(y)\:.
\]
We say that a substitution $\sigma$ is \Def{intersection-correct},
if the following properties holds for all intersection
variables $x,y$ and $z$:
\[
\begin{array}{l}
\mbox{If $x$, $y$, $\IS{x}{y}$ $\in$ ${\sf dom}(\sigma)$,
then $\sigma$ is intersection-correct for $x$ and $y$}.\\
\mbox{If $x$, $\IS{x}{y}$ $\in$ ${\sf dom}(\sigma)$, then $\sigma$
is intersection-correct for $\IS{x}{y}$ and $y$}.
\end{array}
\]
\end{definition}
Note that $\sigma$ is intersection-correct for $x$ and $x{\cap} y$, iff
$\sigma(\IS{x}{y}) \Sp{\subseteq} \sigma(x)$.
We call a constraint $\phi$
\Def{intersection-satisfiable}, if $\phi$ has
an intersection-correct solution.
\begin{proposition}
\label{satintsat}
Let $\phi$ be a constraint containing base variables only.
Then $\phi$ is satisfiable, if and
only if it is intersection-satisfiable.
\end{proposition}
We denote the set of all intersection-correct
solutions of $\phi$ with $\ISSol{\phi}$.
Assume $\sigma$ to be a substitution. A
\Def{$V$-extension} of $\sigma$
is a substitution $\tilde{\sigma}$ such that
${\sf dom}{(\tilde{\sigma})} = {\sf dom}{(\sigma)} \cup V$ and such
that $\sigma$ and $\tilde{\sigma}$ coincide on ${\sf dom}{(\sigma)}$.
We denote the set of all intersection-correct $V$-extensions
of $\sigma$ with $\ISExt{V}{\sigma}$.
Let $\phi$ and $\psi$ be constraints. We say that
$\phi$ \Def{intersection-implies} $\psi$, written
$\phi\mathrel{\models^I} \psi$, if
\[
\ISExt{{\cal{V}}(\psi)} {\ISSol{\phi}} \Sp{\subseteq} \ISSol{\psi}
\ \ \ \mbox{ and }\ \ \
\ISSol{\phi}=\emptyset \mbox{ iff }
\ISExt{{\cal{V}}(\psi)}{\ISSol{\phi}}=\emptyset
\]
We call $\phi$ and $\psi$ \Def{intersection-equivalent} if
$\phi \mathrel{\models^I}\psi$ and $\psi \mathrel{\models^I}\phi$, and
write $\phi \mathrel{\rlmodels^I} \psi$.
Both conditions ensure the following Lemma:
\begin{lemma}
If $\phi$ is not intersection
satisfiable, then $\phi\mathrel{\models^I}\psi$ holds vacuously for all $\psi$.
Furthermore, if $\phi \mathrel{\rlmodels^I} \psi$, then
$\phi$ is intersection satisfiable if and only if
$\psi$ is.
\end{lemma}
Given the above notions, the following two theorems
are our main results. For the proofs the reader is referred
to \cite{MuellerNiehren:Member:94}.
\begin{theorem}[Termination]
\label{Termination}
The rule system given in Figures
\ref{fig:unification} and \ref{fig:algorithm}
terminates.
\end{theorem}
\begin{theorem}[Correctness and Completeness]
\label{CorCom}
Let $\phi$ be a constraint containing base variables only.
Then the following statements are equivalent:
\begin{enumerate}
\item
$\phi$ is intersection-satisfiable.
\item
There exists an irreducible $\psi\neq \bot$ derivable
from $\phi$.
\item
There exists an irreducible $\psi\neq \bot$ that is intersection-equivalent
to $\phi$.
\item
$\bot$ cannot be derived from $\phi$.
\end{enumerate}
\end{theorem}
\paragraph{Acknowledgements.}
We would like to thank Ralf Treinen for pointing
us to D\"orre's paper and the anonymous referees
for useful remarks.
The research reported in this paper has been supported by the
Bun\-des\-mi\-ni\-ster f\"ur Bildung, Wissenschaft, Forschung und
Technologie (FTZ-ITW-9105), the Esprit Project \hbox{ACCLAIM} (PE~7195),
the Esprit Working Group CCL (EP~6028), and a fellowship of the
Graduiertenkolleg 'Kognition' at the Universit\"at des Saarlandes
of the first author.
\section{Outlook}
\label{sec:outlook}
\ignore{
\begin{figure}[htb]
\framebox[\textwidth]{
$\begin{array}{lll}
\Ax{FeatProp} &
\Rule{ \phi }{ \phi \apc \Feat{x}{f} } &
\begin{array}{l}
\MS{x}{y}\:\mbox{in}\: \phi,
\Feat{y}{f} \:\mbox{in}\: \phi \\
\Feat{x}{f} \:\mbox{not in}\: \phi
\end{array}
\vspace{4mm}\\
\Ax{FeatDecom} &
\Rule{\AtFeat{x}{f}{y} }{ y{=} z } &
\begin{array}{l}
\AtFeat{x}{f}{z} \:\mbox{in}\: \phi
\end{array}
\vspace{4mm}\\
\Ax{ArityProp} &
\Rule{ \phi }{ \phi \apc \Arity{x}{F} } &
\begin{array}{l}
\MS{x}{y},
\Arity{y}{F} \:\mbox{in}\: \phi \\
\Arity{x}{F} \:\mbox{not in}\: \phi
\end{array}
\vspace{4mm}\\
\Ax{ArityDef} &
\Rule{ \phi }{ \phi \apc \Arity{x}{F} } &
\begin{array}{l}
x{\circ} f(a_1:x_1 \dots a_n:x_n), F=\{a_1,\dots,a_n\} \\
\Arity{x}{F} \:\mbox{not in}\: \phi
\end{array}
\end{array}$
}
\caption{\label{fig:records}Handling Records}
\end{figure}
}
We have presented an algorithm for deciding
satisfiability of weak subsumption\ constraints over infinite
constructor trees with holes.
Our motivation to solve such constraints
grew out of a type inference problem. Formally,
the problem is equivalent to type checking a weak form
of polymorphic recursion. Type checking
polymorphic recursion is equivalent to
semi-unification and to subsumption of feature graphs.
All three are undecidable
\cite{Henglein:88,KfouryTiurynUrz:93,DoerreRounds:LICS:90}.
We establish a similar correspondence between
a type inference problem and
weak subsumption of feature graphs:
The latter has been investigated by D\"orre
looking for a logical treatment
of coordination phenomena in unification based
grammar formalisms \cite{Doerre:WeakSubsumption:94}.
Our starting point in the constraint language Oz, however,
led us to an incremental algorithm, in contrast
to the automata-based solution of D\"orre.
\section{Interacting soft gluons in the small-$x_B$ region of DIS}
\label{sec:1}
A number of striking phenomena have
been observed in recent
deep-inelastic electron-proton scattering (DIS) experiments
in the small-$x_B$ region. In particular, it is seen
that the contribution of the gluons dominates\cite{r1},
and that large-rapidity-gap (LRG) events exist\cite{r2,r3}.
The latter shows that the virtual photons in such processes may
encounter ``colorless objects'' originating from the proton.
The existence of LRG events in these and other\cite{r4,r5}
scattering processes
have attracted much attention, and
there has been much discussion \cite{r2,r3,r4,r5,r6,r7,r8,r9,r10}
on problems associated with the origin and/or the
properties of such ``colorless objects''.
Reactions in which ``exchange'' of such ``colorless objects'' dominate
are known in the literature \cite{r3,r7,r8} as
``diffractive scattering processes''.
While the concepts and methods used by different
authors in describing such processes
are in general very much different from one another,
all the authors
(experimentalists as well as theorists)
seem to agree on the following\cite{r8}
(see also Refs.
[\ref{r2}--\ref{r7}, \ref{r9}--\ref{r10a}]):
(a) Interacting soft gluons play a dominating role in
understanding the phenomena in the small-$x_B$ region of DIS in
general, and in describing the properties of LRG events in particular.
(b) Perturbative QCD should be, and can be, used to describe the
LRG events associated with high transverse-momentum ($p_\perp$)
jets which have been observed at HERA\cite{r9} and at the Tevatron\cite{r6}.
Such events are, however, rather rare.
For the description of the bulk of LRG events, concepts
and methods beyond the perturbative QCD, for example
Pomeron Models\cite{r7} based on Regge Phenomenology, are needed.
It has been suggested a long time ago
(see the first two papers in Ref.\cite{r7})
that, in the QCD language,
``Pomeron-exchange'' can be interpreted as ``exchange of two or more
gluons'' and that such results can be obtained by calculating the
corresponding Feynman diagrams. It is generally felt that
non-perturbative methods should be useful in understanding ``the
small-x phenomena'', but the
question, whether or how
perturbative QCD plays a role in such non-perturbative approaches does
not have an unique answer.
In a recent Letter \cite{r10a}, we proposed that the ``colorless
objects'' which play the dominating role in LRG events are
color-singlet gluon-clusters due to self-organized criticality, and
that optical-geometrical concepts and methods are useful
in examining the space-time properties of such objects.
The proposed picture \cite{r10a} is based on the following
observation: In a system of soft gluons
whose interactions are not negligible,
gluons can be emitted and/or absorbed
at any time and everywhere in the system
due to color-interactions between the members of the system as well as
due to color-interactions of the members with
gluons and/or quarks and antiquarks outside the system. In this
connection it is important to keep in mind that gluons interact
directly with gluons
and that {\em the number of gluons in a system is not a conserved
quantity}. Furthermore, since in systems of interacting soft-gluons
the ``running-coupling-constant'' is in general greater than unity,
non-perturbation methods are needed to describe the local
interactions associated with such systems.
That is, such systems are in general extremely complicated: they are
not only too complicated (at least for us) to take
the details of local interactions into account
(for example by describing
the reaction mechanisms in terms of Feynman diagrams),
but also too complicated to apply well-known concepts
and methods in conventional Equilibrium Statistical Mechanics.
In fact, the accumulated
empirical facts about LRG events
and the basic properties of gluons prescribed by the QCD
are forcing us to accept the following
picture for such systems:
A system of interacting soft gluons can be, and should be considered as
{\it an open dynamical complex system with many degrees of freedom},
which is in general {\em far from equilibrium}.
In our search for an appropriate method to deal with such complex
systems, we are led to the following questions: Do we see comparable
complex systems in Nature?
If yes, what
are the characteristic
features of such systems, and what can we learn by studying such systems?
\section{Characteristic features of open dynamical complex systems}
\label{sec:2}
Open, dynamical, complex systems which are in general far from equilibrium
are {\em not} difficult to find in Nature ---
at least {\em not} in the macroscopic world! Such systems have been studied,
and in particular the following have been observed by
Bak, Tang and Wiesenfeld (BTW) some time ago\cite{r11}:
Complex systems of this kind may evolve to
self-organized critical states which lead to
fluctuations extending over all length- and
time-scales; such fluctuations manifest themselves in the form of
spatial and temporal power-law scaling behaviors
showing properties
associated with fractal
structure and flicker noise, respectively.
To be more precise, BTW\cite{r11} and many other
authors\cite{r12} proposed, and demonstrated by
numerical simulations, the following:
Open dynamical complex systems of locally interacting objects which
are in general far from equilibrium
can evolve into self-organized
structures of states which are barely stable. A local perturbation of a
critical state may ``propagate'', in the sense that it spreads to (some)
nearest neighbors, and then to the next-nearest neighbors, and so on in
a ``domino effect'' over all length scales; the size of
such an ``avalanche'' can be as
large as the entire
system. Such a ``domino effect'' eventually terminates after a total time $T$,
having reached a final amount of dissipative energy and having
effected a total spatial extension $S$. The quantity $S$ is called by
BTW the ``size'', and the quantity $T$ the ``lifetime'' of the
avalanche --- named by BTW a ``cluster''
(hereafter referred to as BTW-cluster or BTW-avalanche). As we
shall see in more details later on, it is of considerable importance to
note that a BTW-cluster {\it cannot}, and {\it should not}
be identified with a cluster in the usual sense.
It is an avalanche,
{\it not} a {\it static} object
with a fixed structure
which remains unchanged until it
decays after a time-interval (known as the lifetime in
the usual sense).
In fact, it has been
shown\cite{r11,r12} that the
distribution ($D_S$) of
the ``size'' (which is a measure of
the dissipative energy, $S$) and the distribution
($D_T$) of the lifetime
($T$) of BTW-clusters in such open
dynamical complex systems obey power-laws:
\begin{equation}
\label{e1}
D_S(S)\sim S^{-\mu},
\end{equation}
\begin{equation}
\label{e2}
D_T(T)\sim T^{-\nu},
\end{equation}
where $\mu$ and $\nu$ are positive real constants. Such spatial and
temporal power-law
scaling behaviors can be, and have been, considered
as the universal signals --- the ``fingerprints'' --- of the
locally perturbed
self-organized critical states in such systems.
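As a minimal numerical illustration of such power-law distributions, the following Python sketch draws ``avalanche sizes'' from a density $D_S(S)\sim S^{-\mu}$ (the value $\mu=2$ is an assumption made purely for this illustration, not taken from the literature) and recovers the exponent from the slope of a log-log histogram:

```python
import numpy as np

# Draw "avalanche sizes" S from a density D_S(S) ~ S^(-mu) for S >= 1
# via inverse-transform sampling.  mu = 2.0 is an assumed value chosen
# only for this illustration.
rng = np.random.default_rng(0)
mu = 2.0
u = rng.random(200_000)
samples = (1.0 - u) ** (-1.0 / (mu - 1.0))

# Histogram on logarithmic bins and fit the slope in the log-log plane.
edges = np.logspace(0.0, 3.0, 30)
counts, _ = np.histogram(samples, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = counts > 0
slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)

print(f"fitted exponent mu = {-slope:.2f}")
```

With the seed fixed, the fitted exponent comes out close to the input value; the same kind of log-log fit is what one applies to measured distributions when testing for the ``fingerprints'' of Eqs.(\ref{e1}) and (\ref{e2}).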
It is
expected\cite{r11,r12} that the general concept of self-organized
criticality (SOC), which is
complementary to chaos, may be
{\it the} underlying concept for temporal and spatial scaling in a wide class
of {\it open non-equilibrium complex systems} --- although it is not yet known
how the exponents of such power laws can be calculated analytically.
SOC has been observed in
a large number of open dynamical complex systems in
non-equilibrium\cite{r11,r12,r14,r15,r16,r17}
among which the following examples are
of particular interest, because they illuminate several aspects of
SOC which are relevant for the discussion in this paper.
First, the well known Gutenberg-Richter law\cite{r13,r14}
for earthquakes as a special
case of Eq.(1):
In this case, earthquakes are BTW-clusters due to SOC. Here,
$S$ stands for the released energy (the magnitude)
of the observed earthquakes. $D_S(S)$ is the number of
earthquakes at which an energy $S$ is released.
Such a simple law is known to be valid
for all earthquakes, large (up to $8$ or $9$ in Richter scale)
or small! We note that the power-law behavior given by the
Gutenberg-Richter law implies, in particular, that
the question ``How large is a typical earthquake?'' does
not make sense!
Second, the sandpile experiments\cite{r11,r12} which show
the simple regularities mentioned in Eqs.(1) and (2):
In this example, we see how local perturbation can be caused by the
addition of one grain of sand (note that we are dealing with
an open system!). Here,
we can also see how
the
propagation of perturbation in form of ``domino effect''
takes place, and
develops into BTW-clusters/avalanches of all possible sizes and durations.
The size- and duration-distributions are given by Eqs.(1)
and (2) respectively.
This example is indeed a very attractive one,
not only because such
experiments can be, and have been performed in laboratories
\cite{r12}, but also because they can
be readily simulated on a PC\cite{r11,r12}.
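The sandpile dynamics described above can indeed be sketched in a few lines. The following minimal two-dimensional BTW simulation (grid size and grain count are arbitrary illustrative choices) records the ``size'', i.e. the number of topplings, of every avalanche:

```python
import numpy as np

def btw_avalanche_sizes(L=20, grains=5000, seed=1):
    """Minimal 2-d BTW sandpile: drop grains one at a time at random
    sites; any site holding 4 or more grains topples, sending one grain
    to each neighbour (grains falling off the edge are lost).  Returns
    the size (number of topplings) of each avalanche."""
    rng = np.random.default_rng(seed)
    z = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(0, L, size=2)
        z[i, j] += 1
        size = 0
        while (z >= 4).any():
            for i, j in zip(*np.where(z >= 4)):
                z[i, j] -= 4
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:
                        z[ni, nj] += 1
        sizes.append(size)
    return sizes

sizes = btw_avalanche_sizes()
print(max(sizes))  # avalanches of widely different sizes occur
```

Once the pile has self-organized, the recorded sizes range from zero up to system-spanning avalanches, and their histogram follows the power law of Eq.(\ref{e1}).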
Furthermore, it has been pointed out, and demonstrated
by simple models\cite{r12,r15,r16,r17},
that the concept of SOC can also be applied
to Biological
Sciences.
It is amazing to see how phenomena as complicated as Life
and Evolution can be simulated
by simple models such as the ``Game of Life''\cite{r15} and
the ``Evolution Model''\cite{r16,r17}.
Having seen that systems of interacting soft-gluons
are open dynamical complex systems,
and that a wide class of open systems with many degrees of
freedom in the macroscopic world
evolve to self-organized critical states which lead to
fluctuations extending over all length- and time-scales,
it seems natural to ask the following:
Can such states and such fluctuations
also exist in the microscopic world --- on the
level of quarks and gluons? In particular: Can SOC be the dynamical
origin of
color-singlet gluon-clusters which play
the dominating role in inelastic diffractive scattering processes?
\section{SOC in inelastic diffractive scattering processes?}
\label{sec:3}
Because of the special role played by ``the colorless objects''
in inelastic diffractive scattering, and the possible relations
between such objects and color-singlet gluon-clusters which
can be formed in systems of interacting soft gluons,
it should be of considerable interest
to study the questions mentioned at the end of the last section, as
well as in the title of this section.
A simple and effective way
of answering them, is to check whether the
characteristic properties of SOC, in particular the
SOC-``fingerprints''
mentioned in Eqs.(\ref{e1}) and (\ref{e2}) show up
in the relevant
experiments.
In order to perform such a comparison, we need
to extract the spatial and the temporal distributions of the
gluon-clusters.
What {\em are} such ``colorless objects''? Is it possible that the
colorless objects which are associated with the proton-target and
which play the dominating role in inelastic diffractive scattering
processes are BTW-clusters which exist due to SOC in systems
of interacting soft gluons? Can we examine the properties of such
colorless objects by studying the final states of the above-mentioned
scattering processes?
To answer these questions,
it is useful to recall the following:
As color-singlets, such colorless objects can
exist inside and/or outside the proton, and the interactions between
such color-singlets as well as those between such objects and ``the
mother proton'' should be of Van der Waals type.
Hence it is expected that
such a colorless object can be readily separated as an entire object
from the mother proton in
scattering processes in which the momentum-transfer is sufficient
to overcome the binding energy due to the Van der Waals type of
interactions. This means, in inelastic diffractive scattering
the beam-particle (which is the virtual photon $\gamma^\star$ in DIS)
should have a chance to encounter one of the color-singlet
gluon-clusters. For the reasons mentioned above, the struck colorless
object can simply be ``knocked out'' and/or ``carried away'' by the
beam-particle in such a collision event. Hence, it seems that the
question whether ``the colorless objects'' are indeed BTW-clusters is
something that can be answered experimentally. In this connection we
recall that, according to the general theory of SOC\cite{r11,r12}, the
size of a BTW-cluster is characterized by its dissipative energy, and
in case of systems of interacting soft gluons associated with the
proton, the dissipative energy carried by the BTW-cluster should be
proportional to the energy fraction ($x_P$) carried by the colorless
object. Hence, if the colorless object can indeed be considered as a
BTW-cluster due to SOC, we should be able to obtain information about
the size-distribution of such color-singlet gluon-clusters by examining
the $x_P$-distributions of LRG events in the small-$x_B$ region of DIS.
Having this in mind, we now take a closer look at
the measured \cite{r3}
``diffractive structure function''
\mbox{$F_2^{D(3)}(\beta,Q^2;x_P)\equiv \int dt F_2^{D(4)}(\beta,Q^2;x_P,t)$}.
Here, we note that $F_2^{D(4)}(\beta,Q^2;x_P,t)$
is related \cite{r3,r7,r8,r9} to the
differential cross-section for large-rapidity-gap
events
\begin{equation}
\label{a3}
{d^4\sigma^D\over d\beta dQ^2 dx_P dt}={4\pi\alpha^2\over\beta
Q^4}(1-y+{y^2\over 2})F_2^{D(4)}(\beta,Q^2;x_P,t),
\end{equation}
in analogy to
the relationship between the corresponding quantities
[namely $d^2\sigma/(dx_B\,dQ^2)$ and $F_2(x_B,Q^2)$]
for normal deep-inelastic electron-proton scattering events
\begin{equation}
\label{a4}
{d^2\sigma\over dx_BdQ^2}={4\pi\alpha^2\over
x_BQ^4}(1-y+{y^2\over 2})F_2(x_B,Q^2).
\end{equation}
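For orientation, the relation in Eq.(4) is trivially invertible: given the measured double-differential cross-section, $F_2$ follows by dividing out the kinematical factor. The sketch below implements this with invented numerical values and schematic units:

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def kin_factor(xB, Q2, y, alpha=ALPHA):
    """The kinematical factor multiplying F2 in Eq.(4)."""
    return (4.0 * math.pi * alpha ** 2 / (xB * Q2 ** 2)) * (1.0 - y + y ** 2 / 2.0)

def d2sigma(F2, xB, Q2, y):
    """Eq.(4): d^2 sigma / (dx_B dQ^2), in schematic units."""
    return kin_factor(xB, Q2, y) * F2

def F2_from_d2sigma(sigma, xB, Q2, y):
    """Invert Eq.(4) to recover the structure function."""
    return sigma / kin_factor(xB, Q2, y)

# Round trip with invented values:
s = d2sigma(F2=0.4, xB=1e-3, Q2=20.0, y=0.3)
print(F2_from_d2sigma(s, xB=1e-3, Q2=20.0, y=0.3))  # recovers 0.4
```

The same division applies, mutatis mutandis, to Eq.(3) for the diffractive structure function $F_2^{D(4)}$.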
The kinematical variables, in particular $\beta$, $Q^2$, $x_P$ and $x_B$
(in both cases) are directly measurable quantities, the definitions
of which are shown in Fig.1 together with the corresponding
diagrams of the
scattering processes. We note
that, although these variables are
Lorentz-invariants, it is sometimes convenient to interpret them in a
``fast moving frame'', for example the electron-proton center-of-mass
frame where the proton's 3-momentum $\vec P$ is large (i.e. its
magnitude $|\vec P|$ and thus the energy $P^0\equiv (|\vec P|^2+M^2)^{1/2}$
is much larger than the proton mass $M$). While $Q^2$ characterizes
the virtuality of the space-like photon
$\gamma^\star$, $x_B$ can be interpreted,
in such a ``fast moving frame'' (in the framework
of the celebrated parton model), as the
fraction of proton's energy $P^0$ (or longitudinal momentum $|\vec P|$)
carried by the struck charged constituent.
We recall, in the framework
of the parton model, $F_2(x_B,$ $Q^2)/x_B$ for ``normal events''
can be interpreted as the sum of the probability densities
for the above-mentioned $\gamma^\star$ to interact with
a charged constituent of the proton. In analogy to this,
the quantity
$F_2^{D(3)}(\beta,Q^2;x_P)/\beta$ for LRG events
can be interpreted as the sum of the probability
densities for $\gamma^\star$ to interact with
a charged constituent which
carries a fraction $\beta\equiv x_B/x_P$ of the energy (or longitudinal
momentum) of the colorless object,
under the condition that the colorless object
(which we associate with a system of interacting soft gluons) carries a
fraction $x_P$
of proton's energy (or longitudinal momentum).
We hereafter denote this
charged-neutral and color-neutral gluon-system by
$c^\star_0$ (in Regge pole models\cite{r7} this object is known as
the ``pomeron'').
Hence, by comparing Eq.\,(3) with Eq.\,(4) and by comparing the two
diagrams shown in Fig.\,1(a) and Fig.\,1(b), it is tempting to draw
the following conclusions:
The diffractive process is nothing else but
a process in which the virtual photon $\gamma^\star$
encounters a $c_0^\star$,
and $\beta$ is nothing else but the Bjorken-variable with respect to
$c_0^\star$ (this is why it is called $x_{BC}$ in Ref.\cite{r10}).
This means,
a diffractive $e^-p$ scattering event can be envisaged as an event in
which the virtual photon $\gamma^\star$ collides with ``a $c_0^\star$-target''
instead of ``the proton-target''.
Furthermore, since $c_0^\star$ is charge-neutral,
and a photon can only directly interact with an object
which has electric charges and/or magnetic moments,
it is tempting to assign $c_0^\star$ an
electro-magnetic structure function $F_2^{c}(\beta, Q^2)$,
and study the interactions between the virtual photon and the quark(s)
and antiquark(s) inside $c_0^\star$.
In such a picture
(which should be formally the same as that of
Regge pole models\cite{r7},
if we would replace the $c_0^\star$'s by ``pomerons'')
we are confronted with the following two questions:
First, is it possible and meaningful to discuss the $x_P$-distributions of
the $c_0^\star$'s without knowing the intrinsic properties, in particular the
electromagnetic structures, of such objects?
Second, are gluon-clusters hadron-like, such that their electromagnetic
structures can be studied
in the same way as those for
ordinary hadrons?
Since we wish to begin the quantitative discussion with
something familiar to most of the
readers in this community, and we wish
to differentiate between the
conventional-approach and the SOC-approach, we would like to
discuss the second question here, and leave the first question to
the next section.
We recall (see in particular the last two papers in Ref.\cite{r7}),
in order to see whether
the second question
can be answered
in the {\em affirmative},
we need to know {\em whether}
$F_2^{D(3)}(\beta,Q^2;x_P)$ can be factorized in the form
\begin{equation}
\label{eee1}
F_2^{D(3)}(\beta, Q^2;x_P)=f_c(x_P)F_2^c(\beta,Q^2).
\end{equation}
Here, $f_c(x_P)$ plays the role of a ``kinematical factor''
associated with the ``target $c_0^\star$'',
and $x_P$ is the fraction
of proton's energy (or longitudinal momentum) carried by
$c_0^\star$. [We could call $f_c(x_P)$
``the $c_0^\star$-flux'' --- in exactly the same
manner as in Regge pole models\cite{r7}, where it is called
``the pomeron flux''.] $F_2^c(\beta,Q^2)$ is
``the electro-magnetic structure function of $c_0^\star$''
[the counterpart of $F_2(x_B,Q^2)$ of the proton] which
--- in analogy to proton (or any other hadron) ---
can be expressed as
\begin{equation}
\label{eee2}
\frac{F_2^c(\beta,Q^2)}{\beta}
= \sum_i e_i^2 [q_i^c(\beta,Q^2)+\bar q_i^c(\beta,Q^2)],
\end{equation}
where $q_i^c(\bar q_i^c)$ stands for the probability
density for $\gamma^\star$
to interact with a quark (antiquark) of flavor $i$ and electric
charge $e_i$ which carries a fraction $\beta$ of the energy
(or longitudinal momentum)
of $c_0^\star$. It is clear that
Eq.(6) should be valid for all $x_P$-values in this kinematical
region, that is, both the right- and the left-hand-side
of Eq.(6) should be independent of the energy (momentum) carried
by the ``hadron'' $c_0^\star$.
Hence, to find out experimentally whether the second question can be
answered in the affirmative, we only need to check whether the
data are in agreement with the assumption
that $F_2^c(\beta , Q^2)$ prescribed by Eqs.(5) and (6) exists.
For such a test,
we take the existing
data\cite{r3} and plot $\log [F_2^{D(3)}(\beta, Q^2;x_P)/\beta]$
against $\log\beta$ for different $x_P$-values.
We note, under the assumption
that the factorization shown in Eq.(5)
is valid, the $\beta$-dependence for a given $Q^2$ in
such a plot should have exactly the same form as that in the
corresponding
$\log [F_2^{c}(\beta, Q^2)/\beta]$ vs $\log \beta$ plot;
and that the latter is the analog of
$\log [F_2(x_B, Q^2)/x_B]$ vs $\log x_B$ plot for normal events.
In Fig.2 we show the result of such
plots for three fixed $Q^2$-values (3.5, 20 and 65 GeV$^2$,
as representatives of three different ranges in $Q^2$).
Our goal is to examine whether or
how the $\beta$-dependence of the function given in
Eq.(6) changes with $x_P$. In principle,
if there were enough data points, we should, and we could, do such
a plot for the data-sets associated with every $x_P$-value.
But, unfortunately, there are not so many data at present.
What we can do, however, is to consider
the $\beta$-distributions in different $x_P$-bins, and to vary
the bin-size of $x_P$,
so that we can explicitly
see whether/how the shapes of the $\beta$-distributions
change. The results are shown
in Fig.2. The $\beta$-distribution in the first
row corresponds to the integrated value $\tilde{F}^D_2(\beta, Q^2)$
shown in the literature\cite{r3,r8}.
Those in the second and in the third row are obtained by considering
different bins and/or by
varying the sizes of the bins.
By joining the points associated with a given $x_P$-interval
in a plot for a given $Q^2$,
we obtain the $\beta$-distribution for a $c_0^\star$ carrying
approximately the amount of energy $x_P P^0$, encountered
by a photon of virtuality $Q^2$. Taken together with Eq.(6) we can
then extract the distributions $q_i^c(\beta, Q^2)$ and
$\bar{q}_i^c(\beta, Q^2)$ for this $Q^2$-value, provided
that $F_2^c(\beta, Q^2)/\beta$ is independent of $x_P$.
But, as we can see in Fig.2, the existing data\cite{r3,r8}
show that the $x_P$-dependence of this function is far from
being negligible!
Note in particular
that according to Eq.(\ref{eee1}), by choosing a suitable $f_c(x_P)$
we can shift the curves for different $x_P$-values in the vertical
direction (in this log-log plot); but {\em we can never change
the shapes of the $\beta$-distributions} which are different for
different $x_P$-values!
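The logic of this factorization test can be made explicit with a toy example: under Eq.(5), the log-log $\beta$-distributions for different $x_P$-values may shift vertically but can never change shape. The functional forms below are invented solely to illustrate the test, not meant to describe the data:

```python
import numpy as np

beta = np.linspace(0.05, 0.9, 20)

def F2D_factorized(beta, xP):
    # f_c(x_P) * F2^c(beta): obeys the factorization of Eq.(5)
    return xP ** -1.2 * beta * (1.0 - beta)

def F2D_nonfactorized(beta, xP):
    # the beta-shape itself depends on x_P: violates Eq.(5)
    return xP ** -1.2 * beta ** (1.0 + xP) * (1.0 - beta)

for model in (F2D_factorized, F2D_nonfactorized):
    # In a log-log plot, factorization <=> the curves for two x_P
    # values differ only by a constant vertical offset.
    diff = np.log(model(beta, 0.01)) - np.log(model(beta, 0.001))
    print(model.__name__, "shape-invariant:", bool(np.ptp(diff) < 1e-9))
```

Applied to the data of Fig.2, this criterion fails: the shapes of the measured $\beta$-distributions do change with $x_P$.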
In order to see, and to realize, the meaning of the $x_P$-dependence
of the distributions of the charged constituents of $c^\star_0$
expressed in terms of $F_2^c(\beta, Q^2)/\beta$
in LRG events [see Eqs.(5) and (6)],
let us, for a moment, consider
normal deep-inelastic scattering events in the
$x_B$-region where quarks dominate ($x_B > 0.1$, say).
Here we can plot the data for
$\log [F_2(x_B, Q^2)/x_B]$ as a function of $\log x_B$ obtained
at {\em different incident energies ($P^0$'s)} of the proton.
{\em Suppose} we see, that
at a given $Q^2$, the data for $x_B$-distributions taken
at different values
of $P^0$ are very much different.
{\em Would} it still be possible to introduce $F_2(x_B,Q^2)$
as ``the electro-magnetic structure function'' of the proton,
from which we can extract the $x_B$-distribution of the quarks
$q_i(x_B,Q^2)$ at a given $Q^2$?
The fact that it is not possible to assign
an $x_P$-independent
structure function $F_2^c(\beta, Q^2)/\beta$ to $c_0^\star$ which
stands
for the ``pomeron'', and whose ``flux'' $f_c(x_P)$
is expected to be independent of $\beta$ and $Q^2$,
deserves to be taken seriously.
It strongly suggests that the following picture
{\em cannot} be true:
``There exists a universal colorless object
(call it pomeron or $c_0^\star$ or something else)
the exchange of which describes diffractive scattering
in general and DIS off proton in particular. This object is
hadron-like
in the sense that it has not only a typical size and a typical
lifetime, but also a typical electromagnetic structure which can
e.g. be measured and described by an ``electromagnetic structure
function''.
In summary of this section, we note that
the empirical facts mentioned above show that {\em no}
energy-independent electromagnetic structure function can be assigned
to the expected universal
colorless object $c_0^\star$.
This piece of experimental fact is of considerable importance, because
it is the first indication that, if there is a universal ``colorless
object'', this object {\em cannot} be considered as
an ordinary hadron. In other words, it has to be
something else! In fact, as we shall see below, this
property is closely related to the observation
that such an
object {\em cannot} have a typical size, or a typical
lifetime. The final answer to the question mentioned in the title of
this section
will be presented in Section \ref{sec:7}.
\section{Distributions of the gluon-clusters}
\label{sec:4}
After having seen that the existing data
do not allow us to assign an energy-independent electromagnetic
structure function to ``the colorless object'' such that the universal
colorless object ($c_0^\star$) can be treated as an ordinary hadron,
let us now come back
to the first question in Section \ref{sec:3},
and try to find out whether it is
nevertheless possible, and meaningful, to talk about the
$x_P$-distribution of $c_0^\star$.
As we shall see in this section,
the answer to this question is Yes!
Furthermore, we shall also see,
in order to answer this question
in the affirmative,
we {\em do not} need the factorization mentioned
in Eq.(5), and we {\em do not} need to know whether the gluon-clusters are
hadron-like. But, as we have already mentioned above, it is
of considerable importance
to discuss the second question
so that we can understand the origin and
the nature of the $c_0^\star$'s.
In view of the fact that we do use the concept ``distributions
of gluons''
in deep-inelastic lepton-hadron scattering, although the gluons
do not directly interact with the virtual photons,
we shall try to introduce the notion ``distribution of
gluon-clusters'' in a similar manner.
In order to see what we should do for the introduction
of such distributions, let us recall the following:
For normal deep-inelastic $e^-p$ collision
events, the structure function $F_2(x_B, Q^2)$ can be expressed
in term of the distributions of partons, where the partons are
not only quarks and antiquarks, but also gluons which
can contribute to the structure function by quark-antiquark
pair creation and annihilation.
In fact, in order to satisfy energy-momentum-conservation
(in the electron-proton system),
the contribution of the gluons $x_gg(x_g,Q^2)$ has to be taken into account
in the energy-momentum sum rule
for all measured $Q^2$-values. Here, we denote by
$g(x_g,Q^2)$ the probability density
for the virtual photon $\gamma^\star$ (with virtuality $Q^2$) to meet a
gluon which carries the energy (momentum) fraction $x_g$ of the proton,
analogous to $q_i(x_B, Q^2)$ [or $\bar q_i(x_B, Q^2)$] which
stands for the probability density for this $\gamma^\star$
to interact with a quark (or an antiquark) of flavor $i$ and electric
charge $e_i$ which carries the energy (momentum) fraction $x_B$ of the
proton. We note, while both $x_B$ and $x_g$ stand for energy
(or longitudinal momentum) fractions carried by partons,
the former can be, but the latter {\em cannot} be directly
measured.
Having these, in particular the energy-momentum sum rule in mind,
we immediately see the following: In a given
kinematical region
in which the contributions of only
one category of partons (for example quarks for $x_B > 0.1$ or
gluons for $x_B < 10^{-2}$) dominate, the structure
function $F_2(x_B,Q^2)$ can approximately
be related to the
distributions of that particular kind of partons in a
very simple manner. In fact,
the expressions below can be, and have been,
interpreted as the probability-densities for the virtual photon $\gamma^\star$
(with virtuality $Q^2$) to meet a quark or a gluon which carries the energy
(momentum) fraction $x_B$ or $x_g$ respectively.
\begin{eqnarray}
\label{ee2}
{F_2(x_B,Q^2)\over x_B}&\approx& \sum_i e_i^2\, q_i(x_B,Q^2)
\mbox{\hspace*{1cm}or\hspace*{1cm}} \nonumber\\
{F_2(x_B,Q^2)\over x_g}&\approx& g(x_g,Q^2)\mbox{\ .}
\end{eqnarray}
The relationships between $q_i(x_B,Q^2)$,
$g(x_g,Q^2)$ and
$F_2(x_B, Q^2)$, as they stand in Eq.(\ref{ee2}),
are general
and formal (this is the case especially for the relation between $g$ and
$F_2$) in the following sense:
Both $q_i(x_B, Q^2)$ and $g(x_g,Q^2)$ contribute to the
energy-momentum sum rule and both of them are in accordance
with
the assumption that partons
of a given category
(quarks or gluons)
dominate a given kinematical region
(here $x_B>0.1$ and $x_B<10^{-2}$ respectively).
But, neither the dynamics which leads to the observed $Q^2$-dependence
nor the relationship between $x_g$ and $x_B$ are given. This means,
{\it without further theoretical inputs}, the simple expression for
$g(x_g, Q^2)$ as given by Eq.(7) is {\it practically useless}!
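To make the role of the energy-momentum sum rule mentioned above concrete, the following toy sketch checks numerically that the momentum fractions carried by quarks and gluons add up to one. The densities are invented for illustration; they are not fitted parton distributions:

```python
import numpy as np

def integrate(f, x):
    """Trapezoidal rule (avoids relying on np.trapz, which was
    renamed in recent NumPy versions)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

x = np.linspace(1e-4, 1.0, 200_001)

# Toy quark-singlet density (invented, not a fitted distribution):
q = 2.0 * (1.0 - x) ** 3 / np.sqrt(x)
quark_momentum = integrate(x * q, x)

# Give the gluons whatever momentum fraction is left over:
gshape = (1.0 - x) ** 5 / x
A = (1.0 - quark_momentum) / integrate(x * gshape, x)
g = A * gshape
total = quark_momentum + integrate(x * g, x)

print(f"quarks: {quark_momentum:.3f}, total: {total:.3f}")  # total -> 1.000
```

The point is only that $x_g g(x_g,Q^2)$ must carry the balance of the proton's momentum; the sketch says nothing about the $Q^2$-dependence or about the relation between $x_g$ and $x_B$, which is precisely the missing theoretical input discussed above.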
Having learned this, we now discuss what happens
if we assume that, in diffractive lepton-nucleon scattering,
the colorless gluon-clusters ($c_0^\star$'s) dominate the
small-$x_B$ region ($x_B< 10^{-2}$, say). In this simple picture, we
are assuming that the following is approximately true:
The gluons in this region appear predominately in form
of gluon clusters. The interaction
between the struck $c_0^\star$
and the rest of the proton
can be neglected during the
$\gamma^\star$-$c_0^\star$ collision such that
we can apply the impulse approximation to the $c_0^\star$'s in this
kinematical region.
That is, here we can
introduce
--- in the same manner as we do for
other partons
[see Eq.(\ref{ee2})] --- a
probability density $D_S(x_P|\beta,Q^2)$ for $\gamma^\star$ in the
diffractive scattering process to ``meet'' a $c_0^\star$ which carries
the fraction $x_P$ of the proton's
energy $P^0=(|\vec{P}|^2+M^2)^{1/2} \approx |\vec{P}|$
(where $\vec{P}$ is the momentum and $M$ is the mass of the proton).
In other words,
in
diffractive scattering events
for processes in the kinematical region
$x_B < 10^{-2}$, we should have, instead of $g(x_g,Q^2)$, the following:
\begin{equation}
\label{ee3}
{F_2^{D(3)}(\beta,Q^2;x_P)\over x_P}\approx D_S(x_P|\beta,Q^2)\,.
\end{equation}
Here, $x_PP^0$ is the energy carried by $c_0^\star$,
and $\beta$ indicates the corresponding fraction carried by
the struck charged constituent in $c_0^\star$.
In connection with the similarities and the differences between
$q_i(x_B,Q^2)$, $g(x_B,Q^2)$ in (\ref{ee2}) and $D_S(x_P|\beta, Q^2)$
in (\ref{ee3}), it is
useful to note in particular the significant difference
between $x_g$ and $x_P$,
and thus that between
the $x_g$-distribution $g(x_g,Q^2)$ of the gluons and the
$x_P$-distribution $D_S(x_P|\beta, Q^2)$ of the $c^\star_0$'s: Both $x_g$ and
$x_P$ are energy (longitudinal momentum) fractions of charge-neutral
objects, with which
$\gamma^\star$ {\it cannot} directly interact. But, in contrast to $x_g$,
$x_P$ {\it can be directly measured in experiments}, namely by
making use of the kinematical relation
\begin{equation}
\label{ee4}
x_P\approx {Q^2+M_x^2\over Q^2+W^2},
\end{equation}
and
by measuring the quantities $Q^2$, $M_x^2$ and $W^2$
in every collision event. Here, $Q$, $M_x$
and $W$ stand respectively for the invariant momentum-transfer from
the incident electron, the invariant-mass of the final hadronic state
after the $\gamma^\star-c_0^\star$ collision, and the invariant mass of the
entire hadronic system in the collision between $\gamma^\star$ and the
proton. Note that $x_B\equiv\beta x_P$, hence $\beta$ is also
measurable. This means, in sharp contrast to $g(x_g,Q^2)$, {\it
experimental information} on $D_S(x_P|\beta, Q^2)$
in particular its $x_P$-dependence
can be obtained ---
{\it without further theoretical inputs}!
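The kinematical reconstruction described here is elementary to implement. In the sketch below, Eq.(9) and the relation $x_B=\beta x_P$ are evaluated with invented numerical inputs (all squared masses and momentum transfers in GeV$^2$):

```python
def xp_beta(Q2, Mx2, W2, xB):
    """Kinematic reconstruction used in the text: Eq.(9) for x_P,
    together with x_B = beta * x_P.  Inputs Q2, Mx2, W2 are in GeV^2;
    x_B is dimensionless.  The numbers below are invented for
    illustration only."""
    xP = (Q2 + Mx2) / (Q2 + W2)
    beta = xB / xP
    return xP, beta

xP, beta = xp_beta(Q2=20.0, Mx2=5.0, W2=5000.0, xB=0.002)
print(f"x_P = {xP:.4f}, beta = {beta:.3f}")
```

Since $Q^2$, $M_x^2$ and $W^2$ are measured event by event, both $x_P$ and $\beta$ are directly observable, which is the crucial difference from $x_g$.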
\section{The first SOC-fingerprint: Spatial scaling}
\label{sec:5}
We mentioned at the beginning of Section \ref{sec:3}, that in order
to find out whether the concept of SOC
indeed plays a role in diffractive DIS we need to check the
fingerprints of SOC shown in Section \ref{sec:2}, and that such tests
can be made by examining the
corresponding cluster-distributions obtained from experimental data.
We are now ready to do this, because we have
learned in Sections \ref{sec:3} and \ref{sec:4}, that it is not only
meaningful but also possible to extract $x_P$-distributions from the
measured diffractive structure functions,
although the gluon-clusters {\em cannot} be treated as hadrons.
In fact, as we can explicitly see
in Eqs.(8) and (9), in order to extract the $x_P$-dependence of
the gluon-clusters from the data, detailed knowledge about the intrinsic
structure of the clusters is not necessary.
Having these in mind, we now
consider $D_S$ as a function of $x_P$ for given values
of $\beta$ and $Q^2$,
and plot
$F_2^{D(3)}(\beta,Q^2;x_P)/x_P$ against $x_P$
for different sets of $\beta$ and $Q^2$. The results of such
log-log plots are shown in Fig. 3. As we can see,
the data\cite{r3} suggest
that the probability-density for the virtual photon $\gamma^\star$ to
meet a color-neutral and charged-neutral object $c_0^\star$ with energy
(longitudinal momentum) fraction $x_P$ has a power-law behavior in
$x_P$, and the exponent of this power-law
depends very little on $Q^2$ and $\beta$.
This is to be compared with $D_S(S)$ in Eq.(\ref{e1}), where $S$,
the dissipative energy (the size of the BTW-cluster)
corresponds to the energy of the
system $c_0^\star$. The latter is $x_PP^0$, where $P^0$ is the
total energy of the proton.
This means that the existing data\cite{r3} show that
$D_S(x_P | \beta, Q^2)$ exhibits the same kind of power-law behavior
as the size-distribution of BTW-clusters.
This result is
in accordance with the expectation that
self-organized critical phenomena may exist
in the colorless systems of interacting soft gluons in
diffractive deep-inelastic electron-proton
scattering processes.
We note, up to now, we have only argued (in Section \ref{sec:1}) that
such gluon-systems are
open, dynamical, complex systems
in which SOC may occur, and we have mentioned (in Section \ref{sec:2}) the
ubiquitousness of SOC in Nature.
Having seen the experimental evidence
that one of the ``SOC-fingerprints'' (which are
necessary conditions for the existence of
SOC) indeed exists, let us now take a second look at the colorless
gluon-systems from a theoretical standpoint.
Viewed from a
``fast moving frame'' which can
for example be the electron-proton c.m.s. frame,
such
colorless systems
of interacting soft gluons are part of the proton
(although, as color-singlets, they can also be outside the confinement
region). Soft gluons can be
intermittently emitted or absorbed by gluons in such a system,
as well as by gluons,
quarks and antiquarks outside the system.
The emission- and absorption-processes are due to local interactions
prescribed by the well-known QCD-Lagrangian (here ``the running
coupling constants'' are in general large,
because the distances between the interacting colored objects
cannot be considered as ``short''; remember that the
spatial dimension of a $c_0^\star$ can be much
larger than that of a hadron!).
In this connection, it is useful to keep the following in mind:
Due to the complexity of the system,
details about the local interactions may be relatively
unimportant, while
general and/or global features --- for example
energy-flow between different parts (neighbors and neighbor's
neighbors $\ldots$) of the system ---
may play an important role.
How far can one go in neglecting dynamical details when one
deals with such open
complex systems? In order to see this, let us
recall how Bak and Sneppen\cite{r16}
succeeded in modeling
some of the essential aspects of
The Evolution in Nature.
They considered the ``fitness'' of different ``species'', related to one
another through a ``food chain'', and assumed
that the species with the lowest fitness
is most likely to disappear or mutate at the next time-step
in their computer simulations.
The crucial step in their simulations
that {\em drives} evolution is the adaption of the individual species to
its present {\em environment} (neighborhood) through mutation
and selection of a
fitter variant.
Other interacting species form part of the {\em environment}.
This means that the neighbors will be influenced at
every time-step.
The result these authors
obtained strongly suggests
that the process of evolution is
a self-organized critical phenomenon. One of the essential
simplifications they made in their evolution models\cite{r16,r17}
is the following: Instead of the explicit
connection between
the fitness and the configuration of the
genetic codes,
they use random numbers for the fitness of the
species.
Furthermore, as they have pointed out in their papers, they
could in principle have chosen to model evolution on a less
coarse-grained scale by considering mutations at the individual
level rather than on the level of species, but that would make the
computation prohibitively difficult.
Having these in mind, we are naturally led to the questions:
Can we consider the creation and
annihilation processes of colorless
systems of interacting soft gluons associated
with a proton as ``evolution'' in a microscopic world?
Before we try to build models for a quantitative description
of the data, can we simply apply the existing evolution
models\cite{r16,r17} to such open, dynamical, complex
systems of interacting soft-gluons,
and check whether some of the essential features
of such systems
can be
reproduced?
To answer these questions, we now report on the result of our
first trial in this direction:
Based on the fact that we know {\em very little} about
the detailed reaction mechanisms in such gluon-systems and
{\em practically}
{\em nothing} about their structures, we simply {\em ignore} them,
and assume that they are self-similar in space
(this means, colorless gluon-clusters can be considered as clusters of
colorless gluon-clusters and so on). Next,
we divide them in an arbitrary given number of subsystems $s_i$
(which may or may not have the same size). Such a system is open,
in the sense that neither its energy $\varepsilon_i$, nor
its gluon-number $n_i$ has a fixed value. Since we do not
know, in particular, how large the $\varepsilon_i$'s are, we
use random numbers. As far as the $n_i$'s are concerned, since
we do not know how these numbers are associated with the energies
in the subsystems $s_i$, except that they are not conserved
quantities,
we just ignore them, and consider only the $\varepsilon_i$'s.
As in Ref.[\ref{r16}] or in Ref.[\ref{r17}], the random number of this
subsystem as well as those of the fixed\cite{r16} or random (see the
first paper of Ref.[\ref{r17}]) neighbors will be changed at every time-step.
Note, this is how we simulate the processes of energy flow due to
exchange of gluons between the subsystems, as well as those with
gluons/quarks/antiquarks outside the system. In other words, in the
spirit of Bak and Sneppen\cite{r16} we neglect the dynamical
details {\it totally}.
Having in mind that,
in such systems,
the gluons as well as the
subsystems ($s_i$'s) of gluons are {\it virtual}
(space-like), we can ask:
``How long can such a colorless subsystem
$s_i$ of interacting soft gluons exist,
which carries energy $\varepsilon_i$?''
According to the uncertainty principle,
the answer should be:
``The time interval
in which the subsystem $s_i$ can exist
is proportional to $1/\varepsilon_i$,
and this quantity can be considered as the lifetime $\tau_i$ of
$s_i$.'' In this sense, the subsystems of colorless gluons that are
associated with higher energies, and thus shorter ``lifetimes'',
are expected to have larger probabilities to mutate.
Note that the basic local interaction
in this self-organized evolution
process is the emission (or absorption) of gluons by gluons prescribed
by the QCD-Lagrangian --- although the detailed mechanisms
(which can in principle be explicitly written down by
using the QCD-Lagrangian)
do not play a
significant role.
In terms of the evolution model\cite{r16,r17}
we now call $s_i$ the ``species'' and identify
the corresponding
lifetime $\tau_i$ as the ``fitness of $s_i$''.
Because of the one-to-one correspondence between $\tau_i$ and
$\varepsilon_i$, where the latter is a random number,
we can also directly assign random numbers to the $\tau_i$'s
instead. From now on we can adopt the evolution model\cite{r16,r17}
and note that,
at the start of such a process (a simulation), the fitness on average
grows, because the least fit are always eliminated. Eventually the
fitness does not grow any further on average. All subsystems have a fitness
above some threshold. At the next step, the least fit species (i.e. the
most energetic subsystem $s_i$ of interacting soft gluons),
which would be right at the threshold,
will be ``replaced''
and starts an
avalanche (or punctuation of mutation events), which is causally
connected with this triggering ``replacement''.
After a while, the avalanche will
stop, when all the fitnesses are again above that threshold.
In this sense, the evolution goes on, and on, and on.
As in Refs.[\ref{r16}] and [\ref{r17}], we can monitor the duration of
every avalanche, that is the total number of mutation events in
every one of them, and count how many avalanches of each size are observed.
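To make this totally-simplified treatment concrete, the adopted evolution dynamics can be sketched in a few lines of code. The following Python sketch is our own illustration (not the code of Refs.[\ref{r16}] and [\ref{r17}]; the lattice size, the number of time-steps and the threshold defining an avalanche are arbitrary choices): at every time-step the smallest random number and those of its two nearest neighbors are replaced, and the sizes of the avalanches relative to a fixed threshold are recorded.

```python
import random

def bak_sneppen_avalanches(n_sites=64, n_steps=50000, threshold=0.5, seed=0):
    """Minimal Bak-Sneppen-type sketch.  Each site carries a random
    'fitness' (a stand-in for the lifetime tau_i ~ 1/epsilon_i of a
    subsystem s_i).  At every time-step the least-fit site and its two
    nearest neighbors receive fresh random numbers.  An avalanche is a
    maximal run of consecutive steps whose minimum fitness stays below
    the given threshold."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_sites)]
    sizes, current = [], 0
    for _ in range(n_steps):
        i = min(range(n_sites), key=fitness.__getitem__)   # least-fit site
        below = fitness[i] < threshold
        for j in (i - 1, i, (i + 1) % n_sites):            # periodic lattice
            fitness[j] = rng.random()
        if below:
            current += 1            # avalanche continues
        elif current:
            sizes.append(current)   # avalanche has just ended
            current = 0
    return sizes
```

A histogram of the returned sizes on a log-log scale approximates the power-law size-distribution of the avalanches; small avalanches vastly outnumber large ones.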
The
avalanches mentioned here are
special cases of those discussed in Section \ref{sec:2}.
Their size- and lifetime-distributions are
given by Eq.(1) and Eq.(2) respectively. Note in particular that the
avalanches in the Bak-Sneppen model correspond to sets of subsystems
$s_i$, the energies ($\varepsilon_i$) of which are too high ``to be fit
for the colorless systems of low-energy gluons''. It means, in the
proposed picture, what the virtual photon in deep-inelastic
electron-proton scattering ``meets'' are the ``less fit'' ones ---
those which carry ``too much'' energy.
In a geometrical picture this means, it is
more probable for such ``relatively energetic'' colorless
gluon-clusters to be spatially
further away from the (confinement region of)
the proton.
There exist, in the meantime, already several versions of evolution
models\cite{r12,r17} based
on the original idea of Bak and Sneppen\cite{r16}.
Although SOC phenomena have been observed in all these cases\cite{r12,r16,r17},
the slopes of the power-law distributions for the avalanches are different
in different models --- depending on the rules applied to the mutations.
The values range from approximately $-1$ to approximately $-2$.
Furthermore, these models\cite{r12,r16,r17} seem to show that neither the
size nor the dimension of the system used for the computer simulation
plays a significant role.
Hence, if we identify
the colorless charge-neutral object $c_0^\star$ encountered by the
virtual photon $\gamma^\star$ with
such an avalanche,
we are identifying the
lifetime of $c_0^\star$ with $T$, and the ``size''
(that is the total amount of dissipative energy in this
``avalanche'') with the total amount of energy of $c_0^\star$.
Note that the latter is nothing else but $x_PP^0$, where $P^0$
is the total energy of the proton. This is how and why the
$S$-distribution in Eq. (\ref{e1}) and the $x_P$-distribution of
$D_S(x_P|\beta,Q^2)$ in Eq.(\ref{ee3}) are related to each other.
\section{The second fingerprint: Temporal scaling}
\label{sec:6}
In this section we discuss in more detail the effects associated with
the time-degree-of-freedom. In this connection,
some of the concepts and methods related to
the two questions
raised in Section \ref{sec:3} are of great interest. In particular,
one may
wish to know
{\em why} the parton-picture
does not work equally well for hadrons and for gluon-clusters.
The answer is very simple:
The time-degree
of freedom cannot be ignored when we apply
impulse-approximation,
and the applicability of the latter
is the basis of the parton-model.
We recall that,
when we apply the parton model to stable hadrons,
the quarks, antiquarks and
gluons are considered as free and stable objects,
while the virtual photon $\gamma^\star$ is associated
with a given interaction-time $\tau_{\rm int}(Q^2,x_B)$ characterized
by the values $Q^2$ and $x_B$ of such scattering processes.
We note however that, this is possible only when the interaction-time
$\tau_{\rm int}$ is much shorter than the
corresponding
time-scales related to hadron-structure
(in particular the average
propagation-time of color-interactions in hadron).
Having these in mind, we see that we are confronted with
the following questions when we deal
with gluon-clusters which have finite lifetimes:
Can we consider the $c_0^\star$'s as ``{\it free}'' and
``{\it stable}'' particles when
their lifetimes are {\it shorter} than the interaction-time $\tau_{\rm
int}(Q^2,x_B)$? Can we say that a $\gamma^\star$-$c_0^\star$
collision process takes place,
in which the incident $\gamma^\star$ is
absorbed by one or a system of the charged constituents of $c_0^\star$, when
the lifetime $T$ of $c_0^\star$ is {\it shorter} than
$\tau_{\rm int}(Q^2,x_B)$?
Since the notion
``stable objects'' or ``unstable objects'' depends on the
scale which is used in
the measurement, the question whether a $c_0^\star$ can
be considered as a parton
(in the sense that it can be considered as a free
``stable object'' during the $\gamma^\star$-$c_0^\star$ interaction)
depends very much on
the interaction-time
$\tau_{int}(Q^2, x_B)$.
Here, for
given values of $Q^2$, $x_B$, and thus $\tau_{int}(Q^2, x_B)$,
only those $c^\star_0$'s whose lifetimes ($T$'s) are greater
than $\tau_{int}(Q^2, x_B)$ can absorb the corresponding
$\gamma^\star$.
That
is to say, when we consider diffractive electron-proton scattering in
kinematical regions in which $c_0^\star$'s dominate,
we must keep in mind that the measured cross-sections (and thus the
diffractive structure function $F_2^{D(3)}$)
only include contributions from collision-events in which the
condition $T>\tau_{\rm int}(Q^2,x_B)$
is satisfied\,!
We note that $\tau_{\rm int}$ can be estimated by making use of the
uncertainty principle. In fact, by calculating $1/q^0$ in the
above-mentioned
reference frame,
we obtain
\begin{equation}
\label{e4}
\tau_{\rm int}={4|\vec P|\over Q^2}{x_B\over 1-x_B},
\end{equation}
which implies that, for given $|\vec P|$ and $Q^2$ values,
\begin{equation}
\label{eee3}
\tau_{\rm int}\propto x_B,\hskip 1cm \mbox{\rm for } x_B\ll 1.
\end{equation}
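In units where $\hbar = c = 1$, this estimate is elementary to evaluate numerically. The following sketch (with purely illustrative values for $|\vec P|$ and $Q^2$) makes the proportionality to $x_B$ explicit:

```python
def tau_int(p_abs, q2, x_b):
    """Interaction time of the virtual photon in natural units:
    tau_int = (4 |P| / Q^2) * x_B / (1 - x_B)."""
    return 4.0 * p_abs / q2 * x_b / (1.0 - x_b)

# For fixed |P| and Q^2 the ratio tau_int / x_B becomes constant
# once x_B << 1, i.e. tau_int is proportional to x_B:
ratios = [tau_int(2.0, 3.0, x) / x for x in (1e-3, 1e-4, 1e-5)]
```

The ratios agree to within about $0.1\%$, which is the content of the small-$x_B$ limit quoted above.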
This means, for diffractive $e^-p$ scattering events in the
small-$x_B$ region at given $|\vec
P|$ and $Q^2$ values, $x_B$ is directly proportional to the interaction
time $\tau_{\rm int}$. Taken together with the relationship between
$\tau_{\rm int}$ and the minimum lifetime $T({\rm min})$ of the
$c_0^\star$'s mentioned above, we reach the following conclusion: The
distribution of this minimum value,
$T({\rm min})$ of the $c_0^\star$'s which dominate the
small-$x_B$ ($x_B<10^{-2}$, say) region can be obtained by examining
the $x_B$-dependence of
$F_2^{D(3)}(\beta,Q^2;x_P)/\beta$ discussed in
Eqs. (5), (6) and in Fig. 2. This is because, due to the fact that
this function is proportional to
the quark (antiquark) distributions $q^c_i(\bar{q_i}^c)$ which can be
directly probed by the incident virtual photon
$\gamma^\star$, by measuring $F_2^{D(3)}(\beta,Q^2,x_P)/\beta$
as a function of $x_B\equiv \beta x_P$, we are in fact asking
the following question:
Do the distributions of the charged constituents of
$c_0^\star$ depend on the interaction time $\tau_{\rm int}$,
and thus on the minimum lifetime $T({\rm min})$ of the
gluon-clusters to be detected\,?
We use the identity $x_B\equiv\beta x_P$ and plot the quantity
$F_2^{D(3)}(\beta,Q^2;x_P)/\beta$ against the variable $x_B$
for fixed values of $\beta$ and $Q^2$.
The result of such a log-log plot is given in Fig.4. It shows
not only how the dependence on the
time-degree-of-freedom can be extracted from the existing
data\cite{r3}, but also that, for all the measured
values of $\beta$ and $Q^2$, the quantity
\begin{equation}
\label{e5}
p(x_B|\beta, Q^2) \equiv
{F_2^{D(3)}(\beta, Q^2; x_B/\beta)
\over \beta }
\end{equation}
is approximately
independent of $\beta$, and independent of $Q^2$.
This strongly suggests that the quantity given in Eq.(\ref{e5})
is associated with some {\em global} features of $c_0^\star$ ---
consistent with the observation made in Section \ref{sec:3} which shows
that it {\em cannot} be used to describe the {\em structure} of $c_0^\star$.
This piece of empirical fact can be expressed by setting
$p(x_B|\beta, Q^2)\approx p(x_B)$.
By taking a closer look at this $\log$-$\log$ plot, as well
as the corresponding plots for different sets of
fixed $\beta$- and $Q^2$-values (such plots are not
shown here, they are similar to those in Fig.3),
we see that they are straight lines indicating that
$p(x_B)$ obeys a power-law. What does this piece of
experimental fact tell us? What can we learn from
the distribution of the lower limit of the lifetimes (of the
gluon-systems $c_0^\star$'s)?
In order to answer these questions, let us,
for a moment, assume that we know the lifetime-distribution $D_T(T)$
of the $c_0^\star$'s. In such a case,
we can readily evaluate the integral
\begin{equation}
\label{e6}
I[\tau_{\rm int}(x_B)]\equiv\int^\infty_{\tau_{\rm int}(x_B)}D_T(T)dT,
\end{equation}
and thus obtain the number density of all those clusters which live
longer than the interaction time $\tau_{\rm int}(x_B)$.
Hence, under the statistical assumption
that the chance for a $\gamma^\star$
to be absorbed by one of those
$c_0^\star$'s of lifetime $T$
is proportional to $D_T(T)$ (provided that
$\tau_{\rm int}(Q^2,x_B)\le T$, otherwise this chance is
zero), we
can then interpret the integral in Eq.(13) as follows:
$I[\tau_{\rm int}(Q^2,x_B)]\propto p(x_B)$ is
the probability density for $\gamma^\star$ [associated with the
interaction-time $\tau_{\rm int}(x_B)$]
to be absorbed by $c_0^\star$'s.
Hence,
\begin{equation}
\label{e7}
D_T(x_B)\propto {d\over dx_B}p(x_B).
\end{equation}
This means in particular,
the fact that $p(x_B)$ obeys a power-law in $x_B$ implies that
$D_T(T)$ obeys a power-law in $T$.
Such a {\em behavior is similar} to
that shown in Eq.(\ref{e2}).
In order to see the {\em quality} of this power-law behavior of $D_T$, and
the {\em quality} of its independence of $Q^2$ and $\beta$, we compare the
above-mentioned behavior with the existing data\cite{r3}. In Fig.5,
we show the log-log plots of
$d/dx_B[p(x_B)]$ against $x_B$. We note that $d/dx_B[p(x_B)]$ is
approximately $F_2^{D(3)}(\beta, Q^2; x_B/\beta)/(\beta x_B)$.
The quality of the power-law
behavior of $D_T$ is explicitly shown in Fig.5.
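The chain of reasoning in Eqs.(\ref{e6}) and (\ref{e7}) can be checked on synthetic input: if $p(x_B)$ obeys a power-law $x_B^{-a}$, then $D_T\propto dp/dx_B$ obeys a power-law with exponent $-(a+1)$ (and, since $\tau_{\rm int}\propto x_B$, the same form holds as a function of $T$). The following sketch (our own illustration with an assumed exponent $a=1$; not a fit to the data of Ref.[\ref{r3}]) differentiates a synthetic $p(x_B)$ numerically and recovers the steepened slope:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

a = 1.0                                         # assumed exponent of p(x_B)
xs = [10 ** (-4 + 0.1 * k) for k in range(21)]  # x_B from 1e-4 to 1e-2
p = [x ** (-a) for x in xs]
dp = [(p[k + 1] - p[k - 1]) / (xs[k + 1] - xs[k - 1])  # central differences
      for k in range(1, len(xs) - 1)]
slope = loglog_slope(xs[1:-1], [abs(d) for d in dp])   # expect -(a + 1)
```

The fitted slope comes out at $-(a+1)=-2$: differentiating a power-law steepens its log-log slope by exactly one unit.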
\section{$Q^2$-dependent exponents in the power-laws?}
\label{sec:7}
We have seen, in Sections \ref{sec:5} and \ref{sec:6}, that in
diffractive deep-inelastic
electron-proton scattering, the size- and the
lifetime-distributions of the gluon-clusters obey power-laws,
and that the exponents depend very little
on the variables $\beta$ and $Q^2$. We interpreted
the power-law behaviors as the fingerprints of SOC in the formation
processes of such clusters in form of BTW-avalanches. Can such approximate
independence (or weak
dependence) of the exponents on $Q^2$ and $\beta$
be understood in a physical picture based on
SOC? In
particular, what do we expect to see in photoproduction
processes
where the
associated value for $Q^2$ is zero?
In order to answer these questions, let us
recall the space-time aspects of the
collision processes which are closely related
to the above-mentioned
power-law behaviors.
Viewed in a fast moving frame (e.g. the c.m.s. of the colliding
electron and proton), the states of the interacting soft gluons
originating from the
proton are self-organized.
The colorless gluon-clusters caused by local perturbations
and developed through ``domino effects'' are BTW-avalanches
(see Sections \ref{sec:1} and \ref{sec:5}), the
size-distribution of which [see Eqs.(8) and (1)] are given by
Fig.3. This explicitly shows that
there are gluon-clusters of all sizes,
because a power-law
size-distribution implies that there is no scale in size.
Recall that, since such clusters are color-singlets, their
spatial extensions can be much larger than that of the proton,
and thus they can be ``seen'' also {\em outside} the proton
by a virtual
photon originating from the electron.
In other words, what the virtual photon encounters is a cloud of
colorless gluon-clusters, every one of which is in general partly inside
and partly outside the proton.
The virtual photon, when it encounters a colorless
gluon-cluster, will be absorbed
by the charged constituents
(quarks and antiquarks due to fluctuation of the gluons)
of the gluon-system. Here it is useful to recall that in such a space-time
picture, $Q^2$ is inversely proportional to the transverse size,
and $x_B$ is a measure of the interaction time [See Eqs. (10) and (11)
in Section \ref{sec:6}] of the virtual photon.
It is conceivable, that the values for the cross-sections for virtual
photons (associated with a given $Q^2$ and a given $x_B$) to
collide with gluon-clusters (of a given size and a given
lifetime) may depend on these variables. But, since the
processes of self-organization (which produce such gluon-clusters)
take
place independent of the virtual photon (which originates from the
incident electron and enters ``the cloud'' to
look for suitable partners), the power-law behaviors of the size-
and lifetime-distributions of the gluon-clusters are expected to be
independent of the properties associated with the virtual photon.
This means, by using
$\gamma^\star$'s associated with different values
of $Q^2$ to detect clusters of various sizes,
we are moving up or down on the straight lines in the
log-log plots for the size- and lifetime distributions,
the slopes of which do not change.
In other words,
the approximate $Q^2$-independence of the slope is
a natural consequence of the SOC picture.
As far as the $\beta$-dependence is concerned, we recall the
results obtained in Sections \ref{sec:3} and \ref{sec:4},
which explicitly show the following:
The gluon-clusters ($c_0^\star$'s)
{\em cannot} be considered as {\em hadrons}. In particular, it is
neither possible
nor meaningful
to talk about ``the electro-magnetic structure of the gluon-cluster''.
This suggests that, by studying the $\beta$-dependence of
the ``diffractive structure functions'' we cannot expect to gain further
information about the structure
of the gluon-clusters or further insight about the reaction mechanisms.
Having seen these, we try to look for
measurable quantities in which the integrations over $\beta$
have already been
carried out.
A suitable candidate for this purpose is the differential cross-section
\begin{eqnarray}
\lefteqn{\frac{1}{x_P}\,\frac{d^2\sigma^D}{dQ^2 dx_P}} \nonumber \\
& = &\int d\beta \,\frac{4\pi\alpha^2}{\beta Q^4}\,
\left( 1-y+\frac{y^2}{2}\right)\,
\frac{F_2^{D(3)}(\beta, Q^2; x_P)}{x_P} \nonumber\\
& \approx &
\int d\beta \,\frac{4\pi\alpha^2}{\beta Q^4}\,
\left( 1-y+\frac{y^2}{2}\right)\,
D_S(x_P| \beta, Q^2)
\end{eqnarray}
Together with Eqs.(3) and (8), we see that this cross-section is
nothing else but the effective $\beta$-weighted
$x_P$-distribution $D_S(x_P|Q^2,\beta)$ of the
gluon-clusters. Note that the weighting factors shown on the
right-hand-side of Eq.(15) are simply results of QED!
Next, we use the data\cite{r3} for
$F_2^{D(3)}$ which are available at present,
to do a log-log plot for the integrand of the expression
in Eq.(15) as a function of $x_P$
for different values of $\beta$ and $Q^2$.
This is shown
in Fig.6a. Since the absolute values of this quantity depend
very much on $\beta$, but the slopes of the curves only very little,
we carry out the integration as follows:
We first fit every set of the data separately.
Having obtained the slopes and the intercepts,
we use the obtained fits to perform the integration over $\beta$.
The results are shown in the
\begin{eqnarray*}
\log{\left(\frac{1}{x_P}\,\frac{d^2\sigma^D}{dQ^2\,dx_P}\right)}
& \mbox{\ \ versus\ \ \ } &
\log{(x_P)}
\end{eqnarray*}
plots of Fig.6b.
These results show that the $Q^2$-dependence of the slopes
is practically negligible, and that the slope
is approximately $-1.95$ for all values of $Q^2$.
Furthermore, in order to see whether the quantity introduced in
Eq.(15) is indeed
useful, and in order to perform a decisive test of the
$Q^2$-independence of the slope in the power-law behavior
of the above-mentioned size-distributions,
we now
compare the results in deep-inelastic
scattering\cite{r3} with those obtained in photoproduction\cite{r18},
where LRG events have
also been observed. This means, as in
diffractive deep-inelastic scattering, we again associate the
observed effects with colorless objects which are interpreted as
systems of interacting soft gluons originating from the proton.
In order to find out whether it is the same kind of
gluon-clusters as in deep-inelastic scattering, and whether
they ``look'' very much different when we probe them with
real ($Q^2=0$) photons, we replot the existing
$d\sigma/dM_x^2$ data\cite{r18} for photoproduction
experiments performed at different total energies,
and note
the kinematical relationship between $M_x^2$, $W^2$ and $x_P$
(for $Q^2\ll M^2$ and $|t|\ll M_x^2$):
\begin{eqnarray}
x_P \approx \frac{M_x^2}{W^2} & &
\end{eqnarray}
The result of the corresponding
\begin{eqnarray*}
\log{\left(\frac{1}{x_P}\,\frac{d\sigma}{dx_P}\right)}
& \mbox{\ \ versus\ \ \ } &
\log{(x_P)}
\end{eqnarray*}
plot is shown in Fig.7. The slope obtained from a least-square
fit to the existing data\cite{r18} is $-1.98\pm 0.07$.
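Slopes such as the $-1.95$ and $-1.98\pm 0.07$ quoted above can be obtained from an ordinary least-squares fit of a straight line in the log-log plane. The following sketch shows such a fit together with the standard error of the slope, applied to synthetic data generated with an assumed exponent of $-2$ and arbitrary noise (it illustrates the method only; it is not a fit to the data of Refs.[\ref{r3}] and [\ref{r18}]):

```python
import math, random

def fit_powerlaw(xs, ys):
    """Fit log(y) = c + s*log(x) by least squares;
    return the slope s and its standard error."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx = sum(lx) / n
    sxx = sum((v - mx) ** 2 for v in lx)
    s = sum((a - mx) * b for a, b in zip(lx, ly)) / sxx
    c = sum(ly) / n - s * mx
    resid = [b - (c + s * a) for a, b in zip(lx, ly)]
    se = math.sqrt(sum(r * r for r in resid) / ((n - 2) * sxx))
    return s, se

# Synthetic x_P-distribution around x_P^(-2) with lognormal scatter:
rng = random.Random(1)
xs = [10 ** (-3 + 0.2 * k) for k in range(11)]   # x_P from 1e-3 to 1e-1
ys = [x ** -2.0 * math.exp(rng.gauss(0.0, 0.1)) for x in xs]
slope, err = fit_powerlaw(xs, ys)                # slope close to -2
```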
The results obtained in diffractive
deep-inelastic electron-proton scattering
and that for diffractive photoproduction strongly suggest
the following: The formation processes of gluon-clusters
in the proton are due to self-organized criticality, and thus
the spatial distributions of such clusters
--- represented by the $x_P$-distribution ---
obey power-laws.
The exponents of
such power-laws are
independent of
$Q^2$. Since $1/Q^2$ can be interpreted
as a measure for the transverse
size of the incident virtual photon, the observed
$Q^2$-independence of the exponents can be
considered as further evidence for SOC ---
in the sense that the self-organized gluon-cluster formation
processes take place independent of the
photon which is ``sent in'' to detect the clusters.
Having these results, and the close relationship between real photon
and hadron in mind, we are immediately led to the following questions:
What shall we see, when we replace the (virtual or real) photon by a
hadron --- a proton or an antiproton? (See in this connection Fig.8,
for the notations and the kinematical relations for the description of
such scattering processes.) Should we not see similar behaviors, if
SOC in gluon-systems is indeed the reason for the occurrence of
colorless gluon-clusters which can be probed experimentally in
inelastic diffractive scattering processes? To answer these questions,
we took a closer look at the available single diffractive
proton-proton and proton-antiproton scattering data\cite{r4,r5};
and in
order to make quantitative comparisons, we plot the quantities
which correspond to those shown in Fig.\ref{figure6}b and Fig.\ref{figure7}.
These plots are shown
in Fig.\ref{figure9}a and Fig.\ref{figure9}b.
In Fig.\ref{figure9}a, we see the double differential
cross-section $(1/x_P)d^2\sigma/(dtdx_P)$ at four different
$t$-values.
In Fig.\ref{figure9}b,
we see the integrated differential cross-section
$(1/x_P)d\sigma/dx_P$.
Note that, here
\begin{equation}
x_P \approx M_x^2/s,
\end{equation}
where $\sqrt{s}$ is the total
c.m.s. energy of the colliding proton-proton or antiproton-proton system.
Here, the integrations of
the double
differential cross-section
over $t$ are carried out over the ranges in which
the corresponding experiments have been performed.
(The extremely
weak energy-dependence has been ignored in the integration.)
The
dashed lines in all the plots in Figs.\ref{figure9}a and
\ref{figure9}b
stand for the slope
$-1.97$ which is the average of the slope obtained from the plots
shown in Figs.\ref{figure6}b and \ref{figure7}.
This means, the result shows exactly what we expect to see: The
fingerprints of SOC can be clearly seen
also in proton- and antiproton-induced
inelastic diffractive scattering processes,
showing that such characteristic features
are indeed universal and robust!
We are thus led to the following conclusions. Color-singlet
gluon-clusters can be formed in hadrons as a consequence of
self-organized criticality (SOC) in systems of interacting soft
gluons. In other words, ``the colorless objects'' which dominate the
inelastic diffractive scattering processes are BTW-avalanches
(BTW-clusters). Such color-singlet gluon-clusters are in general distributed
partly inside and partly outside the confinement region of the
``mother-hadron''. Since the interactions between the color-singlet
gluon-clusters and other color singlet objects (including the target
proton) should be of Van der Waals type, it is expected that such an
object can be
readily driven out of the above-mentioned confinement region
by the beam-particle in geometrically more peripheral collisions. This
is why we examined inelastic single-diffractive
scattering processes at high energies in which virtual photon, real
photon, proton, and antiproton are used as beam particles. This is
also why we can extract the universal distributions of such
color-singlet gluon-clusters directly from the data. In particular,
given the fact that $x_P$ is the energy fraction carried by the struck
colorless gluon-cluster, and the fact that the $x_P$-distributions are
universal, it
is tempting to regard such $x_P$-distributions as ``the
parton-distributions''
for diffractive scattering processes. Can we make use of such
``parton-distributions'' to describe and/or to predict the measurable
cross-sections in inelastic diffractive scattering processes? This and
other related questions will be discussed in Part II of the present paper.
\subsection*{Acknowledgement}
We thank P. Bak, X. Cai, D. H. E. Gross, C. S. Lam, Z. Liang,
K. D. Schotte, K. Tabelow and E. Yen
for helpful discussions, R. C. Hwa, C. S. Lam and J. Pan for
correspondence, and FNK der FU-Berlin
for financial support.
Y. Zhang also thanks Alexander
von Humboldt Stiftung for the fellowship granted to him.
\makeatletter
\patchcmd{\HyOrg@maketitle}
{\section{\normalfont\normalsize\abstractname}}
{\section*{\normalfont\normalsize\abstractname}}
{}{\typeout{Failed to patch abstract.}}
\patchcmd{\HyOrg@maketitle}
{\section{\protect\normalfont{\@title}}}
{\section*{\protect\normalfont{\@title}}}
{}{\typeout{Failed to patch title.}}
\makeatother
\shorttitle{Bayes factor workflow}
\keywords{Bayes factors, Bayesian model comparison, Prior, Posterior, Simulation-based calibration}
\usepackage{csquotes}
\usepackage{amsmath}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[backend=biber,useprefix=true,style=apa,url=false,doi=false,sorting=nyt,eprint=false]{biblatex}
\DeclareLanguageMapping{american}{american-apa}
\addbibresource{r-references.bib}
\usepackage{fancyvrb}
\usepackage{newfloat}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{listings}
\usepackage{glossaries}
\makeglossaries
\lstset{language=R, frame=single, basicstyle=\small\ttfamily, stringstyle=\color{DarkGreen}, otherkeywords={0,1,2,3,4,5,6,7,8,9}, morekeywords={TRUE,FALSE}, deletekeywords={data,frame,as,character}, keywordstyle=\color{blue}, commentstyle=\color{DarkGreen}}
\DeclareFloatingEnvironment[fileext=los, listname=List of Schemes, name=Listing, placement=!htbp, within=section]{listing}
\usepackage{placeins}
\usepackage{todonotes}
\ifxetex
\usepackage{polyglossia}
\setmainlanguage[]{english}
\else
\usepackage[shorthands=off,main=english]{babel}
\fi
\ifluatex
\usepackage{selnolig}
\fi
\newlength{\cslhangindent}
\setlength{\cslhangindent}{1.5em}
\newenvironment{cslreferences}%
{\setlength{\parindent}{0pt}%
\everypar{\setlength{\hangindent}{\cslhangindent}}\ignorespaces}%
{\par}
\title{Workflow Techniques for the Robust Use of Bayes Factors}
\author{Daniel J. Schad\textsuperscript{1, 2, 3}, Bruno Nicenboim\textsuperscript{2, 3}, Paul-Christian Bürkner\textsuperscript{4}, Michael Betancourt\textsuperscript{5}, \& Shravan Vasishth\textsuperscript{3}}
\date{}
\authornote{
Correspondence concerning this article should be addressed to Daniel J. Schad, Health and Medical University, Potsdam, Germany. E-mail: \href{mailto:[email protected]}{\nolinkurl{[email protected]}}
}
\affiliation{\vspace{0.5cm}\textsuperscript{1} Health and Medical University Potsdam, Germany\\\textsuperscript{2} Tilburg University, Netherlands\\\textsuperscript{3} University of Potsdam, Germany\\\textsuperscript{4} University of Stuttgart, Germany\\\textsuperscript{5} Symplectomorphic, New York, USA}
\abstract{
Inferences about hypotheses are ubiquitous in the cognitive sciences. Bayes factors provide one general way to compare different hypotheses by their compatibility with the observed data. Those quantifications can then also be used to choose between hypotheses. While Bayes factors provide an immediate approach to hypothesis testing, they are highly sensitive to details of the data/model assumptions. Moreover, it is not clear how straightforwardly this approach can be implemented in practice, and in particular how sensitive it is to the details of the computational implementation. Here, we investigate these questions for Bayes factor analyses in the cognitive sciences. We explain the statistics underlying Bayes factors as a tool for Bayesian inferences and discuss that utility functions are needed for principled decisions on hypotheses. Next, we study how Bayes factors misbehave under different conditions. This includes a study of errors in the estimation of Bayes factors. Importantly, it is unknown whether Bayes factor estimates based on bridge sampling are unbiased for complex analyses. We are the first to use simulation-based calibration as a tool to test the accuracy of Bayes factor estimates. Moreover, we study how stable Bayes factors are against different MCMC draws, and how Bayes factors depend on variation in the data. We also look at variability of decisions based on Bayes factors and how to optimize decisions using a utility function. We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis, and we illustrate this workflow using an example from the cognitive sciences. We hope that this study will provide a workflow to test the strengths and limitations of Bayes factors as a way to quantify evidence in support of scientific hypotheses.
Reproducible code is available from \url{https://osf.io/y354c/}.
}
\begin{document}
\maketitle
{
\hypersetup{linkcolor=}
\setcounter{tocdepth}{3}
\tableofcontents
}
\hypertarget{introduction}{%
\section{Introduction}\label{introduction}}
In the cognitive sciences and related areas, recent years have seen a rise in Bayesian approaches to data analysis. Many cognitive science journals have published special issues on Bayesian data analysis, including methodological journals such as the Journal of Mathematical Psychology (Lee, 2011; Mulder \& Wagenmakers, 2016) and Psychological Methods (Chow \& Hoijtink, 2017; Hoijtink \& Chow, 2017), but also the more experimental journal Psychonomic Bulletin \& Review (Vandekerckhove, Rouder, \& Kruschke, 2018). Further introductory articles have been contributed (see Etz \& Vandekerckhove, 2018; Doorn, Aust, Haaf, Stefan, \& Wagenmakers, 2021; Etz et al., 2018; Nicenboim \& Vasishth, 2016; Sorensen, Hohenstein, \& Vasishth, 2016; Vasishth, Nicenboim, Beckman, Li, \& Kong, 2018). That Bayesian analyses are so prominently discussed and used is an indication that Bayesian approaches are becoming increasingly mainstream (Gelman et al., 2014).
Bayesian approaches provide tools for different aspects of data analysis. Bayesian data analysis plays an important role in cognitive science as it allows us to carry out inference, i.e., a way to quantify the evidence that data provide in support of one hypothesis or another. Such Bayesian hypothesis testing can be implemented using Bayes factors (Gronau et al., 2017a; Heck et al., 2020; Jeffreys, 1939; Kass \& Raftery, 1995; Rouder, Haaf, \& Vandekerckhove, 2018; Schönbrodt \& Wagenmakers, 2018; Wagenmakers, Lodewyckx, Kuriyal, \& Grasman, 2010), which quantify evidence in favor of one model over another, where each model implements one scientific hypothesis about the data (for a critique of Bayes factors see Navarro, 2019).
Bayes factors are increasingly used in the cognitive sciences and other fields of science (Heck et al., 2020). However, while Bayes factors provide an immediate approach to hypothesis testing, they are known to be highly sensitive to details of the data and model assumptions. Moreover, it is unclear how straightforward they are to implement in practice and how sensitive they are to the details of the computational implementation.
First, the results of Bayes factor analyses are highly sensitive to and crucially depend on prior assumptions about model parameters (we illustrate this below) (Aitkin, 1991; Gelman et al., 2013; Grünwald, 2000; Liu \& Aitkin, 2008; Myung \& Pitt, 1997; Vanpaemel, 2010). That is, in Bayesian inference, researchers specify a priori assumptions about which parameter values they consider most likely before seeing the data. These priors can vary between experiments/research problems and even differ subjectively between different researchers, which will change the resulting evidence based on Bayes factors. Note that the dependency of Bayes factors on the prior goes beyond the dependency of the posterior on the prior.
Importantly, for most interesting problems and models, Bayes factors cannot be computed analytically. Instead, approximations are needed. One major approach is to estimate Bayes factors based on posterior MCMC draws (Betancourt, 2020a) via an algorithm termed bridge sampling (Bennett, 1976; Meng \& Wong, 1996), which is implemented in the R package \texttt{bridgesampling} (Gronau, Singmann, \& Wagenmakers, 2020). An alternative algorithm that we will discuss is the Savage--Dickey method (Dickey, Lientz, \& others, 1970). The approximate Bayes factor estimate may be unstable if insufficient MCMC draws are used (for the bridge sampling or the Savage--Dickey method), leading to different Bayes factors each time the analysis is performed (see Gronau et al., 2020). This sensitivity of the estimator to the particular Markov chain realization is also known as the variance of the estimator.
Even if the estimation of Bayes factors via bridge sampling yields stable results, it is still unclear whether the computations are accurate or biased for complex problems, i.e., whether the approximate Bayes factor estimate actually corresponds to the true Bayes factor. Such a systematic error is known as the bias of the estimator. This potential bias is concerning, as - for realistic complex models - there are no guarantees that the Bayes factor estimates we obtain are correct. It is therefore crucial to calibrate Bayes factor estimates, which we do in the present work.
As a further important aspect, any variability that is present in the data will also impact the results of Bayes factor analyses. Any inferences and decisions will always depend on the particular details of the observed data, and there is no way around this. Accordingly, computing Bayes factors does not mean that we can obtain some abstract and reliable ``truth'' from observed data, which are still sampled with considerable noise. Bayes factors - just like frequentist p-values or any quantification of evidence - can vary considerably between replications of the same experiment. Excessive variation is a common consequence of poor experimental design, which limits the conclusions that can be drawn from individual data sets (Oelrich, Ding, Magnusson, Vehtari, \& Villani, 2020). To avoid fragile discovery claims, we need to ensure that testing based on Bayes factors is relatively stable across possible realizations of the data.
Last, we should not confuse inferences with decisions. Bayes factors provide inference on hypotheses. However, to obtain discrete decisions, such as to claim discovery, from continuous inferences in a principled way requires utility functions. Common decision heuristics (e.g., using Bayes factor larger than 10 as a discovery threshold) do not provide a principled way to perform decisions, but are merely heuristic conventions.
Indeed, simply selecting the hypothesis most compatible with the observed data need not result in useful outcomes. Frequentist null hypothesis significance testing, for example, bases testing not on inferences but rather on false discovery rates and true discovery rates, which are examples of \emph{utility functions}. To ensure that Bayes factors inform useful hypothesis tests, we need to define relevant utility functions and investigate the performance of Bayes factors in that context.
In this paper, we investigate these different aspects of the performance of Bayes factors (see Fig.~\ref{fig:FigureBF}). We investigate how Bayes factors are influenced by prior assumptions; we investigate the stability of Bayes factors, i.e., how many MCMC draws are needed so that Bayes factor estimates do not change across different runs of the bridge sampling algorithm; we study accuracy, i.e., whether the approximations are biased or correspond to the true Bayes factor; we look at the variability of Bayes factors across artificial and real replications of empirical data; and we look at decision-making based on Bayes factors using utility functions.
\begin{figure}
{\centering \includegraphics{Figure_BF2}
}
\caption{Shown are the schematic relations between the data and the model, Bayes factors, and resulting inferences and decisions. The data and the model determine a true Bayes factor, which can be used for data-informed inferences and decisions (dark red arrows). However, the true Bayes factor is unknown for complex models. Therefore, we use the data and the model to obtain an approximate Bayes factor estimator, and we use this for data-informed inferences and decisions (light red arrows).}\label{fig:FigureBF}
\end{figure}
Note that in Bayesian approaches to data analysis in the cognitive sciences, approaches other than Bayes factors are sometimes used to investigate the viability of hypotheses. For example, some researchers use the posterior of a fitted model to test whether the, e.g., 95\% posterior credible interval for some critical parameter overlaps with zero, and treat this as a Bayesian hypothesis test. Other approaches compute the probability that a parameter is larger than zero. Importantly, however, these approaches cannot really answer the question: how much evidence do we have in support of an effect at all (i.e., versus the hypothesis of no effect)? A 95\% credible interval that does not overlap with zero, or a high probability that the parameter is positive, may \emph{hint} that the predictor may be needed to explain the data, but it does not answer the question of how much evidence there is that the parameter is needed to explain the data (versus the null hypothesis that the parameter can be set, e.g., to zero) (see Wagenmakers, Lee, Rouder, \& Morey, 2019; Rouder et al., 2018). This point is often overlooked in the literature, and many papers use 95\% posterior credible intervals to argue that there is evidence for or against an effect. This is a mistake that the second and last authors of this paper have indeed made in the past (e.g., Nicenboim \& Vasishth, 2016; Jäger, Engelmann, \& Vasishth, 2017); using 95\% posterior credible intervals to argue that there is evidence for an effect is in fact not well defined. In this work, we introduce proper approaches to Bayesian inference and decision making on hypotheses using Bayes factors, which allow us to explicitly quantify the evidence that the data provide for the hypothesis that a certain model parameter is needed to explain the data.
\hypertarget{a-quick-review-of-bayesian-methodology}{%
\subsection{A quick review of Bayesian methodology}\label{a-quick-review-of-bayesian-methodology}}
Statistical analyses in the cognitive sciences often pursue two goals: to estimate parameters and to test hypotheses. Both of these goals can be achieved using Bayesian data analysis. Bayesian approaches to data analysis focus on a model, which can range from a relatively simple statistical model, such as a linear regression or a multilevel (i.e., linear mixed-effects) model, to a complex non-linear model, such as a computational model of cognition. Indeed, dealing with Bayes factors always implies a set of models, since a Bayes factor is a comparison of the evidence for two models.
Critically, the model specifies an ``observational'' model \(\mathcal{M}\), which is a mathematical function giving the probability density of the data \(y\) given the vector of model parameters \(\Theta\) and the model \(\mathcal{M}\). This is usually written as \(p(y \mid \Theta, \mathcal{M})\), or, dropping the model \(\mathcal{M}\), simply as \(p(y \mid \Theta)\). Since \(y\) is a free variable in the model, the observational model can be used to simulate data, by selecting some model parameters \(\Theta\) and drawing random samples for the data \(\tilde{y}\). We use this approach heavily in our simulations below. However, the model is also highly useful once we have collected (or simulated) some data and want to estimate parameters and make inferences. When the data are given (fixed), the observational model turns into a likelihood function: \(p(y \mid \Theta) = L_y(\Theta)\), where the likelihood varies as a function of the model parameters \(\Theta\). This can be used to estimate model parameters or to compute evidence for the model relative to other models.
Let's consider an example, where for each of \(N\) subjects \(n\), we observe one data point \(y_n\) (e.g., the person's IQ). Let's assume in model \(\mathcal{M}_1\) that the data points follow a normal distribution. We can now describe the probability density\footnote{To be precise, note that the likelihood function is technically defined without any terms that don't depend on the parameters. Thus, technically \(\sqrt{2\pi}\) wouldn't be part of the likelihood function even though it's part of the observational model. These technicalities, however, don't affect our inferences and so here we write down the full observational model as the likelihood for simplicity.} for each observed data point \(y_n\) in subject \(n\) based on model parameters for the mean \(\mu\) and the standard deviation \(\sigma\) as:
\begin{equation}
p(y_n \mid \mu, \sigma, \mathcal{M}_1) = \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(y_n - \mu)^2}{2\sigma^2}}
\end{equation}
This formula gives the likelihood for the data point \(y_n\) from one subject \(n\). However, we have data from multiple subjects. We assume that the data from the different subjects are conditionally independent from each other (given the parameters). This yields the following formula for the likelihood for the described simple linear model example: \(p(y \mid \mu, \sigma, \mathcal{M}_1) = \prod_n \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(y_n - \mu)^2}{2\sigma^2}}\).
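As a concrete illustration, the likelihood of \(N\) observations under this normal model can be computed as follows. This is a minimal sketch in Python; the data values and parameter settings are made up for illustration and are not taken from any analysis in this paper. In practice one works on the log scale to avoid numerical underflow:

```python
import math

def normal_density(y, mu, sigma):
    # Density of a single observation y_n under N(mu, sigma^2)
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical IQ scores for N = 5 subjects
y = [96.0, 104.0, 99.0, 101.0, 110.0]

# Likelihood = product of per-subject densities (conditional independence)
likelihood = math.prod(normal_density(y_n, mu=100.0, sigma=15.0) for y_n in y)

# Equivalent, numerically safer: sum of log densities
log_likelihood = sum(math.log(normal_density(y_n, mu=100.0, sigma=15.0)) for y_n in y)
```

The product of many densities quickly underflows for realistic sample sizes, which is why statistical software operates on log likelihoods internally.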
Based on this simple model, we can express different hypotheses to explain the data \(y\). For example, we could formulate the general hypothesis that the parameter \(\mu\) can take any possible value \(\mu \neq 0\), i.e., \(p(y \mid \mu \neq 0, \sigma, \mathcal{M}_1)\). However, this model can also be used to specify interval or point hypotheses. An example for a point hypothesis could be to postulate that the parameter \(\mu\) takes the value \(\mu = 100\).
This would yield the probability density: \(p(y \mid \mu=100, \sigma, \mathcal{M}_0) = \prod_n \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(y_n - 100)^2}{2\sigma^2}}\). An example for an interval hypothesis could be that we assume that the parameter \(\mu\) is larger than \(100\), which we could specify as
\begin{equation}
p(y \mid \mu > 100, \sigma, \mathcal{M}_0) = \left\{ \begin{array}{ll} \prod_n \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(y_n - \mu)^2}{2\sigma^2}}, & \mu > 100 \\
0, & \mu \le 100 \end{array} \right.
\end{equation}
We will discuss below how Bayes factors can be used to quantify relative evidence for such different hypotheses.
In these models, one key goal is to estimate model parameters from data. In Bayesian data analysis, inferences are constructed by complementing the likelihood with the prior model, written \(p(\Theta)\), which defines a probability distribution encoding whatever domain expertise we want to incorporate into the analysis. From a strict Bayesian perspective, the information encoded in the prior model should be independent of the observed data; this can be accomplished, for example, by specifying the prior model before making an observation, but this is not always necessary. To inform prior distributions, it is often useful to rely on analyses of previous data sets, meta-analyses, or theoretical models.
Based on the likelihood and the prior, it is possible to compute the posterior distribution of the model parameters. The posterior distribution represents the results of inferences about which values of the model's parameters are most probable given the likelihood and the priors. The posterior is usually written as \(p(\Theta \mid y, \mathcal{M}_1)\) and represents posterior probability distributions specifying how likely each value of a model parameter is a posteriori, that is after seeing the data \(y\) and given the model \(\mathcal{M}_1\). Bayes' rule specifies how the posterior distributions \(p(\Theta \mid y, \mathcal{M}_1)\) can be computed by combining the prior \(p(\Theta \mid \mathcal{M}_1)\) with the likelihood \(p(y \mid \Theta, \mathcal{M}_1)\), reflecting updates of beliefs in the light of data:
\begin{equation}
p(\Theta \mid y, \mathcal{M}_1) = \frac{p(y \mid \Theta, \mathcal{M}_1) p(\Theta \mid \mathcal{M}_1)}{p(y \mid \mathcal{M}_1)} \label{eq:marginall}
\end{equation}
\noindent
Here, \(p(y \mid \mathcal{M}_1)\) is a normalizing constant termed the ``evidence'' or ``marginal likelihood'', which is the likelihood of the data based on the model independent of the parameters \(\Theta\), and is derived as \(p(y \mid \mathcal{M}_1) = \int p(y \mid \Theta, \mathcal{M}_1) p(\Theta \mid \mathcal{M}_1) d \Theta\). This quantity plays a central role in Bayesian model comparison via Bayes factors, as we will describe below.
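The integral defining the marginal likelihood can be approximated by simple Monte Carlo: draw parameter values from the prior, evaluate the likelihood at each draw, and average. The sketch below (Python, with made-up data and an assumed prior \(\mu \sim N(100, 10^2)\) and known \(\sigma = 15\)) illustrates the integral only; it is a naive estimator for didactic purposes, not the bridge sampling algorithm used for realistic models:

```python
import math
import random

random.seed(1)

def likelihood(y, mu, sigma=15.0):
    # Likelihood of the full data set under N(mu, sigma^2)
    return math.prod(
        math.exp(-(y_n - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        for y_n in y
    )

y = [96.0, 104.0, 99.0, 101.0, 110.0]  # hypothetical data

# Naive Monte Carlo estimate of the marginal likelihood:
# average the likelihood over draws from the prior of mu.
prior_mu, prior_sd = 100.0, 10.0       # assumed prior: mu ~ N(100, 10^2)
draws = [random.gauss(prior_mu, prior_sd) for _ in range(50_000)]
marginal_likelihood = sum(likelihood(y, mu) for mu in draws) / len(draws)
```

Because the likelihood is evaluated at prior draws, parameter values the prior considers implausible contribute almost nothing to the average, which is precisely why the marginal likelihood is so sensitive to the prior.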
Note that the marginal likelihood is a single number giving the likelihood of the observed data \(y\) given the model \(\mathcal{M}_1\) (only in the discrete case does it give the probability of the observed data; in the continuous case, the probability of any specific data set is always zero, and the density is evaluated instead). The marginal likelihood is not a function of the model parameters \(\Theta\); the parameters are ``marginalized'', i.e., integrated out. Instead, the marginal likelihood maps entire models to likelihood values: the likelihood is evaluated for all possible parameter values, weighted by their prior plausibility, and summed together. For this reason, \emph{the prior here is as important as the likelihood}! The marginal likelihood itself is not particularly interpretable until we consider multiple models: it can only be interpreted relative to another marginal likelihood; we will illustrate this issue below.
Priors play a key role in the performance of Bayesian inference; in particular they can regularize inferences when the data do not inform the likelihood functions sufficiently strongly. We will see below, however, that they will influence marginal likelihoods and thus Bayes factors, and anything informed by Bayes factors, even when the data are strongly informative. Thus, priors are even more crucial for Bayes factors than for posterior distributions (Aitkin, 1991; Gelman et al., 2013; Grünwald, 2000; Liu \& Aitkin, 2008; Myung \& Pitt, 1997; Vanpaemel, 2010).
For very simple models, posterior density functions can be computed analytically, which then allows certain expectation values (e.g., the posterior mean) to be evaluated analytically as well. That is, mathematical formulas can be derived from the likelihood and the prior to obtain a closed form formula for the posterior densities. However, for most interesting models, e.g., for multilevel models, which we will deal with in the current paper, such closed-form analytical solutions are not available and we have to rely on methods that approximate posterior expectation values. An alternative approach to estimating the posterior is to use sampling methods such as Markov Chain Monte Carlo sampling, which is the method behind popular software implementing Bayesian analysis such as Stan (Carpenter et al., 2017), JAGS (Plummer \& others, 2003), WinBUGS (Lunn, Thomas, Best, \& Spiegelhalter, 2000), PYMC3 (Salvatier, Wiecki, \& Fonnesbeck, 2016), Turing (Ge, Xu, \& Ghahramani, 2018), and others. These methods allow us to obtain samples from the posterior distribution, which can be used to obtain approximate estimates for posterior expectations, such as the mean of the posterior distribution or the standard deviation.
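Software such as Stan implements sophisticated Hamiltonian Monte Carlo samplers; purely to illustrate the idea behind MCMC, the following sketch implements a toy random-walk Metropolis sampler for the mean \(\mu\) of the normal model above (Python; data, prior, proposal scale, and warm-up length are all made-up illustrative choices, not a substitute for dedicated software):

```python
import math
import random

random.seed(2)

y = [96.0, 104.0, 99.0, 101.0, 110.0]   # hypothetical data
sigma = 15.0                            # residual sd, treated as known

def log_posterior(mu):
    # log prior (mu ~ N(100, 10^2)) plus log likelihood, each up to a constant
    log_prior = -(mu - 100.0) ** 2 / (2 * 10.0 ** 2)
    log_lik = sum(-(y_n - mu) ** 2 / (2 * sigma ** 2) for y_n in y)
    return log_prior + log_lik

mu, samples = 100.0, []
for _ in range(20_000):
    proposal = mu + random.gauss(0.0, 5.0)            # random-walk proposal
    # Accept with probability min(1, posterior ratio), computed on the log scale
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

# Approximate the posterior mean from the draws, discarding warm-up
posterior_mean = sum(samples[5_000:]) / len(samples[5_000:])
```

For this conjugate setup the posterior mean is also available analytically, which makes it easy to check that the sampler converges to the right answer; for the multilevel models discussed later, no such closed form exists and sampling is the only option.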
\hypertarget{inference-and-discovery}{%
\section{Inference and discovery}\label{inference-and-discovery}}
\hypertarget{hypotheses}{%
\subsection{Hypotheses}\label{hypotheses}}
Three different kinds of hypotheses can be derived from an observational model: general hypotheses (full parameter range), point hypotheses (one specific parameter value), and interval hypotheses (interval of parameter within a model) (also see Betancourt, 2018).
A point hypothesis is defined by restricting one or more of the model parameters to specific values. The other model parameters, however, for example nuisance parameters, will generally be unconstrained. One example of a point hypothesis is that a model parameter is hypothesized to be zero. By contrast, in general hypotheses, all different values for the model parameter are possible. That is, it is hypothesized that the parameter exists (e.g., a parameter representing a difference between two experimental conditions) and that it takes some value, which can be estimated from the data. Sometimes, no constraints are put on the possible parameter values by using an improper uniform prior. At other times, some parameter values are considered more likely than others, but still, all values for the model parameter are possible in principle.
By contrast, interval hypotheses specify that a given model parameter is within a given interval or range. For example, an interval could involve the hypothesis that a parameter takes a positive value, and not a negative value. An alternative for an interval hypothesis could be that we specify one parameter to be bounded, e.g., that the parameter lies in the range between 0 and 1. Sometimes, an interval hypothesis can be used to capture the intent of a point hypothesis: i.e., a parameter might be hypothesized to be very close to zero, e.g., between -0.1 and +0.1, such that it can be treated as being zero from a practical perspective (i.e., in a region of practical equivalence; ROPE; Kruschke, 2011; Freedman, Lowe, \& Macaskill, 1984; Spiegelhalter, Freedman, \& Parmar, 1994).
To illustrate, let's assume an observational model: \(p(y \mid \Theta)\) (e.g., a multilevel/linear mixed effects model). We can partition the model parameters as follows: \(\Theta = \{ \Theta_1, \Theta_2 \}\). That is, we assume the model parameters consist of two blocks, namely \(\Theta_1\) and \(\Theta_2\). For example, in our multilevel models \(\Theta_1\) could contain the fixed effect of interest (e.g., the regression coefficient associated with some predictor variable, e.g., cloze predictability), whereas \(\Theta_2\) may capture all other parameters (e.g., the intercept, random effects, and the residual variance). Based on this partition, we can distinguish a point hypothesis, an interval hypothesis, and a general hypothesis for \(\Theta_1\) (see Fig.~\ref{fig:FigureHyp}). In the point hypothesis, we assume that \(\Theta_1\) takes exactly one specific value, in our example zero: \(\Theta_1 = 0\), leading to the observational model \(p(y \mid \Theta_1 = 0, \Theta_2, \mathcal{M}_0)\). In the interval hypothesis, we assume that \(\Theta_1\) is not zero but takes some range of values, e.g., \(\Theta_1 > 0\), leading to the observational model \(p(y \mid \Theta_1 > 0, \Theta_2, \mathcal{M}_1)\). In the general hypothesis, we assume that \(\Theta_1\) can take any possible value, e.g., \(\Theta_1 \neq 0\), which leads to the observational model \(p(y \mid \Theta_1 \neq 0, \Theta_2, \mathcal{M}_2)\).
\begin{figure}
{\centering \includegraphics{Figure_Hyp}
}
\caption{Illustration of different types of hypotheses for two parameters Theta 1 and Theta 2. (a) Point hypothesis in all parameters. (b) Point hypothesis in some parameters. (c) Interval hypothesis in all parameters. (d) Interval hypothesis in some parameters. (e) General (full) hypothesis.}\label{fig:FigureHyp}
\end{figure}
\hypertarget{inference-over-hypotheses}{%
\subsection{Inference over hypotheses}\label{inference-over-hypotheses}}
\hypertarget{comparing-two-point-hypotheses}{%
\subsubsection{Comparing two point hypotheses}\label{comparing-two-point-hypotheses}}
Point hypothesis tests are widely used in frequentist statistics. Specifically, frequentist statistics can be used to test the alternative hypothesis (i.e., model \(\mathcal{M}_1\)) that a true parameter value is different from zero by considering a point estimate for the parameter value. For each parameter, it chooses the value that maximizes the likelihood, that is, the maximum likelihood estimate (MLE) \(\{ \Theta_1=\hat{\Theta}_1, \Theta_2=\hat{\Theta}_2 \}\). Thus, note that while frequentist statistics aims to test a point null hypothesis against a general hypothesis (i.e., that the parameter is different from zero), in fact it reduces this to a comparison between two point hypotheses by using the MLE for model comparison! Based on the MLE parameters, one considers how compatible these parameters are with the data, i.e., \(p(y \mid \mathcal{M}_1) = p(y \mid \Theta_1=\hat{\Theta}_1, \Theta_2=\hat{\Theta}_2)\). In the likelihood ratio test, the MLE is compared to a second point hypothesis (\(\mathcal{M}_0\)), namely, that the critical parameter \(\Theta_1\) (e.g., a fixed effect) is zero: \(\Theta_1 = 0\). The other parameters (e.g., intercept or residual variance) are still assumed to take the MLE: \(\Theta_2=\hat{\Theta}_2\). From this, it is again possible to compute how likely the data are under this point parameter value, yielding a second likelihood value: \(p(y \mid \mathcal{M}_0) = p(y \mid \Theta_1=0, \Theta_2=\hat{\Theta}_2)\). In frequentist statistics, evidence for the alternative hypothesis (\(\mathcal{M}_1\)) over the null hypothesis (\(\mathcal{M}_0\)) is computed as the ratio of likelihoods:
\begin{align}
\mathcal{M}_0 &= \{\Theta_1 = 0, \Theta_2 = \hat{\Theta}_2\} \\
\mathcal{M}_1 &= \{\Theta_1 = \hat{\Theta}_1, \Theta_2 = \hat{\Theta}_2\} \\
LR &= \frac{p(y\mid \mathcal{M}_1)}{p(y\mid \mathcal{M}_0)} = \frac{p(y \mid \Theta_1=\hat{\Theta}_1, \Theta_2=\hat{\Theta}_2)}{p(y \mid \Theta_1=0, \Theta_2=\hat{\Theta}_2)}
\end{align}
Thus, the likelihood ratio test depends on the ``best'' estimate for the model parameter(s), that is, the model parameter \(\Theta\) occurs on the right side of the equation for each likelihood.
That means that in the likelihood ratio test, each model is tested on its ability to explain the data conditional on the ``best'' estimate for the model parameters (i.e., the MLE \(\hat{\Theta} = \{ \hat{\Theta}_1, \hat{\Theta}_2 \}\)). The likelihood ratio thus reduces an entire interval hypothesis/model to a single point hypothesis, and this reduction can be problematic.
Importantly, the comparison of point hypotheses completely depends on whether the point estimate for the model parameter(s) is representative of the possible values for the model parameter(s). If the point estimate is not representative, which is often the case in practical data analysis, where there is uncertainty about the precise parameter value, then comparing point hypotheses can be problematic.
Another related issue worth mentioning is that the likelihood ratio test introduces \emph{data-dependent hypotheses}. It is thus not comparing scientific hypotheses any more, but rather algorithmic hypotheses derived from the data. If the likelihood function is sufficiently narrow this might \emph{approximate} a well-defined hypothesis, but in general the difference can be large.
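To make the likelihood ratio concrete, the following sketch computes it for the toy normal model introduced earlier, where the MLE of \(\mu\) is simply the sample mean (Python; the data, the fixed null value \(\mu = 100\), and the assumption of known \(\sigma\) are illustrative simplifications):

```python
import math

y = [96.0, 104.0, 99.0, 101.0, 110.0]   # hypothetical data
sigma = 15.0                            # treat sigma as known for simplicity

def log_likelihood(mu):
    return sum(
        -0.5 * math.log(2 * math.pi * sigma ** 2) - (y_n - mu) ** 2 / (2 * sigma ** 2)
        for y_n in y
    )

mu_hat = sum(y) / len(y)                # MLE of mu is the sample mean

# Likelihood ratio of M1 (mu at the MLE) over M0 (mu fixed at 100)
LR = math.exp(log_likelihood(mu_hat) - log_likelihood(100.0))
```

By construction the ratio is at least 1, since the MLE maximizes the likelihood; this data-dependence of the "best" parameter value is exactly the issue raised in the paragraph above.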
Bayesian analyses can also quantify relative evidence for two point hypotheses. In Bayesian analyses, this relative evidence can be obtained from within one single Bayesian model (\(\mathcal{M}_1\)). In this case, point hypothesis tests can be performed based on the ratio of posterior densities at the point parameter values. For example, one might compare evidence for the hypothesis that a critical parameter \(\Theta_1\) takes a value of e.g., \(\Theta_1 = 100\). Such a point hypothesis could be compared to the assumption that the parameter takes a value of zero (\(\Theta_1 = 0\)). Thus, to compute relative Bayesian evidence, one would take the estimated posterior density at the value of \(\Theta_1 = 100\) (\(p(\Theta_1=100 \mid y)\)) and the posterior density at a parameter value of zero (\(p(\Theta_1=0 \mid y)\)). Taking a ratio between these two posterior densities yields the relative evidence on the comparison of these two point hypotheses, i.e., the posterior evidence in favor of \(\Theta_1 = 100\) over \(\Theta_1 = 0\):
\begin{align}
\text{Posterior density ratio} &= \frac{\int p(\Theta_1=100, \Theta_2 \mid y) \, d\Theta_2}{\int p(\Theta_1=0, \Theta_2 \mid y) \, d\Theta_2}\\&= \frac{\int p(y \mid \Theta_1=100, \Theta_2) \times p(\Theta_1=100, \Theta_2) \, d\Theta_2}{\int p(y \mid \Theta_1=0, \Theta_2) \times p(\Theta_1=0, \Theta_2) \, d\Theta_2}
\end{align}
\begin{equation}
\text{Posterior density ratio} = \mathrm{likelihood\;ratio} \times \mathrm{ratio\;of\;prior\;densities}
\end{equation}
As we can see, the resulting ratio of posterior densities can be rewritten as a product of the ratio of likelihood functions and the ratio of prior densities. In other words the Bayesian comparison of point hypotheses reduces to the frequentist comparison, with a correction that takes into account the information in the prior model.
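This decomposition can be checked numerically. For a model with a single parameter and no nuisance parameters, the posterior normalizing constant cancels in the ratio, so the posterior density ratio equals the likelihood ratio times the prior density ratio exactly. The sketch below uses the conjugate normal-normal model (known \(\sigma\); made-up data and prior) where the posterior is available in closed form:

```python
import math

def normal_pdf(x, m, s):
    return math.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))

y = [96.0, 104.0, 99.0, 101.0, 110.0]   # hypothetical data
n, ybar = len(y), sum(y) / len(y)
sigma = 15.0                            # known residual sd
mu0, tau = 100.0, 10.0                  # prior: mu ~ N(100, 10^2)

# Conjugate normal-normal posterior for mu (analytically available)
post_prec = 1 / tau ** 2 + n / sigma ** 2
post_mean = (mu0 / tau ** 2 + n * ybar / sigma ** 2) / post_prec
post_sd = math.sqrt(1 / post_prec)

theta1, theta0 = 102.0, 100.0           # two point hypotheses for mu

posterior_ratio = normal_pdf(theta1, post_mean, post_sd) / normal_pdf(theta0, post_mean, post_sd)

def likelihood(mu):
    return math.prod(normal_pdf(y_n, mu, sigma) for y_n in y)

lr = likelihood(theta1) / likelihood(theta0)
prior_ratio = normal_pdf(theta1, mu0, tau) / normal_pdf(theta0, mu0, tau)
# posterior_ratio equals lr * prior_ratio up to floating-point error
```

This makes the "correction" relative to the frequentist comparison tangible: even when the likelihood slightly favors \(\mu = 102\), a prior concentrated near 100 pulls the posterior ratio back toward 1.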
\hypertarget{comparing-two-interval-hypotheses}{%
\subsubsection{Comparing two interval hypotheses}\label{comparing-two-interval-hypotheses}}
An alternative type of hypothesis refers to intervals or ranges of parameters. That is, these are cases where the hypothesis simply states that a free model parameter has a certain range of values, but where the precise parameter value is unknown.
As one example, let's assume the hypothesis that a parameter \(\Theta_1\) takes a positive value, \(H_1: \Theta_1 > 0\), is compared to a ROPE: \(H_2: -0.1 < \Theta_1 < 0.1\). In this case, the result is again a ratio of posterior probabilities:
\begin{equation}
\text{Posterior ratio} = \frac{p(\Theta_1>0 \mid y)}{p(-0.1 < \Theta_1 < 0.1 \mid y)}
\end{equation}
However, let's also look at a more specific example case, which is often of relevance in the cognitive sciences. Specifically, one could specify the hypothesis that a critical model parameter \(\Theta_1\) takes a positive value: \(H_1: \Theta_1 > 0\) and compare this to the hypothesis that the parameter value is zero or smaller: \(H_2: \Theta_1 \leq 0\). In this specific case, where both hypotheses together span the full range of possible parameter values, evidence for hypothesis \(H_1\) can be obtained by computing the posterior probability that the parameter is positive, i.e., \(p(\Theta_1>0 \mid y) = \int p(\Theta_1>0, \Theta_2 \mid y) d \Theta_2\).\footnote{Note that in certain special cases (e.g., with a symmetric prior centered around a point null hypothesis using Savage--Dickey estimation), posterior probabilities are in fact Bayes factors.}
When using MCMC sampling to estimate the posterior, one can compute the posterior probability for the hypothesis by taking the proportion of samples that is larger than zero.
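In code, this amounts to a one-line summary of the posterior draws. The sketch below uses simulated draws as a stand-in for real MCMC output (the distribution of the draws is made up purely for illustration):

```python
import random

random.seed(3)

# Stand-in for MCMC draws of the critical parameter Theta_1;
# in a real analysis these would come from the fitted model.
posterior_draws = [random.gauss(0.3, 0.2) for _ in range(10_000)]

# Posterior probability that Theta_1 > 0, estimated as the
# proportion of posterior draws above zero
p_positive = sum(d > 0 for d in posterior_draws) / len(posterior_draws)
```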
\hypertarget{bayes-factors-comparing-two-arbitrary-hypotheses}{%
\subsubsection{Bayes factors: Comparing two arbitrary hypotheses}\label{bayes-factors-comparing-two-arbitrary-hypotheses}}
Comparing more general hypotheses is hard: we can't compare densities to probabilities, so we can't compare \emph{different kinds of hypotheses} with simple ratios as we did above. Instead, we need to reduce the posteriors with different parameter spaces to something compatible that can be compared; because all models share the same observational space, this has to be the marginal likelihood, which is the basis for computing Bayes factors.
Bayes factors thus provide a way to compare any two model hypotheses against each other. This can e.g., involve comparison between two general hypotheses, or comparison between a general hypothesis and a point hypothesis, or any other comparison.
The Bayes factor tells us, given the data and the model priors, how much we need to update our relative belief between the two models. The Bayes factor is thus the ratio of posterior odds to prior odds.
To derive Bayes factors, we first consider the model posterior, i.e., the posterior probability for a model \(\mathcal{M}_i\) given the data: \(p(\mathcal{M}_i \mid y) \propto p(y \mid \mathcal{M}_i) \times p(\mathcal{M}_i)\). This involves the marginal likelihood for each model, that is, the average probability density of the data given the model, \(p(y \mid \mathcal{M}_i)\). This can be computed by taking integrals over the model parameters; that is, the likelihood is averaged across all possible values of the model parameter(s), weighted by the prior: \(p(y \mid \mathcal{M}_i) = \int p(y, \Theta \mid \mathcal{M}_i) d \Theta = \int p(y \mid \Theta, \mathcal{M}_i) p(\Theta \mid \mathcal{M}_i) d \Theta\).
Based on this posterior model probability \(p(\mathcal{M}_i \mid y)\), we can compute the model odds for one model over another as:
\begin{equation}
\frac{p(\mathcal{M}_1 \mid y)}{p(\mathcal{M}_2 \mid y)} = \frac{p(y \mid \mathcal{M}_1) \times p(\mathcal{M}_1)}{p(y \mid \mathcal{M}_2) \times p(\mathcal{M}_2)} = \frac{p(y \mid \mathcal{M}_1)}{p(y \mid \mathcal{M}_2)} \times \frac{p(\mathcal{M}_1)}{p(\mathcal{M}_2)} \label{eq:PostRatio}
\end{equation}
\begin{equation}
\text{Posterior odds} = \text{Bayes factor} \times \text{prior odds}
\end{equation}
The Bayes factor is thus a measure of relative evidence, the comparison of the predictive performance of one model (\(\mathcal{M}_1\)) against another one (\(\mathcal{M}_2\)). This comparison (\(BF_{12}\)) is a ratio of marginal likelihoods:
\begin{equation}
BF_{12} = \frac{p(y \mid \mathcal{M}_1)}{p(y \mid \mathcal{M}_2)}
\end{equation}
\(BF_{12}\) indicates the evidence that the data provide for \(\mathcal{M}_1\) over \(\mathcal{M}_2\), or in other words, which of the two models is more likely to have generated the data, or the relative evidence that we have for \(\mathcal{M}_1\) over \(\mathcal{M}_2\). Under the assumption that all models are equally likely a priori, Bayes factor values larger than one indicate that \(\mathcal{M}_1\) is more compatible with the data, smaller than one indicate \(\mathcal{M}_2\) is more compatible with the data, and values close to one indicate that both models are equally compatible with the data.
Note that this model comparison does not depend on a specific parameter value. Instead, all possible prior parameter values are taken into account simultaneously.
Importantly, Bayes factors are a general way to compare models. When computing the Bayes factor between two point hypotheses, then the Bayes factor reduces to the ratio of posterior densities (after marginalizing out all other parameters not involved in the point hypothesis). When computing the Bayes factor for comparing two interval hypotheses, then the Bayes factor reduces to the ratio of posterior probabilities. Thus, Bayes factors are the general way of providing evidence for any hypothesis over another one in Bayesian data analysis.
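For the toy conjugate model used throughout, both marginal likelihoods, and hence the Bayes factor, can be computed exactly, since for any \(\mu^*\) the basic identity \(p(y) = p(y \mid \mu^*)\,p(\mu^*) / p(\mu^* \mid y)\) holds and the posterior is available in closed form. The sketch below (Python; made-up data, known \(\sigma\), assumed prior \(\mu \sim N(100, 10^2)\)) compares a point null \(\mathcal{M}_0: \mu = 100\) against the general model \(\mathcal{M}_1\) with \(\mu\) free:

```python
import math

def normal_pdf(x, m, s):
    return math.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))

y = [96.0, 104.0, 99.0, 101.0, 110.0]   # hypothetical data
n, ybar = len(y), sum(y) / len(y)
sigma = 15.0                            # known residual sd
mu0, tau = 100.0, 10.0                  # M1 prior: mu ~ N(100, 10^2)

def likelihood(mu):
    return math.prod(normal_pdf(y_n, mu, sigma) for y_n in y)

# M0: point hypothesis mu = 100 -> marginal likelihood is just the likelihood
ml_0 = likelihood(mu0)

# M1: mu free with prior N(mu0, tau^2). Conjugacy gives the posterior in
# closed form, so the marginal likelihood follows from the identity
#   p(y) = p(y | mu*) p(mu*) / p(mu* | y)   for any mu*.
post_prec = 1 / tau ** 2 + n / sigma ** 2
post_mean = (mu0 / tau ** 2 + n * ybar / sigma ** 2) / post_prec
post_sd = math.sqrt(1 / post_prec)
mu_star = post_mean
ml_1 = likelihood(mu_star) * normal_pdf(mu_star, mu0, tau) / normal_pdf(mu_star, post_mean, post_sd)

BF_10 = ml_1 / ml_0   # evidence for M1 (mu free) over M0 (mu = 100)
```

With this particular data set and prior, the Bayes factor comes out below 1: the point null predicts the observed sample mean slightly better than the diffuse alternative, even though the sample mean is not exactly 100. For realistic multilevel models no such closed form exists, which is why bridge sampling is needed.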
Note that the marginal likelihood is closely related to a quantity termed the prior predictive distribution. This addresses the important question of how it is possible to make predictions and sample artificial data \(\tilde{y}\) from a Bayesian model \(\mathcal{M}\). This can be done based on the prior predictive distribution:
\begin{equation}
p(\tilde{y} \mid \mathcal{M}) = \int p(\tilde{y} \mid \Theta, \mathcal{M}) p(\Theta \mid \mathcal{M}) d \Theta
\end{equation}
or written differently:
\begin{align*}
\tilde{\Theta} &\sim \pi(\Theta)
\\
\tilde{y} &\sim \pi(y | \tilde{\Theta})
\end{align*}
Note that this prior predictive distribution averages predictions across the observational model \(p(\tilde{y} \mid \Theta, \mathcal{M})\), weighted by the prior \(p(\Theta \mid \mathcal{M})\). The prior predictive distribution thus has the same form as the marginal likelihood, only evaluated at new data \(\tilde{y}\) rather than at the observed data \(y\).
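The two sampling statements above translate directly into code. A minimal sketch for a hypothetical normal model (\(\Theta \sim N(0,1)\), \(y_i \sim N(\Theta, 1)\); the model and values are illustrative assumptions, not from the text):

```python
import random

random.seed(1)

def prior_predictive(n_sims, n_obs=5):
    """Sample artificial data sets from the prior predictive distribution:
    first draw theta from the prior, then draw y given that theta."""
    data_sets = []
    for _ in range(n_sims):
        theta = random.gauss(0.0, 1.0)                        # theta ~ pi(theta)
        y = [random.gauss(theta, 1.0) for _ in range(n_obs)]  # y ~ pi(y | theta)
        data_sets.append(y)
    return data_sets

sims = prior_predictive(1000)
```

Averaging any summary of these simulated data sets approximates the corresponding prior predictive expectation.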
Conceptually, in Bayes factor analyses, the model is specified with the priors, before seeing the data to be analyzed. Based on these priors and the observational model, it is possible to compute prior predictions (i.e., predictive densities) for observed data. These prior model predictions are then evaluated using the observed data to yield the support that the data give to the model.
In other words, the marginal likelihoods quantify how compatible the observations are with the \emph{prior} predictions.
The prior predictive distribution is highly sensitive to the priors because it evaluates the likelihood of the observed data under prior assumptions. Note that Bayes factor analyses always investigate prior predictions. This stands in contrast to posterior predictions, which are usually evaluated using some kind of cross-validation. Both approaches are ``out-of-sample'' and are therefore valid approaches to investigating predictions.
Importantly, Bayes factors are even \emph{more} sensitive to prior assumptions than intra-model posterior distributions of the model parameters. The issue is that even if the posterior density of a model is hardly influenced by the prior assumptions (e.g., because there's enough data and a good experimental design), the marginal likelihoods and the Bayes factors can still be strongly influenced by the prior, because the models are compared under prior assumptions. Thus, defining priors is a central issue when using Bayes factors. Conceptually, the priors will determine how models will be compared.
In the present work, we will consider the case of nested model comparison, where a null model hypothesizes that a model parameter is zero or absent (a point hypothesis: \(p(\Theta_1 = 0 \mid y)\))\footnote{Note that the fact that we investigate Bayes factors for point null hypotheses doesn't mean we are advocating for point null hypotheses.}, whereas an alternative model hypothesizes that the model parameter is present and has some value different from zero that needs to be estimated from the data (a general hypothesis: \(p(\Theta_1 \neq 0 \mid y)\)). Bayes factors provide one way to generalize the likelihood ratio test beyond true point hypotheses. Note that Bayes factor analyses thus have the advantage (over frequentist analyses) that nuisance parameters (\(\Theta_2\)) can be integrated out.
\begin{equation}
BF_{10} = \frac{p(\Theta_1 \neq 0 \mid y)}{p(\Theta_1 = 0 \mid y)} = \frac{\int p(\Theta_1 \neq 0, \Theta_2 \mid y) d \Theta_2}{\int p(\Theta_1 = 0, \Theta_2 \mid y) d \Theta_2}
\end{equation}
Note, however, that Bayes factors do not only work for such nested hypotheses, but also extend to non-nested models.
For general hypotheses, Bayes factors provide the Bayesian way of quantifying evidence in favor of one model over another, where evidence can be written as \(p(y \mid \mathcal{M})\). Prior model probabilities \(p(\mathcal{M})\) reflect the probabilities of each of the models before seeing the data. Bayes factors allow us to compute the posterior probabilities of the models, i.e., \(p(\mathcal{M} \mid y)\), which reflect the probability of the model given the prior probabilities of the models and the data.
The interpretation of posterior model probabilities relies on the assumption that the true model is among the models being compared (this is often called the \(\mathcal{M}\)-closed assumption). If the true model is not among the investigated models, the posterior cannot be interpreted as a ``probability of truth''. Instead, Bayes factors quantify only how compatible each prior predictive distribution is with the observed data.
Bayes factors have important advantages over frequentist analyses. Bayes factors are immediately applicable to the comparison of any set of well-defined hypotheses, whereas frequentist comparisons often have to be developed bespoke for each particular comparison, and common frequentist methods limit one to only a few possible comparisons.
As we saw above, the common frequentist approach of using likelihood ratio tests to quantify evidence for competing hypotheses depends on the best point estimate (i.e., the MLE). If this best estimate is not very representative of the plausible values of the model parameter(s), Bayes factors will be superior to the likelihood ratio test. One could, of course, also reduce a Bayesian hypothesis test to comparing single point values against each other; however, it is much better to integrate over the parameter space before taking the ratio, which is exactly what the Bayes factor does.
Note that Bayes factors quantify Bayesian evidence when comparing two models with each other. However, posterior model probabilities can also be computed for the more general case, where two models or more than two models are considered:
\begin{equation}
p(\mathcal{M}_1 \mid y) = \frac{p(y \mid \mathcal{M}_1) p(\mathcal{M}_1)}{\sum_i p(y \mid \mathcal{M}_i) p(\mathcal{M}_i)}
\end{equation}
For simplicity, we here mostly constrain ourselves to two models. (Note that the prior sensitivity analyses we study below are comparing evidence between many models.)
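The equation above can be sketched directly in code. This helper works for any number of models; the input values are hypothetical log marginal likelihoods, and working on the log scale with a max-subtraction is a standard numerical-stability choice:

```python
import math

def posterior_model_probs(log_marginals, prior_probs):
    """Compute p(M_i | y) from log marginal likelihoods log p(y | M_i)
    and prior model probabilities p(M_i)."""
    m = max(log_marginals)  # subtract the max for numerical stability
    weights = [math.exp(lm - m) * p for lm, p in zip(log_marginals, prior_probs)]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical log marginal likelihoods for two equally likely models:
probs = posterior_model_probs([-10.0, -12.0], [0.5, 0.5])
```

With equal priors the posterior odds equal the Bayes factor, here \(e^{2} \approx 7.4\) in favor of the first model.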
\hypertarget{occams-razor}{%
\subsubsection{Occam's razor}\label{occams-razor}}
The marginal likelihood can only be interpreted relative to another marginal likelihood (evaluated at the same \(y\)). Thus, we can only obtain \emph{relative} evidence for one model over another model (which is what the Bayes factor quantifies), or over a set of other models: Bayes factors always express relative evidence.
\begin{figure}
{\centering \includegraphics{OccamFactor1_MB}
}
\caption{Shown are the schematic marginal likelihood functions, p(Data|Model), that each of three models assigns to different possible data (left panel), and the marginal likelihoods evaluated at the observed data y (right panel). The total probability each model assigns to the data is equal to one, i.e., the areas under the curves of all three models are the same. Model 1 assigns all probability to a narrow range of data, and can predict these data with high probability density (low complexity model). Model 3 assigns its probability to a large range of different possible outcomes, but predicts each individual data set with low probability density (high complexity model). Model 2 takes an intermediate position (intermediate complexity). Left panel: The vertical dashed line illustrates where the empirically observed data fall. The data most support model 2, since this model predicts the data with the highest likelihood. In other words, model 2 has enough complexity to capture the structure of the observed data. The figure follows Figure 3.13 in Bishop (2006).}\label{fig:OccamFactor}
\end{figure}
Importantly, one would prefer a model that gives a higher marginal likelihood, i.e., a higher likelihood of observing the data after integrating out the influence of the model parameter(s) (here: \(\Theta\)). A model will yield a high marginal likelihood if it makes a high proportion of good prior predictions (i.e., model 2 in Fig.~\ref{fig:OccamFactor}; Figure adapted from Bishop, 2006).
Models that are too flexible (Fig.~\ref{fig:OccamFactor}, model 3) will divide their prior predictive probability density across all of their predictions. They can predict many different outcomes, and thus they can likely also predict the actually observed outcome. However, due to the normalization, they cannot predict it with high probability, because they also predict all kinds of other outcomes. This is true both for models with priors that are too wide and for models with too many parameters. Bayesian model comparison automatically penalizes such complex models; this is called ``Occam's razor''.
By contrast, good models (Fig.~\ref{fig:OccamFactor}, model 2) will make very specific predictions, where the specific predictions are consistent with the observed data. Here, all the predictive probability density is at the ``location'' where the observed data fall, and little probability density is located at other places, providing good support for the model. Of course, specific predictions can also be wrong, when expectations differ from what the observed data actually look like (Fig.~\ref{fig:OccamFactor}, model 1).
Note that having a natural Occam's razor is good for posterior inference, i.e., for assessing how much (continuous) evidence there is for one model or another. However, it doesn't necessarily imply good decision making or hypothesis testing, i.e., to make discrete decisions about which model explains the data best, or on which model to base further actions. We will discuss such discrete decisions further below (see section ``Selecting between hypotheses'').
\hypertarget{bayes-factor-scale}{%
\subsubsection{Bayes factor scale}\label{bayes-factor-scale}}
For the Bayes factor, a scale (see Table~\ref{tab:BFs}) has been proposed for interpreting Bayes factors according to the strength of the change in evidence in favor of one model (corresponding to some hypothesis) over another (Jeffreys, 1939), but this scale should not be regarded as a hard and fast rule with clear boundaries.
\begin{longtable}[]{@{}rl@{}}
\caption{\label{tab:BFs} Bayes factor scale as proposed by Jeffreys (1939)}\tabularnewline
\toprule
\begin{minipage}[b]{0.47\columnwidth}\raggedleft
\(BF_{12}\)\strut
\end{minipage} & \begin{minipage}[b]{0.47\columnwidth}\raggedright
Interpretation\strut
\end{minipage}\tabularnewline
\midrule
\endfirsthead
\toprule
\begin{minipage}[b]{0.47\columnwidth}\raggedleft
\(BF_{12}\)\strut
\end{minipage} & \begin{minipage}[b]{0.47\columnwidth}\raggedright
Interpretation\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(>100\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Extreme change in evidence towards \(\mathcal{M}_1\).\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(30-100\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Very strong change in evidence towards \(\mathcal{M}_1\).\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(10-30\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Strong change in evidence towards \(\mathcal{M}_1\).\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(3-10\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Moderate change in evidence towards \(\mathcal{M}_1\).\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(1-3\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Anecdotal change in evidence towards \(\mathcal{M}_1\).\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(1\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
No change in evidence.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(1-\frac{1}{3}\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Anecdotal change in evidence towards \(\mathcal{M}_2\).\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(\frac{1}{3}-\frac{1}{10}\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Moderate change in evidence towards \(\mathcal{M}_2\).\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(\frac{1}{10}-\frac{1}{30}\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Strong change in evidence towards \(\mathcal{M}_2\).\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(\frac{1}{30}-\frac{1}{100}\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Very strong change in evidence towards \(\mathcal{M}_2\).\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedleft
\(<\frac{1}{100}\)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
Extreme change in evidence towards \(\mathcal{M}_2\).\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
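The scale in Table~\ref{tab:BFs} can be encoded as a small helper function. How exactly the boundary values are assigned is a convention choice; this sketch assigns them to the weaker category:

```python
def jeffreys_label(bf12):
    """Map a Bayes factor BF12 to Jeffreys' descriptive category."""
    if bf12 == 1:
        return "no change in evidence"
    if bf12 < 1:  # evidence towards M2: classify the reciprocal, then relabel
        return jeffreys_label(1 / bf12).replace("M1", "M2")
    for cut, label in [(100, "extreme"), (30, "very strong"),
                       (10, "strong"), (3, "moderate"), (1, "anecdotal")]:
        if bf12 > cut:
            return label + " change in evidence towards M1"
    return "no change in evidence"
```

For example, a Bayes factor of 15 maps to ``strong change in evidence towards M1'', and 1/15 to the corresponding category towards \(\mathcal{M}_2\).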
\hypertarget{implementation-of-bayes-factors}{%
\subsubsection{Implementation of Bayes factors}\label{implementation-of-bayes-factors}}
One question now is how to apply the Bayes factor method to models that we care about, i.e., models that represent more realistic data analysis situations that frequently occur in psycholinguistics, cognitive science, and other fields of research. In psycholinguistics and psychology, we typically fit fairly complex hierarchical models with many variance components. The major problem is that we won't be able to calculate the marginal likelihood for hierarchical models (or any other complex model) analytically. There are two very common methods for calculating the Bayes factor for complex models: the Savage--Dickey density ratio method (Dickey et al., 1970) and bridge sampling (Bennett, 1976; Meng \& Wong, 1996). The Savage--Dickey density ratio method is a straightforward way to compute a Bayes factor estimator, but it is limited to nested models. See Wagenmakers et al. (2010) for a complete tutorial. Note that the Savage--Dickey method can be unstable, especially in cases where the posterior is far away from zero. We will revisit this instability later.
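For intuition, the Savage--Dickey ratio can be written down exactly in a conjugate toy model (\(\Theta \sim N(0,1)\), \(y_i \sim N(\Theta,1)\); hypothetical data), where the posterior density at zero is available in closed form. In realistic models, that density would instead be estimated from MCMC draws, which is the source of the instability mentioned above:

```python
import math

# Hypothetical data for the toy model theta ~ N(0, 1), y_i ~ N(theta, 1).
y = [0.3, -0.1, 0.5, 0.2]
n, ybar = len(y), sum(y) / len(y)

# Conjugate posterior: theta | y ~ N(mu_n, sd_n^2)
sd_n = (1 + n) ** -0.5
mu_n = sd_n**2 * n * ybar

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Savage-Dickey: BF01 = posterior density at theta = 0 / prior density at theta = 0
bf01 = normal_pdf(0.0, mu_n, sd_n) / normal_pdf(0.0, 0.0, 1.0)
```

For these data, \(BF_{01} \approx 2.1\): mild evidence for the point null.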
Bridge sampling is a much more powerful method. This approach involves approximations of the marginal likelihoods. However, Bayes factor estimates based on bridge sampling can be unstable when based on models with too low effective sample size.\footnote{Posterior MCMC draws are correlated, and depending on the correlation a sample of a given size might contain more or less information. Therefore, ``effective sample size'' is corrected for the autocorrelation and provides an estimate of how much information is contained within the Markov chain relative to the number of independent samples (Vehtari et al., 2020).}
However, estimates of effective sample size are quantity specific (Betancourt, 2020a), and an effective sample size estimate for the posterior mean may not say anything about the effective sample size relevant for the bridge sampling estimate. So even a high effective sample size for the (unnormalized) posterior density may not yield stable bridge sampling estimators, because the effective sample size may still be low for the (unnormalized) likelihood function.
Indeed, bridge sampling relies on posterior densities and requires many more (effective) posterior samples than what is normally required for parameter estimation; see Gronau et al. (2017b) for a general tutorial, and Gronau et al. (2020) for a tutorial using the R package \texttt{bridgesampling}.
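The quantity that bridge sampling approximates can be illustrated with a deliberately simple estimator (naive Monte Carlo over the prior, not bridge sampling) in a conjugate toy model where the true marginal likelihood is known in closed form, so the estimate can be checked; model and data are hypothetical:

```python
import math
import random

random.seed(3)

# Toy conjugate model: theta ~ N(0, 1), y_i ~ N(theta, 1); hypothetical data.
y = [0.4, 1.2, 0.8]

def log_lik(theta):
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (yi - theta) ** 2 for yi in y)

# Naive Monte Carlo estimate of the marginal likelihood:
# p(y) = E_{theta ~ prior}[ p(y | theta) ]
draws = [random.gauss(0.0, 1.0) for _ in range(100_000)]
estimate = sum(math.exp(log_lik(t)) for t in draws) / len(draws)

# Closed-form marginal likelihood for this conjugate model, for comparison:
n, s1, s2 = len(y), sum(y), sum(yi**2 for yi in y)
exact = (2 * math.pi) ** (-n / 2) * (n + 1) ** -0.5 \
    * math.exp(-0.5 * (s2 - s1**2 / (n + 1)))
```

Bridge sampling targets the same quantity but combines draws from the posterior and a proposal distribution, which scales far better than this naive estimator as the number of parameters grows.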
Importantly, even when Bayes factor estimates based on bridge sampling are computed in a stable way (i.e., with stability over different sets of MCMC draws), it is unclear whether the estimates are unbiased for the kinds of (multilevel) models that we care about. Bridge sampling doesn't only have a problem with low effective sample size. To understand these problems, it is useful to discuss the typical set, which is the ``set containing the bulk of the posterior probability mass'' (Gabry, Simpson, Vehtari, Betancourt, \& Gelman, 2019, pp. 394--395).
MCMC explores the typical set and uses that exploration to estimate expectation values of functions of the parameters. When the algorithm enjoys a \emph{central limit theorem}, that exploration is effective, and the error in an estimator is determined by how much of the variation of the corresponding function is contained within the typical set. Bayes factors, however, are given by the posterior expectation of the reciprocal likelihood function, which usually varies most at extreme values far away from the typical set; even under ideal conditions, the MCMC estimators for these expectations can suffer from large errors.
Therefore, calibrations (Betancourt, 2019) are needed to test whether Bayes factor estimates correspond to the true Bayes factor in a given application. We will discuss this issue and perform such calibrations below.
\hypertarget{selecting-between-hypotheses}{%
\subsection{Selecting between hypotheses}\label{selecting-between-hypotheses}}
Importantly, Bayes factors (and posterior model probabilities) tell us how much evidence the data provide in favor of one model or another. That is, they allow us to perform inference on the model space, i.e., to determine how compatible each hypothesis is with the data.
Based on this evidence, it is also possible to make decisions about selecting one hypothesis or the other, e.g., to declare discovery based on a Bayes factor analysis. Note, however, that such discrete decision making is a completely different issue. Several heuristics have been proposed for how such decisions can be made. For example, Table~\ref{tab:BFs} shows how to put continuous evidence into discrete categories, and these categories could be used for decision making. One common heuristic sometimes used in basic research is to treat Bayes factors that are larger than 10 (or smaller than 1/10) as grounds to declare a discovery. Another heuristic that is often used in machine learning is to select the model with the highest posterior probability.
Importantly, these are just heuristics for deriving decisions, and they are not principled ways of how to derive decisions from evidence. A principled way to obtain decisions from evidence is to explicitly define utility functions. Utilities specify the values of possible actions (i.e., consequences of decisions) if certain hypotheses are in fact true. Thus, one could ask: what is the value of declaring discovery correctly or incorrectly? And what is the value of not declaring discovery correctly or incorrectly? Based on such reasoning about utilities, one can ask the question: which hypothesis should one choose to maximize utility?
For example, frequentist null hypothesis significance testing (NHST) considers utilities in the form of the cost of false discoveries and the benefit of true discoveries, and then constructs a decision making process that bounds the worst case utility, at least when the assumptions hold.
Thus, while Bayes factors have a clear rationale and justification in terms of the (continuous) evidence they provide, utility functions are needed to map such evidence to actions, i.e., to perform decisions based on them.
\hypertarget{bayesian-decision-making-processes}{%
\subsubsection{Bayesian Decision Making Processes}\label{bayesian-decision-making-processes}}
Bayesian inferences are continuous in nature and do not by themselves provide discrete results. To make decisions in Bayesian analyses, it is therefore necessary to implement Bayesian decision making processes (Gelman et al., 2013; Robert, 2007), which convert inferential information, such as the continuous Bayes factor or continuous posterior model probabilities, into discrete decisions.
However, there are two important caveats associated with discrete decisions: first, in practice, we often work with estimators of Bayes factors rather than with true Bayes factors (see Fig.~\ref{fig:FigureBF}). Such estimators can be noisy (we will illustrate this below). If the estimation error is not zero then the estimator will influence the decision making process in addition to the posterior distribution. This highlights that it is crucial to calibrate the Bayes factor estimator (we discuss this below) to make sure that the practical implementation of the Bayes factor estimator works appropriately.
As the second caveat, because the inferential information varies with observations (we will discuss this in detail below), so too will the decisions. Thus, random noise in the data can lead to very different inferences, and thus to very different decisions, simply based on chance.
\begin{figure}
{\centering \includegraphics{OccamFactor2_MB}
}
\caption{Shown are the schematic marginal likelihood functions, p(Data|Model), that each of three models assigns to different possible data. The vertical dashed lines (one per panel) indicate artificial data simulated from model 2; the three panels illustrate three such sampled data points. Although the data are simulated from model 2, some data sets still support model 1, model 2, or model 3.}\label{fig:OccamFactor2}
\end{figure}
Such random fluctuation in the data is illustrated in Figure~\ref{fig:OccamFactor2}. The figure shows long vertical dashed lines (one in each panel), which illustrate data that are simulated based on model 2. It is clear that some of these simulated data points again support model 2. However, some simulated data points fall at higher or lower locations, and end up providing support for a different model, i.e., models 1 and 3. This illustrates that making decisions about hypotheses based on observed data can be premature, and can lead to the kind of errors we have discussed in the previous section (e.g., deciding for a false model, be it too simple, too complex, or simply different, that in fact did not generate the data). Here, we quantify the variability that is inherent in artificial and actual replications of the same data sets, and find that a single data set of conventional size from a fairly standard cognitive experimental design might not contain sufficient information to support clear inferences or even decisions on hypotheses.
Robust decision making requires sufficiently good experimental design to reduce the variation of the inferences, and hence the decisions, as much as possible. At the very least we have to quantify that variation to understand how stable a decision making process will actually be.
Indeed, because of the variation inherent in decisions, often making \emph{no} decision may actually be the best approach! If one just reports the inferences (i.e., Bayes factors), others can make their own decisions using their own utility functions in combination with the full information in the reported inference.
\hypertarget{utility-functions}{%
\subsubsection{Utility functions}\label{utility-functions}}
To perform decisions based on Bayesian analyses, utility functions are needed. The utility of different possible actions, that is, the value of the consequences when accepting and acting based on one hypothesis or another, can differ quite dramatically in different situations. For example, for a researcher trying to implement a life-saving therapy, falsely rejecting this new therapy could have high negative utility (negative utility is loss), whereas falsely adopting the new therapy may have little negative consequences. By contrast, falsely claiming a new discovery in fundamental research may have bad consequences (high loss), whereas falsely missing a new discovery claim may be less problematic if further evidence can be accumulated. Thus the performance of decision making procedures can be determined only in the context of utility functions appropriate to a given analysis.
A decision process has different possible outcomes, for which it is possible to assign different utilities. For example, in the cognitive sciences, when deciding to claim a discovery or not, different situations with different utilities can occur. First, if one claims a discovery based on a (Bayesian) decision-process, this can yield a true discovery (TD), which would have positive value, e.g., a utility of \(U_{TD} = 1\). However, a discovery claim can also be false (FD), yielding a possibly negative utility of \(U_{FD} = -1.5\). Second, an alternative outcome of a decision-making process is to not claim discovery, but to reject it. Again, this can be a true rejection (TR) of a discovery, which may have positive utility (e.g., \(U_{TR} = 0.5\)). However, the rejection of a discovery can also be false (i.e., missing a true new discovery), which might have a negative utility (e.g., of \(U_{FR} = -0.5\)). Note that the utilities that we chose here are arbitrary, and other values could be chosen as well.
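Given these utilities, the decision that maximizes expected utility follows directly from the posterior probability of the alternative hypothesis. A minimal sketch, using exactly the (arbitrary) utility values stated above:

```python
# Utilities from the text: U_TD = 1, U_FD = -1.5, U_TR = 0.5, U_FR = -0.5.
U_TD, U_FD, U_TR, U_FR = 1.0, -1.5, 0.5, -0.5

def decide(p_h1):
    """Pick the action with the higher expected utility, given the
    posterior probability p_h1 that the alternative hypothesis is true."""
    u_claim = p_h1 * U_TD + (1 - p_h1) * U_FD    # claim a discovery
    u_reject = (1 - p_h1) * U_TR + p_h1 * U_FR   # reject the discovery
    return "claim discovery" if u_claim > u_reject else "reject"
```

With these particular utilities, claiming a discovery is optimal whenever \(p(H_1 \mid y) > 4/7 \approx 0.57\); changing the utilities shifts this threshold.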
In the cognitive sciences, decision making might, in general, be premature. If we can't construct useful utilities then we probably shouldn't be trying to make decisions. Reporting inferences directly and avoiding discovery claims avoids having to worry about utility functions.
One research goal in the cognitive science would be to develop a procedure of how such utilities can be conceived in a way that is not arbitrary, but theoretically motivated. If such utilities are available, this can support Bayesian decision making. We will illustrate this in an example below.
\hypertarget{calibration-methods}{%
\subsubsection{Calibration methods}\label{calibration-methods}}
Decisions based on Bayes factors, and the estimation error of the Bayes factors themselves, can vary with the observed data. Therefore, we need to quantify that data variation, or calibrate the Bayes factor method relative to the assumed model, if we want to use this method responsibly. Conveniently, we can implement this calibration by observing how the Bayes factor outcomes vary across prior predictive simulations.
Calibrations over different data sets (Betancourt, 2019) are thus needed to investigate the properties of Bayes factor estimates (marginal likelihoods), i.e., to test whether Bayes factor estimates correspond to the true Bayes factor for a given study. They are also needed to understand the properties of Bayesian decision-making procedures.
To investigate Bayes factor estimates with simulation-based calibration (SBC), one can simulate artificial data from several observational models: in each simulation run, we simulate data from one of several models, where the probability of each model is specified by a prior over model space.
Then, it is possible to estimate marginal likelihoods based on the simulated data, which can then be used to estimate Bayes factors and posterior model probabilities. When this is done many times, then it is possible to test whether the posterior model probabilities on average correspond to the true data generating process. Moreover, it is possible to check whether on average the inference that Bayes factors support is correct.
In the previous section, we have introduced utility functions to quantify the values of actions taken based on decision-making processes. However, decision making procedures will vary depending on the data, and may perform well or badly (in terms of utilities) depending on what the data look like. The problem, of course, is that before running a study we do not know what the data will look like and what the possible outcomes of a study will be. Therefore, we need to quantify how those utilities can vary across different possible data sets. Determining this is the goal of calibration studies (Betancourt, 2018). These can be implemented using artificial data simulations, where we simulate data based on some priors and models, such that we know which model or hypothesis was true in the data simulation. We can then run a Bayesian decision procedure on the simulated data and summarize the results in terms of their average utilities. For example, we can summarize false positives with false positive \emph{rates} that quantify how often an observation informs a false positive decision, and we can compute the utilities associated with such false positive rates.
For example, we can look at simulated data sets where the true model (i.e., the model sampled from the model prior) corresponds to a null hypothesis (H0), perform decisions based on the Bayesian evidence, and obtain the false discovery rate (FDR), i.e., how often the Bayes factor supports an alternative model (H1) when in fact the null hypothesis (H0) is true. This is a Bayesian equivalent of frequentist type I (\(\alpha\)) errors. Likewise, we can look at the simulated data sets where the true model (i.e., the model sampled from the prior) corresponds to an alternative hypothesis (H1), compute Bayesian evidence, perform decisions, and obtain the true discovery rate (TDR), i.e., the probability to choose the alternative hypothesis (H1) when it is actually true. This is a Bayesian equivalent to frequentist power analyses. When combining decisions with utilities, we can then obtain the average utility of a given decision-rule.
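The FDR and TDR described above can be sketched for a toy binomial example (H0: \(\theta = 0.5\) vs.\ H1: \(\theta \sim\) Uniform(0,1)), where the Bayes factor is available in closed form; the sample size, number of simulations, and decision rule \(BF_{10} > 10\) are illustrative choices:

```python
import random
from math import comb

random.seed(4)

def bf10(k, n):
    # Exact Bayes factor: uniform-prior marginal 1/(n+1) vs point null theta = 0.5.
    return (1 / (n + 1)) / (comb(n, k) * 0.5**n)

n_sims, n = 2000, 50
claims_h0, claims_h1 = [], []
for _ in range(n_sims):
    h1_is_true = random.random() < 0.5            # model prior: p(H0) = p(H1) = 0.5
    theta = random.random() if h1_is_true else 0.5
    k = sum(random.random() < theta for _ in range(n))
    claim = bf10(k, n) > 10                       # decision rule: BF10 > 10
    (claims_h1 if h1_is_true else claims_h0).append(claim)

fdr = sum(claims_h0) / len(claims_h0)  # discovery claims when H0 was true
tdr = sum(claims_h1) / len(claims_h1)  # discovery claims when H1 was true
```

With these settings the FDR is very small while the TDR is only moderate; varying the sample size, the decision threshold, or the utilities turns such a simulation into a design analysis.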
We will perform calibration studies here to investigate the accuracy of Bayes factor estimates and to investigate utilities of Bayesian decision-making procedures.
\hypertarget{simulation-based-calibration-and-calibrating-decisions}{%
\subsubsection{Simulation-based calibration and calibrating decisions}\label{simulation-based-calibration-and-calibrating-decisions}}
An important point about approximate computations of Bayes factor estimates using bridge sampling is that there are no strong guarantees for their accuracy. That is, even if we can show that the approximated Bayes factor estimate using bridge sampling is stable across different MCMC draws and across different starting values for the bridge sampling, even then it remains unclear how close the approximated Bayes factor is to the true Bayes factor. Bridge sampling is a form of density estimation. Technically, bridge sampling estimators can be written as a product of expectation values, although those expectation values are particularly hard to estimate with MCMC. In principle, it could very well be that the stably estimated Bayes factors based on bridge sampling are in fact biased, i.e., that they do not yield the correct (true) Bayes factor, but some biased approximation to it. The technique of simulation-based calibration (SBC; Talts, Betancourt, Simpson, Vehtari, \& Gelman, 2018; Betancourt, 2020b; Schad, Betancourt, \& Vasishth, 2021) can be used to investigate this question.
In SBC, the priors are used to simulate artificial data. Then, posterior inference is done on the artificial, simulated data, and the data-averaged posterior can be compared to the prior.
Any differences between the average posterior and the prior are due to errors in the computation and thus indicate a problem with inference.
By contrast, if the data-averaged posterior is equal to the prior, then this is consistent with accurate computations (caution: this consistency condition holds only for the average posterior over prior predictive simulations; we have no guarantees on how any individual posterior distribution will behave in these simulations, let alone for observed data; thus, this statement does not apply to Bayesian inference on a single data set, where a prior is used to infer a posterior distribution, but is specific to SBC). We can formulate SBC for model inference, where \(\mathcal{M}\) is a true model used to simulate artificial data \(y\), and \(\mathcal{M}'\) is a model inferred from the simulated data.
\begin{equation}
p(\mathcal{M}') = \int \int p(\mathcal{M}' \mid y) p(y \mid \mathcal{M}) p(\mathcal{M}) \; \mathrm{d} y \mathrm{d} \mathcal{M} \label{eq:SBC}
\end{equation}
Critically, if SBC does not show a difference between the average posterior (i.e., the left-hand side of equation~\eqref{eq:SBC}) and the prior, then this does not guarantee that the computation for every posterior will necessarily be good; it is a necessary condition but not a sufficient one.
Applied to Bayes factor analyses, we define a prior on the model space, e.g., we can define the prior probabilities for a null and an alternative model, specifying how likely each model is a priori. From these priors, we can randomly draw one hypothesis (model), e.g., \(n_{sim} = 500\) times. Thus, in each of \(500\) draws we randomly choose one model (either H0 or H1), with the probabilities given by the model priors. For each draw, we first sample model parameters from their prior distributions, and then use these sampled model parameters to simulate artificial data. For each simulated artificial data set, we can then compute marginal likelihoods and Bayes factors (between the models H1 and H0) using bridge sampling, and we can then compute the posterior probabilities for each hypothesis (i.e., how likely each model is a posteriori) using the true prior model probabilities. As the last and critical step in SBC, we can then compare the posterior model probabilities to the prior model probabilities. A key result in SBC is that if the computation of marginal likelihoods and model posteriors is performed accurately by the bridge sampling procedure, i.e., without bias, such that the approximate Bayes factor estimate corresponds to the true Bayes factor, then the data-averaged posterior model probabilities should be the same as the prior model probabilities. We show the concrete steps of simulation-based calibration in an example R analysis below.
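The logic of equation~\eqref{eq:SBC} can be checked numerically. The following Python sketch runs SBC for a deliberately simple pair of models, H0: a fair coin versus H1: a coin with a uniform prior on its bias, for which the marginal likelihoods, and hence the exact posterior model probabilities, are available in closed form (the toy models are our own illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n_sim, n = 20_000, 10                     # SBC runs, coin flips per run
prior_h1 = 0.5

# step 1: draw the true model, its parameter, and data from the prior
is_h1 = rng.random(n_sim) < prior_h1
p = np.where(is_h1, rng.random(n_sim), 0.5)
k = rng.binomial(n, p)                    # number of heads per run

# step 2: exact marginal likelihoods of the observed sequence
# H0: p = 0.5 fixed; H1: p ~ Uniform(0, 1) integrates to a Beta function
ml0 = 0.5 ** n
ml1 = np.array([math.factorial(j) * math.factorial(n - j)
                / math.factorial(n + 1) for j in range(n + 1)])[k]

# step 3: posterior model probabilities via Bayes' rule
post_h1 = ml1 * prior_h1 / (ml1 * prior_h1 + ml0 * (1 - prior_h1))

# step 4: the data-averaged posterior should recover the prior
avg_post_h1 = post_h1.mean()
```

With accurate computation, \texttt{avg\_post\_h1} lies within Monte Carlo error of the prior value 0.5; a systematic deviation would flag a biased marginal likelihood computation, which is exactly what SBC tests for the bridge sampling estimates below.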
Conveniently, those same simulations can also be used to calibrate inferences, such as how variable the Bayes factor is, or decision-making processes, such as selection between models with the Bayes factor.
Thus, one can ask how sensitive the Bayes factor is in detecting the more appropriate model given the data. Moreover, the same simulations from the SBC can also be used to determine true discovery rates (TDR) and false discovery rates (FDR).
Based on these same simulations, it is also possible to calibrate any decision making process based on Bayes factors. That is, for the specific set of simulations from the SBC, one can specify utilities for different actions.
The immediate heuristic for turning Bayes factors into decisions is a threshold (e.g., of \(BF_{10} = 10\)). However, the specific threshold value has no canonical value. By calibrating the consequences of different threshold values, however, one can identify the threshold value best suited to a particular analysis.
Thus, the calibrations allow one to determine the overall utility as a function of the threshold value used to determine the decisions. It is then possible to optimize the threshold value (or cut-off criterion) to yield optimal total utility. This procedure may even compensate for limitations of the experimental design. We will illustrate an example analysis for this below.
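As a sketch of such a calibration, the following Python snippet simulates Bayes factors from a toy coin-flip comparison (H1: uniform coin bias versus H0: fair coin) and scans candidate thresholds for the one with the highest average utility. The utilities (+1 for a correct discovery claim, \(-4\) for a false one, 0 for withholding judgement) are our own hypothetical illustration:

```python
import math
import numpy as np

rng = np.random.default_rng(11)
n_sim, n = 20_000, 10
truth_h1 = rng.random(n_sim) < 0.5        # which model generated each data set
p = np.where(truth_h1, rng.random(n_sim), 0.5)
k = rng.binomial(n, p)

# exact Bayes factors BF10 for the toy models (H1: uniform bias, H0: fair coin)
ml1 = np.array([math.factorial(j) * math.factorial(n - j)
                / math.factorial(n + 1) for j in range(n + 1)])[k]
bf10 = ml1 / 0.5 ** n

# hypothetical utilities: +1 correct discovery, -4 false discovery, 0 no claim
def mean_utility(threshold):
    claim = bf10 > threshold
    return np.where(claim, np.where(truth_h1, 1.0, -4.0), 0.0).mean()

thresholds = [1, 3, 6, 10, 30, 100]
utilities = {t: mean_utility(t) for t in thresholds}
best = max(utilities, key=utilities.get)
```

With these utilities, a lenient threshold of 1 makes too many false discovery claims and an extreme threshold makes none, so an intermediate cut-off maximizes average utility; the optimal value depends entirely on the assumed utilities and on the design.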
\hypertarget{bayes-factor-workflow}{%
\section{Bayes factor workflow}\label{bayes-factor-workflow}}
In the coming section on ``Misbehaving Bayes factors'', we discuss various potential problems associated with Bayes factor analyses. We outline a Bayes factor workflow to investigate these potential problems for a concrete analysis. These problems can largely be investigated using one set of artificial data simulations in the context of simulation-based calibration (SBC; Talts et al., 2018; Betancourt, 2020b; Schad et al., 2021). Consequently, we can integrate a set of analyses (that we illustrate below) into a coherent workflow for determining when Bayes factors are robust in any given application. For this workflow, we define the following steps:
\begin{enumerate}
\item Define the observational model
\item Define the prior (prior model probabilities and prior parameter distributions), ideally verified with prior pushforward and prior predictive checks
\item Fit the model and estimate Bayes factors using bridge sampling on the same empirical data set multiple times (at least twice) to investigate whether the number of MCMC draws is sufficient to obtain stable Bayes factor estimates
\item Run SBC to check whether Bayes factors are computed accurately
\item Use simulations to investigate data variability of Bayesian inferences to support realistic expectations concerning their reliability
\item If SBC supports accurate and reliable Bayes factor estimation, then one can use the Bayes factors obtained for the empirical data to support Bayesian inferences, otherwise, improve experimental design or acknowledge limitations
\end{enumerate}
In many cases, having valid Bayes factor estimates will be sufficient, since an important goal in cognitive science is to provide evidence in support of scientific hypotheses. This evidence is continuous in nature and thus reporting continuous evidence in scientific papers, without making discrete decisions, would be a natural approach. This is especially important given the large data variability inherent in evidence quantification (see below for illustrations), which often makes discrete decisions seem premature based on individual data sets. However, if discrete decisions are needed, for example, in order to make a discrete discovery claim, then the workflow can be expanded with the following steps:
\begin{enumerate}
\item If one wants to make a decision, e.g., on a discovery claim, then one can define utility functions, i.e., the utility for each action given each truth
\item Use simulations to optimize the discovery threshold
\item Use simulations to investigate data variability of decisions (false and true discovery rates)
\item Make a decision on discovery using an optimized discovery threshold
\end{enumerate}
We consider this workflow, in particular conducting SBC, to be the ideal way to approach Bayes factor analyses. However, we acknowledge that it takes a lot of time and computational resources to run this workflow for realistic research problems. It may therefore be difficult in research practice to implement this ideal workflow in every single analysis that one runs. We therefore suggest implementing this workflow once for a given research program, where different models and experimental designs may be similar to each other.
Based on this definition of the Bayes factor workflow, we now discuss in detail the problems and questions that motivate the workflow.
\hypertarget{misbehaving-bayes-factors}{%
\section{Misbehaving Bayes Factors}\label{misbehaving-bayes-factors}}
Bayes factors are a useful tool for quantifying relative Bayesian support for different models of the data, and they can be used to derive decisions based on the Bayesian evidence. However, there are several problems associated with Bayes factor analyses. First, Bayes factor estimates can exhibit estimation error, because they are unstable across MCMC draws and because the estimation may be inaccurate (for reasons not related to the imprecision caused by a finite number of MCMC draws) and thus not correspond to the true value. Second, Bayes factor estimates, as any other form of evidence quantification, strongly depend on the particular data set, and can thus vary strongly with noise in the data. Third, Bayes factors can support poor decision-making, either because simple decision heuristics perform badly with respect to relevant utility functions, or because the data variability of Bayes factors leads to highly variable decision outcomes. In the following, we discuss these issues in detail. Moreover, we use the analysis of these difficulties to formulate a Bayes factor workflow that can be used to validate robust inference for specific data sets.
\hypertarget{EstimationError}{%
\subsection{Estimation error}\label{EstimationError}}
Two questions that we investigate here are how stable estimates of Bayes factors are when they are computed from different MCMC chains and with different starting values for the bridge sampler, and how accurate the estimates of Bayes factors are relative to the true Bayes factor.
\hypertarget{SBC1}{%
\subsubsection{Simulation-based calibration: Recovering the prior from the data}\label{SBC1}}
An important point about approximate computations of Bayes factor estimates (using bridge sampling) is that we do not know whether Bayes factor estimates are unbiased, i.e., whether the estimates correspond to the true Bayes factor. Here, we use the technique of simulation-based calibration (SBC; Talts et al., 2018; Betancourt, 2020b; Schad et al., 2021) to investigate this question, and we perform one example analysis in R.
First, we create an (artificial) experimental design. We use the R package \texttt{designr} (Rabe, Kliegl, \& Schad, 2021) to create the experimental design with a within-subject factor \texttt{x} with two levels (using sum coding with -1 and +1) and 15 subjects. Each condition (-1/+1) is measured twice per subject (this is what the \texttt{replications=2} argument does).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{design \textless{}{-}}\StringTok{ }\KeywordTok{fixed.factor}\NormalTok{(}\StringTok{"x"}\NormalTok{, }\DataTypeTok{levels=}\KeywordTok{c}\NormalTok{(}\StringTok{"{-}1"}\NormalTok{, }\StringTok{"1"}\NormalTok{), }\DataTypeTok{replications=}\DecValTok{2}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\KeywordTok{random.factor}\NormalTok{(}\StringTok{"subj"}\NormalTok{, }\DataTypeTok{instances=}\DecValTok{15}\NormalTok{)}
\NormalTok{simdata \textless{}{-}}\StringTok{ }\KeywordTok{design.codes}\NormalTok{(design)}
\NormalTok{simdata}\OperatorTok{$}\NormalTok{x \textless{}{-}}\StringTok{ }\KeywordTok{as.numeric}\NormalTok{(}\KeywordTok{as.character}\NormalTok{(simdata}\OperatorTok{$}\NormalTok{x))}
\end{Highlighting}
\end{Shaded}
We assume that our dependent variable is response time in milliseconds, and that response times are log-normally distributed.
To explain response times in this experimental design, we aim to test two distinct hypotheses, which are implemented in two different hierarchical (linear mixed-effects) models. The alternative hypothesis (H1) assumes that factor \texttt{x} influences the dependent variable, i.e., that the fixed effects estimate associated with factor \texttt{x}, \(\beta_1\), takes some value different from zero, i.e., \(H_1: \beta_1 \neq 0\). In R, the corresponding model formula can be written as: \texttt{log(rt)\ \textasciitilde{}\ 1\ +\ x\ +\ (1\ +\ x\ \textbar{}\ subj)}. By contrast, the null hypothesis (H0) assumes that factor \texttt{x} does not influence the dependent variable response times, i.e., \(H_0: \beta_1 = 0\). In R, the corresponding model formula can be written as: \texttt{log(rt)\ \textasciitilde{}\ 1\ +\ (1\ +\ x\ \textbar{}\ subj)}. To compare this general hypothesis H1 to the point hypothesis H0, we will use Bayes factors.
The next step in SBC is to define the prior model probabilities. For simplicity, we assume that the two hypotheses (H0 and H1) are equally likely a priori, which also has the advantage that both hypotheses are sampled equally frequently in the SBC. (However, see Schad \& Vasishth, 2019, for a different prior with higher probability for the null.)
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{priorsHypothesis \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DataTypeTok{H0 =} \FloatTok{0.5}\NormalTok{, }\DataTypeTok{H1 =} \FloatTok{0.5}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
Moreover, we define hypothetical priors for the model parameters. Note that we assume the dependent variable response times to be log-normally distributed; the priors are thus defined in this log-normal distribution model. They can be interpreted as the priors for a linear mixed-effects model on log-transformed response times. Specifically, for the intercept we assume a normal distribution with mean \(6\) and standard deviation \(0.5\). Note that a prior mean for the intercept of \(6\) reflects the a priori expectation that typical response times are around \texttt{exp(6)\ =\ 403} ms. For the fixed effect estimate for factor \texttt{x} (i.e., \texttt{b}), we assume a normal distribution with mean \(0\) and standard deviation of \(1.0\). For the random effects standard deviations, we assume a half-normal distribution, i.e., a normal distribution with mean \(0\) and standard deviation of \(1.5\) truncated to take only positive values. For the residual noise term, we assume a normal distribution with mean \(0\) and standard deviation of \(0.5\), which is again truncated to take only positive values. For the random effects correlation between the intercept and the estimate for \texttt{x}, we assume an LKJ prior (Lewandowski, Kurowicka, \& Joe, 2009) with parameter value \(2\). We write these priors in \texttt{brms} (Bürkner, 2017, 2018):
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{priors \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(6, 0.5)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"Intercept"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0, 1.0)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"b"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0, 1.5)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"sd"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0, 0.5)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"sigma"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"lkj(2)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"cor"}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
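Before using these priors, it is worth checking what they imply on the millisecond scale. A quick prior pushforward simulation of the intercept prior illustrates this (a Python sketch; the plausibility judgement is our own):

```python
import numpy as np

rng = np.random.default_rng(0)
# intercept prior on the log scale: Normal(6, 0.5)
alpha = rng.normal(6.0, 0.5, 100_000)
typical_rt_ms = np.exp(alpha)             # implied typical response time (ms)
lo, med, hi = np.quantile(typical_rt_ms, [0.025, 0.5, 0.975])
```

The implied median is \(\exp(6) \approx 403\) ms, with a central 95\% interval of roughly \(\exp(6 \pm 1.96 \cdot 0.5)\), i.e., about 151 to 1075 ms, which spans plausible response times without placing much mass on absurd values.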
Based on these priors, it is now possible to simulate a priori data for the artificial experimental design. First, we use the prior probabilities for the hypotheses to sample a hypothesis from the prior. We do so 500 times (i.e., 500 runs of SBC).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{nsim \textless{}{-}}\StringTok{ }\DecValTok{500}
\NormalTok{u \textless{}{-}}\StringTok{ }\KeywordTok{runif}\NormalTok{(nsim)}
\NormalTok{hypothesis\_samples \textless{}{-}}\StringTok{ }\NormalTok{(u }\OperatorTok{\textgreater{}}\StringTok{ }\NormalTok{priorsHypothesis[}\DecValTok{1}\NormalTok{])}\OperatorTok{/}\KeywordTok{sum}\NormalTok{(priorsHypothesis)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{table}\NormalTok{(hypothesis\_samples)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## hypothesis_samples
## 0 1
## 245 255
\end{verbatim}
We see that the H0 and the H1 are each sampled approximately 250 times. We will perform a formal SBC analysis below.
Next, we sample model parameters from the priors based on the model that was sampled in each run. For this, we use the custom R function \texttt{SimFromPrior()} {[}taken from Schad et al. (2021); \url{https://osf.io/b2vx9/}{]}. First, we choose the alternative hypothesis (H1) to sample values for the model parameters, i.e., to sample parameters from their prior distributions.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{beta0 \textless{}{-}}\StringTok{ }\NormalTok{beta1 \textless{}{-}}\StringTok{ }\NormalTok{sigma\_u0 \textless{}{-}}\StringTok{ }\NormalTok{sigma\_u1 \textless{}{-}}\StringTok{ }\NormalTok{rho\_u \textless{}{-}}\StringTok{ }\NormalTok{sigma \textless{}{-}}\StringTok{ }\OtherTok{NA}
\KeywordTok{set.seed}\NormalTok{(}\DecValTok{123}\NormalTok{)}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{nsim) \{}
\NormalTok{ tmp \textless{}{-}}\StringTok{ }\DecValTok{{-}1}\NormalTok{; }\ControlFlowTok{while}\NormalTok{ (tmp}\OperatorTok{\textless{}}\DecValTok{0}\NormalTok{) }\CommentTok{\# truncate the sampled intercept to positive values}
\NormalTok{ tmp \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"Intercept"}\NormalTok{,}\DataTypeTok{coef=}\StringTok{""}\NormalTok{)}
\NormalTok{ beta0[i] \textless{}{-}}\StringTok{ }\NormalTok{tmp}
\NormalTok{ beta1[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"b"}\NormalTok{)}
\NormalTok{ sigma\_u0[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{)}
\NormalTok{ sigma\_u1[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{)}
\NormalTok{ rho\_u[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"cor"}\NormalTok{)}
\NormalTok{ sigma[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"sigma"}\NormalTok{)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
Then we set the \texttt{beta1} parameter to zero in all runs where the null hypothesis was drawn.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{beta1[ hypothesis\_samples}\OperatorTok{==}\DecValTok{0}\NormalTok{ ] \textless{}{-}}\StringTok{ }\DecValTok{0}
\end{Highlighting}
\end{Shaded}
Now that we have simulated the model parameters, we can simulate data based on the sampled hypothesis. For the fake data simulation from a generalized linear mixed-effects model, we use the R function \texttt{simLMM()} from the \texttt{designr} package.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{rtsimmat \textless{}{-}}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\OtherTok{NA}\NormalTok{,}\KeywordTok{nrow}\NormalTok{(simdata),nsim)}
\CommentTok{\# We take exp() since we assume response times are log{-}normally distributed}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{nsim) }
\NormalTok{ rtsimmat[,i] \textless{}{-}}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\KeywordTok{simLMM}\NormalTok{(}\DataTypeTok{formula=}\OperatorTok{\textasciitilde{}}\StringTok{ }\NormalTok{x }\OperatorTok{+}\StringTok{ }\NormalTok{(x }\OperatorTok{|}\StringTok{ }\NormalTok{subj), }
\DataTypeTok{dat=}\NormalTok{simdata, }
\DataTypeTok{Fixef=}\KeywordTok{c}\NormalTok{(beta0[i], beta1[i]), }
\DataTypeTok{VC\_sd=}\KeywordTok{list}\NormalTok{(}\KeywordTok{c}\NormalTok{(sigma\_u0[i], sigma\_u1[i]), sigma[i]), }
\DataTypeTok{CP=}\NormalTok{rho\_u[i], }\DataTypeTok{empirical=}\OtherTok{FALSE}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
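The generative model behind \texttt{simLMM()} can also be sketched directly. The following Python function (our own minimal re-implementation for illustration, not the \texttt{designr} code) simulates log-normal response times with correlated by-subject random effects:

```python
import numpy as np

def sim_lognormal_lmm(n_subj, n_rep, beta0, beta1, sd_u0, sd_u1,
                      rho, sigma, rng):
    """Simulate log rt = (beta0 + u0[s]) + (beta1 + u1[s]) * x + noise."""
    # correlated by-subject random effects (intercept, slope)
    cov = np.array([[sd_u0 ** 2, rho * sd_u0 * sd_u1],
                    [rho * sd_u0 * sd_u1, sd_u1 ** 2]])
    u = rng.multivariate_normal([0.0, 0.0], cov, size=n_subj)
    # n_rep replications of each condition (-1 / +1) per subject
    x = np.tile(np.repeat([-1.0, 1.0], n_rep), n_subj)
    subj = np.repeat(np.arange(n_subj), 2 * n_rep)
    mu = (beta0 + u[subj, 0]) + (beta1 + u[subj, 1]) * x
    return x, subj, np.exp(rng.normal(mu, sigma))  # back to the ms scale

rng = np.random.default_rng(42)
x, subj, rt = sim_lognormal_lmm(n_subj=15, n_rep=2, beta0=6.0, beta1=0.1,
                                sd_u0=0.3, sd_u1=0.1, rho=0.3, sigma=0.3,
                                rng=rng)
```

The parameter values here are arbitrary illustrations; in the SBC loop they are instead drawn from the priors defined above.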
The next step is to estimate the Bayesian (brms) models on the simulated data. For each simulated data set, we estimate the posterior of the H0 and the H1, then we perform bridge sampling, and then we use this to compute a Bayes factor for each of the \(500\) simulated data sets.
For the hierarchical modeling, we use the R-package \texttt{brms} (Bürkner, 2017, 2018). We specify a large number of sampling iterations for each of four chains (\texttt{s\ =\ 10,000}, warmup samples: \texttt{s\ =\ 2,000}). This large number is required to obtain stable Bayes factor estimates. Note that it is a much larger number than the default number of iterations (\texttt{s\ =\ 2,000}), which was not set to estimate Bayes factors, but instead to estimate posterior expectations.
Moreover, \texttt{adapt\_delta}, which is set to \texttt{adapt\_delta\ =\ 0.99}, and \texttt{max\_treedepth}, which is set to \texttt{max\_treedepth\ =\ 15}, are control parameters for ensuring the posterior sampler is working correctly (Betancourt, 2016, 2017; Gabry et al., 2019). Importantly, it is necessary to set the argument \texttt{save\_pars\ =\ save\_pars(all\ =\ TRUE)}. This setting is a precondition for later performing bridge sampling to compute the Bayes factor.
For each model (H0 and H1), we use the function \texttt{bridge\_sampler()} to compute marginal likelihoods, and we compute the Bayes factor by comparing marginal likelihoods using the function \texttt{bayes\_factor(lml\_Full,\ lml\_Null)}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{BF10\_SBC \textless{}{-}}\StringTok{ }\KeywordTok{rep}\NormalTok{(}\OtherTok{NA}\NormalTok{,nsim)}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{nsim) \{}
\NormalTok{ simdata}\OperatorTok{$}\NormalTok{simrt \textless{}{-}}\StringTok{ }\NormalTok{rtsimmat[,i]}
\CommentTok{\# estimate model for alternative hypothesis}
\NormalTok{ brm1 \textless{}{-}}\StringTok{ }\KeywordTok{brm}\NormalTok{(simrt }\OperatorTok{\textasciitilde{}}\StringTok{ }\NormalTok{x }\OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{+}\NormalTok{x}\OperatorTok{|}\NormalTok{subj), simdata, }
\DataTypeTok{family=}\KeywordTok{lognormal}\NormalTok{(), }\DataTypeTok{prior=}\NormalTok{priors, }\DataTypeTok{cores=}\DecValTok{4}\NormalTok{,}
\DataTypeTok{save\_pars =} \KeywordTok{save\_pars}\NormalTok{(}\DataTypeTok{all =} \OtherTok{TRUE}\NormalTok{),}
\DataTypeTok{warmup=}\DecValTok{2000}\NormalTok{, }\DataTypeTok{iter=}\DecValTok{10000}\NormalTok{, }
\DataTypeTok{control=}\KeywordTok{list}\NormalTok{(}\DataTypeTok{adapt\_delta=}\FloatTok{0.99}\NormalTok{, }\DataTypeTok{max\_treedepth=}\DecValTok{15}\NormalTok{))}
\NormalTok{ lml\_Full \textless{}{-}}\StringTok{ }\KeywordTok{bridge\_sampler}\NormalTok{(brm1, }\DataTypeTok{silent=}\OtherTok{TRUE}\NormalTok{)}
\KeywordTok{rm}\NormalTok{(brm1)}
\CommentTok{\# estimate model for null hypothesis}
\NormalTok{ brm0 \textless{}{-}}\StringTok{ }\KeywordTok{brm}\NormalTok{(simrt }\OperatorTok{\textasciitilde{}}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{+}\NormalTok{x}\OperatorTok{|}\NormalTok{subj), simdata, }
\DataTypeTok{family=}\KeywordTok{lognormal}\NormalTok{(), }\DataTypeTok{prior=}\NormalTok{priors[}\OperatorTok{{-}}\DecValTok{2}\NormalTok{,], }\DataTypeTok{cores=}\DecValTok{4}\NormalTok{,}
\DataTypeTok{save\_pars =} \KeywordTok{save\_pars}\NormalTok{(}\DataTypeTok{all =} \OtherTok{TRUE}\NormalTok{),}
\DataTypeTok{warmup=}\DecValTok{2000}\NormalTok{, }\DataTypeTok{iter=}\DecValTok{10000}\NormalTok{,}
\DataTypeTok{control=}\KeywordTok{list}\NormalTok{(}\DataTypeTok{adapt\_delta=}\FloatTok{0.99}\NormalTok{, }\DataTypeTok{max\_treedepth=}\DecValTok{15}\NormalTok{))}
\NormalTok{ lml\_Null \textless{}{-}}\StringTok{ }\KeywordTok{bridge\_sampler}\NormalTok{(brm0, }\DataTypeTok{silent=}\OtherTok{TRUE}\NormalTok{)}
\KeywordTok{rm}\NormalTok{(brm0)}
\NormalTok{ BF10\_SBC[i] \textless{}{-}}\StringTok{ }\KeywordTok{bayes\_factor}\NormalTok{(lml\_Full, lml\_Null)}\OperatorTok{$}\NormalTok{bf}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
Note that in the null model, we do keep the random effects of factor \texttt{x} varying across subjects, i.e., \texttt{simrt\ \textasciitilde{}\ 1\ +\ (1+x\textbar{}subj)}. That is, we do assume that effects of factor \texttt{x} could be present for individual subjects, but importantly, by removing the fixed effect of \texttt{x} we assume a priori that the overall mean effect across all subjects is zero. The model comparison therefore targets only this fixed effect of factor \texttt{x}, not the random effects.
While this is not a required part of SBC, we show here the distributions of Bayes factors given the true hypotheses (see Fig.~\ref{fig:ViolinPlot}). The results show that the Bayes factor estimates exhibit wide distributions when either the H0 or the H1 is true. When the H1 is the true hypothesis in the data simulation, the Bayes factor provides more evidence for the H1 on average. By contrast, when the H0 is the true hypothesis in the data simulation, the distribution of Bayes factors is clearly shifted towards evidence for the H0. Interestingly, these distributions are quite asymmetric, such that strong evidence for the correct hypothesis is rather rare, and weaker evidence is more frequent.
Note that there was one outlier data point for the H0 with a \(BF_{10} = -3e-86\). This resulted from an unstable marginal likelihood, since the bridge sampling did not converge. Thus, even in the simple example case we use, there can occasionally be problems with bridge sampling.
\begin{figure}
{\centering \includegraphics{figure-ViolinPlot-1}
}
\caption{Distribution of Bayes factors (BF10) as a function of which hypothesis was true in the simulations from the SBC. Left panel: Distributions are shown as violin plots. Right panel: Empirical cumulative distribution functions (ECDFs). One outlier data point for the H0 with a BF10 of -3e-86 resulted from an unstable marginal likelihood (i.e., the bridge sampling did not converge) and was removed for visualization.}\label{fig:ViolinPlot}
\end{figure}
Last, we can compute the posterior model probabilities. The ratio of posterior model probabilities, \(p(H1|y)/p(H0|y)\), can be obtained by multiplying the Bayes factor (\texttt{BF10\_SBC}) with the prior ratio of model probabilities (which is \(p(H1)/p(H0) = 0.5/0.5 = 1\) in our example): \(p(H1|y)/p(H0|y) = BF_{10} \times p(H1)/p(H0)\):
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{postModelRat \textless{}{-}}\StringTok{ }\NormalTok{BF10\_SBC }\OperatorTok{*}\StringTok{ }\NormalTok{priorsHypothesis[}\DecValTok{2}\NormalTok{]}\OperatorTok{/}\NormalTok{priorsHypothesis[}\DecValTok{1}\NormalTok{]}
\end{Highlighting}
\end{Shaded}
This posterior ratio can be used to compute the posterior probabilities for the null hypothesis and for the alternative hypothesis:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{postModelProbsH1 \textless{}{-}}\StringTok{ }\NormalTok{postModelRat}\OperatorTok{/}\NormalTok{(postModelRat}\OperatorTok{+}\DecValTok{1}\NormalTok{)}
\NormalTok{postModelProbsH0 \textless{}{-}}\StringTok{ }\DecValTok{1}\OperatorTok{/}\NormalTok{(postModelRat}\OperatorTok{+}\DecValTok{1}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
This code computes the posterior model probabilities for all \(500\) simulation runs.
As the last step, across the \(500\) simulation runs, we average the posterior probabilities for each model, i.e., by computing the mean posterior probability across all \(500\) runs: \(\mu_{\mathcal{M}_x}^{post} = \frac{1}{n_{sim}} \sum_{i=1}^{n_{sim}} p(\mathcal{M}_x \mid y_i^{sim})\), where each \(y_i^{sim}\) is one out of \(n_{sim} = 500\) simulated data sets, \(\mathcal{M}_x\) is one selected model, and \(\mu_{\mathcal{M}_x}^{post}\) is the average posterior probability for model \(x\), or simply in R: \texttt{mean(postModelProbsH1)}.
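As a quick numerical check of this conversion, consider a few hypothetical Bayes factor values under the 50/50 model prior from above (a Python sketch):

```python
import numpy as np

bf10 = np.array([0.2, 1.0, 5.0, 30.0])   # hypothetical Bayes factor estimates
prior_odds = 0.5 / 0.5                   # p(H1) / p(H0) from the model prior
post_odds = bf10 * prior_odds            # posterior odds p(H1|y) / p(H0|y)
p_h1 = post_odds / (1.0 + post_odds)     # posterior probability of H1
p_h0 = 1.0 - p_h1
```

A Bayes factor of 1 leaves the 50\% prior probability unchanged, while \(BF_{10} = 30\) corresponds to \(p(H1 \mid y) \approx 0.97\) under this prior; with a different model prior, the same Bayes factor implies a different posterior probability.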
If one wanted to make decisions based on the continuous evidence, e.g., to compute things like FDR or TDR, then one would need to specify thresholds on Bayes factors or posterior probabilities, such that Bayes factors/posterior probabilities larger or smaller than these thresholds would indicate evidence for the H0, for the H1, or for neither hypothesis. However, one key aspect of Bayesian data analysis is that it provides continuous estimates of posterior probabilities.
Now, we can investigate our question of interest in SBC: we can look at how likely each model was chosen a posteriori on average and compare these average posterior model probabilities (see below, ``means''; in addition, their 95\% binomial confidence intervals) to the prior model probabilities that were in fact used to simulate the data (i.e., 50\% each).
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Obtain 95\% confidence intervals from a logistic linear model }
\CommentTok{\# and transform confidence intervals into probabilities}
\NormalTok{BM \textless{}{-}}\StringTok{ }\KeywordTok{glm}\NormalTok{(postModelProbsH1}\OperatorTok{\textasciitilde{}}\DecValTok{1}\NormalTok{,}\DataTypeTok{family=}\StringTok{"binomial"}\NormalTok{)}
\NormalTok{CIs \textless{}{-}}\StringTok{ }\DecValTok{1}\OperatorTok{/}\NormalTok{(}\DecValTok{1}\OperatorTok{+}\KeywordTok{exp}\NormalTok{(}\OperatorTok{{-}}\KeywordTok{confint}\NormalTok{(BM)))}
\NormalTok{ME \textless{}{-}}\StringTok{ }\KeywordTok{as.numeric}\NormalTok{(}\DecValTok{1}\OperatorTok{/}\NormalTok{(}\DecValTok{1}\OperatorTok{+}\KeywordTok{exp}\NormalTok{(}\OperatorTok{{-}}\KeywordTok{coef}\NormalTok{(BM))))}
\CommentTok{\# Show the average posterior probability for H1 with 95\% confidence intervals}
\KeywordTok{t}\NormalTok{(}\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{pH1=}\KeywordTok{round}\NormalTok{(}\DecValTok{100}\OperatorTok{*}\KeywordTok{c}\NormalTok{(}\DataTypeTok{CI=}\NormalTok{CIs[}\DecValTok{1}\NormalTok{], }\DataTypeTok{mean=}\NormalTok{ME, }\DataTypeTok{CI=}\NormalTok{CIs[}\DecValTok{2}\NormalTok{]),}\DecValTok{2}\NormalTok{)))}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
##     CI.2.5.. mean CI.97.5..
## pH1    45.53 49.91      54.3
\end{verbatim}
The results show that the average posterior model probability for the H1 versus the H0 was at roughly 50\%. This result directly corresponds to the prior model probability of 50\%. The confidence intervals include the prior of 50\%. This SBC analysis therefore, for this specific and simple example case, did not indicate any signs of significant bias. This is important calibration information for the bridge sampling approach, since it has not been clear so far whether bridge sampling yields unbiased estimates for the types of multilevel models studied here and often used in research practice. These results are therefore encouraging and support the application of bridge sampling for computation of Bayes factors and posterior probabilities for our case study. However, much more extensive simulation studies are required to investigate this point more generally, which is outside the scope of this paper.
In addition to these SBC results, we can also investigate additional calibration questions of interest by looking at posterior model probabilities as a function of which prior hypothesis (model) was sampled in a given run. For each simulation we know whether the data was simulated based on the H0 or the H1, that is, we know whether for a given simulated data set, the H0 or the H1 is ``true''. This information is stored in the vector \texttt{hypothesis\_samples}. For each ``true'' hypothesis, we can now look at how much posterior probability mass is allocated to the two models by the Bayesian analysis. If the artificial data were simulated based on the H0, how high is the posterior probability for the H0? Is it higher than chance, and if so, by how much? Moreover, if the artificial data were simulated based on the H1, what is the posterior probability for the H1?
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{true\_hypothesis \textless{}{-}}\StringTok{ }\KeywordTok{ifelse}\NormalTok{(hypothesis\_samples}\OperatorTok{==}\DecValTok{1}\NormalTok{, }\StringTok{"H1"}\NormalTok{, }\StringTok{"H0"}\NormalTok{)}
\NormalTok{tabSBC \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(postModelProbsH0, postModelProbsH1, true\_hypothesis) }\OperatorTok{\%\textgreater{}\%}\StringTok{ }
\StringTok{ }\KeywordTok{group\_by}\NormalTok{(true\_hypothesis) }\OperatorTok{\%\textgreater{}\%}\StringTok{ }
\StringTok{ }\KeywordTok{summarize}\NormalTok{(}\DataTypeTok{pH0=}\KeywordTok{round}\NormalTok{(}\KeywordTok{mean}\NormalTok{(postModelProbsH0, }\DataTypeTok{na.rm=}\OtherTok{TRUE}\NormalTok{)}\OperatorTok{*}\DecValTok{100}\NormalTok{), }
\DataTypeTok{pH1=}\KeywordTok{round}\NormalTok{(}\KeywordTok{mean}\NormalTok{(postModelProbsH1, }\DataTypeTok{na.rm=}\OtherTok{TRUE}\NormalTok{)}\OperatorTok{*}\DecValTok{100}\NormalTok{)) }\OperatorTok{\%\textgreater{}\%}
\StringTok{ }\KeywordTok{as.data.frame}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{table}
\caption{\label{tab:tabSBC1}Average posterior probabilities for the H0 (pH0) and for the H1 (pH1) as a function of the true hypothesis in the simulations (H0 versus H1).}
\centering
\begin{tabular}[t]{lrr}
\toprule
True hypothesis & pH0 & pH1\\
\midrule
H0 & 73 & 27\\
H1 & 28 & 72\\
\bottomrule
\end{tabular}
\end{table}
The results (see Table~\ref{tab:tabSBC1}) in the first row show that if the H0 was used to simulate artificial data, then the Bayesian procedure allocated an average of 73\% posterior probability to the H0. Thus, the chance to support the null hypothesis correctly is clearly better than 50/50, i.e., better than chance, in this set of simulated data and model.
Moreover, the second row of the table shows that if H1 was used to simulate the artificial data, then the posterior probability for H1 was an average of 72\%. Thus, the alternative hypothesis is also somewhat likely to be correctly supported in the present setting. Taken together, this analysis shows that the data and the model on average provide some evidence for the hypotheses of interest. Note that this result completely depends on things such as the effect size or experimental design (including sample size), and the posterior probabilities of the true model may be higher if stronger effects or larger samples are investigated.
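For reference, posterior model probabilities and Bayes factors are linked by Bayes' theorem. Writing \(y\) for the data, the averages in Table~\ref{tab:tabSBC1} can be translated back into posterior odds:
\[
P(H_1 \mid y) \;=\; \frac{BF_{10}\, P(H_1)}{BF_{10}\, P(H_1) + P(H_0)},
\qquad
BF_{10} \;=\; \frac{P(H_1 \mid y)}{P(H_0 \mid y)} \cdot \frac{P(H_0)}{P(H_1)}.
\]
With equal prior probabilities, an average posterior probability of 72\% for a true H1 corresponds to posterior odds of \(0.72/0.28 \approx 2.6\) in favor of the H1 (note that these are odds computed from averaged probabilities, not an average of the individual Bayes factors).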
Importantly, the SBC analysis supported the Bayes factor estimates and could not detect a difference between the average posterior model probabilities and the prior model probabilities, suggesting that the Bayes factor estimates for this analysis are valid. However, this will not always be the outcome of SBC. We will next investigate a case where SBC shows a problem with posterior model probabilities.
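The calibration logic behind this check can be illustrated with a minimal toy example that is separate from the \texttt{brms} workflow used here (all names and distributional choices below are our own): for a point null H0 (\(\theta = 0\)) versus an alternative H1 (\(\theta \sim N(0,1)\)) with a single observation \(y \sim N(\theta, 1)\), the marginal likelihoods are available in closed form, and averaging the exact posterior model probabilities over data simulated from the prior recovers the prior model probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, prior_h0 = 50_000, 0.5

def normal_pdf(x, sd):
    # density of N(0, sd^2) evaluated at x
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# SBC-style simulation: sample the true hypothesis from its prior,
# then simulate one observation y under the sampled hypothesis
h0_true = rng.random(n_sims) < prior_h0
theta = np.where(h0_true, 0.0, rng.normal(0.0, 1.0, n_sims))
y = rng.normal(theta, 1.0)

# closed-form marginal likelihoods: y ~ N(0, 1) under H0, y ~ N(0, sqrt(2)) under H1
m0 = normal_pdf(y, sd=1.0)
m1 = normal_pdf(y, sd=np.sqrt(2.0))
post_h0 = prior_h0 * m0 / (prior_h0 * m0 + (1.0 - prior_h0) * m1)

# calibration check: averaged over the prior predictive, E[P(H0 | y)] = P(H0),
# while the correct hypothesis receives more than 50% posterior mass on average
print(post_h0.mean(), post_h0[h0_true].mean(), post_h0[~h0_true].mean())
```

A well-calibrated Bayes factor estimator applied to a real model should behave in the same way: the overall average posterior probability matches the prior (here 0.5), while the averages conditional on the true hypothesis split above and below it, exactly the pattern shown in Table~\ref{tab:tabSBC1}.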
\hypertarget{sbc-for-bridge-sampling-an-example-where-bayes-factor-estimates-are-not-accurate-due-to-model-mis-specification}{%
\subsubsection{SBC for bridge sampling: An example where Bayes factor estimates are not accurate due to model mis-specification}\label{sbc-for-bridge-sampling-an-example-where-bayes-factor-estimates-are-not-accurate-due-to-model-mis-specification}}
Next, we again investigate the same experimental design as above, with the same priors for the parameters and the same observational model as in the previous section. Again, we are interested in the hypotheses H0 and H1 and in using bridge sampling to compute Bayes factors. We again use SBC to investigate whether average posterior model probabilities correspond to the prior model probabilities, to test whether the Bayes factor estimates capture the true Bayes factor. The only thing that differs in this analysis is that we leave out the random slopes in the estimation procedure. In R-formula syntax, the H1 can thus be written as \texttt{simrt\ \textasciitilde{}\ x\ +\ (1\textbar{}subj)} and the H0 can be written as \texttt{simrt\ \textasciitilde{}\ 1\ +\ (1\textbar{}subj)}. It is known from analyses of frequentist linear mixed-effects models that leaving out random slopes can lead to an increase in type I (\(\alpha\)) error (Barr, Levy, Scheepers, \& Tily, 2013; Matuschek, Kliegl, Vasishth, Baayen, \& Bates, 2017). Here, we are interested in whether leaving out random slopes from a corresponding Bayesian multilevel model leads to a bias in posterior model probabilities. In line with the frequentist results, we expect that posterior model probabilities for the H1 should be inflated when neglecting random slopes. To investigate this, we perform the exact same SBC analysis as before, but now leaving out the random slopes from the fitted models.
\begin{verbatim}
## CI.2.5
## pH1 54.21 58.57 62.84
\end{verbatim}
The result shows that now, as expected, the average posterior probability for the alternative hypothesis (H1) of 58.57\% is higher than its prior probability of 50\%. This increase is supported by the 95\% confidence intervals, which do not include 50\%. Moreover, a frequentist intercept-only logistic regression shows that the average posterior model probability differs highly significantly from the prior probability of 50\% (\(b = 0.35, SE = 0.09, z = 3.81, p = .0001\)). This shows that leaving out random slopes from a multilevel model when the data do contain random variation of the effect across subjects can lead to severe biases in the posterior model probabilities, as pointed out by Barr et al. (2013) for frequentist linear mixed-effects models. SBC is a useful tool that can detect such biases. It is therefore highly recommended to use SBC to calibrate one's Bayes factor estimates for a specific study, model, and priors.
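The reported test statistic can be reproduced by hand. For an intercept-only logistic regression, the maximum-likelihood intercept is the log-odds of the observed proportion and its Wald standard error is \(1/\sqrt{n\hat{p}(1-\hat{p})}\); treating the 500 SBC outcomes as Bernoulli draws with mean 58.57\% (a simplifying assumption of this sketch) recovers the reported values.

```python
import math

n, p_hat = 500, 0.5857  # number of SBC runs; average posterior probability for H1

b = math.log(p_hat / (1.0 - p_hat))               # intercept = log-odds of the mean
se = 1.0 / math.sqrt(n * p_hat * (1.0 - p_hat))   # Wald standard error
z = b / se                                        # Wald z statistic against 50%
p_value = math.erfc(abs(z) / math.sqrt(2.0))      # two-sided normal p-value

print(round(b, 2), round(se, 2), round(z, 2), round(p_value, 4))
```

This reproduces the reported \(b = 0.35\), \(SE = 0.09\), \(z = 3.81\), \(p = .0001\).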
We again look at the average posterior probabilities as a function of which hypothesis was true in the simulations.
\begin{table}
\caption{\label{tab:tabSBC2}Average posterior probabilities for the H0 (pH0) and for the H1 (pH1) as a function of the true hypothesis in the simulations (H0 versus H1) when using a mis-specified model.}
\centering
\begin{tabular}[t]{lrr}
\toprule
True hypothesis & pH0 & pH1\\
\midrule
H0 & 66 & 34\\
H1 & 17 & 83\\
\bottomrule
\end{tabular}
\end{table}
The results (see Table~\ref{tab:tabSBC2}) show that the average posterior probability was largest for the correct model. When the H0 was true in the simulations (first row), then the average posterior probability for the H0 was 66\%. By contrast, when the H1 was true in the simulations (second row), then the average posterior probability for the H1 was 83\%. Thus, while the correct hypothesis still had the higher average posterior probability, the bias towards the H1 remains visible: the average probability for a true H1 (83\%) is considerably larger than for a true H0 (66\%).
The key problem in this analysis is that the models generating the data (\texttt{H0}/\texttt{H1}; which included variation in random slopes) were different from the models used to analyze the data (\texttt{pH0}/\texttt{pH1}; which assumed no variation in random slopes). Thus, we can see in the SBC that if our models are wrong, then our inferences can be misleading.
\hypertarget{sbc-for-the-savage-dickey-method}{%
\subsubsection{SBC for the Savage-Dickey method}\label{sbc-for-the-savage-dickey-method}}
The Savage--Dickey method (Dickey et al., 1970) can be used to estimate the Bayes factor for nested models, where one (or more) of the model parameters of the full model (e.g., a regression coefficient) is set to a fixed value (such as zero) to obtain a nested null model. In these cases, the Bayes factor can be obtained by computing the ratio between the densities of the posterior and the prior at the fixed parameter value of zero. This is a very interesting and elegant result in Bayesian modeling. Unfortunately, however, the implementation of the Savage--Dickey method in \texttt{brms} can give very unreliable results when the posterior is far away from zero, and any kind of Savage--Dickey ratio based on MCMC samples shares this limitation. The reason is that if the posterior is far from the fixed value being investigated, then only very few MCMC draws will fall close to this value. Therefore, the estimate of the posterior density at the value of zero will be very noisy. The results will consistently show very little support for the null model, but the exact amount of support for the alternative hypothesis (H1) cannot be measured precisely.
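The mechanics of the method, and the source of this instability, can be sketched in a few lines. This is a simplified illustration with a standard normal prior and idealized ``posterior draws'' generated directly from a normal distribution; it is not the \texttt{brms} implementation. The Bayes factor in favor of the null, \(BF_{01}\), is the posterior density at zero divided by the prior density at zero, where the posterior density must be estimated from the draws (here via a simple Gaussian kernel density estimate).

```python
import numpy as np

rng = np.random.default_rng(7)

def normal_pdf(x, mean=0.0, sd=1.0):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def bf01_savage_dickey(draws, prior_sd=1.0):
    """BF01 = estimated posterior density at 0 / prior density at 0."""
    bw = draws.std() * len(draws) ** (-1 / 5)                 # rough KDE bandwidth
    post_at_zero = normal_pdf(0.0, mean=draws, sd=bw).mean()  # Gaussian KDE at 0
    return post_at_zero / normal_pdf(0.0, sd=prior_sd)

# idealized posterior draws for the critical parameter under H1
draws_near = rng.normal(0.0, 0.3, 4000)   # posterior mass near zero: stable ratio
draws_far = rng.normal(-1.0, 0.3, 4000)   # posterior far from zero: almost no draws
                                          # near zero, so the ratio is tiny and noisy
print(bf01_savage_dickey(draws_near), bf01_savage_dickey(draws_far))
```

With the posterior centered at zero the ratio is well behaved (here \(BF_{01} > 1\)); with the posterior centered at \(-1\), only the few draws in the far tail contribute to the density estimate at zero, so the estimate is close to zero and varies strongly across re-runs, which is precisely the problem described above.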
Here, we study Bayes factor estimates from the Savage--Dickey method using SBC to test whether the Bayes factor estimates of the method are accurate on average. Again, we run the same SBC analysis as before, now adding random slopes, but using the Savage--Dickey method to compute Bayes factors instead of using bridge sampling.
\begin{verbatim}
## CI.2.5
## pH1 43.02 47.38 51.77
\end{verbatim}
The result shows that the average posterior probability for the H1 is close to the prior model probability of 50\%. A frequentist intercept-only logistic regression shows that there is no evidence that the posterior model probability differs from the prior probability of 50\% (\(b = -0.10, SE = 0.09, z = -1.2, p = .242\)). Note, however, that our SBC analysis uses a limited number of \(500\) SBC iterations, and an SBC with a larger number of iterations might reveal divergences of smaller size than we can currently resolve.
Again, we also look at the average posterior probabilities as a function of which hypothesis was actually true, i.e., which model was used to simulate the data.
\begin{table}
\caption{\label{tab:tabSBC3}Average posterior probabilities for the H0 (pH0) and for the H1 (pH1) as a function of the true hypothesis in the simulations (H0 versus H1) when using the Savage-Dickey method.}
\centering
\begin{tabular}[t]{lrr}
\toprule
True hypothesis & pH0 & pH1\\
\midrule
H0 & 72 & 28\\
H1 & 31 & 69\\
\bottomrule
\end{tabular}
\end{table}
The results (see Table~\ref{tab:tabSBC3}) show that with the Savage--Dickey method, there is again higher posterior probability for the correct hypothesis (72\% for the correct H0 and 69\% for the correct H1).
\hypertarget{sbc-using-the-savage-dickey-method-an-example-with-invalid-average-posterior-probabilities}{%
\subsubsection{SBC using the Savage-Dickey method: An example with invalid average posterior probabilities}\label{sbc-using-the-savage-dickey-method-an-example-with-invalid-average-posterior-probabilities}}
Again, we provide an example where average posterior model probabilities are incorrect as determined by SBC. As for the bridge sampling, we again fit a model where the random slopes are excluded from the model. Everything else remains the same as in the study above.
\begin{verbatim}
## CI.2.5
## pH1 54.25 58.6 62.87
\end{verbatim}
The result shows that, as in the SBC with bridge sampling, the average posterior probability for the H1 is also strongly inflated when using the Savage--Dickey method to estimate Bayes factors. The average posterior probability for the H1 is 58.60\%, significantly different from the prior 50\% (\(b=0.35, SE=0.09, z=3.82, p = .0001\)). This again shows that posterior probabilities are not estimated accurately when random slopes are ignored in a Bayesian linear mixed-effects model.
\begin{table}
\caption{\label{tab:tabSBC4}Average posterior probabilities for the H0 (pH0) and for the H1 (pH1) as a function of the true hypothesis in the simulations (H0 versus H1) when using the Savage-Dickey method and a mis-specified model.}
\centering
\begin{tabular}[t]{lrr}
\toprule
True hypothesis & pH0 & pH1\\
\midrule
H0 & 63 & 37\\
H1 & 19 & 81\\
\bottomrule
\end{tabular}
\end{table}
Again, the average posterior probability as a function of the true hypothesis reveals (see Table~\ref{tab:tabSBC4}) that the correct hypothesis has higher posterior probability than the incorrect hypothesis, but that the posterior probability is higher for the correct H1 (81\%) than for the H0 (63\%), reflecting the overall bias towards the H1 in the estimation.
\hypertarget{unstable-bayes-factor-estimates-due-to-the-effective-number-of-posterior-samples}{%
\subsubsection{Unstable Bayes factor estimates due to the effective number of posterior samples}\label{unstable-bayes-factor-estimates-due-to-the-effective-number-of-posterior-samples}}
An important issue that we have glossed over in the previous examples is that the number of posterior samples chosen in the Hamiltonian Markov Chain Monte Carlo (MCMC) sampler (called by \texttt{brms}) can have a strong impact on the results of the Bayes factor estimators. This is true both for bridge sampling and for the Savage--Dickey method. Bridge sampling is a form of density estimation, and MCMC sampling provides no good theoretical guarantees for such density estimates. Note that in the analyses presented above, we set the number of MCMC draws to a large value of \(n_{iter} = 10,000\) iterations. The SBC analysis therefore took a considerable amount of time.
Running the same \texttt{brms} models with fewer MCMC draws will induce some instability in the Bayes factor estimates based on bridge sampling, such that running the same analysis twice would yield different results for the Bayes factor. Moreover, bridge sampling itself may be unstable and may return different results for different bridge sampling runs on the same posterior MCMC draws (simply because of different starting values). This is very concerning, as the results reported in a paper might not be stable if the number of posterior samples or the effective sample size is not large enough. Indeed, the default number of posterior samples in \texttt{brms} is \texttt{iter\ =\ 2000} (and the default number of warmup samples is \texttt{warmup\ =\ 1000}). It is important to note that these defaults were not set to support bridge sampling (nor the Savage--Dickey method), i.e., they were not chosen for the computation of densities needed for Bayes factors. Instead, they are valid for posterior inference on expectations (e.g., posterior means) for models that are not too complex. When using these defaults for the estimation of densities and the computation of Bayes factors, instabilities can arise.
For illustration, we perform the SBC analysis again, now using the default number of iterations in \texttt{brms} (\(2,000\) iterations, of which \(1,000\) are warm-up). We first run the model using bridge sampling, and then look at SBC for the Savage--Dickey method.
\begin{verbatim}
## CI.2.5
## pH1 44.58 48.96 53.35
\end{verbatim}
The results for bridge sampling show that even with a smaller number of MCMC draws, the average posterior probability does not diverge from the prior 50\%, suggesting accurate average estimation of posterior probabilities. This is very useful information: in our example analysis it shows that although the Bayes factor estimator is noisier with a smaller number of MCMC draws, it does not appear to become biased, at least in the case that we study here.
\begin{table}
\caption{\label{tab:tabSBC5}Average posterior probabilities for the H0 (pH0) and for the H1 (pH1) as a function of the true hypothesis in the simulations (H0 versus H1) using bridge sampling with the small default number of MCMC draws (s = 2,000).}
\centering
\begin{tabular}[t]{lrr}
\toprule
True hypothesis & pH0 & pH1\\
\midrule
H0 & 72 & 28\\
H1 & 31 & 69\\
\bottomrule
\end{tabular}
\end{table}
Also, there is still quite a bit of information captured by the Bayes factor estimates, as average posterior probabilities are larger for the correct hypothesis (see Table~\ref{tab:tabSBC5}). Thus, in this example case, average posterior probabilities perform well even when using a smaller number of MCMC draws. Importantly, this may be the case because we are using a very small simulated data set with only 15 subjects and only 4 data points per subject. When using more realistic and larger data sets, which may also include variation across items, such a small number of MCMC draws might be much more problematic, and larger numbers of MCMC iterations may be needed.
Next, we perform the same analysis for the Savage--Dickey method, again with the default number of MCMC iterations in \texttt{brms}.
\begin{verbatim}
## CI.2.5
## pH1 47.44 51.82 56.18
\end{verbatim}
Again, the average posterior model probabilities do not diverge from the prior 50\%.
\begin{table}
\caption{\label{tab:tabSBC6}Average posterior probabilities for the H0 (pH0) and for the H1 (pH1) as a function of the true hypothesis in the simulations (H0 versus H1) using the Savage-Dickey method with the small default number of s = 2,000 MCMC draws.}
\centering
\begin{tabular}[t]{lrr}
\toprule
True hypothesis & pH0 & pH1\\
\midrule
H0 & 72 & 28\\
H1 & 26 & 74\\
\bottomrule
\end{tabular}
\end{table}
Again (see Table~\ref{tab:tabSBC6}), the average posterior probabilities are largest for the correct hypotheses.
Note that SBC cannot tell us how stably the posterior probabilities are estimated in an individual analysis run. What we need for this is to use the same data set and to estimate Bayes factors again and again, varying only the MCMC samples on which the Bayes factor estimates are based, while leaving all other aspects of the data and the model constant. Thus, we want to know how stable the Bayes factor estimates are with respect to the MCMC chains.
To investigate this, we use the same experimental design that we introduced above. We simulate data from a linear mixed-effects model with a full variance-covariance matrix for random effects (i.e., a maximal linear mixed-effects model). Importantly, this time we do not sample model parameters from prior distributions. Instead, we set them to fixed values. We set the fixed-effect intercept to \(6\) and the effect of \texttt{x} to \(-1\). For the random effects we assume standard deviations of \(0.5\) and a correlation of \(0.3\). The residual noise is set to \(0.5\). We simulate the data using the R function \texttt{simLMM()} and make use of the argument \texttt{empirical=TRUE}, which ensures that the fixed effects in the data correspond precisely to the indicated values (i.e., the intercept is exactly \(6\) and the effect of \texttt{x} exactly \(-1\)).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{design \textless{}{-}}\StringTok{ }\KeywordTok{fixed.factor}\NormalTok{(}\StringTok{"x"}\NormalTok{, }\DataTypeTok{levels=}\KeywordTok{c}\NormalTok{(}\StringTok{"{-}1"}\NormalTok{, }\StringTok{"1"}\NormalTok{), }\DataTypeTok{replications=}\DecValTok{2}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\KeywordTok{random.factor}\NormalTok{(}\StringTok{"subj"}\NormalTok{, }\DataTypeTok{instances=}\DecValTok{15}\NormalTok{)}
\NormalTok{fakedata \textless{}{-}}\StringTok{ }\KeywordTok{design.codes}\NormalTok{(design)}
\NormalTok{fakedata}\OperatorTok{$}\NormalTok{x \textless{}{-}}\StringTok{ }\KeywordTok{as.numeric}\NormalTok{(}\KeywordTok{as.character}\NormalTok{(fakedata}\OperatorTok{$}\NormalTok{x))}
\CommentTok{\# simulate data}
\NormalTok{fakedata}\OperatorTok{$}\NormalTok{fakert \textless{}{-}}\StringTok{ }\KeywordTok{simLMM}\NormalTok{(}\DataTypeTok{formula=}\OperatorTok{\textasciitilde{}}\StringTok{ }\NormalTok{x }\OperatorTok{+}\StringTok{ }\NormalTok{(x }\OperatorTok{|}\StringTok{ }\NormalTok{subj), }
\DataTypeTok{dat =}\NormalTok{fakedata, }
\DataTypeTok{Fixef =} \KeywordTok{c}\NormalTok{(}\DecValTok{6}\NormalTok{, }\FloatTok{{-}1.0}\NormalTok{),}
\DataTypeTok{VC\_sd =} \KeywordTok{list}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\FloatTok{0.5}\NormalTok{, }\FloatTok{0.5}\NormalTok{), }\FloatTok{0.5}\NormalTok{),}
\DataTypeTok{CP =} \FloatTok{0.3}\NormalTok{,}
\DataTypeTok{empirical=}\OtherTok{TRUE}\NormalTok{, }\DataTypeTok{verbose=}\OtherTok{FALSE}\NormalTok{)}
\CommentTok{\# save fake data}
\KeywordTok{saveRDS}\NormalTok{(fakedata, }\StringTok{"dataR/SBC\_BF\_stab\_fakeDat.RDS"}\NormalTok{)}
\CommentTok{\# frequentist linear mixed{-}effects model using lmer() }
\CommentTok{\# shows fixed effects estimates are precisely as indicated}
\KeywordTok{round}\NormalTok{(}\KeywordTok{coef}\NormalTok{(}\KeywordTok{summary}\NormalTok{(}\KeywordTok{lmer}\NormalTok{(fakert }\OperatorTok{\textasciitilde{}}\StringTok{ }\NormalTok{x }\OperatorTok{+}\StringTok{ }\NormalTok{(x }\OperatorTok{|}\StringTok{ }\NormalTok{subj), }\DataTypeTok{data=}\NormalTok{fakedata))),}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 6 0.115 14 52.186 0
## x -1 0.114 14 -8.773 0
\end{verbatim}
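The effect of \texttt{empirical=TRUE} can be mimicked in a small sketch for the fixed effects (our own construction; \texttt{simLMM()} additionally handles the random-effects structure): after simulating residual noise, the component of the noise lying in the column space of the design matrix is removed, so that the least-squares estimates reproduce the chosen coefficients exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 60
x = np.tile([-1.0, 1.0], n // 2)       # two-level factor, effect coded as -1/+1
X = np.column_stack([np.ones(n), x])   # design matrix: intercept and slope
beta = np.array([6.0, -1.0])           # target fixed effects (intercept, effect of x)

e = rng.normal(0.0, 0.5, n)            # residual noise
# project the noise onto the orthogonal complement of the design matrix:
# the least-squares fit of the noise is subtracted out
e = e - X @ np.linalg.lstsq(X, e, rcond=None)[0]
y = X @ beta + e

# the estimated coefficients now equal beta exactly (up to floating point)
b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.round(b_hat, 3))
```

This mirrors the \texttt{lmer()} output above, where the estimates are exactly \(6\) and \(-1\).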
For this fixed simulated data set, we estimate the exact same model 100 times, i.e., each time performing new MCMC sampling using the same data, model and priors. We compute Bayes factors for each of the 100 models, and then investigate whether the 100 Bayes factors are the same in each run. We run this analysis using bridge sampling and also using the Savage--Dickey method. Moreover, we run the analyses using the default number of \(2,000\) samples, and also using the larger number of \(10,000\) samples.
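The logic of this repeated-estimation check can be sketched with an idealized Savage--Dickey computation (an illustration with made-up normal draws, not the \texttt{brms} analysis): the posterior density at zero is re-estimated many times from fresh draws, and the spread of the resulting \(\log_{10}\) Bayes factor estimates shrinks as the number of draws per run grows.

```python
import numpy as np

rng = np.random.default_rng(3)

def normal_pdf(x, mean=0.0, sd=1.0):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def sd_of_log10_bf(n_draws, n_reps=100):
    """Repeat a Savage-Dickey-style BF01 estimate n_reps times with fresh
    draws (posterior N(-1, 0.5), prior N(0, 1)); return the SD of log10(BF01)."""
    log_bfs = []
    for _ in range(n_reps):
        draws = rng.normal(-1.0, 0.5, n_draws)
        bw = draws.std() * n_draws ** (-1 / 5)            # simple KDE bandwidth
        post_at_zero = normal_pdf(0.0, mean=draws, sd=bw).mean()
        log_bfs.append(np.log10(post_at_zero / normal_pdf(0.0, sd=1.0)))
    return np.std(log_bfs)

sd_default, sd_large = sd_of_log10_bf(2_000), sd_of_log10_bf(10_000)
print(sd_default, sd_large)  # run-to-run spread shrinks with more draws
```

The same qualitative pattern, smaller spread with more draws, appears for the real analyses in panel (b) of Figure~\ref{fig:BFniter}.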
\begin{figure}
{\centering \includegraphics{figure-BFniter-1}
}
\caption{Stability of Bayes factor estimates against different MCMC chains. a) Histograms of 100 Bayes factor estimates obtained from the same data and model, where only the MCMC chains differ between runs. The histograms are shown for bridge sampling (left panels) and for the Savage--Dickey method (right panels), as well as for the default number of 2,000 samples (upper panels) and for a larger number of 10,000 samples (lower panels). Red horizontal error bars indicate 95 percent quantiles. b) Bars show the log10 of the standard deviation across the 100 Bayes factor estimates displayed in (a) for each method and number of samples separately.}\label{fig:BFniter}
\end{figure}
The results, displayed in Figure~\ref{fig:BFniter}, show that Bayes factor estimates were quite stable when using bridge sampling with a large number of posterior samples. However, Bayes factor estimates became more unstable when using a smaller number of MCMC samples, and they became quite unstable and variable when estimation was done using the Savage--Dickey method.
These results demonstrate that bridge sampling with a large number of samples is required to obtain stable Bayes factor estimates. Of course, if Bayes factors are not estimated in a stable way, but depend on random noise in the MCMC chain, then these Bayes factor estimates cannot accurately represent information that is contained in the data. Thus, the instability observed for the Savage--Dickey method here may be one important reason why, in the SBC analyses above, there was very little information contained in the posterior probabilities based on the Savage--Dickey method.
As we have discussed above, one might think that the performance of bridge sampling with only 2,000 MCMC iterations does not look too bad. However, this may be a result of the very small data set that we used for these simulations. A small number of MCMC draws may be much more problematic when using larger data sets with possibly more complicated random-effects structures.
We note that the stability of bridge sampling needed in any given application depends on how small a difference between marginal likelihoods one wants to be able to resolve. If the two models compared are very different, then even large variation might be acceptable. However, if the two models are very close to each other, then small variation might be problematic.
Next, we studied the stability of Bayes factors in a situation where the H0 was true in the data-generating process. We used the same simulated data set as in the previous analysis, with the only difference that we set the critical fixed effect to zero.
\begin{figure}
{\centering \includegraphics{figure-BFniter2-1}
}
\caption{Stability of Bayes factor estimates against different MCMC chains in a situation where the H0 is the true model. a) Histograms of 100 Bayes factor estimates obtained from the same data and model, where only the MCMC chains differ between runs. The histograms are shown for bridge sampling (left panels) and for the Savage--Dickey method (right panels), as well as for the default number of 2,000 samples (upper panels) and for a larger number of 10,000 samples (lower panels). Red horizontal error bars indicate 95 percent quantiles. b) Bars show the log10 of the standard deviation across the 100 Bayes factor estimates displayed in (a) for each method and number of samples separately.}\label{fig:BFniter2}
\end{figure}
The results (see Fig.~\ref{fig:BFniter2}) show that the Bayes factors from the Savage--Dickey method are now estimated in a much more stable way. The reason for this is that the Savage--Dickey method relies on the estimated posterior density for the critical fixed effect at the value of zero. In the first set of simulations, the posterior was far away from zero, and the estimation was therefore very unstable. In the second set of simulations, where the true fixed effect was zero, the posterior samples were also close to zero, yielding a quite stable estimation of the Bayes factor. Note that for both bridge sampling and the Savage--Dickey method, larger numbers of samples yielded more stable Bayes factors.
In general, the instability of Bayes factor estimates against the MCMC draws (and against the starting values of the bridge sampler) demonstrates that it is necessary to use a large number of iterations when computing Bayes factors using \texttt{brms} and \texttt{bridge\_sampler()}. Moreover, it shows that we have to check, for each data set that we analyze, whether our Bayes factor estimate is stable. This can be done by running the analysis a few times (at least twice) to test whether the obtained Bayes factor estimates agree.
That said, it is important to note that stability does not imply accuracy. Bridge sampling with a large number of samples returns low-variability estimates, but the stability analysis does not tell us how those estimates relate to the true Bayes factor, i.e., whether the Bayes factor estimates are unbiased. SBC (as performed above) is needed to judge this aspect.
\hypertarget{continuously-varying-prior-probabilities}{%
\subsubsection{Continuously varying prior probabilities}\label{continuously-varying-prior-probabilities}}
The above SBC-based tests of whether bridge sampling performs unbiased approximations of Bayes factors relied on our a priori assumption that prior model probabilities for the H0 and the H1 were both 50\%. Here, we investigate a larger range of prior probabilities, by systematically varying the prior probability for the H0 from zero to one across 500 simulations. Based on this, we performed the same SBC analysis as above.
\begin{figure}
{\centering \includegraphics{figure-SBC3plot-1}
}
\caption{Posterior probabilities for the H0 are plotted as a function of prior probabilities for the H0. If the Bayes factor estimate is unbiased, then the data should be aligned along the diagonal (see dashed black line). The points are average posterior probabilities as a function of a priori selected hypotheses for 50 simulation runs each. Errorbars represent 95 percent binomial confidence intervals. a) Bridge sampling with 10,000 MCMC draws. b) Savage-Dickey method with 2,000 MCMC draws.}\label{fig:SBC3plot}
\end{figure}
The results of this analysis are shown in Figure~\ref{fig:SBC3plot}, which plots the posterior probability for the H0 as a function of the prior probability for the H0. If the Bayes factor estimates are accurate, all data points should lie on the diagonal. The results show that the local regression line is very close to the diagonal, and that the data points (each summarizing results from 50 simulations, with means and confidence intervals) also lie close to the diagonal, demonstrating that the estimated average posterior model probabilities are close to their a priori values. This result shows that posterior model probabilities, which are based on the Bayes factor approximations from the bridge sampling, are unbiased for a large range of different a priori model probabilities.
In fact, we think there is reason to argue that if the Bayes factor estimator works well for one model prior, it will work for all model priors, which is what this analysis demonstrates. Therefore, the analysis with different model priors is not really needed as part of a normal workflow for Bayes factor analyses. However, varying prior model probabilities may still be helpful to reveal some potential problems. Consider, for example, a situation where a bug in the Bayesian estimation software leads to shrinkage of all posterior model probabilities towards a value of 0.5. Such a bias could be detected by varying the true prior probability across simulations as shown above. Thus, while we do not consider this analysis part of the standard workflow, it can still be interesting to perform such analyses.
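The claim that calibration at one model prior extends to all priors can be checked directly in a small closed-form toy model (our own illustration, separate from the \texttt{brms} simulations): for a point null \(\theta = 0\) versus \(\theta \sim N(0,1)\) with one observation \(y \sim N(\theta, 1)\), the exact average posterior probability tracks the prior probability across its whole range.

```python
import numpy as np

rng = np.random.default_rng(11)

def normal_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def avg_posterior_h0(prior_h0, n_sims=20_000):
    # simulate data from the prior over hypotheses, then average the
    # exact posterior probability of H0 across the simulated data sets
    h0_true = rng.random(n_sims) < prior_h0
    theta = np.where(h0_true, 0.0, rng.normal(0.0, 1.0, n_sims))
    y = rng.normal(theta, 1.0)
    m0 = normal_pdf(y, sd=1.0)             # marginal likelihood under H0
    m1 = normal_pdf(y, sd=np.sqrt(2.0))    # marginal likelihood under H1
    post_h0 = prior_h0 * m0 / (prior_h0 * m0 + (1.0 - prior_h0) * m1)
    return post_h0.mean()

results = {p: avg_posterior_h0(p) for p in (0.1, 0.3, 0.5, 0.7, 0.9)}
print(results)  # each average posterior lies close to its prior
```

Because the exact posterior is used, the averages fall on the diagonal for every prior; this is the diagonal pattern that Figure~\ref{fig:SBC3plot} probes for the bridge-sampling estimates.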
In summary, these results suggest that stable and accurate Bayes factors can be computed when using large numbers of posterior MCMC draws (i.e., a large effective sample size). Moreover, the SBC results showed that the resulting Bayes factor estimates deviated from the true Bayes factor in some of the example cases (specifically, when the model formulation was incorrect), demonstrating that SBC is needed to judge whether Bayes factor estimates are accurate. Having considered the average performance of Bayes factor estimation, we next turn to a different issue: how Bayes factors depend on and vary with the data, which can lead to poor performance in individual cases despite good average performance.
\hypertarget{DataSensitivity}{%
\subsection{Data and prior sensitivity}\label{DataSensitivity}}
\hypertarget{variation-associated-with-the-data-subjects-items-and-residual-noise}{%
\subsubsection{Variation associated with the data (subjects, items, and residual noise)}\label{variation-associated-with-the-data-subjects-items-and-residual-noise}}
A second, and very different, source limiting the robustness of Bayes factor estimates derives from the variability in the data, i.e., among subjects, items, and residual noise. Repeating an experiment in a replication study, using different subjects and items, will lead to a different outcome of the statistical analysis every time a new replication is conducted. This limit to robustness is well known in frequentist analyses as the ``dance of p-values'' (Cumming, 2014): over repeated replications, p-values are not consistently significant across studies. Instead, a highly different p-value results each time a study is replicated, and this can even be observed when simulating data from some known truth and re-running the analysis on the simulated data sets.
This same type of variability should also be present in Bayesian analyses (also see \url{https://daniellakens.blogspot.com/2016/07/dance-of-bayes-factors.html}). Here we investigate this type of variability in Bayes factor analyses.
\hypertarget{variability-of-the-bayes-factor-prior-predictive-simulations}{%
\subsubsection{Variability of the Bayes factor: Prior predictive simulations}\label{variability-of-the-bayes-factor-prior-predictive-simulations}}
In the section implementing SBC above, we have seen that SBC provided evidence on whether a model supported accurate estimation of Bayes factors. Moreover, we looked at average posterior probabilities when the H0 or the H1 was true in the simulations, and thus obtained some indication of whether the data contained much (posterior) information about the true hypotheses in question.
Here, we take a closer look at an SBC simulation set where the diagnostics showed stable and accurate results: we take the SBC simulations where we used bridge sampling with many (\(10,000\)) draws to estimate Bayes factors.
Based on these SBC simulations, we can now take a look at how the posterior probabilities vary across individual simulation runs. While the average performance may look promising, this leaves unclear whether inferences from individual data sets are stable or highly variable, and thus how far one can rely on an individual data set for inference.
\begin{figure}
{\centering \includegraphics{figure-SBCvar1-1}
}
\caption{Histograms of posterior probabilities for the H1 across 500 simulated data sets, where either the H0 (left panel) or the H1 (right panel) was the true hypothesis in the data simulations. Estimation was using bridge sampling based on 10,000 MCMC draws.}\label{fig:SBCvar1}
\end{figure}
As is shown in Figure~\ref{fig:SBCvar1}, the posterior probabilities widely varied across individual data sets.
We can see that when the H0 was the true hypothesis in the simulations (Fig.~\ref{fig:SBCvar1}, left panel), then posterior model probabilities for the H1 varied quite a bit from a value of \(0\) up to values larger than \(0.5\), and approaching \(1\). Thus, posterior evidence could support either the H0 or the H1, depending on the individual data set. Note, however, that a large proportion of data sets showed a posterior probability for the H1 of smaller than \(0.5\), suggesting they indeed provided more support for the H0.
By contrast, when the H1 was true in the simulations (Fig.~\ref{fig:SBCvar1}, right panel), then quite a few data sets seemed to provide strong evidence for the H1, with a posterior probability for the H1 of close to \(1\). However, there was also a relatively large proportion of data sets that actually provided more support for the H0 (i.e., with posterior probabilities for the H1 of smaller than \(0.5\)), indicating they supported an incorrect hypothesis.
This shows that for the present experimental design, priors, and effect size, an individual data set may provide evidence, either for or against the effect, that is inconsistent with the true hypothesis. Thus, individual data sets have only a limited ability to inform inference, and larger data sets and/or larger effect sizes may be needed for reliable inferences based on an individual study.
Thus, the results show widely varying posterior model probabilities across data sets that were all simulated from the same prior truth. One option would be to study this variability further in the context of prior predictive analyses. However, we are interested here in how much information is contained in a typical cognitive data set, which may exhibit less variation than the prior predictive analyses performed above. Therefore, we next simulate data based on a posterior model fit for a fairly typical cognitive study.
\hypertarget{variability-of-the-bayes-factor-posterior-simulations}{%
\subsubsection{Variability of the Bayes factor: Posterior simulations}\label{variability-of-the-bayes-factor-posterior-simulations}}
One way to investigate how variable the outcome of Bayes factor analyses can be (given that the Bayes factor is computed in a stable and accurate way) is to run posterior simulations based on a fitted model. That is, one can assume that the truth is approximately known (as approximated by the posterior model fit) and simulate several artificial data sets based on this ``truth''. Computing the Bayes factor analysis again on the simulated data provides insight into how variable the Bayes factor is when the ``true'' data-generating process is always the same, so that variation in the results must be attributed to random noise in participants, items, and residual variation, and to uncertainty about the precise true parameter values. Note that we already performed such simulations in the prior predictive analyses above. Here, we perform more extensive analyses using posterior predictive simulations, because we are interested in how much information is contained in a typical data set from the cognitive sciences, which may contain less variation than data simulated from a prior informed by domain expertise alone.
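This simulate-and-refit logic can be sketched with a deliberately simplified stand-in model. The following Python sketch assumes a conjugate normal model for a single effect and a Savage--Dickey Bayes factor; all names and numbers are illustrative assumptions and do not correspond to the hierarchical \texttt{brms} analyses used below:

```python
import math
import random

random.seed(1)

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def bf10(effect_hat, se, prior_m, prior_s):
    # Savage-Dickey Bayes factor for the point null beta = 0 in a
    # conjugate normal-normal model: BF10 = prior(0) / posterior(0).
    post_prec = 1 / prior_s ** 2 + 1 / se ** 2
    post_s = math.sqrt(1 / post_prec)
    post_m = (prior_m / prior_s ** 2 + effect_hat / se ** 2) / post_prec
    return normal_pdf(0, prior_m, prior_s) / normal_pdf(0, post_m, post_s)

# (i) pretend these are posterior draws of the true effect beta
posterior_draws = [random.gauss(-0.03, 0.01) for _ in range(50)]

n, sigma = 40, 0.15  # hypothetical trials per data set and residual SD
bfs = []
for beta in posterior_draws:
    # (ii) simulate a new data set from this draw, then re-analyze it
    data = [random.gauss(beta, sigma) for _ in range(n)]
    effect_hat = sum(data) / n
    se = sigma / math.sqrt(n)
    bfs.append(bf10(effect_hat, se, prior_m=-0.03, prior_s=0.01))

# the Bayes factor varies across replications of the "same" study
print(min(bfs), max(bfs))
```

Even in this toy setting, the Bayes factors spread over a range of values although every data set comes from the same data-generating process.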
\hypertarget{example-inhibitory-and-facilitatory-interference-effects}{%
\subsubsection{Example: Inhibitory and facilitatory interference effects}\label{example-inhibitory-and-facilitatory-interference-effects}}
For this, we will look at some fairly typical experimental example studies from the cognitive sciences. We look at studies that investigated cognitive mechanisms underlying a well-studied phenomenon in sentence comprehension. The example we consider here is the agreement attraction configuration below, where the ungrammatical sentence (2) seems more grammatical than the equally ungrammatical sentence (1):
\begin{enumerate}
\def(\arabic{enumi}){(\arabic{enumi})}
\tightlist
\item
The key to the cabinet are in the kitchen.
\item
The key to the cabinets are in the kitchen.
\end{enumerate}
Both sentences are ungrammatical because the verb (``are'') does not agree in number with the subject of the sentence (``key''). Sentences such as (2) are often found to have shorter reading times at the verb (``are'') compared to (1) (for a meta analysis see Jäger et al., 2017). Such shorter reading times are sometimes referred to as ``facilitatory interference'' (Dillon, 2011); ``inhibitory interference'' refers to longer reading times at the verb. One proposal explaining the shorter reading times is that the attractor word (here, ``cabinets'') agrees locally in number with the verb, leading to an illusion of grammaticality. This is an interesting phenomenon because the attractor noun (``cabinet/s'') is not the subject, and therefore does not need to agree with the verb; that agreement attraction effects are consistently observed indicates that some non-compositional processes are taking place. An account of agreement attraction effects in language processing that is based on a full computational implementation (in the ACT-R framework; Taatgen, Lebiere, \& Anderson, 2006) explains such effects in ungrammatical sentences as a result of retrieval-based working memory mechanisms (Engelmann, Jäger, \& Vasishth, 2019; cf.~Hammerly, Staub, \& Dillon, 2019). Agreement attraction in ungrammatical sentences has been investigated many times in similar experimental setups with different dependent measures, such as self-paced reading and eye-tracking. It is generally believed to be a robust empirical phenomenon, and we choose it for analysis here because it provides an example of a relatively robust effect in cognitive science.
\hypertarget{overview-of-the-analyses}{%
\subsubsection{Overview of the analyses}\label{overview-of-the-analyses}}
In this section, we look at the data variability of Bayes factors (and associated effect estimates) using posterior predictive simulations across several different scenarios. First, we investigate a study by Lago, Shalom, Sigman, Lau, and Phillips (2015) using priors (in the model fitting, not in the simulation of data) derived from a meta analysis, where the prior mean differs from zero, and where the data provide some evidence for an effect. Then, we look at the same data set using a more neutral prior that is centered on zero. Next, we use data from a study where the overall effect of interest is close to zero. These two data sets are of rather small size (30--60 subjects), which is typical in the cognitive sciences. We then investigate the variability of Bayes factors (and associated effect estimates) in a large-sample study with 181 subjects, which yields much more stable Bayes factor estimates. Finally, we go one step further by looking not at simulated replications of a study, but at ten real empirical replication studies of the same experimental effect. As in the simulations, the real empirical results also show strong variability of the Bayes factor across studies, little evidence within each single study, but strong evidence when pooling across the individual studies.
\hypertarget{LagoFit}{%
\subsubsection{Case 1: Lago et al.~(2015)}\label{LagoFit}}
First, we investigate facilitatory agreement attraction effects by looking at a self-paced reading study by Lago et al. (2015). We estimate a fixed effect for the experimental condition agreement attraction (\texttt{x}; i.e., sentence type), against a null model where the fixed effect of sentence type is excluded. Note that for the agreement attraction effect of sentence type, we use sum contrast coding (i.e., -1 and +1).
We run a multilevel model with the following formula in \texttt{brms}: \texttt{rt\ \textasciitilde{}\ 1+x\ +\ (1+x\textbar{}subj)\ +\ (1+x\textbar{}item)}, where \texttt{rt} is reading time, we have random variation associated with subjects and with items, and we assume that reading times follow a log-normal distribution: \texttt{family\ \ =\ lognormal()}.
As a next step, we determine priors for the analysis of these data. For this we use results from a meta analysis (Jäger et al., 2017) to obtain priors for the effect size of the factor \texttt{x} agreement attraction. We describe how we obtained the priors in detail below in the section showing an example for a Bayes factor workflow. Here, we simply show the prior that is derived from the meta analysis using \texttt{brms} code:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{priors \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(6, 0.5)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"Intercept"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal({-}0.03, 0.009)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"b"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0, 0.5)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"sd"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0, 1)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"sigma"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"lkj(2)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"cor"}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
Next, using these priors, we fit the Bayesian model using brms:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# run alternative model}
\NormalTok{m1\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{brm}\NormalTok{(rt }\OperatorTok{\textasciitilde{}}\StringTok{ }\DecValTok{1}\OperatorTok{+}\NormalTok{x }\OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{+}\NormalTok{x}\OperatorTok{|}\NormalTok{subj)}\OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{+}\NormalTok{x}\OperatorTok{|}\NormalTok{item),}
\DataTypeTok{data =}\NormalTok{ lagoE1,}
\DataTypeTok{family =} \KeywordTok{lognormal}\NormalTok{(),}
\DataTypeTok{prior =}\NormalTok{ priors,}
\DataTypeTok{warmup =} \DecValTok{2000}\NormalTok{,}
\DataTypeTok{iter =} \DecValTok{10000}\NormalTok{,}
\DataTypeTok{cores =} \DecValTok{4}\NormalTok{,}
\DataTypeTok{save\_pars =} \KeywordTok{save\_pars}\NormalTok{(}\DataTypeTok{all =} \OtherTok{TRUE}\NormalTok{),}
\DataTypeTok{control =} \KeywordTok{list}\NormalTok{(}\DataTypeTok{adapt\_delta =} \FloatTok{0.99}\NormalTok{,}
\DataTypeTok{max\_treedepth=}\DecValTok{15}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
We skip a careful checking of the model here (also see Schad et al., 2021; Betancourt, 2020b), and show these analyses later in the section discussing an example for the Bayes factor workflow.
Next, we take a look at the population-level results from the Bayesian modeling.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{round}\NormalTok{(}\KeywordTok{fixef}\NormalTok{(m1\_lagoE1),}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Estimate Est.Error Q2.5 Q97.5
## Intercept 6.015 0.056 5.903 6.127
## x -0.031 0.008 -0.046 -0.015
\end{verbatim}
They show that for the fixed effect \texttt{x}, capturing the agreement attraction effect, the 95\% credible interval does not overlap with zero. This indicates that the effect may have the expected negative direction, reflecting shorter reading times in the plural condition.
As discussed above, such estimation does not answer the question of how much evidence we have for an effect at all. The credible interval may hint that the predictor is needed to explain the data, but it does not quantify how much evidence there is that the parameter is needed (see Wagenmakers et al., 2019; Rouder et al., 2018). We cannot draw such an inference because we did not explicitly specify the null hypothesis of a zero effect. Instead, Bayes factors are needed, which compare a null model without the effect/parameter to an alternative model that contains the parameter.
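For reference, the Bayes factor that we compute below is the ratio of the two models' marginal likelihoods, where each marginal likelihood integrates the likelihood over the model's prior:
\[
BF_{10} = \frac{p(y \mid \mathcal{M}_1)}{p(y \mid \mathcal{M}_0)}, \qquad
p(y \mid \mathcal{M}) = \int p(y \mid \Theta, \mathcal{M}) \, p(\Theta \mid \mathcal{M}) \, d\Theta .
\]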
To this end, we run the model again, now without the parameter of interest, i.e., the null model, which essentially fixes \(\beta\) to exactly zero: \texttt{rt\ \textasciitilde{}\ 1\ +\ (1+x\textbar{}subj)\ +\ (1+x\textbar{}item)}.
Now everything is ready for computing the log marginal likelihood, that is, the probability of the data given the model, after integrating out the model parameters, which we estimate using bridge sampling as before (Gronau et al., 2017b, 2020). We perform this integration using the function \texttt{bridge\_sampler()} for each of the two models:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# run bridge sampler}
\NormalTok{lml\_m1\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{bridge\_sampler}\NormalTok{(m1\_lagoE1, }\DataTypeTok{silent =} \OtherTok{TRUE}\NormalTok{)}
\NormalTok{lml\_m0\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{bridge\_sampler}\NormalTok{(m0\_lagoE1, }\DataTypeTok{silent =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
This gives us the log marginal likelihoods for each of the two models. From these, we can compute the Bayes factor.
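Since bridge sampling returns log marginal likelihoods, the Bayes factor is simply the exponentiated difference of the two values. A minimal Python sketch (the two log marginal likelihood values are made up for illustration):

```python
import math

# hypothetical log marginal likelihoods, as returned by bridge sampling
lml_m1 = -14100.2  # alternative model (illustrative value)
lml_m0 = -14102.1  # null model (illustrative value)

# BF10 = p(y | M1) / p(y | M0) = exp(lml_m1 - lml_m0)
bf10 = math.exp(lml_m1 - lml_m0)
print(round(bf10, 2))
```

Working on the log scale avoids numerical underflow, since the marginal likelihoods themselves are astronomically small numbers.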
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{h\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{bayes\_factor}\NormalTok{(lml\_m1\_lagoE1, lml\_m0\_lagoE1)}
\end{Highlighting}
\end{Shaded}
We use the command \texttt{bayes\_factor(lml\_m1\_lagoE1,\ lml\_m0\_lagoE1)} to specify that we want to compute the Bayes factor between the full model, where the effect of agreement attraction is included, and the null model, where the effect of agreement attraction is absent. It computes the Bayes factor \(BF_{10}\), that is, the evidence of the alternative over the null:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{h\_lagoE1}\OperatorTok{$}\NormalTok{bf}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 6.744471
\end{verbatim}
It shows a Bayes factor of about \(6.7\), suggesting that there is some support for the alternative model, which contains the fixed effect of agreement attraction. That is, this provides evidence for the alternative hypothesis that there is a difference between the experimental conditions, i.e., a facilitatory effect in the plural condition of the size derived from the meta analysis.
Under the criteria shown in Table~\ref{tab:BFs}, the Bayes factor provides moderate evidence for an effect of sentence type on reading times.
However, our current purpose is to perform posterior predictive analyses. We can now take the Bayesian hierarchical model fitted to the data above (Lago et al., 2015) and run posterior predictive simulations. In these simulations, in each simulation run (i) one takes a posterior sample for the model parameters (i.e., \(p(\Theta \mid y)\)) and then (ii) uses this sample of model parameters to simulate new data \(\tilde{y}\) from the model \(p(\tilde{y} \mid \Theta)\). That is, posterior predictive simulations are a Bayesian way to perform artificial data simulation. Posterior predictive simulations from the fitted \texttt{brms} model can be performed using the brms-function \texttt{posterior\_predict()}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pred\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{posterior\_predict}\NormalTok{(m1\_lagoE1)}
\end{Highlighting}
\end{Shaded}
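The idea behind \texttt{posterior\_predict()} can be mimicked for a log-normal model in a few lines. The following Python sketch uses made-up stand-ins for the posterior draws rather than the actual fitted model:

```python
import math
import random

random.seed(2)

# stand-ins for posterior draws of (intercept, effect, residual SD)
# on the log-millisecond scale (values are illustrative)
draws = [(random.gauss(6.0, 0.05),
          random.gauss(-0.03, 0.01),
          abs(random.gauss(0.4, 0.02))) for _ in range(100)]

x = [-1, 1] * 20  # sum-coded sentence-type predictor for 40 trials

datasets = []
for intercept, beta, sigma in draws:
    # one simulated data set per posterior draw: lognormal reading times
    rts = [math.exp(random.gauss(intercept + beta * xi, sigma)) for xi in x]
    datasets.append(rts)

print(len(datasets), len(datasets[0]))
```

Each posterior draw yields one complete artificial data set, so both parameter uncertainty and trial-level noise propagate into the simulations.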
Figure~\ref{fig:posteriorPredictiveSimulations} visualizes the simulated data via density plots for the observed data (black) and for \(100\) posterior simulated data sets (shown in color/grey). It shows that the simulated data seem fairly well in line with the empirically observed data, at least when investigating the marginal distribution.
\begin{figure}
{\centering \includegraphics{figure-posteriorPredictiveSimulations-1}
}
\caption{Density plots for observed data (black) and for 100 posterior artificial data sets simulated from a fitted Bayesian model (shown in color/grey).}\label{fig:posteriorPredictiveSimulations}
\end{figure}
The question we are interested in is how much information is contained in these posterior simulated data. That is, we can run Bayesian models on the posterior simulated data and compute Bayes factors to test whether the simulated data provide evidence for agreement attraction effects. Of particular interest is how variable the results of these Bayes factor analyses will be across different simulated replications of the same study.
Note that this effectively constitutes a prior predictive simulation in which the prior is informed by a previous experiment.
We now perform this analysis for \(50\) different artificial data sets simulated from the posterior predictive distribution. For each of these data sets, we can proceed in exactly the same way as we did for the real observed experimental data. That is, \(50\) times, we again fit the same \texttt{brms} model, now to the simulated data, and using the same prior as before. For each simulated data-set, we use bridge sampling to compute the Bayes factor of the alternative model compared to a null model where the agreement attraction effect (fixed effect predictor of sentence type, \texttt{x}) is set to \(0\). For each simulated posterior predictive data set, we store the resulting Bayes factor. We again use the prior from the meta analysis.
We can now visualize the distribution of Bayes factors (\(BF_{10}\)) across posterior predictive distributions by plotting a histogram. Values larger than one in this histogram indicate evidence for the alternative model (H1) that agreement attraction effects exist (i.e., the sentence type effect is different from zero), and Bayes factor values smaller than one indicate evidence for the null model (H0) that no agreement attraction effect exists (i.e., the difference in reading times between experimental conditions is zero).
\begin{figure}
{\centering \includegraphics{figure-plotBFdistrn-1}
}
\caption{Left panel: Histogram of Bayes factors (BF10) of the alternative model over the null model in 50 simulated data sets. The vertical solid black line shows equal evidence for both hypotheses; the dashed red line shows the Bayes factor computed from the empirical data; the horizontal errorbar shows 95 percent of all Bayes factors. Right panel: Estimates of the facilitatory effect of retrieval interference and 95 percent credible intervals across all simulations (black, solid lines) and the empirically observed data (red, dashed line).}\label{fig:plotBFdistrn}
\end{figure}
The results (see Fig.~\ref{fig:plotBFdistrn}) show that the Bayes factors are quite variable. Although all data sets are simulated from the same posterior predictive distribution, the Bayes factor results are as different as providing moderate evidence for the null model (\(BF_{10} < 1/3\)) or providing strong evidence for the alternative model (\(BF_{10} > 10\)). The bulk of the simulated data sets provide moderate or anecdotal evidence for the alternative model. That is, much like the ``dance of p-values'' (Cumming, 2014), this analysis reveals a ``dance of the Bayes factors'' with simulated repetitions of the same study. The variability in these results shows that a typical cognitive or psycholinguistic data set is not necessarily highly informative for drawing firm conclusions about the hypotheses in question.
What is driving these differences in the Bayes factors between simulated data sets? One obvious reason is that the difference in reading times between the two sentence types, that is, the experimental effect that we wish to make inferences about, varies with the noise and uncertainty in the posterior predictive simulations. It is therefore interesting to plot the Bayes factors from these simulated data sets as a function of the difference in simulated reading times between the two sentence types, as estimated in the Bayesian model. That is, we extract the estimated mean difference in reading times at the verb between plural and singular attractor conditions from the fixed effects of the Bayesian model, and plot the Bayes factor as a function of this difference (together with 95\% credible intervals).
\begin{figure}
{\centering \includegraphics{figure-BFregression-1}
}
\caption{Bayes factor (BF10) as a function of the estimate (with 95 percent credible intervals) of the facilitatory effect of retrieval interference across 50 simulated data sets. The prior is from a meta analysis. Analysis results from the empirical data are shown in red.}\label{fig:BFregression}
\end{figure}
The results (displayed in Figure~\ref{fig:BFregression}) show that the mean difference in reading times between experimental conditions varies across posterior predictive simulations. This indicates that the experimental data and design contain a limited amount of information about the effect of interest. Of course, if the (simulated) data are not stable, Bayes factor analyses based on these simulated data cannot be stable across simulations either. Accordingly, as is clear from Figure~\ref{fig:BFregression}, the magnitude of the difference in mean reading times between experimental conditions is indeed a main driving force behind the Bayes factor calculations.
One important thing to note in Figure~\ref{fig:BFregression} is that as the difference between reading times becomes more negative,
that is, as the plural noun condition (i.e., ``cabinets'' in the example; sentence 2) is read faster than the singular noun condition (i.e., ``cabinet''; example sentence 1), the Bayes factor BF10 increases to larger and larger values, indicating that the evidence in favor of the alternative model increases. When the difference between reading times becomes \textit{less} negative, by contrast, i.e., the plural condition (sentence 2) is not read much faster than the singular condition (sentence 1), then the Bayes factor BF10 decreases to values smaller than 1. Importantly, this behavior occurs because we are using our informative priors from the meta analysis, where the prior mean for the agreement attraction effect is not centered at a mean of zero, but has a negative value (i.e., a prior mean of \(-0.027\) on the log millisecond scale). Therefore, differences in reading times that are \textit{less} negative / more positive than this prior mean are more in line with a null model of no effect. This also leads to the striking observation that the 95\% credible intervals are quite consistent and all do not overlap with zero, whereas the Bayes factor results are far more variable. Note that computing Bayes factors for such a prior with a non-zero mean asks the very specific question of whether the data provide more evidence for the effect size obtained from the meta analysis compared to the absence of any effect.
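This qualitative pattern can be illustrated with a toy Savage--Dickey computation for a normal-normal model with a negative prior mean, roughly mimicking the meta-analysis prior. The standard error and all resulting magnitudes are illustrative assumptions and are not comparable to the hierarchical analysis:

```python
import math

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def bf10(effect_hat, se, prior_m, prior_s):
    # Savage-Dickey: BF10 = prior density at 0 / posterior density at 0
    post_prec = 1 / prior_s ** 2 + 1 / se ** 2
    post_s = math.sqrt(1 / post_prec)
    post_m = (prior_m / prior_s ** 2 + effect_hat / se ** 2) / post_prec
    return normal_pdf(0, prior_m, prior_s) / normal_pdf(0, post_m, post_s)

se = 0.008                       # illustrative standard error
prior_m, prior_s = -0.03, 0.009  # prior with a negative mean

for effect in [-0.05, -0.03, -0.01, 0.0, 0.02]:
    # BF10 shrinks as the estimate becomes less negative than the prior mean
    print(effect, bf10(effect, se, prior_m, prior_s))
```

Over this range of effect estimates, the Bayes factor decreases monotonically as the estimate moves away from the negative prior mean toward zero, mirroring the pattern in the figure.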
\hypertarget{case-2-using-a-prior-centered-on-zero}{%
\subsubsection{Case 2: Using a prior centered on zero}\label{case-2-using-a-prior-centered-on-zero}}
As an alternative, we can also use a centered prior, where the prior mean is zero, that is, we are agnostic with respect to the direction of the effect. We repeat the simulations here with such a mean-centered prior for the agreement attraction effect (using a prior standard deviation of \(0.3\); see the sensitivity analysis below).
\begin{figure}
{\centering \includegraphics{figure-BFregressionPriorN-1}
}
\caption{Centered prior with mean 0 and standard deviation 0.3. Bayes factor (BF10) as a function of the effect estimate (with 95 percent credible intervals) for 50 simulated studies.}\label{fig:BFregressionPriorN}
\end{figure}
For this changed, centered prior, the Bayes factors now show a slightly different result. As displayed in Figure~\ref{fig:BFregressionPriorN}, Bayes factors now follow a hockey-stick function (which would turn into a U-shape if more positive effect sizes were present in the data). For large negative differences between reading times in agreement attraction conditions (i.e., the left side of the plot), Figure~\ref{fig:BFregressionPriorN} again shows Bayes factors \(BF_{10}\) larger than one, indicating evidence in favor of the alternative hypothesis (i.e., the model including the fixed effect of agreement attraction). Moreover, when the estimated difference between reading times approaches zero, Bayes factors show values smaller than one (but close to one), indicating support for the null hypothesis. However, support for the null hypothesis is now much less pronounced (only anecdotal, i.e., \(BF_{10} > 1/3\)). This is because the alternative hypothesis (H1) now specifies a prior mean of zero for the effect size, such that even an estimated effect size of zero can still be explained reasonably well by the alternative model, and the null model is not much better at explaining the data.
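This behavior can be illustrated with a toy Savage--Dickey computation for a normal-normal model, now with a mean-zero prior: the Bayes factor is then symmetric around an estimated effect of zero. Again, the standard error and the resulting magnitudes are illustrative assumptions only:

```python
import math

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def bf10_centered(effect_hat, se, prior_s=0.3):
    # Savage-Dickey with a normal(0, prior_s) prior on the effect beta
    post_prec = 1 / prior_s ** 2 + 1 / se ** 2
    post_s = math.sqrt(1 / post_prec)
    post_m = (effect_hat / se ** 2) / post_prec
    return normal_pdf(0, 0.0, prior_s) / normal_pdf(0, post_m, post_s)

se = 0.02  # illustrative standard error
for effect in [-0.06, -0.03, 0.0, 0.03, 0.06]:
    # symmetric in the effect estimate, with its minimum at zero
    print(effect, round(bf10_centered(effect, se), 3))
```

With a mean-zero prior, positive and negative effect estimates of equal size yield identical Bayes factors, and the minimum lies at an estimate of exactly zero, producing the U-shape described above.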
\hypertarget{case-3-a-study-with-an-effect-size-close-to-zero}{%
\subsubsection{Case 3: A study with an effect size close to zero}\label{case-3-a-study-with-an-effect-size-close-to-zero}}
Next, we show an example case where the effect size in the original study is very close to zero, i.e., there is no difference between experimental conditions (Wagers, Lau, \& Phillips, 2009, experiment 3, singular). Therefore, the simulated effect sizes vary between positive and negative values. We again use the mean-centered prior (prior mean = 0, prior standard deviation = 0.3). Figure~\ref{fig:BFregressionPriorWagersE3sg} shows that the Bayes factor exceeds one, providing some support for the alternative model, not only for negative estimated effect sizes but also for positive ones. Any difference between experimental conditions, negative or positive, can support the alternative model. Note that this only happens for a centered prior, where the prior mean is zero. For our informative prior based on the meta analysis, which is not centered on zero, increasingly positive effect estimates would instead lead to increasing evidence for the null.
\begin{figure}
{\centering \includegraphics{figure-BFregressionPriorWagersE3sg-1}
}
\caption{Results from a study with no facilitatory effect (Wagers et al., 2009, Exp. 3, singular). Centered prior with mean 0 and standard deviation 0.3. Bayes factor (BF10) as a function of the effect estimate (with 95 percent credible intervals) for 50 simulated studies.}\label{fig:BFregressionPriorWagersE3sg}
\end{figure}
\hypertarget{case-4-a-large-sample-study}{%
\subsubsection{Case 4: A large sample study}\label{case-4-a-large-sample-study}}
Finally, the previous studies had relatively small sample sizes: the study by Lago et al. (2015) (experiment 1) had 32 subjects, and the study by Wagers et al. (2009) (experiment 3, singular) had data from 60 subjects. We now examine how stable Bayes factors are when the sample size is relatively large. For this, we perform the same Bayes factor analysis for data from a study by Jäger, Mertzen, Van Dyke, and Vasishth (2020), which contains data from 181 subjects. The results are displayed in Figure~\ref{fig:BFregressionPriorJaeger}. They show that with 181 subjects the Bayes factor is quite stable: across 50 posterior simulated data sets, the Bayes factor computed on the simulated data always ranges between \(1.3\) and \(1.7\). Thus, in large-sample studies the ``dance of the Bayes factor'' is confined to a narrow range, and Bayes factors are quite stable. This may partly be because the posterior predictive distribution was so narrow that all the simulated data sets were very much alike.
\begin{figure}
{\centering \includegraphics{figure-BFregressionPriorJaeger-1}
}
\caption{Analysis for a large-sample study (Jaeger et al., 2020). Centered prior with mean 0 and standard deviation 0.3. Bayes factor (BF10) as a function of the effect estimate (with 95 percent credible intervals) for 50 simulated studies.}\label{fig:BFregressionPriorJaeger}
\end{figure}
\hypertarget{sensitivity-analysis}{%
\subsubsection{Sensitivity analysis}\label{sensitivity-analysis}}
In the above example, there was good prior information about the free model parameter \(\beta\) from a meta analysis. However, what happens if we are not sure about the prior for the model parameter? It might happen that we compare the null model with a very ``bad'' alternative model, because our prior for \(\beta\) is not appropriate.
To deal with this situation, many authors use or recommend default prior distributions, where the priors for the model parameters are fixed (e.g., at the scale of an effect size), and are independent of the scientific problem in question, and of potential subjective perspectives on it (Morey \& Rouder, 2011; Navarro, 2015; Rouder, Speckman, Sun, Morey, \& Iverson, 2009; Zellner \& Siow, 1980). While Rouder et al. (2009) provide default priors that are appropriate to generic situations, they also (p.~235) state: ``simply put, principled inference is a thoughtful process that cannot be performed by rigid adherence to defaults.'' In other words, they point out that it is important to consider alternative values for the prior; a sensitivity analysis is necessary.
A sensitivity analysis examines a range of alternative models, where each model uses different prior assumptions. This makes it possible to investigate the extent to which the Bayes factor results depend on, or are sensitive to, the prior assumptions. Recall that the model is the likelihood \emph{and} the priors; we can therefore compare models that differ only in the prior (for an example see Nicenboim, Vasishth, \& Rösler, 2020). We will next perform a sensitivity analysis for effects of agreement attraction for the data by Lago et al. (2015). This involves running the \texttt{brms} model on the actual observed data using different priors.
We examine Bayes factors for several models. Each model has the same likelihood but a different prior for \(\beta\) (i.e., the effect of sentence type \texttt{x}). For all of the priors we assume a normal distribution with a mean of zero; assuming a mean of zero means that we make no a priori assumption about the direction of the effect. If the effect differs from zero, we want the data to tell us that. What differs between the priors is their standard deviation, that is, the amount of uncertainty about the effect size that we allow for a priori. A large standard deviation allows for very large effect sizes a priori, whereas a small standard deviation encodes the assumption that the effect is not very large. Note that while a model with a wide prior (large standard deviation) also allocates prior probability density to small effect sizes, it allocates much less density there. Thus, if the effect size in the observed data is actually small, a model with a narrow prior (small standard deviation) will have a better chance of detecting the effect.
Note that a sensitivity analysis is a case of inference over model space (i.e., over many different models), where one reports the entire model posterior instead of choosing any particular model (i.e., a particular prior). The difference between inference and decision making is critical here: the posterior provides continuous evidence about models with different priors, but it does not support decision making, i.e., the selection of individual models, without a utility function. We discuss this distinction further below.
Here, we use the same priors as used in the previous analysis of these data, now again with a centered prior mean for the effect of \texttt{x}. That is, the normal distribution for the prior for the agreement attraction effect (i.e., difference in reading times between sentence types, \texttt{x}) now has a mean of zero. Moreover, we now vary its standard deviation, using different values ranging from SD = \(0.005\) to SD = \(0.4\).
We run the \texttt{brms} model for each of the 10 different priors (which differ only in the prior standard deviation for the experimental factor). Then, we compute the Bayes factor using bridge sampling against the null model with \texttt{x} = \(0\), and we store the resulting Bayes factor for each model.
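The non-monotone shape of such a sensitivity analysis can be illustrated with a toy Savage--Dickey computation for a normal-normal model; the effect estimate, standard error, and prior standard deviations below are illustrative assumptions, and only the qualitative pattern is of interest:

```python
import math

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def bf10(effect_hat, se, prior_s):
    # Savage-Dickey with a normal(0, prior_s) prior on the effect beta
    post_prec = 1 / prior_s ** 2 + 1 / se ** 2
    post_s = math.sqrt(1 / post_prec)
    post_m = (effect_hat / se ** 2) / post_prec
    return normal_pdf(0, 0.0, prior_s) / normal_pdf(0, post_m, post_s)

effect_hat, se = -0.03, 0.015  # illustrative estimate and standard error
for prior_s in [0.005, 0.01, 0.03, 0.05, 0.1, 0.2, 0.4]:
    # BF10 peaks at an intermediate prior SD and falls off on both sides:
    # very narrow priors barely differ from the null, very wide priors
    # are penalized by Occam's razor
    print(prior_s, round(bf10(effect_hat, se, prior_s), 2))
```

In this toy version, the Bayes factor rises toward a maximum at an intermediate prior standard deviation and then drops below one for wide priors, the same qualitative pattern as in the brms-based sensitivity analysis below.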
Finally, we plot the Bayes factors as a function of the prior standard deviation (see Fig.~\ref{fig:sensAnalysPlot}). We show the \(BF_{10}\), that is, the evidence for the alternative model over the null model.
\begin{figure}
{\centering \includegraphics{figure-sensAnalysPlot-1}
}
\caption{Sensitivity analysis: Bayes factor (BF10) as a function of the prior standard deviation.}\label{fig:sensAnalysPlot}
\end{figure}
The results show that there is very little evidence for an agreement attraction effect in the sensitivity analysis. The Bayes factors provide at most anecdotal evidence for the alternative model at small prior standard deviations, i.e., for small effect sizes (the maximum lies at a standard deviation of \(0.03\)). At the same time, models with a larger prior standard deviation yield anecdotal evidence against agreement attraction effects, that is, against large effect sizes.
Conceptually, the data do not fully support such big effect sizes, but start to favor the null model relatively more when such big effect sizes are tested against the null.
Indeed, Bayes factors explicitly penalize models with wide priors if the data aren't consistent with large effect sizes. This is the effect of Occam's razor that we discussed above (see Fig.~\ref{fig:OccamFactor}).
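This penalty can be illustrated with a toy analytic sketch (this is not the \texttt{brms}/bridge-sampling analysis itself): assuming a normal likelihood with known standard error and a zero-centered normal prior on the effect, the marginal likelihoods under the H0 and the H1 are available in closed form. The effect estimate and standard error below are hypothetical values chosen only for illustration.

```python
import math

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bf10(estimate, se, prior_sd):
    # Under H1 (effect ~ Normal(0, prior_sd)), the marginal distribution of the
    # estimate is Normal(0, sqrt(se^2 + prior_sd^2)); under H0 the effect is fixed at 0.
    marginal_h1 = normal_pdf(estimate, 0.0, math.sqrt(se ** 2 + prior_sd ** 2))
    marginal_h0 = normal_pdf(estimate, 0.0, se)
    return marginal_h1 / marginal_h0

est, se = -0.03, 0.02  # hypothetical effect estimate and standard error
for prior_sd in [0.005, 0.01, 0.03, 0.1, 0.4]:
    print(prior_sd, round(bf10(est, se, prior_sd), 2))
```

In this sketch, \(BF_{10}\) peaks at an intermediate prior standard deviation and drops below 1 for very wide priors: widening the prior spreads prior mass over large effect sizes that the data do not support, which is exactly the Occam penalty at work.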
Note that these results do not directly support a decision to pick the model with a standard deviation of \(0.03\) as the best model (that would require a utility function); the evidence in posterior inference is continuous rather than discrete.
The reason that the conclusion differs (sometimes dramatically) as a function of the prior is that priors are never uninformative when it comes to Bayes factors. The wide priors specify that we expect very large effect sizes (with some considerable probability), and there is relatively little evidence in the data for such large effect sizes.
Indeed, very recently, Uri Simonsohn criticized Bayes factors because they might provide evidence in favor of the null and against a very specific alternative model when the researchers only knew the direction of the effect (see \url{https://datacolada.org/78a}). This can happen when very vague, uninformative priors are used, and it provides a major motivation for using more informed prior distributions.
Overall, we think that the outcome of the sensitivity analysis, namely weak evidence, is quite reasonable. The results basically show that the Bayes factors lie somewhere between 3 and 1/3, which all indicate inconclusive results. Given that all results are inconclusive, it does not really matter whether the Bayes factors are larger or smaller than one, since no conclusions can be drawn from them anyway.
The above example again shows the impact of the prior. First, we had performed an analysis with an informative prior derived from a meta analysis, with a prior mean of \(-0.027\), and had obtained strong evidence for the alternative hypothesis (\(BF_{10} = 6\)). Here, we use mildly informative priors with prior means of \(0\). The results show that the data do not contain enough information to counteract this mildly informative prior. Thus, conclusive evidence is obtained only under prior beliefs that the average effect is smaller than zero, but not under more agnostic prior beliefs.
\hypertarget{how-consistent-is-the-bayes-factor-across-multiple-studies}{%
\subsubsection{How consistent is the Bayes factor across multiple studies?}\label{how-consistent-is-the-bayes-factor-across-multiple-studies}}
The analyses performed above (section ``Visualize distribution of Bayes factors'') quantified the uncertainty in Bayes factors that is inherent in posterior-predicted data. That analysis showed that even if we hold the true data-generating process constant (i.e., we simulate from a given posterior), we observe variability in which inferences the simulated data support.
Here, we go one step further. Instead of relying on simulated replications of the same experiment, we take real data from empirical replications of the same type of experimental study. This allows us to investigate to what extent the results from Bayes factor analyses vary from study to study, even when the same experimental effect is investigated. In particular, we obtained the experimental data from a set of different studies that have one thing in common: they all investigate (inter alia) agreement attraction in ungrammatical sentences (Dillon, Mishler, Sloggett, \& Phillips, 2013; Lago et al., 2015; Wagers et al., 2009).
All of these data sets use similar experimental manipulations to study agreement attraction effects during sentence processing. They all investigate reading time on a target word, measured via self-paced reading or via eye-tracking. There are of course important differences between the studies: some investigate English, others Spanish; and the syntactic configurations differ across studies. However, they all investigate the same basic type of effect, agreement attraction in ungrammatical configurations, and are therefore trying to estimate the same basic effect. Importantly, agreement attraction is generally thought to be a robust empirical phenomenon (Phillips, Wagers, \& Lau, 2011); this therefore provides a test case for an empirically well-established effect in the cognitive sciences.
For all of these studies, we focus on the question of whether there is evidence for a difference in mean reading times between sentence types. Again, we use Bayesian modeling using \texttt{brms} for posterior estimation, we assume a zero-centered prior for the agreement attraction effect, and we compute Bayes factors using bridge sampling. We now perform a sensitivity analysis for each of the data sets separately.
\begin{figure}
{\centering \includegraphics{figure-plotSens-1}
}
\caption{Prior sensitivity analyses for different empirical data sets, each implementing a replication study of interference effects of number attraction in sentence comprehension. For each empirical study (indicated by different colors), the Bayes factor (BF10) of the alternative model against the null model is shown for different prior standard deviations for the size of the experimental effect.}\label{fig:plotSens}
\end{figure}
Figure~\ref{fig:plotSens} visualizes the results of this analysis. It shows that the evidence in support of agreement attraction effects is weak in every analyzed study. Only one study (Wagers et al., 2009, Experiment 4) shows at least moderate evidence (\(BF_{10} > 3\)) for an interference effect of small size (the prior standard deviation of \(0.040\) yields the largest Bayes factor). Several of the other studies also show some evidence for small agreement attraction effects, but this evidence is anecdotal at best, with maximal Bayes factors ranging between 1 and 3.
What the analysis consistently shows, however, is that (i) in all studies the estimated effect is in the expected direction, i.e., it is negative, and (ii) all studies provide evidence against a large agreement attraction effect. For the largest studied prior standard deviation of \(0.4\), the results show at least moderate evidence for the null model and against the alternative for 9 out of the 10 data sets, and two data sets actually provide strong evidence (\(BF_{10} < 1/10\)) against such a large prior effect size.
Moreover, the analysis reveals large variability in the results across data sets. While the analyses of some data sets suggest tentative evidence for the alternative model, supporting agreement attraction effects in sentence comprehension (e.g., Wagers et al., 2009, Experiment 4), other data sets show no evidence for agreement attraction effects at all (e.g., Wagers et al., 2009, Experiment 3, singular).
This analysis shows that Bayes factors used in a sensitivity analysis can quantify the evidence in favor of a range of different hypotheses. The evidence varies considerably with the data set, even though all data sets are experimental investigations of the same phenomenon of agreement attraction. However, the prior sensitivity analyses of the different data sets are consistent in that, for small expected effect sizes, none of them provides strong evidence either in support of the H1 or in support of the H0, whereas all data sets provide some evidence against large effect sizes.
That no data set provides strong evidence for the H1 might be quite surprising to the reader, given that agreement attraction effects are generally thought to be a robust empirical phenomenon. What we illustrate here is that individual studies may in fact carry quite limited information about a fairly standard experimental effect. Indeed, standard experimental designs and sample sizes may be insufficiently sensitive to accurately detect a typical cognitive effect such as agreement attraction. Evidence synthesis through
meta analyses will be needed to make clear inferences about the effect of agreement attraction (cf.~Nicenboim et al., 2020).
Meta analyses can be performed using Bayesian modeling, and again, Bayes factors can be used to quantify the evidence a meta analysis provides in favor of some hypothesis. We illustrate this point here. First, we run frequentist linear mixed effects models for each of the data sets (using the R function \texttt{lmer()}). From each analysis, we save the estimated agreement attraction effect and its associated standard error.
Based on these estimates, we perform a Bayes factor meta analysis:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{priorsM \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0, 1)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"b"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0, 0.5)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"sd"}\NormalTok{))}
\CommentTok{\# run alternative model H1 with different priors}
\NormalTok{lml\_brm1 \textless{}{-}}\StringTok{ }\KeywordTok{list}\NormalTok{()}
\ControlFlowTok{for}\NormalTok{ (j }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\KeywordTok{length}\NormalTok{(priSD)) \{}
\NormalTok{ priorsM[}\DecValTok{1}\NormalTok{,] \textless{}{-}}\StringTok{ }\KeywordTok{set\_prior}\NormalTok{(}\KeywordTok{paste0}\NormalTok{(}\StringTok{"normal(0, "}\NormalTok{,priSD[j],}\StringTok{")"}\NormalTok{),}\DataTypeTok{class =} \StringTok{"b"}\NormalTok{)}
\NormalTok{ m.brm1 \textless{}{-}}\StringTok{ }\KeywordTok{brm}\NormalTok{(b }\OperatorTok{|}\StringTok{ }\KeywordTok{se}\NormalTok{(SE) }\OperatorTok{\textasciitilde{}}\StringTok{ }\DecValTok{0}\OperatorTok{+}\NormalTok{Intercept }\OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{|}\NormalTok{expt),}
\DataTypeTok{data =}\NormalTok{ lmerResults,}
\DataTypeTok{prior =}\NormalTok{ priorsM, }\DataTypeTok{save\_pars =} \KeywordTok{save\_pars}\NormalTok{(}\DataTypeTok{all =} \OtherTok{TRUE}\NormalTok{),}
\DataTypeTok{iter =} \DecValTok{4000}\NormalTok{, }\DataTypeTok{control=}\KeywordTok{list}\NormalTok{(}\DataTypeTok{adapt\_delta=}\FloatTok{0.95}\NormalTok{))}
\NormalTok{ lml\_brm1[[j]] \textless{}{-}}\StringTok{ }\KeywordTok{bridge\_sampler}\NormalTok{(m.brm1, }\DataTypeTok{silent =} \OtherTok{TRUE}\NormalTok{)}
\NormalTok{\}}
\CommentTok{\# run null model H0}
\NormalTok{m.brm0 \textless{}{-}}\StringTok{ }\KeywordTok{brm}\NormalTok{(b }\OperatorTok{|}\StringTok{ }\KeywordTok{se}\NormalTok{(SE) }\OperatorTok{\textasciitilde{}}\StringTok{ }\DecValTok{0} \OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{|}\NormalTok{expt),}
\DataTypeTok{data =}\NormalTok{ lmerResults,}
\DataTypeTok{prior =}\NormalTok{ priorsM[}\OperatorTok{{-}}\DecValTok{1}\NormalTok{,], }\DataTypeTok{save\_pars =} \KeywordTok{save\_pars}\NormalTok{(}\DataTypeTok{all =} \OtherTok{TRUE}\NormalTok{),}
\DataTypeTok{iter =} \DecValTok{4000}\NormalTok{, }\DataTypeTok{control=}\KeywordTok{list}\NormalTok{(}\DataTypeTok{adapt\_delta=}\FloatTok{0.95}\NormalTok{))}
\NormalTok{lml\_brm0 \textless{}{-}}\StringTok{ }\KeywordTok{bridge\_sampler}\NormalTok{(m.brm0, }\DataTypeTok{silent =} \OtherTok{TRUE}\NormalTok{)}
\NormalTok{BF\_ln \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{()}
\ControlFlowTok{for}\NormalTok{ (j }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\KeywordTok{length}\NormalTok{(priSD))}
\NormalTok{ (BF\_ln[j] \textless{}{-}}\StringTok{ }\KeywordTok{bayes\_factor}\NormalTok{(lml\_brm1[[j]], lml\_brm0)}\OperatorTok{$}\NormalTok{bf)}
\NormalTok{metaResults \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(priSD,BF\_ln)}
\end{Highlighting}
\end{Shaded}
\begin{figure}
{\centering \includegraphics{figure-metaAnalysPlot-1}
}
\caption{Sensitivity analysis for the Bayesian meta analysis: the Bayes factor (BF10) as a function of the prior standard deviation provides extreme evidence in favor of the effect.}\label{fig:metaAnalysPlot}
\end{figure}
The results from this meta analysis, using a sensitivity analysis with the Bayes factor (see Figure~\ref{fig:metaAnalysPlot}), show that across studies there is extreme evidence (\(BF_{10} > 100\)) for the alternative hypothesis that agreement attraction effects exist. Thus, while the individual studies, each considered separately, do not provide much evidence for the effect, combining studies in a Bayesian meta analysis clearly shows that the effect exists.
Note that because we have the raw data available for all 10 studies, instead of running a Bayesian meta analysis based on frequentist test statistics, we can also run one large hierarchical Bayesian model that captures the data from all 10 studies at the same time and treats experiment as a random effect. The formula for this model could be: \texttt{rt\ \textasciitilde{}\ 1+x\ +\ (1+x\textbar{}subj)\ +\ (1+x\textbar{}item)\ +\ (1+x\textbar{}expt)}. We can then use this multilevel model to perform meta analytic Bayes factor analyses. However, due to the large amount of data involved in analyzing all data sets simultaneously, this analysis needs a very large number of MCMC draws to compute stable Bayes factors, and we therefore skip it here for brevity.
\hypertarget{PoorlyCalibratedDecisions}{%
\subsection{Poorly calibrated decisions}\label{PoorlyCalibratedDecisions}}
Above (in the section on Estimation Error), we have used SBC to calibrate the accuracy of Bayes factor computations. Interestingly, these simulated data sets can also be used to calibrate decisions based on the Bayesian evidence, which is what we turn to here.
\hypertarget{using-sbc-simulations-to-calibrate-decisions}{%
\subsubsection{Using SBC simulations to calibrate decisions}\label{using-sbc-simulations-to-calibrate-decisions}}
Above, we had used SBC to calibrate the continuous evidence obtained by computing Bayes factors. An alternative way to look at the results from the SBC analysis is to use thresholds on the Bayes factor to make discrete decisions, such as the decision to declare discovery. Frequently, such decisions are made by relying on conventions, such as declaring discovery when a Bayes factor is larger than \(10\). However, note that to make such decisions in a principled way, utility functions are needed that define the utility of each decision in light of the underlying truth. We illustrate this below (see section ``Principled decisions using utility functions'').
As a first approach, we use a threshold sometimes adopted in practice: a Bayes factor larger than 10 provides (strong) evidence for the alternative hypothesis (H1), a Bayes factor smaller than 1/10 provides (strong) evidence for the null hypothesis (H0), and a Bayes factor between 1/10 and 10 provides only ``moderate'' or ``anecdotal'' evidence for either hypothesis. For simplicity, we here use the thresholds of 10 and 1/10. We can examine which decisions the Bayes factors support by looking at the simulated data from the SBC (see the section ``Estimation error'' above, subsection ``Simulation-based calibration: Recovering the prior from the data''; i.e., the simulation using bridge sampling with many MCMC draws). We evaluate decisions based on whether the H0 or the H1 was actually used to simulate the artificial data.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pDatBF \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{Evidence\_H0=}\NormalTok{BF10\_SBC}\OperatorTok{\textless{}=}\DecValTok{1}\OperatorTok{/}\DecValTok{10}\NormalTok{, }
\DataTypeTok{Evidence\_H1=}\NormalTok{BF10\_SBC}\OperatorTok{\textgreater{}=}\DecValTok{10}\NormalTok{,}
\DataTypeTok{No\_Evidence=}\NormalTok{BF10\_SBC}\OperatorTok{\textgreater{}}\DecValTok{1}\OperatorTok{/}\DecValTok{10} \OperatorTok{\&}\StringTok{ }\NormalTok{BF10\_SBC}\OperatorTok{\textless{}}\DecValTok{10}\NormalTok{,}
\DataTypeTok{true\_hypothesis=}\NormalTok{true\_hypothesis)}
\NormalTok{plyr}\OperatorTok{::}\KeywordTok{ddply}\NormalTok{(pDatBF, }\StringTok{"true\_hypothesis"}\NormalTok{, plyr}\OperatorTok{::}\NormalTok{summarize, }
\DataTypeTok{Evidence\_H0=}\KeywordTok{round}\NormalTok{(}\KeywordTok{mean}\NormalTok{(Evidence\_H0,}\DataTypeTok{na.rm=}\OtherTok{TRUE}\NormalTok{)}\OperatorTok{*}\DecValTok{100}\NormalTok{),}
\DataTypeTok{No\_Evidence=}\KeywordTok{round}\NormalTok{(}\KeywordTok{mean}\NormalTok{(No\_Evidence,}\DataTypeTok{na.rm=}\OtherTok{TRUE}\NormalTok{)}\OperatorTok{*}\DecValTok{100}\NormalTok{), }
\DataTypeTok{Evidence\_H1=}\KeywordTok{round}\NormalTok{(}\KeywordTok{mean}\NormalTok{(Evidence\_H1,}\DataTypeTok{na.rm=}\OtherTok{TRUE}\NormalTok{)}\OperatorTok{*}\DecValTok{100}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## true_hypothesis Evidence_H0 No_Evidence Evidence_H1
## 1 H0 13 87 0
## 2 H1 0 52 48
\end{verbatim}
The results show that when the H0 was actually true in the data simulation, the Bayes factor provided no strong evidence (\(10 > BF_{10} > 1/10\)) in 87\% of cases, and provided evidence for the H0 in only 13\% of simulations. However, when the H0 was true, the rule never decided for the H1, reflecting a false discovery rate (FDR) of zero. Likewise, when the H1 was actually true in the data simulation, the Bayes factor provided no strong evidence in 52\% of simulations, and provided evidence for the H1 in 48\% of cases (i.e., the true discovery rate, TDR, was 48\%). However, it never provided evidence for the H0. These results show that for this example case of a small artificial data set with rather strong effect sizes, the Bayesian decision rule is often uncertain about the true hypothesis, but it does not decide for the false hypothesis.
An alternative Bayesian decision-rule that is sometimes used in practice is to choose the model that has the highest posterior probability (\texttt{chooseH1\ \textless{}-\ postModelProbsH1\ \textgreater{}\ 0.5}; note that this does not involve the possibility of remaining undecided).
\begin{verbatim}
## true_hypothesis No_Discovery Discovery
## 1 H0 0.91 0.09
## 2 H1 0.33 0.67
\end{verbatim}
For the present data set this shows a false discovery rate of 9\% and a true discovery rate of 67\%, again suggesting that the effect size and experimental design in the artificial data set were sufficient for detecting a true effect from the data with reasonable accuracy.
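Under equal prior probabilities for the two models, this rule has a direct relationship to the Bayes factor: the posterior probability of the H1 is \(BF_{10}/(1 + BF_{10})\), so choosing the model with the highest posterior probability amounts to deciding for the H1 whenever \(BF_{10} > 1\). A minimal sketch (assuming exactly two candidate models):

```python
def posterior_prob_h1(bf10, prior_h1=0.5):
    # Posterior model probability from the Bayes factor via Bayes' rule,
    # assuming exactly two candidate models (H0 and H1)
    prior_odds = prior_h1 / (1.0 - prior_h1)
    posterior_odds = bf10 * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# With equal prior odds, P(H1 | data) > 0.5 exactly when BF10 > 1
print(posterior_prob_h1(1.0))  # 0.5
print(posterior_prob_h1(3.0))  # 0.75
```

This makes clear why the highest-posterior-probability rule is a more liberal criterion than \(BF_{10} \geq 10\), consistent with its higher true (and false) discovery rates reported above.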
\hypertarget{bayesian-calibration-of-frequentist-analysis-methods-for-the-same-data}{%
\subsubsection{Bayesian calibration of frequentist analysis methods for the same data}\label{bayesian-calibration-of-frequentist-analysis-methods-for-the-same-data}}
It is possible to compare the results from the Bayesian calibration of Bayes factor analyses with corresponding analyses of frequentist tools on the same simulated data. In frequentist analyses, the H0 is rejected if the p-value, i.e., the probability of obtaining an effect at least as extreme as the one observed under the null hypothesis, is small. A finding is considered statistically significant if the p-value is smaller than some threshold, conventionally \(p < .05\) (see Benjamin et al., 2018, for an alternative threshold of \(p < .005\)); the H0 is not rejected otherwise, i.e., if \(p > .05\). Unlike Bayesian data analysis, frequentist null hypothesis significance testing thus relies on one hard cut-off value. Based on such a cut-off, when the H0 was used to generate artificial simulated data, we can compute the number of times that the H0 is falsely rejected, i.e., the \(\alpha\) error rate. Moreover, in the cases where the H1 was used to simulate the artificial data, we can compute how often the frequentist model correctly rejects the H0, reflecting statistical power. For this, we fit a frequentist linear mixed-effects model using the \texttt{lmer} function to each of the \(500\) simulated data sets.
We can now apply the \(p < .05\) cut-off, and compute how often the H0 is rejected when the H0 is true, and how often the H0 is rejected when the H1 is true (note that \(p > .05\) is not a good cut-off to accept the H0, but simply to fail to reject it):
\begin{verbatim}
## hypothesis_lmer
## H0 H1
## H0 97 3
## H1 41 59
\end{verbatim}
The results show that when the null hypothesis is actually true (first row of the table), the empirical \(\alpha\) error is estimated as 3\%, which is reasonably close to the expected value of 5\%. When the alternative hypothesis is true (second row of the table), statistical power is estimated to lie at 59\%. This result shows that the effect size in our artificial example is large enough to be detected in the small data set with intermediate (but not good) power.
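These rates follow directly from the counts in the table above (rows: true hypothesis; columns: \texttt{lmer} decision); a minimal sketch of the arithmetic:

```python
# Counts from the simulation table (rows: true hypothesis; columns: decision)
counts = {"H0": {"keep_H0": 97, "reject_H0": 3},
          "H1": {"keep_H0": 41, "reject_H0": 59}}

alpha = counts["H0"]["reject_H0"] / sum(counts["H0"].values())  # empirical Type I error rate
power = counts["H1"]["reject_H0"] / sum(counts["H1"].values())  # empirical power
print(alpha, power)  # 0.03 0.59
```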
Note that cut-offs different from \(p < .05\) could be used as well, such as \(p < .1\) or \(p < .005\). This would lead to different \(\alpha\) error rates (presumably close to \(0.1\) or to \(0.005\)), and also to other values for the statistical power, where lower values for \(\alpha\) will lead to lower power.
Importantly, this Bayesian calibration analysis is different from a standard frequentist simulation analysis of the \(\alpha\) and \(\beta\) errors. In the Bayesian analysis, we assume uncertainty about the exact effect size, since the priors are specified as distributions. In a frequentist analysis, one would often assume a single fixed effect size and compute \(\alpha\) and \(\beta\) errors for exactly this effect size, possibly without considering the uncertainty that exists about the precise effect size.
The results from the calibration analyses show similarities and differences in calibration between the Bayesian decision rule and corresponding frequentist null hypothesis significance testing. The Bayesian and frequentist decisions were similar in that both had a fairly good chance of detecting a true H1 from the data (for the Bayes factor decision: 48\%; for the posterior probability decision: 67\%; for the frequentist decision: 59\% of the true H1 cases were detected). However, the Bayes factor decision-rule distinguished between cases where there was no evidence and cases where support for the H0 could be observed. Thus, the Bayes factor decision-rule (\(BF_{10} < 1/10\)) provided correct support for the H0 in 13\% of the simulations. The frequentist analysis (and the decision rule based on posterior probabilities) did not distinguish between situations of ``no evidence'' versus ``evidence for the H0''.
\hypertarget{principled-decisions-using-utility-functions}{%
\subsubsection{Principled decisions using utility functions}\label{principled-decisions-using-utility-functions}}
The decision-rules that we have studied in the previous sections (based on Bayesian and frequentist analyses) relied on conventions about thresholds that would determine a decision, e.g., on whether to declare discovery. However, as noted before, these conventions provide no principled approach to making decisions in a Bayesian setup.
Therefore, utility functions can be defined that specify the value of the consequences arising from a given decision-rule. Utility refers to the value associated with possible choices under given truths. For example, there may be a negative value (negative utility, i.e., a loss) associated with acting on a false discovery claim. To decide on actions, utility functions are needed that specify the value of each possible action under all possible true states of the world (here, the true hypothesis). The threshold we used above, i.e., choosing the model with the highest posterior probability, is indeed used in much of machine learning, and it is optimal if all of the possible decision-truth combinations have equal utility. However, in practice, the different combinations may have different utilities, which necessitates the definition of utility functions.
Here we define an exemplary utility function to support discrete decision making. Let's assume that we decide between two options: claiming a discovery or not claiming a discovery. Let's further assume that a true discovery (TD) has utility \(U_{TD} = 10\) and that failing to claim a true discovery (i.e., a false rejection, FR) has utility \(U_{FR} = -5\). Moreover, we assume that a false discovery (FD) has utility (loss) \(U_{FD} = -50\), whereas correctly rejecting a discovery (true rejection, TR) has utility \(U_{TR} = 5\).
Note that these numbers seem rather arbitrary for the kind of basic research applications in the cognitive sciences that we have in mind. Thus, it is not clear how to choose these numbers appropriately. However, note that thresholds for p-values or for labeling results from Bayes factor analyses are also quite arbitrary, but fixed by convention.
Importantly, such threshold conventions define implicit utility functions, which may or may not be relevant to a given problem!
Utility analyses explicitly quantify the consequences of different possible actions. Specific utility functions could be agreed upon by research communities, and as a result of such agreement, utility analyses could be used by journal editors to decide upon publication based on more liberal/risky or more conservative strategies or publication categories. One problem with utility functions is that it is currently unclear what procedure could be used to quantify such utilities. That is, how can we quantify the utility of a false positive published finding, e.g., as measured by the number of citations it attracts? Future research in the cognitive sciences is thus needed to investigate how utilities can be quantified and linked to evidence, yielding procedures for their definition. Given that clear and good utility functions are hard to derive, an alternative approach is to not make any decisions, but rather to communicate continuous evidence.
Next, we can compute the average expected utility given a certain decision threshold. For this, we define an index matrix \(TA\) (``truth-action''), where each column indicates one combination of truth and actions. For example, column one would indicate all cases in the simulations where the H0 was true (i.e., the data was simulated based on the H0), and the decision procedure decided to claim discovery (i.e., false discovery). Column two would indicate cases where the H0 was true and no discovery was claimed (true rejection). Column three would indicate cases where the H1 was true and discovery was claimed (true discovery), and column four indicates cases where the H1 was true and no discovery was claimed (false rejection). Each row of the index matrix \(TA\) corresponds to one simulated data set from the SBC, and for each simulated data set the matrix indicates via a \(1\) which truth-action combination was realized in the SBC simulations with a given decision-rule, whereas all other truth-action combinations are marked with a \(0\). We here define the index matrix \(TA\) in R:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{postDat \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(true\_hypothesis, chooseH1)}
\KeywordTok{levels}\NormalTok{(postDat}\OperatorTok{$}\NormalTok{true\_hypothesis) \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"TrueH0"}\NormalTok{,}\StringTok{"TrueH1"}\NormalTok{)}
\NormalTok{postDat}\OperatorTok{$}\NormalTok{act \textless{}{-}}\StringTok{ }\KeywordTok{factor}\NormalTok{(postDat}\OperatorTok{$}\NormalTok{chooseH1,}
\DataTypeTok{levels=}\KeywordTok{c}\NormalTok{(}\OtherTok{FALSE}\NormalTok{,}\OtherTok{TRUE}\NormalTok{),}
\DataTypeTok{labels=}\KeywordTok{c}\NormalTok{(}\StringTok{"NoDisc"}\NormalTok{,}\StringTok{"Disc"}\NormalTok{))}
\NormalTok{postDat}\OperatorTok{$}\NormalTok{TA\_ \textless{}{-}}\StringTok{ }\KeywordTok{paste0}\NormalTok{(postDat}\OperatorTok{$}\NormalTok{true\_hypothesis,}\StringTok{"."}\NormalTok{,postDat}\OperatorTok{$}\NormalTok{act)}
\KeywordTok{table}\NormalTok{(postDat}\OperatorTok{$}\NormalTok{TA\_)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
##
## TrueH0.Disc TrueH0.NoDisc TrueH1.Disc TrueH1.NoDisc
## 23 222 170 83
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mm \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\KeywordTok{model.matrix}\NormalTok{(}\OperatorTok{\textasciitilde{}}\StringTok{ }\DecValTok{{-}1} \OperatorTok{+}\StringTok{ }\NormalTok{TA\_, }\DataTypeTok{data=}\NormalTok{postDat))}
\KeywordTok{str}\NormalTok{(mm)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## 'data.frame': 498 obs. of 4 variables:
## $ TA_TrueH0.Disc : num 0 0 0 0 0 0 1 0 0 0 ...
## $ TA_TrueH0.NoDisc: num 1 1 1 1 1 0 0 1 1 1 ...
## $ TA_TrueH1.Disc : num 0 0 0 0 0 1 0 0 0 0 ...
## $ TA_TrueH1.NoDisc: num 0 0 0 0 0 0 0 0 0 0 ...
\end{verbatim}
Moreover, we define a vector of utilities \(u\) for these four different possible truth-action combinations:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{utility \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\OperatorTok{{-}}\DecValTok{50}\NormalTok{,}\DecValTok{5}\NormalTok{,}\DecValTok{10}\NormalTok{,}\OperatorTok{{-}}\DecValTok{5}\NormalTok{)}
\KeywordTok{names}\NormalTok{(utility) \textless{}{-}}\StringTok{ }\KeywordTok{names}\NormalTok{(mm)}
\NormalTok{utility}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## TA_TrueH0.Disc TA_TrueH0.NoDisc TA_TrueH1.Disc TA_TrueH1.NoDisc
## -50 5 10 -5
\end{verbatim}
Based on these definitions, we can now compute the average expected utility (averaged across all simulated data sets) as:
\begin{equation}
average\;expected\;utility = \frac{1}{N} \sum_{n=1}^{N} \left( TA \, u \right)_{n}
\end{equation}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{(avUt \textless{}{-}}\StringTok{ }\KeywordTok{mean}\NormalTok{( }\KeywordTok{as.matrix}\NormalTok{(mm) }\OperatorTok{\%*\%}\StringTok{ }\KeywordTok{t}\NormalTok{(}\KeywordTok{t}\NormalTok{(utility)) ))}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 2.5
\end{verbatim}
In this example, the average expected utility for a decision-rule that chooses the hypothesis with the highest posterior probability is thus 2.50. Here, we used the posterior probabilities for decision making. An alternative approach is to use decision rules based on Bayes factors instead.
Let's use the threshold \(BF_{10} \geq 10\) for a discovery claim and again compute the average expected utility for this decision-rule:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{levels}\NormalTok{(postDat\_SBC1x}\OperatorTok{$}\NormalTok{true\_hypothesis) \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"TrueH0"}\NormalTok{,}\StringTok{"TrueH1"}\NormalTok{)}
\NormalTok{postDat\_SBC1x}\OperatorTok{$}\NormalTok{chooseH1\_BF \textless{}{-}}\StringTok{ }\NormalTok{postDat\_SBC1x}\OperatorTok{$}\NormalTok{BF10\_SBC }\OperatorTok{\textgreater{}=}\StringTok{ }\DecValTok{10}
\NormalTok{postDat\_SBC1x}\OperatorTok{$}\NormalTok{act \textless{}{-}}\StringTok{ }\KeywordTok{factor}\NormalTok{(postDat\_SBC1x}\OperatorTok{$}\NormalTok{chooseH1\_BF, }
\DataTypeTok{levels=}\KeywordTok{c}\NormalTok{(}\OtherTok{FALSE}\NormalTok{,}\OtherTok{TRUE}\NormalTok{), }\DataTypeTok{labels=}\KeywordTok{c}\NormalTok{(}\StringTok{"NoDisc"}\NormalTok{,}\StringTok{"Disc"}\NormalTok{))}
\NormalTok{postDat\_SBC1x}\OperatorTok{$}\NormalTok{TA\_ \textless{}{-}}\StringTok{ }\KeywordTok{paste0}\NormalTok{(postDat\_SBC1x}\OperatorTok{$}\NormalTok{true\_hypothesis,}\StringTok{"."}\NormalTok{,postDat\_SBC1x}\OperatorTok{$}\NormalTok{act)}
\NormalTok{postDat\_SBC1x}\OperatorTok{$}\NormalTok{TA\_ \textless{}{-}}\StringTok{ }\KeywordTok{factor}\NormalTok{(postDat\_SBC1x}\OperatorTok{$}\NormalTok{TA\_,}
\DataTypeTok{levels=}\KeywordTok{c}\NormalTok{(}\StringTok{"TrueH0.Disc"}\NormalTok{,}\StringTok{"TrueH0.NoDisc"}\NormalTok{,}\StringTok{"TrueH1.Disc"}\NormalTok{,}\StringTok{"TrueH1.NoDisc"}\NormalTok{))}
\KeywordTok{table}\NormalTok{(postDat\_SBC1x}\OperatorTok{$}\NormalTok{TA\_)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
##
## TrueH0.Disc TrueH0.NoDisc TrueH1.Disc TrueH1.NoDisc
## 0 245 121 132
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mm \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\KeywordTok{model.matrix}\NormalTok{(}\OperatorTok{\textasciitilde{}}\StringTok{ }\DecValTok{{-}1} \OperatorTok{+}\StringTok{ }\NormalTok{TA\_, }\DataTypeTok{data=}\NormalTok{postDat\_SBC1x))}
\NormalTok{(avUt \textless{}{-}}\StringTok{ }\KeywordTok{mean}\NormalTok{( }\KeywordTok{as.matrix}\NormalTok{(mm) }\OperatorTok{\%*\%}\StringTok{ }\KeywordTok{t}\NormalTok{(}\KeywordTok{t}\NormalTok{(utility)) ))}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 3.564257
\end{verbatim}
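The average utility just computed can be reconstructed directly from the outcome counts in the table above. The following minimal Python sketch does this; note that the utilities of 10 (true discovery) and -50 (false discovery) are taken from the caption of Fig.~\ref{fig:utility1}, while the values of 5 and -5 for the two no-discovery outcomes are assumptions made here, chosen so that the reported average of 3.564257 is reproduced.

```python
# Outcome counts for the BF >= 10 rule, copied from the table above.
counts = {"TrueH0.Disc": 0, "TrueH0.NoDisc": 245,
          "TrueH1.Disc": 121, "TrueH1.NoDisc": 132}
# Utilities: 10 (true discovery) and -50 (false discovery) are from the
# figure caption; 5 and -5 for the no-discovery outcomes are assumptions
# made here, chosen to reproduce the reported average of 3.564257.
utility = {"TrueH0.Disc": -50, "TrueH0.NoDisc": 5,
           "TrueH1.Disc": 10, "TrueH1.NoDisc": -5}

n = sum(counts.values())
av_ut = sum(counts[k] * utility[k] for k in counts) / n
print(round(av_ut, 6))  # 3.564257
```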
\begin{figure}
{\centering \includegraphics{figure-utility1-1}
}
\caption{Utility for claiming discovery as a function of the critical BF cut-off. The utility for a true discovery is set to 10 and the utility for a false discovery is set to -50. Note that with more (than 500) simulations, the line should become smooth.}\label{fig:utility1}
\end{figure}
Now, the expected average utility is 3.56 and thus higher than before.
We can vary the discovery (Bayes factor) threshold to select the threshold with the highest average utility.
The analysis shows (see Fig.~\ref{fig:utility1}) that decision rules using a low value for the Bayes factor threshold yield low average utility. The largest average utility is obtained for a Bayes factor threshold of 7; for thresholds larger than 7, average utility declines again. Based on this analysis, one could thus claim a discovery when the Bayes factor reaches a value of at least 7. The true discovery rate (TDR) and false discovery rate (FDR) for this threshold are:
\begin{verbatim}
## true_hypothesis No_Discovery Discovery
## 1 TrueH0 100 0
## 2 TrueH1 48 52
\end{verbatim}
This is very close to the results with a Bayes factor threshold of \(10\). Again, the false discovery rate (FDR) is \(0\). However, the true discovery rate (TDR) is now slightly larger, at 52\%.
Note that while an analysis of TDR and FDR provides a good first approach, more elaborate rates can be defined when a ``no decision'' option is possible.
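The threshold sweep described above can be sketched as follows. The Bayes factors and hypothesis labels below are synthetic placeholders (the real values come from the SBC simulations and are not reproduced here), and the utility values mirror the assumptions stated earlier, so the selected threshold is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the SBC output: which hypothesis generated
# each simulated data set, and the Bayes factor BF10 obtained for it.
true_h1 = rng.random(500) < 0.5
bf10 = np.where(true_h1, np.exp(rng.normal(2.0, 1.5, 500)),
                         np.exp(rng.normal(-0.5, 1.0, 500)))

def average_utility(threshold, u_td=10.0, u_fd=-50.0, u_tn=5.0, u_fn=-5.0):
    """Average utility of 'claim discovery iff BF10 >= threshold'."""
    disc = bf10 >= threshold
    util = np.where(true_h1, np.where(disc, u_td, u_fn),
                             np.where(disc, u_fd, u_tn))
    return util.mean()

# Scan candidate thresholds and pick the one with the highest utility.
thresholds = np.arange(1, 31)
utilities = [average_utility(t) for t in thresholds]
best = thresholds[int(np.argmax(utilities))]

# Error rates of the selected rule.
disc = bf10 >= best
tdr = disc[true_h1].mean()    # true discovery rate
fdr = disc[~true_h1].mean()   # false discovery rate
```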
\hypertarget{Example1}{%
\section{Example: Inhibitory and facilitatory interference effects}\label{Example1}}
In the following, we will illustrate the Bayesian workflow using a concrete example from the cognitive sciences.
We again investigate the example on inhibitory and facilitatory interference effects that we described above in the section on data variability. We have described the observational model above. Therefore, we start by describing how we obtained the priors from a meta analysis, and then execute further steps from the Bayes factor workflow.
\hypertarget{determine-priors-using-meta-analysis}{%
\subsubsection{Determine priors using meta analysis}\label{determine-priors-using-meta-analysis}}
Building good priors is a challenging task. Indeed, it is one of the crucial steps involved in a principled Bayesian workflow (Betancourt, 2020b; Gelman et al., 2020; O'Hagan et al., 2006; Schad et al., 2021).
One good way to obtain priors for Bayesian analyses, and specifically for Bayes factor analyses, is to use results from meta analyses on the subject. Here, we take the prior for the experimental manipulation of agreement attraction from a published meta analysis (Jäger et al., 2017).\footnote{Note that this meta analysis already includes the data that we want to make inference about; thus, this meta analysis estimate is not really the right estimate to use, since it involves using the data twice. We ignore this detail here because our goal is simply to illustrate the approach.} It is important to note here that meta analyses almost always have some limitations. First, the studies included can have important differences in implementation and different sources of bias, leading to quite a lot of between-study variability. Second, most of the studies on agreement attraction are severely underpowered (Jäger et al., 2017). This has the effect that biased estimates tend to get published, which naturally biases the meta analysis estimates. As long as one remains aware of these limitations, a meta analysis based estimate can be a reasonable starting point. Moreover, the problems inherent to meta analyses can be partially compensated for by widening priors. In other words, prior elicitation is more robust when a meta analysis is used only to establish a reasonable order of magnitude rather than a precise shape. Here, for the purpose of illustration, we nevertheless use the meta analysis to determine the precise effect size and its uncertainty.
The mean effect size (difference in reading time between the two experimental conditions) in the meta analysis is \(-22\) milliseconds (ms), with \(95\% \;CI = [-36, \; -9]\) (cf., Jäger et al., 2017, Table 4). This means that on average, the target word (i.e., the verb) in sentences such as (2) is read \(22\) milliseconds faster than in sentences such as (1). The size of the effect is measured on the millisecond scale, assuming a normal distribution of effect sizes across studies.
However, individual reading times usually do not follow a normal distribution. Instead, a better assumption about the distribution of reading times is a log-normal distribution. This is what we will assume in the \texttt{brms} model. Therefore, to use the prior from the meta analysis in the Bayesian analysis, we have to transform the prior values from the millisecond scale to log millisecond scale.
For this transformation, we assume an intercept in the log-normal distribution of \(\beta_0\) and a slope of \(\beta_1\) (assuming sum coding -1/+1). Based on this, we know that the difference in reading times between agreement attraction conditions is \(-22\) ms (i.e., \(\text{effSize} = -22\;ms\)), and that this difference can be computed based on \(\beta_0\) and \(\beta_1\) from the log-normal distribution. We can write:
\begin{equation}
-22\;ms = \text{effSize} = \exp(\beta_0 + \beta_1) - \exp(\beta_0 - \beta_1)
\end{equation}
What we want to know is the value of \(\beta_1\) that we can assume for our prior. The equation shows that the slope \(\beta_1\) in log-normally distributed data depends on the intercept term \(\beta_0\). Here, we assume an intercept in log-space of \(\beta_0 = 6.0\). This yields a plausible expectation for a mean reading time of \(\exp(6) = 403\) milliseconds (cf., Schad et al., 2021). Based on this intercept term, we compute the effect size (half the difference between the two experimental conditions; reflecting sum contrast coding, i.e., singular = -1 and plural = +1) (Schad, Vasishth, Hohenstein, \& Kliegl, 2020) in log space as the \(\beta_1\) parameter, which yields a value of \(\beta_1 = -0.027\)\footnote{Given an effect size of \texttt{effSize\ =\ -22} and an intercept term of \(\beta_0 = 6.0\), this is computed as: \(\beta_1 = \log \left ( \text{effSize}/\exp(\beta_0) + \sqrt{(\text{effSize}/\exp(\beta_0))^2+4} \right ) - \log(2)\)}. Adding and subtracting this parameter value to/from the intercept (\(\beta_0 = 6\)) in log space and computing the difference (i.e., \(\exp(6 - 0.027) - \exp(6 + 0.027)\)) gives the difference of \(-22\) ms between the two experimental conditions. This simply confirms that our transformation has been computed correctly.
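This back-transformation can be verified numerically. The sketch below assumes \(\beta_0 = 6.0\) as in the text; with the sum-coding convention used here, the slope comes out negative (magnitude 0.027), consistent with the prior mean of \(-0.03\) used below.

```python
import math

beta0 = 6.0        # intercept on the log scale (exp(6) is about 403 ms)
eff_size = -22.0   # meta-analytic difference in ms

# Solve exp(beta0 + beta1) - exp(beta0 - beta1) = eff_size for beta1.
# With x = exp(beta1) this is the quadratic x**2 - r*x - 1 = 0,
# where r = eff_size / exp(beta0).
r = eff_size / math.exp(beta0)
beta1 = math.log((r + math.sqrt(r**2 + 4)) / 2)
print(round(beta1, 3))  # -0.027

# Round-trip check: the implied difference recovers the -22 ms effect.
diff = math.exp(beta0 + beta1) - math.exp(beta0 - beta1)
```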
However, we also want to consider the uncertainty about the effect size, given as confidence intervals in the meta analysis. To this end, we compute values of 1 standard error below or above the mean (approximating the standard error based on a normal distribution as the range of the confidence interval divided by 4, i.e., \(((-9) - (-36))/4 = 6.75\) ms), and we compute the corresponding values in log space. Based on this, we take the standard deviation of the normal prior distribution in log space as the average distance between (a) the mean in log space and (b) the values of 1 standard error above/below the mean (measured in milliseconds, and transformed into log space). This provides our prior standard deviation (in log-space), informed by the meta analysis (Jäger et al., 2017).
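This calculation can likewise be sketched numerically. The rounding conventions of the original analysis are not fully specified, so the result here (about 0.008) is only approximately equal to the prior standard deviation of 0.009 used below.

```python
import math

def beta1_from_ms(eff_ms, beta0=6.0):
    """Slope on the log scale implied by a difference of eff_ms milliseconds."""
    r = eff_ms / math.exp(beta0)
    return math.log((r + math.sqrt(r**2 + 4)) / 2)

mean_ms = -22.0
se_ms = ((-9) - (-36)) / 4      # CI width / 4 = 6.75 ms

center = beta1_from_ms(mean_ms)
lo = beta1_from_ms(mean_ms - se_ms)   # 1 SE below the mean
hi = beta1_from_ms(mean_ms + se_ms)   # 1 SE above the mean

# Average distance from the center: the prior SD on the log scale.
prior_sd = (abs(lo - center) + abs(hi - center)) / 2
```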
Next, we set the priors for the analysis with \texttt{brms}. Based on the previous calculations, the prior for the experimental factor of interference effects is set to a normal distribution with mean = \(-0.03\) and standard deviation = \(0.009\). For the other model parameters, we use mildly informative priors based on our recent analysis of a principled Bayesian workflow (Schad et al., 2021).
\hypertarget{prior-predictive-checks}{%
\subsubsection{Prior predictive checks}\label{prior-predictive-checks}}
An additional and highly recommended way to obtain appropriate priors (Betancourt, 2020b; Gabry et al., 2019; Good, 1950; Schad et al., 2021) is to perform prior predictive checks. Here, the idea is to simulate data from the model and the priors, and then to analyze the simulated data using summary statistics. For example, it would be possible to compute the summary statistic of the difference in reading times between agreement attraction conditions (i.e., sentences (1) versus sentences (2)). The simulations would yield a distribution of differences. Arguably, this distribution of differences, that is, the data analysis of the simulated data, is much easier to judge for plausibility than the prior parameters specifying prior distributions. That is, we might find it easier to judge whether a difference in reading times between sentence types is plausible than to judge the parameters of the model.
Here, we implement exemplary prior predictive checks. We start with the prior \(\beta \sim Normal(-0.03,0.009)\), which was derived from the meta analysis. Moreover, we add prior assumptions for the intercept, the residual standard deviation, for random effects variances, and random effects correlations. For these additional assumptions see the following prior specification, specified using the brms package:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{priors \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(6, 0.5)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"Intercept"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal({-}0.03, 0.009)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"b"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0, 0.5)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"sd"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0, 1.0)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"sigma"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"lkj(2)"}\NormalTok{, }\DataTypeTok{class =} \StringTok{"cor"}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
We load the experimental design by Lago et al. (2015) and use this for our prior predictive checks. To this end, we first repeatedly simulate parameters from the priors. For this, we use the custom R function \texttt{SimFromPrior()} (taken from Schad et al., 2021).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{nsimPP \textless{}{-}}\StringTok{ }\DecValTok{500}
\NormalTok{beta0 \textless{}{-}}\StringTok{ }\NormalTok{beta1 \textless{}{-}}\StringTok{ }\NormalTok{sigma\_u0 \textless{}{-}}\StringTok{ }\NormalTok{sigma\_u1 \textless{}{-}}\StringTok{ }\NormalTok{sigma\_w0 \textless{}{-}}\StringTok{ }
\StringTok{ }\NormalTok{sigma\_w1 \textless{}{-}}\StringTok{ }\NormalTok{rho\_u \textless{}{-}}\StringTok{ }\NormalTok{rho\_w \textless{}{-}}\StringTok{ }\NormalTok{sigma \textless{}{-}}\StringTok{ }\OtherTok{NA}
\KeywordTok{set.seed}\NormalTok{(}\DecValTok{123}\NormalTok{)}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{nsimPP) \{}
\NormalTok{ beta0[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"Intercept"}\NormalTok{,}\DataTypeTok{coef=}\StringTok{""}\NormalTok{)}
\NormalTok{ beta1[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"b"}\NormalTok{)}
\NormalTok{ sigma\_u0[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{)}
\NormalTok{ sigma\_u1[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{)}
\NormalTok{ sigma\_w0[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{)}
\NormalTok{ sigma\_w1[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{)}
\NormalTok{ rho\_u[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"cor"}\NormalTok{)}
\NormalTok{ rho\_w[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"cor"}\NormalTok{)}
\NormalTok{ sigma[i] \textless{}{-}}\StringTok{ }\KeywordTok{SimFromPrior}\NormalTok{(priors,}\DataTypeTok{class=}\StringTok{"sigma"}\NormalTok{)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
Next, we use these simulated parameters to simulate artificial reading times.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{rtfakemat \textless{}{-}}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\OtherTok{NA}\NormalTok{,}\KeywordTok{nrow}\NormalTok{(lagoE1),nsimPP)}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{nsimPP)}
\NormalTok{ rtfakemat[,i] \textless{}{-}}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\KeywordTok{simLMM}\NormalTok{(}
\DataTypeTok{formula=}\OperatorTok{\textasciitilde{}}\StringTok{ }\NormalTok{x }\OperatorTok{+}\StringTok{ }\NormalTok{(x }\OperatorTok{|}\StringTok{ }\NormalTok{subj) }\OperatorTok{+}\StringTok{ }\NormalTok{(x }\OperatorTok{|}\StringTok{ }\NormalTok{item), }
\DataTypeTok{dat=}\NormalTok{lagoE1, }
\DataTypeTok{Fixef=}\KeywordTok{c}\NormalTok{(beta0[i], beta1[i]), }
\DataTypeTok{VC\_sd=}\KeywordTok{list}\NormalTok{(}\KeywordTok{c}\NormalTok{(sigma\_u0[i], sigma\_u1[i]), }
\KeywordTok{c}\NormalTok{(sigma\_w0[i], sigma\_w1[i]), }
\NormalTok{ sigma[i]),}
\DataTypeTok{CP=}\KeywordTok{c}\NormalTok{(rho\_u[i], rho\_w[i]), }\DataTypeTok{empirical=}\OtherTok{FALSE}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
\begin{figure}
{\centering \includegraphics{figure-xFigPPC-1}
}
\caption{Prior predictive checks for the agreement attraction reading time data by Lago et al. (2015). Distributions are over simulated hypothetical data. a) Multivariate summary statistic: Distribution of histograms of reading times. Shaded areas correspond to 10–90 percent, 20–80 percent, 30–70 percent, and 40–60 percent quantiles across histograms; the solid line (in the middle of the shaded area) indicates the median across hypothetical data-sets. The distribution of reading times shows reading times are mostly below 1000 ms, but also shows a long tail with longer reading times, possibly due to the log-normal distribution. b)-d) Scalar summary statistics. b) Distribution of average reading times shows that a priori reading times are in the expected range (between 0 and 2000 ms). c) Distribution of differences in reading times between agreement attraction conditions shows that effect sizes are reasonably expected in the range of +/- 500 ms. d) Distribution of standard deviations of reading times shows a reasonable range of variation, between 0 and 2500 ms, but some outliers of standard deviations larger than 10,000. a)+b)+d) Values > 10,000 are plotted as 10,000. c) Values < -2,000 are plotted as -2,000 and values > 2,000 are plotted as 2,000.}\label{fig:xFigPPC}
\end{figure}
Based on the simulated data, we compute several summary statistics (see Figure~\ref{fig:xFigPPC}).
The results from the prior predictive checks show that the simulated data are in a plausible range. Specifically, the distribution of histograms of reading times shows many values in the range between 0 and 1000 ms (Fig.~\ref{fig:xFigPPC}a). However, there is also a small number of implausibly long reading times. Moreover, we plot the distribution of mean reading times (Fig.~\ref{fig:xFigPPC}b), which shows reasonable reading times in the range of 0 to 2000 ms. Crucially, the difference in reading times between agreement attraction conditions shows a distribution (Fig.~\ref{fig:xFigPPC}c) where effect sizes are reasonably expected in the range of +/- 500 ms. Last, the distribution of standard deviations of reading times shows a reasonable range of variation, between 0 and 2500 ms (Fig.~\ref{fig:xFigPPC}d); however, there are again some outliers with very large standard deviations.
Overall, the variation is somewhat high, but the summary statistics of the a priori simulated data generally show results that are in a plausible range, which supports the use of the priors on which these simulations are built (Schad et al., 2021). We therefore proceed with using these priors to analyze the real experimentally observed reading time data.
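The ribbon plots of histograms used in these checks (Fig.~\ref{fig:xFigPPC}a) can be sketched as follows; the simulated reading times here are placeholders drawn from a simple log-normal, not the full mixed-effects simulation above.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_obs = 200, 300
# Placeholder prior predictive data: log-normal reading times.
fake_rts = np.exp(rng.normal(6.0, 0.5, size=(n_sims, n_obs)))

# Histogram each simulated data set on a common grid of bins.
bins = np.arange(0, 3001, 100)
hists = np.stack([np.histogram(rt, bins=bins)[0] for rt in fake_rts])

# Quantile bands of bin counts across data sets (the shaded ribbons),
# plus the median histogram (the solid line in the figure).
band_lo = np.quantile(hists, 0.10, axis=0)
band_hi = np.quantile(hists, 0.90, axis=0)
median = np.median(hists, axis=0)
```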
\hypertarget{fitting-the-brms-model}{%
\subsubsection{Fitting the brms model}\label{fitting-the-brms-model}}
The next step would be to fit the model to the empirical data and to estimate Bayes factors. Note that we have already performed this model fitting and Bayes factor estimation above. We here show the brms-code used to do the model fitting:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# run alternative model}
\NormalTok{m1\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{brm}\NormalTok{(rt }\OperatorTok{\textasciitilde{}}\StringTok{ }\DecValTok{1}\OperatorTok{+}\NormalTok{x }\OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{+}\NormalTok{x}\OperatorTok{|}\NormalTok{subj)}\OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{+}\NormalTok{x}\OperatorTok{|}\NormalTok{item),}
\DataTypeTok{data =}\NormalTok{ lagoE1,}
\DataTypeTok{family =} \KeywordTok{lognormal}\NormalTok{(),}
\DataTypeTok{prior =}\NormalTok{ priors,}
\DataTypeTok{warmup =} \DecValTok{2000}\NormalTok{,}
\DataTypeTok{iter =} \DecValTok{10000}\NormalTok{,}
\DataTypeTok{cores =} \DecValTok{4}\NormalTok{,}
\DataTypeTok{save\_pars =} \KeywordTok{save\_pars}\NormalTok{(}\DataTypeTok{all =} \OtherTok{TRUE}\NormalTok{),}
\DataTypeTok{control =} \KeywordTok{list}\NormalTok{(}\DataTypeTok{adapt\_delta =} \FloatTok{0.99}\NormalTok{,}
\DataTypeTok{max\_treedepth=}\DecValTok{15}\NormalTok{))}
\CommentTok{\# run null model}
\NormalTok{m0\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{brm}\NormalTok{(rt }\OperatorTok{\textasciitilde{}}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{+}\NormalTok{x}\OperatorTok{|}\NormalTok{subj)}\OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1}\OperatorTok{+}\NormalTok{x}\OperatorTok{|}\NormalTok{item),}
\DataTypeTok{data =}\NormalTok{ lagoE1,}
\DataTypeTok{family =} \KeywordTok{lognormal}\NormalTok{(),}
\DataTypeTok{prior =}\NormalTok{ priors[}\OperatorTok{{-}}\DecValTok{2}\NormalTok{,],}
\DataTypeTok{warmup =} \DecValTok{2000}\NormalTok{,}
\DataTypeTok{iter =} \DecValTok{10000}\NormalTok{,}
\DataTypeTok{cores =} \DecValTok{4}\NormalTok{,}
\DataTypeTok{save\_pars =} \KeywordTok{save\_pars}\NormalTok{(}\DataTypeTok{all =} \OtherTok{TRUE}\NormalTok{),}
\DataTypeTok{control =} \KeywordTok{list}\NormalTok{(}\DataTypeTok{adapt\_delta =} \FloatTok{0.99}\NormalTok{,}
\DataTypeTok{max\_treedepth=}\DecValTok{15}\NormalTok{))}
\CommentTok{\# run bridge sampler}
\NormalTok{lml\_m1\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{bridge\_sampler}\NormalTok{(m1\_lagoE1, }\DataTypeTok{silent =} \OtherTok{TRUE}\NormalTok{)}
\NormalTok{lml\_m0\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{bridge\_sampler}\NormalTok{(m0\_lagoE1, }\DataTypeTok{silent =} \OtherTok{TRUE}\NormalTok{)}
\CommentTok{\# compute Bayes factor}
\NormalTok{h\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{bayes\_factor}\NormalTok{(lml\_m1\_lagoE1, lml\_m0\_lagoE1)}
\end{Highlighting}
\end{Shaded}
We show the results from the posterior analyses:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{round}\NormalTok{(}\KeywordTok{fixef}\NormalTok{(m1\_lagoE1),}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Estimate Est.Error Q2.5 Q97.5
## Intercept 6.015 0.056 5.903 6.127
## x -0.031 0.008 -0.046 -0.015
\end{verbatim}
They show that for the fixed effect \texttt{x}, capturing the agreement attraction effect, the 95\% credible interval does not overlap with zero. This provides some indication that the effect may have the expected negative direction, reflecting shorter reading times in the plural condition. As mentioned earlier, however, this does not provide direct evidence for the hypothesis that the effect exists and is not zero, because we did not specify the null hypothesis of a zero effect explicitly. We can investigate this null hypothesis using the Bayes factor. We now look at the Bayes factor of the alternative model compared to the null model (\(BF_{10}\)):
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{h\_lagoE1}\OperatorTok{$}\NormalTok{bf}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 6.744471
\end{verbatim}
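Under the assumption of equal prior odds for the two models (the same 50/50 prior used in the SBC analyses below), this Bayes factor maps directly onto a posterior model probability:

```python
# Bayes factor reported above; equal prior odds P(H1) = P(H0) = 0.5 assumed.
bf10 = 6.744471
posterior_p_h1 = bf10 / (1 + bf10)
print(round(posterior_p_h1, 3))  # 0.871
```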
Next, we check the estimated model.
We first check whether the posterior fit was successful. We look at the \(\hat{R}\) statistic.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rhat}\NormalTok{(m1\_lagoE1)[}\StringTok{"b\_x"}\NormalTok{]}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## b_x
## 0.9999915
\end{verbatim}
It is very close to one, indicating no problem with convergence of the parameter mean. Moreover, the values \texttt{Bulk\_ESS\ =\ 54816} and \texttt{Tail\_ESS\ =\ 24746} for the factor \texttt{x} indicate a large enough effective sample size to estimate the effect of agreement attraction.
Also, the model showed no warnings for divergent transitions, indicating no problems in the posterior fit.
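For reference, a simplified version of the split-\(\hat{R}\) diagnostic can be sketched as follows. This omits the rank-normalization used in newer Stan releases and is meant only to illustrate the idea behind the statistic.

```python
import numpy as np

def split_rhat(draws):
    """Split R-hat for one parameter; draws has shape (chains, iterations)."""
    chains, n = draws.shape
    half = n // 2
    # Split each chain in half so within-chain trends also inflate R-hat.
    split = draws[:, : 2 * half].reshape(chains * 2, half)
    chain_means = split.mean(axis=1)
    within = split.var(axis=1, ddof=1).mean()   # W: mean within-chain variance
    between = half * chain_means.var(ddof=1)    # B: n * variance of chain means
    var_plus = (half - 1) / half * within + between / half
    return float(np.sqrt(var_plus / within))

# Well-mixed stationary chains should give R-hat very close to 1.
rng = np.random.default_rng(0)
good = rng.normal(size=(4, 1000))
print(round(split_rhat(good), 2))  # close to 1.0
```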
Next, we look at posterior densities and at trace plots for the intercept parameter and for the parameter estimating the critical effect of agreement attraction. Figure~\ref{fig:TracePlots1} shows that posterior samples look reasonable and do not suggest any problems with the posterior model fit.
\begin{figure}
{\centering \includegraphics{figure-TracePlots1-1}
}
\caption{Density plots (left panels) and trace plots (right panels) for the intercept parameter (upper panels) and for the effect of agreement attraction (labelled as x; lower panels).}\label{fig:TracePlots1}
\end{figure}
\hypertarget{posterior-predictive-checks}{%
\subsubsection{Posterior predictive checks}\label{posterior-predictive-checks}}
We next perform posterior predictive checks to see whether the model captures the structure in the data well. Posterior predictive simulations can be performed using the brms function \texttt{posterior\_predict()}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pred\_lagoE1 \textless{}{-}}\StringTok{ }\KeywordTok{posterior\_predict}\NormalTok{(m1\_lagoE1)}
\end{Highlighting}
\end{Shaded}
\begin{figure}
{\centering \includegraphics{figure-FigPost-1}
}
\caption{Posterior predictive checks. Distributions are over posterior predictive simulated data. a) Histograms of reading times. 10-90 percent, 20-80 percent, 30-70 percent, and 40-60 percent quantiles across histograms are shown as shaded areas; the median is shown as a dotted line and the observed data as a solid line. For illustration, values > 2000 are plotted as 2000; modeling was done on the original data. b) Average reading times. c) Differences in reading times between agreement attraction conditions. d) Standard deviations of reading times.}\label{fig:FigPost}
\end{figure}
The results displayed in Figure~\ref{fig:FigPost} show that the histogram of reading times was well captured by the brms model (Fig.~\ref{fig:FigPost}a). Likewise, the mean reading time across subjects (Fig.~\ref{fig:FigPost}b) and the average agreement attraction effect (Fig.~\ref{fig:FigPost}c) were well captured by the model. Only the distribution of standard deviations of reading times in the model was smaller than in the empirical data (Fig.~\ref{fig:FigPost}d), suggesting that the model had some difficulty capturing all the variability in the data. Overall, we are satisfied with the results from the posterior predictive checks and proceed further with our Bayes factor workflow.
\hypertarget{stability-of-bayes-factors-against-mcmc-draws}{%
\subsubsection{Stability of Bayes factors against MCMC draws}\label{stability-of-bayes-factors-against-mcmc-draws}}
To make sure that we are using enough MCMC draws to support stable Bayes factor estimates, we estimate the H0 and the H1 models four times on the empirical data.
The results from the four runs show that the Bayes factor estimates are all fairly close to each other:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{round}\NormalTok{(BF\_lagoE1,}\DecValTok{2}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 6.57 6.60 6.49 6.39
\end{verbatim}
Based on these rather similar results, we conclude that the Bayes factor estimation is stable enough for our current purposes.
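One simple way to quantify ``fairly close'' is to summarize the four estimates by their mean, standard deviation, and coefficient of variation:

```python
import statistics

# The four bridge-sampling estimates of BF10 reported above.
bf_runs = [6.57, 6.60, 6.49, 6.39]
mean_bf = statistics.mean(bf_runs)
sd_bf = statistics.stdev(bf_runs)
cv = sd_bf / mean_bf  # relative spread of the estimates

print(round(mean_bf, 2), round(sd_bf, 2), round(cv, 3))  # 6.51 0.09 0.014
```

A relative spread below a few percent is typically unproblematic for the qualitative conclusions drawn from a Bayes factor of this magnitude.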
We then go to the next step, which is to test the accuracy of Bayes factor estimates using SBC.
\hypertarget{simulation-based-calibration}{%
\subsubsection{Simulation-based calibration}\label{simulation-based-calibration}}
We perform 500 runs of SBC for the data set by Lago et al. (2015) based on the priors for the parameters from the meta analysis, and based on a priori model probabilities for the H0 and H1 of each 50\%.
\begin{verbatim}
## CI.2.5
## pH1 45.95 50.33 54.7
\end{verbatim}
The results show that the average posterior probability for the H1 (versus the H0) is \(50.33\%\), and thus very close to the prior value of \(50\%\); the 95\% confidence interval clearly includes \(50\%\).
This SBC analysis therefore shows that posterior inference on model probabilities based on Bayes factors is accurate and not biased, at least for the specific and simple case, model, and experimental design that we investigate here.
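The reported mean and 95\% confidence interval can be obtained from the individual SBC runs with a normal approximation. In the sketch below, \texttt{p\_h1} is a synthetic placeholder for the 500 posterior probabilities produced by the actual simulations.

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder for the 500 posterior probabilities of the H1 (in percent)
# computed across the SBC runs above.
p_h1 = rng.uniform(0, 100, 500)

m = p_h1.mean()
se = p_h1.std(ddof=1) / np.sqrt(len(p_h1))
ci = (m - 1.96 * se, m + 1.96 * se)  # normal-approximation 95% CI
```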
In addition to these SBC results, we can also investigate additional calibration questions of interest by looking at posterior model probabilities as a function of which prior hypothesis (model) was sampled in a given run. For each ``true'' hypothesis, we can now look at how much posterior probability mass is allocated to the two models by the Bayesian analysis. If the artificial data were simulated based on the H0, how high is the posterior probability for the H0? Is it higher than chance? And if so, by how much? Moreover, if the artificial data were simulated based on the H1, what is the posterior probability for the H1?
\begin{verbatim}
## true_hypothesis pH0 pH1
## 1 H0 54 46
## 2 H1 46 54
\end{verbatim}
The results in the first row show that if the H0 was used to simulate artificial data, then the Bayesian procedure allocated an average of 54\% posterior probability to the H0. Thus, the chance to support the null hypothesis correctly is not much better than 50/50, i.e., not much better than chance, in this data set and model. Note that these probabilities are close to the prior probabilities for the hypotheses, which were set to 50\% for the H0. Thus, averaged across data sets, the data do not provide a lot of information for changing the prior beliefs.
Moreover, the second row of the table shows that if the H1 was used to simulate the artificial data, then the posterior probability for the H1 was an average of 54\%. Thus, the alternative hypothesis is also not likely to be correctly supported much in the present setting. Taken together, this analysis shows that the data and the model on average provide hardly any information or evidence for the hypotheses of interest. Larger sample studies or more precise hypotheses may be needed to obtain better posterior information from the data.
Importantly, we can see that given the priors and the model that we have defined, the present experimental design does not seem to contain a lot of information for making inferences about the true hypothesis that has generated the simulated data. Thus, larger sample sizes or more informative priors may be needed to obtain clear results from the empirical data.
\hypertarget{adapting-prior-distributions}{%
\subsubsection{Adapting prior distributions}\label{adapting-prior-distributions}}
The previous simulations showed that the Bayesian models had major difficulties in determining the true model from the data. A major reason for this is that in the SBC, the a priori assumptions about model parameters (which were used to simulate data) were quite vague. For example, variance components indicated high levels of noise in the data simulations. As an alternative, we here use the posterior distributions based on the Lago et al. (2015) data (see fitted model object \texttt{m1\_lagoE1}) as the priors for the data simulation (i.e., posterior predictive analyses). Note that this is something that we would normally never do in practice. We cannot take the posterior from the analysis for some data set as a prior for the analysis of the same data. Here, we do this simply to illustrate what would happen in case we would have more informative priors.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{priorsLagoPost \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}
\CommentTok{\# fixed effects}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal( 6.02 ,0.0570 )"}\NormalTok{,}\DataTypeTok{class=}\StringTok{"Intercept"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal({-}0.0284,0.00754)"}\NormalTok{,}\DataTypeTok{class=}\StringTok{"b"}\NormalTok{),}
\CommentTok{\# SD parameters items}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0.04,0.02)"}\NormalTok{,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{,}\DataTypeTok{coef=}\StringTok{"Intercept"}\NormalTok{,}\DataTypeTok{group=}\StringTok{"item"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0.02,0.01)"}\NormalTok{,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{,}\DataTypeTok{coef=}\StringTok{"x"}\NormalTok{,}\DataTypeTok{group=}\StringTok{"item"}\NormalTok{),}
\CommentTok{\# SD parameters subjects}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0.31,0.04)"}\NormalTok{,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{,}\DataTypeTok{coef=}\StringTok{"Intercept"}\NormalTok{,}\DataTypeTok{group=}\StringTok{"subj"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0.03,0.02)"}\NormalTok{,}\DataTypeTok{class=}\StringTok{"sd"}\NormalTok{,}\DataTypeTok{coef=}\StringTok{"x"}\NormalTok{,}\DataTypeTok{group=}\StringTok{"subj"}\NormalTok{),}
\CommentTok{\# residual variance + correlation}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"normal(0.41,0.01)"}\NormalTok{, }\DataTypeTok{class=}\StringTok{"sigma"}\NormalTok{),}
\KeywordTok{set\_prior}\NormalTok{(}\StringTok{"lkj(2)"}\NormalTok{, }\DataTypeTok{class=}\StringTok{"cor"}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
Based on these priors, we again perform simulation based calibration of the Bayes factors.
Again, we can perform SBC by looking at the average posterior probabilities (``means'' and 95\% confidence intervals) for each of the models.
\begin{verbatim}
## CI.2.5
## pH1 47.33 51.71 56.08
\end{verbatim}
The results for the average posterior model probabilities show, as in the previous simulations, that the posterior probability for the H1 (versus the H0) was very close to the prior model probability of 50\%, and that the 95\% confidence interval clearly includes the prior probability of 50\%. This analysis thus supports our earlier result that bridge sampling yields unbiased estimates of Bayes factors (and posterior model probabilities), now for a set of more informative priors, again supporting the use of bridge sampling for the analysis of our multilevel (i.e., generalized linear mixed-effects) model in the present case study. However, note that this is also a relatively small ensemble (with \(n = 500\) simulations) and hence not a very strong test of bridge sampling.
Second, we can again perform additional calibration analyses by looking at the supported hypotheses given that either the H0 or the H1 was used to simulate the data.
\begin{verbatim}
## true_hypothesis pH0 pH1
## 1 H0 70 30
## 2 H1 28 72
\end{verbatim}
The results now show much better identifiability of the hypotheses. When the H0 was the true model in the simulations (first row of the matrix), it was correctly supported with an average posterior probability of 70\%, suggesting moderately higher evidence for the correct null model. At the same time, 30\% of the posterior probability was falsely allocated to the H1.
Moreover, when the H1 was the true model in the simulations (second row of the matrix), it was correctly supported with an average posterior probability of 72\%, again suggesting moderately higher evidence for the correct alternative model. Note that this identifiability is better than before, but may still be worse than often expected in frequentist analyses.
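A table of this kind can be built directly from the SBC output by averaging the posterior model probabilities within each true hypothesis. The sketch below simulates hypothetical posterior probabilities in place of the actual SBC results, so the numbers are illustrative only.

```r
set.seed(1)
n_sim <- 500
# True hypothesis used to simulate each data set
true_h <- rep(c("H0", "H1"), each = n_sim / 2)
# Hypothetical posterior probabilities for H1, with more mass near the true model
pH1 <- ifelse(true_h == "H0", rbeta(n_sim, 2, 4), rbeta(n_sim, 4, 2))
# Average posterior probabilities conditional on the true hypothesis
avg <- tapply(pH1, true_h, mean)
data.frame(true_hypothesis = names(avg),
           pH0 = round(100 * (1 - avg)),
           pH1 = round(100 * avg),
           row.names = NULL)
```

Each row averages over the data sets simulated under one hypothesis; good identifiability shows up as large diagonal entries (pH0 for the H0 row, pH1 for the H1 row).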
\hypertarget{data-variability}{%
\subsubsection{Data variability}\label{data-variability}}
Based on the good average performance of posterior model probabilities, we next look at how posterior model probabilities vary across posterior predictive data sets. That is, even if the average performance of inferences based on Bayes factors seems good, it is unclear how much this inference varies with variability in the data.
\begin{figure}
{\centering \includegraphics{figure-SBCvar2-1}
}
\caption{Histograms of posterior probabilities for the H1 across 500 simulated data sets, where either the H0 (left panel) or the H1 (right panel) was the true hypothesis in the data simulations.}\label{fig:SBCvar2}
\end{figure}
As shown in Figure~\ref{fig:SBCvar2}, the posterior probabilities varied widely across individual data sets. The lower panels of Figure~\ref{fig:SBCvar2} show the SBC using the adjusted (more informative) priors from the section above, which were based on the posterior from the Lago et al. (2015) study. In this analysis, when the true hypothesis was the H0 (left panels), posterior model probabilities tended to be smaller and closer to 0, whereas when the true hypothesis was the H1 (right panels), they tended to be larger and closer to 1. Thus, in the second set of posterior predictive simulations, there seemed to be some information contained in the data. However, there was still a large amount of variation, and posterior probabilities could be 0 or 1 for individual data sets irrespective of which hypothesis was true in the data.
Even less information was contained in the first set of simulations (upper panels of Fig.~\ref{fig:SBCvar2}), which reflected the prior predictive analysis based only on the meta-analysis. Here, the distributions of posterior model probabilities look very similar irrespective of which hypothesis was true (i.e., used to simulate the data). However, while on average there was not much information in the data, individual data sets could still seem to provide considerable support to either the H0 or the H1, with posterior probabilities approaching zero or one. Importantly, this support is an illusion here, since we know what the true hypothesis was in each of these cases. This shows that for the present experimental design, priors, and effect size, an individual data set may seem to provide evidence either for or against the effect, nearly independent of which hypothesis had really been true in the simulation of the data. Thus, these individual data sets are not sufficient to inform inference or even decisions based on them, and larger data sets and/or larger effect sizes might be needed for reliable inferences or decisions.
\hypertarget{using-sbc-simulations-to-calibrate-decisions-1}{%
\subsubsection{Using SBC simulations to calibrate decisions}\label{using-sbc-simulations-to-calibrate-decisions-1}}
The actions that we aim to perform are either to declare discovery or to not declare discovery. To study these decisions based on the Bayesian evidence, we first define utility functions. We use the same utilities that we had used above:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{utility \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\OperatorTok{{-}}\DecValTok{50}\NormalTok{,}\DecValTok{5}\NormalTok{,}\DecValTok{10}\NormalTok{,}\OperatorTok{{-}}\DecValTok{5}\NormalTok{)}
\KeywordTok{names}\NormalTok{(utility) \textless{}{-}}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"TrueH0.Disc"}\NormalTok{,}\StringTok{"TrueH0.NoDisc"}\NormalTok{,}\StringTok{"TrueH1.Disc"}\NormalTok{,}\StringTok{"TrueH1.NoDisc"}\NormalTok{)}
\NormalTok{utility}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## TrueH0.Disc TrueH0.NoDisc TrueH1.Disc TrueH1.NoDisc
## -50 5 10 -5
\end{verbatim}
Next, we investigate which Bayes factor threshold gives highest overall utility.
\begin{figure}
{\centering \includegraphics{figure-utility2-1}
}
\caption{Average expected utility as a function of the critical BF threshold: prior predictive analysis of the Lago et al. (2015) data.}\label{fig:utility2}
\end{figure}
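The threshold search behind this figure can be sketched as follows: for each candidate threshold, declare a discovery when \(BF_{10}\) exceeds it, then average the utilities over simulated data sets from both hypotheses. The BF draws below are hypothetical stand-ins for the SBC output, so the resulting optimum is illustrative only.

```r
# Utilities as defined above
utility <- c(TrueH0.Disc = -50, TrueH0.NoDisc = 5,
             TrueH1.Disc = 10,  TrueH1.NoDisc = -5)
set.seed(123)
bf_h0 <- exp(rnorm(500, -0.5, 1))   # hypothetical BF10 draws when H0 is true
bf_h1 <- exp(rnorm(500,  0.5, 1))   # hypothetical BF10 draws when H1 is true
thresholds <- c(1, 2, 3, 4, 5, 6, 10)
exp_util <- sapply(thresholds, function(thr) {
  # Utility realized under each true hypothesis, given the decision rule BF10 > thr
  u_h0 <- ifelse(bf_h0 > thr, utility["TrueH0.Disc"], utility["TrueH0.NoDisc"])
  u_h1 <- ifelse(bf_h1 > thr, utility["TrueH1.Disc"], utility["TrueH1.NoDisc"])
  mean(c(u_h0, u_h1))               # equal prior weight on H0 and H1
})
thresholds[which.max(exp_util)]     # threshold with highest average utility
```

The asymmetric utilities (a false discovery costs 50, a true discovery earns only 10) push the optimal threshold above 1, which is why the procedure selects a fairly conservative criterion.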
The results, displayed in Figure~\ref{fig:utility2}, show that the optimal Bayes factor threshold is \(5\). With this value, we go back to our empirical data. Above, we had performed analyses to check the stability of Bayes factor estimates against different MCMC draws. This analysis had revealed a Bayes factor of roughly \(BF_{10} = 6.5\):
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{round}\NormalTok{(BF\_lagoE1,}\DecValTok{2}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 6.57 6.60 6.49 6.39
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{mean}\NormalTok{(BF\_lagoE1)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 6.513411
\end{verbatim}
We can now compare the Bayes factor estimate of \(BF_{10} = 6.5\) to the threshold of \(5\). The result shows that the Bayes factor is larger than the threshold. Thus, we would declare discovery. That is, we would claim the discovery that facilitatory agreement attraction exists.
However, remember that our analyses of the data variability of Bayesian evidence showed that posterior evidence varied widely for this experimental design, these models, and these priors. This suggests that we should be quite cautious with our discovery claim, since there might be a good chance that it originated from random variation alone. We look at this more closely by studying TDR and FDR:
\begin{verbatim}
## true_hypothesis No_Discovery Discovery
## 1 TrueH0 0.97 0.03
## 2 TrueH1 0.96 0.04
\end{verbatim}
The results show an FDR of 3\% and a TDR of 4\%. Thus, even though the optimal threshold was exceeded, this analysis suggests that there is a \(3\%/(3\%+4\%)\times 100=43\%\) chance that the discovery claim in fact originates from false discovery. This demonstrates that the given experimental design was insufficient to constrain decisions based on our uninformative priors. We might thus consider improving the experimental design or the prior information that we take into account. We here repeat the analysis using the adjusted (more informative) prior information, which was based on the posterior from the Lago et al. (2015) study (see section ``Adapting prior distributions'' above).
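The conditional probability quoted above can be computed in a few lines of R (the rates are taken from the simulation output shown in the text; equal prior model probabilities are assumed, as in the simulations):

```r
fdr <- 0.03   # P(discovery | H0 true), from the simulation table
tdr <- 0.04   # P(discovery | H1 true), from the simulation table
# Probability that an observed discovery is in fact false,
# assuming H0 and H1 are a priori equally likely
p_false_given_disc <- fdr / (fdr + tdr)
round(100 * p_false_given_disc)   # -> 43
```

Because the true discovery rate (4\%) barely exceeds the false discovery rate (3\%), nearly half of all declared discoveries would be false under this design.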
We obtain a Bayes factor of 6.73.
We again use the same utilities for optimizing the Bayes factor threshold. Now, with the new more informative priors, we obtain an optimal Bayes factor threshold of \(4\). The empirical Bayes factor estimate of \(BF_{10} = 6.7\) thus again supports a discovery claim.
\begin{verbatim}
## true_hypothesis No_Discovery Discovery
## 1 True hypothesis: H0 0.95 0.05
## 2 True hypothesis: H1 0.55 0.45
\end{verbatim}
When we look at the FDR and TDR, we can see that the more informative prior beliefs again yield a low FDR of 5\%. However, the TDR is now much higher, at 45\%, providing more confidence that the discovery claim based on the empirical data might be a true discovery rather than a false discovery.
\hypertarget{discussion}{%
\section{Discussion}\label{discussion}}
We provided a discussion of the Bayesian quantification of evidence in favor of one of two alternative hypotheses and investigated the performance of Bayes factors with respect to prior assumptions, effective sample size, simulation-based calibration, data variability, and utility functions. We implemented competing hypotheses in hierarchical Bayesian models using the R-package \texttt{brms}, and tested these hypotheses against each other by estimating Bayes factors approximately using bridge sampling.
The results illustrate the strong dependence of Bayes factors on the prior assumptions, which calls for (1) the use of (weakly) informative priors (cf.\ Schad et al., 2021) and (2) prior sensitivity analyses that investigate Bayes factors under different prior assumptions about the size of the effect. Our results moreover illustrate challenges and limitations in the performance of Bayes factor analyses. First, we studied theoretical aspects of Bayes factor estimation. We showed that Bayes factors can be estimated in an unstable way, because a very large effective sample size (of the Hamiltonian Markov Chain Monte Carlo sampler) is needed to obtain stable results from the bridge sampling algorithm. Moreover, we noted that even if Bayes factor approximations are stable, it remains unclear whether they are accurate, i.e., whether they correspond to the true Bayes factor, because bridge sampling does not come with strong guarantees. We showed how simulation-based calibration can be used to investigate whether Bayes factor estimates are accurate for a given application case (i.e., model, priors, and experimental design). We moreover performed further ordinary Bayesian calibration analyses by testing average posterior model probabilities given that the data were simulated under the H0 or the H1. Our results illustrate how a robust effect in the cognitive sciences, i.e., the facilitatory agreement attraction effect, could hardly be detected with a standard experimental design and analysis, and how much stronger effect sizes (or much larger samples) are needed if the aim is to draw firm conclusions from single experimental studies.
Second, analyses of artificial and of real replication data showed that the results of Bayes factor analyses, just like p-values in frequentist analyses, can vary considerably across repeated replication attempts. However, for a range of different real empirical studies, the results also showed some robustness against drawing strong conclusions. This again suggests that some typical linguistic or psychological experiments may not be sufficiently powered to provide strong evidence for or against the small effect sizes that are realistic to expect and that are of theoretical interest. Importantly, using Bayesian statistics and the Bayes factor does not solve this problem, since low-powered studies will most likely yield inconclusive results in a Bayes factor analysis. Studies with larger sample sizes or stronger effect sizes may therefore be needed, for example by sharing data across labs, to overcome such situations of low power.
Third, we studied decision-making based on Bayesian analyses and saw how decisions can widely vary with the data. We discussed some heuristics for performing decisions, and illustrated how utility functions can be used to obtain optimal decisions.
Based on these challenges to the robustness of Bayes factors and the resulting inferences and decisions, we here formulate a Bayes factor workflow in which simulation-based calibration can be used to investigate these different issues for a given application case. This workflow then allows us to judge the extent to which inferences and decisions based on Bayes factors are robust for the given application case.
Taken together, Bayes factor analyses provide a useful tool for investigating the evidence for different hypotheses in the cognitive sciences. We showed how Bayes factors can misbehave due to estimation error, data variation, and poor Bayesian decision procedures, partly reflecting the fact that widespread limitations in experimental design also limit the conclusions that can be drawn from individual data sets. We propose a Bayes factor workflow to identify these potential problems for a given application case. When used with care and calibrated accordingly, Bayes factors provide a useful approach for quantifying evidence and supporting decision-making on discovery claims in the cognitive sciences.
\hypertarget{acknowledgements}{%
\section{Acknowledgements}\label{acknowledgements}}
This work was partly funded by the Deutsche Forschungsgemeinschaft (DFG), Sonderforschungsbereich 1287, Project Q (PIs: Shravan Vasishth and Ralf Engbert), project number 317633480 (Limits of Variability in Language).
\newpage
\hypertarget{references}{%
\section{References}\label{references}}
\begingroup
\setlength{\parindent}{-0.5in}
\setlength{\leftskip}{0.5in}
\hypertarget{refs}{}
\begin{cslreferences}
\leavevmode\hypertarget{ref-aitkin1991posterior}{}%
Aitkin, M. (1991). Posterior Bayes factors. \emph{Journal of the Royal Statistical Society: Series B (Methodological)}, \emph{53}(1), 111--128.
\leavevmode\hypertarget{ref-barr2013random}{}%
Barr, D. J., Levy, R., Scheepers, C., \& Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. \emph{Journal of Memory and Language}, \emph{68}(3), 255--278.
\leavevmode\hypertarget{ref-benjamin2018redefine}{}%
Benjamin, D. J., Berger, J. O., Johannesson, M., Nosek, B. A., Wagenmakers, E.-J., Berk, R., \ldots{} others. (2018). Redefine statistical significance. \emph{Nature Human Behaviour}, \emph{2}(1), 6--10.
\leavevmode\hypertarget{ref-bennettEfficientEstimationFree1976}{}%
Bennett, C. H. (1976). Efficient estimation of free energy differences from Monte Carlo data. \emph{Journal of Computational Physics}, \emph{22}(2), 245--268. \url{https://doi.org/10.1016/0021-9991(76)90078-4}
\leavevmode\hypertarget{ref-betancourt2016diagnosing}{}%
Betancourt, M. (2016). Diagnosing suboptimal cotangent disintegrations in Hamiltonian Monte Carlo. \emph{arXiv Preprint arXiv:1604.00695}.
\leavevmode\hypertarget{ref-betancourt2017conceptual}{}%
Betancourt, M. (2017). A conceptual introduction to Hamiltonian Monte Carlo. \emph{arXiv Preprint arXiv:1701.02434}.
\leavevmode\hypertarget{ref-betancourt2018calibrating}{}%
Betancourt, M. (2018). Calibrating model-based inferences and decisions. \emph{arXiv Preprint arXiv:1803.08393}.
\leavevmode\hypertarget{ref-Betancourt2019calibration}{}%
Betancourt, M. (2019). Probabilistic modeling and statistical inference. \emph{GitHub repository}. \url{https://github.com/betanalpha/knitr_case_studies/tree/master/modeling_and_inference}; commit: b474ec1a5a79347f7c9634376c866fe3294d657a.
\leavevmode\hypertarget{ref-Betancourt2020mcmc}{}%
Betancourt, M. (2020a). Markov chain Monte Carlo. \emph{GitHub repository}. \url{https://github.com/betanalpha/knitr_case_studies/tree/master/markov_chain_monte_carlo}; commit: b474ec1a5a79347f7c9634376c866fe3294d657a.
\leavevmode\hypertarget{ref-Betancourt2020workflow}{}%
Betancourt, M. (2020b). Towards a principled Bayesian workflow (RStan). \emph{GitHub repository}. \url{https://github.com/betanalpha/knitr_case_studies/tree/master/principled_bayesian_workflow}; commit: aeab31509b8e37ff05b0828f87a3018b1799b401.
\leavevmode\hypertarget{ref-bishop2006pattern}{}%
Bishop, C. M. (2006). \emph{Pattern recognition and machine learning}. New York: Springer.
\leavevmode\hypertarget{ref-Buerkner2017brms}{}%
Bürkner, P.-C. (2017). brms: An R package for Bayesian multilevel models using Stan. \emph{Journal of Statistical Software}, \emph{80}(1), 1--28. \url{https://doi.org/10.18637/jss.v080.i01}
\leavevmode\hypertarget{ref-Buerkner2018brms}{}%
Bürkner, P.-C. (2018). Advanced Bayesian multilevel modeling with the R package brms. \emph{The R Journal}, \emph{10}(1), 395--411. \url{https://doi.org/10.32614/RJ-2018-017}
\leavevmode\hypertarget{ref-carpenter2017stan}{}%
Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., \ldots{} Riddell, A. (2017). Stan: A probabilistic programming language. \emph{Journal of Statistical Software}, \emph{76}(1).
\leavevmode\hypertarget{ref-chow2017bayesian}{}%
Chow, S.-M., \& Hoijtink, H. (2017). Bayesian estimation and modeling: Editorial to the second special issue on Bayesian data analysis. \emph{Psychological Methods}, \emph{22}(4), 609--615.
\leavevmode\hypertarget{ref-cumming2014new}{}%
Cumming, G. (2014). The new statistics: Why and how. \emph{Psychological Science}, \emph{25}(1), 7--29.
\leavevmode\hypertarget{ref-DickeyLientz1970}{}%
Dickey, J. M., Lientz, B., \& others. (1970). The weighted likelihood ratio, sharp hypotheses about chances, the order of a Markov chain. \emph{The Annals of Mathematical Statistics}, \emph{41}(1), 214--226.
\leavevmode\hypertarget{ref-dillon2013contrasting}{}%
Dillon, B., Mishler, A., Sloggett, S., \& Phillips, C. (2013). Contrasting intrusion profiles for agreement and anaphora: Experimental and modeling evidence. \emph{Journal of Memory and Language}, \emph{69}(2), 85--103.
\leavevmode\hypertarget{ref-dillon2011structured}{}%
Dillon, B. W. (2011). \emph{Structured access in sentence comprehension} (PhD thesis).
\leavevmode\hypertarget{ref-van2021bayes}{}%
Doorn, J. van, Aust, F., Haaf, J. M., Stefan, A., \& Wagenmakers, E.-J. (2021). Bayes factors for mixed models. \emph{PsyArXiv Preprint PsyArXiv:Y65h8}.
\leavevmode\hypertarget{ref-engelmann2019effect}{}%
Engelmann, F., Jäger, L. A., \& Vasishth, S. (2019). The effect of prominence and cue association on retrieval processes: A computational account. \emph{Cognitive Science}, \emph{43}(12), e12800.
\leavevmode\hypertarget{ref-etz2018how}{}%
Etz, A., Gronau, Q. F., Dablander, F., Edelsbrunner, P. A., \& Baribault, B. (2018). How to become a Bayesian in eight easy steps: An annotated reading list. \emph{Psychonomic Bulletin \& Review}, \emph{25}(1), 219--234.
\leavevmode\hypertarget{ref-etz2018introduction}{}%
Etz, A., \& Vandekerckhove, J. (2018). Introduction to Bayesian inference for psychology. \emph{Psychonomic Bulletin \& Review}, \emph{25}(1), 5--34.
\leavevmode\hypertarget{ref-Freedman1984}{}%
Freedman, L. S., Lowe, D., \& Macaskill, P. (1984). Stopping rules for clinical trials incorporating clinical opinion. \emph{Biometrics}, \emph{40}(3), 575--586.
\leavevmode\hypertarget{ref-gabry2019visualization}{}%
Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., \& Gelman, A. (2019). Visualization in Bayesian workflow. \emph{Journal of the Royal Statistical Society: Series A (Statistics in Society)}, \emph{182}(2), 389--402.
\leavevmode\hypertarget{ref-ge2018turing}{}%
Ge, H., Xu, K., \& Ghahramani, Z. (2018). Turing: A language for flexible probabilistic inference. In \emph{International conference on artificial intelligence and statistics} (pp. 1682--1690). PMLR.
\leavevmode\hypertarget{ref-gelman2013bayesian}{}%
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., \& Rubin, D. B. (2013). \emph{Bayesian data analysis}. New York: CRC press.
\leavevmode\hypertarget{ref-Gelman14}{}%
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., \& Rubin, D. B. (2014). \emph{Bayesian data analysis} (Third). Boca Raton, FL: Chapman; Hall/CRC.
\leavevmode\hypertarget{ref-gelman2020bayesian}{}%
Gelman, A., Vehtari, A., Simpson, D., Margossian, C. C., Carpenter, B., Yao, Y., \ldots{} Modrák, M. (2020). Bayesian workflow. \emph{arXiv Preprint arXiv:2011.01808}.
\leavevmode\hypertarget{ref-Good:1950aa}{}%
Good, I. J. (1950). \emph{Probability and the weighing of evidence}. New York: Hafners.
\leavevmode\hypertarget{ref-gronau2017tutorial}{}%
Gronau, Q. F., Sarafoglou, A., Matzke, D., Ly, A., Boehm, U., Marsman, M., \ldots{} Steingroever, H. (2017a). A tutorial on bridge sampling. \emph{Journal of Mathematical Psychology}, \emph{81}, 80--97.
\leavevmode\hypertarget{ref-gronauTutorialBridgeSampling2017}{}%
Gronau, Q. F., Sarafoglou, A., Matzke, D., Ly, A., Boehm, U., Marsman, M., \ldots{} Steingroever, H. (2017b). A tutorial on bridge sampling. \emph{Journal of Mathematical Psychology}, \emph{81}, 80--97. \url{https://doi.org/10.1016/j.jmp.2017.09.005}
\leavevmode\hypertarget{ref-Gronau2020bridgesampling}{}%
Gronau, Q. F., Singmann, H., \& Wagenmakers, E.-J. (2020). bridgesampling: An R package for estimating normalizing constants. \emph{Journal of Statistical Software}, \emph{92}(10), 1--29. \url{https://doi.org/10.18637/jss.v092.i10}
\leavevmode\hypertarget{ref-grunwald2000model}{}%
Grünwald, P. (2000). Model selection based on minimum description length. \emph{Journal of Mathematical Psychology}, \emph{44}(1), 133--152.
\leavevmode\hypertarget{ref-hammerly2019grammaticality}{}%
Hammerly, C., Staub, A., \& Dillon, B. (2019). The grammaticality asymmetry in agreement attraction reflects response bias: Experimental and modeling evidence. \emph{Cognitive Psychology}, \emph{110}, 70--104.
\leavevmode\hypertarget{ref-heck2020review}{}%
Heck, D. W., Boehm, U., Böing-Messing, F., Bürkner, P.-C., Derks, K., Dienes, Z., \ldots{} others. (2020). A review of applications of the Bayes factor in psychological research.
\leavevmode\hypertarget{ref-hoijtink2017bayesian}{}%
Hoijtink, H., \& Chow, S.-M. (2017). Bayesian hypothesis testing: Editorial to the special issue on Bayesian data analysis. \emph{Psychological Methods}, \emph{22}(2), 211--216.
\leavevmode\hypertarget{ref-jager2017similarity}{}%
Jäger, L. A., Engelmann, F., \& Vasishth, S. (2017). Similarity-based interference in sentence comprehension: Literature review and Bayesian meta-analysis. \emph{Journal of Memory and Language}, \emph{94}, 316--339.
\leavevmode\hypertarget{ref-jager2020interference}{}%
Jäger, L. A., Mertzen, D., Van Dyke, J. A., \& Vasishth, S. (2020). Interference patterns in subject-verb agreement and reflexives revisited: A large-sample study. \emph{Journal of Memory and Language}, \emph{111}, 104063.
\leavevmode\hypertarget{ref-jeffreys1939theory}{}%
Jeffreys, H. (1939). \emph{Theory of probability}. Oxford: Clarendon Press.
\leavevmode\hypertarget{ref-kass1995bayes}{}%
Kass, R. E., \& Raftery, A. E. (1995). Bayes factors. \emph{Journal of the American Statistical Association}, \emph{90}(430), 773--795.
\leavevmode\hypertarget{ref-kruschke2011bayesian}{}%
Kruschke, J. K. (2011). Bayesian assessment of null values via parameter estimation and model comparison. \emph{Perspectives on Psychological Science}, \emph{6}(3), 299--312.
\leavevmode\hypertarget{ref-lago2015agreement}{}%
Lago, S., Shalom, D., Sigman, M., Lau, E. F., \& Phillips, C. (2015). Agreement processes in spanish comprehension. \emph{Journal of Memory and Language}, \emph{82}, 133--149.
\leavevmode\hypertarget{ref-lee2011cognitive}{}%
Lee, M. D. (2011). How cognitive modeling can benefit from hierarchical Bayesian models. \emph{Journal of Mathematical Psychology}, \emph{55}(1), 1--7.
\leavevmode\hypertarget{ref-Lewandowski:2009aa}{}%
Lewandowski, D., Kurowicka, D., \& Joe, H. (2009). Generating random correlation matrices based on vines and extended onion method. \emph{Journal of Multivariate Analysis}, \emph{100}(9), 1989--2001.
\leavevmode\hypertarget{ref-liu2008bayes}{}%
Liu, C. C., \& Aitkin, M. (2008). Bayes factors: Prior sensitivity and model generalizability. \emph{Journal of Mathematical Psychology}, \emph{52}(6), 362--375.
\leavevmode\hypertarget{ref-lunn2000winbugs}{}%
Lunn, D. J., Thomas, A., Best, N., \& Spiegelhalter, D. (2000). WinBUGS-a bayesian modelling framework: Concepts, structure, and extensibility. \emph{Statistics and Computing}, \emph{10}(4), 325--337.
\leavevmode\hypertarget{ref-matuschek2017balancing}{}%
Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., \& Bates, D. (2017). Balancing Type I error and power in linear mixed models. \emph{Journal of Memory and Language}, \emph{94}, 305--315.
\leavevmode\hypertarget{ref-mengSimulatingRatiosNormalizing1996}{}%
Meng, X.-l., \& Wong, W. H. (1996). Simulating ratios of normalizing constants via a simple identity: A theoretical exploration. \emph{Statistica Sinica}, 831--860.
\leavevmode\hypertarget{ref-Moreyetal2011}{}%
Morey, R., \& Rouder, J. (2011). Bayes factor approaches for testing interval null hypotheses. \emph{Psychological Methods}, \emph{16}, 406--419. \url{https://doi.org/10.1037/a0024377}
\leavevmode\hypertarget{ref-mulder2016editors}{}%
Mulder, J., \& Wagenmakers, E.-J. (2016). Editors' introduction to the special issue ``Bayes factors for testing hypotheses in psychological research: Practical relevance and new developments''. \emph{Journal of Mathematical Psychology}, \emph{72}, 1--5.
\leavevmode\hypertarget{ref-myung1997applying}{}%
Myung, I. J., \& Pitt, M. A. (1997). Applying Occam's razor in modeling cognition: A Bayesian approach. \emph{Psychonomic Bulletin \& Review}, \emph{4}(1), 79--95.
\leavevmode\hypertarget{ref-navarro2015learning}{}%
Navarro, D. (2015). \emph{Learning statistics with R}. \url{https://learningstatisticswithr.com}.
\leavevmode\hypertarget{ref-navarro2019between}{}%
Navarro, D. J. (2019). Between the devil and the deep blue sea: Tensions between scientific judgement and statistical model selection. \emph{Computational Brain \& Behavior}, \emph{2}(1), 28--34.
\leavevmode\hypertarget{ref-NicenboimVasishth2016}{}%
Nicenboim, B., \& Vasishth, S. (2016). Statistical methods for linguistic research: Foundational Ideas - Part II. \emph{Language and Linguistics Compass}, \emph{10}(11), 591--613. \url{https://doi.org/10.1111/lnc3.12207}
\leavevmode\hypertarget{ref-nicenboim2020words}{}%
Nicenboim, B., Vasishth, S., \& Rösler, F. (2020). Are words pre-activated probabilistically during sentence comprehension? Evidence from new data and a bayesian random-effects meta-analysis using publicly available data. \emph{Neuropsychologia}, 107427.
\leavevmode\hypertarget{ref-oelrich2020bayesian}{}%
Oelrich, O., Ding, S., Magnusson, M., Vehtari, A., \& Villani, M. (2020). When are Bayesian model probabilities overconfident? \emph{arXiv Preprint arXiv:2003.04026}.
\leavevmode\hypertarget{ref-ohagan2006uncertain}{}%
O'Hagan, A., Buck, C. E., Daneshkhah, A., Eiser, J. R., Garthwaite, P. H., Jenkinson, D. J., \ldots{} Rakow, T. (2006). \emph{Uncertain judgements: Eliciting experts' probabilities}. John Wiley \& Sons.
\leavevmode\hypertarget{ref-phillips2011grammatical}{}%
Phillips, C., Wagers, M. W., \& Lau, E. F. (2011). Grammatical illusions and selective fallibility in real-time language comprehension. In \emph{Experiments at the Interfaces} (Vol. 37, pp. 147--180). Emerald Bingley, UK.
\leavevmode\hypertarget{ref-plummer2003jags}{}%
Plummer, M., \& others. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In \emph{Proceedings of the 3rd international workshop on distributed statistical computing} (Vol. 124, pp. 1--10). Vienna, Austria.
\leavevmode\hypertarget{ref-Rabe2021designr}{}%
Rabe, M. M., Kliegl, R., \& Schad, D. J. (2021). \emph{Designr: Balanced factorial designs}. Retrieved from \url{https://maxrabe.com/designr}
\leavevmode\hypertarget{ref-robert2007bayesian}{}%
Robert, C. (2007). \emph{The Bayesian choice}. Springer-Verlag.
\leavevmode\hypertarget{ref-rouder2018bayesian}{}%
Rouder, J. N., Haaf, J. M., \& Vandekerckhove, J. (2018). Bayesian inference for psychology, part IV: Parameter estimation and Bayes factors. \emph{Psychonomic Bulletin \& Review}, \emph{25}(1), 102--113.
\leavevmode\hypertarget{ref-rouder2009bayesian}{}%
Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., \& Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. \emph{Psychonomic Bulletin \& Review}, \emph{16}(2), 225--237.
\leavevmode\hypertarget{ref-salvatier2016probabilistic}{}%
Salvatier, J., Wiecki, T. V., \& Fonnesbeck, C. (2016). Probabilistic programming in python using pymc3. \emph{PeerJ Computer Science}, \emph{2}, e55.
\leavevmode\hypertarget{ref-schad2020toward}{}%
Schad, D. J., Betancourt, M., \& Vasishth, S. (2021). Toward a principled Bayesian workflow in cognitive science. \emph{Psychological Methods}, \emph{26}(1), 103--126. \url{https://doi.org/10.1037/met0000275}
\leavevmode\hypertarget{ref-schad2018posterior}{}%
Schad, D. J., \& Vasishth, S. (2019). The posterior probability of a null hypothesis given a statistically significant result. \emph{arXiv Preprint arXiv:1901.06889}.
\leavevmode\hypertarget{ref-schad2020capitalize}{}%
Schad, D. J., Vasishth, S., Hohenstein, S., \& Kliegl, R. (2020). How to capitalize on a priori contrasts in linear (mixed) models: A tutorial. \emph{Journal of Memory and Language}, \emph{110}, 104038. \url{https://doi.org/https://doi.org/10.1016/j.jml.2019.104038}
\leavevmode\hypertarget{ref-schonbrodt2018bayes}{}%
Schönbrodt, F. D., \& Wagenmakers, E.-J. (2018). Bayes factor design analysis: Planning for compelling evidence. \emph{Psychonomic Bulletin \& Review}, \emph{25}(1), 128--142.
\leavevmode\hypertarget{ref-TQMP12-3-175}{}%
Sorensen, T., Hohenstein, S., \& Vasishth, S. (2016). Bayesian linear mixed models using Stan: A tutorial for psychologists, linguists, and cognitive scientists. \emph{Quantitative Methods for Psychology}, \emph{12}(3), 175--200. Retrieved from \url{http://www.ling.uni-potsdam.de/~vasishth/statistics/BayesLMMs.html}
\leavevmode\hypertarget{ref-spiegelhalter1994bayesian}{}%
Spiegelhalter, D. J., Freedman, L. S., \& Parmar, M. K. (1994). Bayesian approaches to randomized trials. \emph{Journal of the Royal Statistical Society. Series A (Statistics in Society)}, \emph{157}(3), 357--416.
\leavevmode\hypertarget{ref-taatgen2006modeling}{}%
\end{cslreferences}
\endgroup
\end{document}
\section{Introduction}
The hadronic states studied in hadron spectroscopy have been successfully explained as quark-antiquark states (mesons) or three-quark states (baryons). States whose quantum numbers can only be understood in terms of more complex quark structures had up to now not been observed. Such exotic states were already proposed on the basis of quark and bag models~\cite{RLJ} in the early days of QCD, with the hope that the discovery of such states would provide new insights into the dynamics of quark interactions. The Skyrme model~\cite{AM,MC} predicts new exotic states belonging to higher SU(3) representations. Using this model, Praszalowicz~\cite{MP} provided the
first estimate of the lightest exotic state of M$\sim$1530 MeV. The Chiral Quark Soliton model~\cite{MC} was used to obtain an exotic baryon of spin 1/2, isospin 0 and strangeness S=+1. In this approach~\cite{HW,DPP} the baryons are rotational states of the soliton nucleon in spin and
isospin space, and the lightest exotic baryon lies at the apex of an anti-decuplet with spin 1/2,
which corresponds to the third rotational excitation in a three flavour system. Treating the known
N(1710) resonance as a member of the antidecuplet, Diakonov, Petrov and Polyakov~\cite{DPP} derived a
mass of 1530 MeV and a width of less than 15 MeV for this exotic baryon, named the
$\Theta^{+}$. It corresponds to a $uudd\bar{s}$ configuration, and decays through the channels
$\Theta^{+}\rightarrow pK^0_S$ or $nK^+$.
Experimental evidence for an exotic baryon first came recently~\cite{TN} from the observation of a narrow resonance at $1540 \pm 10$ MeV in the $K^-$ missing mass spectrum for the $\gamma n \rightarrow K^+K^-n$ reaction on $^{12}C$. The decay mode corresponds to an $S=+1$ resonance, a signal which can be associated with an exotic pentaquark state with content $uudd\bar{s}$. Confirmation came quickly from a series of experiments, with the observation of sharp peaks~\cite{DIANA,CLAS,SAPHIR,AEA} in the $nK^{+}$ and $pK_S^{0}$ invariant mass spectra near 1540 MeV, in each case with a width limited by the experimental resolution. The failure to observe a corresponding $\Theta^{++}$ peak in the $pK^+$ invariant mass spectrum in some of these experiments was taken to suggest that the state is an isospin singlet.
The $\Theta^{+}$ has been observed both in fixed target experiments and in high energy experiments. In fixed target experiments the $\Theta^{+}$ can originate from the valence quarks, as opposed to high energy experiments where the $\Theta^{+}$ is produced in the fragmentation.
The baryon states at the bottom two vertices of the anti-decuplet are also exotic. Strong evidence in support of the baryon anti-decuplet comes from the reported observation of an exotic $S=-2$, $Q=-2$ baryon resonance in proton-proton collisions at $\sqrt{s}=17.2$ GeV at the CERN SPS~\cite{NA49}. A narrow peak at a mass of about 1862 MeV in the $\Xi^{-}\pi^{-}$ invariant mass spectrum is proposed as a candidate for the predicted exotic $\Xi^{--}_{\frac{3}{2}}$ baryon with $S=-2$, $I=\frac{3}{2}$ and a quark content of $dsds\bar{u}$. At the same mass,
a peak is observed that is a candidate for $\Xi^{0}_{\frac{3}{2}}$. The corresponding anti-baryon spectra show an enhancement at the same mass.
This paper presents the results of the searches for strange pentaquarks from HERMES~\cite{HERMES}, ZEUS~\cite{ZEUSTHETA,ZEUSTHETA2,ZEUSCASCADE} and HERA-B~\cite{HERAB}, and for an anti-charmed baryon decaying into $D^{*-}p$ from H1~\cite{H1} and ZEUS~\cite{ZEUSDSTAR}.
\section{Kinematics at HERA}
HERA is a positron and proton storage ring with four experiment halls.
Positrons with an energy of 27.5 GeV are collided on protons of 820-920 GeV in two interaction regions (H1 and ZEUS) yielding a centre-of-mass energy of $\sqrt{s} =300 - 318$ GeV. In a third interaction region (HERMES) the positrons interact on a deuteron target at $\sqrt{s} =7.2$ GeV. In the last interaction region (HERA-B) the protons interact on a carbon (C), titanium (Ti) or tungsten (W) target at $\sqrt{s} = 41.6$ GeV.
The kinematics of the lepton-nucleon scattering is described by three independent variables: the centre-of-mass energy $\sqrt{s}$, the four-momentum transfer squared $q^2 = -Q^2$ and either the scaling variable $x=Q^2/2P\cdot q$ or the inelasticity $y=P\cdot q/P\cdot k$, where $P$ and $k$ denote the four-momentum of the nucleon and the lepton, respectively. The $\gamma p$ centre-of-mass energy squared is given by $W_{\gamma p}^2 \approx y\cdot s - Q^2$.
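As a concrete illustration, the kinematic variables defined above can be computed directly from the beam and scattered-lepton four-momenta. The sketch below uses the HERA beam energies quoted in the text; the scattered-lepton four-vector in the usage example is invented purely for illustration.

```python
def dot(a, b):
    """Minkowski product with metric (+,-,-,-); vectors are (E, px, py, pz)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def dis_kinematics(k, k_prime, P):
    """DIS variables from the incoming lepton k, scattered lepton k_prime
    and incoming proton P (all in GeV):
    Q^2 = -q^2, x = Q^2/(2 P.q), y = (P.q)/(P.k), W^2 ~ y*s - Q^2."""
    q = [k[i] - k_prime[i] for i in range(4)]        # four-momentum transfer
    Q2 = -dot(q, q)
    x = Q2 / (2.0 * dot(P, q))
    y = dot(P, q) / dot(P, k)
    kP = [k[i] + P[i] for i in range(4)]
    s = dot(kP, kP)                                   # centre-of-mass energy squared
    W2 = y * s - Q2                                   # neglecting masses
    return Q2, x, y, W2
```

With $k=(27.5,0,0,-27.5)$ and $P=(920,0,0,920)$ this reproduces $\sqrt{s}\approx 318$ GeV, as quoted above.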
\begin{figure} [t]
\vspace*{-.2cm}
\unitlength 1cm
\begin{minipage}{6cm}
\vspace*{0.2cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=hermes1a.eps,height=6cm,clip=}}
\put(4.5,5.){\tiny(a)}
\end{picture}
\end{minipage}
\hfill
\begin{minipage}{6cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=DESY-04-056_4.eps,height=6cm,clip=}}
\put(4.3,5.){\tiny(b)}
\end{picture}
\end{minipage}
\caption{
Invariant $M_{p\pi^+\pi^-}$ mass distribution observed by (a) HERMES at $Q^2>1$ GeV$^2$ and (b) ZEUS at $Q^2>20$ GeV$^2$.
\label{plot_Theta}
}
\end{figure}
\section{Search for strange pentaquarks}
The production of $\Theta^{+}$ has been studied via its decay into $K^0_Sp$ in three different kinematic regions by HERMES~\cite{HERMES}, ZEUS~\cite{ZEUSTHETA,ZEUSTHETA2} and HERA-B~\cite{HERAB}. Furthermore the ZEUS~\cite{ZEUSCASCADE} and HERA-B~\cite{HERAB} collaborations have searched for the $S=-2$ baryons $\Xi^{--}_{\frac{3}{2}}$ and $\Xi^{0}_{\frac{3}{2}}$.
\subsection{Search for $\Theta^+$}
The HERMES collaboration has performed a $\Theta^+$ search using the decay chain $\Theta^+ \rightarrow p K_S^{0} \rightarrow p \pi^+ \pi^-$. The sample used is eN scattering data with a longitudinally polarized deuterium gas target having an integrated luminosity of 250 pb$^{-1}$. The yields were summed over two spin orientations. The kinematic region is restricted to $0.02 < x < 0.8$, $Q^2 > 1$ GeV$^2$ and $W>2$ GeV. Hadron identification is accomplished with the Ring-Imaging Cerenkov detector which provides good separation of pions, kaons and protons. Identified protons are combined with well identified secondary vertices with an invariant $M_{\pi^+\pi^-}$ mass within $2\sigma$ of the reconstructed $K_S^{0}$ mass. Possible $\Lambda$ contamination is suppressed by rejecting $K_S^{0}$ candidates with a $M_{p\pi^-}$ mass within $2\sigma$ of the nominal $\Lambda$ mass.
The resulting $M_{p{\pi}^+\pi^-}$ mass distribution is shown in Figure~\ref{plot_Theta}a. A narrow peak structure is observed around the $\Theta^+$ mass. No such structure is obtained when $\pi^+\pi^-$ mass combinations from the $K_S^{0}$ side bands are used instead. The data are also compared with expectations from the PYTHIA6 Monte Carlo simulation~\cite{PYT} (gray shaded histogram) and from the mixed-event model (fine-binned histogram) normalized to PYTHIA6. In the mixed-event model it is assumed that the 4-momenta of the $K_S^{0}$ and the proton are largely uncorrelated. The background can then be simulated by combining a kaon and a proton from different events which satisfy the same cuts as in the original analysis.
No peak structure is visible in the Monte Carlo or the mixed-event model expectations. The PDG~\cite{PDG} reports the possible existence of several $\Sigma$ bumps decaying to $N\overline{K}$ in this mass region. These are not included in the simulation and may account for the discrepancies.
The fit to the data shown in Fig.~\ref{plot_Theta}a (smooth solid line), which is based on the mixed-event model, the $\Sigma$ bumps (dotted curves) and a Gaussian (dashed curve) for a possible $\Theta^+$ signal, yields a good description. A peak of about 80 $\Theta^+$ events with a significance of $4.3\sigma$ is observed at a mass of $M=1527\pm 2.3 (stat.)$ MeV.
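The mixed-event background construction described above (pairing a kaon and a proton from different events that pass the same cuts) can be sketched as follows. The event records and four-vectors here are toy inputs, not HERMES data.

```python
import itertools

def inv_mass(a, b):
    """Invariant mass of two four-vectors (E, px, py, pz), in GeV."""
    E = a[0] + b[0]
    p2 = sum((a[i] + b[i]) ** 2 for i in (1, 2, 3))
    return (E * E - p2) ** 0.5

def mixed_event_masses(events):
    """events: list of dicts with 'kaons' and 'protons' four-vector lists.
    Each K0s candidate is paired only with protons from *different* events,
    which destroys any genuine K0s-p correlation while preserving the
    single-particle spectra -- the assumption behind the mixed-event model."""
    masses = []
    for i, j in itertools.permutations(range(len(events)), 2):
        for k in events[i]['kaons']:
            for p in events[j]['protons']:
                masses.append(inv_mass(k, p))
    return masses
```

Any peak surviving in the mixed-event distribution would signal an instrumental effect rather than a resonance, which is why the model is used as a background shape.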
A similar analysis has been performed by the ZEUS collaboration at higher energies using the $ep$ data taken in the years 1996-2000 with an integrated luminosity of $121$ pb$^{-1}$. The kinematic region is restricted to $Q^2>1$ GeV$^2$ and $0.01 \le y \le 0.95$.
The decay chain $\Theta^+ \rightarrow p K_S^{0} \rightarrow p \pi^+ \pi^-$ has also been used. About 866800 $K_S^{0}$ candidates are selected for $Q^2>1$ GeV$^2$. They are combined with proton candidates selected via the energy-loss measurement, $dE/dx$, in the central tracking chamber.
The $M_{p{\pi}^+\pi^-}$ mass distribution shows signs of structure below about 1600 MeV. For $Q^2>10$ GeV$^2$, a peak is seen in the mass distribution around 1520 MeV. In Figure~\ref{plot_Theta}b the $M_{p{\pi}^+\pi^-}$ distribution is shown for $Q^2>20$ GeV$^2$. The figure includes the Monte Carlo expectation from ARIADNE~\cite{ARI} scaled to the data for $M_{p{\pi}^+\pi^-} > 1650$~MeV. After scaling, ARIADNE does not describe the data at low masses, possibly due to the absence of the $\Sigma$ bumps in the simulation.
A fit to the data of a smooth background function and two Gaussians, also shown in Figure~\ref{plot_Theta}b, gives a signal of $221 \pm 48$ events at a mass of $1521.5\pm 1.5(stat.)$~MeV with a significance of $4.6 \sigma$. The Gaussian width of $6.1$ MeV is found to be consistent with the experimental resolution. The signal is observed at a similar rate for protons and for antiprotons, suggesting the existence of the anti-pentaquark $\Theta^-$.
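The significances quoted by the experiments come from full fits to the mass spectra. A much cruder version of the same idea, estimating the background under the peak from side bands and forming the naive $S/\sqrt{B}$, can be sketched as below; the histogram contents are made up for illustration.

```python
def peak_significance(counts, edges, window, sidebands):
    """Naive S/sqrt(B) estimate of a peak in a histogram.
    counts/edges: bin contents and bin edges; window: (lo, hi) signal region;
    sidebands: list of (lo, hi) regions used to estimate the background
    level per bin by a simple average (a crude stand-in for a fit)."""
    def bins_in(lo, hi):
        return [i for i in range(len(counts))
                if edges[i] >= lo and edges[i + 1] <= hi]
    sb_bins = [i for lo, hi in sidebands for i in bins_in(lo, hi)]
    bkg_per_bin = sum(counts[i] for i in sb_bins) / len(sb_bins)
    sig_bins = bins_in(*window)
    n_obs = sum(counts[i] for i in sig_bins)
    n_bkg = bkg_per_bin * len(sig_bins)
    signal = n_obs - n_bkg
    return signal, signal / n_bkg ** 0.5
```

A real analysis would instead fit a smooth background plus Gaussian signal and quote the fitted yield and its uncertainty, as done in the text.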
\begin{figure} [t]
\unitlength 1cm
\begin{minipage}{6cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=273_2.eps,height=6.0cm,clip=}}
\put(1.,1.){\tiny(a)}
\end{picture}
\end{minipage}
\hfill
\begin{minipage}{6cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=273_3.eps,height=6.0cm,clip=}}
\put(1.,1.){\tiny(b)}
\end{picture}
\end{minipage}
\caption{ZEUS measurements of
(a) cross sections for the 1522 MeV baryonic state decaying to $K^0_S p(\bar{p})$ integrated above $Q^2_{min}$, (b) the ratio of the cross section $\sigma(\Theta^+\rightarrow K^0_S p(\bar{p}))$ to the inclusive $\Lambda$ cross section, $\sigma(\Lambda+\bar{\Lambda})$, integrated above $Q^2_{min}$.
\label{plot_zeusThetacross}
}
\end{figure}
The ZEUS collaboration has also measured the cross section for the production of the $\Theta^+$ baryons and their antiparticles in the kinematic region $Q^2>20$ GeV$^2$, $0.04 \le y \le 0.95$, $p_T > 0.5$ GeV and $|\eta|< 1.5$,
\begin{displaymath}
\sigma (e^\pm p \rightarrow e^\pm \Theta^+ X \rightarrow e^\pm K_S^{0} p X) = 125 \pm 27 (stat.) ^{+36}_{-28} (syst.)\; pb.
\end{displaymath}
Figure~\ref{plot_zeusThetacross}a shows the cross section integrated above $Q^2_{\rm min}$. Figure~\ref{plot_zeusThetacross}b shows the ratio of this cross section to the $\Lambda$ cross section integrated above $Q^2_{\rm min}$, where the ratio, defined in the same kinematic region as above, is
\begin{displaymath}
ratio = \frac{\sigma (e^\pm p \rightarrow e^\pm \Theta^+ X \rightarrow e^\pm K_S^{0} p X)}{\sigma (e^\pm p \rightarrow e^\pm \Lambda X)}.
\end{displaymath}
For $Q^2_{\rm min}=20$ GeV$^2$ this ratio is $4.2\pm 0.9 (stat.) ^{+1.2}_{-0.9}(syst.) \%$ and, in the analyzed data, shows no significant dependence on $Q^2_{\rm min}$. Since the $\Theta^+$ has other decay channels in addition to $\Theta^+\rightarrow K_S^{0} p$, this ratio sets a lower limit on the production rate of the $\Theta^+$ relative to that of the $\Lambda$-baryon.
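Since the quoted ratio is a quotient of two measured cross sections, its uncertainty follows, for uncorrelated errors, from adding the relative uncertainties in quadrature. A minimal sketch; the numbers in the test are arbitrary placeholders, not the published values.

```python
def ratio_with_error(a, da, b, db):
    """Ratio a/b with uncorrelated uncertainties propagated in quadrature:
    dr/r = sqrt((da/a)^2 + (db/b)^2) (first-order error propagation)."""
    r = a / b
    return r, r * ((da / a) ** 2 + (db / b) ** 2) ** 0.5
```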
The HERA-B Collaboration has searched for $\Theta^{+}$ pentaquark candidates in proton-induced reactions on C, Ti and W targets at mid-rapidity and $\sqrt{s}=41.6$ GeV, in $2 \cdot 10^8$ inelastic events. No evidence for a narrow signal in the $K_S^{0} p$ spectrum is found. The 95\% confidence level (C.L.) upper limits on the inclusive production cross section times the branching fraction, ${\cal B}\, d\sigma /dy |_{y \sim 0}$, are 3.7 $\mu b/nucleon$ for a mass of 1530 MeV and 22 $\mu b/nucleon$ for a mass of 1540 MeV. The upper limit on the ratio $\Theta^+/\Lambda (1520)<2.7\%$ is significantly lower than model predictions based on the Gribov-Regge approach for describing the $\Theta^{+}$ production and its $\sqrt{s}$ dependence in pp collisions~\cite{GRI}.
HERMES and ZEUS have also searched for a $\Theta^{++}$ signal via its possible decay $\Theta^{++} \rightarrow p K^+$. Figure~\ref{plot_Thetaplusplus} shows the $M_{pK^-}$ and $M_{pK^+}$ mass spectra observed by HERMES (Fig.~\ref{plot_Thetaplusplus}a) and ZEUS (Fig.~\ref{plot_Thetaplusplus}b). No peak structure is observed in the $M_{pK^+}$ spectrum, while in the $M_{pK^-}$ spectrum the well established resonance $\Lambda (1520) \rightarrow pK^-$ is clearly seen. As no signals are found in the $\Theta^+$ mass range, this suggests that the $\Theta^+$ could be isoscalar.
\begin{figure} [t]
\unitlength 1cm
\begin{minipage}{6cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=avetik.penta_q-3.eps,height=6.0cm,clip=}}
\put(5.2,5.5){\tiny(a)}
\end{picture}
\end{minipage}
\hfill
\begin{minipage}{6cm}
\vspace*{-1.5cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=penta_support_2.eps,height=4.5cm,clip=}}
\put(5.2,3.5){\tiny(b)}
\end{picture}
\end{minipage}
\caption{
Invariant mass distribution $M_{pK^-}$ (top) and $M_{pK^+}$ (bottom) observed by (a) HERMES and (b) ZEUS.
\label{plot_Thetaplusplus}
}
\end{figure}
\begin{figure} [t]
\unitlength 1cm
\begin{minipage}{6cm}
\vspace*{-1.cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=293_3.eps,height=4.5cm,clip=}}
\put(5.4,3.6){\tiny(a)}
\end{picture}
\end{minipage}
\hfill
\begin{minipage}{6cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=xipi_jul13e.eps,height=5.8cm,clip=}}
\put(5.2,5.3){\tiny(b)}
\end{picture}
\end{minipage}
\caption{
Invariant mass distribution $M_{\Xi\pi}$ observed by (a) ZEUS for $Q^2>1$ GeV$^2$ and for all four charge combinations combined (b) HERA-B in p+C collisions separated in different charge combinations.
\label{plot_cascade}
}
\end{figure}
\subsection{Search for $\Xi^{--}_{\frac{3}{2}}$ and $\Xi^{0}_{\frac{3}{2}}$}
ZEUS has performed an analysis in the channel $\Xi^-\pi^\pm$ to search for the strange pentaquark $\Xi^{--}$ and its neutral partner. The decay chain $\Xi^{--} \rightarrow \Xi^-\pi^- \rightarrow \Lambda\pi^- \pi^-$ has been considered. $\Lambda$ baryons were identified by the charged-decay mode, $\Lambda \rightarrow p \pi^-$ , using pairs of tracks from secondary vertices.
These are then combined with another pion from the primary vertex. Figure~\ref{plot_cascade}a shows the $M_{\Xi\pi}$ mass distribution for all possible $\Xi\pi$ charge combinations for $Q^2>1$ GeV$^2$. While the $\Xi^0(1530)$ is clearly visible, no signal is observed around 1860 MeV, where the NA49 collaboration~\cite{NA49} reported a peak. Even when restricting to $Q^2>20$ GeV$^2$, where the $\Theta^+$ signal was best seen by ZEUS, no signal is observed.
HERA-B has also searched for the strange pentaquark $\Xi^{--}$ in the $\Xi^-\pi^\pm$ channels in proton-induced reactions on C, Ti and W targets.
In the analysis, clean signals for $\Xi^-\rightarrow\Lambda\pi^-$ are obtained by requiring the $\Lambda\pi^-$ vertex to be at least 2.5 cm downstream of the target and the event to exhibit a cascade topology: a further downstream $\Lambda$ vertex and a $\Xi^-$ pointing back to the target wire (impact parameter $b < 1$ mm). The pion candidates were required to originate from the primary vertex. Figure~\ref{plot_cascade}b shows the $M_{\Xi\pi}$ mass distribution for the C target, separated into the different charge combinations. In the neutral channels the $\Xi^{0}(1530)$ resonance is seen with a signal of $\sim 10^3$ events. The observed width ($\sim 9.5$ MeV) agrees with the MC simulation. None of the mass spectra of Fig.~\ref{plot_cascade}b show evidence for the narrow resonance reported by the NA49 collaboration.
\section{Search for a narrow charmed baryonic state}
The production of a charmed pentaquark $\Theta_c$ has been studied via its decay into $D^* p$ by H1~\cite{H1} and ZEUS~\cite{ZEUSDSTAR}.
The analysis of H1 is based on the DIS data taken in the years 1996-2000 with a luminosity of $75$ pb$^{-1}$ in the kinematic region $1\le Q^2 \le 100$ GeV$^2$ and $0.05 \le y \le 0.7$. The $D^{*\pm}$ charmed meson has been reconstructed via its decay chain $D^{*+} \rightarrow D^0 \pi_S^{+} \rightarrow (K^- \pi^+) \pi_S^{+}$. Around 3400 $D^*$ candidates are selected. $D^*$ candidates having a mass difference $\Delta M_{D^*} = m(K\pi\pi_S) - m(K\pi)$ within 2.5 MeV around the nominal $M(D^*) -M(D^0)$ mass difference are combined with proton candidates selected via $dE/dx$.
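The $\Delta M$ selection described above can be sketched as follows. The four-vectors in the test are toy particles at rest (so only the energies matter), and the nominal mass difference of about $145.4$ MeV is the PDG value of $M(D^*)-M(D^0)$; the H1 analysis applies a $\pm 2.5$ MeV window around it.

```python
def mass(*vs):
    """Invariant mass of a set of four-vectors (E, px, py, pz), in GeV."""
    E = sum(v[0] for v in vs)
    p2 = sum(sum(v[i] for v in vs) ** 2 for i in (1, 2, 3))
    return (E * E - p2) ** 0.5

def passes_delta_m(kaon, pion, slow_pion, dm_nominal=0.1454, window=0.0025):
    """Keep a D* candidate if m(K pi pi_s) - m(K pi) lies within +-window
    of the nominal M(D*) - M(D0) mass difference. The Delta M variable is
    used because detector resolution largely cancels in the difference."""
    dm = mass(kaon, pion, slow_pion) - mass(kaon, pion)
    return abs(dm - dm_nominal) < window
```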
The resulting $M_{D^{*-}p}$ distribution in Fig.~\ref{plot_h1Thetac}a shows a clear narrow peak close to the threshold. The data are compared with the absolute expectations from the $D^*$ Monte Carlo (dark histogram) and the non-charmed induced background (light histogram), estimated from the same-charge $K\pi$ combinations. No enhancement is seen in any of the background samples. The sum of the two samples reproduces the data well except for the signal region. No peak is observed when selecting either $K\pi\pi_S$ combinations from the $D^*$ side bands, or $K\pi$ combinations with masses above the region where charm contributes, or pions instead of protons. The signal is observed both in the $D^{*-}p$ and in the $D^{*+}\overline{p}$ samples with compatible mass, width and rate. No significant enhancement is observed in the like-sign $D^* p$ sample.
\begin{figure} [t]
\unitlength 1cm
\vspace*{-1.5cm}
\begin{minipage}{6cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=h1_fig2a.eps,height=4.5cm,clip=}}
\put(0.8,4.){\tiny(a)}
\end{picture}
\end{minipage}
\hfill
\begin{minipage}{6cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=h1_fig5a.eps,height=4.5cm,clip=}}
\put(0.8,4.){\tiny(b)}
\end{picture}
\end{minipage}
\vfill
\vspace*{-1.5cm}
\begin{minipage}{6cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=h1_fig7.eps,height=4.35cm,clip=}}
\put(0.8,4.){\tiny(c)}
\end{picture}
\end{minipage}
\hfill
\begin{minipage}{6cm}
\begin{picture}(6,6)
\end{picture}
\end{minipage}
\caption{
(a) Invariant mass distribution $M_{D^*p}$ from H1 for $Q^2>1$ GeV$^2$. (b) Momentum distribution of charged particles yielding $M_{D^*p}$ values falling in the signal and side band regions. (c) $M_{D^*p}$ distribution compared to the fit results with two hypotheses: signal plus background (solid line) and background only (dashed line).
\label{plot_h1Thetac}
}
\end{figure}
In order to check that the signal is due to the decay of a new particle the momentum distribution of the proton candidates without $dE/dx$ cuts has been studied. A background fluctuation should show similar distribution in the signal region and in the side bands. For a real decay, a harder spectrum is expected due to the Lorentz boost of the decaying particle. Fig.~\ref{plot_h1Thetac}b reveals a significantly harder spectrum in the $M(D^*p)$ signal region compared to the side bands.
The log-likelihood fit to the $M(D^*p)$ distribution is shown in Fig.~\ref{plot_h1Thetac}c. The background is parametrised by a power law while a Gaussian is used for the signal. A signal of 51 events is observed with a mass of $3099\pm 3 (stat.) \pm 5 (syst.)$ MeV and a width of $12\pm 3 (stat.)$ MeV, consistent with the experimental resolution. The background fluctuation probability has been estimated to be less than $4 \cdot 10^{-8}$.
\begin{figure} [t]
\unitlength 1cm
\vspace*{0.5cm}
\begin{minipage}{6cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=DESY-04-164_7-1.eps,height=7cm,clip=}}
\end{picture}
\end{minipage}
\hfill
\begin{minipage}{6cm}
\hspace*{-1.cm}
\begin{picture}(6,6)
\put(0.,0.){\psfig{file=DESY-04-164_6-2.eps,height=7cm,clip=}}
\end{picture}
\end{minipage}
\caption{
Invariant mass distribution $M_{D^*p}$ from ZEUS for charmed pentaquark candidates selected in the full data sample (a$\&$b) and DIS with $Q^2>1$ GeV$^2$ (c$\&$d), for the $D^*$ decay channels $D^{*\pm}\rightarrow K\pi\pi_S$ (a$\&$c) and $D^{*\pm}\rightarrow K\pi\pi\pi_S$ (b$\&$d).
The solid curves are fits to the background function outside the signal window. The shaded histograms show the Monte Carlo $\Theta_C$ signals, normalised to $1\%$ of the number of reconstructed $D^*$ mesons, and added to the fit interpolation (dashed curve) in the signal window.
Invariant mass distribution $M_{D^*p}$ obtained using H1 selection criteria in (e) DIS with $Q^2>1$ GeV$^2$ and (f) photoproduction with $Q^2<1$ GeV$^2$.
\label{plot_zeusThetac}
}
\end{figure}
A similar search has been performed by ZEUS in both the photoproduction and DIS regimes. Data from the years 1995-2000 with an integrated luminosity of 126 pb$^{-1}$ have been analyzed. About 9700 $D^*$ candidates are selected for $Q^2>1$ GeV$^2$ and 43000 candidates for all data; these are combined with proton candidates selected via $dE/dx$. The mass distributions $M(D^*p)$ obtained using the same selection criteria as H1 are shown in Fig.~\ref{plot_zeusThetac}(right) for data with $Q^2>1$ GeV$^2$ (Fig.\ref{plot_zeusThetac}e) and $Q^2<1$ GeV$^2$ (Fig.\ref{plot_zeusThetac}f). The data are compared with the absolute expectations from the $D^*$ Monte Carlo (solid histogram) and the combinatorial background (open histogram). No signal is observed at 3.1 GeV.
Figure~\ref{plot_zeusThetac}(left) shows the mass spectrum $M(D^*p)$ for the full data sample (a and b) and DIS with $Q^2>1$ GeV$^2$ (c and d) obtained with the ZEUS selection criteria~\cite{ZEUSDSTAR}. Two different $D^*$ decay channels $D^{*\pm}\rightarrow (K\pi)\pi_S$ (a and c) and $D^{*\pm}\rightarrow (K\pi\pi\pi)\pi_S$ (b and d) are considered. No signal is seen in any of the decay channels or kinematic regions considered.
Upper limits on the fraction of $D^*$ mesons originating from $\Theta_c^0$ decays, $R=N(\Theta_c \rightarrow D^*p)/N(D^*)$, were set by ZEUS in the signal window $3.07 < M(D^*p) < 3.13$ GeV. This window covers the H1 measurement. The $95\%$ confidence level upper limit on the fraction $R$ is $0.23\%$. The upper limit for DIS with $Q^2>1$ GeV$^2$ is $0.35\%$ at $95\%$ C.L. Thus, the ZEUS results are not compatible with the report of the H1 collaboration of a charmed pentaquark which contributes around $1\%$ of the $D^{*\pm}$ production rate.
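A confidence-level upper limit of the kind quoted here can be illustrated with the classical frequentist counting-experiment construction: find the largest signal for which observing at most $n_{obs}$ events is still probable at the $5\%$ level. This is a generic sketch, not the actual ZEUS procedure.

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu ** k / math.factorial(k)
               for k in range(n + 1))

def poisson_upper_limit(n_obs, bkg=0.0, cl=0.95):
    """Smallest signal s with P(N <= n_obs | s + bkg) <= 1 - cl,
    found by bisection. For n_obs = 0 and no background this gives
    the familiar -ln(0.05) ~ 3 events at 95% C.L."""
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + bkg) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return hi
```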
\section{Conclusions}
Recent results from H1, ZEUS, HERMES and HERA-B on searches for exotic baryons in ep collisions, eD scattering and pA scattering at HERA have been presented. ZEUS and HERMES have found evidence for the production of the strange pentaquark $\Theta^{+}$. HERA-B on the contrary has not found any signal compatible with the $\Theta^{+}$ and has obtained limits for its production in pA scattering.
ZEUS and HERA-B have not found any evidence for the signal seen by the NA49 collaboration attributed to the $\Xi^{--}$. Both collaborations see a clear signal for the $\Xi^{0}(1530)$ resonance.
H1 has found evidence for the existence of a narrow anti-charmed baryon decaying to $D^{*-}p$. This result has not been confirmed by the ZEUS analysis which has been performed in a similar kinematic region.
Pentaquark searches and studies are still an open issue at HERA. Further studies are needed to understand the positive and negative results obtained in the different searches performed by the four HERA collaborations.
\section{Introduction}
The study of the finite temperature phase transition in QCD with Wilson
fermions is much more complicated than in the staggered fermion formulation,
because of the absence of an order parameter due to explicit chiral symmetry
breaking. This means e.g. that the existence of a massless pion at finite
lattice spacing is not at all obvious.
In recent years a detailed picture of the finite temperature phase diagram of
QCD with Wilson fermions has emerged \cite{Phdiag}. This picture is based on
the idea of spontaneous breakdown of parity and flavour symmetry \cite{Aoki-I}
and has been investigated analytically as well as numerically.
The features of this phase diagram are
(i) the critical line $\kappa_c(\beta)$ defined by a vanishing pion screening
mass for finite temporal lattice size turns back towards strong coupling
forming a cusp,
(ii) the region bounded by the critical line represents a phase of spontaneously
broken parity and flavour symmetry,
(iii) the finite temperature phase transition line $\kappa_t(\beta)$ presumably
does not cross the critical line, but runs past it towards larger values of
the hopping parameter\cite{Aoki-II}.
From an analysis of the Gross-Neveu model in two dimensions, where three cusps
connected to doublers develop, one expects the critical line for QCD in four
dimensions to form five cusps moving towards weak coupling with increasing
temporal lattice size $N_\tau$. Simulations with the standard Wilson
formulation for quarks and gluons have shown that at $N_\tau=1/(aT)=4$ the
tip of the cusp lies in the strong coupling regime at $\beta \approx 3.9$ and
moves only slowly towards weak coupling as $N_\tau$ is increased.
Since one
expects the same features to hold for a wider class of actions including the
clover action, which tends to reduce cutoff dependencies, a study of
the phase diagram using improved actions is of practical as well as
theoretical importance\cite{improv}.
\section{Results}
We have conducted a simulation of 2 flavour QCD on an $8^3 \times 4$ lattice
using tree level Symanzik improved actions for quarks and gluons. In the gauge
sector this amounts to adding a $2 \times 1$-loop to the standard plaquette
action and for the fermions in adding the clover term with $c_{SW}=1$.
We have used
a Hybrid Monte Carlo algorithm with a timestep $\delta\tau=0.01$ and the number
of molecular dynamics steps $N_{MD}=20$, which so far amounts to rather short
trajectories. We simulated several $\kappa$ values
for each $\beta=3.00, 3.50, 3.75$ and $4.00$. We have measured the Polyakov
loop, the pion norm and the average number of iterations it takes to invert
the fermion matrix. Each observable is now discussed in detail.
\subsection{Locating the critical line}
Although not a physical observable in its own right, the average number of
iterations it takes to invert the fermion matrix is a very good indicator for
criticality.
The simulations were done in the following way. At each $\beta$-value we
started a simulation at $\kappa=0.12$ and used a thermalized configuration from
this run as a start configuration at a higher value of $\kappa$. We continued
to do so for higher and higher $\kappa$-values. At those $\beta$-values, where
we saw a drastic increase in the number of iterations, we started a simulation
at a much higher $\kappa$-value and continued towards smaller $\kappa$-values.
Our findings are summarized in {Fig.\,1}. At $\beta=3.00$ we were not
able to simulate the system for $0.1770<\kappa<0.1825$. At $\beta=3.50$ we
saw an initial increase of iterations, even after switching from minimal
residual to conjugate gradient, which usually decreased the number of
iterations. With increasing $\kappa$ the number of iterations decreased
again, only to increase again at fairly high $\kappa$-values. At $\beta=3.75$
and $4.00$ we saw a similar behaviour as at $\beta=3.50$ but not as pronounced.
We experimented with different inversion routines and our conclusion is that
close to the critical line conjugate gradient is superior to overrelaxed
minimal residual and BiCGstab1.
\begin{figure}[t]
\begin{center}
\epsfig{file=fig1.eps, width=7.75cm}
\end{center}
{\small Fig.\,1 Average number of iterations as a function of
$\kappa$ for $\beta=3.00$ (triangles), $\beta=3.50$ (circles),
$\beta=3.75$ (diamonds) and $\beta=4.00$ (boxes). The inverter used are
conjugate gradient (lines), overrelaxed minimal residual (dashed),
BiCGstab1 (dotted)}
\end{figure}
\subsection{Pion Norm}
Since it is not possible to reliably extract the pion screening mass on small
lattices, we use the pion norm instead to indicate the existence of a critical
line of vanishing pion screening mass.
The pion norm is the integrated pion correlator and is defined as
follows:
\bq
\Pi= \frac{1}{4 {N_{\sigma}}^3 {N_{\tau}}} \cdot {\rm Tr~}
\left[{\cal M}^{-1} \gamma_5 {\cal M}^{-1} \gamma_5\right]
\end{equation}
Here ${\cal M}$ is the fermion matrix on a particular gauge configuration.
Near the critical line the pion norm behaves as $\Pi \approx 1/m_{\pi}^2$,
hence a diverging pion norm indicates the existence of the critical line.
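On a tiny toy matrix the trace in the definition above can be evaluated directly. The sketch below assumes only the $\gamma_5$-hermiticity property $\gamma_5 {\cal M} \gamma_5 = {\cal M}^\dagger$ of the Wilson operator, from which the pion norm reduces to ${\rm Tr}[({\cal M}^\dagger {\cal M})^{-1}]$ and is therefore real and positive. The matrix size and entries are arbitrary, not a real lattice operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # toy matrix size
g5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))   # stand-in for gamma_5

# A gamma_5-hermitian toy "fermion matrix": with H Hermitian, M = g5 H
# satisfies g5 M g5 = M^dagger, just like the Wilson operator.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = A + A.conj().T + 20.0 * np.eye(n)   # large shift keeps M well conditioned
M = g5 @ H

Minv = np.linalg.inv(M)
pion_norm = np.trace(Minv @ g5 @ Minv @ g5)
```

On a real lattice the inverse is never formed explicitly; the trace is estimated stochastically from solutions of ${\cal M}x = \eta$ with noise vectors $\eta$.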
Our results are displayed in {Fig.\,2}. At $\beta=3.00$ we find a clear signal
for two critical lines close to $\kappa=0.1770$ and $0.1825$.
The difference in
$\kappa$ is already quite small, so we are near the tip of the cusp. At
$\beta=3.50$ the pion norm develops a small peak at $\kappa=0.1550$, which is
located where the crossover from the low temperature to the high temperature
phase starts (see section on Polyakov loop). No divergent behaviour is observed
and the critical line ceases to exist in this coupling region. The simulations
at $\beta=3.75$ and $4.00$ do not even find a peak for the pion norm, but
rather a smooth behaviour as a function of the hopping parameter. We conclude that
the critical line turns back towards strong coupling and develops a cusp
between $\beta=3.00$ and $3.50$.
\begin{figure}[t]
\begin{center}
\epsfig{file=fig2.eps, width=7.75cm}
\end{center}
{\small Fig.\,2 Pion norm as a function of $\kappa$ for $\beta=3.00$
(triangles), $\beta=3.50$ (circles), $\beta=3.75$ (diamonds) and
$\beta=4.00$ (boxes).}
\vspace{-1ex}
\end{figure}
\subsection{Polyakov Loop}
The Polyakov loop is defined as follows:
\bq
L = \frac{1}{{N_{\sigma}}^3} \sum_{\vec{x}} \frac{1}{N_c} {\rm Tr~} \prod_{\tau=1}^{{N_{\tau}}}
U_4(\vec{x},\tau)
\end{equation}
This observable is sensitive to the finite temperature phase transition
although it is no order parameter in the full theory. Our results are displayed
in {Fig.\,3}. At $\beta=3.00$ we find
a confined phase for $\kappa \le 0.1770$. For $\kappa > 0.1825$, when one
approaches the critical line from above, the Polyakov loop decreases to
$|L| \approx 0.1$. This indicates that the system develops a low temperature
behaviour in the vicinity of the critical line. On the other hand we do
not see a sharp crossover, so we cannot conclude that one crosses the thermal
line as one lowers $\kappa$ towards $\kappa_c(\beta)$.
At $\beta=3.50$ the Polyakov loop displays a sharp crossover phenomenon, which
means that the system crosses the thermal line for
{$0.1550 < \kappa < 0.1600$}.
At $\beta=3.75$ and $4.00$ the system is in the high temperature phase down to
$\kappa=0.12$. This is not unexpected, since the finite temperature phase
transition in the quenched theory for our choice of action occurs at
$\beta_c=4.07$.
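As a toy illustration of the definition above, the following evaluates $L$ on randomly drawn SU(3) temporal links (a random configuration, not a thermalized HMC ensemble, so the value of $|L|$ carries no physics here).

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su3():
    """Random SU(3) matrix: QR-decompose a complex Gaussian matrix,
    fix the phases of R's diagonal, and divide out the determinant."""
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))  # make the QR unique
    return q / np.linalg.det(q) ** (1.0 / 3.0)        # enforce det = 1

def polyakov_loop(links):
    """links[x][tau]: temporal SU(3) link at spatial site x, time slice tau.
    Returns the volume-averaged, colour-normalized trace of the product
    of temporal links, as in the definition in the text."""
    total = 0.0 + 0.0j
    for site in links:
        prod = np.eye(3, dtype=complex)
        for U in site:
            prod = prod @ U
        total += np.trace(prod) / 3.0
    return total / len(links)

links = [[random_su3() for _ in range(4)] for _ in range(8)]  # 8 sites, N_tau = 4
L = polyakov_loop(links)
```

Since each temporal product is unitary, $|L| \le 1$ by construction; on a thermalized ensemble a small $|L|$ signals confinement and a large $|L|$ the deconfined phase.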
\begin{figure}[t]
\begin{center}
\epsfig{file=fig3.eps, width=7.75cm}
\end{center}
{\small Fig.\,3 Polyakov loop as a function of $\kappa$ for
$\beta=3.00$ (triangles), $\beta=3.50$ (circles), $\beta=3.75$ (diamonds)
and $\beta=4.00$ (boxes).}
\end{figure}
\section{Summary and Conclusions}
From measuring the pion norm, Polyakov loop and the average number of
iterations to invert the fermion matrix we conclude that the finite temperature
phase structure observed with the standard Wilson formulation is preserved for
the tree level Symanzik improved formulation of quarks and gluons. We note that
the difference in $\beta$ between the location of the cusp and the quenched
$\beta_c$ is considerably reduced. What this means in terms of physical scales
remains to be investigated by measurements of the lattice spacing. After this
preparatory simulation on a small lattice, we will investigate the phase
diagram on larger lattices, including a precise measurement of the
pion screening mass and quark mass, the latter enabling us to study the chiral
condensate as well \cite{ChWI}.
\section{Introduction}
\citet{Snowden_etal_1994} reported the existence of mysterious X-ray
contamination episodes in the {\it ROSAT} all sky survey data which they
termed long-term enhancements (LTEs). During an LTE, the X-ray counting
rate in the lower energy bands as much as doubled on a time scale of
1--2~days. However, they could not find any correlation with other
observational parameters, such as spacecraft position or look direction.
New insight into LTEs was obtained from the discovery of X-ray emission from
comet Hyakutake (\cite{Lisse_etal_1996}). Following the discovery,
X-rays were detected from many comets (e.g.,
\cite{Dennerl_etal_1997,Cravens_2002}), and the emission mechanism is
now well understood as charge exchange of solar-wind heavy ions with
cometary neutrals (see \cite{Krasnopolsky_etal_2004} for a review).
Then \citet{Cox_1998} and \citet{Cravens_2000} suggested that solar-wind
charge exchange with neutrals in the Earth's geocorona and in the
heliosphere accounts for a part of soft X-ray background below 1~keV.
\citet{Robertson_etal_2001} showed that the LTEs of the {\it ROSAT} all
sky survey data were well correlated with the solar-wind proton flux,
which strongly suggests the origin of the LTEs to be solar-wind charge
exchange with H\emissiontype{I} in the geocorona. Solar-wind charge
exchange results in line emission from highly ionized ions
(\cite{Krasnopolsky_etal_2004} and references therein). {\it ROSAT} did
not have enough spectral resolution to resolve those lines. Spectral
information on geocoronal solar-wind charge exchange was first obtained
during a {\it Chandra} dark moon observation \citep{Wargelin_etal_2004}.
The X-ray photons detected in the direction of the dark moon are most
likely from this source. The emission spectrum could be described by a
sum of C\emissiontype{VI}, O\emissiontype{VII}, and O\emissiontype{VIII}
K-lines, although the statistics and energy resolution were limited.
More recently, \citet{Snowden_etal_2004} reported time variation of soft
X-ray intensity during the {\it XMM-Newton} {\it Hubble} deep field
north observation. The enhancement was correlated with solar-wind proton
flux variations. They detected C\emissiontype{VI}, O\emissiontype{VII},
O\emissiontype{VIII}, Ne\emissiontype{IX}, and Mg\emissiontype{XI}
emission lines in the enhancement.
The importance of solar-wind charge-exchange emission is three-fold.
Firstly, it will enable us to remotely study low-density neutrals in the
geocorona, in the outer atmospheres of planets such as Jupiter, in
interplanetary space, and, in particular, around comets. Secondly, it
can be used as a highly sensitive ion probe for the solar wind. Perhaps
most importantly, it becomes a significant contaminating
foreground in the study of the cosmic soft X-ray background below
$\sim$ 1~keV. More than half of the soft X-ray background at these
energies is considered to arise from hot gas in the disk and halo of our
galaxy and in intergalactic space. Although such emission was detected
early in the history of X-ray astronomy (e.g. \cite{Tanaka_1977}), its
origins and the physical state of the hot gas are not yet well
understood. Geocoronal interactions with the solar wind produce at
least a sporadic contamination that must be avoided in observations of
any extended source. Charge exchange with interstellar neutrals moving
through the heliosphere creates a more subtle contribution where much of
the diagnostic rapid time variation is washed out by the large travel
time of solar-wind events through interplanetary space, and the spectral
lines arise from the same ions expected in hot interstellar plasmas.
\citet{Lallement_2004} used a simplified model of interplanetary charge
exchange to estimate that essentially all of the minimum flux observed
near the Galactic plane in the 1/4~keV (R12) band of the {\it ROSAT} sky
survey could arise from this source (see also
\cite{Pepino_2004}). We are far from an understanding of solar-wind
charge exchange adequate to determine the true extent of this
contribution.
The X-ray Imaging Spectrometer (XIS) on board {\it Suzaku}
\citep{Suzaku} has a significantly improved spectral line response
function compared to previous X-ray CCD cameras, particularly below
1~keV \citep{XIS}. Together with the large effective area of the X-ray
telescopes (XRT; \cite{XRT}), this will open a new era for the study of
soft X-ray background. In this paper, we report on {\it Suzaku}
observations of a blank field in direction of the north ecliptic pole
(NEP). We detected a significant enhancement of the soft X-ray flux
lasting for $\sim 10$~hours. The enhancement is mostly explained by
increases in C\emissiontype{VI} through Mg\emissiontype{XI} emission
lines. During the enhancement, both C\emissiontype{VI} $n=2$ to 1 and
C\emissiontype{VI} $n=4$ to 1 transition lines were clearly detected,
which is firm evidence for charge-exchange emission. We consider that
the emission is due to the charge-exchange interaction of solar-wind
heavy ions with neutrals in the magnetosheath at 2--8 Earth radii
($R_{\oplus}$). In this paper, we will concentrate on the X-ray spectra
and the emission processes, and their implications for cosmic X-ray
observations. The geophysical implications of the results will be
discussed in a separate paper.
Errors quoted in the text and tables are at 90\% confidence single
parameter errors and at $1\sigma$ confidence level in the figures,
unless otherwise stated.
\section{Analysis and results}
The NEP region was observed with {\it Suzaku} twice during the Science
Working Group (SWG) observation time. The XIS was set to normal clocking
mode and the data format was either $3\times 3$ or $5\times 5$. A log of
the observations is shown in table~\ref{tab:obslog}. In this paper, we
concentrate on the spectral change during a ``flare'' detected in the first
observation. For that purpose, we try to model the stable components,
and then evaluate the spectral change. We use the data from the
backside-illuminated CCD (XIS1), because of its much superior
performance below 1~keV \citep{XIS}.
\begin{table*}
\caption{Log of the NEP observations}
\label{tab:obslog}
\begin{center}
\begin{tabular}{c|cc}\hline\hline
target coordinates & \multicolumn{2}{c}{($\alpha$, $\delta$) =
(272.8000, 66.0000)}\\
observation ID & 100018010 & 500026010\\
observation period & 2005 Sep. 2 14:30--Sep. 4 15:00 & 2006
Feb. 10 5:50--Feb. 12. 2:00\\
net exposure time & 109.8~ks & 88.6~ks\\
& ($3\times 3$: 95.7~ks, $5\times 5$: 14.1~ks) &
($3\times 3$: 71.5~ks, $5\times 5$: 17.1~ks) \\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Data reduction}
\label{sec:data_reduction}
We used version 0.6 processed {\it Suzaku} data\footnote{Version 0
processing is an internal processing applied to the Suzaku data obtained
during the SWG phase, for the purpose of establishing the detector
calibration as quickly as possible. Some processes that are not critical
for most of the initial calibration and scientific studies, e.g., aspect
correction, fine tuning of the event time tagging of the XIS data, are
skipped in version 0 processing, and hence the quality of the products is
limited in these respects, compared with the official data supplied to
the guest observers. As of 2006 July, version 0.7 is the latest, where
the absolute energy scale accuracy of $\pm0.2$~eV at the
iron K$\alpha$ energy and $\pm5$~eV below 1~keV is achieved for the XIS
data \citep{XIS}. In this paper, we used version 0.6 data where the
energy scale of the XIS data is less accurate ($\sim10$~eV
below 1~keV) than that of version 0.7, because the
empirical model of the contamination distribution was obtained based on
version 0.6 data. To compensate, we adjusted the scale and the offset of the
response matrix, as shown in section~\ref{sec:data_reduction}.}. In
addition to the standard data selection criteria: elevation from sunlit
earth rim $> 20^{\circ}$, elevation from dark earth rim $> 5^{\circ}$,
we applied a cutoff rigidity (COR) $>8$~GV cut to clean the XIS data. The XIS
pulse height data for each X-ray event ($3\times 3$ or $5\times 5$
format) were converted to PI (Pulse Invariant) channels using the
`xispi' software version 2005-12-26 and CTI parameters from 2006-01-25.
We first created a time history of the X-ray counting rate by binning
the event data into 256~s time bins. In figure~\ref{fig:light_curve},
we show the 0.3--2~keV counting rate of XIS1 where the non-X-ray
(particle-induced) background rate is not subtracted. The counting rate
shows a clear enhancement in the first $\sim 4 \times 10^4$~s. The
particle background rate of the XIS is known to vary because the cosmic
ray flux changes as a function of the spacecraft position over the
Earth. The background rate is well reproduced as a consistent function
of the local cutoff rigidity. From the XIS data during intervals when
the telescope is pointed at the dark side of the Earth, we found that
the 0.3--2~keV XIS1 non-X-ray counting rate varies only from
0.03~cts\,s$^{-1}$ to 0.07~cts\,s$^{-1}$ when the cutoff rigidity varies
from 15 to 6~GV. Thus the observed enhancement cannot
be particle background variation.
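The light-curve construction described above can be sketched as follows; this is a minimal illustration with synthetic, uniformly distributed event times, not real XIS data:

```python
import numpy as np

def light_curve(event_times, t0, t1, binsize=256.0):
    """Bin photon arrival times into a counting-rate time series (cts/s)."""
    edges = np.arange(t0, t1 + binsize, binsize)
    counts, _ = np.histogram(event_times, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / binsize

# Toy event list: 1024 events spread uniformly over 1024 s (~1 ct/s).
rng = np.random.default_rng(1)
times = np.sort(rng.uniform(0.0, 1024.0, size=1024))
t, rate = light_curve(times, 0.0, 1024.0, 256.0)
```

With 256~s bins over the $1.65\times10^5$~s observation, a rate doubling lasting hours stands out clearly above the Poisson scatter of each bin.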
The enhancement lasted for $\sim 10$~hours. Within the enhancement
there are shorter time variations. For example, there are sharp spikes
just before and after the highest peak at $\sim 2.2 \times 10^4$~s. The
time scale of these variations is as short as $\sim 10$~minutes. We
defined the ``flare'' interval to be 0 to $4\times 10^4$~s and the
``stable'' interval as $4\times 10^4$ to $16.5\times 10^4$~s as shown in
figure~\ref{fig:light_curve}, then created X-ray images for ``flare''
and ``stable'' periods separately. The images in figure~\ref{fig:images}
show that the enhancement is not due to changes in any discrete sources.
\begin{figure}
\begin{center}
\FigureFile(80mm,120mm){figure-1.eps}
\end{center}
\caption{XIS1 counting rate in the 0.3--2 keV energy band (top panel),
solar-wind proton flux (second), C$^{+6}$ flux (third), and O$^{+7}$
flux (bottom) as a function of time. The {\it Suzaku} XIS counting rate
is shown in 256 s bins. Particle background counts are not
subtracted. The solar-wind proton flux was calculated using level 2 ACE
SWEPAM data. Each bin of the ACE data was first shifted in time to
correct for the travel time of the solar wind from ACE to the Earth
(typically $\sim 2700$~s), then rebinned into 256 s bins. The ion fluxes
(C$^{+6}$, O$^{+7}$) were calculated from level 2 ACE SWICS data. Only
good data with quality flag 0 were used. See also text in
section~\ref{sec:discussion}.} \label{fig:light_curve}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure-2a.eps}
\FigureFile(80mm,80mm){figure-2b.eps}
\end{center}
\caption{XIS1 images in the 0.3--2~keV band for ``flare'' and ``stable''
periods.}
\label{fig:images}
\end{figure}
We subtracted the particle background from both ``stable'' and ``flare''
spectra using the dark earth data with the same distribution of cutoff
rigidities. The counting rates of the background subtracted spectra
above 10~keV, where the data are dominated by non X-ray background
events, are $(2.9\pm 0.5)\times 10^{-2}$~cts\,s$^{-1}$ for ``flare'' and
$(1.2\pm 0.3)\times 10^{-2}$~cts\,s$^{-1}$ for ``stable'', where the
error is $1\sigma$ statistical error. These correspond to 6\% and 2\%
of the counting rates of the dark earth data, respectively. Note that,
in the 2--5~keV band, the change in the spectral model due to this
background difference is much smaller than the statistical error, and
hence it does not affect the results in the later sections.
Since we suppose the diffuse X-ray emission to be approximately uniform
over the field of view of the XIS, we created an X-ray telescope (XRT)
response function, or more specifically, an ancillary response file
(ARF) used in the XSPEC spectral fit software \citep{xspec}, for a
flat field, using the XIS ARF builder software {\it xissimarfgen}
\citep{xissimarfgen}. It is known that contamination has been
accumulating on the optical blocking filters of the XIS sensors since
the detector doors were opened following the launch, and that it
accumulates much more quickly at the center of the field of view than at
the outside \citep{XIS}. We used the contaminant thickness and radial
distribution functions version 2006-05-28 when we built an ARF with
{\it xissimarfgen}. At the time of this NEP observation, early in the mission, the contaminant
column densities were only $4.1\times 10^{17}$~carbon~atoms~cm$^{-2}$
and $0.7\times 10^{17}$~oxygen~atoms~cm$^{-2}$ at the center for XIS1.
This reduces the efficiency by about 12\% at the energy of
C\emissiontype{VI} (367~eV), 6\% at O\emissiontype{VII} (561~eV), 4\%
at O\emissiontype{VIII} (653~eV), and $<2$\% at Ne\emissiontype{IX}
(905~eV) and higher energies. Systematic errors in the contaminant
thickness are estimated to be about 10\%. The transmission uncertainty
due to this systematic error is only 1\% for C\emissiontype{VI}, and
less for lines at higher energies; hence it is negligible compared with
other errors. For the XIS response function, we used the
ae\_xi1\_20060213c.rmf file supplied by the XIS team, with energy scale
corrections of slope 0.9948 and offset $-0.0035$~keV as determined
through the iterative analysis described in the next section.
\subsection{Spectral fit of ``stable'' spectrum}
We then performed spectral fits to the ``stable'' spectrum. We first
restricted the fitting energy range to 2--5~keV. In this range the
emission is dominated by the Cosmic X-ray Background (CXB), which is
largely emission from unresolved AGNs and can be represented by a power
law function absorbed by neutral material along the line of sight
through our Galaxy. We thus fitted the spectrum with a power-law
function with absorption by a neutral medium with solar abundances
\citep{AG89} and fixed the absorbing column density at the total
Galactic value in this direction $N_{\rm H} = 4.4 \times 10^{20}~{\rm
cm}^{-2}$ \citep{NH}\footnote{We used `nH' tool available at
http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl.}. The fit
results are summarized in the second column of
table~\ref{tab:fits_stable}. The photon index is consistent with the
nominal CXB value ($1.40\pm 0.05$; \cite{Marshall_etal_1980}). The
normalization of the power-law function is also consistent with previous
observations; 9--11~photons\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$\,keV$^{-1}$
at 1~keV (e.g., \cite{Gendreau_etal_1995,Revnivtsev_etal_2005}).
\begin{table*}
\caption{Results of spectral fits to the ``stable'' spectrum}
\label{tab:fits_stable}
\begin{center}
\begin{tabular}{llll}\hline\hline
\hspace{5mm} & Parameter & 2--5~keV & 0.2--2~keV \\\hline
\multicolumn{4}{l}{Power-law component}\\
\hline
& $N_{\rm H}$ [$10^{22}$~cm$^{-2}$] & 0.044 (fixed) & 0.044 (fixed)\\
& photon index $\Gamma$ & $1.33\pm 0.23$ & 1.33 (fixed) \\
&normalization\footnotemark[$*$] & $10.4^{+3.1}_{-2.4}$ &
10.4 (fixed) \\
\hline
\multicolumn{4}{l}{Thin-thermal component (VMEKAL)}\\
\hline
& $kT$ [keV] & --- & $0.177^{+0.003}_{-0.002}$\\
& C abundance (solar) & --- & $1.92^{+0.40}_{-0.36}$\\
& N abundance (solar)& --- & $2.14^{+0.33}_{-0.31}$\\
& O abundance (solar)& --- & 1.0 (fixed)\\
& Ne abundance (solar)& --- & $2.77^{+0.53}_{-0.59}$\\
& Fe abundance (solar)& --- & $1.42^{+0.20}_{-0.22}$\\
& normalization\footnotemark[$\dagger$] & --- & $16.35^{+0.62}_{-0.68}$\\
\hline
\multicolumn{2}{l}{gain slope} & 0.9948 (fixed) & 0.9948 \\
\multicolumn{2}{l}{gain offset} & $-0.0035$ (fixed) & $-0.0035$\\
\hline
\multicolumn{2}{l}{$\chi^2$/degrees of freedom} & 38.39/38 & 280.15/228\\
\hline
\multicolumn{3}{@{}l@{}}{\hbox to 0pt{\parbox{95mm}{\footnotesize
\footnotemark[$*$] In units of photons\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$\,keV$^{-1}$ at 1~keV.
\par\noindent
\footnotemark[$\dagger$] $(4\pi)^{-1}D_{\rm A}^{-2}(1+z)^{-2} 10^{-14}\int n_{\rm e}
n_{\rm H}dV$ per steradian, where $D_{\rm A}$ is the angular size
distance to the source (cm), and $n_{\rm e}$, $n_{\rm H}$ are the
electron and hydrogen densities (cm$^{-3}$), respectively.
}\hss}}
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\begin{center}
\FigureFile(80mm,60mm){figure-3.eps}
\end{center}
\caption{Spectral fit to the ``stable'' spectrum in the 0.2--2~keV range.
Observed spectrum is plotted in the upper panel with crosses where the
vertical bars correspond to $1\sigma$ statistical errors. The thick
step function is the best-fit model function convolved with the X-ray
mirror and the detector response functions. The dotted lines show
contributions of different spectral components. The lower panel shows the
residuals of the fits.
}\label{fig:spec_stable}
\end{figure}
We then fitted the spectrum over the 0.2--2~keV range. As shown in
figure~\ref{fig:spec_stable}, it is clear that there is additional emission
below 1 keV and the excess contains emission lines such as
O\emissiontype{VII}, O\emissiontype{VIII} and Ne\emissiontype{IX}
K$\alpha$. Thus, fixing the spectral parameters of the CXB component to
the best fit values of the 2--5~keV fit, we added a thin thermal
emission component using the MEKAL model
(\cite{Mewe_1985,Mewe_1986,Kaastra_1992,Liedahl_1995}). We fixed the
abundance of O at the solar value, and set abundances of other elements
(C, N, Ne, Fe) free. The version 0.6 XIS data products are known to
contain systematic energy calibration errors of $\sim 10$~eV amplitude
below $\sim 2$~keV. We therefore adjusted the energy scale by varying
the gain slope and offset as free parameters of the fit. We determined
the gain and the offset in this fitting, and applied the same gain and
offset throughout the paper, including the fitting of the CXB component
described in the previous paragraph. The results are shown in the third
column of table~\ref{tab:fits_stable}. The model adequately represents
the observed spectrum\footnote{There are small positive residuals,
especially at around 300~eV and 450~eV. We can model these features with
additional delta functions of variable energy and amplitude. Even if we
add them, however, normalizations of the delta functions employed for
the ``flare'' spectrum (section~\ref{sec:flare spectrum}) are affected
by 10\% at most.}. Therefore, we adopt the model shown in
table~\ref{tab:fits_stable} as a representative model for the ``stable''
spectrum, in order to evaluate the spectral change during the ``flare''.
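The gain adjustment amounts to a linear remapping of the energy scale. Below is a minimal sketch using the slope and offset from table~2; the sign convention is an assumption, since the paper applies the correction to the response matrix rather than to event energies:

```python
def correct_energy(e_measured_kev, slope=0.9948, offset_kev=-0.0035):
    # Linear gain correction (slope and offset from the "stable"-spectrum
    # fit). Applying it to event energies rather than the response matrix
    # is an illustrative simplification.
    return slope * e_measured_kev + offset_kev

# O VII K-alpha at a nominal 0.561 keV maps to ~0.5546 keV,
# i.e. a shift of ~6 eV, consistent with the quoted ~10 eV
# calibration uncertainty of the version 0.6 data below 1 keV.
e = correct_energy(0.561)
```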
The spectrum could be fit by other thermal models. When we adopt the
present single temperature model with varying abundances, the
temperature is determined primarily by the O\emissiontype{VII} to
O\emissiontype{VIII} K$\alpha$ emission line intensity ratio, and the
abundances are determined by the intensities of C\emissiontype{VI}
K$\alpha$, N\emissiontype{VI} K$\alpha$, Fe\emissiontype{XVII}-L and
Ne\emissiontype{IX} K$\alpha$. If we employ a multi-temperature thermal
model, it may not be necessary to vary abundances because line intensity
ratios can be adjusted by choosing temperatures. The most important
result here is that the excess emission above the CXB below 1~keV can be
represented by the emission lines of a thermal model. We tried to
include additional continuum emission represented by a power-law
function or thermal bremsstrahlung model. However, there was no
improvement in $\chi^2$.
\subsection{Spectral fit of ``flare'' spectrum}
\label{sec:flare spectrum}
\begin{table*}
\caption{Results of spectral fits to the ``flare'' spectrum. Parameter
values of components added to the ``stable'' spectral model.}
\label{tab:fits_flare}
\begin{center}
\begin{tabular}{lllll}\hline\hline
\hspace{5mm} & Parameter & 2--5~keV & 0.2--2~keV &
Line identification\footnotemark[$*$] \\\hline
\multicolumn{5}{l}{Additional Power-law component without absorption}\\
\hline
& photon index $\Gamma$ & $0.0\pm 1.7$ & 0.0 (fixed) \\
& normalization\footnotemark[$\dagger$] & $7.4^{+8.8}_{-1.3}\times 10^{-1}$ & \multicolumn{2}{l}{0.74 (fixed) } \\
\hline
\multicolumn{5}{l}{Additional narrow Gaussian lines}\\
\hline
1 & center energy [eV] & --- & $269\pm 4$ & C band lines \\
& normalization\footnotemark[$\ddagger$] & --- & $7.7^{+1.8}_{-1.7}$& \\
2 & center energy [eV] & --- & $357^{+6}_{-8}$ & C\emissiontype{VI} 2p to 1s (367 eV)\\
& normalization\footnotemark[$\ddagger$] & --- & $7.3^{+2.2}_{-1.4}$& \\
3 & center energy [eV] & --- & $455^{+5}_{-13}$ & C\emissiontype{VI} 4p to 1s (459 eV) \\
& normalization\footnotemark[$\ddagger$] & --- & $3.09^{+0.74}_{-0.76}$ \\
4 & center energy [eV] & --- & $558^{+8}_{-9}$ & O\emissiontype{VII}
(561~eV) \\
& normalization\footnotemark[$\ddagger$] & --- &$5.1^{+1.1}_{-1.0}$ & \\
5 & center energy [eV] & --- & $649^{+4}_{-6}$ & O\emissiontype{VIII} 2p to 1s (653 eV) \\
& normalization\footnotemark[$\ddagger$] & --- & $5.02^{+0.58}_{-0.76}$ \\
6 & center energy [eV] & --- & $796^{+10}_{-8}$ & Fe\emissiontype{XVII,XVIII}-L + O\emissiontype{VIII} 3p to 1s (774 eV)?\\
& normalization\footnotemark[$\ddagger$] & --- &$1.67^{+0.35}_{-0.34} $ &\\
7 & center energy [eV] & --- &$882^{+14}_{-17}$ &Fe\emissiontype{XVII,XVIII}-L +
Ne\emissiontype{IX} (905~eV) \\
& normalization\footnotemark[$\ddagger$] & --- &$0.95^{+0.26}_{-0.33}$ & + O\emissiontype{VIII} 6p to 1s (847 eV)?\\
8 & center energy [eV] & --- &$1022^{+11}_{-7}$ &Ne\emissiontype{X} (1022~eV) \\
& normalization\footnotemark[$\ddagger$] & --- &$1.04^{+0.20}_{-0.29}$ \\
9 & center energy [eV] & --- &$1356^{+16}_{-20}$ & Mg\emissiontype{XI}
(1329~eV) \\
& normalization\footnotemark[$\ddagger$] & --- & $0.73^{+0.19}_{-0.20}$ \\
\hline
\multicolumn{2}{l}{$\chi^2$ /d.o.f} & 17.65/14 & 161.85/114\\
\hline
\multicolumn{4}{@{}l@{}}{\hbox to 0pt{\parbox{160mm}{\footnotesize
\footnotemark[$*$] Line energies at the rest frame are taken from
\citet{Kharchenko_2003}, \citet{Krasnopolsky_2004}. Energies of the
forbidden line are shown for O\emissiontype{VII}, Ne\emissiontype{IX},
and Mg\emissiontype{XI} K$\alpha$, because the forbidden line becomes much
stronger at the charge-exchange emission (e.g., \cite{Kharchenko_2003}).
\par\noindent
\footnotemark[$\dagger$] In units of photons\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$\,keV$^{-1}$ at
1~keV.
\par\noindent
\footnotemark[$\ddagger$] In units of photons\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$.
}\hss}}
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\begin{center}
\FigureFile(80mm,60mm){figure-4.eps}
\end{center}
\caption{Comparison of the ``flare'' spectrum with the CXB model fit to
the ``stable'' spectrum at 2--5~keV. An additional power-law component
was required in this range for the ``flare'' spectrum. Parameters are
given in the second column of table~3.}
\label{fig:spec_flare_2-5}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,60mm){figure-5.eps}
\end{center}
\caption{Comparison of the ``flare'' spectrum with the best fit model
for the ``stable'' spectrum plus additional power-law component during
the ``flare'' in the 0.2--2~keV energy range. Vertical arrows
indicate line-like structures in the residuals. }
\label{fig:spec_compare}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,60mm){figure-6.eps}
\end{center}
\caption{``Flare'' spectrum and best-fit model spectrum. The model
spectrum is a sum of the best fit model for the ``stable'' spectrum plus
additional power-law component during the ``flare'' shown with a thin
solid curve (same as that shown in figure~\ref{fig:spec_compare}), and
nine emission lines shown with dotted curves. Best-fit parameters of the
nine emission lines are summarized in table~\ref{tab:fits_flare}.}
\label{fig:spec_flare}
\end{figure}
We first compared the ``flare'' spectrum and the best fit model for the
``stable'' spectrum in the 2--5~keV range. We found that there is
excess hard emission ($\chi^2$/d.o.f $=46.21/16$) and added an
additional power law component, as shown in figure~4. Then the ``flare''
0.2--2~keV spectrum was compared with a model consisting of the best fit
``stable'' model and the additional power law component as shown in
figure~\ref{fig:spec_compare}. The residuals at the bottom of the
figure show line-like structures, so we have added nine emission lines
as indicated by the arrows. All lines were modeled by delta functions
of variable energy and amplitude. We show the results in the third
column of table~\ref{tab:fits_flare} and
figure~\ref{fig:spec_flare}. The ``flare'' spectrum is well represented by
the model\footnote{The least significant line is that at 882~eV. By
adding this line, $\chi^2$/d.o.f was improved from 188.61/116 to
161.85/114. This line is significant at a greater than 99.98\%
confidence level based on the $F$-test.}. Therefore, the enhancement of
the X-ray intensity during the ``flare'' can be explained by an increase
in these emission lines and the hard power-law emission.
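The $F$-test quoted in the footnote can be reproduced from the $\chi^2$ values given there. A sketch of the arithmetic; for two additional parameters the $F$-distribution tail has a closed form, so no statistics library is needed:

```python
# chi^2 with and without the 882 eV line (2 extra parameters), from the text.
chi2_without, dof_without = 188.61, 116
chi2_with, dof_with = 161.85, 114

dfn = dof_without - dof_with          # 2 extra parameters
f_stat = ((chi2_without - chi2_with) / dfn) / (chi2_with / dof_with)

# For dfn = 2 the F-distribution survival function is exactly
# P(F > f) = (1 + dfn * f / dfd) ** (-dfd / 2).
p_value = (1.0 + dfn * f_stat / dof_with) ** (-dof_with / 2.0)
# p ~ 1.6e-4, i.e. the line is significant at > 99.98% confidence.
```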
In the fourth column of table~\ref{tab:fits_flare}, we show probable
identifications of lines. The lowest energy line below the carbon edge
at $269\pm4$~eV is considered to be a sum of multiple L emission
lines. The line at $455^{+5}_{-13}$~eV is most likely the $n=4$ to 1
transition (Ly$\gamma$) of C\emissiontype{VI} at 459~eV, since the line
energy is consistent within the statistical error and there are no other
likely emission lines at this energy. If we introduced two lines at
436~eV (C\emissiontype{VI} Ly$\beta$; $n=3$ to 1) and 459~eV
(C\emissiontype{VI} Ly$\gamma$) instead of one line of variable energy,
the normalizations of these lines became $1.93^{+0.75}_{-0.99}$ and
$1.68^{+0.76}_{-0.71}$~photons\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$,
respectively. Therefore, C\emissiontype{VI} Ly$\beta$ could have a
comparable contribution. In either case, we definitely need the
C\emissiontype{VI} Ly$\gamma$ line. \citet{Dennerl_2003} attributed a
weak peak structure found in the XMM-Newton spectrum of comet C/2000 to
a sum of C\emissiontype{VI} Ly$\beta$ and Ly$\gamma$, and
C\emissiontype{VI} Ly$\gamma$ due to charge-exchange emission between
the highly ionized solar wind and exospheric or interplanetary neutrals
during an XMM-Newton observation of the Hubble Deep Field--North was
reported by \citet{Snowden_etal_2004}. This NEP observation, however,
seems to be the clearest detection so far. We also have detected
O\emissiontype{VII} to Mg\emissiontype{XI} lines. The lines at 796 and
882 eV~are likely to represent complex structures due to Fe-L and other
lines.
\section{Discussion}
\label{sec:discussion}
The short ($\sim 10$~minutes) time scale variations observed during the
enhancement of X-ray intensity imply that the size of
the emission region is no larger than 10 light minutes. On the other
hand, the apparent size of the emission region must be equal to or
larger than the XIS field of view ($18'$). These require the emitter of
the X-ray enhancement to be within a distance of $10~{\rm
light~minutes}/18'$, or $\sim 10^{-3}$~pc. Because the enhanced X-ray
emission consists of emission lines from C\emissiontype{VI} to
Mg\emissiontype{XI}, this requires an ion source within
$10^{-3}$~pc. There is only one ion source in this distance range. That
is the Sun.
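The distance bound follows from the light-travel-time argument combined with the small-angle relation; a sketch of the arithmetic:

```python
import math

C = 2.998e8    # speed of light, m/s
PC = 3.086e16  # one parsec, m

dt = 10 * 60.0                    # ~10-minute variability -> source size <= c*dt
theta = math.radians(18.0 / 60)   # XIS field of view, 18 arcmin, in radians

size_max = C * dt                 # <= ~1.8e11 m
# The apparent size must fill the field of view: size / d >= theta,
# so the emitter must be closer than
distance_max = size_max / theta   # ~3.4e13 m, i.e. ~1.1e-3 pc
```

Only the Sun and near-Earth space lie within $\sim 10^{-3}$~pc, which is the crux of the argument.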
The Sun may produce X-ray emission lines in our observations in two
possible ways: scattering of solar X-rays by the Earth's atmosphere, and
solar-wind charge exchange. In the former case, the X-ray intensity is
proportional to the solar X-ray intensity multiplied by the sunlit
atmospheric column density. The solar X-ray intensity is continuously
monitored by the GOES (Geostationary Operational Environmental
Satellites)\footnote{The data available at
http://www.ngdc.noaa.gov/stp/GOES/goes.html.}, but the data show no
correlation with the enhancement. Moreover, using the MSIS atmosphere
model \citep{Hedin_1991}, we found that the column density of sunlit
atmosphere varied by many orders of magnitude ($10^{9}$) during the
observations, but no correlation was found with the observed X-ray
intensity. Thus scattering of solar X-rays can be excluded.
\begin{figure}
\begin{center}
\FigureFile(80mm,60mm){figure-7.eps}
\end{center}
\caption{0.3--2~keV X-ray intensity (upper panel) and the geocentric
distance, in units of the Earth radius, of the point at which the
geomagnetic field first becomes open to space along the line of sight
from the spacecraft (lower panel), both as a function of time during
the observation. See also the schematic view shown in
figure~\ref{fig:magneto_schematic}.}
\label{fig:magneto}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,60mm){figure-8.eps}
\end{center}
\caption{Schematic view of the magnetosphere and the definition of
$r_{\rm mp}$ used in figure~\ref{fig:magneto}.}
\label{fig:magneto_schematic}
\end{figure}
In figure \ref{fig:light_curve}, we show the proton flux observed by the
ACE (Advanced Composition Explorer)\footnote{The data available at
http://www.srl.caltech.edu/ACE/ASC/.} together with the {\it Suzaku}
X-ray counting rate. The ACE data were shifted in time to account for
propagation time from ACE to the Earth. Clearly the proton flux was
enhanced during the X-ray ``flare''. This is consistent with the
solar-wind charge-exchange model.
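The ballistic time shift applied to the ACE data can be sketched as below. The L1 distance and the wind speed used here are illustrative round numbers; in the actual analysis the shift was computed per sample from the measured speed:

```python
import numpy as np

# ACE sits near the Sun-Earth L1 point, ~1.5 million km sunward of Earth.
L1_DISTANCE_KM = 1.5e6

def shift_to_earth(t_ace, flux, v_sw_km_s):
    """Shift each sample by its ballistic travel time from L1 to Earth."""
    delay = L1_DISTANCE_KM / v_sw_km_s   # ~2700 s for a ~550 km/s wind
    return t_ace + delay, flux

t = np.array([0.0, 64.0, 128.0])         # ACE sample times, s (made up)
f = np.array([2.0e8, 5.0e8, 3.0e8])      # proton flux samples (made up)
t_earth, f_earth = shift_to_earth(t, f, 550.0)
```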
Charge-exchange X-ray emission is also strongly supported by the
detection of the C\emissiontype{VI} $n=4$ to 1 transition line
(Ly$\gamma$). In CIE (collisional ionization equilibrium) thermal
emission, the Ly$\beta$ and Ly$\gamma$ lines of C\emissiontype{VI} are
suppressed relative to Ly$\alpha$ by the Boltzmann factor in the
distribution of exciting electrons. In charge exchange between
C\emissiontype{VII} and H\emissiontype{I}, the electron is deposited
primarily in the $n=4$ level (\citet{Krasnopolsky_etal_2004} and
references therein), and the X-ray lines are produced when it cascades
to the $n=1$ level, guided only by branching ratios. In high energy
collisions, angular momentum states tend to be populated
statistically according to their degeneracy, and electrons are
primarily captured into maximal $l$, where $n$ can change only by 1 unit
at a time during the cascade \citep{Beiersdorfer_2001}, again resulting
in relatively weak Ly$\beta$ and Ly$\gamma$. Behind the Earth's bow
shock, however, the solar wind velocity is reduced, which should result
in recombination into low $l$-orbitals and strong Ly$\beta$ and
Ly$\gamma$. In the ``flare'' spectrum, both C\emissiontype{VI}
Ly$\alpha$ and Ly$\gamma$ lines were detected, and this is strong
evidence for charge exchange. In addition, O\emissiontype{VIII} $n=3$
to 1 and $n=6$ to 1 transition lines are also enhanced in the comet
emission model \citep{Kharchenko_2003}. However, we cannot separate
those lines from Fe\emissiontype{XVII}-L emission lines with the present
energy resolution.
Although X-ray intensity and the proton flux are correlated on a time
scale of $\sim 10$ hours, they do not show much correlation on short
time scales. We consider that the short-term X-ray intensity variation
arises at least partly from the orbital motion of the spacecraft. In
figure~\ref{fig:magneto}, we show the geocentric distance of the point
at which the geomagnetic field first becomes open to space along the
line of sight from the spacecraft position, i.e., the point where the
line of sight encounters the magnetosheath (see also the
schematic view shown in figure~\ref{fig:magneto_schematic} for the
definition). We evaluated the end point of the magnetic field using the
software GEOPACK-2005 and the T96 magnetic field model
(\citet{Tsyganenko_2005} and references therein)\footnote{The software
package available at
http://modelweb.gsfc.nasa.gov/magnetos/data-based/modeling.html.}. We
obtained the solar-wind parameters required to perform the calculation
from the CDAWeb (Coordinated Data Analysis Web)\footnote{The data
available at http://cdaweb.gsfc.nasa.gov/cdaweb/sp\_phys/.}. We find
that the line of sight during the present observation was rather special
in the sense that it goes through the north magnetic pole region where
charged particles of the magnetosheath can penetrate down to
2--$8R_{\oplus}$ moving along open field lines. The short-term X-ray
intensity variations during the time intervals shown by boxes in figure
\ref{fig:magneto} indicate anti-correlation with the distance to the
magnetosheath. This indicates that the charge exchange of solar-wind
heavy ions is taking place at 2--$8R_{\oplus}$ where the neutral matter
density is high. \citet{Robertson_etal_2006} recently studied
theoretically solar-wind charge-exchange emission from the
magnetosheath. Implications of the present results on the solar-wind
ion composition and the Earth's magnetosheath including comparisons with
the theoretical model will be reported in a separate paper.
Finally, since solar-wind charge-exchange emission can become a
difficult foreground in the study of soft diffuse sources, we summarize
a procedure to examine possible contamination in the {\it Suzaku}
spectra below $\sim$ 1~keV by solar activity.
\begin{enumerate}
\item check the light curve for variability, if no time variation is expected for the object,
\item check the solar X-ray intensity and the column density of sunlit
atmosphere along the line of sight,
\item check the solar wind proton flux,
\item check the radius of the magnetosheath on the line of sight.
\end{enumerate}
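As an illustration of checks 3 and 4, the anti-correlation test described above can be sketched as follows (a schematic example with entirely synthetic numbers, not the actual {\it Suzaku} light curve or magnetosheath distances):

```python
import math, random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(5)
# hypothetical geocentric distances (in Earth radii) to the magnetosheath,
# and a count rate that rises when the emitting region is closer
distance = [2 + 6 * random.random() for _ in range(64)]
rate = [1.0 / r + 0.02 * random.gauss(0, 1) for r in distance]

r = pearson(rate, distance)
print(f"Pearson r = {r:+.2f}")  # clearly negative: anti-correlation
assert r < 0
```

A clearly negative coefficient on otherwise unexplained short-term variability is the signature of magnetosheath charge-exchange contamination discussed in the text.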
\bigskip
We would like to thank Prof.~K.~Maezawa, and Prof.~I.~Shinohara for
their help in calculations of the Earth's magnetosheath and also for
valuable discussions. Thanks are also due to Prof.~T.~Mukai,
Prof.~A.~Matsuoka, and Prof.~H.~Hayakawa for discussions on the relation
between {\it Suzaku} data and solar wind. We are grateful to the
referee, Dr.~A.~Dalgarno, for useful comments to improve this paper.
\section{Introduction}
In the recent past, the Chern--Simons (C-S) field
theories \cite{jao,hageno} have been intensively studied in connection
with several
physical and mathematical applications \cite{csapps,ienkur}.
A convenient gauge fixing for these theories is provided by the Coulomb gauge.
As a matter of fact, despite the presence of nontrivial
interactions in the gauge fixed action, the calculations become
considerably simpler than in the covariant gauges and a perturbative
approach is possible also on non-flat manifolds \cite{ffprd}.
Moreover, the time dependence of the Green functions is trivial,
so that the C--S field theories can be treated in practice as two
dimensional models.
Starting from the seminal works of refs. \cite{hageno,hagent} and \cite{jat},
the Coulomb gauge has been already applied in a certain number of
physical problems involving C--S based models
\cite{ienkur}, \cite{vo}--\cite{bcv}, but still remains
less popular than the covariant and axial gauges.
One of the main reasons is probably the fact that there are
many
perplexities concerning the use of
this gauge fixing, in particular in the case of the four dimensional
Yang--Mills theories \cite{taylor}--\cite{leiwil}.
Recently, also the consistency of the C--S field theories in the
Coulomb gauge has been investigated using various techniques \cite{ffprd,
devone, cscgform, frig},
but so far a detailed perturbative analysis in the non-abelian case
is missing.
To fill this gap, the radiative corrections of the Green functions
are computed here at any loop order and it is shown
that they vanish identically.
No regularization is needed for the ultraviolet and infrared
divergences since, remarkably, they do not appear in the amplitudes.
The present result agrees with the previous analysis
of \cite{frig}, in which the
commutation relations between the fields are proved to be trivial
using Dirac's canonical approach to constrained systems.
It is important to notice that the absence of any quantum
correction, despite the presence of nontrivial self-interactions in the
Lagrangian,
is a peculiarity of the Coulomb gauge that cannot be fully
anticipated from the fact that the theories under consideration
are topological, since finite renormalizations of the fields and
of the coupling constants are always possible.
For instance, in the analogous case of the covariant gauges,
only the perturbative
finiteness of the C--S amplitudes has been shown \cite{csformal}
in a regularization-independent way exploiting BRST techniques \cite{ss}.
Indeed, a finite shift of the C--S coupling constant has been observed
in the Feynman gauges by various authors \cite{shift, alr}.
The material presented in this paper is divided as follows.
In Section 2 the C--S field theories with $SU(n)$ gauge group are quantized
using the BRST approach. The Coulomb gauge constraint
is weakly imposed and
the proper Coulomb gauge is recovered by suitably
choosing the gauge fixing parameter.
The singularities
that may appear in the perturbative calculations are studied in detail.
Ultraviolet divergences are predicted by the naive power counting,
but it will be shown in Section 3 that they are absent in
the perturbative expansions
of the Green functions.
Still there are spurious singularities, which arise because the
propagators are undamped in the time direction. They are completely
removed with the introduction of a cutoff
in the zeroth components of the momenta.
In Section 3, the quantum contributions to the $n-$point
correlation functions are derived
at all orders in perturbation theory.
The one loop case is the most difficult, as
nontrivial cancellations occur among different Feynman diagrams.
To simplify the calculations, a crucial observation is proved, which
drastically
reduces their number.
The total contribution of the
remaining diagrams is shown to vanish
after some algebra. The gluonic $2-$point function
requires some care and it is treated separately.
At two loop, instead, every single Feynman diagram is identically zero.
The reason is that, in order to build such diagrams,
some components of the propagators and of the vertices are required,
which are missing in the Coulomb gauge.
At higher orders, the vanishing of the Feynman diagrams
is proved by induction in the loop number $N$.
Finally, in the Conclusions some open problems and future developments are
discussed.
\section{Chern-Simons Field Theory in the Coulomb Gauge: Feynman Rules and
Regularization}
The C--S action in the Coulomb gauge looks as follows:
\begin{equation}
S_{CS}=S_0+S_{GF}+S_{FP}
\label{action}
\end{equation}
where
\begin{equation}
S_0=\frac s{4\pi }\int
d^3x\epsilon ^{\mu \nu \rho }\left( \frac
12A_\mu ^a\partial _\nu A_\rho ^a-\frac 16f^{abc}A_\mu ^aA_\nu ^bA_\rho
^c\right) \label{csaction}
\end{equation}
\begin{equation}
S_{GF}=\frac {is}{8\pi \lambda }\int d^3x\left( \partial
_iA^{a\,i}\right) ^2 \label{gf}
\end{equation}
and
\begin{equation}
S_{FP}=i\int
d^3x\,\overline{c}^a\partial _i\left( D^i\left[
A\right] c\right) ^a \label{fp}
\end{equation}
In the above equations $s$ is a dimensionless coupling constant and the
vector fields $A_\mu^a$
represent the gauge potentials. Greek letters $\mu,\nu,\rho,\ldots$
denote space--time indices, while the first latin letters $a,b,c,\ldots=
1,\ldots,n^2-1$ are used for
the color indices of the $SU(n)$ gauge group
with structure constants $f^{abc}$.
The theory is considered on the flat space-time
$\mbox{\bf R}^3$ equipped with the standard Euclidean metric
$g_{\mu\nu}=\mbox{\rm diag}(1,1,1)$.
The totally antisymmetric
tensor $\epsilon^{\mu\nu\rho}$ is defined by the convention
$\epsilon^{012}=1$.
Finally,
$$D_\mu^{ab} \left[ A\right] =\partial _\mu \delta ^{ab}-f^{abc}A_\mu ^c$$
is the covariant derivative and
$\lambda $ is an arbitrary gauge fixing parameter.
In eq. (\ref{action}) the Coulomb gauge constraint is weakly imposed
and the proper Coulomb gauge fixing\footnotemark{}\footnotetext{
From now on, middle latin letters like $i,j,k,\ldots=1,2$ will indicate
space indices.}, given by:
\begin{equation}
\partial _iA^{a\,i}=0 \label{gaugefix}\qquad\qquad\qquad i=1,2
\end{equation}
is recovered by setting $\lambda=0$ in eq. (\ref{gf}).
The partition function of the CS field theory
described by eq. (\ref{action}) is:
\begin{equation}
Z=\int DAD\overline{c}Dce^{iS_{CS}} \label{partfunct}
\end{equation}
and it is invariant under the BRST transformations listed below:
\begin{eqnarray}
\delta A_\mu ^a &=&\left( D_\mu \left[ A\right] c\right) ^a \label{brst} \\
\delta \overline{c}^a &=&\frac s{4\pi \lambda }\partial _iA^{a\,i} \nonumber
\\
\delta c^a &=&\frac 12f^{abc}c^bc^c \nonumber
\end{eqnarray}
From (\ref{action}), it is possible to derive the Feynman rules of C--S
field theory in the Coulomb gauge.
The components of the gauge field
propagator
$G_{\mu \nu }^{ab}(p)$ in the Fourier space
are given by:
\begin{equation}
G_{jl}^{ab}(p)=
-\delta ^{ab}\frac{4\pi \lambda }s\frac{p_jp_l}{{\mbox{\rm\bf p}}^4}
\label{gjl}
\end{equation}
\begin{equation}
G_{j0}^{ab}(p)=
\delta ^{ab}
\left(
\frac{4\pi }s\epsilon _{0jk}\frac{p^k}{\mbox{\rm\bf p}^2}-
\frac{4\pi \lambda }s
\frac{p_jp_0}{\mbox{\rm\bf p}^4}
\right)
\label{gjo}
\end{equation}
\begin{equation}
G_{0j}^{ab}(p)=
-\delta ^{ab}
\left(
\frac{4\pi }s
\epsilon _{0jk}
\frac{p^k}{\mbox{\rm\bf p}^2}+
\frac{4\pi \lambda }s\frac{p_0p_j}{\mbox{\rm\bf p}^4}\right)
\label{goj}
\end{equation}
\begin{equation}
G_{00}^{ab}(p)=
-\delta^{ab}
\frac{4\pi \lambda }s
\frac{p_0^2}{\mbox{\rm\bf p}^4}
\label{goo}
\end{equation}
with $\mbox{\rm\bf p}^2=p_1^2+p_2^2$, while the
ghost propagator $G_{gh}^{ab}(p)$ reads as follows:
\begin{equation}
G_{gh}^{ab}(p)=\frac{\delta ^{ab}}{\mbox{\rm\bf p}^2} \label{ggh}
\end{equation}
Finally, the three gluon vertex and the ghost-gluon vertex
are respectively given by:
\begin{equation}
V_{\mu _1\mu _2\mu _3}^{a_1a_2a_3}(p,q,r)=-\frac{is}{3!4\pi }(2\pi
)^3f^{a_1a_2a_3}\epsilon ^{\mu _1\mu _2\mu _3}\delta ^{(3)}(p+q+r)
\label{aaa}
\end{equation}
and
\begin{equation}
V_{\mathrm{gh\thinspace }i _1}^{a_1a_2a_3}(p,q,r)=-i(2\pi )^3\left(
q\right) _{i_1}f^{a_1a_2a_3}\delta ^{(3)}(p+q+r) \label{acc}
\end{equation}
In the above equation we have only given the spatial components of the
ghost-gluon vertex.
From eq. (\ref{fp}), it is in fact easy to realize that
in the Coulomb gauge
its temporal component is zero.
At this point, a regularization should be introduced in order to handle the
singularities that may arise in the computations of the Feynman diagrams.
The potential divergences are
of three kinds.
\begin{enumerate}
\item Ultraviolet divergences (UV). The naive power counting gives the
following degree of divergence $\omega (G)$ for a given Feynman diagram $G$:
\begin{equation}
\omega (G)=3-\delta -E_B-\frac{E_G}2 \label{napoco}
\end{equation}
with \footnotemark{}\footnotetext{We use here the same notations of ref.
\cite{itzu}}
\begin{enumerate}
\item $\delta =$ number of momenta which are not integrated inside the loops
\item $E_B=$ number of external gluonic legs
\item $E_G=$ number of external ghost legs
\end{enumerate}
Eq. (\ref{napoco}) shows that UV divergences are possible in the
two and three point functions, both with gluonic or ghost
legs. Moreover, there is also a possible logarithmic divergence in the case
of the four point interaction among two gluons and two ghosts.
In principle, one would have to introduce a regularization for these divergences,
but in practical calculations this is not necessary.
As a matter of fact, we will see in Section 3 that there are no
UV divergences in the quantum corrections of the Green functions.
\item Infrared (IR) divergences.
In the pure C--S field theories \cite{hageno} there are no problems
of infrared divergences.
As a matter of fact, it can be seen from the Feynman rules written above that
the IR behavior of the gluonic propagator is
very mild ($\sim \frac 1{|\mbox{\rm\bf p}|}$). The
potentially more dangerous
IR singularities due to the ghost propagator are screened by the presence of
the external derivative in the ghost--gluon vertex (\ref{acc}).
However, we notice that IR divergences appear
in the interacting case.
For instance, in three dimensional quantum electrodynamics coupled
with a C--S term, the IR divergences have been discussed in refs.
\cite{jao,jat}.
\item Spurious divergences.
These singularities appear because the propagators
(\ref{gjl})--(\ref{ggh}) are undamped in the time direction and are
typical of the Coulomb gauge.
To regularize spurious divergences
of this kind,
it is
sufficient to introduce a cutoff $\Lambda _0>0$ in the domain of integration
over the variable $p_0$:
\begin{equation}
\int_{-\infty }^\infty dp_0\rightarrow \int_{-\Lambda _0}^{\Lambda _0}dp_0
\label{spureg}
\end{equation}
The physical situation is recovered in the limit $\Lambda _0\rightarrow
\infty $.
\end{enumerate}
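The power counting of eq. (\ref{napoco}) can be tabulated for the amplitudes quoted above (a small bookkeeping sketch, not part of the original analysis):

```python
# Superficial degree of divergence in the Coulomb gauge, eq. (napoco):
#   omega(G) = 3 - delta - E_B - E_G/2,
# with delta = number of momenta not integrated inside the loops,
# E_B = external gluon legs, E_G = external ghost legs.
from fractions import Fraction

def omega(delta, E_B, E_G):
    """Naive degree of divergence of a Feynman diagram G."""
    return 3 - delta - E_B - Fraction(E_G, 2)

# the potentially divergent cases mentioned in the text (all with delta = 0)
cases = {
    "gluon 2-point":       omega(0, 2, 0),  # 1: power divergent
    "gluon 3-point":       omega(0, 3, 0),  # 0: logarithmically divergent
    "ghost 2-point":       omega(0, 0, 2),  # 2: power divergent
    "2 gluons + 2 ghosts": omega(0, 2, 2),  # 0: logarithmically divergent
}
for name, w in cases.items():
    print(f"{name}: omega = {w}")
```

Nonnegative values flag the potential UV divergences listed above; as shown in Section 3, none of them actually occurs in the amplitudes.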
\noindent
As we will see, this regularization does not cause
ambiguities in the evaluation of the
radiative corrections at any loop order.
In fact, the integrations over the temporal components of the
momenta inside the loops turn out to be trivial and do not interfere
with
the integrations over the spatial components.
\section{Perturbative Analysis}
In this Section we compute the $n-$point
correlation functions of C--S field theories at any loop order.
To this purpose, we choose for simplicity
the proper Coulomb gauge, setting
$\lambda=0$ in eq. (\ref{gf}).
In this
gauge the gluon-gluon propagator has only two nonvanishing components:
\begin{equation}
G_{j0}^{ab}(p)=
-G_{0j}^{ab}(p)=
\delta^{ab}
\frac{4\pi }s
\epsilon _{0jk}\frac{p^k}{
\mbox{\rm\bf p}^2} \label{gjopcg}
\end{equation}
The presence of $p_0$ remains confined in the vertices
(\ref{aaa})--(\ref{acc})
and it is trivial because it is concentrated in the Dirac $\delta $%
--functions expressing momentum conservation. As a consequence,
the C--S field theory can be considered as a two dimensional model.
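As an elementary consistency check (not in the original text), one can verify with exact arithmetic that the surviving components (\ref{gjopcg}) are transverse in the spatial momentum, $p^{j}G_{j0}^{ab}(p)=0$, in agreement with the gauge condition (\ref{gaugefix}); the overall factor $4\pi\delta^{ab}/s$ is set to one:

```python
# Check that G_{j0}(p) ~ eps_{0jk} p^k / |p|^2 satisfies p^j G_{j0}(p) = 0.
from fractions import Fraction
import random

eps = {(1, 2): 1, (2, 1): -1}  # eps_{0jk} for spatial j, k in {1, 2}

def G_j0(j, p):
    """Spatial-temporal propagator component, with 4*pi*delta^{ab}/s -> 1."""
    p2 = p[1] ** 2 + p[2] ** 2
    return sum(Fraction(eps.get((j, k), 0) * p[k], p2) for k in (1, 2))

random.seed(0)
for _ in range(100):
    p = {1: random.randint(-9, 9), 2: random.randint(-9, 9)}
    if p[1] == 0 and p[2] == 0:
        continue
    # transversality: p^j G_{j0} = eps_{0jk} p^j p^k / |p|^2 = 0 exactly
    assert sum(p[j] * G_j0(j, p) for j in (1, 2)) == 0
print("transversality of the Coulomb-gauge propagator verified")
```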
First of all we will discuss the one loop calculations.
The following observation greatly reduces the number of diagrams to be
evaluated:
\begin{description}
\item[Observation:]
Let $G^{(1)}$ be a one particle irreducible (1PI) Feynman diagram containing
only one closed loop. Then all the
internal lines of $G^{(1)}$ are either ghost or gluonic lines.
\end{description}
To prove the above observation, we notice that the only way to have
a gluonic line preceding or following a ghost line inside a loop
is to exploit the ghost--gluon vertex (\ref{acc}).
Thus, if a one loop diagram $G^{(1)}$ with both gluonic and ghost
legs exists, the situation illustrated in fig. \ref{figtw} should occur,
in which at least one gluonic tree diagram
$T_{\nu _1\mu _2...\mu _{n-1}\nu _n}$ is connected to the rest of $G^{(1)}$
by gluing two of its legs, those carrying the indices $\nu_1$ and
$\nu_2$ in the figure, to two ghost--gluon vertices
$V_{\mathrm{gh\thinspace }\nu_1}$ and
$V_{\mathrm{gh\thinspace }\nu_n}$.
At this point, we recall that these vertices have only spatial components
$V_{\mathrm{gh\thinspace }i_1}$ and
$V_{\mathrm{gh\thinspace }i_n}$, $i_1,i_n=1,2$.
As a consequence, since the contractions between gluonic legs are
performed with the propagator (\ref{gjopcg}), it is clear that the necessary
condition for which the whole diagram $G^{(1)}$ does not vanish
is that $\nu_1=\nu_n=0$. On the other hand, this is not possible, as
is shown in fig. \ref{figtr}. In fact, because of the presence of an
$\epsilon^{\mu\nu\rho}$ tensor in the gluonic vertex (\ref{aaa}), the most
general gluonic tree diagrams with $n$ legs
$T_{\nu _1\mu _2...\mu _{n-1}\nu _n}$ must have at least
$n-1$ spatial indices in order to be different from zero.
This proves the observation.
\begin{figure}
\vspace{1.5truein}
\special{psfile=fig2rrr.eps hscale=61 vscale=61 hoffset=0 voffset=0}
\vspace{0.25in}
\caption{The figure shows the only possible way in which a tree diagram
$T_{\nu_1\mu_2\ldots\mu_{n-1}\nu_n}$ with
$n$ gluonic legs can be glued to another tree
diagram containing also ghost legs in order to build a
one loop diagram with mixed ghost and gluonic internal lines.}
\label{figtw}
\vspace{1.9truein}
\special{psfile=fig3.eps hscale=61 vscale=61 hoffset=0 voffset=0}
\vspace{0.25in}
\caption{This figure shows that in an arbitrary tree diagram
$T_{\nu_1\nu_2\ldots\nu_{n-1}\nu_n}$
constructed in terms of the
gauge fields propagator (\ref{gjopcg}) and the
three gluon vertex (\ref{aaa}), only one component in the
space-time indices $\nu_i$,
$i=1,\ldots,n$, can be temporal.}
\label{figtr}
\end{figure}
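The index-counting argument behind fig. \ref{figtr} can also be checked by brute force, at least for chain-shaped trees (a sketch, not part of the original proof): model the support of the vertex (\ref{aaa}) by requiring the three indices of $\epsilon^{\mu\nu\rho}$ to be distinct, and the support of the propagator (\ref{gjopcg}) by requiring it to couple exactly one temporal and one spatial index; every surviving index assignment then carries exactly one temporal external index:

```python
from itertools import product

EPS_OK = lambda idx: sorted(idx) == [0, 1, 2]   # support of eps^{mu nu rho}
PROP_OK = lambda a, b: (a == 0) != (b == 0)     # G couples temporal <-> spatial

def external_temporal_counts(n_vertices):
    """For a chain of eps-vertices joined by Coulomb-gauge propagators,
    collect the number of temporal indices on the external legs of every
    nonvanishing index assignment."""
    counts = set()
    for assignment in product(range(3), repeat=3 * n_vertices):
        verts = [assignment[3 * i:3 * i + 3] for i in range(n_vertices)]
        if not all(EPS_OK(v) for v in verts):
            continue
        # join leg 2 of vertex i with leg 0 of vertex i+1 by a propagator
        if not all(PROP_OK(verts[i][2], verts[i + 1][0])
                   for i in range(n_vertices - 1)):
            continue
        # external legs: leg 0 of the first vertex, leg 1 of all, leg 2 of the last
        ext = [verts[0][0]] + [v[1] for v in verts] + [verts[-1][2]]
        counts.add(sum(1 for mu in ext if mu == 0))
    return counts

for V in (1, 2, 3):
    assert external_temporal_counts(V) == {1}
print("every nonvanishing chain has exactly one temporal external index")
```

The same counting extends to arbitrary trees: each vertex supplies one temporal index and each internal line absorbs exactly one, leaving $V-(V-1)=1$ temporal index on the external legs.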
An important consequence is that,
at one loop, the only non--vanishing diagrams occur when all the external
legs are gluonic. Hence we have to evaluate only the diagrams describing the
scattering among $n$ gluons.
This can be done as follows. First of all, we
consider the diagrams
with internal gluonic lines.
After suitable
redefinitions of the indices and of the momenta, it is possible to see that
their total contribution is given by:
\begin{eqnarray}
V_{i_1...i_n}^{a_1...a_n}
\left(
1;p_1,...,p_n
\right)=
C\left[ -i
\left(2\pi
\right)^3
\right]^n
\frac{n!(n-1)!}2\delta^{(2)}
(\mbox{\rm\bf p}_1+...+
\mbox{\rm\bf p}_n) && \label{vone} \\
f^{a_1b_1^{\prime }c_1^{\prime }}f^{a_2b_2^{\prime }b_1^{\prime
}}...f^{a_nc_1^{\prime }b_{n-1}^{\prime }}\int d^2\mbox{\rm\bf q}_1
\frac{\left[
q_1^{i_1}...q_n^{i_n}+q_1^{i_2}\ldots q_j^{i_{j+1}}
\ldots q_{n-1}^{i_n}q_n^{i_1}\right] }{
\mbox{\rm\bf q}%
_1^2...\mbox{\rm\bf q}_n^2} && \nonumber
\end{eqnarray}
where $C=\left( 2\Lambda _0\right) ^{2n}$ is a finite constant coming from
the integration over the zeroth components of the momenta and
\begin{equation}
\begin{array}{cccc}
q_2= & q_1+p_1+p_n+p_{n-1}+ & \ldots & +p_3 \\
\vdots & \vdots&\ddots&\vdots \\
q_j=&q_1+p_1+p_n+p_{n-1}+&\ldots&+p_{j+1} \\
\vdots & \vdots&& \\
q_n=&q_1+p_1&&
\end{array}
\label{qform}
\end{equation}
for $j=2,\ldots,n-1$.
As it is possible to see from eq. (\ref{vone}),
the only nonvanishing components of $V_{\mu
_1...\mu _n}^{a_1...a_n}\left( 1;p_1,...,p_n\right) $ are those for which $\mu
_1=i_1,$ $\mu _2=i_2,...,\mu _n=i_n$, i. e. all tensor indices
$\mu_1,\ldots,\mu_n$
are
spatial.
The case of the Feynman diagrams containing ghost internal lines
is more complicated. After some work, it is possible to
distinguish two different contributions to the Green functions
with $n$ gluonic legs:
\begin{eqnarray}
V_{i_1...i_n}^{a_1...a_n}\left( 2a;p_1,...,p_n\right)=-C\left[ -i\left(
2\pi \right) ^3\right]^n\frac{n!(n-1)!}2
&& \nonumber \\
\delta^{(2)}(\mbox{\rm\bf p}_1+...+%
\mbox{\rm\bf p}_n)
f^{a_1b_1^{\prime }c_1^{\prime }}f^{a_2b_2^{\prime }b_1^{\prime
}}...f^{a_nc_1^{\prime }b_{n-1}^{\prime }}\int d^2\mbox{\rm\bf q}_1\frac{%
q_1^{i_1}...q_n^{i_n}}{\mbox{\rm\bf q}_1^2...
\mbox{\rm\bf q}_n^2}\label{vtwoa}&&
\end{eqnarray}
and
\begin{eqnarray}
V_{i_1...i_n}^{a_1...a_n}\left( 2b;p_1,...,p_n\right) =C\left( -1\right)
^{n-1}\left[ -i\left( 2\pi \right) ^3\right] ^n\frac{n!(n-1)!}2 &&
\nonumber \\
\delta ^{(2)}(\mbox{\rm\bf p}_1+...+\mbox{\rm\bf p}_n)
f^{a_1b_1^{\prime }c_1^{\prime
}}f^{a_2b_2^{\prime }b_1^{\prime }}...f^{a_nc_1^{\prime }b_{n-1}^{\prime
}}\int d^2\mbox{\rm\bf q}'_1\frac{(q_1')^{i_1}...(q_n')^{i_n}}
{(\mbox{\rm\bf q}_1')^2...
(\mbox{\rm\bf q}_n')^2} &&
\label{vtwob}
\end{eqnarray}
where the constant $C$ is the result of the integration over the
zeroth components
of the momenta and it is the same of eq. (\ref{vone}).
Apart from an overall sign, eqs. (\ref{vtwoa}) and (\ref{vtwob}) differ
also by the
definitions of the momenta. In (\ref{vtwoa}) the variables $q_2,...,q_n$ are in
fact given by eq. (\ref{qform}). In eq. (\ref{vtwob}) we have instead:
\begin{equation}
\begin{array}{cccc}
q_2'=&q_1'+p_1&& \\
\vdots & \vdots&& \\
q_j'=&q_1'+p_1+&\ldots&+p_{j-1} \\
\vdots & \vdots&\ddots&\vdots \\
q_n'=&q_1'+p_1+&\dots&+p_{n-1}
\end{array}
\label{qformb}
\end{equation}
for $j=2,\ldots,n-1$.
To compare eq. (\ref{vtwob}) with (\ref{vone}) and (\ref{vtwoa}) we
perform the change of variables
\begin{equation}
q_1=-q_1'-p_1
\label{shift}
\end{equation}
in eq. (\ref{vtwob}).
Exploiting eq. (\ref{shift}) and
the relation $p_1+...+p_n=0$, we obtain:
\[
V_{i_1...i_n}^{a_1...a_n}\left( 2b;p_1,...,p_n\right) =
-C[-i(2\pi)^3]^n
\frac{n!(n-1)!}2f^{a_1b_1^{\prime
}c_1^{\prime }}f^{a_2b_2^{\prime }b_1^{\prime }}...f^{a_nc_1^{\prime
}b_{n-1}^{\prime }}
\]
\begin{equation}
\delta ^{(2)}(\mbox{\rm\bf p}_1+...+\mbox{\rm\bf p}_n)
\int d^2\mbox{\rm\bf q}_1
\frac{q_n^{i_1}q_1^{i_2}\ldots q_j^{i_{j+1}}\ldots q_{n-1}^{i_n}}
{(\mbox{\rm\bf q}_1')^2...
(\mbox{\rm\bf q}_n')^2}\label{finvtb}
\end{equation}
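The effect of the shift (\ref{shift}) can be verified explicitly with exact integer arithmetic (a sketch for $n=4$; the vectors below are hypothetical sample momenta): using $p_1+\ldots+p_n=0$, the primed momenta of eq. (\ref{qformb}) become minus the unprimed momenta of eq. (\ref{qform}) up to a cyclic relabeling, so the product of the $(\mathbf{q}_j')^2$ coincides with $\mathbf{q}_1^2\cdots\mathbf{q}_n^2$:

```python
import random

random.seed(1)

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

def neg(v):
    return tuple(-c for c in v)

# random 2d integer momenta with p1 + p2 + p3 + p4 = 0
p1, p2, p3 = [(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(3)]
p4 = neg(add(p1, p2, p3))
q1p = (random.randint(-9, 9), random.randint(-9, 9))  # primed loop momentum q_1'

# primed routing, eq. (qformb): q_j' = q_1' + p_1 + ... + p_{j-1}
qp = [q1p, add(q1p, p1), add(q1p, p1, p2), add(q1p, p1, p2, p3)]
# the shift, eq. (shift): q_1 = -q_1' - p_1
q1 = neg(add(q1p, p1))
# unprimed routing, eq. (qform): q_2 = q1+p1+p4+p3, q_3 = q1+p1+p4, q_4 = q1+p1
q = [q1, add(q1, p1, p4, p3), add(q1, p1, p4), add(q1, p1)]

# q_1' = -q_4 and q_j' = -q_{j-1} for j = 2, 3, 4
assert qp[0] == neg(q[3])
assert all(qp[j] == neg(q[j - 1]) for j in (1, 2, 3))
# hence the multisets of squared momenta coincide
sq = lambda v: v[0] ** 2 + v[1] ** 2
assert sorted(map(sq, qp)) == sorted(map(sq, q))
print("shift maps primed momenta onto (minus) the unprimed ones")
```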
where the variables $q_2,\ldots,q_n$ are now defined as in eq. (\ref{qform}).
At this point we can sum
eqs. (\ref{vone}),
(\ref{vtwoa}) and (\ref{finvtb}) together.
It is easy to realize that the total result is zero, i. e.:
\begin{equation}
V_{i_1...i_n}^{a_1...a_n}\left( 1;p_1,...,p_n\right)
+V_{i_1...i_n}^{a_1...a_n}\left( 2a;p_1,...,p_n\right)
+V_{i_1...i_n}^{a_1...a_n}\left( 2b;p_1,...,p_n\right) =0 \label{finres}
\end{equation}
Still, it is not possible to conclude from eq. (\ref{finres}) that
there are no radiative corrections at one loop in
C--S field theory.
Let us remember in fact that
eq. (\ref{finres}) has been obtained from eq. (\ref{vtwob}) after performing
the shift of variables
(\ref{shift}). This could be dangerous
if there are unregulated
divergences.
However, it is not difficult to verify that
each of the integrals appearing
in the right hand sides of eqs. (\ref{vone}), (\ref{vtwoa})
and (\ref{vtwob}) is IR and UV finite for $n\ge 3$.
Only the case $n=2$ needs some more care.
Summing together eqs. (\ref{vone}), (\ref{vtwoa}) and (\ref{finvtb})
for $n=2$, we obtain the following result:
\[
V_{ij}^{ab}\left( 1;p_1,p_2\right) +V_{ij}^{ab}\left(
2a;p_1,p_2\right) +V_{ij}^{ab}\left( 2b;p_1,p_2\right) =
\]
\begin{equation}
\left( 2\pi \right) ^6\left( 2\Lambda _0\right) ^2\,n\,\delta ^{ab}\delta
^{(2)}(\mbox{\rm\bf p}_1+\mbox{\rm\bf p}_2)
\int d^2\mbox{\rm\bf q}\frac{\left[ q_{i}
(p_1)_{j}-q_{j}(p_1) _{i}
\right] }
{\mbox{\rm\bf q}^2\left( \mbox{\rm\bf q}+\mbox{\rm\bf p}_1\right) ^2}
\label{cru}
\end{equation}
where we have put $q_1'=q_1=q$.
As we see, the integrand appearing
in the rhs of (\ref{cru})
is both IR and UV
finite. Moreover,
a simple computation shows that
the integral over $\mbox{\rm\bf q}$
is zero without the need of the shift (\ref{shift}).
As a consequence, there are no
contributions to the Green functions at one loop.
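The vanishing of the integral in eq. (\ref{cru}) can also be checked numerically (an illustrative sketch, not part of the original argument). For $\mathbf{p}_1=(1,0)$ the integrand is $-q_y/[\mathbf{q}^2(\mathbf{q}+\mathbf{p}_1)^2]$, odd under $q_y\to-q_y$ with an even denominator, so the contributions cancel pairwise on any grid symmetric about the $q_x$-axis:

```python
# Numerical check that int d^2q [q_x p_y - q_y p_x] / (q^2 (q+p)^2) vanishes
# for p = (1, 0); the grid is symmetric in q_y and offset so that the points
# q = 0 and q = -p (where single factors are singular) are never hit.
def integrand(qx, qy, px=1.0, py=0.0):
    q2 = qx * qx + qy * qy
    qp2 = (qx + px) ** 2 + (qy + py) ** 2
    return (qx * py - qy * px) / (q2 * qp2)

h = 0.05                   # grid spacing
total = 0.0
steps = range(-400, 400)   # covers the square [-20, 20]^2
for i in steps:
    qx = (i + 0.5) * h
    for j in steps:
        qy = (j + 0.5) * h  # offset: qy is never exactly 0
        total += integrand(qx, qy) * h * h

print(f"integral over symmetric grid: {total:.3e}")
assert abs(total) < 1e-7
```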
Now we are ready to consider the higher order corrections.
At two loop, a general Feynman diagram $G^{(2)}$ can be obtained
contracting two legs of a tree diagram $G^{(0)}$ with
two legs of a one loop diagram $G^{(1)}$.
As previously seen, the latter have only gluonic
legs and their tensorial indices are all spatial.
Consequently, in order to perform the contractions by means of the propagator
(\ref{gjopcg}), there should exist one component of
$G^{(0)}$ with at least two temporal indices,
but this is impossible. To convince
oneself of this fact, it
is sufficient to look at fig. \ref{figtr} and the related comments.
The situation does not improve
if we build $G^{(0)}$ exploiting also the ghost-gluon vertex
(\ref{acc}), because it has no temporal component.
As a consequence, all the Feynman graphs vanish identically at two loop order.
Let us notice that it is possible to verify their vanishing
directly, since
the number of two loop diagrams
is relatively small in the Coulomb gauge and one has just to contract the
space-time indices without performing the integrations over the internal
momenta.
However, this
procedure is rather long and will not be reported here.
Coming to the higher order computations, we notice that
a diagram with $N+1$ loops $G^{(N+1)}$
has at least
one subdiagram $G^{(N)}$
containing $N$ loops.
Supposing that $G^{(N)}$ is identically equal to
zero because it cannot be constructed with the
Feynman rules (\ref{ggh})--(\ref{acc}) and (\ref{gjopcg}), also $G^{(N+1)}$
must be zero.
As we have seen above, there are no Feynman diagrams for $N=2$.
This is enough to prove by induction that
the C--S field theories have no radiative corrections
in the Coulomb gauge
for any value of $N$.
\section{Conclusions}
In this paper we have proved with explicit computations that the C--S field
theories do not have quantum corrections in the Coulomb gauge.
At two loop order and beyond, this is a trivial consequence of the
fact that it is impossible to construct nonzero Feynman diagrams
starting from the vertices and propagators given in eqs.
(\ref{ggh})--(\ref{acc}) and (\ref{gjopcg}).
At one loop, instead, nontrivial cancellations occur
between the different diagrams.
We have also seen that the perturbative
expansion of the Green functions is not affected by
UV or IR divergences.
Only the spurious singularities are present, which are related to the fact
that the propagators are undamped in the time direction.
They are similar to the singularities observed in the four dimensional
Yang--Mills field theories
\cite{taylor}, but in the C--S case appear in a milder form.
In fact, after the regularization (\ref{spureg}), their
contribution at any loop order reduces to
a factor in the radiative
corrections and does not influence
the remaining calculations.
Therefore, the results obtained here are regularization
independent.
Moreover, the vanishing of the quantum contributions described in Section 3
is a peculiarity of the Coulomb gauge that does not strictly depend on the
fact that the C--S field theories are topological.
An analogous situation occurs in the light cone gauge in the presence of a
boundary. In that case, radiative corrections arise due to the interactions
of the fields with the boundary, but each Feynman diagram corresponding to
these interactions vanishes identically \cite{empi}.
In summary, our study indicates that the Coulomb gauge is a
convenient and reliable gauge fixing, especially in the
perturbative applications of C--S field theory.
Let us remember that, despite the fact that the theory does not
contain local degrees of freedom, the perturbative calculations
play a relevant role, for instance in the computations of knot invariants
\cite{alr}, \cite{witten}--\cite{axelrod}.
Contrary to what happens using the covariant gauges,
where it becomes more and
more difficult to evaluate the radiative corrections
as the loop number increases \cite{alr,gmm,chaichen},
in the Coulomb gauge
only the tree level contributions to the Green
functions survive. This feature is
particularly useful in the
case of non-flat manifolds, where the momentum representation does not exist.
For instance, Feynman rules analogous to those given in
eqs. (\ref{gjl})--(\ref{acc})
have been derived also on the compact Riemann surfaces
\cite{ffunp}.
In the future, besides the applications in knot theory, we plan to extend
our work to C--S field theories with non-compact gauge groups, in order
to include the theory of quantum gravity in $2+1$ dimensions.
Moreover, most of the pathologies that seem to afflict the four dimensional
gauge field theories, like spurious
and infrared divergences, are also present in the C--S field
theories,
but in a milder form. As a consequence, the latter can be considered
as a good laboratory in order to study their possible remedies.
For example, it would be interesting
to apply to the Yang--Mills case
the regularization (\ref{spureg}) introduced here for the spurious
singularities.
Let us
notice that a different regularization
has been recently proposed in \cite{leiwil}.
Finally, the present analysis is limited to the pure
C--S field theories and more investigations
are necessary for the interacting case.
Until now, only the models based on abelian C--S field theory
have been studied in detail, in particular
the
so-called Maxwell-Chern-Simons field theory, whose consistency
in the Coulomb gauge
has been checked with several tests \cite{devone}.
\section{Introduction}
Geometrically distinguished families of curves on a skew ruled surface in the
Euclidean space $\mathbb{R}^{3}$ have been studied by a number of authors and from many points of view. A
range of results appears when one requires that the curves of the considered
family possess an additional property. The present paper contributes to this
field of themes. We consider special families of curves on a skew ruled
surface and suppose that the normal curvature along these curves has a
concrete form. Our aim is to find the type of all ruled surfaces with the
mentioned property and to classify them. The results are assembled in the
table at the end of the paper.\smallskip

To set the stage for this work the classical notation of ruled surface theory
is briefly presented; for this purpose \cite{Hoschek} is used as a general
reference. In the Euclidean space $\mathbb{R}^{3}$ let $\Phi$ be a regular ruled surface without torsal rulings, determined
on $G:=I\times\mathbb{R}$ ($I\subset\mathbb{R}$ an open interval) and of class $C^{3}$. $\Phi$ can be expressed in terms of
the striction line $\boldsymbol{s}=\boldsymbol{s}(u)$ and the unit vector
field $\boldsymbol{e}(u)$ pointing along the rulings as
\begin{equation}
\boldsymbol{x}(u,v)=\boldsymbol{s}(u)+v\,\boldsymbol{e}(u),\quad u\in I,\;v\in\mathbb{R}.\label{1}
\end{equation}
Moreover we can choose the parameter $u$ to be the arclength along the
spherical curve $\boldsymbol{e}(u)$. Putting $f'(u)=\frac{df}{du}$ for a
differentiable function $f(u)$ we have
\begin{equation}
\langle\boldsymbol{s}'(u),\,\boldsymbol{e}'(u)\rangle=0,\quad\left\vert\boldsymbol{e}'(u)\right\vert=1\quad\forall\;u\in I,\label{2}
\end{equation}
where $\langle$ , $\rangle$ denotes the standard inner product in $\mathbb{R}^{3}$. The \emph{conical curvature} $k(u)$, the \emph{parameter of
distribution} $\delta(u)$ and the \emph{striction} $\sigma(u)$ of the surface
$\Phi$ are given by
\[
k(u)=(\boldsymbol{e}(u),\boldsymbol{e}'(u),\boldsymbol{e}''(u)),\quad
\delta(u)=(\boldsymbol{e}(u),\boldsymbol{e}'(u),\boldsymbol{s}'(u)),\quad
\sigma(u):=\sphericalangle(\boldsymbol{e}(u),\boldsymbol{s}'(u)),
\]
where
\[
-\frac{\pi}{2}<\sigma\leq\frac{\pi}{2}\quad\text{and}\quad\operatorname*{sign}\sigma=\operatorname*{sign}\delta.
\]
The functions $k(u)$, $\delta(u)$ and $\sigma(u)$ constitute a complete system of
invariants of the surface $\Phi$ (\cite{Hoschek}, p.~19).\medskip\newline The
components $g_{ij}$ and $h_{ij}$ of the first and the second fundamental
tensors in (local) coordinates $u^{1}:=u$, $u^{2}:=v$ are the following:
\begin{equation}
\left\{
\begin{array}
[c]{c}
(g_{ij})=\left(
\begin{array}
[c]{cc}
v^{2}+\delta^{2}\left( \lambda^{2}+1\right) & \delta\,\lambda\\
\delta\,\lambda & 1
\end{array}
\right)\\
(h_{ij})=\frac{1}{w}\left(
\begin{array}
[c]{cc}
-\left[ k\,v^{2}+\delta'\,v+\delta^{2}\left( k-\lambda\right) \right] & \delta\\
\delta & 0
\end{array}
\right)
\end{array}
\right. ,\label{3}
\end{equation}
where $w:=\sqrt{v^{2}+\delta^{2}}$ and $\lambda:=\cot\sigma$. The Gaussian
curvature $K$ and the mean curvature $H$ of $\Phi$ are given respectively by
\begin{equation}
K=\frac{-\delta^{2}}{w^{4}},\quad H=-\frac{k\,v^{2}+\delta'\,v+\delta^{2}\left( k+\lambda\right) }{2\,w^{3}}.\label{4}
\end{equation}
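Formulas (\ref{4}) can be recovered directly from the fundamental tensors (\ref{3}) through $K=\det(h_{ij})/\det(g_{ij})$ and $H=(g_{11}h_{22}-2g_{12}h_{12}+g_{22}h_{11})/(2\det(g_{ij}))$. The following sketch (not part of the original text) performs this check with exact rational arithmetic, writing $h_{ij}=\hat h_{ij}/w$ and comparing the rational quantities $K\,w^{4}$ and $H\,w^{3}$:

```python
from fractions import Fraction as F
import random

random.seed(2)
for _ in range(50):
    # random rational sample values for v, delta, lambda, k, delta'
    v, d, lam, k, dp = [F(random.randint(-9, 9)) for _ in range(5)]
    if d == 0:
        continue  # skew surface: delta != 0
    w2 = v * v + d * d  # w^2
    # first fundamental tensor g_ij, eq. (3)
    g11, g12, g22 = v * v + d * d * (lam * lam + 1), d * lam, F(1)
    # second fundamental tensor h_ij = hhat_ij / w, eq. (3)
    h11, h12, h22 = -(k * v * v + dp * v + d * d * (k - lam)), d, F(0)
    det_g = g11 * g22 - g12 * g12
    assert det_g == w2                      # det(g) = w^2
    # K = det(h)/det(g) = det(hhat)/(w^2 det(g)): compare K * w^4
    K_w4 = (h11 * h22 - h12 * h12) * w2 * w2 / (w2 * det_g)
    assert K_w4 == -d * d                   # K = -delta^2 / w^4, eq. (4)
    # H = (g11 h22 - 2 g12 h12 + g22 h11)/(2 w det(g)): compare H * w^3
    H_w3 = (g11 * h22 - 2 * g12 * h12 + g22 * h11) * w2 / (2 * det_g)
    assert H_w3 == -(k * v * v + dp * v + d * d * (k + lam)) / 2
print("eq. (4) verified from the fundamental tensors (3)")
```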
Skew ruled surfaces $\Phi$, whose osculating quadrics are rotational
hyperboloids, are called \emph{Edlinger-surfaces} \cite{Edlinger},
\cite{Hoschek}. Necessary and sufficient conditions for a ruled surface $\Phi$
to be an Edlinger-surface are the following (\cite{Brauner}; p.103)
\[
\delta \
\acute{
\,=k\, \lambda+1=0.
\]
This is a ruled surface of constant parameter of distribution whose striction
line $\boldsymbol{s}(u)$ is a line of curvature. The curves of \emph{constant
striction distance}, i.e. the curves $v=constant$, are in this case lines of
curvature of $\Phi$. The other family of the lines of curvature is determined
b
\[
\lbrack k^{2}\,v^{2}+\delta^{2}(k^{2}+1)]\,du-\delta \,k\,dv=0.
\]
It is easily verified that the corresponding normal curvatures of the lines of
curvature (principal curvatures) are the following:
\begin{equation}
k_{1}=-k(u)\,w^{-1},\quad k_{2}=\frac{\delta^{2}(u)}{k(u)}\,w^{-3}. \label{5}
\end{equation}
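A quick numerical cross-check (added for illustration, not in the original text): on an Edlinger-surface ($\delta'=0$, $k\lambda+1=0$) the values (\ref{5}) must satisfy $k_{1}k_{2}=K$ and $k_{1}+k_{2}=2H$ with $K$ and $H$ from (\ref{4}):

```python
import math

# On an Edlinger-surface (delta' = 0, k*lambda + 1 = 0) the principal
# curvatures k1 = -k/w and k2 = delta^2/(k w^3) must reproduce the Gaussian
# and mean curvatures of (4): k1*k2 = K and k1 + k2 = 2H.
def edlinger_curvature_errors(k, delta, v):
    lam, dp = -1.0 / k, 0.0          # Edlinger conditions
    w = math.sqrt(v * v + delta * delta)
    k1, k2 = -k / w, delta**2 / (k * w**3)
    K = -delta**2 / w**4
    H = -(k * v * v + dp * v + delta**2 * (k + lam)) / (2 * w**3)
    return abs(k1 * k2 - K), abs(k1 + k2 - 2 * H)

for k, delta, v in [(0.5, 1.0, 0.3), (-1.2, 2.0, 1.7), (2.0, 0.4, -0.9)]:
    e_prod, e_sum = edlinger_curvature_errors(k, delta, v)
    assert e_prod < 1e-12 and e_sum < 1e-12
```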
In the rest of this paper \emph{only skew (non-developable) ruled surfaces of
the space} $\mathbb{R}^{3}$ are considered, \emph{with the parametrization} (\ref{1})
\emph{satisfying the conditions} (\ref{2}).
\section{The case of the principal curvatures}
The starting point of this section are the relations (\ref{5}). First, the
problem \emph{of finding all ruled surfaces one of whose principal curvatures has
the following form} is considered:
\begin{equation}
k_{i}=f(u)\,w^{n},\;n\in\mathbb{Z},\;f(u)\in C^{0}(I),\;i=1\; \text{or}\;2. \label{6}
\end{equation}
It is obvious that $f(u)\neq0\;$for all $u\in I$ since $\Phi$ is
non-developable.\smallskip
\noindent Using (\ref{3}) the normal curvature in direction $du:dv$ is found
to be
\begin{equation}
k_{N}=\frac{1}{w}\cdot \frac{-\left[ k\,v^{2}+\delta'\,v+\delta^{2}\left( k-\lambda \right) \right] \,du^{2}+2\, \delta\,du\,dv}{\left[ v^{2}+\delta^{2}\left( \lambda^{2}+1\right) \right]\,du^{2}+2\, \delta \, \lambda \,du\,dv+dv^{2}}\text{.} \label{7}
\end{equation}
Taking into account (\ref{6}) it follows that
\begin{align*}
& [f\,w^{n+1}\,[v^{2}+\delta^{2}(\lambda^{2}+1)]+k\,v^{2}+\delta'\,v+\delta^{2}(k-\lambda)]\,du^{2}\\
& +2\, \delta \,(f\, \lambda \,w^{n+1}-1)\,du\,dv+f\,w^{n+1}\,dv^{2}=0.
\end{align*}
This equation, which is of second order in $du:dv$, has \emph{exactly one solution} if and
only if its discriminant vanishes:
\begin{equation}
f^{2}\,w^{2n+4}+f\,[k\,v^{2}+\delta'\,v+\delta^{2}(k+\lambda)]\,w^{n+1}-\delta^{2}=0\quad \forall \;u\in I,\;v\in\mathbb{R}. \label{8}
\end{equation}
Let first be $n=0$. Then (\ref{8}) changes into
\[
f^{4}(v^{2}+\delta^{2})^{4}-2\,f^{2}\delta^{2}(v^{2}+\delta^{2})^{2}+\delta^{4}-f^{2}\,[k\,v^{2}+\delta'\,v+\delta^{2}(k+\lambda)]^{2}(v^{2}+\delta^{2})=0.
\]
On the left-hand side stands a polynomial of degree eight in $v$, which vanishes for
all $u\in I$ and all $v\in\mathbb{R}$. Comparing its coefficients with those of the zero polynomial it becomes
obvious that $f$ vanishes, which was previously excluded.\smallskip \newline
We distinguish now the following cases:\smallskip \newline \emph{Case I}: Let
$n\in\mathbb{Z}$ in (\ref{8}) be odd. Then we have
\begin{align*}
Q(v): & =f\,[k\,v^{2}+\delta'\,v+\delta^{2}(k+\lambda)](v^{2}+\delta^{2})^{\frac{n+1}{2}}\\
& +f^{2}(v^{2}+\delta^{2})^{n+2}-\delta^{2}=0\quad \forall \;u\in I,\;v\in\mathbb{R}.
\end{align*}
For $n\geq1$ the vanishing of the coefficient of $v^{2\left( n+2\right) }$,
which is the greatest power of $v$ of the polynomial $Q(v),$ implies $f=0$,
which is impossible.\smallskip \newline Let $n=-1$. Then
\[
Q(v)=f^{2}(v^{2}+\delta^{2})+f\,[k\,v^{2}+\delta'\,v+\delta^{2}(k+\lambda)]-\delta^{2}=0.
\]
The vanishing of the coefficients of the polynomial $Q(v)$ gives
\[
f=-k,\quad \delta'=k\, \lambda+1=0,
\]
therefore $\Phi$ is an Edlinger-surface.\smallskip \newline
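The asserted solution can be spot-checked numerically (an illustrative addition, not part of the proof): substituting $f=-k$, $\delta'=0$ and $k\lambda+1=0$ into the polynomial $Q(v)$ of the case $n=-1$ must give zero for every $v$:

```python
# Q(v) of the case n = -1 of equation (8); dp stands for delta'.
def Q_case_n_minus_1(f, k, dp, delta, lam, v):
    return (f * f * (v * v + delta * delta)
            + f * (k * v * v + dp * v + delta * delta * (k + lam))
            - delta * delta)

k, delta = 0.7, 1.3
f, dp, lam = -k, 0.0, -1.0 / k       # the asserted Edlinger solution
for v in [-2.0, -0.5, 0.0, 1.0, 3.5]:
    assert abs(Q_case_n_minus_1(f, k, dp, delta, lam, v)) < 1e-12
```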
\noindent Let $n=-3$ and $k_{1}$ the principal curvature having the form
(\ref{6}), i.e. $k_{1}=f(u)\,w^{-3}$. Then from (\ref{4}) for the other
principal curvature the following expression is obtained:
\[
k_{2}=f^{\ast}(u)\,w^{-1}\text{\quad with\quad}f^{\ast}(u):=\frac{-\delta^{2}(u)}{f(u)},
\]
so that a principal curvature of $\Phi$ has the form (\ref{6}), where $n=-1$.
As it was previously established $\Phi$ is an Edlinger-surface.\smallskip
\newline The case $n\leq-5$ leads to a contradiction as one can easily
confirm.\medskip \newline \emph{Case II}: Let $n\in\mathbb{Z}$ in (\ref{8}) be even. For $n=-2$ it follows from (\ref{8})
\[
Q(v):=f^{2}[k\,v^{2}+\delta'\,v+\delta^{2}(k+\lambda)]^{2}-(f^{2}-\delta^{2})^{2}(v^{2}+\delta^{2})=0\quad \forall \;u\in I,\;v\in\mathbb{R}.
\]
The vanishing of the coefficient $f^{2}k^{2}$ of $v^{4}$ implies $k=0$. From
the vanishing of the remaining coefficients of the polynomial $Q(v)$ the
outcome is
\[
f^{2}\, \delta'^{\,2}-(f^{2}-\delta^{2})^{2}=2f^{2}\, \delta^{2}\, \delta'\, \lambda=f^{2}\,\delta^{4}\,\lambda^{2}-\delta^{2}(f^{2}-\delta^{2})^{2}=0.
\]
We finally obtain $\delta'=\lambda=0$ and $f=\pm \, \delta$. Thus the surface $\Phi$ is a right
helicoid.\smallskip \newline The cases $n\geq2$ and $n\leq-4$ lead to
contradictions, as one can easily confirm.\smallskip \newline The results are
formulated as follows:
\begin{proposition}
\textit{Let }$\Phi \subset\mathbb{R}^{3}$\textit{ be a skew ruled }$C^{3}$\textit{-surface one of whose principal
curvatures has the form \emph{(\ref{6})}. Then one of the following
occurs:}\emph{\newline(a)}$\quad n=-1$\textit{, }$f(u)=-k(u)$\textit{ and
}$\Phi$\textit{ is an Edlinger-surface.\newline}\emph{(b)}$\quad
n=-2$\textit{, }$f(u)=\pm \, \delta(u)$\textit{ and }$\Phi$\textit{ is a right
helicoid.\newline}\emph{(c)}$\quad n=-3$\textit{, }$f(u)=\delta^{2}(u)\,k^{-1}(u)$\textit{ and }$\Phi$\textit{ is an Edlinger-surface.}
\end{proposition}
\noindent From this proposition follows the next
\begin{corollary}
\textit{Let }$\Phi \subset\mathbb{R}^{3}$\textit{ be a skew ruled }$C^{3}$\textit{-surface, whose principal
curvatures satisfy the relation}
\begin{equation}
\delta^{2}\,k_{1}^{3}+k^{4}\,k_{2}=0.\label{9}
\end{equation}
\textit{Then }$\Phi$ \textit{is an Edlinger-surface}.
\end{corollary}
\begin{proof}
By using (\ref{4}) and (\ref{9}) we obtain $k_{1}^{4}=k^{4}\,w^{-4}$, so that it is $k_{1}=\pm \,k\,w^{-1}$. From Proposition 1 it
follows $k_{1}=-k\,w^{-1}$. Thus the normal curvature $k_{1}$ has the required
form (\ref{6}), where $n=-1$. Hence $\Phi$ is an Edlinger-surface.
\end{proof}
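For illustration (not part of the original proof), relation (\ref{9}) can also be confirmed numerically with the principal curvatures (\ref{5}):

```python
import math

# With k1 = -k/w and k2 = delta^2/(k w^3) from (5), the combination
# delta^2*k1^3 + k^4*k2 of relation (9) cancels identically.
def corollary_residual(k, delta, v):
    w = math.sqrt(v * v + delta * delta)
    k1, k2 = -k / w, delta**2 / (k * w**3)
    return delta**2 * k1**3 + k**4 * k2

for k, delta, v in [(0.9, 1.1, 0.4), (-1.5, 0.6, 2.0), (2.2, 1.8, -1.3)]:
    assert abs(corollary_residual(k, delta, v)) < 1e-12
```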
\section{The case of the normal curvature}
Continuing this line of work we consider further geometrically distinguished
families of curves on the skew ruled surface $\Phi$ and suppose that the
normal curvature along the curves of these families is of the form
\begin{equation}
k_{N}=f(u)\,w^{n},\;n\in\mathbb{Z},\;f(u)\in C^{0}(I). \label{10}
\end{equation}
Our aim is the specification of these ruled surfaces.\smallskip \medskip
\noindent \textbf{3.1. }Let $S_{1}$ \emph{be the family of curves of constant
striction distance}. From (\ref{7}) the normal curvature along a curve of
$S_{1}$ is obtained:
\[
k_{N}=\frac{1}{w}\cdot \frac{-k\,v^{2}-\delta'\,v-\delta^{2}\left( k-\lambda \right) }{v^{2}+\delta^{2}\left( \lambda^{2}+1\right) }.
\]
Therefore $k_{N}$ has the form (\ref{10}) if and only if
\begin{equation}
f\,w^{n+1}\,[v^{2}+\delta^{2}(\lambda^{2}+1)]+k\,v^{2}+\delta'\,v+\delta^{2}(k-\lambda)=0\text{.}\label{11}
\end{equation}
It is observed that the function $f$ vanishes exactly when for all $v\in\mathbb{R}$ holds
\begin{equation}
k\,v^{2}+\delta'\,v+\delta^{2}(k-\lambda)=0,\label{12}
\end{equation}
so that $k=\delta'=\lambda=0$, which means that $\Phi$ is a right helicoid.\smallskip \newline
Let now be $f\neq0$. For $n=-1$, using (\ref{11}), the following is derived:
\[
Q(v):=f[v^{2}+\delta^{2}(\lambda^{2}+1)]+k\,v^{2}+\delta'\,v+\delta^{2}(k-\lambda)=0.
\]
The vanishing of the coefficients of the polynomial $Q(v)$ implies
\[
f=-k,\quad \delta'=0,\quad \lambda \,(k\lambda+1)=0.
\]
Therefore the surface $\Phi$ is either an orthoid ruled surface\footnote{A
ruled surface is called \emph{orthoid} if its rulings are perpendicular to the
striction line.} of constant parameter of distribution ($\delta'=\lambda=0$) or an Edlinger-surface ($\delta'=k\lambda+1=0$).\smallskip \newline For $n>-1$ it follows from (\ref{11})
\[
Q(v):=f^{2}(v^{2}+\delta^{2})^{n+1}[v^{2}+\delta^{2}(\lambda^{2}+1)]^{2}-[k\,v^{2}+\delta'\,v+\delta^{2}(k-\lambda)]^{2}=0.
\]
From the vanishing of the coefficient of $v^{2\left( n+3\right) }$\ it
follows that $f=0$, which was excluded.\smallskip \newline For $n<-1$ it
follows from (\ref{11})
\[
Q(v):=(v^{2}+\delta^{2})^{-n-1}[k\,v^{2}+\delta'\,v+\delta^{2}(k-\lambda)]^{2}-f^{2}[v^{2}+\delta^{2}(\lambda^{2}+1)]^{2}=0.
\]
The vanishing of the coefficient of $v^{2\left( 1-n\right) }$ implies $k=0$.
Then the polynomial $Q(v)$ becomes
\begin{equation}
Q(v)=(v^{2}+\delta^{2})^{-n-1}(\delta'\,v-\delta^{2}\, \lambda)^{2}-f^{2}[v^{2}+\delta^{2}(\lambda^{2}+1)]^{2}=0.\label{13}
\end{equation}
For $n=-2$ the polynomial $Q(v)$ takes the form
\[
Q(v)=(v^{2}+\delta^{2})(\delta'\,v-\delta^{2}\lambda)^{2}-f^{2}[v^{2}+\delta^{2}(\lambda^{2}+1)]^{2}=0.
\]
From the vanishing of the coefficients of the polynomial $Q(v)$ the result is
$f=0$, which is a contradiction.\smallskip \newline For $n<-2$ the vanishing of
the coefficient of $v^{-2n}\ $in (\ref{13}) implies $\delta'=0$, therefore
\begin{equation}
Q(v)=\delta^{4}\lambda^{2}(v^{2}+\delta^{2})^{-n-1}-f^{2}[v^{2}+\delta^{2}(\lambda^{2}+1)]^{2}=0.\label{14}
\end{equation}
In particular for $n=-3$ one has
\[
Q(v)=\delta^{4}\lambda^{2}(v^{2}+\delta^{2})^{2}-f^{2}[v^{2}+\delta
^{2}(\lambda^{2}+1)]^{2}=0.
\]
From the vanishing of the coefficients of the polynomial $Q(v)$ it follows
again that $f=0$ which is a contradiction.\smallskip
\noindent For $n<-3$ (\ref{14}) results in $\lambda=f=0$ which is equally
impossible.\smallskip \newline Thus the following has been shown:
\begin{proposition}
\textit{Suppose that the normal curvature along the curves of constant
striction distance of a skew ruled }$C^{3}$\textit{-surface }$\Phi \subset\mathbb{R}^{3}$\textit{ has the form \emph{(\ref{10})}. Then one of the following
occurs:}\emph{\newline(a)\quad}$f=0$\textit{ and }$\Phi$\textit{ is a right
helicoid.}\emph{\newline(b)\quad}$n=-1,\;f(u)=-k(u)$\textit{ and }$\Phi$\textit{
is either an orthoid ruled surface of constant parameter of distribution or an
Edlinger-surface.}\emph{\medskip}
\end{proposition}
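Both branches of case (b) can be spot-checked numerically (our illustrative addition): with $f=-k$, $\delta'=0$ and either $\lambda=0$ or $k\lambda+1=0$, the polynomial $Q(v)=f[v^{2}+\delta^{2}(\lambda^{2}+1)]+k\,v^{2}+\delta'\,v+\delta^{2}(k-\lambda)$ vanishes identically:

```python
# Q(v) for the curves of constant striction distance (case n = -1);
# dp stands for delta'.
def Q_striction(f, k, dp, delta, lam, v):
    return (f * (v * v + delta * delta * (lam * lam + 1))
            + k * v * v + dp * v + delta * delta * (k - lam))

k, delta = 1.4, 0.9
for lam in (0.0, -1.0 / k):          # orthoid branch / Edlinger branch
    for v in (-1.0, 0.0, 0.7, 2.5):
        assert abs(Q_striction(-k, k, 0.0, delta, lam, v)) < 1e-12
```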
\noindent \textbf{3.2.} Let $S_{2}$ be \emph{the family of the orthogonal
trajectories of the family }$S_{1}$. This family is determined by
\[
\lbrack v^{2}+\delta^{2}(\lambda^{2}+1)]\,du+\delta \, \lambda \,dv=0.
\]
From (\ref{7}) the corresponding normal curvature is obtained:
\[
k_{N}=\frac{1}{w^{3}}\cdot \frac{-\delta^{2}\lambda \, \left[ \left( k\,\lambda+2\right) v^{2}+\delta'\, \lambda \,v+\delta^{2}\left( \lambda^{2}+k\, \lambda+2\right) \right] }{v^{2}+\delta^{2}\left( \lambda^{2}+1\right) }.
\]
Consequently $k_{N}$ has the form (\ref{10}) if and only if
\begin{equation}
f\,w^{n+3}[v^{2}+\delta^{2}(\lambda^{2}+1)]+\delta^{2}\lambda \lbrack(k\,\lambda+2)v^{2}+\delta'\, \lambda \,v+\delta^{2}(\lambda^{2}+k\, \lambda+2)]=0. \label{15}
\end{equation}
Obviously the function $f$ vanishes exactly when for all $v\in\mathbb{R}$ holds
\[
\lambda \,[(k\, \lambda+2)v^{2}+\delta'\, \lambda \,v+\delta^{2}(\lambda^{2}+k\, \lambda+2)]=0.
\]
In this case the function $\lambda$ vanishes too, because otherwise it would
have been
\[
k\, \lambda+2=\delta'\, \lambda=\delta^{2}(\lambda^{2}+k\, \lambda+2)=0,
\]
which are impossible. Therefore it is $f=0$ if and only if $\lambda=0$ and
this means that $\Phi$ is\ an orthoid ruled surface.\smallskip \newline Let now
be $f\, \lambda \neq0$. For $n=-3$ it follows from (\ref{15})
\[
Q(v):=f\,[v^{2}+\delta^{2}(\lambda^{2}+1)]+\delta^{2}\lambda \,[(k\,\lambda+2)v^{2}+\delta'\, \lambda \,v+\delta^{2}(\lambda^{2}+k\, \lambda+2)]=0.
\]
From the vanishing of the coefficients of the polynomial $Q(v)$ it is obtained
that
\[
f=-\delta^{2}\lambda \,(k\, \lambda+2),\quad \delta'=0,\quad k\, \lambda+1=0.
\]
This results in $f=-\delta^{2}\, \lambda$ and $\Phi$ is an
Edlinger-surface.\smallskip \newline One can easily confirm that the cases
$n>-3$ and $n<-3$ lead to contradictions. These results imply the following
\begin{proposition}
\textit{Suppose that the normal curvature along the orthogonal trajectories of
the curves of constant striction distance of a skew ruled }$C^{3}$\textit{-surface }$\Phi \subset\mathbb{R}^{3}$\textit{ has the form \emph{(\ref{10})}. Then one of the following
occurs:}\emph{\newline(a)}$\quad f=0$\textit{ and }$\Phi$\textit{ is an
orthoid ruled surface.}\emph{\newline(b)}$\quad n=-3$\textit{,
}$f(u)=\delta^{2}(u)\,k^{-1}(u)$\textit{ and }$\Phi$\textit{ is an
Edlinger-surface.}\emph{\medskip}
\end{proposition}
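Case (b) can again be spot-checked numerically (illustrative addition, not part of the proof): with $\delta'=0$, $k\lambda+1=0$ and $f=\delta^{2}/k=-\delta^{2}\lambda$, the left-hand side of (\ref{15}) for $n=-3$ vanishes identically:

```python
# The n = -3 form of (15): f*[v^2 + delta^2(lambda^2+1)] plus the
# delta^2*lambda bracket; dp stands for delta'.
def Q_orthogonal(f, k, dp, delta, lam, v):
    return (f * (v * v + delta * delta * (lam * lam + 1))
            + delta**2 * lam * ((k * lam + 2) * v * v + dp * lam * v
                                + delta**2 * (lam * lam + k * lam + 2)))

k, delta = 0.8, 1.2
lam = -1.0 / k                       # Edlinger condition k*lambda + 1 = 0
f = delta**2 / k                     # equals -delta^2*lambda here
for v in (-3.0, -0.4, 0.0, 1.6):
    assert abs(Q_orthogonal(f, k, 0.0, delta, lam, v)) < 1e-12
```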
\noindent \textbf{3.3.} Let $S_{3}$ be the family \emph{of the orthogonal
trajectories of the rulings}, i.e. the family which is determined by
\[
\delta \, \lambda \,du+dv=0.
\]
From (\ref{7}) the corresponding normal curvature is obtained:
\[
k_{N}=\frac{-k\,v^{2}-\delta'\,v-\delta^{2}\left( k+\lambda \right) }{w^{3}}.
\]
Therefore $k_{N}$ has the form (\ref{10}) if and only if
\begin{equation}
f\,w^{n+3}+k\,v^{2}+\delta'\,v+\delta^{2}(k+\lambda)=0. \label{16}
\end{equation}
The function $f$ vanishes if and only if (\ref{12}) holds for all $v\in\mathbb{R}$ or, equivalently, if $k=\lambda=\delta'=0$. Hence the surface $\Phi$ is a right helicoid.\smallskip \newline Let now
be $f\neq0$. For $n=-3$ it follows from (\ref{16})
\[
Q(v):=k\,v^{2}+\delta'\,v+\delta^{2}(k+\lambda)+f=0.
\]
The vanishing of the coefficients of the polynomial $Q(v)$ implies
\[
f=-\delta^{2}\, \lambda,\quad k=0,\quad \delta'=0.
\]
Therefore $\Phi$ is a conoidal ruled surface of constant parameter of
distribution.\smallskip \newline For $n=-2$ it follows from (\ref{16})
\[
Q(v):=f^{2}(v^{2}+\delta^{2})-[k\,v^{2}+\delta'\,v+\delta^{2}(k+\lambda)]^{2}=0.
\]
From the vanishing of the coefficients of the polynomial $Q(v)$ it follows
\[
k=0,\quad f^{2}=\delta'^{\,2},\quad \delta'\, \lambda=0,\quad f^{2}=\delta^{2}\lambda^{2}=0,
\]
therefore $f=0$, which was excluded.\smallskip \newline From (\ref{16}) and
for $n=-1$ the outcome is
\[
Q(v):=f(v^{2}+\delta^{2})+k\,v^{2}+\delta'\,v+\delta^{2}(k+\lambda)=0.
\]
From the vanishing of the coefficients of the polynomial $Q(v)$ it follows
\[
f=-k,\quad \delta'=0,\quad \lambda=0.
\]
Therefore $\Phi$ is an orthoid ruled surface of constant parameter of
distribution.\smallskip \newline One can easily confirm that the cases $n\geq0$
and $n\leq-4$ lead to contradictions. So it can be stated that:
\begin{proposition}
\textit{Suppose that the normal curvature along the orthogonal trajectories of
the rulings of a skew ruled }$C^{3}$\textit{-surface }$\Phi \subset\mathbb{R}^{3}$\textit{ has the form \emph{(\ref{10})}. Then one of the following
occurs:}\emph{\newline(a)}$\quad f=0$\textit{ and }$\Phi$\textit{ is a right
helicoid.}\emph{\newline(b)}$\quad n=-3$\textit{, }$f(u)=-\delta^{2}(u)\, \lambda(u)$\textit{ and }$\Phi$\textit{ is a conoidal ruled surface
of constant parameter of distribution.}\emph{\newline(c)}$\quad n=-1$\textit{,
}$f(u)=-k(u)$\textit{ and }$\Phi$\textit{ is an orthoid ruled surface of
constant parameter of distribution.}\emph{\medskip}
\end{proposition}
\noindent \textbf{3.4.} Let $S_{4}$ be \emph{the family of curves of constant
Gaussian curvature}\textit{ }\cite{Sachs}. This family is determined by
\[
\delta'\,(\delta^{2}-v^{2})\,du+2\delta \,v\,dv=0.
\]
Putting for abbreviation
\[
A=\left( 4\delta^{2}+\delta'^{\,2}\right) v^{4}+4\delta^{2}\,\delta'\,\lambda \,v^{3}+2\delta^{2}\left[ 2\delta^{2}\left( \lambda^{2}+1\right)-\delta'^{\,2}\right] v^{2}-4\delta^{4}\,\delta'\, \lambda \,v+\delta^{4}\,\delta'^{\,2},
\]
the corresponding normal curvature can be computed from (\ref{7}) to be
\[
k_{N}=\frac{-1}{w}\cdot \frac{4\delta^{2}\,v\left[ k\,v^{3}+\delta^{2}\left( k-\lambda \right) v+\delta^{2}\, \delta'\, \right] }{A}.
\]
$k_{N}$ has the form (\ref{10}) if and only if
\begin{align}
& f\,w^{n+1}[(4\delta^{2}+\delta'^{\,2})v^{4}+4\delta^{2}\, \delta'\, \lambda \,v^{3}+2\delta^{2}[2\delta^{2}(\lambda^{2}+1)-\delta'^{\,2}]v^{2}\nonumber \\
& -4\delta^{4}\,\delta'\, \lambda \,v+\delta^{4}\,\delta'^{\,2}]+4\delta^{2}v\,[k\,v^{3}+\delta^{2}(k-\lambda)v+\delta^{2}\delta'\,]=0. \label{18}
\end{align}
The function $f$ vanishes exactly when
\[
k\,v^{3}+\delta^{2}(k-\lambda)v+\delta^{2}\,\delta'=0
\]
for all $v\in\mathbb{R}$ or, equivalently, if $k=\lambda=\delta'=0$. Consequently $\Phi$ is a right helicoid.\smallskip \newline Let now be
$f\neq0$. For $n=-1$ it follows from (\ref{18})
\begin{align*}
Q(v) & :=f\,[(4\delta^{2}+\delta'^{\,2})v^{4}+4\delta^{2}\, \delta'\, \lambda \,v^{3}+2\delta^{2}\,[2\delta^{2}(\lambda^{2}+1)-\delta'^{\,2}]v^{2}\\
& -4\delta^{4}\, \delta'\, \lambda \,v+\delta^{4}\delta'^{\,2}]+4\delta^{2}\,v[k\,v^{3}+\delta^{2}(k-\lambda)v+\delta^{2}\,\delta'\,]=0.
\end{align*}
The coefficients
\begin{align*}
a_{4} & :=f\,(4\delta^{2}+\delta'^{\,2})+4\delta^{2}k,\quad a_{3}:=4f\, \delta^{2}\, \delta'\, \lambda,\\
a_{2} & :=2f\, \delta^{2}[2\delta^{2}(\lambda^{2}+1)-\delta'^{\,2}]+4\delta^{4}(k-\lambda),\\
a_{1} & :=-4f\, \delta^{4}\delta'\, \lambda+4\delta^{4}\, \delta'\,,\quad a_{0}:=f\, \delta^{4}\,\delta'^{\,2}
\end{align*}
of the polynomial $Q(v)$ vanish. From $a_{0}=0$ it follows $\delta'=0$. Then from the vanishing of the coefficients $a_{2}$ and $a_{4}$ we
obtain
\[
f=-k,\quad \lambda \,(k\, \lambda+1)=0.
\]
Consequently $\Phi$ is either an orthoid ruled surface of constant parameter
of distribution ($\delta'=\lambda=0$) or an Edlinger-surface ($\delta'=k\, \lambda+1=0$).\smallskip \newline The cases $n>-1$ and $n<-1$ lead to
contradictions. The following has been shown:
\begin{proposition}
\textit{Suppose that the normal curvature along the curves of constant
Gaussian curvature of a skew ruled }$C^{3}$\textit{-surface }$\Phi \subset\mathbb{R}^{3}$\textit{ has the form \emph{(\ref{10})}. Then one of the following
occurs:}\newline \emph{(a)}$\quad f=0$\textit{ and }$\Phi$\textit{ is a right
helicoid.}\emph{\newline(b)}$\quad n=-1$\textit{, }$f(u)=-k(u)$\textit{ and
}$\Phi$\textit{ is either an orthoid ruled surface of constant parameter of
distribution or an Edlinger-surface.}
\end{proposition}
The following table assembles the above results.\bigskip \newline
\begin{tabular}
[c]{|c|c|c|c|}\hline
\begin{tabular}
[c]{c}
{\small Normal curvature of the}\\
{\small form }${\small k}_{{\small N}}$ ${\small =}$ ${\small f\,w}^{n}$
{\small along}
\end{tabular}
& ${\small f}$ & $n$ & {\small Type of the ruled surface }${\small \Phi}$\\ \hline \hline
\begin{tabular}
[c]{c}
{\small one family of}\\
{\small the lines of curvature}
\end{tabular}
&
\begin{tabular}
[c]{c}
${\small -k}$\\
${\small \pm \delta}$\\
${\small \delta}^{2}{\small k}^{-1}$
\end{tabular}
&
\begin{tabular}
[c]{c}
${\small -1}$\\
${\small -2}$\\
${\small -3}$
\end{tabular}
&
\begin{tabular}
[c]{c}
$\cdot$\ {\small Edlinger-surface}\\
$\cdot$\ {\small right helicoid}\\
$\cdot$\ {\small Edlinger-surface}
\end{tabular}
\\ \hline
\begin{tabular}
[c]{c}
{\small the curves of const.}\\
{\small striction distance}
\end{tabular}
&
\begin{tabular}
[c]{c}
${\small 0}$\\
${\small -k}$
\end{tabular}
&
\begin{tabular}
[c]{c}
{\small -}\\
${\small -1}$
\end{tabular}
&
$\begin{array}
[c]{c}
{\small \cdot}\text{ {\small right helicoid}}\\
{\small \cdot}\text{\ {\small either an orthoid surface of}}\\
\text{{\small const. parameter of distrib.}}\\
\text{{\small or an Edlinger-surface}}
\end{array}$\\ \hline
\begin{tabular}
[c]{c}
{\small the orthogonal trajectories}\\
{\small of the curves of const.}\\
{\small striction distance}
\end{tabular}
&
\begin{tabular}
[c]{c}
${\small 0}$\\
${\small \delta}^{2}{\small k}^{-1}$
\end{tabular}
&
\begin{tabular}
[c]{c}
{\small -}\\
${\small -3}$
\end{tabular}
&
\begin{tabular}
[c]{c}
$\cdot$\ {\small orthoid surface}\\
$\cdot$\ {\small Edlinger-surface}
\end{tabular}
\\ \hline
\begin{tabular}
[c]{c}
{\small the orthogonal trajectories}\\
{\small of the rulings}
\end{tabular}
&
\begin{tabular}
[c]{c}
${\small 0}$\\
${\small -k}$\\
${\small -\delta}^{2}{\small \lambda}$
\end{tabular}
&
\begin{tabular}
[c]{c}
{\small -}\\
${\small -1}$\\
${\small -3}$
\end{tabular}
&
\begin{tabular}
[c]{c}
$\cdot$\ {\small right helicoid}\\
$\cdot$\ {\small orthoid surface of}\\
{\small const. parameter of distrib.}\\
$\cdot$\ {\small conoidal surface of}\\
{\small const. parameter of distrib.}
\end{tabular}
\\ \hline
\begin{tabular}
[c]{c}
{\small the curves of const.}\\
{\small Gaussian curvature}
\end{tabular}
&
\begin{tabular}
[c]{c}
${\small 0}$\\
${\small -k}$
\end{tabular}
&
\begin{tabular}
[c]{c}
{\small -}\\
${\small -1}$
\end{tabular}
&
$\begin{array}
[c]{c}
{\small \cdot}\text{ {\small right helicoid}}\\
{\small \cdot}\text{\ {\small either an orthoid surface of}}\\
\text{{\small const. parameter of distrib.}}\\
\text{{\small or an Edlinger-surface}}
\end{array}$\\ \hline
\end{tabular}
Testing the isotropy of the speed of light serves as a sensitive test of special relativity and Lorentz invariance. The classic experiment to test the isotropy of the speed of light uses a Michelson interferometer and was first performed by A.A. Michelson more than a hundred years ago. He was later joined by E.W. Morley, and they published a $10^{-9}$ null-result in 1887,\cite{MM87} which surprised the scientific community at that time. Modern experiments of this type use electromagnetic resonators to probe for Lorentz invariance violations and are generally based on comparing the resonance frequencies of two similar orthogonal resonators while either actively rotating the setup or relying solely on Earth's rotation.\cite{Hall,Holger1,Wolf,Sven,Antonini,Stanwix,Eisele,Sven2} The basic principle of a modern Michelson-Morley type experiment is to search for orientation dependent relative changes of the eigenfrequencies $\delta\nu/\nu_0$ of the employed electromagnetic resonators which might be caused by Lorentz invariance violation.
In the case of a linear resonator a relative frequency change is most generally described by $\delta\nu/\nu_0=\delta c/c_0-\delta L/L_0-\delta n/n_0$, where $\delta c/c_0$ denotes a relative change in the speed of light in vacuum along the optical path, $\delta L/L_0$ denotes a relative change in the length of the optical path, and $\delta n/n_0$ denotes a relative change in the index of refraction along the optical path. All three effects can occur in the case of spontaneous Lorentz symmetry breaking.\cite{Holger2,Holger3,Holger4} The magnitude of the different types of Lorentz violations depends on the composition of the material the resonator is made of. Comparing the eigenfrequencies of two similar resonators made of the same material -- as has been done in almost all previously reported modern Michelson-Morley experiments -- makes it impossible to distinguish between the different types of Lorentz violation, and due to the subtraction of the different types an overall Lorentz violating signal could even be suppressed or canceled. However, the material dependency makes it possible to distinguish between the different types of Lorentz violations by using dissimilar electromagnetic resonators.
In the past, we have combined results of an experiment performed in our laboratory in Berlin, Germany, consisting of linear optical resonators made of fused-silica with mirrors made of BK7 with the results of an experiment performed by Stanwix {\it et al.}\ in Perth, Australia, consisting of whispering gallery microwave resonators made of sapphire in order to give separate bounds on the different types of Lorentz violations.\cite{JoinedMM07} It is worth mentioning that since the experiments have not been optimized for this kind of comparison and have not been synchronized timewise, not all of the information obtainable in principle from such a combined experiment could be utilized.
\section{A slightly different modern Michelson-Morley experiment}
\begin{figure}
\centering
\epsfig{figure=Schema,width=0.80\textwidth}
\caption{Right: schematic (top) and picture (bottom) of the monolithic sapphire resonator. Left: schematic of the new setup. The monolithic sapphire resonator is located in the cryostat at the upper level. The fused-silica resonators are located in the vacuum chamber at the lower level. PDH = Pound-Drever-Hall locking electronics. TS = tilt sensor.}\label{fig:Schema}
\end{figure}
We have realized a combined experiment in our laboratory in which we could compare the resonance frequency of a monolithic linear optical sapphire resonator\cite{mn} with the resonance frequency of a stationary evacuated linear optical cavity made of ultra-low-expansion glass as well as with two evacuated optical resonators made of fused silica (used in our previous experiment).\cite{Sven2} The monolithic resonator and the fused silica resonators were actively rotated in a Michelson-Morley configuration on an air bearing turntable once every 45 s.
The monolithic sapphire resonator (see Figure \ref{fig:Schema}) features a finesse of about $10\,000$, corresponding to a linewidth of 200 kHz. The round trip loss inside the resonator is on the order of 600 ppm, although the loss due to absorption should only be on the order of $\sim10$ ppm/cm as measured by calorimetry. This leads to the conclusion that most of the losses are caused by flawed coatings. The incoupling efficiency of the monolithic sapphire resonator is less than $0.3\%$ resulting in a transmission of only $1.2\times10^{-7}$.
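As a rough plausibility check (our own back-of-the-envelope arithmetic, added for illustration and not taken from the measurement records), the quoted finesse is consistent with the stated round-trip loss via the approximation $F \approx 2\pi/L$, and the free spectral range follows from the quoted finesse and linewidth:

```python
import math

# Back-of-the-envelope consistency check of the quoted resonator numbers:
# for a total round-trip loss L the finesse is approximately F = 2*pi/L,
# and the free spectral range is FSR = F * linewidth.
round_trip_loss = 600e-6            # 600 ppm
finesse = 2 * math.pi / round_trip_loss
print(f"finesse ~ {finesse:.0f}")   # prints "finesse ~ 10472"

linewidth = 200e3                   # 200 kHz
fsr = 10_000 * linewidth            # using the quoted finesse of 10 000
print(f"FSR ~ {fsr/1e9:.1f} GHz")   # prints "FSR ~ 2.0 GHz"
```

Both values are of the expected order for a centimeter-scale monolithic crystal, supporting the conclusion that the excess loss sits in the coatings rather than in bulk absorption.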
We placed the monolithic resonator inside a cryostat and cooled it down to liquid helium temperatures (4.2K) to reduce previously observed strong thermal noise effects within the monolithic crystal. At cryogenic temperatures an improvement of more than one order of magnitude in frequency stability for the eigenfrequencies of the monolithic sapphire resonator can be seen in the Allan deviation of the beat note (see Figure \ref{fig:AlaVar}). The cryostat containing the monolithic sapphire resonator offered optical free beam access through windows. For the Michelson-Morley experiment it was placed on a breadboard containing all necessary optics. The breadboard itself was mounted on the rotating part of the previously existing setup above the vacuum chamber containing the crossed fused-silica resonators (see Figure \ref{fig:Schema}) and thus represented a second new level within this setup. The sapphire resonator axis was orientated parallel to one of the fused silica's resonator axis and thus orthogonal to the resonator axis of the other fused-silica cavity. Except for these modifications there were no further changes of the previously existing setup and all measures implemented to reduce systematics connected with active rotation \cite{Sven2} also applied for the monolithic sapphire resonator.
\begin{figure}
\centering
\epsfig{figure=AlaVar,width=0.7\textwidth}
\caption{Relative frequency stability derived from the beat between the stabilized lasers (Sph = laser stabilized to the monolithic sapphire resonator, FS = laser stabilized to one of the fused-silica cavities).}\label{fig:AlaVar}
\end{figure}
Ten days of comparison of the resonance frequency of the actively rotated monolithic sapphire resonator with the stationary ULE cavity were performed in August 2010 (see Figure \ref{fig:Results2}). This corresponds to more than $19\,000$ turntable rotations. The advantage of comparing the rotating monolithic resonator with the stationary ULE cavity is that the prime modulation signal at twice the turntable rotation period can only originate from the monolithic resonator. Thus, less assumptions are needed in the analysis to extract any possible Lorentz invariance violating effects that are connected to light propagation in matter. As an additional check, we also recorded the beat-note between one of the fused silica cavities with the monolithic sapphire resonator as well as with the stationary ULE cavity.
\begin{figure}
\epsfig{figure=SinusCosinustagesfit,width=\textwidth}
\caption{Quadrature amplitudes $C$ and $S$ at twice the rotation frequency of the recorded beat note. Nomenclature as in our previous experiment.$^9$}
\label{fig:Results2}
\end{figure}
The analysis of the beat note with respect to anisotropy signals characterizing Lorentz invariance violations follows the same procedure as in our previous experiment.\cite{Sven2} No significant anisotropy signal was found fixed to a sidereal frame (see Figure \ref{fig:Results}). Using the obtained sidereal modulation amplitudes we can conclude an upper limit for the anisotropy of the relative difference of the speed of light in vacuum and matter (sapphire) of $\Delta c/c = (0.8 \pm 0.7)\times10^{-16}$ (one standard deviation). A detailed analysis within the framework of the Lorentz invariance and CPT violating extension of the standard model of particle physics (SME)\cite{SME} has not been done, since the dependence of the index of refraction of sapphire in the optical region on Lorentz violating coefficients of the photonic and fermionic sector has not been completely worked out yet. However, M\"{u}ller \cite{Holger4} has already outlined a recipe for deriving this dependency.
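For illustration, the following sketch (using hypothetical synthetic data; this is not our actual analysis pipeline) shows how quadrature amplitudes at twice the rotation frequency can be extracted by demodulating an evenly sampled beat-note record:

```python
import math
import random

# Demodulate the quadrature amplitudes C and S at twice the rotation
# frequency from an evenly sampled record spanning an integer number of
# rotation periods.
def demodulate_2omega(samples, dt, t_rot):
    n = len(samples)
    c = s = 0.0
    for i, y in enumerate(samples):
        phase = 2 * (2 * math.pi / t_rot) * i * dt
        c += y * math.cos(phase)
        s += y * math.sin(phase)
    return 2 * c / n, 2 * s / n

# Synthetic beat note with C = 3e-16, S = -1e-16 plus white noise.
rng = random.Random(42)
t_rot, dt = 45.0, 1.0
data = [3e-16 * math.cos(2 * (2 * math.pi / t_rot) * i * dt)
        - 1e-16 * math.sin(2 * (2 * math.pi / t_rot) * i * dt)
        + 1e-17 * rng.gauss(0, 1)
        for i in range(45 * 200)]     # 200 full rotations
C, S = demodulate_2omega(data, dt, t_rot)
assert abs(C - 3e-16) < 2e-17 and abs(S + 1e-16) < 2e-17
```

Averaging over many rotations suppresses the white-noise contribution to the recovered amplitudes roughly as the inverse square root of the record length.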
\begin{figure}
\epsfig{figure=Results,width=\textwidth}
\caption{Modulation amplitudes (gray) and their mean values (black) as expected for an anisotropy of the speed of light fixed within a sidereal frame. Nomenclature as in our previous experiment.$^9$ Amplitudes $C_0$ and $S_0$ are most prone to constant systematic effects. The mean values and standard errors (one sigma) are $S_0=-3.\pm2.1$, $C_0=2.6\pm1.8$, $C_{s1}=-1.1\pm2.1$, $S_{s1}=-0.8\pm1.5$, $C_{c1}=1.8\pm1.6$, $S_{c1}=3.3\pm2.8$, $C_{s2}=3.4\pm1.1$, $S_{s2}=1.1\pm0.9$, $C_{c2}=1.8\pm1.5$, $S_{c2}=-0.4\pm1.3$ (all values $\times 10^{-16}$).}
\label{fig:Results}
\end{figure}
\section{Next generation experiment}
We plan to use ultra-stable cryogenic optical cavities made of sapphire to set up a next generation of a modern Michelson-Morley experiment with light propagation in vacuum. The new cavities should feature a relative frequency stability of better than $1\times10^{-16}$ up to long integration times.\cite{proc} The cavities will be arranged in a Michelson-Morley configuration and continuously rotated with a rotation period between 10 s and 100 s for more than one year using a custom-made high-precision low noise turntable system made of granite. The sensitivity of this setup to violations of Lorentz invariance should be in the $10^{-19}$ to $10^{-20}$ regime. This corresponds to more than a 100-fold improvement in precision of modern Michelson-Morley type experiments.\cite{Sven2}
Furthermore, ultra-stable cryogenic microwave whispering gallery resonators will be added to the experiment in collaboration with the University of Western Australia.\cite{mike} With this co-rotating microwave and optical resonator setup we will be able to search for additional types of Lorentz violating signals.
Additionally, we are involved in the planning of a space borne mission called STAR \footnote{STAR (Space-Time Asymmetry Research) is a collaboration between NASA Ames, JILA, Standford, Saudi-Arabian KACST, DLR, ZARM at University of Bremen, HTWG Konstanz, and Humboldt-University Berlin.} to test different aspects of the theory of relativity using optical resonators and an atomic reference.\cite{thilo}
\section*{References}
The Fourier transform~\citep{bracewell1965fourier} is a transform method that converts a time-varying function to its corresponding $\omega$-domain representation, where $\omega$ is the corresponding angular frequency~\citep{beerends2003fourier}. This transformation allows replacing differentiation and integration in time-domain analysis with multiplication and division operators in the frequency domain, which can be easily manipulated. Moreover, the $\omega$-domain representations of the differential equations can also be used for the frequency response analysis of the corresponding systems. Due to these distinguishing features, the Fourier transform has been widely used for analyzing many continuous-time systems, such as signal~\citep{papoulis1977signal,gaydecki2004foundations} and image~\citep{dougherty2009digital} processing algorithms, analog circuits~\citep{thomas2016analysis}, communication systems~\citep{ziemer2006principles,du2010wireless}, medical sciences~\citep{bracewell1965fourier,dougherty2009digital}, mechanical systems~\citep{oppenheim1996signals} and optics~\citep{gaskill1978linear,stark2012application}.
The first step in the Fourier transform based analysis of a continuous-time system is to model the dynamics of the system using a differential equation. This differential equation is then transformed to its equivalent $\omega$-domain representation by using the Fourier transform. Next, the resulting $\omega$-domain equation is simplified using various Fourier transform properties, such as existence, linearity, frequency shifting, modulation, time shifting, time scaling, time reversal and differentiation. The main objective of this simplification is to either solve the differential equation for the variable $\omega$ or to obtain the frequency response of the system corresponding to the given differential equation. The frequency response can in turn be used to analyze the dynamics of the system by studying the impact of different frequency components on the intended behaviour of the given system. The information obtained from this analysis plays a vital role in designing reliable and performance-efficient engineering systems.
Traditionally, the analysis of continuous-time systems, using transform methods, has been done using the paper-and-pencil based analytical technique. However, due to the highly involved human manipulation, the analysis process is error prone, especially when dealing with larger systems, and hence an accurate analysis cannot be guaranteed. Moreover, this kind of manual manipulation does not guarantee that each and every assumption required in the mathematical analysis is written down with the analysis. Thus, some vital assumptions may not accompany the final result of the analysis and a system designed based on such a result may lead to bugs later on.
For example, the crash of Air France Flight 447 in 2009, which resulted in 228 deaths, was attributed to a faulty warning system consisting of speed sensors. These sensors gave wrong/invalid readings of the airplane's speed, which led to the crash. A more rigorous analysis of the warning system could have prevented this incident.
Computer-based methods, including numerical methods and symbolic techniques, provide a more scalable option for analyzing larger systems. Some of the computer tools used for these analyses are MATLAB~\citep{MATLAB2016webref}, Mathematica~\citep{wolfram2015mathematica} and Maple~\citep{Maple2016webref}. Numerical analysis involves the approximation of continuous expressions or of the continuous values of variables due to the finite precision of computer arithmetic, which compromises the accuracy of the analysis. Moreover, it involves a finite number of iterations, depending on the available computational resources and computer memory, to estimate the values of unknown continuous parameters, which introduces further inaccuracies into the analysis.
Similarly, the symbolic tools cannot assure absolute accuracy as they involve discretization of integral to summation while evaluating the improper integral in the definition of Fourier transform~\citep{taqdees2013formalization}.
Moreover, they also contain some unverified symbolic algorithms in their core~\citep{duran2013misfortunes}, which puts another question mark on the accuracy of the results.
Given the widespread usage of the continuous-time systems in many safety-critical domains, such as medicine and transportation, we cannot rely on these above-mentioned analysis methods as the analysis errors could lead to disastrous consequences, including the loss of human lives.
Formal methods~\citep{hasan2015formal} are computer based mathematical techniques that involve the mathematical modeling of the given system and the formal verification of its intended behaviour as a mathematically specified property, which is expressed in an appropriate logic. This verification of the properties of the underlying system is based on mathematical reasoning. Moreover, the mathematical nature of the system model and the desired property guarantees the accuracy of formal analysis. Formal methods have been widely used for the verification of software~\citep{schumann2001automated} and hardware~\citep{camilleri1986hardware} systems and the formalization (or mathematical modeling) of classical mathematics~\citep{hales2005introduction,avigad2014formally}.
Higher-order-logic theorem proving~\citep{harrison2009handbook} is a widely-used formal verification method that has been extensively used to analyze continuous systems by leveraging the high expressiveness of higher-order logic and the soundness of theorem proving. \textit{Umair et al.} formalized the Z-transform~\citep{siddique2014formalization} and used it to analyze an Infinite Impulse Response (IIR) filter. Similarly, \textit{Hira et al.} formalized the Laplace transform~\citep{taqdees2013formalization} and used their formalization to analyze a Linear Transfer Converter (LTC) circuit.
However, the formalization of the Z-transform can only be utilized for discrete-time system analysis. On the other hand, the formalization of the Laplace transform can be used to reason about the solutions of ordinary differential equations and the transfer function analysis of continuous-time systems~\citep{taqdees2013formalization}, but it is limited to causal functions, i.e., functions that fulfill the condition $f(x) = 0$ for all $x < 0$. However, many physical and engineering systems exhibit non-causal continuous behavior, involving functions with infinite extent. For example, in optics, the optical image of a point source of light may be described theoretically by a Gaussian function of the form $e^{-x^2}$, which exists for all \emph{x}~\citep{goodman2005introduction}. Another example is the rate of flow of water out of a tap at the bottom of a bucket, which can be modeled using $e^{-kt}$, where \emph{t} ranges over the whole real line~\citep{thibos2003fourier}. The Fourier transform caters for analyses involving both continuous and non-causal functions and thus overcomes the above-mentioned limitations of the Z and Laplace transforms.
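As an informal illustration of this point (plain Python, entirely outside the HOL-Light development; the midpoint-rule quadrature and its truncation at $|t| = 10$ are arbitrary choices), the non-causal Gaussian $e^{-t^2}$ has the well-known Fourier transform $\sqrt{\pi}\,e^{-\omega^2/4}$, which a direct numerical evaluation of the defining integral reproduces:

```python
import math, cmath

def fourier(f, w, T=10.0, n=20000):
    # Midpoint-rule approximation of F(w) = integral of f(t) * e^{-i w t} dt
    # over [-T, T]; the tails beyond |t| = T are negligible for rapidly
    # decaying integrands such as the Gaussian below.
    dt = 2 * T / n
    return sum(f(-T + (k + 0.5) * dt) * cmath.exp(-1j * w * (-T + (k + 0.5) * dt))
               for k in range(n)) * dt

# Gaussian pair: f(t) = e^{-t^2}  <->  F(w) = sqrt(pi) * e^{-w^2 / 4}
for w in (0.0, 1.0, 2.5):
    approx = fourier(lambda t: math.exp(-t * t), w)
    exact = math.sqrt(math.pi) * math.exp(-w * w / 4)
    assert abs(approx - exact) < 1e-4
```

The point of the sketch is only that the two-sided integral is perfectly well defined for such non-causal functions; the formal, assumption-tracked treatment of this integral is the subject of the following sections.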
With the objective of extending the scope of the analysis based on theorem proving, to cover non-causal functions, we present a higher-order-logic based formalization of Fourier transform in this paper.
In particular, we formalize the definition of Fourier transform in higher-order logic and use it to verify the classical properties of Fourier transform, such as existence, linearity, time shifting, frequency shifting, modulation, time scaling, time reversal, differentiation, and its relations to Fourier Cosine, Fourier Sine and Laplace transforms. These foundations can be built upon to reason about the analytical solutions of differential equations or frequency responses of the physical systems, as depicted in Fig.~\ref{FIG:proposed_methodology}. The user of the proposed formal analysis framework is required to develop a formal model of the given system, by using its corresponding differential equation. Similarly, the desired frequency response behavior from the given system can be captured as a proof goal (theorem) that can be formed using this behavior along with the formal model of the given system. The proof goal can then be verified based on the above-mentioned formalization of Fourier transform within the sound core of the HOL-Light theorem prover, which is an interactive theorem proving environment for conducting proofs in higher-order logic. The availability of the above-mentioned formally verified properties decreases the manual user interaction and thus effort required while performing the formal Fourier transform based analysis of a system. Besides the above-mentioned foundational formalization of Fourier transform, we also use these results to verify a relationship of frequency response of a generic n-order system~\citep{adams2012continuous}. This generic relationship can be specialized to facilitate the reasoning process of the formal frequency response analysis of any specific order system.
In order to illustrate the practical utilization of the proposed formalization, we present a formal analysis of an audio equalizer~\citep{adams2012continuous} and a MEMS accelerometer~\citep{kaajakari2009practical}, which are extensively used in communication systems and many safety-critical systems, respectively.
We use the HOL-Light theorem prover~\citep{harrison-hol-light} for the proposed formalization in order to build upon its comprehensive reasoning support for multivariable calculus. Particularly, the proposed formalization heavily relies upon the formalization of differential, integration, topological and transcendental theories of multivariable calculus.
\begin{figure}[H]
\centering
\scalebox{0.25}
{\includegraphics[trim={0 0.0cm 0 0.0cm},clip]{proposed_methodology.pdf}}
\caption{Proposed Framework}
\label{FIG:proposed_methodology}
\end{figure}
The rest of the paper is organized as follows: Section \ref{SEC:Preliminaries} provides a brief introduction about the HOL-Light theorem prover and the multivariable calculus theories of HOL-Light.
Section \ref{SEC:Formalization_of_Fourier} presents the formalization of the Fourier transform definition and the conditions required for its existence. We provide the verification of the classical properties of Fourier transform in Section \ref{SEC:Formal_verif_Fourier_properties}. Section \ref{SEC:fourier_trans_comm_used_funs} presents the Fourier transforms of some commonly used functions. We present the formal analysis of generic n-order system in Section~\ref{SEC:formal_analysis_generic_n_order_sys}.
Section \ref{SEC:applications} provides the verification of the frequency response of
an audio equalizer and a MEMS accelerometer.
Section~\ref{SEC:discussion} presents the discussion, which highlights upon the main challenges faced in the proposed formalization.
Finally, Section \ref{SEC:Conclusion} concludes the paper.
\section{Preliminaries} \label{SEC:Preliminaries}
In this section, we present an introduction to the HOL-Light theorem prover and an overview about the multivariable calculus theories of HOL-Light, which provide the foundational support for the proposed formalization.
\subsection{HOL-Light Theorem Prover} \label{SUBSEC:HOL_Light_theorem_prover}
HOL-Light~\citep{harrison-hol-light} is an interactive theorem proving environment for conducting proofs in higher-order logic. The logic in the HOL-Light system is represented in the strongly-typed functional programming language ML~\citep{paulson_96}. A theorem is a formalized statement that may be an axiom or could be deduced from already verified theorems by an inference rule. A theorem consists of a finite set $\Omega$ of Boolean terms, called the assumptions, and a Boolean term $S$, called the conclusion. Soundness is assured as every new theorem must be verified by applying the basic axioms and primitive inference rules or any other previously verified theorems/inference rules. A HOL-Light theory is a collection of valid HOL-Light types, constants, axioms, definitions and theorems.
Various mathematical foundational concepts have been formalized and saved as HOL-Light theories.
The HOL-Light theorem prover provides extensive support of theorems regarding Boolean algebra, arithmetic, real numbers, transcendental functions and multivariate analysis, such as differentiation, integration, vectors and topology, in the form of theories, which are extensively used in our formalization. In fact, one of the primary reasons for choosing the HOL-Light theorem prover for the proposed formalization was the presence of this extensive support of multivariable calculus theories.
There are many automatic proof procedures and proof assistants~\citep{Harrison_formalized_mathematics} available in HOL-Light, which help the user in concluding a proof more efficiently.
Table~\ref{TAB:Hol_light_symbols} presents the standard and HOL-Light representations and the meanings of some commonly used symbols in this paper.
\begin{table}[h]
\flushleft
\caption{HOL-Light Symbols}
\label{TAB:Hol_light_symbols}
\resizebox{1.0\textwidth}{!}{\begin{minipage}{\textwidth}
{\renewcommand{\arraystretch}{1.005}
\begin{tabular}{p{3.05cm} p{4.1cm} p{5.1cm}}
\hline\hline
HOL-Light Symbols & Standard Symbols & Meanings \\ \hline \hline
$\mathtt{/\backslash}$ & and & Logical $and$ \\ \hline
$\mathtt{\backslash/}$ & or & Logical $or$ \\ \hline
$\mathtt{\sim}$ & not & Logical $negation$ \\ \hline
$\mathtt{==>}$ & $ \longrightarrow$ & Implication\\ \hline
$\mathtt{<=>}$ & $ = $ & Equality in Boolean domain\\ \hline
$\mathtt{!x. t}$ & $ \forall x.t$ & For all $x$ : $t$ \\ \hline
$\mathtt{?x. t}$ & $ \exists x.t$ & There exists $x$ : $t$ \\ \hline
\texttt{$\lambda$x.t} & $\lambda x.t$ & Function that maps $x$ to $t(x)$ \\ \hline
$\mathtt{num}$ & $\{0,1,2,\ldots\}$ & Positive Integers data type \\ \hline
$\mathtt{real}$ & All Real numbers & Real data type \\ \hline
$\mathtt{SUC\ n}$& ($n + 1$)& Successor of natural number \\ \hline
$\mathtt{\& a}$ & $\mathbb{N} \rightarrow \mathbb{R}$ & Typecasting from Integers to Reals \\ \hline
$\mathtt{abs\ x}$ & $|x|$ & Absolute function \\ \hline
$\mathtt{EL\ n\ l}$ & $element$ & $n^{th}$ element of list l \\ \hline
\end{tabular}
}
\end{minipage}}
\end{table}
\subsection{Multivariable Calculus Theories in HOL-Light} \label{SEC:Mult_cal_theories}
An N-dimensional vector is represented as an $\mathds{R}^N$ column matrix with each of its elements being a real number in HOL-Light~\citep{harrison2013hol}. All of the vector operations are thus performed using matrix manipulations. A complex number is defined as a 2-dimensional vector, i.e., an $\mathds{R}^2$ column matrix. All of the multivariable calculus theorems are verified in HOL-Light for functions with an arbitrary data-type $\mathds{R^N} \rightarrow \mathds{R^M}$.
Some of the frequently used HOL-Light functions in our work are explained below:
\begin{defn}
\label{DEF:cx_and_ii}
\emph{Cx and ii} \\{\small
\textup{\texttt{$\vdash$ $\forall$ a. Cx a = complex (a, \&0) \\
$\mathtt{}$$\vdash$ ii = complex (\&0, \&1)
}}}
\end{defn}
\noindent $\mathtt{Cx}$ is a type casting function from real ($\mathds{R}$) to complex ($\mathds{R}^2$). It accepts a real number and returns its corresponding complex number with the imaginary part equal to zero, where the $\texttt{\&}$ operator type casts a natural number ($\mathds{N}$) to its corresponding real number ($\mathds{R}$). Similarly, $\mathtt{ii}$ (iota) represents a complex number having the real part equal to zero and the magnitude of the imaginary part equal to 1.
\begin{defn}
\label{DEF:re_im_lift_drop}
\emph{Re, Im, lift and drop} \\{\small
\textup{\texttt{$\vdash$ $\forall$ z. Re z = z\$1 \\
$\mathtt{}$$\vdash$ $\forall$ z. Im z = z\$2 \\
$\mathtt{}$$\vdash$ $\forall$ x. lift x = (lambda i. x) \\
$\mathtt{}$$\vdash$ $\forall$ x. drop x = x\$1
}}}
\end{defn}
The function $\mathtt{Re}$ accepts a complex number and returns its real part. Here, the notation $\mathtt{z\$i}$ represents the $i^{th}$ component of vector $\texttt{z}$. Similarly, $\mathtt{Im}$ takes a complex number and returns its imaginary part. The function $\mathtt{lift}$ accepts a variable of type $\mathds{R}$ and maps it to a 1-dimensional vector with the input variable as its single component. It uses the \texttt{lambda} operator in HOL to construct a vector based on its components~\citep{harrison2013hol}. Similarly, $\mathtt{drop}$ takes a 1-dimensional vector and returns its single element as a real number. In order to make the functions $\mathtt{lift}$ and $\mathtt{drop}$ easier to understand for a non-HOL user, we use $\mathtt{\overline{x}}$ and $\mathtt{\underline{x}}$ as the equivalent symbols for \texttt{lift x} and \texttt{drop x}, respectively.
\begin{defn}
\label{DEF:exp_ccos_csine}
\emph{Exponential, Complex Cosine and Sine Functions} \\{\small
\textup{\texttt{$\vdash$ $\forall$ x. exp x = Re (cexp (Cx x)) \\
$\mathtt{}$$\vdash$ $\forall$ z. ccos z = (cexp (ii $\ast$ z) + cexp (--ii $\ast$ z)) / Cx (\&2) \\
$\mathtt{}$$\vdash$ $\forall$ z. csin z = (cexp (ii $\ast$ z) - cexp (--ii $\ast$ z)) / (Cx (\&2) $\ast$ ii)
}}}
\end{defn}
The complex exponential and real exponentials are represented as $\texttt{cexp}:\mathds{R}^2 \rightarrow \mathds{R}^2$
and $\mathtt{exp}:\mathds{R} \rightarrow \mathds{R}$ in HOL-Light, respectively. Similarly, the complex cosine $\mathtt{ccos}$ and complex sine $\mathtt{csin}$ functions are formally defined in terms of $\texttt{cexp}$ using the Euler's formula~\citep{hol_light2016transcendentals}.
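These Euler-style definitions of $\mathtt{ccos}$ and $\mathtt{csin}$ can be mirrored outside the theorem prover; the following plain-Python spot check (purely illustrative, not part of the formal development) confirms that the $\mathtt{cexp}$-based expressions agree with the library cosine and sine for complex arguments:

```python
import cmath

# Euler-style definitions: cos z = (e^{iz} + e^{-iz}) / 2,
#                          sin z = (e^{iz} - e^{-iz}) / (2i),
# compared against cmath's built-in complex cosine and sine.
for z in (0.3 + 0.7j, -1.2 + 0.4j, 2.0 - 1.5j):
    ccos = (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2
    csin = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / (2j)
    assert abs(ccos - cmath.cos(z)) < 1e-12
    assert abs(csin - cmath.sin(z)) < 1e-12
```

Of course, such numerical agreement at sample points is no substitute for the HOL-Light theorems, which establish these identities for all complex arguments.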
\begin{defn}
\label{DEF:vector_integral}
\emph{Vector Integral and Real Integral} \\
{\small
\textup{\texttt{$\vdash$ $\forall$ f i. integral i f = (@y. (f has\_integral y) i)
}}} \\
{\small
\textup{\texttt{$\vdash$ $\forall$ f i. real\_integral i f = (@y. (f has\_real\_integral y) i)
}}}
\end{defn}
The function $\mathtt{integral}$ represents the vector integral and is defined using the Hilbert choice operator $\texttt{@}$ in the functional form. It takes the integrand function $\texttt{f}$, having an arbitrary type $\mathds{R}^N \rightarrow \mathds{R}^M$, and a vector-space $\mathtt{i}: \mathds{R}^N \rightarrow \mathds{B}$, which defines the region of convergence as $\mathds{B}$ represents the boolean data type, and returns a vector $\mathds{R}^M$ which is the integral of $\mathtt{f}$ on $\mathtt{i}$. The function $\mathtt{has\_integral}$ represents the same relationship in the relational form.
In other words, $\mathtt{has\_integral}$ is a predicate that accepts the integrand function, an integral value and the region of integration, and returns \texttt{T} if the integral of the integrand function over the region of integration is equal to the integral value, whereas the function $\mathtt{integral}$ accepts the integrand function and its region of integration and returns the value of the integral over the given region using the Hilbert choice operator $\texttt{@}$.
Similarly, the function $\mathtt{real\_integral}$ accepts the integrand function $\mathtt{f} : \mathds{R} \rightarrow \mathds{R}$ and a set of real numbers $\mathtt{i}: \mathds{R} \rightarrow \mathds{B}$ and returns the real-valued integral of the function $\mathtt{f}$ over $\mathtt{i}$.
The region of integration, for both of the above integrals can be defined to be bounded by a vector interval $[a, b]$ or real interval $[a, b]$ using the HOL-Light functions $\mathtt{interval \ [a,b]}$ and $\mathtt{real\_interval \ [a,b]}$, respectively.
\begin{defn}
\label{DEF:vector_derivative}
\emph{Vector Derivative and Real Derivative} \\
{\small
\textup{\texttt{$\vdash$ $\forall$ f net. vector\_derivative f net = (@f'. (f has\_vector\_derivative f') net)
}}} \\
{\small
\textup{\texttt{$\vdash$ $\forall$ f x. real\_derivative f x = (@f'. (f has\_real\_derivative f') (atreal x))
}}}
\end{defn}
The function $\mathtt{vector\_derivative}$ takes a function $\texttt{f} : \mathds{R}^1 \rightarrow \mathds{R}^M$ and a $\texttt{net} : \mathds{R}^1 \rightarrow \mathds{B}$, which defines the point at which $\texttt{f}$ has to be differentiated, and returns a vector of data-type $\mathds{R}^M$, which represents the differential of $\texttt{f}$ at $\texttt{net}$.
Moreover, depending on the usage of the definition, \texttt{net} can be specified as (\texttt{at a within s}) or (\texttt{at a}), which can be a point $\texttt{a} : \mathds{R}^1$ of a set $\texttt{s} : \mathds{R}^1 \rightarrow \mathds{B}$ or point $\texttt{a} : \mathds{R}^1$, respectively, where the function \texttt{f} has to be differentiated.
The function $\mathtt{has\_vector\_derivative}$ defines the same relationship in the relational form.
Similarly, the function $\mathtt{real\_derivative}$ accepts a function $\texttt{f} : \mathds{R} \rightarrow \mathds{R}$ and a real number $\texttt{x}$, which is the point at which $\texttt{f}$ has to be differentiated, and returns a variable of data-type $\mathds{R}$, which represents the differential of $\texttt{f}$ at $\texttt{x}$. The function $\mathtt{has\_real\_derivative}$ defines the same relationship in the relational form.
\begin{defn}
\label{DEF:limit_of_function}
\emph{Limit of a function} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f net. lim net f = (@l. (f $\rightarrow$ l) net)
}}}
\end{defn}
The function $\mathtt{lim}$ accepts a $\texttt{net}$ with elements of arbitrary data-type $\mathds{A}$ and a function $\texttt{f} : \mathds{A} \rightarrow \mathds{R}^M$ and returns $\texttt{l}$ of data-type $\mathds{R}^M$, i.e., the value to which $\texttt{f}$ converges at the given $\texttt{net}$. Moreover, \texttt{net} can take \texttt{at\_posinfinity} or \texttt{at\_neginfinity} to model the positive infinity or negative infinity, respectively.
We build upon the above-mentioned fundamental functions of multivariable calculus to formalize the Fourier transform in the next section.
\section{Formalization of Fourier Transform} \label{SEC:Formalization_of_Fourier}
The Fourier transform of a function $f(t)$ is mathematically defined as:
\begin{equation}\label{EQ:fourier_transform}
\mathcal{F} [f (t)] = F(\omega) = \int_{-\infty}^{+\infty} {f(t)e^{-i \omega t}} dt, \ \omega \ \epsilon \ \mathds{R}
\end{equation}
\vspace{2.5mm}
\noindent where $f$ is a function from $\mathds{R}^1 \rightarrow \mathds{C}$ and $\omega$ is a real variable. The limit of integration is from ${-\infty}$ to ${+\infty}$. We formalize Equation~\ref{EQ:fourier_transform} in HOL-Light as follows:
\begin{defn}
\label{DEF:fourier_transform}
\emph{Fourier Transform} \\{\small
\textup{\texttt{$\vdash$ $\forall$ w f. fourier\_transform f w = \\
$\mathtt{\ }$\hspace{2.80cm} integral UNIV ($\lambda$t. cexp (--((ii $\ast$ Cx w) $\ast$ Cx $\mathtt{\underline{t}}$)) $\ast$ f t)
}}}
\end{defn}
The function \texttt{fourier\_transform} accepts a complex-valued function $ \texttt{f}: \mathds{R}^1 \rightarrow \mathds{R}^2 $ and a real number $\texttt{w}$ and returns a complex number that is the Fourier transform of $ \texttt{f} $, as represented by Equation \ref{EQ:fourier_transform}. In the above function, we use the complex exponential function $ \texttt{cexp}: \mathds{R}^2 \rightarrow \mathds{R}^2 $ because the return data-type of the function $ \texttt{f} $ is $ \mathds{R}^2 $. To multiply $\texttt{w}$ with $ \texttt{ii} $, we first convert $\texttt{w}$ into a complex number ($ \mathds{R}^2 $) using $ \texttt{Cx} $. Similarly, the data-type of $ \texttt{t} $ is $ \mathds{R}^1 $, and to multiply it with $ \mathtt{ii \ast Cx \ w} $, it is first converted into a real number $\mathtt{\underline{t}}$ using $\texttt{drop}$ and then into data-type $ \mathds{R}^2 $ using $ \texttt{Cx} $. Next, we use the vector function $ \texttt{integral} $ (Definition \ref{DEF:vector_integral}) to integrate the expression $ f(t)e^{-i \omega t} $ over the whole real line, since the data-type of this expression is $ \mathds{R}^2 $. Since the region of integration of the vector integral function must be a vector space, we represent the interval of the integral by $ \texttt{UNIV}:\mathds{R}^1 $, which denotes the whole real line.
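As an informal numerical counterpart of this definition (plain Python, not HOL-Light; the quadrature parameters and truncation at $|t| = 20$ are arbitrary choices), the two-sided exponential $e^{-|t|}$, which is piecewise smooth and absolutely integrable on the whole real line, has the classical transform pair $F(\omega) = 2/(1+\omega^2)$:

```python
import math, cmath

def fourier(f, w, T=20.0, n=40000):
    # Midpoint-rule approximation of F(w) = integral of f(t) * e^{-i w t} dt
    # over [-T, T]; the neglected tails are O(e^{-T}) for this integrand.
    dt = 2 * T / n
    return sum(f(-T + (k + 0.5) * dt) * cmath.exp(-1j * w * (-T + (k + 0.5) * dt))
               for k in range(n)) * dt

# Two-sided exponential pair: f(t) = e^{-|t|}  <->  F(w) = 2 / (1 + w^2)
for w in (0.0, 0.5, 3.0):
    assert abs(fourier(lambda t: math.exp(-abs(t)), w) - 2 / (1 + w * w)) < 1e-3
```

Note that this function is defined over the whole real line (it is non-causal) and is not smooth at $t = 0$, yet it satisfies both conjuncts of the existence condition formalized next.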
The Fourier transform of a function $ f $ exists, i.e., the integrand of Equation \ref{EQ:fourier_transform} is integrable, and the integral has some converging limit value, if $f$ is piecewise smooth and is absolutely integrable on the whole real line~\citep{rashid2016formalization,beerends2003fourier,rashid2017tmformalization}. A function is said to be piecewise smooth on an interval if it is piecewise differentiable on that interval.
The Fourier existence condition can thus be formalized in HOL-Light as follows:
\begin{defn}
\label{DEF:fourier_exists}
\emph{Fourier Exists} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f. fourier\_exists f $\Leftrightarrow$ \\
$\mathtt{\ }$\hspace{0.5cm} ($\forall$ a b. f piecewise\_differentiable\_on interval [$\mathtt{\overline{a}}$, $\mathtt{\overline{b}}$]) $\wedge$ \\
$\mathtt{\ }$\hspace{1.85cm} f absolutely\_integrable\_on UNIV
}}}
\end{defn}
\noindent In the above function, the first conjunct expresses the piecewise smoothness condition for the function $\texttt{f}$.
Whereas, the second conjunct represents the condition that the function $\texttt{f}$ is absolutely integrable on the whole real line.
Next, we present an important property of Fourier existence:
\begin{thm}
\label{THM:linearity_prop_four_exist}
\emph{Linearity of Fourier Existence} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f g a b. fourier\_exists f $\wedge$ fourier\_exists g $\Rightarrow$ \\
$\mathtt{\ }$\hspace{5.0cm} fourier\_exists ($\lambda$x. a $\ast$ f x + b $\ast$ g x)
}}}
\end{thm}
\noindent where $ \texttt{a}: \mathds{C} $ and $ \texttt{b}: \mathds{C} $ are arbitrary constants acting as the scaling factors.
The proof of the above theorem is based on the linearity properties of integration, limits and piecewise differentiability.
\vspace*{-2mm}
\section{Formal Verification of Fourier Transform Properties} \label{SEC:Formal_verif_Fourier_properties}
In this section, we use Definitions~\ref{DEF:fourier_transform} and~\ref{DEF:fourier_exists} and Theorem~\ref{THM:linearity_prop_four_exist} to verify some of the classical properties of the Fourier transform as well as its relationships with various transforms, namely the Fourier cosine, Fourier sine and Laplace transforms, and the Fourier transform of an $n^{th}$-order differential equation in HOL-Light. The verification of these properties and relationships not only ensures the correctness of our definitions but also plays a vital role in minimizing the user intervention and time required in reasoning about the Fourier transform based frequency domain analysis of continuous-time systems, as will be depicted in Section \ref{SEC:applications} of this paper.
\subsection{Properties of Fourier Transform}
The existence of the improper integral of Fourier Transform is a pre-condition for most of the arithmetic manipulations involving the Fourier transform. This condition is formalized in HOL-Light as the following theorem:
\begin{thm}
\label{THM:prop_01_integrable_univ}
\emph{Integrability of Integrand of Fourier Transform Integral} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f w. fourier\_exists f $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.9cm} ($\lambda$t. cexp (--((ii $\ast$ Cx w) $\ast$ Cx $\mathtt{\underline{t}}$)) $\ast$ f t) integrable\_on UNIV
}}}
\end{thm}
The proof of Theorem~\ref{THM:prop_01_integrable_univ} is based on splitting the region of integration, i.e., the whole real line $\mathtt{UNIV}:\mathds{R}^1$, as a union of the positive real line (interval $[0,\infty)$) and the negative real line (interval $(-\infty, 0]$). Next, we split the complex-valued integrand, $f(t)e^{-i \omega t}$, into its corresponding real and imaginary parts. In this process, we need the integrability of the integrand, which can be derived from the piecewise differentiability conjunct of the \texttt{fourier\_exists} condition.
Finally, some theorems regarding integration, integrability, continuity and some properties of the transcendental functions are used to conclude the proof of Theorem~\ref{THM:prop_01_integrable_univ}.
Next, we verified some of the classical properties of Fourier transform, given in Table~\ref{TAB:properties_of_Fourier_transform}.
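Before the formally verified statements, the following plain-Python sketch spot-checks three of these properties (time delay, frequency shift and time scaling) numerically on a Gaussian test function; this is purely illustrative and independent of the HOL-Light proofs, and the sample parameters \texttt{w}, \texttt{t0}, \texttt{w0} and \texttt{a} are arbitrary choices:

```python
import math, cmath

def fourier(f, w, T=12.0, n=24000):
    # Midpoint-rule approximation of F(w) = integral of f(t) * e^{-i w t} dt
    # over [-T, T]; truncation error is negligible for the Gaussian below.
    dt = 2 * T / n
    return sum(f(-T + (k + 0.5) * dt) * cmath.exp(-1j * w * (-T + (k + 0.5) * dt))
               for k in range(n)) * dt

g = lambda t: math.exp(-t * t)      # Gaussian test function
w, t0, w0, a = 1.3, 0.7, 2.0, 2.0   # arbitrary sample parameters
F = fourier(g, w)

# Time delay:      F[f(t - t0)](w) = e^{-i w t0} * F(w)
assert abs(fourier(lambda t: g(t - t0), w) - cmath.exp(-1j * w * t0) * F) < 1e-4
# Frequency shift: F[e^{i w0 t} f(t)](w) = F(w - w0)
assert abs(fourier(lambda t: cmath.exp(1j * w0 * t) * g(t), w) - fourier(g, w - w0)) < 1e-4
# Time scaling:    F[f(a t)](w) = (1 / |a|) * F(w / a)
assert abs(fourier(lambda t: g(a * t), w) - fourier(g, w / a) / abs(a)) < 1e-4
```

Such checks only exercise one test function at a few parameter values; the HOL-Light theorems in the table establish the properties universally, under the explicitly stated \texttt{fourier\_exists} assumptions.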
\begin{scriptsize}
\begin{longtable}{|p{2cm}|p{3cm}|p{7cm}|p{3cm}|}
\caption{Properties of Fourier Transform}
\label{TAB:properties_of_Fourier_transform}
\endfirsthead
\endhead
\hline
\hline
\multicolumn{1}{l}{Mathematical Form} &
\multicolumn{1}{l}{\hspace{-0.4cm} Formalized Form}
\\ \hline \hline
\multicolumn{2}{c}{\textbf{Linearity}} \\ \hline
\multicolumn{1}{l}{ {$\begin{array} {lcl} \hspace{-0.2cm} \textit{$ \mathcal{F} [ \alpha f(t) + \beta g(t)] = $ } \\
\textit{$\mathtt{\ }$\hspace{0.4cm} $\alpha F(\omega) + \beta G(\omega) $ }
\end{array}$} } &
\multicolumn{1}{l}{{ $\begin{array} {lcl} \textup{\texttt{\hspace{-0.4cm}$\vdash$ $\forall$ f g w a b. fourier\_exists f $\wedge$ fourier\_exists g $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.0cm} fourier\_transform ($\lambda$t. a $\ast$ f t + b $\ast$ g t) w = }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.5cm} a $\ast$ fourier\_transform f w + b $\ast$ fourier\_transform g w }}
\end{array}$}} \\ \hline
\multicolumn{2}{c}{\textbf{Time Shifting (Time Advance and Time Delay)}} \\ \hline
\multicolumn{1}{l}{ {$\begin{array} {lcl} \hspace{-0.2cm} \textit{$ \mathcal{F} [f(t + t0)] = F(\omega)e^{+ i \omega t0} $ }
\end{array}$} } &
\multicolumn{1}{l}{ {$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w t0. fourier\_exists f $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.2cm} fourier\_transform ($\lambda$t. f (t + t0)) w = }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.0cm} fourier\_transform f w $\ast$ cexp ((ii $\ast$ Cx w) $\ast$ Cx $\mathtt{\underline{t0}}$) \hspace{-1.0cm} }}
\end{array}$} } \\ \hline
\multicolumn{1}{l}{ {$\begin{array} {lcl} \hspace{-0.2cm} \textit{$ \mathcal{F} [f(t - t0)] = F(\omega)e^{-i \omega t0} $ }
\end{array}$} } &
\multicolumn{1}{l}{ {$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w t0. fourier\_exists f $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.2cm} fourier\_transform ($\lambda$t. f (t - t0)) w = }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.0cm} fourier\_transform f w $\ast$ cexp (--((ii $\ast$ Cx w) $\ast$ Cx $\mathtt{\underline{t0}}$)) \hspace{-1.0cm} }}
\end{array}$} } \\ \hline
\multicolumn{2}{c}{\textbf{Frequency Shifting (Right and Left Shifting)}} \\ \hline
\multicolumn{1}{l}{ {$\begin{array} {lcl} \hspace{-0.2cm} \textit{$ \mathcal{F} [ e^{+i \omega _0 t} f(t)] = F(\omega - \omega _0) $ }
\end{array}$} } &
\multicolumn{1}{l}{ {$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w w0. fourier\_exists f $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.2cm} fourier\_transform ($\lambda$t. cexp ((ii $\ast$ Cx w0) $\ast$ Cx $\mathtt{\underline{t}}$) $\ast$ f t) w = \hspace{-1.0cm} }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.0cm} fourier\_transform f (w - w0) }}
\end{array}$} } \\ \hline
\multicolumn{1}{l}{ {$\begin{array} {lcl} \hspace{-0.2cm} \textit{$ \mathcal{F} [ e^{-i \omega _0 t} f(t)] = F(\omega + \omega _0) $ }
\end{array}$} } &
\multicolumn{1}{l}{ {$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w w0. fourier\_exists f $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.4cm} fourier ($\lambda$t. cexp (--(ii $\ast$ Cx w0) $\ast$ Cx $\mathtt{\underline{t}}$) $\ast$ f t) w = \hspace{-1.0cm} }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.0cm} fourier f (w + w0) }}
\end{array}$} } \\ \hline
\multicolumn{2}{c}{\textbf{Modulation (Cosine and Sine Based Modulation)}} \\ \hline
\multicolumn{1}{l}{ {$\begin{array} {lcl} \hspace{-0.2cm} \textit{$ \mathcal{F}[cos(\omega_0 t) f(t)] = $ } \\
\textit{$\mathtt{\ }$\hspace{-0.2cm} $\dfrac{F(\omega - \omega _0) + F(\omega + \omega _0)}{2} $ }
\end{array}$} } &
\multicolumn{1}{l}{{$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w w0. fourier\_exists f $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.4cm} fourier\_transform ($\lambda$t. ccos (Cx w0 $\ast$ Cx $\mathtt{\underline{t}}$) $\ast$ f t) w = }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.0cm} (fourier\_transform f (w - w0) + }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{1.5cm} fourier\_transform f (w + w0)) / Cx (\&2) }}
\end{array}$}} \\ \hline
\multicolumn{1}{l}{ {$\begin{array} {lcl} \hspace{-0.2cm} \textit{$ \mathcal{F}[sin(\omega_0 t) f(t)] = $ } \\
\textit{$\mathtt{\ }$\hspace{-0.2cm} $ \dfrac{F(\omega - \omega _0) - F(\omega + \omega _0)}{2i} $ }
\end{array}$} } &
\multicolumn{1}{l}{{$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w w0. fourier\_exists f $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.4cm} fourier\_transform ($\lambda$t. csin (Cx w0 $\ast$ Cx $\mathtt{\underline{t}}$) $\ast$ f t) w = }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.0cm} (fourier\_transform f (w - w0) - }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{1.5cm} fourier\_transform f (w + w0)) / (Cx (\&2) $\ast$ ii) }}
\end{array}$}} \\ \hline
\multicolumn{2}{c}{\textbf{Time Scaling}} \\ \hline
\multicolumn{1}{l}{ $ \mathcal{F}[f(at)] = \dfrac{1}{|a|}F(\dfrac{\omega}{a}) $ } &
\multicolumn{1}{l}{{$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w a. fourier\_exists f $\wedge$ $\sim$(a = \&0) $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.2cm} fourier\_transform ($\lambda$t. f (a \% t)) w = }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.2cm} (Cx (\&1) / Cx (abs a)) $\ast$ fourier\_transform f (w / a) }}
\end{array}$}} \\ \hline
\multicolumn{2}{c}{\textbf{Time Reversal}} \\ \hline
\multicolumn{1}{l}{ $ \mathcal{F}[f(-t)] = F(-\omega) $ } &
\multicolumn{1}{l}{{$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w. fourier\_exists f $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.4cm} fourier\_transform ($\lambda$t. f (--t)) w = fourier\_transform f (--w) \hspace{-1.0cm} }}
\end{array}$}} \\ \hline
\multicolumn{2}{c}{\textbf{First-order Differentiation}} \\ \hline
\multicolumn{1}{l}{ $ \mathcal{F} [\dfrac{d}{dt}f(t) ] = i \omega F(\omega) $ } &
\multicolumn{1}{l}{{$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w. fourier\_exists f $\wedge$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.2cm} fourier\_exists ($\lambda$t. vector\_derivative f (at t)) $\wedge$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.2cm} ($\forall$t. f differentiable at t) $\wedge$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.2cm} (($\lambda$t. f $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_posinfinity $\wedge$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.2cm} (($\lambda$t. f $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_neginfinity }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.2cm} $\Rightarrow$ fourier\_transform ($\lambda$t. vector\_derivative f (at t)) w = \hspace{-1.0cm} }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{2.80cm} ii $\ast$ Cx w $\ast$ fourier\_transform f w }}
\end{array}$}} \\ \hline
\multicolumn{2}{c}{\textbf{Higher-order Differentiation}} \\ \hline
\multicolumn{1}{l}{$ \mathcal{F} [\dfrac{d^n}{{dt}^n}f(t)] = (i \omega)^n F(\omega)$ } &
\multicolumn{1}{l}{{$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f w n. fourier\_exists\_higher\_deriv n f $\wedge$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.2cm} ($\forall$t. differentiable\_higher\_derivative n f t) $\wedge$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.2cm} ($\forall$k. k < n $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.0cm} (($\lambda$t. higher\_vector\_derivative k f $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{1.2cm} at\_posinfinity) $\wedge$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.2cm} ($\forall$k. k < n $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{0.0cm} (($\lambda$t. higher\_vector\_derivative k f $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{1.2cm} at\_neginfinity) }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.5cm} $\Rightarrow$ fourier\_transform ($\lambda$t. higher\_vector\_derivative n f t) w = \hspace{-1.0cm} }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{2.14cm} (ii $\ast$ Cx w) pow n $\ast$ fourier\_transform f w }}
\end{array}$}} \\ \hline
\multicolumn{2}{c}{\textbf{Area Under a Function}} \\ \hline
\multicolumn{1}{l}{ $ \int_{-\infty}^{\infty} {f (t)} dt = F(0) $ } &
\multicolumn{1}{l}{{$\begin{array} {lcl} \textup{\texttt{\hspace{-0.3cm}$\vdash$ $\forall$ f. fourier\_exists f $\Rightarrow$ }} \\
\textup{\texttt{$\mathtt{\ }$\hspace{-0.3cm} integral UNIV f = fourier\_transform f (\&0) }}
\end{array}$}} \\ \hline
\end{longtable}
\end{scriptsize}
The first property is \textit{linearity}, which is frequently used in the analysis of systems that are composed of subsystems and accept differently scaled inputs.
Next, we verified the \textit{time shifting} property, which is usually used to evaluate the Fourier transform of the function $f$ shifted by some constant value of time. The shift of the function $f$ can be towards the left of the origin of the time axis (time advance) or towards the right of the origin of the time axis (time delay). Its mathematical and formalized forms are given in Table~\ref{TAB:properties_of_Fourier_transform}.
The \textit{frequency shifting} property of the Fourier transform is usually used to evaluate the Fourier transform of the product of the function $f$ with a complex exponential. It shifts the frequency domain representation of $f$ to a certain portion of the frequency spectrum, which is desired for the corresponding frequency analysis. Similar to time shifting, frequency shifting is of two types: the frequency right shifting (frequency delay) shifts the frequency signal to the right on the frequency axis,
and the frequency left shifting (frequency advance) shifts the frequency signal to the left on the frequency axis. The mathematical and the formalized forms of both versions of the frequency shifting are given in Table~\ref{TAB:properties_of_Fourier_transform}.
The next entry in Table~\ref{TAB:properties_of_Fourier_transform} presents a variant of frequency shifting, called the \textit{modulation} property, which is usually used to evaluate the Fourier transform of the product of the function $f$ with cosine and sine functions. This property forms the basis of Amplitude Modulation (AM) in communication systems. The multiplication of the sinusoidal functions (carrier signals) with the function $f$ in the time domain shifts the frequency components to the portion of the frequency spectrum that is desired for a particular signal transmission.
Next, we verified the \textit{time scaling} property of the Fourier transform of a function $ f $, as given in Table~\ref{TAB:properties_of_Fourier_transform}. Here $ a: \mathds{R} $ is an arbitrary non-zero constant. If $|a| > 1$, then the function $f (at)$ represents the function $f$ compressed by a factor of $a$ and its resulting frequency spectrum is expanded by the same factor. Similarly, in the case of $|a| < 1$, the function $f (at)$ is expanded by the factor $a$ and its corresponding frequency spectrum is compressed by the same factor.
The next property is the \textit{time reversal} property, which is a special case of time scaling property, under the condition $a = -1$.
The Fourier transform of the \textit{derivative of a function} $ f $ is a very important property that enables us to evaluate the frequency spectrum of the derivative of a function $ f $ using the Fourier transform of $ f $. Its mathematical and formalized forms are presented in Table~\ref{TAB:properties_of_Fourier_transform}. In the formalized form, the first two assumptions ensure that the Fourier transforms of the function $\texttt{f}$ and its derivative $ \frac{df}{dt} $ exist. The third assumption models the condition that the function $ \texttt{f} $ is differentiable at every $\texttt{t} \ \epsilon \ \mathds{R}$. The last two assumptions represent the condition that $ \lim\limits_{t \to \pm\infty} {f (t)} = 0 $. Finally, the conclusion provides the Fourier transform of the first-order derivative of the given function.
The proof of this property involves a significant amount of arithmetic reasoning, along with integration by parts, the fact that $ f(t)e^{-i \omega t} {\mid}_{-\infty}^{\infty} = \lim\limits_{B \to \infty} {f (B)e^{-i \omega B}} - \lim\limits_{A \to - \infty} {f (A)e^{-i \omega A}} = 0 $, and the integrability of the Fourier integrand on the positive and negative real lines.
The next property is the \textit{Fourier transform of an n-times continuously differentiable function} $ f $, which is the foremost foundational property for analysing higher-order differential equations based on the Fourier transform. In its formalized form, the first assumption ensures the Fourier transform existence of $\texttt{f}$ and its first $\texttt{n}$ higher-order derivatives. Similarly, the second assumption ensures the differentiability of $\texttt{f}$ and its first $\texttt{n}$ higher-order derivatives for all $ \texttt{t} \ \epsilon \ \mathds{R} $. The next two assumptions model the condition $ \lim\limits_{t \to \pm\infty} {f^{(k)} (t)} = 0 $ for each $ k = 0, 1, 2, \dots, n - 1 $, where $ f^{(k)} $ denotes the $k^{th}$ derivative of $\texttt{f}$ and $ f^{(0)} = \texttt{f} $. Finally, the conclusion is the Fourier transform of the $n^{th}$-order derivative of the function. Its proof is mainly based on induction on the variable $\texttt{n}$ along with the Fourier transform of the first-order derivative of the given function.
The Fourier transform can be used to evaluate the \textit{area under a function} $f$, as given in the final entry of Table~\ref{TAB:properties_of_Fourier_transform}.
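Several of the verified properties in Table~\ref{TAB:properties_of_Fourier_transform} can be spot-checked numerically outside HOL-Light. The following Python sketch (an illustration only, not part of the formalization; it assumes NumPy and SciPy are available) approximates the Fourier integral of a Gaussian $f(t) = e^{-t^2/2}$, whose transform $\sqrt{2\pi}\,e^{-\omega^2/2}$ is known in closed form, and checks the frequency shifting, first-order differentiation and area properties:

```python
import numpy as np
from scipy.integrate import quad

def fourier(f, w):
    # Approximate F(w) = integral of f(t) e^(-i w t) dt by quadrature,
    # splitting the complex integrand into real and imaginary parts
    re, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).real, -np.inf, np.inf)
    im, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).imag, -np.inf, np.inf)
    return re + 1j * im

f = lambda t: np.exp(-t**2 / 2)                       # Gaussian test function
F = lambda w: np.sqrt(2 * np.pi) * np.exp(-w**2 / 2)  # its known transform

w, w0 = 1.0, 0.5
# Frequency shifting: F[e^{-i w0 t} f(t)](w) = F(w + w0)
shift = fourier(lambda t: np.exp(-1j * w0 * t) * f(t), w)
assert abs(shift - F(w + w0)) < 1e-6
# First-order differentiation: F[f'](w) = i w F(w), with f'(t) = -t f(t)
deriv = fourier(lambda t: -t * f(t), w)
assert abs(deriv - 1j * w * F(w)) < 1e-6
# Area under the function: integral of f equals F(0)
area, _ = quad(f, -np.inf, np.inf)
assert abs(area - F(0)) < 1e-6
```

Such a numeric check is of course no substitute for the formal proofs, but it is a quick way to confirm that the stated identities use the same transform convention as Definition~\ref{DEF:fourier_transform}.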
\subsection{Relationship with Various Transforms}
This section presents the relationship of Fourier transform with various transforms, which include Fourier Cosine, Fourier Sine and Laplace transforms.
\subsubsection{Relationship with Fourier Cosine and Fourier Sine Transforms}\label{SUBSEC:relation_fourier_consine_sine}
The Fourier transforms of even and odd functions enable us to relate the Fourier transform to the Fourier Cosine and Fourier Sine transforms. The Fourier Cosine transform is mathematically expressed by the following improper integral:
\begin{equation}\label{EQ:fourier_cosine}
F_c(\omega) = \int_{-\infty}^{+\infty} {f(t)cos(\omega t)} dt
\end{equation}
\vspace{2.5mm}
If the input function is an even function, i.e., $ f(-t) = f(t)$ for all $ t \ \epsilon \ \mathds{R} $, then its Fourier transform is equal to its Fourier Cosine transform.
We verify the even function property as the following theorem:
\begin{thm}
\label{THM:fourier_even_function}
\emph{Fourier Transform of Even Function} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f w. fourier\_exists f $\wedge$ ($\forall$t. f (--t) = f t) \\
$\mathtt{\ }$\hspace{3.0cm} $\Rightarrow$ fourier\_transform f w = fourier\_cosine\_transform f w
}}}
\end{thm}
\noindent In the above theorem, the two assumptions ensure the Fourier existence of $\texttt{f}$ and model the even function condition, respectively. The conclusion presents the relationship of Fourier transform to Fourier Cosine transform.
Next, the Fourier Sine transform is mathematically expressed as:
\begin{equation}\label{EQ:fourier_sine}
F_s(\omega) = \int_{-\infty}^{+\infty} {f(t)sin(\omega t)} dt
\end{equation}
\vspace{2.5mm}
If the input function is an odd function, i.e., $ f(-t) = - f(t)$ for all $ t \ \epsilon \ \mathds{R} $, then its Fourier transform is equal to its Fourier Sine transform.
The odd function property is verified in HOL-Light as the following theorem:
\begin{thm}
\label{THM:fourier_odd_function}
\emph{Fourier Transform of Odd Function} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f w. fourier\_exists f $\wedge$ ($\forall$t. f (--t) = --f t) \\
$\mathtt{\ }$\hspace{2.5cm} $\Rightarrow$ fourier\_transform f w = --ii $\ast$ fourier\_sine\_transform f w
}}}
\end{thm}
In the above theorem, the first assumption presents the condition of the Fourier existence of the function $\texttt{f}$, whereas the second assumption models the odd function condition.
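Both relationships can be illustrated numerically for sample even and odd functions. The sketch below (illustrative only, not part of the HOL-Light development; it assumes NumPy and SciPy) checks Theorem~\ref{THM:fourier_even_function} for $e^{-t^2}$ and Theorem~\ref{THM:fourier_odd_function} for $t\,e^{-t^2}$:

```python
import numpy as np
from scipy.integrate import quad

def fourier(f, w):
    # F(w) = integral of f(t) e^(-i w t) dt, real and imaginary parts separately
    re, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).real, -np.inf, np.inf)
    im, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).imag, -np.inf, np.inf)
    return re + 1j * im

w = 1.3
# Even function: Fourier transform equals the Fourier Cosine transform
even = lambda t: np.exp(-t**2)
cosine, _ = quad(lambda t: even(t) * np.cos(w * t), -np.inf, np.inf)
assert abs(fourier(even, w) - cosine) < 1e-6
# Odd function: Fourier transform equals -i times the Fourier Sine transform
odd = lambda t: t * np.exp(-t**2)
sine, _ = quad(lambda t: odd(t) * np.sin(w * t), -np.inf, np.inf)
assert abs(fourier(odd, w) - (-1j) * sine) < 1e-6
```

The even case produces a purely real transform and the odd case a purely imaginary one, mirroring the $-i$ factor in Theorem~\ref{THM:fourier_odd_function}.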
\subsubsection{Relationship with Laplace Transform}\label{SUBSEC:relation_fourier_laplace}
By restricting the complex-valued function $f:\mathds{R}^1 \rightarrow \mathds{R}^2$ and the variable $s:\mathds{R}^2$ of the Laplace transform, we can establish a very important relationship between the Fourier and Laplace transforms.
The Laplace transform of a function $f$ is given by the following equation:
\begin{equation}\label{EQ:laplace_transform}
F(s) = \int_{0}^{\infty} {f(t)e^{-s t}} dt, \ s \ \epsilon \ \mathds{C}
\end{equation}
\vspace{2.5mm}
A formalized form of the Laplace transform is as follows~\citep{taqdees2013formalization}:
\begin{defn}
\label{DEF:laplace_transform}
\emph{Laplace Transform} \\{\small
\textup{\texttt{$\vdash$ $\forall$ s f. laplace\_transform f s = \\
$\mathtt{\ }$\hspace{1.2cm} lim at\_posinfinity ($\lambda$b. integral (interval [$\mathtt{\overline{\&0}}$, $\mathtt{\overline{b}}$]) \\
$\mathtt{\ }$\hspace{6.75cm} ($\lambda$t. cexp (--(s $\ast$ Cx $\mathtt{\underline{t}}$)) $\ast$ f t))
}}}
\end{defn}
The Laplace transform of a function $f$ exists if the function $\mathtt{f}$ is piecewise smooth and of exponential order on the positive real line.
The existence of the Laplace transform has been formally defined as follows~\citep{taqdees2013formalization,rashid2017tmformalization}:
\begin{defn}
\label{DEF:laplace_existence}
\emph{Laplace Exists} \\{\small
\textup{\texttt{$\vdash$ $\forall$ s f. laplace\_exists f s $\Leftrightarrow$ \\
$\mathtt{\ }$\hspace{1.2cm} ($\forall$ b. f piecewise\_differentiable\_on interval [$\mathtt{\overline{\&0}}$, $\mathtt{\overline{b}}$]) $\wedge$ \\
$\mathtt{\ }$\hspace{1.2cm} ($\exists$ M a. Re s > $\mathtt{\underline{a}}$ $\wedge$ exp\_order f M a)
}}}
\end{defn}
The function $\texttt{exp\_order}$ in the above definition has been formally defined as~\citep{taqdees2013formalization,rashid2017tmformalization}:
\begin{defn}
\label{DEF:exp_order_condition}
\emph{Exponential Order Function} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f M a. exp\_order f M a $\Leftrightarrow$ \&0 < M $\wedge$ \\
$\mathtt{\ }$\hspace{3.0cm} ($\forall$ t. \&0 <= t $\Rightarrow$ norm (f $\mathtt{\overline{t}}$) <= M $\ast$ exp ($\mathtt{\underline{a}}$ $\ast$ t))
}}}
\end{defn}
If the function $f$ is causal, i.e., $f (t) = 0$ for all $t < 0$ and the real part of Laplace variable $ \mathtt{s: R^2} $ is zero, i.e., $ \textit{Re s = 0} $, then the Fourier transform of function $f$ is equal to Laplace transform, i.e., $ {(\mathcal{F} f)(Im \ s) = (\mathcal{L} f)(s)\mid_{\textit{Re s = 0}}} $~\citep{thomas2016analysis}.
The above relationship is verified in HOL-Light as follows:
\begin{thm}
\label{THM:relation_fourier_laplace}
\emph{Relationship with Laplace Transform} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f s. laplace\_exists f s $\wedge$ \\
$\mathtt{\ }$\hspace{0.2cm} ($\forall$t. t IN \{t | $\mathtt{\underline{t}}$ <= \&0\} $\Rightarrow$ f t = vec 0) $\wedge$ ($\forall$t. Re s = \&0) \\
$\mathtt{\ }$\hspace{3.8cm} $\Rightarrow$ fourier\_transform f (Im s) = laplace\_transform f s
}}}
\end{thm}
The first assumption of the above theorem ensures the existence of the Laplace transform. The next two assumptions ensure that $\texttt{f}$ is a causal function and that the real part of the Laplace variable $\texttt{s}$ is zero. The proof of the above theorem is mainly based on the integrability of the Fourier integrand on the positive and negative real lines, properties of the complex exponential,
and the following important lemma:
\begin{lem}
\label{THM:laplace_alternate_representation}
\emph{Alternative Representation of Laplace Transform} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f s. laplace\_exists f s $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.0cm} laplace\_transform f s = \\
$\mathtt{\ }$\hspace{1.0cm} integral \{t | \&0 <= $\mathtt{\underline{t}}$\} ($\lambda$t. cexp (--(s $\ast$ Cx $\mathtt{\underline{t}}$)) $\ast$ f t)
}}}
\end{lem}
\noindent The above lemma presents an alternative representation of the Laplace transform, given in Definition \ref{DEF:laplace_transform}.
This alternate representation of the Laplace transform, like the formalization of the Fourier transform given in Definition~\ref{DEF:fourier_transform}, improves on the formal definition (Definition~\ref{DEF:laplace_transform}) presented in~\citep{taqdees2013formalization}, which involves the notion of a limit. Since the HOL-Light definition of the integral function implicitly encompasses infinite limits of integration, we do not need the notion of a limit. Hence, this alternate representation covers the region of integration, i.e., $[0, \infty)$, as \texttt{\small{\{t | \&0 <= drop t\}}} and is equivalent to the definition of the Laplace transform given by Definition~\ref{DEF:laplace_transform}. Similarly, the region of integration for the Fourier transform, i.e., $(-\infty, \infty)$, is modeled as \texttt{\small{UNIV}}.
This alternative representation (Lemma~\ref{THM:laplace_alternate_representation}) can facilitate the formal reasoning process for Laplace transform related properties and thus can be very useful towards the formalization of the inverse Laplace transform function and the verification of its associated properties.
Moreover, the formal definition of the Fourier transform presented as Definition~\ref{DEF:fourier_transform} considerably simplifies the reasoning process in the verification of its properties.
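The relationship of Theorem~\ref{THM:relation_fourier_laplace} can also be observed numerically. The sketch below (an illustration only, assuming NumPy and SciPy) takes the causal function $f(t) = e^{-ct}$ for $t \geq 0$, whose Laplace transform is $1/(s+c)$, and checks that its Fourier transform at $\omega$ matches the Laplace transform at the purely imaginary point $s = i\omega$:

```python
import numpy as np
from scipy.integrate import quad

c, w = 2.0, 0.7
s = 1j * w   # a purely imaginary Laplace variable, i.e., Re s = 0

# f(t) = e^{-c t} for t >= 0 and 0 otherwise (a causal function), so the
# Fourier integral over the whole real line reduces to [0, oo)
integrand = lambda t: np.exp(-c * t) * np.exp(-1j * w * t)
re, _ = quad(lambda t: integrand(t).real, 0, np.inf)
im, _ = quad(lambda t: integrand(t).imag, 0, np.inf)
fourier_val = re + 1j * im

# Laplace transform of e^{-c t} is 1/(s + c); at s = i w it should
# coincide with the Fourier transform evaluated at w
laplace_val = 1.0 / (s + c)
assert abs(fourier_val - laplace_val) < 1e-6
```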
\subsection{Differential Equation}\label{SUBSEC:differential_eq_property}
Differential equations are widely used to mathematically model the complex dynamics of a continuous-time system and hence characterize the behavior of the system at each time instant.
A general linear differential equation can be mathematically expressed as follows:
\begin{equation} \label{EQ:diff_eqn_nth_order}
\begin{split}
\textit{Differential} \ \textit{Equation} & = \sum _{k = 0}^{n} {{\alpha}_k \dfrac{d^ky}{{dt}^k}} \\
& = {{\alpha}_n \dfrac{d^ny}{{dt}^n}} + {{\alpha}_{n-1} \dfrac{d^{n-1}y}{{dt}^{n-1}}} + ... + {{\alpha}_1 \dfrac{d^1y}{{dt}^1}} + {{\alpha}_0 y}
\end{split}
\end{equation}
\noindent where $ n $ is the order of the differential equation and the $ \alpha_k $ are the constant coefficients. The Fourier transform of the above $n^{th}$-order differential equation is given by the following mathematical expression:
\begin{equation}\label{EQ:ft_diff_eqn_nth_order}
\mathcal{F} \Big( \sum _{k = 0}^{n} {{\alpha}_k \dfrac{d^ky}{{dt}^k}} \Big) = Y(\omega) \ \sum _{k = 0}^{n} {{\alpha}_k {(i \omega)}^k}
\end{equation}
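Equation~\ref{EQ:ft_diff_eqn_nth_order} can be spot-checked numerically for a concrete second-order instance. The following Python sketch (illustrative only, not part of the formalization; it assumes NumPy and SciPy) takes $y(t) = e^{-t^2/2}$, for which $y'(t) = -t\,y(t)$ and $y''(t) = (t^2 - 1)\,y(t)$, and compares the Fourier transform of $\alpha_0 y + \alpha_1 y' + \alpha_2 y''$ with $Y(\omega)\sum_k \alpha_k (i\omega)^k$:

```python
import numpy as np
from scipy.integrate import quad

# y(t) = e^{-t^2/2}, with y'(t) = -t y(t) and y''(t) = (t^2 - 1) y(t)
y = lambda t: np.exp(-t**2 / 2)
Y = lambda w: np.sqrt(2 * np.pi) * np.exp(-w**2 / 2)  # known transform of y

a0, a1, a2 = 2.0, 3.0, 1.0      # coefficients of y, y', y''
w = 0.7

def fourier(f, w):
    # F(w) = integral of f(t) e^(-i w t) dt via quadrature
    re, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).real, -np.inf, np.inf)
    im, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).imag, -np.inf, np.inf)
    return re + 1j * im

# Left side: Fourier transform of a0*y + a1*y' + a2*y''
lhs = fourier(lambda t: (a0 + a1 * (-t) + a2 * (t**2 - 1)) * y(t), w)
# Right side: Y(w) * sum over k of a_k (i w)^k
rhs = Y(w) * (a0 + a1 * (1j * w) + a2 * (1j * w)**2)
assert abs(lhs - rhs) < 1e-6
```

The Gaussian satisfies the vanishing-at-infinity side conditions of the theorem, so the identity holds exactly up to quadrature error.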
We formalize the above differential equation using the following definition in HOL-Light:
\begin{defn}
\label{DEF:diff_eqn_nth_order_con_coef}
\emph{Differential Equation of Order $n$} \\{\small
\textup{\texttt{$\vdash$ $\forall$ n lst f t. differential\_equation n lst f t = \\
$\mathtt{\ }$\hspace{2.8cm} vsum (0..n) ($\lambda$k. EL k lst $\ast$ higher\_order\_derivative k f t)
}}}
\end{defn}
The function $\mathtt{differential\_equation}$ accepts the order of the differential equation $\texttt{n}$, a list of constant coefficients $\texttt{lst}$, a differentiable function $\texttt{f}$ and the differentiation variable $\texttt{t}$. It utilizes the functions $\mathtt{vsum \ n \ f}$ and $\mathtt{EL \ m \ lst}$, which return the vector summation $\sum_{i=0}^{n}f_i$ and the $m^{th}$ element of a list $\texttt{lst}$, respectively, to generate the differential equation corresponding to the given parameters.
Next, we verify the Fourier transform of a linear differential equation, which is expected to be the most widely used result of our formalization as depicted in Sections~\ref{SEC:formal_analysis_generic_n_order_sys} and~\ref{SEC:applications}, and is given by the following theorem in HOL-Light.
\begin{thm}
\label{THM:fourier_transform_of_diff_equation}
\emph{Fourier Transform of Differential Equation of Order $n$} \\{\small
\textup{\texttt{$\vdash$ $\forall$ f lst w n. fourier\_exists\_higher\_deriv n f $\wedge$ \\
$\mathtt{\ }$\hspace{0.2cm} ($\forall$t. differentiable\_higher\_derivative n f t) $\wedge$ \\
$\mathtt{\ }$\hspace{0.2cm} ($\forall$k. k < n $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.2cm}(($\lambda$t. higher\_vector\_derivative k f $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_posinfinity) $\wedge$ \\
$\mathtt{\ }$\hspace{0.2cm} ($\forall$k. k < n $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.2cm}(($\lambda$t. higher\_vector\_derivative k f $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_neginfinity) \\
$\mathtt{\ }$\hspace{1.8cm} $\Rightarrow$ fourier\_transform ($\lambda$t. differential\_equation n lst f t) w = \\
$\mathtt{\ }$\hspace{1.3cm} fourier\_transform f w $\ast$ vsum (0..n) ($\lambda$k. EL k lst $\ast$ (ii $\ast$ Cx w) pow k)
}}}
\end{thm}
The set of assumptions of the above theorem is the same as that of the \textit{higher-order differentiation} property given in Table~\ref{TAB:properties_of_Fourier_transform}. The conclusion of Theorem~\ref{THM:fourier_transform_of_diff_equation} is the Fourier transform of an $n^{th}$-order linear differential equation.
The proof of the above theorem proceeds by induction on the variable $\texttt{n}$. The base case follows from simple arithmetic reasoning, and the step case is discharged using Theorem~\ref{THM:linearity_prop_four_exist}, the \textit{linearity} and \textit{higher-order differentiation} properties, along with the following important lemma about the Fourier existence of the differential equation.
\begin{lem}
\label{THM:fourier_existence_of_diff_equation}
\emph{Fourier Existence of Differential Equation} \\{\small
\textup{\texttt{$\vdash$ $\forall$ n lst f. fourier\_exists\_higher\_deriv n f \\
$\mathtt{\ }$\hspace{3.0cm} $\Rightarrow$ fourier\_exists ($\lambda$t. differential\_equation n lst f t)
}}}
\end{lem}
\section{Fourier Transform of Some Commonly Used Functions} \label{SEC:fourier_trans_comm_used_funs}
In this section, we present the Fourier transforms of some functions that are commonly used for the analysis of physical and engineering systems in various domains, e.g., signal processing, analog circuits, optical systems and communication systems.
\subsection{Fourier Transform of a Rectangular Pulse}
The rectangular pulse is characterized by having a constant value over a range of values on the time axis. It is represented by the following mathematical expression:
\begin{equation}\label{EQ:rect_pulse}
f (t) =
\begin{cases}
1, & |t| \leq T_1 \\
0, & \text{otherwise}
\end{cases}
\end{equation}
It has a constant value $1$ inside the interval $[-T_1, T_1]$, whereas it is $0$ outside this interval over the whole real line.
For the value of $T_1 = 0.5$, it is known as the unit gate function. The Fourier transform of the rectangular pulse is given by the following equation.
\begin{equation} \label{EQ:fourier_rect_pulse}
\begin{split}
F (\omega) & = 2 \textit{T}_1 \textit{sinc} (\omega \textit{T}_1) \\
& = 2 \textit{T}_1 \dfrac{\textit{sin} (\omega \textit{T}_1)}{\omega \textit{T}_1}
\end{split}
\end{equation}
\noindent where $\textit{sinc} (\omega \textit{T}_1)$ is the sinc function, i.e., the product of the sinusoidal function $sin (\omega \textit{T}_1)$ with the decaying factor $\dfrac{1}{\omega \textit{T}_1}$, which makes it a decaying sinusoidal function that approaches $0$ as $\omega \to \pm\infty$. It is also known as the interpolation or filtering function. We model the rectangular pulse and the sinc function using the following HOL-Light functions:
\begin{defn}
\label{DEF:rect_pulse}
\emph{Rectangular Pulse} \\{\small
\textup{\texttt{$\vdash$ rect\_pulse T1 = \\
$\mathtt{\ }$\hspace{1.7cm} ($\lambda$t. if t IN \{t | --$\mathtt{\underline{T1}}$ <= $\mathtt{\underline{t}}$ $\wedge$ $\mathtt{\underline{t}}$ <= $\mathtt{\underline{T1}}$\} \\
$\mathtt{\ }$\hspace{2.6cm} then Cx (\&1) \\
$\mathtt{\ }$\hspace{2.6cm} else Cx (\&0))
}}}
\end{defn}
\begin{defn}
\label{DEF:sinc_fun}
\emph{Sinc Function} \\{\small
\textup{\texttt{$\vdash$ sinc T1 w = csin (Cx w $\ast$ Cx $\mathtt{\underline{T1}}$) / (Cx w $\ast$ Cx $\mathtt{\underline{T1}}$)
}}}
\end{defn}
The Fourier transform of the rectangular pulse is represented as the following HOL-Light theorem:
\begin{thm}
\label{THM:fourier_rect_pulse}
\emph{Fourier Transform of Rectangular Pulse} \\{\small
\textup{\texttt{$\vdash$ $\forall$ T1 w. \&0 < $\mathtt{\underline{T1}}$ $\wedge$ $\sim$(w = \&0) \\
$\mathtt{\ }$\hspace{1.7cm} $\Longrightarrow$ fourier\_transform (rect\_pulse T1) w = \\
$\mathtt{\ }$\hspace{6.7cm} Cx (\&2) $\ast$ Cx $\mathtt{\underline{T1}}$ $\ast$ sinc T1 w
}}}
\end{thm}
We verified the above theorem using Definitions \ref{DEF:fourier_transform}, \ref{DEF:rect_pulse} and \ref{DEF:sinc_fun}, and the properties of the integration along with some arithmetic reasoning.
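Theorem~\ref{THM:fourier_rect_pulse} is easy to reproduce numerically, since the Fourier integral of the pulse is over the finite interval $[-T_1, T_1]$. The sketch below (illustrative only, assuming NumPy and SciPy) compares the quadrature result with $2 T_1\,\textit{sinc}(\omega T_1)$:

```python
import numpy as np
from scipy.integrate import quad

T1, w = 2.0, 1.3
# Fourier transform of the rectangular pulse: integrate e^{-i w t} over [-T1, T1]
re, _ = quad(lambda t: np.cos(w * t), -T1, T1)
im, _ = quad(lambda t: -np.sin(w * t), -T1, T1)
ft = re + 1j * im

sinc = np.sin(w * T1) / (w * T1)   # sinc(w*T1), as in the sinc definition above
assert abs(ft - 2 * T1 * sinc) < 1e-6
assert abs(ft.imag) < 1e-6         # the transform of this even pulse is real
```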
\subsection{Fourier Transform of Unilateral Negative Complex Exponential}
The unilateral negative complex exponential is given by the following mathematical expression:
\begin{equation}\label{EQ:one_sided_neg_cexp}
f (t) =
\begin{cases}
0, & t < 0 \\
e^{-ct}, & t \geq 0
\end{cases}
\end{equation}
\noindent where $c$ is a positive real constant, which makes the function $e^{-ct}$ an exponentially decaying function.
The Fourier transform of the unilateral negative complex exponential function is as below:
\begin{equation}\label{EQ:fourier_one_sided_neg_exp}
F (\omega) = \dfrac{1}{c + i \omega}
\end{equation}
We formally model the unilateral negative complex exponential by the following HOL-Light function:
\begin{defn}
\label{DEF:one_sided_neg_cexp}
\emph{Unilateral Negative Complex Exponential} \\{\small
\textup{\texttt{$\vdash$ unilat\_neg\_cexp c = \\
$\mathtt{\ }$\hspace{1.7cm} ($\lambda$t. if t IN \{t | \&0 <= $\mathtt{\underline{t}}$\} \\
$\mathtt{\ }$\hspace{2.6cm} then cexp (--Cx c $\ast$ Cx $\mathtt{\underline{t}}$) \\
$\mathtt{\ }$\hspace{2.6cm} else Cx (\&0))
}}}
\end{defn}
We verified the Fourier transform of the unilateral negative complex exponential as the following theorem.
\begin{thm}
\label{THM:fourier_one_sided_neg_exp}
\emph{Fourier Transform of Unilateral Negative Exponential} \\{\small
\textup{\texttt{$\vdash$ $\forall$ c w. \&0 < c $\wedge$ $\sim$(Cx c + ii $\ast$ Cx w = Cx (\&0)) \\
$\mathtt{\ }$\hspace{1.7cm} $\Longrightarrow$ fourier\_transform (unilat\_neg\_cexp c) w = \\
$\mathtt{\ }$\hspace{6.7cm} Cx (\&1) / (Cx c + ii $\ast$ Cx w)
}}}
\end{thm}
\subsection{Fourier Transform of Bilateral Complex Exponential}
The Fourier transform of the bilateral complex exponential is given by the following mathematical equation:
\begin{equation}\label{EQ:fourier_bilateral_exp}
\mathcal{F} [e^{-|t|}] = \dfrac{2}{1 + \omega^2}
\end{equation}
We verified its Fourier transform as the following theorem:
\begin{thm}
\label{THM:fourier_bilateral_exp}
\emph{Fourier Transform of Bilateral Complex Exponential} \\{\small
\textup{\texttt{$\vdash$ $\forall$ w. $\sim$(Cx (\&1) - ii $\ast$ Cx w = Cx (\&0)) $\wedge$ \\
$\mathtt{\ }$\hspace{0.88cm} $\sim$(Cx (\&1) + ii $\ast$ Cx w = Cx (\&0)) \\
$\mathtt{\ }$\hspace{1.7cm} $\Longrightarrow$ fourier\_transform ($\lambda$t. cexp (--Cx (abs $\mathtt{\underline{t}}$))) w = \\
$\mathtt{\ }$\hspace{6.7cm} Cx (\&2) / (Cx (\&1) + Cx w pow 2)
}}}
\end{thm}
The verification of the above theorem starts by rewriting with the definition of the Fourier transform. Next, we split the region of integration, i.e., $(-\infty, +\infty)$, into $(-\infty, 0]$ and $[0, +\infty)$, obtaining two integrals with the same integrand over the respective regions. We rewrite the resulting subgoal with the definition of the absolute value of a real number, i.e., $|t| = -t$ on $(-\infty, 0]$ and $|t| = t$ on $[0, +\infty)$. Finally, these two integrals are evaluated using
the properties of integration along with complex arithmetic reasoning to conclude the proof of Theorem~\ref{THM:fourier_bilateral_exp}.
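The same split at $t = 0$ can be mirrored numerically. The sketch below (illustrative only, assuming NumPy and SciPy) evaluates the two half-line integrals separately, recovering $1/(1 - i\omega)$ and $1/(1 + i\omega)$, whose sum is $2/(1 + \omega^2)$ as in Theorem~\ref{THM:fourier_bilateral_exp}:

```python
import numpy as np
from scipy.integrate import quad

w = 0.9
f = lambda t: np.exp(-abs(t))

# Mirror the proof: split the Fourier integral at t = 0
def piece(lo, hi):
    re, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).real, lo, hi)
    im, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).imag, lo, hi)
    return re + 1j * im

left = piece(-np.inf, 0.0)   # here |t| = -t, so the integrand is e^{t} e^{-i w t}
right = piece(0.0, np.inf)   # here |t| = t, so the integrand is e^{-t} e^{-i w t}

# left = 1/(1 - i w) and right = 1/(1 + i w); their sum is 2/(1 + w^2)
assert abs(left - 1 / (1 - 1j * w)) < 1e-6
assert abs(right - 1 / (1 + 1j * w)) < 1e-6
assert abs((left + right) - 2 / (1 + w**2)) < 1e-6
```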
\subsection{Fourier Transform of Finite Duration Sinusoidal Tone-burst}
The sinusoidal tone-burst occurring for a finite duration $-T_1$ to $T_1$ is mathematically defined as follows:
\begin{equation}\label{EQ:sine_tone_burst}
f (t) =
\begin{cases}
sin \omega_0 t, & |t| \leq T_1 \\
0, & \text{otherwise}
\end{cases}
\end{equation}
The Fourier transform of the above sinusoidal tone-burst is given by the following mathematical expression:
\begin{equation}\label{EQ:fourier_sine_tone_burst}
F (\omega) = - i T_1 \{ \textit{sinc} ((\omega - \omega_0) \textit{T}_1) - \textit{sinc} ((\omega + \omega_0) \textit{T}_1) \}
\end{equation}
The width of the frequency spectrum corresponding to the above sine wave is inversely proportional to its duration $2T_1$, i.e., an increase in the duration $2T_1$ results in a narrower frequency line spectrum and vice versa.
We defined Equation \ref{EQ:sine_tone_burst} as the following HOL-Light function:
\begin{defn}
\label{DEF:sine_tone_burst}
\emph{Finite Duration Sinusoidal Tone-burst} \\{\small
\textup{\texttt{$\vdash$ $\forall$ T1 w0. sine\_tone\_burst T1 w0 = \\
$\mathtt{\ }$\hspace{3.7cm} ($\lambda$t. csin (Cx w0 $\ast$ Cx $\mathtt{\underline{t}}$) $\ast$ rect\_pulse T1 t)
}}}
\end{defn}
The Fourier transform of the above sinusoidal tone-burst is represented as the following theorem:
\begin{thm}
\label{THM:fourier_sine_tone_burst}
\emph{Fourier Transform of Finite Duration Sinusoidal Tone-burst} \\{\small
\textup{\texttt{$\vdash$ $\forall$ T1 w w0. \&0 < $\mathtt{\underline{T1}}$ $\wedge$ \\
$\mathtt{\ }$\hspace{1.88cm} $\sim$(w - w0 = \&0) $\wedge$ $\sim$(w + w0 = \&0) \\
$\mathtt{\ }$\hspace{2.4cm} $\Longrightarrow$ fourier\_transform (sine\_tone\_burst T1 w0) w = \\
$\mathtt{\ }$\hspace{2.8cm} --ii $\ast$ Cx $\mathtt{\underline{T1}}$ $\ast$ (sinc T1 (w - w0) - sinc T1 (w + w0))
}}}
\end{thm}
We start the verification of the above theorem by rewriting with Definitions \ref{DEF:fourier_transform}, \ref{DEF:sine_tone_burst} and \ref{DEF:rect_pulse}, and the definition of the complex sine $\texttt{csin}$ (Definition \ref{DEF:exp_ccos_csine}), which results in a subgoal containing the vector integral of a linear combination of $\texttt{cexp}$ functions over the interval $[-T_1, T_1]$; this subgoal is verified using the properties of integration along with arithmetic reasoning.
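Since the burst is supported on the finite interval $[-T_1, T_1]$, Theorem~\ref{THM:fourier_sine_tone_burst} can be checked by direct quadrature. The sketch below (illustrative only, assuming NumPy and SciPy) compares the numeric Fourier integral with the two-sinc closed form:

```python
import numpy as np
from scipy.integrate import quad

T1, w0, w = 1.5, 2.0, 0.8
f = lambda t: np.sin(w0 * t)    # the burst is supported on [-T1, T1]

re, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).real, -T1, T1)
im, _ = quad(lambda t: (f(t) * np.exp(-1j * w * t)).imag, -T1, T1)
ft = re + 1j * im

sinc = lambda x: np.sin(x) / x
# F(w) = -i T1 (sinc((w - w0) T1) - sinc((w + w0) T1))
expected = -1j * T1 * (sinc((w - w0) * T1) - sinc((w + w0) * T1))
assert abs(ft - expected) < 1e-6
```

As expected for an odd real signal, the transform is purely imaginary.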
\subsection{Fourier Transform of Damped Unilateral Sinusoidal Function}
The damped unilateral sinusoidal function is the product of a decaying exponential function with a periodic sinusoidal function and is given as below:
\begin{equation}\label{EQ:damped_one_sided_sine}
f (t) =
\begin{cases}
0, & t < 0 \\
e^{-ct} sin \omega_0 t , & t \geq 0
\end{cases}
\end{equation}
\noindent where $c$ is a positive real constant. The Fourier transform of the damped unilateral sinusoidal function is represented by the following equation:
\begin{equation}\label{EQ:fourier_damped_one_sided_sine}
F (\omega) = \dfrac{\omega_0}{(c + i \omega)^2 + {\omega_0}^2}
\end{equation}
We formalized Equation \ref{EQ:damped_one_sided_sine} as the following HOL-Light function:
\begin{defn}
\label{DEF:damped_one_sided_sine}
\emph{Damped Unilateral Sinusoidal Function} \\{\small
\textup{\texttt{$\vdash$ damped\_unilat\_sine c w0 = \\
$\mathtt{\ }$\hspace{1.5cm} ($\lambda$t. if t IN \{t | \&0 <= $\mathtt{\underline{t}}$\} \\
$\mathtt{\ }$\hspace{2.5cm} then cexp (--Cx c $\ast$ Cx $\mathtt{\underline{t}}$) $\ast$ csin (Cx w0 $\ast$ Cx $\mathtt{\underline{t}}$) \\
$\mathtt{\ }$\hspace{2.5cm} else Cx (\&0))
}}}
\end{defn}
Its Fourier transform has been verified as the following theorem:
\begin{thm}
\label{THM:damped_one_sided_sine}
\emph{Fourier Transform of Damped Unilateral Sinusoidal Function} \\{\small
\textup{\texttt{$\vdash$ $\forall$ c w w0. \&0 < c $\wedge$ \\
$\mathtt{\ }$\hspace{1.70cm} $\sim$(Cx c + ii $\ast$ Cx (w - w0) = Cx (\&0)) $\wedge$ \\
$\mathtt{\ }$\hspace{1.70cm} $\sim$(Cx c + ii $\ast$ Cx (w + w0) = Cx (\&0)) $\wedge$ \\
$\mathtt{\ }$\hspace{1.70cm} $\sim$((Cx c + ii $\ast$ Cx w) pow 2 + Cx w0 pow 2 = Cx (\&0)) \\
$\mathtt{\ }$$\mathtt{\ }$\hspace{2.8cm} $\Longrightarrow$ fourier\_transform (damped\_unilat\_sine c w0) w = \\
$\mathtt{\ }$\hspace{5.0cm} Cx w0 / ((Cx c + ii $\ast$ Cx w) pow 2 + Cx w0 pow 2)
}}}
\end{thm}
The proof process of the above theorem involves Definitions \ref{DEF:fourier_transform} and \ref{DEF:damped_one_sided_sine} along with some properties about the
complex exponential functions.
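Theorem~\ref{THM:damped_one_sided_sine} can likewise be confirmed numerically. The sketch below (illustrative only, assuming NumPy and SciPy) integrates $e^{-ct}\sin(\omega_0 t)\,e^{-i\omega t}$ over $[0, \infty)$ and compares the result with the closed form $\omega_0 / ((c + i\omega)^2 + \omega_0^2)$:

```python
import numpy as np
from scipy.integrate import quad

c, w0, w = 1.0, 2.0, 0.5
# f(t) = e^{-c t} sin(w0 t) for t >= 0; it vanishes for t < 0,
# so the Fourier integral reduces to [0, oo)
g = lambda t: np.exp(-c * t) * np.sin(w0 * t) * np.exp(-1j * w * t)
re, _ = quad(lambda t: g(t).real, 0, np.inf)
im, _ = quad(lambda t: g(t).imag, 0, np.inf)
ft = re + 1j * im

expected = w0 / ((c + 1j * w)**2 + w0**2)
assert abs(ft - expected) < 1e-6
```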
This completes our formalization of Fourier transform in HOL-Light.
The source code of our formalization is available for download~\citep{adnan16contsystemfouriertrans} and can be utilized for further development and the analysis of continuous-time systems.
\section{Formal Analysis of a Generic n-order System} \label{SEC:formal_analysis_generic_n_order_sys}
In this section, we present the formal modeling and the frequency response analysis of a generic n-order Linear Time Invariant (LTI) system.
A generic n-order LTI system~\citep{adams2012continuous} relates an input signal $x(t)$ to the output signal $y(t)$, and its dynamics are modeled using a higher-order differential equation. Due to the generic nature of this differential-equation based model and its corresponding frequency response analysis, it can be used for the modeling and frequency analysis of any real-world application, which considerably eases the formal reasoning based analysis of these systems, as illustrated in the next section. Fig.~\ref{FIG:nth_order_system} provides the block diagram representation of a generic $n$-order system, which is primarily composed of
the addition, the scalar multiplication and the integration operations~\citep{girod2001signals}.
\begin{figure}[ht!]
\centering
\scalebox{0.30}
{\includegraphics[trim={0 0.0cm 0 0.0cm},clip]{nth_order_system.pdf}}
\caption{Block Diagram Representation of a Generic $n$-order System}
\label{FIG:nth_order_system}
\end{figure}
The generalized linear differential equation, with constant coefficient, describing the input-output relationship for this generic $n$-order system is mathematically expressed as~\citep{adams2012continuous}:
\begin{equation}\label{EQ:diff_eqn_nth_order_LTI_sys}
\sum _{k = 0}^{n} {{\beta}_k \dfrac{d^k}{{dt}^k} y(t)} = \sum _{k = 0}^{m} {{\alpha}_k \dfrac{d^k}{{dt}^k} x(t)}, \ \ \ \ m \leq n
\end{equation}
\noindent where $y(t)$ in the above equation is the output and $x(t)$ is the input to the system. The constants $\alpha_k$ and $\beta_k$ are the coefficients of the input and the output differentials of order $k$, respectively. The greatest index $n$ of the non-zero coefficient $\beta_n$ determines the order of the underlying system. The corresponding frequency response of the system is given by the following mathematical expression:
\begin{equation}\label{EQ:freq_res_nth_order_LTI_sys}
\dfrac{Y(\omega)}{X(\omega)} = \dfrac{\sum_{k = 0}^{m} {\alpha_k (i\omega)^k}}{\sum_{k = 0}^{n} {\beta_k (i\omega)^k}}
\end{equation}
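To make Equation~\ref{EQ:freq_res_nth_order_LTI_sys} concrete (this is an informal Python sketch, not part of the HOL-Light development), the frequency response can be evaluated directly from the two coefficient lists; the first-order low-pass example and its coefficient values below are illustrative assumptions.

```python
def freq_response(alphas, betas, w):
    """H(w) = (sum_k alpha_k (i w)^k) / (sum_k beta_k (i w)^k)."""
    iw = 1j * w
    num = sum(a * iw ** k for k, a in enumerate(alphas))
    den = sum(b * iw ** k for k, b in enumerate(betas))
    return num / den

# Example: first-order low-pass filter wc*y(t) + y'(t) = wc*x(t),
# i.e. betas = [wc, 1] and alphas = [wc], giving H(w) = wc / (wc + i w).
wc = 10.0
H0 = freq_response([wc], [wc, 1.0], 0.0)   # DC gain, expected 1
Hc = freq_response([wc], [wc, 1.0], wc)    # gain at the cutoff, |Hc| = 1/sqrt(2)
```

The same two-list structure (input coefficients and output coefficients) is what the HOL-Light functions below manipulate symbolically.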
In order to verify the above frequency response of the given system, we first model the corresponding differential equation
as the following HOL-Light function:
\begin{defn}
\label{DEF:diff_eqn_nth_order_lti_system}
\emph{Differential Equation of $n$-order LTI System} \\{\small
\textup{\texttt{$\vdash$ $\forall$ n outlst y m inlst x t. diff\_eq\_n\_order\_sys m n inlst outlst x y t $\Leftrightarrow$ \\
$\mathtt{\ }$\hspace{1.2cm} differential\_equation n outlst y t = differential\_equation m inlst x t
}}}
\end{defn}
Next, we verified the frequency response, given in Equation~\ref{EQ:freq_res_nth_order_LTI_sys}, of the generic n-order system as the following HOL-Light theorem.
\begin{thm}
\label{THM:freq_response_nth_order_lti_system}
\emph{Frequency Response of $n$-order LTI System} \\{\small
\textup{\texttt{$\vdash$ $\forall$ y x m n inlst outlst w. \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$t. differentiable\_higher\_derivative n y t) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$t. differentiable\_higher\_derivative m x t) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} fourier\_exists\_higher\_deriv n y $\wedge$ fourier\_exists\_higher\_deriv m x $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$k. k < n $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.0cm}(($\lambda$t. higher\_vector\_derivative k y $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_posinfinity) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$k. k < n $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.0cm}(($\lambda$t. higher\_vector\_derivative k y $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_neginfinity) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$k. k < m $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.0cm}(($\lambda$t. higher\_vector\_derivative k x $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_posinfinity) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$k. k < m $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.0cm}(($\lambda$t. higher\_vector\_derivative k x $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_neginfinity) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$t. diff\_eq\_n\_order\_sys m n inlst outlst x y t) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} $\sim$(fourier\_transform x w = Cx (\&0)) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} $\sim$(vsum (0..n) ($\lambda$t. Cx (EL t outlst) $\ast$ (ii $\ast$ Cx w) pow t) = Cx (\&0)) \\
$\mathtt{\ }$\hspace{2.2cm} $\Rightarrow$ fourier\_transform y w / fourier\_transform x w = \\
$\mathtt{\ }$\hspace{3.05cm} vsum (0..m) ($\lambda$t. Cx (EL t inlst) $\ast$ (ii $\ast$ Cx w) pow t) / \\
$\mathtt{\ }$\hspace{3.05cm} vsum (0..n) ($\lambda$t. Cx (EL t outlst) $\ast$ (ii $\ast$ Cx w) pow t)
}}}
\end{thm}
The first two assumptions ensure that the functions $\texttt{y}$ and $\texttt{x}$ are differentiable up to the $n^{th}$ and $m^{th}$ order, respectively.
The next assumption represents the Fourier transform existence condition up to the $n^{th}$ order derivatives of the function $\texttt{y}$.
Similarly, the next assumption ensures that the Fourier transform exists up to the $m^{th}$ order derivative of the function $\texttt{x}$. The next two assumptions represent the condition $ \lim\limits_{t \to \pm\infty} {y^{(k)} (t)} = 0 $ for all $ k = 0, 1, ... , n - 1 $, i.e., $ \lim\limits_{t \to \pm\infty} {y^{(n - 1)} (t)} = 0 $, ... , $ \lim\limits_{t \to \pm\infty} {y^{(0)} (t)} = \lim\limits_{t \to \pm\infty} y(t) = 0 $, where $y^{(k)}(t)$ is the $k^{th}$ derivative of $\texttt{y}$ with respect to $\texttt{t}$. The next two assumptions provide the condition $ \lim\limits_{t \to \pm\infty} {x^{(k)} (t)} = 0 $ for each $ k = 0, 1, ... , m - 1 $.
The next assumption represents the formalization of Equation \ref{EQ:diff_eqn_nth_order_LTI_sys} and the last two assumptions provide some interesting design related relationships, which must hold for constructing a reliable continuous-time system. Finally, the conclusion of the above theorem represents the frequency response given by Equation \ref{EQ:freq_res_nth_order_LTI_sys}. The proof of Theorem \ref{THM:freq_response_nth_order_lti_system} was very straightforward and mainly based on Theorem~\ref{THM:fourier_transform_of_diff_equation}, along with some arithmetic reasoning, thanks to our foundational formalization presented in the previous sections. The verification of this theorem is very useful as it greatly simplifies the verification of the frequency response of any real-world application as illustrated in the next section.
\section{Applications} \label{SEC:applications}
In this section, to illustrate the utilization of our foundational formalization for analyzing real-world continuous systems, we present a formal analysis of an audio equalizer and a MEMs accelerometer. To the best of our knowledge, these systems could not have been verified while capturing their continuous behavior in the true form by any other existing computer-based analysis technique.
\subsection{Formal Analysis of an Audio Equalizer}\label{SUBSEC:audio_equalizer}
An audio equalizer~\citep{tan2007fundamentals} is an electronic circuit that adjusts the balance between different frequency components within an audio signal. The block diagram of a 3-channel audio equalizer is illustrated in Fig.~\ref{FIG:audio_equalizer}.
\begin{figure}[ht!]
\centering
\scalebox{0.26}
{\includegraphics[trim={0 0.0cm 0 0.0cm},clip]{audio_equalizer.pdf}}
\caption{Block Diagram of Audio Equalizer}
\label{FIG:audio_equalizer}
\end{figure}
\noindent It mainly consists of three different filters, namely low-pass, high-pass and bandpass, each allowing a certain range of frequencies to pass. The low-pass filter passes signals with frequencies lower than the cutoff frequency ($\omega_c = 2\pi f_c$), whereas the high-pass filter passes signals with frequencies higher than the cutoff frequency, and the bandpass filter passes only signals with frequency components in a certain range, as shown in Figure~\ref{FIG:audio_equalizer}. After each filtering stage, some signal amplification with gain ($g_i$) is applied in order to enhance the quality of the signal. Since the filters are the major components of an audio equalizer, we verify the frequency response of each individual filter. Due to space restrictions, we only present the formal verification of the frequency response of the bandpass filter here; the verification of the rest of the filters can be found in the proof script~\citep{adnan16contsystemfouriertrans}.
In order to verify the frequency response of the bandpass filter, we first model its corresponding differential equation, which is given by the following HOL-Light function:
\begin{defn}
\label{DEF:diff_equation_bandpass_filter}
\emph{Differential Equation of Bandpass Filter} \\{\small
\textup{\texttt{$\vdash$ $\forall$ wc. outlst\_de\_bpf wc = [wc pow 2; \&2 $\ast$ wc; \&1]
}}} \\
{\small
\textup{\texttt{$\vdash$ $\forall$ wc. inlst\_de\_bpf wc = [\&0; wc]
}}} \\
{\small
\textup{\texttt{$\vdash$ diff\_eq\_BP\_FILTER inlst\_de\_bpf outlst\_de\_bpf x y t wc $\Leftrightarrow$ \\
$\mathtt{\ }$\hspace{2.25cm} differential\_equation 2 (outlst\_de\_bpf wc) y t = \\
$\mathtt{\ }$\hspace{2.25cm} differential\_equation 1 (inlst\_de\_bpf wc) x t
}}}
\end{defn}
\noindent where the function \texttt{diff\_eq\_BP\_FILTER} accepts the function variables \texttt{x} and \texttt{y} and the lists of coefficients (\texttt{inlst\_de\_bpf} and \texttt{outlst\_de\_bpf}) and returns the corresponding differential equation of the bandpass filter.
Next, the frequency response of the bandpass filter is mathematically expressed as:
\begin{equation}\label{EQ:freq_res_audio_equalizer}
\begin{split}
\frac{Y(\omega)}{X (\omega)} & = \dfrac{\omega_c}{i\omega + \omega_c} \times \dfrac{i\omega}{i\omega + \omega_c} \\
& = \dfrac{\omega_c(i\omega)}{{(i\omega})^2 + 2\omega_c(i\omega) + (\omega_c)^2}
\end{split}
\end{equation}
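The algebraic step in Equation~\ref{EQ:freq_res_audio_equalizer} (cascading the low-pass and high-pass stages into one second-order form) can be sanity-checked numerically; this Python sketch is informal and independent of the HOL-Light proof, and the cutoff value and sample frequencies are arbitrary.

```python
wc = 5.0
max_dev = 0.0
for w in (0.1, 1.0, 5.0, 50.0):
    iw = 1j * w
    lowpass = wc / (iw + wc)        # first-order low-pass stage
    highpass = iw / (iw + wc)       # first-order high-pass stage
    combined = (wc * iw) / (iw ** 2 + 2 * wc * iw + wc ** 2)
    # The cascade of the two stages must equal the combined form.
    max_dev = max(max_dev, abs(lowpass * highpass - combined))

# At w = wc each stage contributes 1/sqrt(2), so the cascade peaks at 1/2.
peak_gain = abs((wc * 1j * wc) / ((1j * wc) ** 2 + 2 * wc * 1j * wc + wc ** 2))
```

The peak gain of $1/2$ at $\omega = \omega_c$ reflects that each first-order stage attenuates by $1/\sqrt{2}$ at the cutoff.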
\noindent We verified the above frequency response as the following HOL-Light theorem:
\begin{thm}
\label{THM:freq_response_audio_equalizer_3}
\emph{Frequency Response of Bandpass Filter} \\{\small
\textup{\texttt{$\vdash$ $\forall$ y x w wc. \&0 < wc $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$t. differentiable\_higher\_derivative 2 y t) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$t. differentiable\_higher\_derivative 1 x t) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} fourier\_exists\_higher\_deriv 2 y $\wedge$ fourier\_exists\_higher\_deriv 1 x $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$k. k < 2 $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.0cm}(($\lambda$t. higher\_vector\_derivative k y $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_posinfinity) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$k. k < 2 $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.0cm}(($\lambda$t. higher\_vector\_derivative k y $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_neginfinity) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} (($\lambda$t. x $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_posinfinity $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} (($\lambda$t. x $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_neginfinity $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$t. diff\_eq\_BP\_FILTER inlst\_de\_bpf outlst\_de\_bpf x y t wc) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} $\sim$(fourier\_transform x w = Cx (\&0)) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} $\sim$((ii $\ast$ Cx w) pow 2 + Cx (\&2) $\ast$ Cx wc $\ast$ ii $\ast$ Cx w + Cx wc pow 2 = Cx (\&0)) \\
$\mathtt{\ }$\hspace{0.6cm} $\Rightarrow$ fourier\_transform y w / fourier\_transform x w = \\
$\mathtt{\ }$\hspace{1.5cm} Cx wc $\ast$ ii $\ast$ Cx w / \\
$\mathtt{\ }$\hspace{1.9cm} ((ii $\ast$ Cx w) pow 2 + Cx (\&2) $\ast$ Cx wc $\ast$ ii $\ast$ Cx w + Cx wc pow 2)
}}}
\end{thm}
The first assumption ensures that the variable corresponding to the cutoff frequency ($\texttt{wc}$) cannot be negative or zero. The next two assumptions ensure that the functions $\texttt{y}$ and $\texttt{x}$ are differentiable up to the second and first order, respectively. The next two assumptions represent the Fourier transform existence condition up to the second and first order derivatives of the functions $\texttt{y}$ and $\texttt{x}$, respectively. The next two assumptions represent the condition $ \lim\limits_{t \to \pm\infty} {y^{(k)} (t)} = 0 $ for each $k = 0, 1$.
The next two assumptions provide the condition $\lim\limits_{t \to \pm\infty} x(t) = 0 $.
The next assumption represents the formalization of the corresponding differential equation and the last two assumptions provide some interesting design related relationships, which must hold for constructing a reliable bandpass filter. Finally, the conclusion of the above theorem represents the frequency response given by Equation \ref{EQ:freq_res_audio_equalizer}. The proof is based on Theorem~\ref{THM:freq_response_nth_order_lti_system}, along with some arithmetic reasoning.
\subsection{Formal Analysis of MEMs Accelerometer}\label{SUBSEC:mems_accele}
An accelerometer is an electromechanical device that is used for the measurement of both static and dynamic accelerations, i.e., the acceleration due to gravity anywhere on the earth and the acceleration due to the motion or vibration of an object. It uses sensors, which in turn make use of environmental physical parameters, i.e., pressure, temperature, light and force.
Micro-Electro-Mechanical Systems (MEMs) based accelerometers~\citep{kaajakari2009practical} are widely used. They are small in size and consume little power, and due to these features they are integrated in a variety of applications, such as aircraft~\citep{kuznetsov2011development}, airbag deployment~\citep{galvin2001microelectromechanical}, robotic telepresence~\citep{hung2004telepresence}, handheld computing gadgets~\citep{fennelly2012thermal}, natural disaster measurement devices~\citep{hsieh2014low} and automated external defibrillators (AEDs)~\citep{eggers2016wearable}.
Due to their wide usage in the safety critical domains, the accuracy of their frequency response analysis is of utmost importance. A typical MEMs accelerometer is depicted in
Fig.~\ref{FIG:mems_accelerometer_diagram}
whereas its mechanical lumped model~\citep{haykin2007signals} is illustrated in Fig.~\ref{FIG:mems_accelerometer}.
\begin{figure}[H]
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.6\linewidth]{MEMS-Accelerometer_2.jpg}
\caption{\hspace*{3.1cm}(a)}
\label{FIG:mems_accelerometer_diagram}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.6\linewidth]{mems_accelerometer.pdf}
\caption{\hspace*{2.5cm}(b)}
\label{FIG:mems_accelerometer}
\end{subfigure}
\caption{MEMs Accelerometer (a) Design (b) Mechanical Lumped Model}
\label{FIG:mems_accelerometer_total}
\end{figure}
The differential equation modeling the dynamical behaviour of the MEMs accelerometer can be expressed as~\citep{haykin2007signals}:
\begin{equation}\label{EQ:diff_eq_mems_accelerometer}
\dfrac{d^2y(t)}{dt^2} + \dfrac{D}{M}\dfrac{dy(t)}{dt} + \dfrac{K}{M} y(t) = u(t),
\end{equation}
In the above equation, $M$ is the proof mass, $K$ is the effective spring constant and $D$ represents the damping factor, which affects the dynamic movement of the proof mass as shown in Figure~\ref{FIG:mems_accelerometer}. All of these are design parameters of the underlying system and can take positive values only. Similarly, $u(t)$ is the external acceleration due to the motion of the proof mass, whereas $y(t)$ is the displacement of the corresponding mass.
The corresponding frequency response of the MEMs accelerometer is given as follows:
\begin{equation}\label{EQ:freq_res_mems_accelerometer}
\frac{Y(\omega)}{U (\omega)} = \dfrac{1}{{(i\omega})^2 + \dfrac{D}{M}(i\omega) + \dfrac{K}{M}}
\end{equation}
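As an informal illustration of Equation~\ref{EQ:freq_res_mems_accelerometer} (outside the HOL-Light development), the sketch below evaluates the response for illustrative parameter values; the numbers for $M$, $K$ and $D$ are assumptions, not taken from any particular device.

```python
M = 1e-9   # proof mass [kg]         (illustrative value)
K = 0.1    # spring constant [N/m]   (illustrative value)
D = 1e-6   # damping factor [N*s/m]  (illustrative value)

def H(w):
    """Displacement per unit input acceleration at angular frequency w."""
    iw = 1j * w
    return 1.0 / (iw ** 2 + (D / M) * iw + (K / M))

dc_gain = abs(H(0.0))     # static sensitivity; from the formula this is M / K
wn = (K / M) ** 0.5       # undamped natural frequency [rad/s]
```

The static sensitivity $M/K$ and the natural frequency $\sqrt{K/M}$ follow directly from the denominator of the verified frequency response, which is why the positivity assumptions on $M$, $K$ and $D$ appear in the theorem below.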
In order to verify its frequency response, we first model the corresponding differential equation as the following HOL-Light function:
\begin{defn}
\label{DEF:diff_equation_mems_accelerometer}
\emph{Differential Equation of MEMs Accelerometer} \\{\small
\textup{\texttt{$\vdash$ $\forall$ K D M. outlst\_de\_ma K D M = [K / M; D / M; \&1]
}}} \\
{\small
\textup{\texttt{$\vdash$ inlst\_de\_ma = [\&1]
}}} \\
{\small
\textup{\texttt{$\vdash$ diff\_eq\_MEMs\_ACC inlst\_de\_ma outlst\_de\_ma u y t K D M $\Leftrightarrow$ \\
$\mathtt{\ }$\hspace{2.25cm} differential\_equation 2 (outlst\_de\_ma K D M) y t = \\
$\mathtt{\ }$\hspace{2.25cm} differential\_equation 0 inlst\_de\_ma u t
}}}
\end{defn}
\noindent where the function $\texttt{diff\_eq\_MEMs\_ACC}$ accepts the function variables $\texttt{u}$ and $\texttt{y}$ and the lists of coefficients ($\texttt{inlst\_de\_ma}$ and $\texttt{outlst\_de\_ma}$) and returns the corresponding differential equation of the MEMs accelerometer.
Next, we verify its frequency response as the following theorem in HOL-Light:
\begin{flushleft}
\begin{thm}
\label{THM:freq_response_mems_accelerometer}
\emph{Frequency Response of MEMs Accelerometer} \\{\small
\textup{\texttt{$\vdash$ $\forall$ y u w K D M. \&0 < M $\wedge$ \&0 < D $\wedge$ \&0 < K $\wedge$\\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$t. differentiable\_higher\_derivative 2 y t) $\wedge$ ($\forall$t. u differentiable at t) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} fourier\_exists\_higher\_deriv 2 y $\wedge$ fourier\_exists u $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$k. k < 2 $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.0cm}(($\lambda$t. higher\_vector\_derivative k y $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_posinfinity) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$k. k < 2 $\Rightarrow$ \\
$\mathtt{\ }$\hspace{1.0cm}(($\lambda$t. higher\_vector\_derivative k y $\mathtt{\overline{t}}$) $\rightarrow$ vec 0) at\_neginfinity) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} ($\forall$t. diff\_eq\_MEMs\_ACC inlst\_de\_ma outlst\_de\_ma u y t K D M) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} $\sim$(fourier u w = Cx (\&0)) $\wedge$ \\
$\mathtt{\ }$\hspace{0.0cm} $\sim$((ii $\ast$ Cx w) pow 2 + Cx (D / M) $\ast$ ii $\ast$ Cx w + Cx (K / M) = Cx (\&0)) \\
$\mathtt{\ }$\hspace{0.2cm} $\Rightarrow$ fourier y w / fourier u w = \\
$\mathtt{\ }$\hspace{1.5cm} Cx (\&1) / ((ii $\ast$ Cx w) pow 2 + Cx (D / M) $\ast$ ii $\ast$ Cx w + Cx (K / M))
}}}
\end{thm}
\end{flushleft}
The first three assumptions ensure that the variables corresponding to proof mass ($\texttt{M}$), spring constant ($\texttt{K}$) and damping factor ($\texttt{D}$) cannot be negative or zero. The next assumption ensures that the function $\texttt{y}$ is differentiable up to the second order. Similarly, the next assumption represents the differentiability condition for the function $\texttt{u}$. The next assumption represents the Fourier transform existence condition up to the second order derivatives of the function $\texttt{y}$. Similarly, the next assumption provides the Fourier transform existence condition of the function $\texttt{u}$. The next two assumptions represent the condition $ \lim\limits_{t \to \pm\infty} {y^{(k)} (t)} = 0 $ for each $ k = 0, 1 $, i.e., $ \lim\limits_{t \to \pm\infty} {y^{(1)} (t)} = 0 $ and $ \lim\limits_{t \to \pm\infty} {y^{(0)} (t)} = \lim\limits_{t \to \pm\infty} y(t) = 0 $, where $y^{(k)}$ is the $k^{th}$ derivative of $\texttt{y}$.
The next assumption represents the formalization of Equation \ref{EQ:diff_eq_mems_accelerometer} and the last two assumptions provide some interesting design related relationships, which must hold for constructing a reliable MEMs accelerometer. Finally, the conclusion of the above theorem represents the frequency response, given by Equation \ref{EQ:freq_res_mems_accelerometer}. The proof is based on Theorem~\ref{THM:freq_response_nth_order_lti_system} along with some arithmetic reasoning.
Besides the above-mentioned audio equalizer and a MEMs based accelerometer applications, we also used the proposed formalization to formally verify the frequency response of a drug therapy model, which can be useful towards finding out a particular amount of dosage of drug lidocaine that has to be supplied to a particular person having the Ventricular arrhythmia and the details about its verification can be found in proof script~\citep{adnan16contsystemfouriertrans}.
\section{Discussion} \label{SEC:discussion}
The distinguishing feature of our proposed formalization as compared to traditional analysis methods is that all of the verified theorems are of a generic nature, i.e., all of the variables and functions are universally quantified and can thus be specialized in order to obtain the results for some given values. Moreover, all of the required assumptions are guaranteed to be explicitly mentioned along with the theorem due to the inherent soundness of the theorem proving approach. Similarly, the verification of the frequency response described in Section~\ref{SEC:formal_analysis_generic_n_order_sys} is for a generic $n$-order system and can be specialized in order to formally analyze any real-world system, as presented in Section~\ref{SEC:applications}. In contrast, in computer simulation techniques, we have to model each of the systems individually.
Moreover, the high expressiveness of the higher-order logic enables us to model the differential equation and the corresponding frequency response in their true continuous form, whereas, in model checking they are mostly discretized and modeled using a state-transition system, which may compromise the accuracy of the analysis.
The above-mentioned formalization is done interactively.
However, we tried to automate the proof process by developing some simplification tactics. We developed a tactic \texttt{ASM\_REAL\_SIMP\_TAC}, which simplifies an expression involving real arithmetic using all of the assumptions of the theorem. We also developed a tactic \texttt{ASM\_COMPLEX\_SIMP\_TAC}, which simplifies a complex expression involving arithmetic operations using all of the assumptions of a theorem. We developed some more simplification tactics that can be found in our proof script~\citep{adnan16contsystemfouriertrans}.
The major difficulty faced during the formalization was the unavailability of detailed proofs for the properties of the Fourier transform in the literature. The available paper-and-pencil based proofs were found to be very abstract and missing the complete reasoning about the steps. The other challenge in the reported formalization was that some of the assumptions of the properties of the Fourier transform were not mentioned in the literature: in the case of the first-order and higher-order differentiation properties, Assumptions $4$ and $5$ of the \textit{First-order Derivative} theorem and Assumptions $3$ and $4$ of the \textit{Higher-order Derivative} theorem, both presented in Table~\ref{TAB:properties_of_Fourier_transform}, were absent from most of the analysis books.
The effort involved in the verification of individual theorem in the form of proof lines and the man-hours is presented in Table~\ref{TAB:verification_details_each_thm}.
\begin{table}[h]
\centering
\captionsetup{justification=centering}
\caption{Verification Detail for Each Theorem}
\label{TAB:verification_details_each_thm}
\resizebox{1.0\textwidth}{!}{\begin{minipage}{\textwidth}
{\renewcommand{\arraystretch}{1.005}
\begin{tabular}{p{10.4cm} p{0.8cm} p{0.8cm}}
\hline\hline
Formalized Theorems & Proof Lines & Man-hours \\ \hline \hline
Theorem~\ref{THM:prop_01_integrable_univ} (Integrability of Improper Integral) & 880 & 87 \\ \hline
Table~\ref{TAB:properties_of_Fourier_transform}: Linearity & 115 & 13 \\ \hline
Table~\ref{TAB:properties_of_Fourier_transform}: Time Shifting & 190 & 22 \\ \hline
Table~\ref{TAB:properties_of_Fourier_transform}: Frequency Shifting & 24 & 4 \\ \hline
Table~\ref{TAB:properties_of_Fourier_transform}: Modulation & 98 & 11 \\ \hline
Table~\ref{TAB:properties_of_Fourier_transform}: Time Scaling and Time Reversal & 200 & 25 \\ \hline
Table~\ref{TAB:properties_of_Fourier_transform}: First Order Differentiation in Time Domain & 355 & 29 \\ \hline
Table~\ref{TAB:properties_of_Fourier_transform}: Higher Order Differentiation in Time Domain & 110 & 15 \\ \hline
Table~\ref{TAB:properties_of_Fourier_transform}: Area under a function & 12 & 1 \\ \hline
Theorems~\ref{THM:fourier_even_function} and~\ref{THM:fourier_odd_function} (Relationship with Fourier Cosine and Fourier Sine Transforms) & 296 & 35 \\ \hline
Theorem~\ref{THM:relation_fourier_laplace} (Relationship with Laplace Transform) & 162 & 26 \\ \hline
Theorem~\ref{THM:fourier_transform_of_diff_equation} (Fourier Transform of Differential Equation of Order \textit{n}) & 170 & 25 \\ \hline
Theorems~\ref{THM:fourier_rect_pulse},~\ref{THM:fourier_one_sided_neg_exp},~\ref{THM:fourier_bilateral_exp},~\ref{THM:fourier_sine_tone_burst} and~\ref{THM:damped_one_sided_sine} (Fourier Transform of Some Commonly Used Functions) & 275 & 35 \\ \hline
Theorem~\ref{THM:freq_response_nth_order_lti_system} (Frequency Response of n-order LTI System) & 65 & 12 \\ \hline
Theorem~\ref{THM:freq_response_audio_equalizer_3} (Frequency Response of an Audio Equalizer) & 90 & 10 \\ \hline
Theorem~\ref{THM:freq_response_mems_accelerometer} (Frequency Response of MEMs Accelerator) & 63 & 6 \\ \hline
\end{tabular}
}
\end{minipage}}
\end{table}
\noindent The proof process for the formal verification of Theorems~\ref{THM:freq_response_nth_order_lti_system},~\ref{THM:freq_response_audio_equalizer_3} and~\ref{THM:freq_response_mems_accelerometer} took only 218 lines and 28 man-hours and was very simple and straightforward compared to the reasoning process for Theorem~\ref{THM:prop_01_integrable_univ}, the \textit{first-order differentiation} and \textit{higher-order differentiation} properties presented in Table~\ref{TAB:properties_of_Fourier_transform} and Theorem~\ref{THM:fourier_transform_of_diff_equation}, which involved more effort and user interaction. This clearly illustrates the benefits of our foundational formalization, presented in Section~\ref{SEC:Formal_verif_Fourier_properties} of this paper. Moreover, the man-hours are calculated based on two factors: the number of lines of code per hour by a person with average expertise, and the difficulty of the proof. For example, the proof lines for Theorems~\ref{THM:freq_response_nth_order_lti_system} and~\ref{THM:freq_response_mems_accelerometer} are almost the same, whereas the man-hours for the two theorems differ, i.e., the man-hours for Theorem~\ref{THM:freq_response_nth_order_lti_system} are double those for Theorem~\ref{THM:freq_response_mems_accelerometer}.
\section{Conclusions}\label{SEC:Conclusion}
In this paper, we proposed a formalization of Fourier transform in higher-order logic in order to perform the frequency domain analysis of the continuous-time systems. We presented the formal definition of Fourier transform and based on it, verified its classical properties, namely existence, linearity, time shifting, frequency shifting, modulation, time reversal, time scaling, differentiation and its relationship to Fourier Cosine, Fourier Sine and Laplace transforms. We also presented the formal verification of some commonly used functions. Next, we provided the formal verification of the frequency response of a generic \textit{n}-order system. Lastly, in order to demonstrate the practical effectiveness of the proposed formalization, we presented a formal analysis of an audio equalizer and a MEMs accelerometer.
In the future, we aim to verify the two-dimensional Fourier transform~\citep{bracewell1965fourier}, which is frequently applied for the frequency-domain analysis of many optical systems, electromagnetic theory and image processing algorithms. We also plan to formalize the inverse Fourier transform and verify its properties, which would be very helpful for reasoning about the solutions to differential equations. This formalization can be further used in our project on system biology~\citep{arashid2017sbiology}, to
formally analyze the differential equations corresponding to the reaction kinetic models of the biological systems.
\section*{Acknowledgements}
This work was supported by the National Research Program for Universities grant (number 1543) of Higher Education Commission (HEC), Pakistan.
\bibliographystyle{elsart-harv}
\section{Introduction}
Over the past decade, advances in data collection and increasing access to computational resources have led to a revolution in the use of data-driven techniques for the solution of intractable inverse problems \cite{ARCS,ARBM,guest2018deep,duraisamy2018turbulence}. One such problem is that of turbulence, the multiscale nature of which causes infeasible computational demands even for the most simple systems. This behavior is shared by all non-linear partial differential equations and necessitates the utilization of multiple modeling approximations for tractable compute times. One such modeling approach is that of large eddy simulation (LES) \cite{sagaut2006large}, which attempts to simulate the evolution of lower wavenumber modes of turbulence while the effects of higher wavenumber modes are modeled by an algebraic or differential equation. The procedure of modeling the finer scales is often denoted a \emph{closure} due to the lack of knowledge about higher-order wavenumber interactions in the coarse-grained flow \cite{berselli2006mathematics} and remains a critical component of accurate computational modeling for many applications \cite{hickel2014subgrid,yu2016dynamic,zhou2018structural}. From an LES point of view, the closure problem arises due to the fact that low-pass spatial filtering (due to coarse-graining and discrete numerical approximations) does not commute with the non-linear term.
Within the context of the Navier-Stokes equations, it is generally accepted that the finer scales are dissipative at the Kolmogorov length scales \cite{kolmogorov1941local} and therefore, most turbulence models seek to specify a sub-grid viscosity which mimics the dissipative behavior of the unsupported frequencies \cite{frisch1995turbulence}. Most sub-grid models can be traced back to the seminal work of Smagorinsky \cite{smagorinsky1963general}, where a model was proposed based on the concepts of an effective eddy viscosity determined by an \emph{a priori} specified mixing length and a $k^{-5/3}$ scaling recovery for the kinetic energy content in the wavenumber domain. Similar hypotheses have also been used for two-dimensional turbulence \cite{leith1968diffusion} (often utilized as a test-bed for geophysical scenarios, for instance see works by Pearson \textit{et al.}\cite{pearson2018log,pearson2017evaluation}) for approximating the $k^{-3}$ cascade, and generally have their roots in dimensional analysis related to the cascade of enstrophy. The two aforementioned models may be classified as functional due to the phenomenological nature of their deployment and represent the bulk of LES related turbulence models used in practical deployments.
In contrast, the structural approach to turbulence modeling utilizes no explicit specification of an eddy-viscosity and relies on an estimation of the low-pass spatial filtering nature of coarse-graining. With this approximate knowledge of the filter, arguments for scale-similarity \cite{bardina1980improved,layton2003simple} or approximate-deconvolution (AD) \cite{stolz1999approximate} are utilized to reconstruct the true non-linear term. In case of scale-similarity, the non-linear interactions of flow components are estimated by utilizing a forward filtering operation to the grid-resolved variables, while in AD an inverse filter is estimated using iterative resubstitutions. However, structural techniques are limited due to the fact that they approximately recover sub-filter stresses alone and are not dissipative enough due to the neglect of sub-grid considerations. Therefore, they require the specification of an additional (usually functional) sub-grid model or the specification of a finer resolution where sub-grid terms are negligible \cite{germano2015similarity}. Further information about turbulence models and whether they may be classified as functional or structural may be found in Sagaut's excellent text \cite{sagaut2006large}.
A common thread that connects both functional and structural models is the \emph{a priori} specification of a model coefficient or a characteristic filter width or ratio. Consequently, the choice of such parameters becomes crucial to the \emph{a posteriori} performance of the deployed model. Crucially, the literature has consistently shown that the choice of these coefficients is not single-valued, particularly for off-nominal flow situations. One may refer to the discussions by Galperin and Orszag \cite{galperin1993large} and Canuto and Cheng \cite{canuto1997determination} for examples of the effect of varying the eddy viscosity. The effect of characteristic filter widths and the order of deconvolution has also been explored by San \textit{et al.}\cite{san2015posteriori} and by Schneiderbauer and Saeedipour\cite{schneiderbauer2018approximate}. With this contextual background, in this study we introduce a hybrid modeling (physics-informed machine learning) methodology for determining sub-grid models without any phenomenological assumptions (in the spirit of structural models) but with sub-grid capture ability. This is accomplished by the use of artificial neural networks (ANNs) to establish data-driven maps between \emph{a priori} convolved and deconvolved fields, but without the use of any explicit filter.
In recent times, data-driven techniques have become extremely popular for the spatio-temporal modeling of dynamical systems \cite{schmidt2009distilling,bright2013compressive,xiao2015non,brunton2016discovering,schaeffer2017learning,raissi2017machine,mohan2018deep,raissi2018hidden,rudy2018deep,san2018neural,wan2018data,kim2018deep,muravleva2018application,jin2018prediction}. With respect to turbulence, some widely used strategies for inference include symbolic regression \cite{weatheritt2016novel,weatheritt2017development,weatheritt2017hybrid}, where functional model-forms for RANS deployments were generated through optimization against high-fidelity data. Ma \textit{et al.}\cite{ma2015using} utilized compressive-sensing based machine learning for the closure of a multiphase system. Gautier \textit{et al.}\cite{gautier2015closed} utilized a genetic algorithm for regression tasks in a closed-loop separation control deployment of a turbulent mixing layer. Other techniques incorporating Bayesian ideologies have also been used; for instance, Xiao \textit{et al.}\cite{xiao2016quantifying} used an iterative ensemble Kalman method to assimilate prior data for quantifying model-form uncertainty in RANS models. In Wang \textit{et al.}\cite{wang2017physics,wang2017comprehensive} and Wu \textit{et al.}\cite{wu2018data}, random-forest regressors were utilized for RANS turbulence modeling given DNS data. In Singh and Duraisamy \cite{singh2016using} and Singh \textit{et al.}\cite{singh2017machine}, an ANN was utilized to predict a non-dimensional correction factor in the Spalart-Allmaras turbulence model through a field-inversion process. The field-inversion process was utilized to develop optimal \emph{a priori} estimates for the correction factor from experimental data. Bypassing functional formulations of a turbulence model (a focus of this study) was also studied from the RANS point of view by Tracey \textit{et al.} \cite{tracey2015machine}.
Ling and Templeton \cite{ling2015evaluation} utilized support vector machines, decision trees and random-forest regressors for identifying regions of high RANS uncertainty. A deep-learning framework where Reynolds stresses would be predicted in an invariant subspace was developed by Ling \textit{et al.} \cite{ling2016reynolds}. The reader is directed to Duraisamy \textit{et al.}\cite{duraisamy2018turbulence} for an excellent review of turbulence modeling using data-driven ideas.
As shown above, the use of machine learning ideologies, and in particular ANNs, has generated significant interest in the turbulence modeling community. This is motivated by the fact that a multilayered artificial neural network may be optimally trained to universally approximate any non-linear function \cite{hornik1989multilayer}. Greater accessibility to data and the GPU revolution have also motivated the development of advanced ANN architectures for constrained learning and improved physical interpretability. Within the context of LES (and associated with the scope of this paper), there are several investigations into sub-grid modeling using data-driven techniques. In one of the first studies of the feasibility of mapping to unresolved stresses from grid-resolved variables by learning from high-fidelity data, Sarghini \textit{et al.}\cite{sarghini2003neural} utilized ANNs for estimating Smagorinsky model-form coefficients within a mixed sub-grid model for a turbulent channel flow. This may be considered similar to the field-inversion procedure described previously. ANNs were also used for wall-modeling by Milano and Koumoutsakos \cite{milano2002neural}, where they were used to reconstruct the near-wall field, with comparisons to standard proper-orthogonal-decomposition techniques. An alternative to ANNs for sub-grid predictions was proposed by King \textit{et al.}\cite{king2016autonomic}, where \emph{a priori} optimization was utilized to minimize the $L^2$-error between true and modeled sub-grid quantities in a least-squares sense using a parameter-free Volterra series. Maulik and San \cite{maulik2017neural} utilized an extreme learning machine (a variant of a single-layered ANN) to obtain maps between low-pass spatially filtered and deconvolved variables in an \emph{a priori} sense. This had implications for the use of ANNs for turbulence modeling without model-form specification.
A more in-depth investigation has recently been undertaken by Fukami \textit{et al.}\cite{fukami2018super}, where convolutional ANNs are utilized for reconstructing downsampled snapshots of turbulence. Gamahara and Hattori \cite{gamahara2017searching} utilized ANNs for identifying correlations with grid-resolved quantities for an indirect method of model-form identification in turbulent channel flow. The study by Vollant \textit{et al.} \cite{vollant2017subgrid} utilized ANNs in conjunction with optimal estimator theory to obtain functional forms for sub-grid stresses. In Beck \textit{et al.}\cite{beck2018neural}, a variety of neural network architectures, such as convolutional and recurrent neural networks, are studied for predicting closure terms for decaying homogeneous isotropic turbulence. A least-squares based truncation is specified for stable deployments of their model-free closures. Model-free turbulence closures are also specified by Maulik \textit{et al.}\cite{maulik2019subgrid}, where sub-grid scale stresses are learned directly from DNS data and deployed \emph{a posteriori} through a truncation for numerical stability. King \textit{et al.}\cite{king2018deep} studied generative adversarial networks and the LAT-NET \cite{hennigh2017lat} for the \emph{a priori} recovery of statistics such as the intermittency of turbulent fluctuations and spectral scaling. A detailed discussion of the potential benefits and challenges of deep learning for turbulence (and fluid dynamics in general) may be found in the article by Kutz \cite{kutz2017deep}.
While a large majority of the LES-based frameworks presented above utilize a least-squares error minimization technique for constructing maps to sub-grid stresses \emph{directly}, this work represents a physics-informed implementation of sub-grid source terms through the learning of convolutional and deconvolutional maps between grid-resolved and unresolved fields. In other words, our framework is able to reproduce, approximately, the map related to the convolution associated with insufficient grid-support in LES implementations of the Navier-Stokes equations, as well as its inverse. These optimal maps are obtained by supervised learning from subsampled direct numerical simulation (DNS) data and are deployed in an \emph{a posteriori} fashion for the LES of two-dimensional turbulence. In this manner, we unite the advantages of functional and structural modeling of turbulence, in addition to precluding the use of any phenomenological arguments. Through this, we also aim to achieve a harmonious combination of first-principles based physics as well as data-driven mechanisms for high accuracy. A hybrid formulation leveraging our knowledge of the governing equations and augmenting it with machine learning represents a great opportunity for obtaining optimal LES closures for multiscale physics simulations \cite{langford1999optimal,moser2009theoretically,labryer2015framework,king2016autonomic,pathak2018hybrid}.
Therefore, this investigation represents an advancement of the concepts proposed by the authors previously \cite{maulik2017neural}, where solely the deconvolutional ability of artificial neural networks was investigated in an \emph{a priori} sense for sub-filter stresses. The adaptations proposed in our current study are targeted towards recovering the sub-grid component of the coarse-grained LES computation. In addition, we not only address the issue of \emph{a priori} sub-grid recovery with our proposed closure, but also demonstrate its robustness in \emph{a posteriori} deployment with the associated numerical challenges. While the two-dimensional turbulence case is utilized as a proof of concept as well as for its geophysical implications, where improved closure development is still sought extensively, our generalized framework may easily be scaled up to multidimensional non-linear partial differential equations. Our results indicate that the proposed framework provides a robust sub-grid model with a dynamically computed effective eddy viscosity within the structural modeling ideology.
\section{Turbulence modeling equations}
We proceed with the introduction of our framework by outlining the governing equations for two-dimensional turbulence. These are given by the Navier-Stokes equations in the vorticity-streamfunction formulation. In place of a primitive-variable formulation, our decaying turbulence problem is solved using the temporal evolution of the following non-dimensionalized and coupled system of equations,
\begin{align}
\label{Eq1a}
\begin{split}
\frac{\partial \omega}{\partial t} + J(\omega,\psi) &= \frac{1}{Re} \nabla^2 \omega, \\
\nabla^2 \psi &= -\omega,
\end{split}
\end{align}
where the velocity vector components may be recovered as
\begin{align}
\label{Eq1b}
\begin{split}
\frac{\partial \psi}{\partial y} &= u \\
\frac{\partial \psi}{\partial x} &= -v.
\end{split}
\end{align}
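To make the coupled system concrete, the following sketch solves $\nabla^2 \psi = -\omega$ spectrally on a periodic square domain and then recovers the velocity components of Equation \ref{Eq1b}. This is an illustrative implementation rather than the solver used in this study; the function name, the default domain length and the use of FFT-based differentiation are our assumptions.

```python
import numpy as np

def solve_poisson_velocity(omega, L=2.0 * np.pi):
    """Spectral solve of nabla^2 psi = -omega on a periodic square domain,
    followed by velocity recovery u = d(psi)/dy, v = -d(psi)/dx.
    Illustrative sketch; grid layout and domain length are assumptions."""
    n = omega.shape[0]
    k = 2.0 * np.pi / L * np.fft.fftfreq(n) * n      # physical wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    ksq = kx**2 + ky**2
    ksq[0, 0] = 1.0                                   # avoid division by zero
    omega_hat = np.fft.fft2(omega)
    psi_hat = omega_hat / ksq                         # from -k^2 psi_hat = -omega_hat
    psi_hat[0, 0] = 0.0                               # zero-mean streamfunction
    psi = np.real(np.fft.ifft2(psi_hat))
    u = np.real(np.fft.ifft2(1j * ky * psi_hat))      # u = d psi / dy
    v = -np.real(np.fft.ifft2(1j * kx * psi_hat))     # v = -d psi / dx
    return psi, u, v
```

For a band-limited vorticity field such as $\omega = 2\sin x \sin y$, this recovers the analytical $\psi = \sin x \sin y$ to machine precision.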
The computational necessities of coarse-graining result in a grid-filtered system of equations
\begin{align}
\label{Eq2}
\begin{split}
\frac{\partial \overline{\omega}}{\partial t} + J(\overline{\omega},\overline{\psi}) &= \frac{1}{Re} \nabla^2 \overline{\omega} + \Pi, \\
\nabla^2 \overline{\psi} &= -\overline{\omega},
\end{split}
\end{align}
where overbarred quantities imply grid-resolved variables. A resulting unclosed term is obtained, ideally represented as
\begin{align}
\label{Eq3}
\Pi = J(\overline{\omega},\overline{\psi}) - \overline{J(\omega,\psi)}.
\end{align}
The second term on the right-hand side of the above equation represents the primary target of approximation for the structural modeling mechanism. In contrast, the functional modeling procedure is to represent $\Pi$ as an effective eddy-viscosity multiplied by Laplacian of the vorticity. In this study, we shall utilize a data-driven paradigm for approximating
\begin{align}
\label{Eq4}
\overline{J(\omega,\psi)} \approx \widetilde{J(\omega^*,\psi^*)},
\end{align}
where asterisked quantities are those obtained by data-driven deconvolution and the tilde represents data-driven convolution. This procedure is similar to the AD mechanism which requires an \emph{a priori} low-pass spatial filter specification. Note that the proposed methodology effectively aims to approximate the operations of Fourier cut-off filtering and its inverse which is the primary reason why it blends the distinction between sub-filter and sub-grid recovery. The former is a potential limitation of the AD mechanism in its current implementation. Our approximate sub-grid model is thus given by
\begin{align}
\label{Eq4b}
\tilde{\Pi} = J(\bar{\omega},\bar{\psi})-\widetilde{J(\omega^*,\psi^*)}.
\end{align}
For the purpose of comparison we also introduce the Smagorinsky and Leith models which utilize algebraic eddy-viscosities for sub-grid stress calculation given by
\begin{align}
\label{Eq5}
\Pi_e = \nabla . \left(\nu_e \nabla \bar{\omega}\right),
\end{align}
where for the Smagorinsky model we have
\begin{align}
\label{Eq6}
\nu_e = (C_s \delta)^2 |\bar{S}|,
\end{align}
and the Leith hypothesis states
\begin{align}
\label{Eq7}
\nu_e = (C_l \delta)^3 |\nabla \bar{\omega}|.
\end{align}
Note that $|\bar{S}| = \sqrt{2 \bar{S}_{ij} \bar{S}_{ij}}$ and $|\nabla \bar{\omega}|$ correspond to two commonly used kernels for eddy-viscosity approximations. Here, $\delta$ is generally assumed to be the characteristic mixing length, taken to be the grid size. The online performance of our proposed framework shall be compared to these simple, but robust, closures. We remark here that the standard procedure for closure in the vorticity-streamfunction formulation (relevant to two-dimensional simulations) is based on sub-grid vorticity source term modeling, but our generalized procedure may be extended to the primitive-variable approach as a source term in the Navier-Stokes momentum equations. For the convenience of the reader, we also tabulate some of the notation that will be widely used in the rest of this article in Table \ref{Table1}. We note that the variables outlined in this table are all defined on a coarse (i.e., LES) grid. Details regarding the preparation of the data for our machine learning methods shall be outlined in subsequent sections.
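For concreteness, a minimal sketch of the Smagorinsky and Leith eddy viscosities of Equations \ref{Eq6} and \ref{Eq7} on a periodic, uniform grid is given below. The central-difference stencils, the function signature and the default coefficients are illustrative assumptions rather than the exact implementation used in this study.

```python
import numpy as np

def eddy_viscosity(w, psi, dx, cs=0.2, cl=0.2, model="smagorinsky"):
    """Algebraic eddy viscosities: Smagorinsky uses |S| = sqrt(2 S_ij S_ij),
    Leith uses |grad(omega)|. Periodic second-order central differences on a
    square grid; coefficients and signature are illustrative assumptions."""
    def ddx(f):  # central difference along x, periodic
        return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2.0 * dx)
    def ddy(f):  # central difference along y, periodic
        return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2.0 * dx)
    if model == "smagorinsky":
        u, v = ddy(psi), -ddx(psi)               # velocities from streamfunction
        s11, s22 = ddx(u), ddy(v)
        s12 = 0.5 * (ddy(u) + ddx(v))
        s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
        return (cs * dx) ** 2 * s_mag            # Eq. (6)
    else:  # Leith
        grad_w = np.sqrt(ddx(w) ** 2 + ddy(w) ** 2)
        return (cl * dx) ** 3 * grad_w           # Eq. (7)
```

Both kernels return a non-negative viscosity field, consistent with the absolute-value quantities in the hypotheses above.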
\begin{table}[H]
\small
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|} \hline
Notation & Category \\ \hline
$\bar{a}$ & Grid filtered (i.e., Fourier cut-off filtered) from DNS \\ \hline
$a^c$ & Comb filtered (i.e., sub-sampled) from DNS \\ \hline
$a^*$ & Data-driven deconvolved variable \\ \hline
$\tilde{a}$ & Data-driven convolved variable \\ \hline
\end{tabular}
}
\caption{A summary of filter and deconvolutional notation.}
\label{Table1}
\end{table}
\section{Data-driven convolution and deconvolution}
The ANN, also known as a multilayered perceptron, consists of a set of linear or nonlinear mathematical operations on an input space vector to establish a map to an output space. Other than the input and output spaces, an ANN may also contain multiple hidden layers (denoted so due to the obscure mathematical significance of the matrix operations occurring there). Each of these layers is an intermediate vector in a multi-step transformation which is acted on by biasing and activation before the next set of matrix operations. Biasing refers to the addition of a constant vector to the incident vector at each layer, on its way to a transformed output. The process of activation refers to an elementwise functional modification of the incident vector to generally introduce nonlinearity into the eventual map. In contrast, no activation (also referred to as a linear activation) results in the incident vector being acted on solely by biasing. Note that each component of an intermediate vector corresponds to a unit cell, also known as a neuron. The learning in this investigation is \emph{supervised}, implying labeled data are used for informing the optimal map between inputs and outputs. Mathematically, if our input vector $\textbf{p}$ resides in a $P$-dimensional space and our desired output $\textbf{q}$ resides in a $Q$-dimensional space, the ANN establishes a map $\mathbb{M}$ as follows:
\begin{align}
\label{eq8}
\mathbb{M} : \{ p_1, p_2, \hdots, p_P\} \in \mathbb{R}^P \rightarrow \{ q_1, q_2, \hdots, q_Q\} \in \mathbb{R}^Q.
\end{align}
In this study, we utilize two maps which relate to convolution and deconvolution of fields with grid-resolved and sub-grid components respectively. We must caution the reader here that the maps are not assumed to transform between isomorphic spaces (considered a limitation of structural AD \cite{guermond2004mathematical,germano2015similarity}). This allows for the estimation of sub-grid loss due to coarse-graining the degrees of freedom in an LES deployment. In equation form, our optimal map $\mathbb{M}_1$ relates coarse-grained field stencils to their grid-filtered (i.e., Fourier cut-off filtered) counterparts and is given by
\begin{align}
\label{eq9}
\begin{gathered}
\mathbb{M}_1 : \{ \omega_{i,j}^c, \omega_{i,j+1}^c, \omega_{i,j-1}^c, \hdots, \omega_{i-1,j-1}^c \} \in \mathbb{R}^{9} \rightarrow \{ \tilde{\omega} \} \in \mathbb{R}^1,
\end{gathered}
\end{align}
where $\tilde{\omega}$ represents an approximation for $\bar{\omega}$.
Our second map, relates grid-filtered field stencils to their coarse-grained counterparts given by
\begin{align}
\label{eq10}
\begin{gathered}
\mathbb{M}_2 : \{ \bar{\omega}_{i,j}, \bar{\omega}_{i,j+1}, \bar{\omega}_{i,j-1}, \hdots, \bar{\omega}_{i-1,j-1} \} \in \mathbb{R}^{9} \rightarrow \{ \omega^{*} \} \in \mathbb{R}^1,
\end{gathered}
\end{align}
where $\omega^{*}$ represents an approximation for $\omega^c$. Note that both maps are trained for optimal prediction using normalized inputs. Our normalization (approximately) rescales our data to zero mean and unit variance by using grid-resolved variable quantities. Therefore, both inputs and outputs to maps are normalized by quantities available dynamically and the deployment of the network does not require \emph{a priori} storage of training parameters. For instance, the normalization of $\bar{\omega}$ may be obtained by
\begin{align}
\label{Eq11a}
\bar{\omega}^n = \frac{\bar{\omega}-\mu(\bar{\omega})}{\sigma(\bar{\omega})},
\end{align}
where $\mu(a)$ and $\sigma(a)$ refer to the mean and standard deviation of a field variable $a$. Similarly, the normalization of $\omega^{*}$ is given by
\begin{align}
\label{Eq11b}
\omega^{*^{n}} = \frac{\omega^{*}-\mu(\bar{\omega})}{\sigma(\bar{\omega})}.
\end{align}
In this manner, no \emph{a priori} training coefficients need to be recorded. In essence, we emphasize that all normalization is carried out to ensure that the mean of grid-resolved quantities is zero and that the standard deviation of these quantities is unity. Trained maps using this normalization technique may thus be used for the convolution or deconvolution of any coarse-grained variable.
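A minimal sketch of this normalization strategy (Equations \ref{Eq11a} and \ref{Eq11b}) follows, where any field is rescaled by the dynamically available statistics of a grid-resolved reference field. The helper names are our own.

```python
import numpy as np

def normalize(field, ref):
    """Rescale a field by the mean and standard deviation of a grid-resolved
    reference field (cf. Eqs. 11a-11b), so that no training-time statistics
    need to be stored for deployment. Helper names are illustrative."""
    return (field - np.mean(ref)) / np.std(ref)

def denormalize(field_n, ref):
    """Inverse map back to the physical domain using the same statistics."""
    return field_n * np.std(ref) + np.mean(ref)
```

Normalizing a grid-resolved field by its own statistics yields zero mean and unit standard deviation, and the round trip is exact.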
A key facet of our proposed methodology is that our trained maps are obtained only from vorticity data, even though they need deployment for the deconvolution of the streamfunction as well as the convolution of the Jacobian. Successful sub-grid information recovery (described in the results section) shows that this data-independence in training can be related to a true learning of the filtering and deconvolution characteristics between coarse and fine grids.
The pseudocode for a deployment of our optimally trained maps is shown in Algorithm \ref{Algo1} where it can be seen that each time step (or sub-step) of an explicit flow evolution requires the specification of a data-driven approximation to the true Jacobian $J\overline{(\omega,\psi)}$. In subsequent sections, we shall comment on the final \emph{a posteriori} constraining for ensuring numerical realizability. Figure \ref{Fig0} visually outlines the two networks deployed in this study.
\begin{algorithm}[H]
\caption{Proposed framework deployment}
\label{Algo1}
\begin{algorithmic}[1]
\State Given trained maps $\mathbb{M}_1 \textnormal{ and } \mathbb{M}_2$
\State Given $\overline{\omega} \textnormal{ and } \overline{\psi}$
\State Normalize $\overline{\omega} \textnormal{ and } \overline{\psi}$ to get $\overline{\omega}^n \textnormal{ and } \overline{\psi}^n$ respectively
\State Use $\mathbb{M}_2$ to obtain deconvolved variables $\omega^{n^*} \textnormal{ and }\psi^{n^*}$
\State Rescale to physical domain to get $\omega^{*} \textnormal{ and } \psi^{*}$
\State Calculate estimated coarse-grid Jacobian $J(\omega^*,\psi^*)$
\State Normalize Jacobian $J(\omega^*,\psi^*)$ to get $J(\omega^*,\psi^*)^n$
\State Use $\mathbb{M}_1$ to obtain convolved variables $\widetilde{J(\omega^*,\psi^*)^n}$
\State Rescale $\widetilde{J(\omega^*,\psi^*)^n}$ to physical domain to get $\widetilde{J(\omega^*,\psi^*)}$
\State Deploy turbulence model $\tilde{\Pi} = J(\bar{\omega},\bar{\psi}) - \widetilde{J(\omega^*,\psi^*)}$ subject to post-processing for numerical stability given by Equation \ref{Eq12}
\end{algorithmic}
\end{algorithm}
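A compact sketch of one deployment step of Algorithm \ref{Algo1} is given below. The trained maps are represented by generic callables acting on whole normalized fields and the discrete Jacobian is supplied by the caller; these abstractions, together with the function name, are assumptions made for illustration only.

```python
import numpy as np

def dcd_source_term(w_bar, psi_bar, jacobian, m1, m2):
    """One deployment step of the DCD closure (a sketch): deconvolve the
    grid-resolved fields, form the Jacobian on deconvolved variables,
    re-convolve it, and assemble Pi-tilde of Eq. (4b). `m1`/`m2` stand in
    for the trained convolution/deconvolution maps acting on normalized
    fields; `jacobian` is a caller-supplied discrete J(.,.), e.g. an
    Arakawa scheme. Names and calling convention are illustrative."""
    def norm(f, ref):
        return (f - ref.mean()) / ref.std()
    def denorm(f, ref):
        return f * ref.std() + ref.mean()
    w_star = denorm(m2(norm(w_bar, w_bar)), w_bar)          # deconvolved vorticity
    psi_star = denorm(m2(norm(psi_bar, psi_bar)), psi_bar)  # deconvolved streamfunction
    j_star = jacobian(w_star, psi_star)
    j_tilde = denorm(m1(norm(j_star, j_star)), j_star)      # re-convolved Jacobian
    return jacobian(w_bar, psi_bar) - j_tilde               # Pi-tilde, Eq. (4b)
```

As a sanity check, identity maps for $\mathbb{M}_1$ and $\mathbb{M}_2$ yield a vanishing source term, since the deconvolved and re-convolved Jacobians then coincide with the grid-resolved one.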
\begin{figure}
\centering
\caption{A schematic of our data-driven mapping for convolution and deconvolution. Two separate ANNs are utilized for projection to and from deconvolved variable space.}
\includegraphics[width=\columnwidth,trim=0cm 1cm 0cm 0cm,clip]{Figure_1.pdf}
\label{Fig0}
\end{figure}
As evident, implementation of the proposed framework requires multiple convolutional and deconvolutional passes over the grid-resolved variables, and therefore we refer to this framework, from henceforth, as the data-driven convolutional and deconvolutional closure (DCD). Both our networks utilize one hidden layer along with the input and output layers. The hidden and output layers each have a bias vector associated with them. For faster training, we utilize rectified linear activation functions (ReLU) for our hidden layer and a linear activation function for the output layer. Note that input data are not activated as they enter the network. Our hidden layer utilizes 100 unit cells (i.e., neurons) which are acted on by the ReLU transformation and biasing. The process of biasing and activation at each neuron is displayed in Figure \ref{Fig1}, and every neuron is fully connected to its previous layer (i.e., with incident inputs from all neurons of the previous layer). In subsequent sections, we outline a sensitivity study of our proposed ideology for varying architecture depths, where it is shown that one-layer networks suffice for this particular problem.
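For clarity, the forward pass of such a single-hidden-layer network (9 stencil inputs, 100 ReLU neurons with biases, one linear output with a bias) may be sketched as below. In practice, the weight matrices would come from TensorFlow training; here they are passed in, and the function name is our own.

```python
import numpy as np

def ann_forward(stencils, w1, b1, w2, b2):
    """Forward pass of the single-hidden-layer architecture described
    above: 9 stencil inputs -> 100 ReLU neurons -> 1 linear output.
    `stencils` has shape (batch, 9); weights/biases are assumed to come
    from training and are supplied by the caller."""
    h = np.maximum(stencils @ w1 + b1, 0.0)  # hidden layer: bias + ReLU
    return h @ w2 + b2                       # output layer: bias, linear activation
```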
\begin{figure}
\centering
\caption{A schematic of our biasing and activation at each hidden-layer neuron, assuming five inputs from the previous layer.}
\includegraphics[width=\columnwidth,trim=0cm 14.5cm 0cm 5cm,clip]{Figure_2.pdf}
\label{Fig1}
\end{figure}
\section{Training and \emph{a priori} validation}
For the purpose of generating the optimal maps discussed in the previous section, we utilize two supervised learning tasks with sets of labeled inputs and outputs obtained from direct numerical simulation (DNS) data for two-dimensional Kraichnan turbulence. We have utilized a second-order accurate energy-conserving Arakawa scheme for the nonlinear Jacobian and second-order accurate finite-difference discretization schemes for the Laplacian of the vorticity. The Poisson update is performed using a spectrally-accurate solver and the time-integration is performed by a third-order accurate TVD Runge-Kutta explicit method. Further details on the problem setup and the implementation of an energy- and enstrophy-conserving numerical method can be found in the authors' previous studies \cite{san2012high,maulik2017stable}. Our grid-resolved variables (i.e., $\bar{\omega}$) are generated by a Fourier cut-off filter so as to truncate the fully-resolved DNS fields (obtained at $2048^2$ degrees of freedom) to the coarse-grained grid level (i.e., given by $256^2$ degrees of freedom). Our subsampled variables (i.e., $\omega^c$) are obtained by a comb filtering procedure where every eighth data point is retained.
We also emphasize that, while the DNS generated multiple time snapshots of the flow evolution, data were harvested only from times $t=0,1,2,3$ and $4$ for the purpose of training and validation. This represents a stringent subsampling of the total available data for map optimization. Our DNS utilized an explicit formulation with a constant timestep of 0.0001, implying the potential generation of 40000 snapshots, out of which only 4 were selected at regular intervals for data harvesting. This represents a 0.01\% utilization of the total potential data during training, which is particularly challenging for this unsteady problem. The generation of data sets at the coarse-grained level is outlined in Algorithm \ref{Algo2}.
We also note that the Reynolds number chosen for generating the training and validation data sets is given by $Re=32000$, while deployment is tested for a higher Reynolds number of $64000$ for both \emph{a priori} and \emph{a posteriori} assessments. We remind the reader here that map training is performed solely on the vorticity field, despite the fact that the trained maps are to be utilized for the vorticity, the streamfunction and the Jacobian. For this reason, all our inputs are normalized to ensure zero mean and unit variance, while our outputs are normalized in a similar fashion but to a slightly different mean and variance, i.e.,
\begin{align}
\label{Eq11}
a^n &= \frac{a - \mu(\bar{a})}{\sigma(\bar{a})},
\end{align}
where $a$ may either be a grid-resolved or a deconvolved quantity. In essence, we emphasize that all normalization is carried out to ensure that the mean of grid-resolved quantities is zero and that the standard deviation of these quantities is unity. The aforementioned normalized quantities are then used as input-output pairs for the two different networks, as discussed previously.
\begin{algorithm}[H]
\caption{Data harvesting from DNS}
\label{Algo2}
\begin{algorithmic}[1]
\State Obtain DNS data for vorticity $\omega^{DNS}$ at $N^2=2048^2$
\State Comb filter to obtain $\omega^c$ from $\omega^{DNS}$ by sub-sampling every eighth point
\State Grid filter to obtain $\bar{\omega}$ from $\omega^{DNS}$
\State Normalize $\bar{\omega}$ to $\bar{\omega}^n$ using Equation \ref{Eq11a}
\State Normalize $\omega^c$ to $\omega^{c^n}$ using Equation \ref{Eq11b}
\State $\omega^{c^n}$ and $\bar{\omega}^n$ are input and output pairs respectively for map $\mathbb{M}_1$ optimization, where we assume true output $\tilde{\omega}^n \approx \bar{\omega}^n$ according to Equation \ref{Eq4}
\State $\bar{\omega}^n$ and $\omega^{c^n}$ are input and output pairs respectively for map $\mathbb{M}_2$ optimization, where we assume true output $\omega^{*^n} \approx \omega^{c^n}$
\end{algorithmic}
\end{algorithm}
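The comb and grid (Fourier cut-off) filters of Algorithm \ref{Algo2} may be sketched as below for periodic data. The sharp spectral truncation implementation, the function names and the resolution defaults are our illustrative assumptions.

```python
import numpy as np

def comb_filter(w_dns, ratio=8):
    """Sub-sample every `ratio`-th DNS point (cf. Algorithm 2, step 2)."""
    return w_dns[::ratio, ::ratio]

def fourier_cutoff_filter(w_dns, n_coarse=256):
    """Grid filter: sharp spectral cut-off of a periodic DNS field to the
    coarse resolution (cf. Algorithm 2, step 3). A sketch; the shifted-
    spectrum slicing is one of several equivalent implementations."""
    n = w_dns.shape[0]
    w_hat = np.fft.fftshift(np.fft.fft2(w_dns))
    c = n // 2
    keep = w_hat[c - n_coarse // 2 : c + n_coarse // 2,
                 c - n_coarse // 2 : c + n_coarse // 2]
    # inverse transform on the coarse grid, rescaled for the FFT size change
    return np.real(np.fft.ifft2(np.fft.ifftshift(keep))) * (n_coarse / n) ** 2
```

For a band-limited field, the Fourier cut-off filter reproduces the field exactly on the coarse grid, so it coincides with comb filtering in that special case; the two differ as soon as the DNS field carries content beyond the coarse-grid cut-off.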
Two-thirds of the total dataset generated for optimization is utilized for training and the rest is utilized for test assessment. Here, training refers to the use of data for loss calculation (which in this study is a classical mean-squared error) and backpropagation for parameter updates. The test data, however, are utilized to record the performance of the trained network on data it was not exposed to during training. Similar behavior in training and test losses implies a well-formulated learning problem. The final ANN (obtained post-training) is selected according to the best loss on the test data after a desired number of iterations, which for this study was fixed at 50. The choice of a low number of iterations was motivated by Pearson correlation values reaching 0.99 for both the training and test data sets. We also note that the error minimization in the training of the ANN utilized the Adam optimizer \cite{kingma2014adam} implemented in the open-source neural network training platform TensorFlow. We remark that, while the networks may have learned the target maps from the data they were provided for training and testing, validation requires an \emph{a posteriori} examination as detailed in the following section. We note here that data preprocessing as well as architectural modifications (for instance, network depth, number of neurons and activation types) need further investigation for improved generalization.
We first outline an \emph{a priori} study for the proposed framework, where the optimal maps are utilized for predicting probability distributions of the true Jacobian, i.e., $\overline{J(\omega,\psi)}$. A pseudocode for the computation of this true Jacobian is outlined in Algorithm \ref{Algo3}. In other words, we assess the turbulence model for a one-snapshot prediction. This study is carried out for one of our data snapshots, $t=2$, but for both in-training and out-of-training data sets. We remark that the maps have previously been exposed to vorticity data from $Re=32000$ only, and our out-of-training data set is given by a similar flow scenario but at a higher Reynolds number of $Re=64000$. One can thus make the argument for some transfer of learning between similar flow classes with slightly different physics. The performance of the framework is shown in Figure \ref{Fig3}, where the framework predicts the density functions of the true Jacobian accurately for both sets of data. We also note that this study solely utilized a mean-squared-error minimization for the target variables without any physics-based regularization. A future study involving loss functions devised with intuition from the Navier-Stokes equations would potentially aid in preserving invariance and symmetry properties between grid-resolved and deconvolved spaces. In addition, while the localized stencil-based sampling for map deployments proposed here is amenable to deployment on structured grids, extension to arbitrary meshes would require the use of interpolation or graph convolutional kernels for unstructured information injection into the learning architecture.
\begin{algorithm}[H]
\caption{True Jacobian $\overline{J(\omega,\psi)}$ from DNS}
\label{Algo3}
\begin{algorithmic}[1]
\State Obtain DNS data for vorticity $\omega^{DNS}$ and streamfunction $\psi^{DNS}$ at $N^2=2048^2$
\State Calculate Jacobian on DNS grid i.e., $J(\omega^{DNS},\psi^{DNS})$
\State Apply grid filter to $J(\omega^{DNS},\psi^{DNS})$ in order to obtain $\overline{J(\omega,\psi)}$ at $N^2=256^2$.
\end{algorithmic}
\end{algorithm}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_3.pdf}
\caption{The prediction ability of the use of both forward and inverse maps in the calculation of the approximate underlying Jacobian $\widetilde{J(\omega^{*},\psi^{*})}$ for $Re=32000$ (left) and $Re=64000$ (right). The true Jacobian $\overline{J(\omega,\psi)}$ is also shown.}
\label{Fig3}
\end{figure}
\section{\emph{A posteriori} testing}
The ultimate test of any data-driven closure model is in an \emph{a posteriori} framework with subsequent assessment of the said model's ability to preserve coherent structures and scaling laws. While the authors have undertaken \emph{a priori} studies with promising results for data-driven ideologies for LES \cite{maulik2017neural}, the results of the following section are unique in that they represent a model-free turbulence model computation in a temporally and spatially dynamic fashion. This test setup is particularly challenging due to the neglected effects of numerics in the \emph{a priori} training and testing. In the following, we utilize angle-averaged kinetic energy spectra to assess the ability of the proposed framework to preserve integral and inertial range statistics. Theoretical comparisons with Kraichnan turbulence \cite{kraichnan1967inertial} and the expected $k^{-3}$ cascade are also provided. In brief, we mention that the numerical implementation of the conservation laws is through second-order discretizations for all spatial quantities (with a kinetic-energy conserving Arakawa discretization for the calculation of the nonlinear Jacobian). A third-order total-variation-diminishing Runge-Kutta method is utilized for the vorticity evolution and a spectrally-accurate Poisson solver is utilized for updating streamfunction values from the vorticity. Our proposed framework is deployed pointwise for estimating $\tilde{\Pi}$ at each explicit time step until the final time of $t=4$ is reached. The robustness of the network to the effects of numerics is thus examined. For the purpose of numerical stability, we ensure the following condition before deploying our framework:
\begin{align}
\label{Eq12}
\Pi =
\begin{cases}
\tilde{\Pi},& \text{if } (\nabla^2 \bar{\omega}) (\tilde{\Pi}) > 0\\
0, & \text{otherwise.}
\end{cases}
\end{align}
where the truncation explicitly ensures no negative numerical viscosities due to the deployment of the sub-grid model. We remind the reader that the Smagorinsky and Leith hypotheses explicitly specify positive eddy-viscosities that are obtained by absolute value quantities as given in Equations \ref{Eq6} and \ref{Eq7}. An \emph{a priori} visual quantification of the truncation is shown in Figure \ref{Fig4} where quantities in the first and third quadrants are retained predictions and the others are discarded. A similar behavior is seen for both $Re=32000$ and $Re=64000$ data. This image also highlights the challenges of translating \emph{a priori} conclusions to \emph{a posteriori} implementations due to the requirement of numerical stability.
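As a concrete illustration, the pointwise truncation of Equation \ref{Eq12} can be sketched in a few lines of NumPy. The second-order Laplacian stencil and the periodic boundaries below are illustrative assumptions on our part, not details extracted from the paper's solver.

```python
import numpy as np

def truncate_pi(pi_tilde, omega_bar, dx):
    """Pointwise truncation of Eq. (12): retain the predicted sub-grid
    term only where it acts dissipatively, i.e. where (lap omega) * Pi > 0.
    Doubly periodic boundaries are assumed, handled via np.roll."""
    lap = (np.roll(omega_bar, 1, 0) + np.roll(omega_bar, -1, 0) +
           np.roll(omega_bar, 1, 1) + np.roll(omega_bar, -1, 1) -
           4.0 * omega_bar) / dx**2          # second-order Laplacian
    return np.where(lap * pi_tilde > 0.0, pi_tilde, 0.0)
```

Predictions falling in the second and fourth quadrants of Figure \ref{Fig4} are exactly the points zeroed out by the `np.where` mask.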
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_4.pdf}
\caption{A visual assessment of the truncation of our numerical post-processing during deployment given by Equation \ref{Eq12}. Blue points indicate truncated deployment for ensuring no negative viscosity and numerical stability. \emph{A priori} predictions for $Re=32000$ (top) and $Re=64000$ (bottom) shown.}
\label{Fig4}
\end{figure}
Figure \ref{Fig5} displays the statistical fidelity of coarse-grained simulations obtained with the deployment of the proposed framework for $Re=32000$. Stable realizations of the vorticity field are generated due to the combination of our training and post-processing. For the purpose of comparison, we also include coarse-grained no-model simulations, i.e., unresolved numerical simulations (UNS), which demonstrate an expected accumulation of noise at grid cut-off wavenumbers. DNS spectra are also provided showing agreement with the $k^{-3}$ theoretical scaling expected for two-dimensional turbulence. Our proposed framework is effective at stabilizing the coarse-grained flow by estimating the effect of sub-grid quantities and preserving trends with regards to the inertial range scaling. Figure \ref{Fig6} visually quantifies the effect of the stabilization imparted by the proposed framework. The reader may observe that the proposed framework recovers an excellent scaling behavior. This is similar to the performance obtained by deploying the Smagorinsky model at $C_s=0.2$, a widely utilized parametric choice obtained through prior numerical experimentation. The Leith performance at $C_l=0.2$ is slightly under-dissipative. The reader may notice that an arbitrary choice of $C_s=C_l=1.0$ leads to overdissipative performance of the eddy-viscosity closures. Our data-driven framework is thus more resistant to unnecessary dissipation. Note that the choice of a higher eddy-viscosity coefficient for two-dimensional turbulence has been detailed in previous literature \cite{cushman2011introduction}. Another quantification of the performance of the DCD closure is described in Figures \ref{Fig7} and \ref{Fig8}, which juxtapose the varying performance of these parameter-dependent eddy-viscosity hypotheses (i.e., Smagorinsky and Leith, respectively) with the proposed data-driven approach.
One can observe that an optimal selection of parameters (after \emph{a posteriori} examination), given by $C_l = 0.5$ for the Leith model, recreates the performance of the proposed framework well. This implies that the proposed framework has learned a similar dissipative nature through \emph{a priori} optimization of a filter and its inverse. Indeed, the application of the Smagorinsky model to various engineering and geophysical flow problems has revealed that the constant is not single-valued and varies depending on resolution and flow characteristics \cite{galperin1993large,canuto1997determination,vorobev2008smagorinsky}, with higher values specifically for geophysical flows. In comparison, the proposed framework has embedded the adaptive nature of dissipation into its map, which is a promising outcome. Before proceeding, we note that default parametric choices for the Smagorinsky and Leith models are given by $C_s=C_l=0.2$.
For ensuring that the training is sufficiently generalized for this particular problem, we establish a suite of testing for the predictive performance and the numerical stability of our proposed framework. We first perform multiple forward simulations using the deployment of our proposed closure by utilizing a different random seed in the random-number generation required for the initial conditions at $Re=32000$ \cite{maulik2017stable}. This is to ensure that there is no data memorization by our maps. We choose 24 random initial conditions and ensemble-average their kinetic energy spectra at the completion of the LES for our model as well as the Smagorinsky, Leith and no-model (i.e., UNS) coarse-grid runs. We have also included ensemble results from Smagorinsky and Leith deployments at higher values of $C_s=C_l=1.0$ to describe the loss of fidelity at the lower wavenumbers in case of incorrect parameter specification. The resultant spectra are shown in Figure \ref{Fig9} where one can ascertain that the prediction quality of our framework remains identical regardless of varying initial conditions. This is promising as it validates our hypothesis that it is the smaller scales which are primarily affected by the proposed closure. We also demonstrate the utility of our learned map on an \emph{a posteriori} simulation for $Re=64000$ data where similar trends are recovered as seen in statistical comparisons (Figure \ref{Fig10}) and qualitative behavior (Figure \ref{Fig11}). This also demonstrates an additional stringent validation of the data-driven model for ensuring generalization.
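Since all of the comparisons above rest on angle-averaged kinetic energy spectra, a minimal sketch of such a diagnostic is given below. The shell-binning and normalization conventions are assumptions on our part; published spectra depend on the specific solver conventions.

```python
import numpy as np

def angle_averaged_spectrum(omega):
    """Angle-averaged kinetic energy spectrum E(k) of a doubly periodic
    2D vorticity field. Uses E(k) = 0.5 |omega_hat|^2 / k^2, which follows
    from u_hat = i k x psi_hat and psi_hat = -omega_hat / k^2."""
    n = omega.shape[0]
    om_hat = np.fft.fft2(omega) / n**2
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    kkx, kky = np.meshgrid(k, k, indexing='ij')
    kmag = np.sqrt(kkx**2 + kky**2)
    ksq = np.where(kmag > 0.0, kmag**2, 1.0)  # guard the mean mode
    e_density = 0.5 * np.abs(om_hat)**2 / ksq
    shells = np.bincount(np.rint(kmag).astype(int).ravel(),
                         weights=e_density.ravel())
    return shells[:n // 2]                    # shells up to the Nyquist limit
```

A single Fourier mode $\sin(4x)$, for example, produces a spectrum concentrated in the $k=4$ shell, which is a convenient sanity check for the binning.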
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_5.pdf}
\caption{The \emph{a posteriori} performance of proposed framework for $Re=32000$ in terms of energy spectra. At each step of sub-grid stress calculation, both forward and inverse maps are used for convolution and deconvolution in the estimation of the true underlying Jacobian.}
\label{Fig5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_6.pdf}
\caption{Visual quantification of the \emph{a posteriori} performance of proposed framework for $Re=32000$ with stabilized (top), under-resolved (middle) and filtered DNS contours (bottom) for vorticity.}
\label{Fig6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_7.pdf}
\caption{Performance comparison of proposed framework with coefficient-dependent Smagorinsky model. One can observe that higher $C_s$ values lead to over-dissipative models.}
\label{Fig7}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_8.pdf}
\caption{Performance comparison of proposed framework with coefficient-dependent Leith model. One can observe that higher $C_l$ values lead to over-dissipative models.}
\label{Fig8}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_9.pdf}
\caption{Ensemble-averaged \emph{a posteriori} performance of proposed framework for $Re=32000$ in terms of energy spectra. This determines the generalizability of proposed framework.}
\label{Fig9}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_10.pdf}
\caption{The \emph{a posteriori} performance of proposed framework for $Re=64000$ in terms of energy spectra. Training data limited to $Re=32000$ only.}
\label{Fig10}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_11.pdf}
\caption{Visual quantification of the \emph{a posteriori} performance of proposed framework for $Re=64000$ with stabilized (top), under-resolved (middle) and filtered DNS contours (bottom) for vorticity. Note: Training only with $Re=32000$ data.}
\label{Fig11}
\end{figure}
We also seek to compare the performance of the proposed framework against the dynamic formulation of the Smagorinsky and Leith models \cite{germano1991dynamic}, modified for the vorticity and streamfunction formulation as described by San and Maulik \cite{maulik2017stable}, where a least-squares optimization problem is solved at two scales of resolution (defined through a test filter) to obtain optimal values of the Smagorinsky and Leith coefficients in a dynamic fashion. We note that even the dynamic formulation requires the specification of an \emph{a priori} characteristic filter-width ratio (i.e., a ratio between test and grid filters), $\kappa$, which affects \emph{a posteriori} results. In this comparison, we have utilized a filter-width ratio of $\kappa=2$ with the use of an explicit trapezoidal filter. The results of this comparison with our framework are shown for Reynolds numbers of $Re=32000$ and $Re=64000$ in Figures \ref{FigRev3} and \ref{FigRev4} respectively. One can observe that the performance of the dynamic implementations of our eddy-viscosity hypotheses is recreated in a qualitative fashion. Our model may thus be assumed to be both data-driven and dynamic in nature.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_12.pdf}
\caption{A comparison of the proposed framework with the Dynamic Smagorinsky and Dynamic Leith models for $Re=32000$. One can see an optimal solution being obtained by the data-driven formulation in a similar manner.}
\label{FigRev3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_13.pdf}
\caption{A comparison of the proposed framework with the Dynamic Smagorinsky and Dynamic Leith models for $Re=64000$. One can see an optimal solution being obtained by the data-driven formulation in a similar manner. Training data limited to $Re=32000$ only.}
\label{FigRev4}
\end{figure}
In terms of computational cost, we remark that the proposed framework adds a considerable computational expenditure (\emph{a posteriori} simulations led to 4 times the computational cost of the dynamic formulation) in the serial formulation. However, scalable deployments of the proposed framework in distributed environments are a subject of ongoing investigation for reducing this cost. While the data-driven framework promises more accuracy through exposure to multiple sources of turbulence data, its scalable deployment remains an important open question for successful integration into modern computational fluid dynamics solvers.
\section{Sensitivity study}
We investigate the robustness of our framework by ensuring that an optimal number of hidden layers or neurons has been utilized through an \emph{a posteriori} sensitivity study where a varying number of layers and neurons are tested for spectral scaling recovery. Keeping the default network architecture as a one-layer, 100-neuron network, we investigate the effect of a reduction or increase in neurons as well as the effect of the number of hidden layers. We note that our studies are performed for $Re=64000$ as an additional cross-validation.
Figure \ref{Fig12} shows the effect of varying network depths, where it can be seen that a one-layer architecture performs sufficiently accurately to be considered optimal for deployment. This hints at a simpler nonlinear relationship between the inputs and outputs which has been captured by our framework. Figure \ref{Fig13} shows the effect of the number of neurons, where once again, it is observed that reduced model complexity does not impede performance. While this study utilized 100 neurons in the single hidden layer, even 10 would suffice for accurate scaling recovery. These observed behaviors imply that our framework allows for reduced network depths and neuron counts, with their associated computational advantages during training and deployment. However, we must caution the reader that greater amounts of data would necessitate deeper architectures for more generalization. In particular, our expectation is that if multiple flow scenarios were to be learned, simple feed-forward ANNs may prove to be inadequate. We also note that our choice of localized sampling, network architecture and training loss-function is specific to the resolution loss and physics at hand. Greater generalization (through improved diversity of training data) would require a revised hyperparameter study.
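The architecture sweep described above can be organized as in the following sketch, which uses scikit-learn's \texttt{MLPRegressor} on synthetic stand-in data. The 18 input features loosely mimic a pair of 9-point vorticity/streamfunction stencils, and the smooth target is hypothetical; only the sweep structure is the point here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in data: the real inputs would be the local
# vorticity/streamfunction stencils sampled from filtered DNS fields.
rng = np.random.default_rng(1)
X = rng.standard_normal((400, 18))
y = np.tanh(X[:, 0]) + 0.1 * X[:, 1]

# Sweep over network depth and width, as in the sensitivity study.
for depth in (1, 2):
    for width in (10, 100):
        net = MLPRegressor(hidden_layer_sizes=(width,) * depth,
                           max_iter=1000, random_state=0).fit(X, y)
        print(f"layers={depth} neurons={width} R2={net.score(X, y):.3f}")
```

In an actual study, each trained network would then be deployed in the \emph{a posteriori} solver and judged by its recovered energy spectrum rather than by a regression score alone.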
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_14.pdf}
\caption{Sensitivity study for proposed framework number of layers at $Re=64000$. Training data limited to $Re=32000$ only and with 100 neurons in each layer.}
\label{Fig12}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_15.pdf}
\caption{Sensitivity study for proposed framework number of neurons at $Re=64000$. Training data limited to $Re=32000$ only and with 1 hidden layer only.}
\label{Fig13}
\end{figure}
For our problem of choice, it is evident that a 10-neuron, 1-layer ANN is sufficiently viable for estimating both $\mathbb{M}_1$ and $\mathbb{M}_2$. This lends evidence to the fact that our dual network formulation may also allow for simpler learning algorithms (i.e., for this particular problem). We perform an \emph{a priori} sensitivity study of training and test mean-squared-error measures for three other well-known statistical learning algorithms: a linear regressor (LR), a random-forest regressor (RF) \cite{liaw2002classification} and a decision-tree regressor (DT) \cite{safavian1991survey}. We utilize the open-source scikit-learn machine learning library in Python for standard implementations of these techniques. A quantitative training and testing mean-squared-error performance for these techniques in comparison to the ANN is shown in Figure \ref{Fig14}, where similar performance characteristics are observed despite vastly different learning methodologies for $\mathbb{M}_2$ optimization. It can thus be concluded that the utilization of our dual network framework has led to the simplification of a highly nonlinear problem to one that is tractable for linear learning methods.
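A minimal harness for such a learning-algorithm comparison might look as follows. The data are synthetic placeholders (a noisy linear target), so the relative errors do not reproduce Figure \ref{Fig14}; the sketch only shows the scikit-learn estimators and the train/test mean-squared-error measurement.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Placeholder data: in the paper, inputs would be stencil samples and the
# target the deconvolution/convolution map output.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 18))
y = X @ rng.standard_normal(18) + 0.05 * rng.standard_normal(1000)
Xtr, Xte, ytr, yte = X[:800], X[800:], y[:800], y[800:]

models = {"LR": LinearRegression(),
          "DT": DecisionTreeRegressor(max_depth=8, random_state=0),
          "RF": RandomForestRegressor(n_estimators=50, random_state=0)}
errors = {}
for name, model in models.items():
    model.fit(Xtr, ytr)
    errors[name] = mean_squared_error(yte, model.predict(Xte))
    print(name, round(errors[name], 4))
```

On this linear toy target the linear regressor naturally dominates; the paper's observation is that, on the actual stencil data, all four learners reach comparable errors for $\mathbb{M}_2$.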
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_16.pdf}
\caption{Sensitivity study for machine learning algorithm for training and testing mean-squared-errors. These errors are shown for $\mathbb{M}_2$ optimization.}
\label{Fig14}
\end{figure}
The linear regressor is also implemented in an \emph{a posteriori} manner as shown in Figures \ref{LinPos1} and \ref{LinPos2} for $Re=32000$ and $Re=64000$ respectively. The kinetic energy spectra predictions of these linear estimates of the convolutional and deconvolutional relationships are slightly less dissipative in the inertial and grid cut-off length scales for the $Re=32000$ case. However, very similar performance is obtained for $Re=64000$. The slight discrepancy in the performance of the linear implementations of the convolutional and deconvolutional maps may be attributed to a lower generalizability of the simpler nature of its learning. However, we would like to remark that this has positive implications for the utility of these techniques for the preservation of the solenoidal constraint and frame-invariance in higher-dimensional flows \cite{stolz1999approximate} on structured grids. We would also like to note that the utilization of the same data-local filter stencil in all locations of the specified mesh ensures Galilean invariance \cite{razafindralandy2007analysis}. In addition, the use of stencil inputs is philosophically aligned with \cite{moser2009theoretically}, where multipoint input data are used for optimal LES formulations. However, further research is necessary for importing concepts related to isotropization of these data-driven filter and inverse kernels for application to general unstructured grids. It is also necessary to explore the possibilities of `constrained-learning' which may embed the preservation of the solenoidal constraint in higher dimensions through penalties introduced to the loss-functions \cite{raissi2018hidden}. That is a subject of on-going investigation.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_17.pdf}
\caption{The performance of a linear estimator (LR) for convolutional and deconvolutional maps in the proposed framework for $Re=32000$. A comparison to the default ANN is shown.}
\label{LinPos1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_18.pdf}
\caption{The performance of a linear estimator (LR) for convolutional and deconvolutional maps in the proposed framework for $Re=64000$. A comparison to the default ANN is shown. Training data limited to $Re=32000$ only.}
\label{LinPos2}
\end{figure}
\section{Modified truncation via mean filtering}
The truncation specified in Equation \ref{Eq12} and Figure \ref{Fig4} leads to an asymmetry in the estimation of the dissipation by finer wavenumbers. To that end, we introduce a modified truncation kernel based on local averaging, with an added truncation of positive eddy-viscosity predictions to ensure a balance with backscatter. This is introduced through the concept of a locally-averaged eddy-viscosity prediction, for instance, given by
\begin{align}
\label{EqRev2}
\nu^{av}_{i,j} = \frac{1}{9}\left(\nu_{i,j}^e + \nu_{i,j+1}^e + \nu_{i,j-1}^e + \hdots + \nu_{i-1,j-1}^e \right),
\end{align}
where
\begin{align}
\nu_{i,j}^e = \frac{\tilde{\Pi}_{i,j}}{\nabla^2 \bar{\omega}_{i,j}}.
\end{align}
The averaging procedure in Equation \ref{EqRev2} may also be represented by a mean-filtering-kernel given as
\begin{align}
\nu^{av} = \frac{\nu^e}{9}
\begin{bmatrix}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{bmatrix}.
\end{align}
The transfer-function of this kernel may be visualized as shown in Figure \ref{FigBS} and this averaging filter has the effect of eliminating localized pointwise values which are unrepresentative of their surroundings.
\begin{figure}
\centering
\mbox{
\includegraphics[width=\columnwidth]{Figure_19.pdf}
}
\caption{Transfer function for truncation kernel to preserve statistical effects of backscatter.}
\label{FigBS}
\end{figure}
The quantity $\nu^{av}_{i,j}$ represents the averaged dissipative (or energy-producing) nature of the local stencil of prediction, and the quantity $\nu_{i,j}^e$ is the local effective eddy-viscosity prediction by our proposed framework. Our truncation scheme is then expressed as
\begin{align}
\label{EqRev3}
\Pi_{i,j} =
\begin{cases}
\tilde{\Pi}_{i,j},& \text{if } \nu^{av}_{i,j} > \nu^{e}_{i,j}\\
0, & \text{otherwise.}
\end{cases}
\end{align}
The effect of this modified truncation is described in Figure \ref{FigRev5a} where an increased truncation is observed quite clearly. Our model formulation may thus be assumed to preserve the statistical nature of the negative-eddy viscosities in a locally-averaged manner.
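Equations \ref{EqRev2}--\ref{EqRev3} can be combined into a compact NumPy sketch. The periodic rolls implement the $3\times 3$ mean-filtering kernel, while the second-order Laplacian and the small \texttt{eps} guard against a vanishing denominator are our own illustrative assumptions.

```python
import numpy as np

def bs1_truncate(pi_tilde, omega_bar, dx, eps=1e-12):
    """Locally-averaged (BS-1) truncation: compute the effective eddy
    viscosity nu_e = Pi / lap(omega), mean-filter it with the 3x3 kernel,
    and retain Pi only where the local average exceeds the pointwise
    value. Doubly periodic boundaries assumed."""
    lap = (np.roll(omega_bar, 1, 0) + np.roll(omega_bar, -1, 0) +
           np.roll(omega_bar, 1, 1) + np.roll(omega_bar, -1, 1) -
           4.0 * omega_bar) / dx**2
    nu_e = pi_tilde / np.where(np.abs(lap) > eps, lap, eps)
    # 3x3 mean filter via the nine shifted copies of nu_e.
    nu_av = sum(np.roll(nu_e, (i, j), (0, 1))
                for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    return np.where(nu_av > nu_e, pi_tilde, 0.0)
```

The mean filter discards pointwise eddy-viscosity values that are unrepresentative of their neighborhood, which is what produces the additional truncation visible in Figure \ref{FigRev5a}.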
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_20.pdf}
\caption{A visual assessment of the truncation of our numerical post-processing during deployment given by the BS-1 framework. Blue points indicate truncated deployment for ensuring no negative viscosity and numerical stability. \emph{A priori} predictions for $Re=32000$ (top) and $Re=64000$ (bottom) shown.}
\label{FigRev5a}
\end{figure}
\emph{A posteriori} deployments of this modified truncation scheme are displayed in Figures \ref{FigRev6} and \ref{FigRev7} where an improved capture of the inertial range is observed for $Re=32000$ and $Re=64000$ respectively. This implies that the statistical fidelity of the prediction has been improved by the integration of a local backscatter estimate. The combination of novel truncation strategies may further be studied in the context of this data-driven framework for close agreement with theoretical scaling laws.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_21.pdf}
\caption{A comparison of the choice of \emph{a posteriori} truncation utilized in our proposed framework. A statistical preservation of backscatter enforced by our proposed kernel leads to a better agreement with the inertial range statistics for $Re=32000$.}
\label{FigRev6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure_22.pdf}
\caption{A comparison of the choice of \emph{a posteriori} truncation utilized in our proposed framework. A statistical preservation of backscatter enforced by our proposed kernel leads to a better agreement with the inertial range statistics for $Re=64000$. Training data limited to $Re=32000$ only.}
\label{FigRev7}
\end{figure}
\section{Concluding remarks}
In this investigation, we have put forth and analyzed a physics-informed data-driven closure modeling framework for nonlinear partial differential equations. Our proposal is to use two single-layer feed-forward artificial neural networks for mapping transformations between grid-resolved variables with missing wavenumber content and subsampled direct numerical simulation data in order to close the two-dimensional Navier-Stokes equations. This investigation continues from the authors' previous work \cite{maulik2017neural}, which assessed the deconvolutional ability of neural networks, by employing them for estimating sub-grid relationships from grid-resolved variables.
Our framework precludes the utilization of any phenomenological arguments or model form constraints and relies, instead, solely on the approximation of the Fourier cut-off filtering inherent in coarse-graining as well as its approximate inverse. We remark that while there is truly no way to invert a Fourier cut-off filter, \emph{a priori} exposure to samples from resolved and filtered fields is used to estimate the information loss and reconstruct it. For the purpose of numerical stability, we also employ two post-processing strategies, with the first ensuring no aggregate negative viscosities in the computational domain and the second preserving backscatter in a statistical sense. This ensures that the stochastic nature of the network predictions does not trigger numerical instability amplification in an explicit flow computation. Of the two proposed truncation mechanisms for the preservation of backscatter, our first formulation shows a good agreement with DNS statistics whereas the second truncates excessively. However, we note that many such kernels may be investigated and we seek to undertake this in future research.
Another important feature of this investigation is that, despite its data-driven nature, our offline training phase necessitates no exposure to the true sub-grid stress data and predictions are viable simply through the estimation of the nature of the coarse-graining process in LES. Our sensitivity study reveals the benefits of this approach, where it is seen that increasing network complexity leads to no appreciable improvement in the \emph{a posteriori} performance for this current test case. The need for complicated network architectures (and their associated computational and memory burden) is thus minimized due to the physics-informed nature of our formulation.
Comparison with other well-established linear statistical learning methods also shows that the novel dual network formulation presented here reduces the complexity of learning considerably. In particular, the performance of a linear map representation of the convolution and deconvolution operations ensures a direct enforcement of the solenoidal constraint on the convolved and deconvolved fields for applicability to higher dimensions. \emph{A posteriori} realizations of the linear mappings between grid-resolved and sub-grid space exhibit the bias-variance trade-off, where the simpler nature of the linear regressor leads to lower generalization for a different data-set. However, an effective parameter and model-form free closure is readily obtained in this case as well.
We also note that the data-local nature of our framework with the combination of solely one map (each for convolution and deconvolution) ensures that frame-invariance is respected for the specified mesh. As a future direction, this framework shall be studied with the view of integrating physics-based constraints in the offline training phase. These may be introduced through optimization penalties for continuity enforcement and for isotropization on arbitrary meshes. These are necessary for the generalization of this framework to higher-dimensional flows with arbitrary boundary conditions.
While the results of this study have proven promising for the development of purely data-driven closures for LES, the true test of these ideologies would be to develop generalized closures for a variety of flows. In terms of a long-term goal, the preliminary results displayed here must translate to a situation where \emph{a posteriori} closure is determined by \emph{a priori} exposure to a variety of flow classes. Additionally, the stencil based formulation for a predictive map leads to a resolution dependence of the trained relationships. This is because our LES to DNS ratio is fixed during the specification of training data. An exposure to different levels of coarse-graining for potential predictions would also increase the generalizability of this framework. With that in mind, we remark that the framework proposed here represents the advantages of implementing a data-driven paradigm from a physics-informed point of view with consequent benefits for framework complexity and ease of deployment.
\begin{acknowledgements}
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Award Number DE-SC0019290. O.S. gratefully acknowledges their support. Direct numerical simulations for this project were performed using resources of the Oklahoma State University High Performance Computing Center. Disclaimer: This report was prepared as an account of work sponsored by an agency of the United
States Government. Neither the United States Government nor any agency thereof, nor any of their
employees, makes any warranty, express or implied, or assumes any legal liability or responsibility
for the accuracy, completeness, or usefulness of any information, apparatus, product, or process
disclosed, or represents that its use would not infringe privately owned rights. Reference herein to
any specific commercial product, process, or service by trade name, trademark, manufacturer, or
otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by
the United States Government or any agency thereof. The views and opinions of authors expressed
herein do not necessarily state or reflect those of the United States Government or any agency thereof.
\end{acknowledgements}
The object of the present article is to introduce a special class of multi-dimensional diffusions in random
environment for which we are able to prove a law of large numbers and a functional central limit theorem governing
the corrections to the law of large numbers, valid for a.e. environment (a so-called {\textit{quenched}} functional central
limit theorem). The investigation of the asymptotic behavior of multi-dimensional diffusions in random environment
is well-known for its difficulty due to the massively non-self-adjoint character of the model, and to the rarity
of explicitly calculable examples. A special interest of the class we introduce stems from the fact that it offers
examples of diffusions with non-vanishing random drifts where on the one-hand no
invariant measure for the process of the environment viewed from the particle, absolutely continuous with respect
to the static distribution of the random environment, is known, and where on the other hand our results
hold without certain assumptions which guarantee condition (T) or (T') of Sznitman, see \cite{SCHM1}, \cite{GOE2}
in the continuous set-up. Thus when the limiting velocity vanishes such examples correspond to diffusive motions where
very few results are available, see \cite{brickup}, and \cite{BSZ}, in the discrete set-up, or \cite{SZNZEIT}
for diffusions in random environment. And when the limiting velocity does not vanish such examples differ from
existing results for {\textit{quenched}} functional central limit theorems, such as in the recent \cite{berzei},
or \cite{RS2}, in the
discrete set-up, for such results rely on finiteness assumptions for moments of certain regeneration times, which to the best of
our knowledge can only be checked through some sufficient criterion for (T) or (T'). Let us mention that at present
it is an open problem whether ballistic behavior in dimension 2 and above implies (T) or (T'). Our class contains
examples of ballistic motion, and we do not need to check (T) or (T') (which of course does not preclude that these
conditions may hold in these examples). The class we introduce here is a type of continuous
counterpart of the class considered in
\cite{BSZ} in the context of random walks in random environment. The formulas we obtain for the velocity
are reasonably explicit and might be amenable to the construction of some further examples or counterexamples,
in the spirit of what was done in \cite{BSZ},
although this is not carried out here given the length of the present work. Indeed the continuous
set-up is more delicate than the discrete set-up, and it is by no means routine to adapt the general strategy of
\cite{BSZ} in the context of diffusions in random environment. For an overview of results and useful techniques
concerning this area of research we refer to
\cite{SZN2}, \cite{SZN}, \cite{ZEIT}.\\
Before describing our results any further, let us first introduce the model.
We consider integers $d_1\geq 5,\ d_2\geq 1$ and $d=d_1+d_2.$ The random environment is described by a probability space
$(\Omega,\cal{A},\bb{P})$
and we assume the existence of a group $\{\tau_x : x\in\bb{R}^{d}\}$ of $\bb{P}$-preserving transformations on $\Omega$ that are
jointly measurable in $x$ and $\omega.$ On $(\Omega,\cal{A},\bb{P})$ we consider an $\bb{R}^d$-valued random variable $b(\cdot)$
with vanishing first $d_1$
components, that is\begin{equation}\label{77}
b(\omega)= (\underbrace{0,\ldots,0}_{d_1},b^*(\omega))\in\bb{R}^d,\mbox{ for }\omega\in\Omega,
\end{equation}and we define
\begin{equation}\label{1}
b(x,\omega)\stackrel{\mathrm{def}}{=} b(\tau_x(\omega)),\qquad\mbox{ for }x\in\bb{R}^d.
\end{equation}
We assume this function to be bounded and Lipschitz continuous,
i.e. there is a constant $\kappa>0$ such that for
all $x,y\in\bb{R}^d, \omega\in\Omega,$
\begin{equation}\label{2}|b(x,\omega)|\leq \kappa,\quad
|b(x,\omega)-b(y,\omega)|\leq \kappa|x-y|,
\end{equation}
where $|\cdot|$ denotes the Euclidean norm in $\bb{R}^d.$ We will further assume finite range dependence for the
environment, that is for a Borel subset $F$ of $\bb{R}^d$ we define the $\sigma$-algebra
\begin{equation}\label{4}
\cal{H}_F\overset{\mbox{\scriptsize{def}}}{=}\sigma(b(x,\omega):x\in F)
\end{equation}
and assume that there is an $R>0$ such that
\begin{equation}\label{5}
\cal{H}_A \mbox{ and } \cal{H}_B \mbox{ are independent whenever }d(A,B)>R,
\end{equation}
where $d(A,B)=\inf\{|x-y|:x\in A,\ y\in B\}.$
We let $(X_t)_{t\geq 0}$ stand for the canonical process on $C(\bb{R}_+,\bb{R}^{d})$, and for $\omega\in\Omega,\ x\in\bb{R}^d$ we denote by $P_{x,\omega}$ the unique solution to the martingale problem attached to $x$ and
\begin{equation}\label{270}
\cal{L}^{\omega}=\frac{1}{2}\Delta+ b(\cdot,\omega)\cdot\nabla,
\end{equation}i.e. the law $P_{x,\omega}$ describes the diffusion in the environment $\omega$ starting at $x$ and is
usually called the {\textit{quenched law.}} We write $E_{x,\omega}$ for the corresponding expectation. We endow the space
$C(\bb{R}_+,\bb{R}^{d})$ with the Borel $\sigma$-algebra $\cal{F}$ and the canonical right-continuous
filtration $(\cal{F}_t)_{t\geq 0}.$ For the study of the asymptotic properties of $X_.$, it is convenient
to introduce the {\it annealed law} which is the semi-direct product measure on $\Omega\times C(\bb{R}_+,\bb{R}^{d})$
defined as
\begin{equation}\label{78}
P_x\stackrel{\mathrm{def}}{=} \bb{P}\times P_{x,\omega}.
\end{equation}
We denote with $E_x$ the corresponding expectation.
Let us mention that the laws $P_x$ typically destroy the Markovian structure but restore a useful stationarity
to the problem.\\
Let us now explain the purpose of this work in more detail. In the first part of this article we prove a law of large
numbers, see Theorem \ref{89}: namely, when $d_1\geq 5,$ we show that
\begin{equation}\label{201}
P_0\mbox{-a.s.,}\qquad \frac{X_t}{t}\longrightarrow v\stackrel{\mathrm{def}}{=} {E^{{P}\times K_0}
\left[\int_0^{T^1}b\left(\chi_u^{},\omega\right)du,\ T^0=0\right]},
\quad\mbox{as }t\to\infty,
\end{equation}
with a deterministic (possibly vanishing) limiting velocity $v.$ The process $\chi_u,\ u\geq 0,$ is defined
on an enlarged probability space, see Theorem \ref{47}, on which the notion of doubly infinite bilateral
cut times ($T^k,\ k\in\bb{Z}$)
for the Brownian part of the diffusion is superimposed, see (\ref{19}). The definition of cut times
involves neither the drift nor the random environment except for the parameters $\kappa$ of (\ref{2})
and $R$ of (\ref{5}).
The law of $(\omega,(\chi_u^{})_{u\geq 0})$ under the measure ${P}\times K_0$ recovers
the {\textit{annealed}} measure $P_0,$ see (\ref{17}), (1) of Theorem \ref{47} and (\ref{190}). \\
In the second part, assuming antipodal
symmetry in the last $d_2$ components of the drift, see (\ref{100}), and when $d_1\geq 7$ (in which case $v=0$), or
when $d_1\geq 13$ without symmetry properties, we derive a functional
central limit theorem under the \textit{quenched law,} see Theorem \ref{200}:
\begin{equation}\label{202}\begin{array}{l}
\mbox{for }\bb{P}\mbox{-a.e. }\omega,\mbox{ under the\, measure }P_{0,\omega},\mbox{ the }C(\bb{R}_+,\bb{R}^d)\mbox{\,-valued
random}\\
\mbox{variables }B^r_.\,\stackrel{\mathrm{def}}{=}\, r^{-1/2}\,(X_{r\cdot}-vr\cdot),\ r>0,
\mbox{ where $v$ corresponds to the}\\[2pt]
\mbox{limiting velocity in (\ref{201}), converge weakly to a Brownian motion with}\\[2pt]
\mbox{deterministic covariance matrix, as $r$ tends to infinity.}
\end{array}\end{equation}
The proofs of the above results are based on the existence of so-called cut times $T^k,\ k\in\bb{Z},$ which are defined
in a similar spirit to \cite{BSZ} and play a role comparable to the regeneration times introduced in \cite{SZNZER}.
The assumption $d_1\geq 5$ enables us to exploit the presence of these cut times and to discover a decoupling effect,
see Proposition \ref{59}. The cut times are in essence defined as follows. In the spirit of the technique applied in
\cite{CZ} for random walks in random environments or in \cite{SHEN2} for the continuous case, we couple our
diffusion at each integer time $n$ with an auxiliary Bernoulli variable $\Lambda_n$ such that when $\Lambda_n=1$,
the distribution of $X_{n+1}$ given $X_n$ does not depend on the environment. We then say that a cut time occurs at the
integer time $n,$ if the Bernoulli variable at time
$n-1$ takes value 1 and the future of the Brownian part of the diffusion, which corresponds to the first $d_1$ components,
after time $n$ stays at a distance at least $2R$ from the past before time $n-1,$ see (\ref{19}) and (\ref{22})
for the exact definition. Due to the finite range dependence, see (\ref{5}), we can then produce a decoupling
in our process, which allows an easy comparison to a process defined on a probability space with an ergodic shift in which
we can embed an additive functional. These considerations essentially reduce
the proof of (\ref{201}) to an application of
Birkhoff's Ergodic Theorem. With the help of a criterion introduced by Bolthausen and Sznitman in \cite{BS}, see Lemma 4, the {\textit{quenched}} invariance principle (\ref{202})
follows from the {\textit{annealed}} versions, see Theorems \ref{103} and \ref{120}, by a variance calculation which involves
a certain control on the intersections of two independent paths.
The main strategy behind the proofs of the {\textit{annealed}}
central limit theorems is to show an {\textit{annealed}} central limit theorem
for a process defined as the polygonal interpolation of an ergodic process
$Z^s_k, k\in\bb{Z},$ see (\ref{43}) and Proposition \ref{88}, which is then rescaled in time and space analogously to the
definition of $B^n_.$ in (\ref{202}) for integers $n\geq 1,$ and which is
comparable to the original diffusion $X_.,$ see Lemma \ref{510} and (\ref{122}). The proof without symmetry
assumption on the drift but $d_1\geq 13$ is more involved and needs an adaptation of Gordin's method, see
for instance the proof of Theorem 7.6 in \cite{D}.\\
Let us mention that the application of Girsanov's formula yields a very handy and reasonably explicit version
of the transition density for the last $d_2$ components of the diffusion in a fixed environment given the
Brownian part (first $d_1$ components), see (\ref{255}). The formula (\ref{255}) involves the Brownian transition
density and the
bridge measure which depend neither on the environment nor on the
first $d_1$ components of the diffusion and hence enables us to inspect
the {\textit{quenched}} transition density directly. This formula is no longer available if one wants to treat more
general diffusions in random environment where the diffusion matrix in the last $d_2$ components becomes
a genuinely environment dependent stationary process. Other methods would be required in this set-up, possibly in the spirit
of filtering theory.\\
Let us now explain how this article is organized. In Section \ref{b} we couple our diffusion with a suitable
sequence of i.i.d. Bernoulli variables, see Theorem \ref{47}. We then define the cut times $T^k,\ k\in\bb{Z},$
see (\ref{19}) and (\ref{22}), and provide the crucial decoupling, see Proposition \ref{59}. Finally we prove a law
of large numbers. Section \ref{c} is dedicated to two central limit
theorems under the annealed measure that are also consequences of the decoupling technique discussed in Section \ref{b}.
The first central limit theorem is proved under a symmetry assumption on the drift and $d_1\geq 7,$
see (\ref{100}), whereas for the second central limit theorem $d_1\geq 13$ is assumed. In Section \ref{d}
we show how one can strengthen the results of Section \ref{c} into central limit theorems under the quenched measure.
Finally, in the Appendix, two multidimensional versions of central limit theorems for martingales are proved. \\
{\textbf{Convention on constants:}} Unless otherwise stated, constants only depend on the quantities $d_1,d_2,\kappa,R.$
In calculations, generic constants are denoted by $c$ and may change from line to line, whereas $c_1, c_2, \ldots$ are constants with fixed values
at their first appearance. With $c(q,\eta)$ we denote constants that depend on the usual parameters $d_1,d_2,\kappa,R$
and additionally on $q$ and $\eta.$\\
{\textbf{Acknowledgements:}} I would like to express my sincere gratitude to my advisor Prof. A.-S. Sznitman for his
support during this work. I also want to thank T. Schmitz, L. Goergen and D. Windisch for many helpful and encouraging
discussions.
\section{Decoupling and a law of large numbers}\label{b}
In this section we will first take advantage of the special structure of the diffusions considered in our model to couple
auxiliary i.i.d. Bernoulli variables $\Lambda_n$ with the diffusion, see Theorem \ref{47}. Under the coupled measure, the
distribution of the diffusion at integer time $n$ will only depend on the position at time $n-1$ and not on the
environment when $\Lambda_{n-1}=1.$ Due to the finite range dependence, see (\ref{5}), we then discover
with the help of cut times, which are introduced in Subsection \ref{241}, a certain decoupling effect
under the annealed law, see Proposition \ref{59}. This finally leads to a law of large numbers,
see Theorem \ref{89}.\\
For a real number $u\in\bb{R}$ we define its integer part as \begin{equation}\label{300}
[u]\stackrel{\mathrm{def}}{=}\sup\{n\in\bb{Z}\ |\ n\leq u\}.\end{equation}
Further we denote the $d_2$-dimensional closed ball of radius $r>0$ centered at $y\in\bb{R}^{d_2}$ with $B^{d_2}_r(y)$ and
write $vol(d_2)$ for the volume of the unit ball $B^{d_2}_1(0)$. For $n\geq 1, z, z'\in\bb{R}^n$ and $s>0$ we introduce the $n$-dimensional Gaussian
kernel\begin{equation}\label{70}
p_{n}(s,z,z')\stackrel{\mathrm{def}}{=} \frac{1}{(2\pi s)^{n/2}}\exp\{-|z-z'|^2/2s\}.
\end{equation}
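As a purely illustrative aside, outside the paper's formal development, the kernel (\ref{70}) can be transcribed directly; the following Python sketch evaluates $p_n(s,z,z')$ and checks the normalization numerically in one dimension.

```python
import math

def p(s, z, zp):
    """The n-dimensional Gaussian kernel p_n(s, z, z') of (70)."""
    n = len(z)
    sq = sum((a - b) ** 2 for a, b in zip(z, zp))
    return (2.0 * math.pi * s) ** (-n / 2.0) * math.exp(-sq / (2.0 * s))

# Normalization check in one dimension: a Riemann sum of p_1(s, 0, .)
# over a wide grid should be close to 1.
s, dx = 2.0, 0.01
mass = sum(p(s, [0.0], [k * dx]) for k in range(-2000, 2001)) * dx
```

At the origin the kernel equals $(2\pi s)^{-n/2}$, which the sketch reproduces to machine precision.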
We denote with $W^{d_1}_0$ the set of all continuous $\bb{R}^{d_1}$-valued
functions on $\bb{R}$ that vanish at $0.$ Furthermore we consider the space
$W_+^{d_2}=C(\bb{R}_+,\bb{R}^{d_2})$ and the canonical coordinate processes $X^1_.,X^2_.$ defined as
\begin{equation}\begin{aligned}\label{79}
X^1_t(w)\stackrel{\mathrm{def}}{=} w(t)\mbox{ for all }t\in\bb{R}\mbox{ and }w\in W^{d_1}_0,\\
X^2_t(u)\stackrel{\mathrm{def}}{=} u(t)\mbox{ for all }t\geq 0\mbox{ and }u\in W^{d_2}_+.\hspace{0.2cm}
\end{aligned}\end{equation}
We endow the space $W^{d_1}_0$ with the $\sigma$-algebra $\cal{W}_0=\sigma(X^1_s, s\in\bb{R})$ and $W^{d_2}_+$ with the
$\sigma$-algebra $\cal{U}=\sigma(X^2_s, s\in\bb{R}_+)$ and the canonical filtration $\cal{U}_t=\sigma(X^2_s,0\leq s\leq t),t\geq 0$
which is neither right-continuous nor complete, in contrast to
${\cal{F}}_t,$ see above (\ref{78}).
$\bar{P}$ denotes the two-sided Wiener measure on $(W^{d_1}_0,\cal{W}_0)$ with $\bar{P}[X_0^1=0]=1.$
We write $\bar{E}$ for the expectation with respect to the measure $\bar{P}.$ On the measurable space
$(W^{d_2}_+,\cal{U})$ we introduce for $y,y'\in\bb{R}^{d_2}$ the Wiener measure $\tilde{P}_y$ with $\tilde{P}_y[X^2_0=y]=1$
and $\tilde{P}_{y,y'},$ the Brownian bridge measure from $y$ to $y'$ on [0,1]. We write $\tilde{E}_y$ and
$\tilde{E}_{y,y'}$ for the corresponding expectations. On the product space $(W^{d_1}_0\times W^{d_2}_+,\cal{W}_0
\otimes\cal{U})$ we define the $\bb{R}^d$-valued process
\begin{equation}\label{81}
\chi_t^{}\stackrel{\mathrm{def}}{=} (X_t^1,X_t^2),\quad t\geq 0.
\end{equation}
For $y\in\bb{R}^{d_2},\omega\in\Omega$ and $w\in W^{d_1}_0$ we denote with $\bar{K}_{y,\omega}(w)$ the probability kernel from
$(W^{d_1}_0,\cal{W}_0)$ to $(W^{d_2}_+,\cal{U})$ defined as the unique solution of the martingale problem starting at time 0
from $y$ and attached to
\begin{equation*}
\cal{L}_t^{w,\omega}=\frac{1}{2}\sum_{i=1}^{d_2}\partial_{ii}^2+\sum_{i=1}^{d_2}b_i^*\big((w(t),\cdot),\omega\big)
\partial_{i},
\end{equation*}see Theorem 6.3.4 in \cite{SV}. For $w\in W^{d_1}_0$ and $\omega\in\Omega$ we define the stochastic exponential
\begin{equation}\label{13}
\cal{E}(w,\omega)\stackrel{\mathrm{def}}{=} \exp\left\{
\int_0^1 b^*((w(s),X^2_s),\omega)dX^2_s-\frac{1}{2}\int_0^1|b^*((w(s),X_s^2),\omega)|^2ds\right\},
\end{equation}which is in $L^1(\tilde{P}_{y,y'})$ for all $y,y'\in\bb{R}^{d_2},$ see the proof of
Theorem 4.1 of \cite{LZ}, and introduce the transition density
\begin{equation}\label{255}
p_{w,\omega}(1,y,y')\stackrel{\mathrm{def}}{=} p_{d_2}(1,y,y')\tilde{E}_{y,y'}\left[\cal{E}(w,\omega)\right],
\end{equation}which is a measurable function of $\omega\in\Omega,w\in W^{d_1}_0$ and $y,y'\in\bb{R}^{d_2},$
see for instance Theorem 44 in \cite{P} on page 158, fulfilling
\begin{equation*}
\bar{K}_{y,\omega}(w)[X^2_1\in G]=\int_{G}dy'p_{w,\omega}(1,y,y'),
\end{equation*}for all Borel sets $G$ in $\bb{R}^{d_2},$ see equation (6.35) in Chapter 5 of \cite{KS} and
Girsanov's Formula in Theorem 6.4.2 of \cite{SV}.
\subsection{The coupling construction}
We are going to enlarge the probability space $(W^{d_1}_0\times W^{d_2}_+,\cal{W}_0\otimes\cal{U},\bar{P}\times\bar{K}_
{y,\omega})$ and provide a coupling of the process $\chi_.^{}$ with a sequence of i.i.d. Bernoulli variables,
see Theorem \ref{47}. Let us begin with an easy fact about the
transition density defined in (\ref{255}), which will be crucial in the construction of our coupling.
{\lemma\label{91}{
Under the assumptions (\ref{77}) and (\ref{2}) on the drift $b(\cdot)$ there is a constant $\varepsilon\in (0,1)$ such that
for all $\omega\in\Omega, w\in W^{d_1}_0, y\in\bb{R}^{d_2}$ and $y'\in B_1^{d_2}(y)$ the following holds:
\begin{equation}\label{92}
p_{w,\omega}(1,y,y')>\frac{2\varepsilon}{vol(d_2)}.
\end{equation}}}
{\textbf{Proof:} By Jensen's inequality and (\ref{2}) we obtain that $p_{w,\omega}(1,y,y')$ is greater than or equal to
\begin{equation}\label{93}
e^{-\kappa^2/2}p_{d_2}(1,y,y')\exp\left\{\tilde{E}_{y,y'}\left[\int_0^1 b^*\big((w(s),X^2_s),\omega\big)dX^2_s
\right]\right\}.
\end{equation}
Note that under the measure $\tilde{P}_{y,y'}$ the process $X_t^2, t\in [0,1],$ is a Brownian bridge from $y$ to $y'$ in time 1 and
therefore satisfies the following stochastic differential equation, see Ex. 5.6.17 (i) and p. 354 in \cite{KS},
\begin{equation}{\begin{cases}
dX^{2}_t = d\beta_t+\frac{y'-X^{2}_t}{1-t}dt,&\quad0\leq t<1;\\
X^2_0=y,\ \tilde{P}_{y,y'}\mbox{-a.s. },\\
\end{cases}}
\end{equation}
for a $d_2$-dimensional standard Brownian motion $\beta_.$.
Thus,
\begin{eqnarray*}
\left|\tilde{E}_{y,y'}\left[\int_0^1 b^*\big((w(s),X^2_s),\omega\big)dX^2_s\right]\right| & = &
\left|\tilde{E}_{y,y'}\left[\int_0^1 b^*\big((w(s),X_s^2),\omega\big)\frac{y'-X_s^2}{1-s}ds\right]\right|\\
& \leq & \tilde{E}_{y,y'}\left[\kappa\int_0^1|\nabla_{X_s^2} \log p_{d_2}(1-s,X_s^2,y')|ds\right]\\
& \leq & c\exp\{-|y-y'|^2/c\}p_{d_2}^{-1}(1,y,y')\stackrel{\mathrm{def}}{=} g_{d_2}(y-y'),
\end{eqnarray*}
where the last inequality follows from a result of \cite{LZ}, see Theorem 2.4.
It is obvious that
\begin{eqnarray*}B^{d_2}_1(0) \ni z \longmapsto g_{d_2}(z)
\end{eqnarray*}
is a bounded map and thus (\ref{93}) is bounded away from 0 for all $y'\in B^{d_2}_1(y).$ This finishes the proof.
\begin{flushright}$\Box$\end{flushright}
Before providing the construction of the coupling, let us introduce some further notation. We denote with
$\Lambda_.=(\Lambda_j)_{j\in\bb{Z}}$ the canonical coordinate process on $\{0,1\}^{\bb{Z}}$ and with $\cal{S}$ the canonical
product $\sigma$-algebra generated by $\Lambda_.$. We write $\lambda_.=(\lambda_j)_{j\in\bb{Z}}$ for an element of
$\{0,1\}^{\bb{Z}}$ and with $\bf{\Lambda}^{\varepsilon},$
where $\varepsilon$ comes from (\ref{92}), we denote the unique
probability measure on $(\{0,1\}^{\bb{Z}},\cal{S})$ under which $\Lambda_.$ becomes a sequence of i.i.d. Bernoulli
random variables with success parameter $\varepsilon.$
We also introduce the shift operators $\{\theta_m:m\in\bb{Z}\}$ and $\{s_t:t\geq 0\}$
operating on $(W^{d_1}_0\times\{0,1\}^{\bb{Z}},\cal{W}_0\otimes\cal{S})$ and $(W^{d_2}_+,\cal{U})$ respectively such that
\begin{eqnarray}
\label{16}\theta_m(w,\lambda_.)&=&(w(m+\cdot)-w(m),\lambda_{m+\cdot}),\\
\label{256}s_t(u)&=&u(t+\cdot).
\end{eqnarray}
Note that the pair $(w,\lambda_.)\in W^{d_1}_0\times\{0,1\}^{\bb{Z}}$ stands for the pair of processes $((w(t))_{t\in\bb{R}},$ $(\lambda_j)_{j\in\bb{Z}})$
whose parameter sets are different. On the product space $(W^{d_1}_0\times\{0,1\}^{\bb{Z}},\cal{W}_0\otimes\cal{S})$
we define the product measure
\begin{equation}\label{17}
P\overset{\mbox{\scriptsize{def}}}{=} \bar{P}{\otimes}{\bf{\Lambda}}^{\varepsilon},
\end{equation}recall that $\bar{P}$ denotes the two-sided Wiener measure on $W^{d_1}_0$
with $\bar{P}[X^1_0=0]=1.$
\pagebreak
{\thm\label{47}{There exists a probability kernel from $\bb{R}^{d_2}\times\Omega\times W^{d_1}_0\times\{0,1\}^{\bb{Z}}$ to
$W^{d_2}_+$ which we denote with $K_{y,\omega}(w,\lambda_.)[O]$ for $y\in\bb{R}^{d_2},\omega\in\Omega,w\in W^{d_1}_0,
\lambda_.\in\{0,1\}^{\bb{Z}}$ and $O\in\cal{U},$ such that:\begin{itemize}
\item[(1)] For $(y,\omega,w,\lambda_.)\in\bb{R}^{d_2}\times\Omega\times W^{d_1}_0\times\{0,1\}^{\bb{Z}},$
under the measure $P\times K_{y,\omega}(w,\lambda_.),$ the process $(\chi_t^{})_{t\geq 0}$ is $P_{(0,y),\omega}$-distributed,
where $(0,y)\in\bb{R}^d,$ and in particular $W_t$ defined by
\begin{equation}\label{150}
W_t\stackrel{\mathrm{def}}{=} \chi_t^{}-(0,y)-\int_0^t b\left(\chi_s,\omega\right)ds,\mbox{ for all }t\geq 0,
\end{equation}
is a $d$-dimensional standard Brownian motion in its own filtration on
$W^{d_1}_0\times\{0,1\}^{\bb{Z}}\times W^{d_2}_+$ endowed with the probability $P\times K_{y,\omega}(w,\lambda_.).$
\item[(2)] For each integer $n\geq 0,\ (y,\omega,w,\lambda_.)\in\bb{R}^{d_2}\times\Omega\times W^{d_1}_0\times\{0,1\}^{\bb{Z}}$
and any bounded measurable function $f$ on $W^{d_2}_+,$ \begin{equation}\label{257}
K_{y,\omega}(w,\lambda_.)\mbox{-a.s., }\ E^{K_{y,\omega}(w,\lambda_.)}\left[f(X^2_.)\circ s_n\,|\,\cal{U}_n\right]=
E^{K_{X^2_n,\hat{\omega}}(\theta_n(w,\lambda_.))}\left[f(X^2_.)\right],
\end{equation}with $\hat{\omega}=\tau_{(w(n),0)}(\omega).$ Moreover, \begin{equation}\label{258}
E^{K_{y,\omega}(w,\lambda_.)}\left[f(X^2_.)\right]=E^{K_{0,\tilde{\omega}}(w,\lambda_.)}\left[f(y+X^2_.)\right],
\end{equation}with $\tilde{\omega}=\tau_{(0,y)}(\omega).$
\item[(3)] For each $(y,\omega,w,\lambda_.)\in \bb{R}^{d_2}\times\Omega\times W^{d_1}_0\times\{0,1\}^{\bb{Z}}$ with
$\lambda_0=1,$ we have that under the probability measure $K_{y,\omega}(w,\lambda_.),\ X^2_1$ is uniformly distributed on
the ball $B^{d_2}_1(y).$
\item[(4)] For each integer $n\geq 0,\ (z_1,z_2)\in\bb{R}^{d_1}\times\bb{R}^{d_2},\ (y,\omega,w,\lambda_.)
\in\bb{R}^{d_2}\times\Omega\times W^{d_1}_0\times\{0,1\}^{\bb{Z}}$ and any bounded measurable function $f$ on
$W_+^{d_2}\times C(\bb{R}_+,\bb{R}^{d}),$ we have that for $\bar{\omega}=\tau_{(z_1,z_2)}(\omega),$
\begin{equation}\label{75}
E^{K_{y,\bar{\omega}}(w,\lambda_.)}\left[f\bigg((X_{\cdot}^{2},b(\chi_{\cdot}^{},\bar{\omega}))_{\cdot\wedge n}\bigg)\right]
\mbox{ is }\cal{H}_{(z_1+w([0,n]))\times\bb{R}^{d_2}}\mbox{-measurable}.
\end{equation}
\end{itemize}}}In order to shorten notation we will usually not write explicitly the dependence of the kernels
on $(w,\lambda_.)\in W^{d_1}_0\times\{0,1\}^{\bb{Z}},$ i.e. for $y\in\bb{R}^{d_2}, \omega\in\Omega$
we write $K_{y,\omega}$ instead of $K_{y,\omega}(w,\lambda_.).$
In this sense for a fixed $y\in\bb{R}^{d_2}$ we define the {\textit{annealed kernel}} from
$(W^{d_1}_0\times\{0,1\}^{\bb{Z}},\cal{W}_0\otimes \cal{S})$
to $(W^{d_2}_+\times \Omega,\cal{U}\otimes{\cal{A}})$ by
\begin{equation}\label{190}K_y\stackrel{\mathrm{def}}{=}\bb{P}\times K_{y,\omega}.\end{equation}
\textbf{Proof:} Given a probability kernel $K^{(\lambda)}_{y,\omega}(w)[O]$ for $O\in\cal{U}_1, w\in W^{d_1}_0, \lambda\in
\{0,1\}, y\in\bb{R}^{d_2}$ and $\omega\in\Omega,$ which will be specified in (\ref{259}) below, there is a unique
probability measure $K_{y,\omega}(w,\lambda_.)$ on
$\cal{U}$ for $w\in W^{d_1}_0, \lambda_.\in\{0,1\}^{\bb{Z}}, y\in\bb{R}^{d_2}$ and $\omega\in\Omega$ such that for integer $m\geq 1,
O\in\cal{U}_1:$\begin{equation}\label{650}
K_{y,\omega}(w,\lambda_.)\mbox{-a.s., }\quad K_{y,\omega}(w,\lambda_.)\left[s_m^{-1}(O)\,|\,\cal{U}_m\right]=
K_{X^2_m,\tau_{(w(m),0)}(\omega)}^{(\lambda_m)}(w(m+\cdot)-w(m))\left[O\right].
\end{equation}
An application of Girsanov's Theorem, see for instance Theorem 6.4.2 of \cite{SV}, and equation (6.35) in
Chapter 5 of \cite{KS}, show that for
$w\in W^{d_1}_0, y\in\bb{R}^{d_2}, \omega\in\Omega$ and $O\in\cal{U}_1,$ see below (\ref{81}) and (\ref{255}),
\begin{eqnarray*}
\bar{K}_{y,\omega}(w)\left[O\right]=\tilde{E}_y\left[\cal{E}(w,\omega),O\right]=\int_{\bb{R}^{d_2}}dy' p_{w,\omega}(1,y,y')
\frac{\tilde{E}_{y,y'}\left[\cal{E}(w,\omega), O\right]}{\tilde{E}_{y,y'}\left[\cal{E}(w,\omega)\right]},
\end{eqnarray*} and so, we define for $\lambda\in\{0,1\}$ and $y'\in\bb{R}^{d_2},$
\begin{equation}\label{28}{h(w,\lambda,y,y',\omega)\stackrel{\mathrm{def}}{=}\begin{cases}
\frac{\bbm{1}_{\{y'\in B^{d_2}_1(y)\}}}{vol(d_2)},
&\mbox{if } \lambda=1;\\
\frac{1}{1-\varepsilon}\Big(p_{w,\omega}(1,y,y')-\varepsilon\frac{\bbm{1}_{\{y'\in B_1^{d_2}(y)\}}}
{vol(d_2)}\Big),
&\mbox{if } \lambda=0,\\
\end{cases}}
\end{equation} and set \begin{equation}\label{259}
K_{y,\omega}^{(\lambda)}(w)\left[O\right]=\int_{\bb{R}^{d_2}}dy'h(w,\lambda,y,y',\omega)\frac{\tilde{E}_{y,y'}\left[
\cal{E}(w,\omega), O\right]}{\tilde{E}_{y,y'}\left[
\cal{E}(w,\omega)\right]}.
\end{equation}
In view of (\ref{92}), this kernel is well defined. To check the measurability of the kernel one uses a
result of \cite{P}, see Theorem 44 on page 158. The same result can also be used to show (\ref{75}). It is then
straightforward to see that the resulting kernel $K_{y,\omega}(w,\lambda_.)$ fulfills (1)-(4).
\begin{flushright}$\Box$\end{flushright}
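The construction rests on the splitting (\ref{28}): the one-step density is written as an $\varepsilon$-mixture of the uniform density on $B^{d_2}_1(y)$ and a remainder, which is a genuine probability density thanks to the lower bound (\ref{92}). The following Python sketch is purely illustrative: a standard Gaussian stands in for the environment-dependent density, with $d_2=1$ and $\varepsilon=0.1$ chosen so that the analogue of (\ref{92}) holds. It checks numerically that both components are probability densities and that the mixture reconstructs the original density.

```python
import math

# Stand-in for p_{w,omega}(1, 0, .): the standard Gaussian density.
# Its minimum on [-1, 1] is phi(1) ~ 0.242, so the analogue of (92)
# holds with eps = 0.1, since 2*eps/vol = 0.1 (vol = 2, the length
# of the unit ball [-1, 1] in dimension one).
def phi(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

EPS, VOL = 0.1, 2.0

def h(lam, yp):
    """The two mixture components of (28), centered at y = 0, d_2 = 1."""
    ball = 1.0 if abs(yp) <= 1.0 else 0.0
    if lam == 1:
        return ball / VOL  # uniform density on the unit ball
    return (phi(yp) - EPS * ball / VOL) / (1.0 - EPS)

# Riemann sums over [-8, 8]: both components should have total mass
# close to 1, and h(0, .) should be nonnegative.
grid = [k * 0.0001 - 8.0 for k in range(160001)]
mass0 = sum(h(0, x) for x in grid) * 0.0001
mass1 = sum(h(1, x) for x in grid) * 0.0001
```

The identity $\varepsilon\,h(1,\cdot)+(1-\varepsilon)\,h(0,\cdot)=p$ holds exactly by construction, which is what makes the coupling with the Bernoulli variables possible.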
{\rem{In the notation ${\bf{\Lambda}}^{\varepsilon,\lambda}[\,\cdot\,]\stackrel{\mathrm{def}}{=}{\bf{\Lambda}}^{\varepsilon}
[\,\cdot\,|\,\Lambda_0=\lambda]$ for
$\lambda\in\{0,1\}$ and $K^{n}_{y,\omega}(w,\lambda_.)\stackrel{\mathrm{def}}{=} K_{y,\tau_{(w(n),0)}(\omega)}(w(n+\cdot)-w(n),\lambda_.)$ for
integer $n\geq 0,\ (y,\omega,w,\lambda_.)\in\bb{R}^{d_2}\times\Omega\times W^{d_1}_0\times \{0,1\}^{\bb{Z}}$ we find as a
consequence of (\ref{257}) and the fact that $K_{y,\omega}[X^2_{\cdot\wedge n}\in\star\,]$ depends on
$w([0,n]),\lambda_0,\ldots,\lambda_{n-1}$
only, that for a fixed Brownian path $w\in W^{d_1}_0,\ {\bf{\Lambda}}^{\varepsilon}\times K_{y,\omega}\mbox{-a.s.,}$
\begin{equation}
{\bf{\Lambda}}^{\varepsilon}\times K_{y,\omega}\left[(X^2_{n+\cdot},\Lambda_{n+\cdot})\in\star\,|\,\cal{U}_n
{\otimes}\sigma(\Lambda_0,\ldots,\Lambda_{n})\right]=
{\bf{\Lambda}}^{\varepsilon,\Lambda_n}\times K^n_{X^2_n,\omega}\left[(X^2_{\cdot},\Lambda_{\cdot})\in\star\,\right]
\end{equation}}}
{\rem{\label{034}
Thanks to Girsanov's Theorem we were able to construct the above kernels quite explicitly, so that we have a very
concrete way to write expectations of $X^2_k,$ for integers $k\geq 1,$ under the quenched kernel $K_{0,\omega}$
using the formulas for the kernels for one time unit, see (\ref{259}). Indeed, applying (\ref{257}) successively for
$n=k-1,\ldots,1,$ we find that for all $(w,\lambda_.)\in W^{d_1}_0\times \{0,1\}^{\bb{Z}},$\begin{equation}\label{030}
E^{K_{0,\omega}}\left[X_{k}^2\right]=E^{K_{0,\omega}}\left[E^{K_{X^2_1,\hat{\omega}_1}\circ\theta_1}\left[\cdots E^{K_{X^2_1,
\hat{\omega}_{k-1}}\circ\theta_{k-1}}\left[
X_1^2\right]\cdots\right]\right],
\end{equation}with $\hat{\omega}_i=\tau_{(w(i),0)}(\omega),\ i=1,\ldots, k-1.$
Using the identities (\ref{650}) and (\ref{259}) in the proof of Theorem \ref{47}
we obtain with $y_0:=0$ that the right-hand side of (\ref{030}) equals\begin{equation}\label{031}
\int_{\bb{R}^{d_2}}\cdots\int_{\bb{R}^{d_2}}dy_1\cdots dy_{k}\prod_{i=0}^{k-1}h\left(w(i+\cdot)-w(i),
\lambda_i,y_i,y_{i+1},\hat{\omega}_i\right)y_{k}.
\end{equation}
}}
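The nested structure of (\ref{031}) is the continuum analogue of computing $E[X_k]$ for a finite Markov chain by iterating its one-step kernel. A minimal Python sketch (illustrative only; the three-state chain and its matrix are invented stand-ins for the one-step kernels $h$) compares the nested sum over paths with forward propagation of the law:

```python
from itertools import product

# A three-state chain standing in for the one-step kernels h(...) of
# (031); the states play the role of the positions y_i.
states = [0.0, 1.0, 2.0]
H = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def mean_by_propagation(k, start=0):
    """E[X_k] via k successive applications of the one-step kernel."""
    dist = [1.0 if i == start else 0.0 for i in range(3)]
    for _ in range(k):
        dist = [sum(dist[i] * H[i][j] for i in range(3)) for j in range(3)]
    return sum(p * y for p, y in zip(dist, states))

def mean_by_nested_sum(k, start=0):
    """The discrete analogue of the nested integral (031):
    sum over all paths of the product of one-step weights."""
    total = 0.0
    for path in product(range(3), repeat=k):
        weight, prev = 1.0, start
        for s in path:
            weight *= H[prev][s]
            prev = s
        total += weight * states[path[-1]]
    return total
```

Both computations agree, mirroring the successive applications of (\ref{257}) in the derivation of (\ref{030}).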
\subsection{The cut times $T^k$}\label{241}
In this subsection we will define the cut times which are at the heart of this work, see (\ref{19}). The assumption
$d_1\geq 5$ becomes crucial at this point since it ensures the existence of these times,
see (\ref{21}), (\ref{22}). For a concise review of the work of Erd\H{o}s on cut times and more recent results see for instance
\cite{L2} and references therein. The consideration of these times is crucial to find certain decoupling effects in the process $\chi_.^{}$
under the annealed measure $P\times K_0,$ see Proposition \ref{59}, providing a comparison of $(\chi_k^{})_{k\geq 1}$ under
$P\times K_0$ with an ergodic sequence. This enables us to deduce rather easily a law of large numbers.\\
For $r\geq 0$ and a subset $A$ of $\bb{R}^{d_1}$ we define $A^r$ as the closed $r$-neighborhood of $A.$ For
$(w,\lambda_.)\in W^{d_1}_0\times\{0,1\}^{\bb{Z}}$ we define the set of cut times as \begin{equation}\label{19}
\cal{C}(w,\lambda_.)\overset{\mbox{\scriptsize{def}}}{=}\left\{n\in\bb{Z}\ \bigg|\ \bigg(X^1_{(-\infty,n-1]}(w)\bigg)^{R}
{\cap}\bigg(X^1_{[n,\infty)}(w)\bigg)^{R}=\emptyset,\ \Lambda_{n-1}(\lambda_.)=1\right\},
\end{equation}and consider the point process on $\bb{Z}$ \begin{equation}\label{20}
N((w,\lambda_.);dk)=\sum_{n\in\bb{Z}}\delta_n(dk)\bbm{1}_{\{n\in\cal{C}(w,\lambda_.)\}},
\end{equation}which is stationary for $\theta_1$ under the measure $P.$ It will turn out that the point process $N$ is
doubly infinite, i.e. the event
\begin{equation}\label{290}
W\stackrel{\mathrm{def}}{=}\left\{(w,\lambda_.)\in W^{d_1}_0\times\{0,1\}^{\bb{Z}}\ \Big|\ N((w,\lambda_.);\bb{Z}_-)=\infty=N((w,\lambda_.);\bb{Z}_+)\right\}
\end{equation}has full $P$-probability, see Lemma \ref{87} below. We will thus restrict $P,$ see (\ref{17}), on the shift-invariant set
$W.$ With $\cal{W}$ we denote the restriction of $\cal{W}_0{{\otimes}}\cal{S}$ to $W.$
{\rem{\label{291}On the event $\Lambda_{n-1}=1,\ n\geq 1,$ we have very good control over the position of $\chi_n^{}$
given $\chi_{n-1},$ without any further information about the environment. Due to finite range dependence, this
will lead to a certain decoupling effect between the environment seen from the process $\chi_.^{}$ after a cut time $n$
and the environment affecting the process $\chi_.^{}$ before time $n-1.$ As a consequence we will find the key identity in law stated in
Proposition \ref{59}.}}
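The transience behind (\ref{21}) below can be probed by simulation. The following Python sketch is purely illustrative, with a finite horizon, a crude discretisation, and the parameters $d_1=5$, $R=1$, starting distance $10$ chosen for the experiment only: it estimates the probability that the $R$-neighbourhoods of two independent $5$-dimensional Brownian paths stay disjoint, in the spirit of (\ref{31}).

```python
import math
import random

random.seed(0)
D1, R = 5, 1.0                 # d_1 = 5 and range R = 1 (illustrative)
STEPS, DT, TRIALS = 100, 0.25, 200
OFFSET = 10.0                  # starting distance between the two paths

def brownian_path(start):
    """A discretised Brownian path with STEPS increments of size DT."""
    pos, pts = list(start), [tuple(start)]
    for _ in range(STEPS):
        pos = [p + random.gauss(0.0, math.sqrt(DT)) for p in pos]
        pts.append(tuple(pos))
    return pts

def neighbourhoods_disjoint(p1, p2):
    # For the discretised paths, the closed R-neighbourhoods are
    # disjoint iff every pairwise distance exceeds 2R.
    return all(math.dist(a, b) > 2.0 * R for a in p1 for b in p2)

hits = sum(
    neighbourhoods_disjoint(
        brownian_path([OFFSET] + [0.0] * (D1 - 1)),
        brownian_path([0.0] * D1))
    for _ in range(TRIALS))
est = hits / TRIALS
```

With $d_1=5$ the difference of the two paths is a transient Brownian motion, so the estimate comes out strictly positive and in fact close to one for this starting distance.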
{\lemma\label{87}($d_1\geq 5$)
\begin{eqnarray}
\label{21}&&P[0\in\cal{C}]\geq c_1(\varepsilon)>0.\\
\label{22}&&P\left[W\right]=1,\quad\mbox{and hence on }W,\quad N((w,\lambda_.);dk)=\sum_{n\in\bb{Z}}\delta_{T^n(w,\lambda_.)}(dk),
\end{eqnarray}
where $T^n,n\in\bb{Z},$ are $\bb{Z}$-valued random variables on $W$ that are increasing in $n$ such that $T^0\leq 0< T^1.$\begin{eqnarray}
\label{23}&&\hat{P}\stackrel{\mathrm{def}}{=} P[\ \cdot\ |\ 0\in\cal{C}]\mbox{ is invariant under }\hat{\theta}_1
\overset{\mbox{\scriptsize{def}}}{=}\theta_{T^1}.\\[12pt]
\label{0200}&&T^{n+m}=T^n+T^m\circ\hat{\theta}_n,\mbox{ for all }n,m\in\bb{Z}.\\[11pt]
\label{24}&&E^{\hat{P}}[T^1]=P[0\in\cal{C}]^{-1}.\\[6pt]
\label{25}&&E^{P}[f]=\frac{E^{\hat{P}}\big[\sum_{k=0}^{T^1-1}f\circ\theta_k\big]}{E^{\hat{P}}[T^1]}
\end{eqnarray}
for any bounded measurable function $f$ on $W.$
\begin{eqnarray}\label{26}
&&P[T^1>n]\leq c_2(\log{n})^{1+\frac{d_1-4}{2}}n^{-\frac{d_1-4}{2}}, \quad n\geq 1,
\end{eqnarray}
for $c_2(\varepsilon)$ a positive constant.
}\\
\textbf{Proof:} Let us define for $w\in W^{d_1}_0,$
\begin{eqnarray*}
&&B^1_t(w)\stackrel{\mathrm{def}}{=} w(-t),\quad t\geq 0,\\
&&B^2_t(w)\stackrel{\mathrm{def}}{=} w(t),\quad t\geq 0.
\end{eqnarray*}
Noting that $B^1_., B^2_.$ and $\lambda_.$ are mutually independent and $B^1_., B_.^2$ are two $d_1$-dimensional
standard Brownian motions on
$(W^{d_1}_0\times\{0,1\}^{\bb{Z}},\cal{W}_0\otimes\cal{S},{P}),$ we find by using the Markov
property of Brownian motion that
\begin{equation*}
P[0\in \cal{C}] = \varepsilon\int_{\bb{R}^{d_1}}p_{d_1}(1,0,x){P}\left[\left(x+B^1_{[0,\infty)}\right)^{R}\cap
\left(B^2_{[0,\infty)}\right)^{R}=\emptyset\right]dx.
\end{equation*}
To prove (\ref{21}) it suffices to show that for some set $A\subseteq\bb{R}^{d_1}$ of positive Lebesgue measure,
\begin{equation}\label{31}
{P}\left[\left(x+B^1_{[0,\infty)}\right)^{R}\cap\left(B^2_{[0,\infty)}\right)^{R}=\emptyset\right]>0
\mbox{\quad for all }x\in A.
\end{equation}
For $i,j\geq 0$ let us define the event
\begin{equation}\label{32}A_{i,j}=\left\{\left(B^1_{[i,i+1]}\right)^{R}\cap\left(B^2_{[j,j+1]}\right)^{R}\ne\emptyset\right\}.\end{equation}
From the Markov property and the independence of $B^1_.$ and $ B_.^2,$ it follows for $(i,j)\ne(0,0)$ that
\begin{equation}\begin{split}
{P}\left[A_{i,j}\right]\ = \ &\int_{\bb{R}^{d_1}}p_{d_1}(i+j,0,x)
{P}\Big[\Big(x+B^1_{[0,1]}\Big)^R\cap \Big(B^2_{[0,1]}\Big)^R\ne\emptyset\Big]dx\\
\ \leq\ & \int_{\bb{R}^{d_1}}p_{d_1}(i+j,0,x){P}
\Big[|x|\leq \sup_{0\leq s\leq 1}|B_s^1|+ \sup_{0\leq s\leq 1}|B_s^2|+2R\Big]dx.
\end{split}\end{equation} Using Fubini and the fact that $p_{d_1}(i+j,0,z)\leq c (i+j)^{-d_1/2}$ we obtain that
\begin{equation}\label{280}
P[A_{i,j}]\leq \frac{c}{(i+j)^{d_1/2}}\left(E^{P}\left[\sup_{0\leq s\leq 1}|B^1_s|^{d_1}\right]+R^{d_1}\right) \leq \frac{c}{(i+j)^{d_1/2}},
\end{equation}which implies, since $d_1\geq 5,$
\begin{equation}\label{34}
\sum_{i,j=0}^{\infty}P[A_{i,j}]<\infty.
\end{equation}
In analogy to the proof of Proposition 3.2.2 in \cite{L}, where intersection probabilities of two independent random
walks are investigated, we call $(i,j)$ a *-last intersection if $A_{i,j}$ occurs while
$A_{i',j'}$ for $i'\geq i, j'\geq j$ with $(i',j')\ne(i,j)$ do not. Because of (\ref{34}) and the Borel--Cantelli Lemma we know that ${P}$-a.e.
pair of paths $(B^1_t(w))_{t\geq 0}, (B^2_t(w))_{t\geq 0}$ has at least one such *-last intersection. Hence
\begin{equation*}
1\leq\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}{P}\left[(i,j)\mbox{ is a *-last intersection}\right],
\end{equation*}
which implies the existence of a pair $(I,J)$ such that
\begin{eqnarray*}
0 & < & {P}\left[(I,J)\mbox{ is a *-last intersection}\right]\leq {P}\left[\left(B^1_{[I+1,\infty)}\right)^{R}\cap
\left(B^2_{[J+1,\infty)}\right)^{R}=\emptyset\right]\\
&= & \int_{\bb{R}^{d_1}}p_{d_1}(I+J+2,0,x){P}
\left[\left(x+B^1_{[0,\infty)}\right)^{R}\cap\left(B^2_{[0,\infty)}\right)^{R}=\emptyset\right]dx,
\end{eqnarray*}where in the last equality we used the Markov property and the independence of $B^1_.$ and $B_.^2.$ Since the integrand is non-negative, this proves
(\ref{31}) and hence (\ref{21}). By a result for simple stationary point processes on $\bb{Z}$ analogous to
Lemma II.12 in \cite{N}, one finds, using the ergodicity of $\theta_1,$ that (\ref{22}) holds true. The
measure $\hat{P}$ corresponds up to a multiplicative constant to the Palm measure attached to the
stationary point process $N,$ see
Chapter II in \cite{N}, in particular (10) on page 317. The statements (\ref{23})-(\ref{25})
are then standard consequences. Note that (\ref{25}) is a consequence of (19) on page 331 of
\cite{N} and that (\ref{24}) follows from (\ref{25}) with the choice $f=\bbm{1}_{\{0\in\cal{C}\}}.$
It remains to show (\ref{26}). For integer $L\geq 1$ and for $j\geq 0,$ we define $$k_j:=1+Lj.$$ For $J\geq 1$ we find
that\\
$P\left[T^1>k_{3J}\right]=P\left[N\left((w,\lambda_.);[1,k_{3J}]\right)=0\right]$
\begin{eqnarray*}
&&\leq P\left[N\left((w,\lambda_.);[1,k_{3J}]\right)=0,\,\bigcap_{j=0}^{3J}\left\{\left(X^1_{(-\infty,k_j-1]}\right)^{R}\cap
\left(X^1_{[k_{j+1},\infty)}\right)^{R}=\emptyset\right\}\right]\\
&&+\ \sum_{j=0}^{3J}P\left[\left(X^1_{(-\infty,k_j-1]}\right)^{R}\cap
\left(X^1_{[k_{j+1},\infty)}\right)^{R}\ne\emptyset\right]\\[11pt]
&&=:a_1+a_2.
\end{eqnarray*}
First we bound $a_2.$ Note that for integer $n\geq 1,$\begin{equation}\label{33}
P\left[\left(B^1_{[0,\infty)}\right)^{R}\cap\left(B^2_{[n,\infty)}\right)^{R}\ne\emptyset\right]\leq
\sum_{i\geq 0,j\geq n}P[A_{i,j}] \overset{(\ref{280})}{\leq} c\sum_{j\geq n}j^{1-\frac{d_1}{2}}\leq
c n^{-\frac{d_1-4}{2}},
\end{equation}and hence by stationarity of Brownian motion,\begin{equation}\label{087}
a_2=(3J+1)P\left[\left(B^1_{[0,\infty)}\right)^{R}\cap\left(B^2_{[L+1,\infty)}\right)^{R}\ne\emptyset\right]\leq
c(3J+1)(L+1)^{-\frac{d_1-4}{2}}.
\end{equation}
Now we turn to the control of $a_1.$ For $j=1,\ldots,3J,$ observe that on the event $\{N((w,\lambda_.);[1,k_{3J}])=0\},$ the
following inclusion holds:
\begin{eqnarray*}
&\left\{\left(X^1_{(-\infty,k_{j-1}-1]}\right)^{R}\cap\left(X^1_{[k_{j},\infty)}\right)^{R}=\emptyset\right\}
\cap
\left\{\left(X^1_{(-\infty,k_j-1]}\right)^{R}\cap\left(X^1_{[k_{j+1},\infty)}\right)^{R}=\emptyset\right\}&\\[10pt]
&{\displaystyle{\subseteq}} \left\{\left(X^1_{[k_{j-1}-1,k_j-1]}\right)^{R}\cap\left(X^1_{[k_{j},k_{j+1}]}\right)^{R}\ne\emptyset\right\}
\cup\bigg\{\lambda_{k_j-1}=0\bigg\}.&
\end{eqnarray*}
We thus find that the event
\begin{equation*}
\bigcap_{j=3,6,\ldots}^{3J}\left\{\left(X^1_{[k_{j-1}-1,k_j-1]}\right)^{R}\cap\left(X^1_{[k_{j},k_{j+1}]}\right)^{R}
\ne\emptyset\right\}
\cup\bigg\{\lambda_{k_j-1}=0\bigg\}
\end{equation*}
occurs, whenever the event considered in $a_1$ occurs. By independence of Brownian increments and the fact that
$\theta_1$ preserves $P$ we obtain that
\begin{equation}\label{35}
a_1\leq P\left[\left\{\left(X^1_{[0,L]}\right)^{R}\cap\left(X^1_{[L+1,2L+1]}\right)^{R}\ne\emptyset\right\}
\cup\bigg\{\lambda_{L}=0\bigg\}\right]^J\leq P\left[0\notin\cal{C}\right]^J.
\end{equation}
Choosing $\gamma$ large enough (depending on $d_1$, $R$ and $\varepsilon$) and setting $J=[\gamma \log n],\
L=[\frac{n}{3J}],$ we obtain (\ref{26}) from (\ref{087}) and (\ref{35}).\begin{flushright}$\Box$\end{flushright}
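{\rem{To spell out the arithmetic behind the above choice of parameters (an expository sketch, with constants suppressed): since $P\left[0\notin\cal{C}\right]<1,$ see (\ref{21}), the choices $J=[\gamma\log n]$ and $L=[\frac{n}{3J}]$ in (\ref{087}) and (\ref{35}) yield
\begin{equation*}
a_1\leq P\left[0\notin\cal{C}\right]^{J}\leq c\, n^{-\gamma\left|\log P\left[0\notin\cal{C}\right]\right|},\qquad
a_2\leq c\,(3J+1)(L+1)^{-\frac{d_1-4}{2}}\leq c\,\gamma\log n \left(\frac{n}{3\gamma\log n}\right)^{-\frac{d_1-4}{2}},
\end{equation*}
so that for $\gamma$ large enough the first term decays faster than any prescribed power of $n,$ while the second term decays like $n^{-\frac{d_1-4}{2}}$ up to logarithmic corrections.}}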
\subsection{A decoupling effect and a law of large numbers}\label{decoupling}
Now we will exploit the presence of cut times, see (\ref{22}), in order to produce decoupling in the
process $\chi_.^{}$ under the measure $P\times K_0,$ see (\ref{190}). For this purpose we introduce the process $Z_.$ living on an
enlarged space, see below (\ref{36}), equipped with a measure $Q^0,$ that uses our previous
coupling construction and the cut times, see (\ref{37}) and (\ref{38}). The idea behind the construction of the
process $Z_.$ is to start after each cut time a fresh path for $X^2_.$ in a new environment, which is chosen
independently from the previous environment, see Remark \ref{291}. We then recover the law of the process $\chi_.^{}$
at integer times under $P\times K_0,$ see Proposition \ref{59}.\\
First we have to introduce some further notation. Consider the product spaces
\begin{equation}\label{36}
\Gamma^0\stackrel{\mathrm{def}}{=} W\times(W_+^{d_2}\times\Omega)^{\bb{N}},\qquad
\Gamma^s\stackrel{\mathrm{def}}{=} W\times(W_+^{d_2}\times\Omega)^{\bb{Z}}
\end{equation}
endowed with their product $\sigma$-algebras, see (\ref{290}) for the definition of $W.$ Recall at this point the definition of $\hat{P},$ see
(\ref{23}), and note that in the sequel all measures denoted with a $\hat{\ }$ correspond, up to normalization, to the Palm measure attached to the point
process $N((w,\lambda_.);dk),$ see (\ref{20}). On the spaces defined in (\ref{36}) we introduce the measures
\begin{equation}\label{37}
Q^0\stackrel{\mathrm{def}}{=} P\times M^0,\qquad \hat{Q}^0\stackrel{\mathrm{def}}{=} \hat{P}\times M^0 ,\qquad Q^s\stackrel{\mathrm{def}}{=} P\times M^s,\qquad \hat{Q}^s\stackrel{\mathrm{def}}{=} \hat{P}\times M^s,
\end{equation}
where $M^0$ and $M^s$ stand for the kernels from $W$ to $(W^{d_2}_{+}\times\Omega)^{\bb{N}}$ respectively from
$W$ to $(W^{d_2}_{+}\times\Omega)^{\bb{Z}}$ defined by
\begin{equation}\label{38}
M^0((w,\lambda_.);d\gamma^0)=K_0((w,\lambda_.);du_0d\omega_0){{\otimes}}\bigotimes_{m\geq 1}
K_0(\theta_{T^m}(w,\lambda_.);du_md\omega_m),
\end{equation}
with $\gamma^0=(u_m,\omega_m)_{m\geq 0}\in (W_+^{d_2}\times\Omega)^{\bb{N}}$ (recall the definition (\ref{190})), and similarly
\begin{equation}\label{39}
M^s((w,\lambda_.);d\gamma^s)=\bigotimes_{m\in\bb{Z}}
K_0(\theta_{T^m}(w,\lambda_.);du_md\omega_m)
\end{equation}
with $\gamma^s=(u_m,\omega_m)_{m\in\bb{Z}}\in (W_+^{d_2}\times\Omega)^{\bb{Z}}.$ On $\Gamma^0$ we define the process
$(Z_t)_{t\geq 0}$ by
\begin{equation}\label{40}
Z_t\stackrel{\mathrm{def}}{=} (X_t^1,Y_t), t\geq 0,
\end{equation}
with $X^1_.$ defined in (\ref{79}) and
\begin{equation}\label{41}
\begin{aligned}
Y_t & \stackrel{\mathrm{def}}{=} u_0(t), && \mbox{ for }0\leq t<T^1,\mbox{ and }\\
Y_{(T^m+t)\wedge T^{m+1}} & \stackrel{\mathrm{def}}{=} Y_{T^m}+u_{m}(t\wedge (T^{m+1}-T^m)), && \mbox{ for }m\geq 1,\ t\geq 0.
\end{aligned}
\end{equation}Note that $Z_0=0,\ Q^0$-a.s. Loosely speaking, the process $Z_.$ is constructed by attaching after each cut time
a new path for the $\bb{R}^{d_2}$-components which evolves in a new
independent environment.
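As a purely illustrative aside, the concatenation mechanism behind (\ref{41}) can be mimicked numerically: the path is glued together from independent segments and restarts from its current value at every cut time. The following sketch is a toy discrete-time, scalar version; all names in it are hypothetical and it is not part of the formal construction.

```python
import random

def concatenate_segments(cut_times, segments):
    """Glue independent path segments into one path Y, in the spirit of
    (41): at each cut time T^m the path restarts from its current value
    and follows the fresh segment u_m."""
    # cut_times = [T^0 = 0, T^1, ..., T^M]; segments[m] is u_m sampled at
    # integer times 0, 1, ..., with segments[m][0] == 0.
    Y = [0.0]
    for m in range(len(cut_times) - 1):
        length = cut_times[m + 1] - cut_times[m]
        start = Y[-1]
        Y.extend(start + s for s in segments[m][1:length + 1])
    return Y

rng = random.Random(0)

def fresh_segment(n):
    # partial sums of n independent Gaussian steps, started at 0
    path, s = [0.0], 0.0
    for _ in range(n):
        s += rng.gauss(0.0, 1.0)
        path.append(s)
    return path

cut_times = [0, 3, 7, 12]
segments = [fresh_segment(6) for _ in range(3)]
Y = concatenate_segments(cut_times, segments)
```

By construction, the increment of $Y$ over the $m$-th block equals the corresponding increment of the fresh segment $u_m,$ which is exactly the property that the decoupling argument exploits.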
Similarly we define the two-sided process $(Z^s_t)_{t\in\bb{R}}$ on $\Gamma^s$ by
\begin{equation}\label{43}
Z_t^s\stackrel{\mathrm{def}}{=} (X^1_t,Y_t^s), t\in\bb{R},
\end{equation}
where for $m\in\bb{Z}, t\in\bb{R}_+,$
\begin{equation}\label{44}\begin{array}{rcl}
Y^s_0&\stackrel{\mathrm{def}}{=} &0,\\
Y^s_{(T^m+t)\wedge T^{m+1}}&\stackrel{\mathrm{def}}{=}& Y_{T^m}^s+u_m(t\wedge (T^{m+1}-T^m)),
\end{array}\end{equation}
and we introduce also the $\Omega$-valued process $(\alpha_t^s)_{t\in\bb{R}}$ by
\begin{equation}\label{45}
\alpha_t^s\stackrel{\mathrm{def}}{=}\tau_{{\scriptscriptstyle{Z^s_t-Z^s_{T^m}}}}(\omega_m),\mbox{ for }T^m\leq t<T^{m+1},m\in\bb{Z},
\end{equation}which plays the role of the ``relevant environment viewed from the particle''.
Note that by definition, $Z^s_0=0,\ Q^s$-a.s.
{\rem{\label{540}Note that by definition we have that under the measure $\hat{Q}^s,$ the joint distribution of $T^1$
and the piece of trajectory $(Z^s_t)_{t\in [0,T^1]}$
is the same as the joint distribution of $T^1$ and $(\chi_t^{})_{t\in [0,T^1]}$ under $\hat{P}\times K_{0},$ see (\ref{79}),
(\ref{81}) for the definition of $\chi_.^{}$ and recall that $\hat{P}[T^0=0]=1.$}}\\
The following proposition yields a crucial identity in law.
{\prop\label{59}
Under the measure $Q^0$, the sequence of random vectors $(Z_n)_{n\geq 0}$ has the same law as $(\chi_n^{})_{n\geq 0}$
under the measure $P\times K_0.$
}\\
{\textbf{Proof:}} The idea of the proof is to fix $(w,\lambda_.)\in W$ and then to show by induction that for all integers
$m\geq 0$ the following statement holds:
\begin{equation}\label{350}\begin{array}{c}
\mbox{For all bounded measurable functions }f^k,k=0,\ldots,m,\mbox{ on }\bb{R}^d,\\[11pt]
E^{K_0}\left[\prod_{k=0}^{m}f^k\left(\chi_k^{}\right)\right]=
E^{M^0}\left[\prod_{k=0}^{m}f^k\left(Z_k\right)\right].
\end{array}\end{equation} Proposition \ref{59} then follows by integrating out with respect to $P,$ see (\ref{37}) for the definition of $Q^0.$
Let us fix $(w,\lambda_.)\in W$ and note that (\ref{350}) holds true for $0\leq m\leq T^1(w,\lambda_.)$ by definition,
see (\ref{38}),(\ref{40}) and (\ref{41}). We assume the above statement to be true for $m$ and show that it must still hold
for $m+1.$ Without loss of generality we can assume that $l=T^N<m+1\leq T^{N+1}$ for some integer $1\leq N\leq m.$ Recall
that $K_0=\bb{P}\times K_{0,\omega},$ see (\ref{190}), and so, applying (\ref{257}) with $n=l$ and then with $n=l-1$ we obtain
that\begin{equation}\label{700}\begin{array}{lll}
E^{K_0}\left[\prod_{k=0}^{m+1}f^k(\chi_k^{})\right]
&=&\bb{E}\times E^{K_{0,\omega}}\bigg[
\prod_{k=0}^{l-1}f^k(\chi_k^{})E^{K_{X^2_{l-1},\hat{\omega}}\circ\theta_{l-1}}\bigg[f^l(w(l),X^2_1)\times\\[11pt]
&&E^{K_{X^2_{1},\tilde{\omega}}\circ\theta_{l}}\bigg[\prod_{k=1}^{m+1-l}f^{l+k}(w(l+k),X_k^2)\bigg]\bigg]\bigg]
\end{array}\end{equation}
with $\hat{\omega}=\tau_{(w(l-1),0)}(\omega)$ and $\tilde{\omega}=\tau_{(w(l),0)}(\omega).$ Since $l=T^N$ is a cut time, we have that $\lambda_{l-1}=1,$ see (\ref{19}), and hence with (3) of Theorem \ref{47} and (\ref{258}) we
find that the right-hand side of (\ref{700}) is equal to\begin{equation}\label{701}\begin{array}{l}
{\displaystyle{\int_{\bb{R}^{d_2}}}}\frac{dy}{vol(d_2)}\bb{E}\bigg[
E^{K_{0,\omega}}\bigg[\prod_{k=0}^{l-1}f^k(\chi_k^{})\bbm{1}_{\{y\in B^{d_2}_1(X^2_{l-1})\}}\bigg]f^l(w(l),y)\times\\[11pt]
E^{K_{0,\bar{\omega}}\circ\theta_l}\bigg[\prod_{k=1}^{m+1-l}f^{l+k}(w(l+k),y+X^2_k)\bigg]
\bigg]
\end{array}\end{equation}with $\bar{\omega}=\tau_{(w(l),y)}(\omega),$ where we also used Fubini's Theorem.
From the definition of the cut times $T^k$,
see (\ref{19}) and Lemma \ref{87}, and the measurability property
(\ref{75}), we see that all the factors under the $\bb{P}$-expectation in (\ref{701}) are independent, see (\ref{5}).
Together with the induction hypothesis and stationarity of the environment (i.e. $\tau_x\bb{P}=\bb{P}$),
we obtain that (\ref{701}) is equal to\begin{equation}\label{702}\begin{array}{l}
{\displaystyle{\int_{\bb{R}^{d_2}}}}\frac{dy}{vol(d_2)}
E^{M^0}\bigg[\prod_{k=0}^{l-1}f^k(Z_k)\bbm{1}_{\{y\in B^{d_2}_1(Y_{l-1})\}}\bigg]f^l(w(l),y)\times\\[11pt]
E^{K_{0}\circ\theta_l}\bigg[\prod_{k=1}^{m+1-l}f^{l+k}(w(l+k),y+X^2_k)\bigg].
\end{array}\end{equation}
Recalling the definitions (\ref{38}), (\ref{40}) and (\ref{41}), we deduce with the help of Fubini's
Theorem that (\ref{702}) equals$$
E^{M^0}\bigg[\prod_{k=0}^{l}f^k(Z_k)E^{K_{0}\circ\theta_l}\bigg[\prod_{k=1}^{m+1-l}f^{l+k}(w(l+k),Y_l+X^2_k)\bigg]\bigg]
=E^{M^0}\bigg[\prod_{k=0}^{m+1}f^k(Z_k)\bigg],
$$where we used that $\theta_l=\theta_{T^N}.$ This finishes the induction step.\begin{flushright}$\Box$\end{flushright}
{\rem{By construction of the probability kernel $K_{y,\omega}(w,\lambda_.),\
y\in\bb{R}^d,\ \omega\in\Omega,\ (w,\lambda_.)\in W,$
see in particular (3) of Theorem \ref{47}, we have that due to $\lambda_{T^k-1}=1,k\geq 1,$ see the
definition of cut times (\ref{19}), the transition from $X^2_{T^k-1}$ to $X^2_{T^k}$ depends only on the
position $X^2_{T^k-1}$ without any additional information on the environment. However, the piece of trajectory
$X^2_t,\ T^k-1\leq t\leq T^k,$ is influenced by the environment, see (\ref{75}).
That is the reason why a decoupling effect concerning the environment, as described by the process $Z_t,t\geq 0,$
under $Q^0,$ can only be observed in the original process
$\chi^{ }_t,t\geq 0,$ under $P\times K_0$ when we ignore the piece of trajectory during one unit of time
just before each cut time.
In fact it can be shown by the same arguments as in the proof of Proposition \ref{59}
that under $P\times K_0,$ the sequence of random variables
$\chi^{ }_{(T^m+\cdot)\wedge(T^{m+1}-1)}, m\geq 0,$ has the same law as
$Z_{(T^m+\cdot)\wedge(T^{m+1}-1)}, m\geq 0,$ under $Q^0,$ where we set $T^0=0.$
}}\\
We now introduce on $\Gamma^s$ a shift $(\Theta_k)_{k\in\bb{Z}}$ via:
\begin{equation}\label{51}
\Theta_k((w,\lambda_.),\gamma^s)=(\theta_k(w,\lambda_.),(u_{m+n},\omega_{m+n})_{m\in\bb{Z}})\mbox{ on }
T^n(w,\lambda_.)\leq k< T^{n+1}(w,\lambda_.),
\end{equation}
with $\gamma^s=(u_m,\omega_m)_{m\in\bb{Z}}\in (W^{d_2}_+\times\Omega)^{\bb{Z}}.$
{\prop{\label{88}For all $\bar{\gamma^s}\in\Gamma^s$ the following identities hold:
\begin{eqnarray}
\label{141} && Z^s_{l+a}(\bar{\gamma^s})-Z^s_l(\bar{\gamma^s})=Z^s_a\circ\Theta_l(\bar{\gamma^s}),\mbox{ for }l\in\bb{Z},
\ a\in\bb{R}_+,\\
\label{52}&& Z^s_n(\bar{\gamma^s})=\sum_{k=0}^{n-1}Z^s_1\circ\Theta_k(\bar{\gamma^s}),\mbox{ for integers }n\geq 1,\\
\label{53}&& \alpha_u^s(\bar{\gamma^s})=\alpha_{u_r}^s\circ\Theta_{[u]}(\bar{\gamma^s}),\mbox{ for }u=[u]+u_r\in\bb{R}.
\end{eqnarray}
Moreover,\begin{eqnarray}\label{54} && \Theta_1\mbox{ preserves }Q^s\mbox{ and in fact }(\Gamma^s,\Theta_1,Q^s)\mbox{ is ergodic,}\\
\label{234}&& E^{Q^s}\left[f\right]=\frac{E^{\hat{Q}^s}\big[\sum_{k=0}^{T^1-1}f\circ\Theta_k\big]}{E^{\hat{P}}\left[T^1\right]},
\mbox{ for any bounded}\\
\nonumber&&\mbox{measurable function $f$ on $\Gamma^s,$}\\[6pt]
\label{252} && Z_1^s\in L^m(Q^s)\mbox{ for all }m\in[1,\infty),\mbox{ when }d_1\geq 5.
\end{eqnarray}
}}
{\textbf{Proof:}} The identities (\ref{141})-(\ref{53}) follow by direct inspection of the definitions
(\ref{43})-(\ref{45}) and (\ref{51}). The proof of (\ref{54}) exactly follows the
proof in the discrete setting (see \cite{BSZ}, pages 534--535). There are slight differences in the notation.
The objects $\Gamma_s, Q_s, M_s, \{\tilde{w}_m\}_{m\in\bb{Z}}$ and $\cal{D}$ in \cite{BSZ} correspond to
our $\Gamma^s, Q^s, M^s, \{u_m\}_{m\in\bb{Z}}$ and $\cal{C}.$ Further one has to read $(w,\lambda_.)$
instead of $w$ in the proof in \cite{BSZ}. Let us point out that the main strategy in showing the ergodicity
of $(\Gamma^s,\Theta_1,Q^s)$ is to prove that $(\Gamma^s\,{{\cap}}\,\{0\in\cal{C}\},\hat{\Theta}_1
\overset{\mathrm{\scriptscriptstyle{def}}}{=}\Theta_{T^1},\hat{Q}^s)$ is ergodic, which is indeed an equivalent statement,
see (34) on page 357 in \cite{N}.
Analogously to (\ref{25}) we find (\ref{234}) as a standard consequence of the first part of the statement in (\ref{54}).
We now come to the proof of
(\ref{252}). We choose $m\in [1,\infty),$
then by definition (\ref{43}),\begin{equation}\label{310}
E^{Q^s}\left[|Z_1^s|^m\right]\leq 2^{m-1}\left\{E^{Q^s}\left[|X_1^1|^m\right]+E^{Q^s}\left[|Y_1^s|^m\right]\right\}.
\end{equation}The first expectation on the right-hand side of (\ref{310}) is finite since $X^1_.$ is a standard
$d_1$-dimensional Brownian motion under $Q^s.$ In the notation (\ref{37}), (\ref{39}) and (\ref{44}) we have that
\begin{eqnarray*}
E^{Q^s}\left[|Y_1^s|^m\right]& = & E^{P}\left[E^{K_0\circ\theta_{T^0}}\left[|u_0(1-T^0)-u_0(-T^0)|^m\right]\right]\\
& = & \sum_{n\geq 0} E^{P}\left[T^0=-n,\ E^{K_0\circ\theta_{-n}}\left[|u_0(1+n)-u_0(n)|^m\right]\right]\\
&\overset{stat.}{=}& \sum_{n\geq 0} E^{\hat{P}}\left[T^1>n,\ E^{K_0}\left[|u_0(1+n)-u_0(n)|^m\right]\right]
P[T^0=0].
\end{eqnarray*}
If we show that the above expectation with respect to the measure $K_0$ is uniformly bounded, then (\ref{252}) follows since
$\sum_{n\geq 0}\hat{P}[T^1>n]=E^{\hat{P}}[T^1]=P[T^0=0]^{-1}<\infty,$ see (\ref{24}) and (\ref{21}).
Indeed, by construction of the kernel $K_0=\bb{P}\times K_{0,\omega},$ see (\ref{190})-(\ref{259}), we find that
for each fixed $(w,\lambda_.)\in W,$\begin{eqnarray}
\nonumber && E^{K_0}\left[\left|u_0(1+n)-u_0(n)\right|^{m}\right]\\
\label{002}&&=
\bb{E}\left[E^{K_{0,\omega}}\left[\int_{\bb{R}^{d_2}}
h(w(n+\cdot)-w(n),\lambda_n,u_0(n),y,\hat{\omega})\left|y-u_0(n)\right|^{m}dy
\right]
\right],
\end{eqnarray}with $\hat{\omega}=\tau_{(w(n),0)}(\omega).$ When $\lambda_n=1,$ we immediately see by the definition of $h,$ see (\ref{28}), that the integral under
the expectation is $K_{0,\omega}$-a.s. bounded by 1. In the other case, when $\lambda_n=0,$ the above integral is
$K_{0,\omega}$-a.s. less than or equal to\begin{equation}\label{003}
\frac{1}{1-\varepsilon}\int_{\bb{R}^{d_2}}p_{w(n+\cdot)-w(n),\hat{\omega}}(1,u_0(n),y)\left|y-u_0(n)\right|^{m}dy+\frac{\varepsilon}{1-\varepsilon}.
\end{equation}A result in \cite{oleinik} concerning exponential bounds
on fundamental solutions of parabolic equations of second order, see Theorem 1 on page 67, tells us that\begin{equation}\label{004}
p_{w(n+\cdot)-w(n),\hat{\omega}}(1,u_0(n),y)\leq c_3(w,\omega)e^{-c_4(w,\omega)|y-u_0(n)|^2},
\end{equation}for some positive constants $c_3(w,\omega),\ c_4(w,\omega).$ A closer look into the proof of the applied result from
\cite{oleinik} reveals that the constants $c_3$ and $c_4$ in (\ref{004}) can indeed be chosen to be independent of the
Brownian path $w$ and the environment $\omega$ due to the uniform bound and the Lipschitz constant of the
drift $b,$ see (\ref{2}). With this in mind, combining (\ref{004}) and (\ref{003}) one easily sees that (\ref{002}) is also uniformly bounded in the case when $\lambda_n=0.$ This finishes the proof of (\ref{252}).\begin{flushright}$\Box$\end{flushright}
Now we are ready to state a law of large numbers when $d_1\geq 5$. For the notation see (\ref{190}), (\ref{23}), (\ref{37}),
(\ref{39}), (\ref{43})-(\ref{45}).
{\thm{\label{89}($d_1\geq 5$)\begin{equation}\label{55}
P_0\mbox{-a.s.,}\qquad \frac{X_t}{t}\underset {t\to\infty}{\longrightarrow} v\stackrel{\mathrm{def}}{=}
\frac{E^{\hat{P}\times K_0}
\left[\int_0^{T^1}b\left(\chi_u^{},\omega\right)du\right]}{E^{\hat{P}}[T^1]}
=E^{Q^s}\left[\int_0^1 b(\alpha_u^s)du\right]=
E^{Q^s}\left[Z_1^s\right].
\end{equation}
}}
{\textbf{Proof:}}
First we prove that\begin{equation}\label{293} P_0\mbox{-a.s.,}\quad \lim_{t\to\infty}\frac{X_t}{t}=
E^{Q^s}\left[Z_1^s\right].\end{equation}
For all $t\geq 1,$
\begin{equation}\label{56}
\left|\frac{X_t}{t}-E^{Q^s}\left[Z_1^s\right]\right|\leq \frac{1}{t}\left|X_t-X_{[t]}\right|
+\left|\frac{X_{[t]}}{[t]}\cdot\frac{[t]}{t}-E^{Q^s}\left[Z_1^s\right]\right|.
\end{equation}
For $\omega\in\Omega,$ under $P_{0,\omega}$ the process $(W^{'}_t)_{t\geq 0}$ defined as
$W'_t\stackrel{\mathrm{def}}{=} X_t-X_0-\int_0^{t}b(X_s,\omega)ds$ is a $d$-dimensional Brownian motion
and $P_{0,\omega}$-a.s.,
\begin{equation}\label{57}
\begin{aligned}
\frac{1}{t}\left|X_t-X_{[t]}\right| & = \frac{1}{t}\left|\int_{[t]}^t b(X_s,\omega)ds
+\int_{[t]}^t dW^{'}_s\right|\\
& \leq \frac{1}{t}\left(\kappa + \left|W^{'}_t-W^{'}_{[t]}\right|\right).
\end{aligned}
\end{equation}A standard application of the Borel--Cantelli Lemma and Bernstein's inequality (the probability that $\sup_{s\in[n,n+1]}|W^{'}_s-W^{'}_n|$ exceeds $\varepsilon n$ decays exponentially in $n^2$ and is hence summable) shows that the last expression in
(\ref{57}) converges $P_{0,\omega}$-a.s. to 0, as $t\to\infty.$ Together with (\ref{56}) we see that to prove (\ref{293})
it suffices to show for integers $n\geq 1$ that $P_0$-a.s., $\frac{1}{n}X_n$ converges to
$E^{Q^s}\left[Z_1^s\right],$ as $n\to\infty.$ As a consequence of (1) of Theorem \ref{47} and
Proposition \ref{59}, we therefore obtain (\ref{293}), once we show that $\frac{Z_n}{n}\longrightarrow
E^{Q^s}[Z^s_1],\ Q^0$-a.s., as $n\to\infty.$ As we will now see, the latter claim follows from the convergence of
$\frac{Z^s_n}{n}$ under $Q^s,$ which is an immediate consequence of (\ref{52}), (\ref{54}), (\ref{252})
and Birkhoff's Ergodic
Theorem. Indeed, we construct an enlarged probability space on which both processes $Z_.$ and $Z_.^s$ can be defined.
Consider the product space
\begin{equation}\label{60}
\Gamma\stackrel{\mathrm{def}}{=} W\times\left(W^{d_2}_+\times\Omega\right)\times
\left(W^{d_2}_+\times\Omega\right)^{\bb{Z}}
\end{equation}
endowed with its product $\sigma$-algebra and the measure
\begin{equation}\label{61}
Q\stackrel{\mathrm{def}}{=} P\times M,
\end{equation}
where $M$ is the probability kernel from $W$ to $\left(W^{d_2}_+\times\Omega\right)\times
\left(W^{d_2}_+\times\Omega\right)^{\bb{Z}}$ defined as
\begin{equation}\label{62}
M((w,\lambda_.);d\gamma)=K_0((w,\lambda_.);du_0^{'}d\omega_0^{'}){{\otimes}}\bigotimes_{m\in\bb{Z}}
K_0(\theta_{T^m}(w,\lambda_.);du_md\omega_m)
\end{equation}
with $\gamma=((u_0^{'},\omega_0^{'}),(u_m,\omega_m)_{m\in\bb{Z}})\in (W^{d_2}_+\times\Omega)\times
(W^{d_2}_+\times\Omega)^{\bb{Z}}.$ With the projections
\begin{equation}\label{505}\begin{array}{l}
\pi^0:((w,\lambda_.),\gamma)\in\Gamma\longmapsto ((w,\lambda_.),(u_0^{'},\omega_0^{'}),(u_m,\omega_m)_{m\geq 1})
\in\Gamma^0,\\
\pi^s:((w,\lambda_.),\gamma)\in\Gamma\longmapsto ((w,\lambda_.),(u_m,\omega_m)_{m\in\bb{Z}})\in\Gamma^s,
\end{array}\end{equation}we find that $Q^0=\pi^0\circ Q$ and $Q^s=\pi^s\circ Q.$ We thus obtain that under $Q,$ the processes
\begin{equation}\label{107}
\tilde{Z}_t\stackrel{\mathrm{def}}{=} Z_t\circ\pi^0, t\geq 0,\mbox{ and }\tilde{Z}_t^s\stackrel{\mathrm{def}}{=} Z_t^s\circ\pi^s, t\in\bb{R},
\end{equation}
defined on $\Gamma$ have the same law as our original processes $Z_.$ and $Z_.^s$ under $Q^0$ and
$Q^s$ respectively. Since $Q$-a.s.,
\begin{equation}\label{63}
\tilde{Z}_{T^1+t}-\tilde{Z}_{T^1}=\tilde{Z}^s_{T^1+t}-\tilde{Z}^s_{T^1}\mbox{\quad for all }t\geq 0,
\end{equation}
it follows that $Q$-a.s.,
\begin{equation}\label{64}
\frac{1}{t}\left|\tilde{Z}_t-\tilde{Z}_t^s\right|\leq \frac{1}{t}\sup_{a\in[0,T^1]}\left|
\tilde{Z}_a-\tilde{Z}_a^s\right|\overset{\scriptscriptstyle{t\to\infty}}{\longrightarrow} 0.
\end{equation}
We thus find that $\frac{\tilde{Z}^s_n}{n}$ and $\frac{\tilde{Z}_n}{n}$ have the same limit $Q$-a.s., which concludes the proof of
(\ref{293}). We now show the second and the third
equality in (\ref{55}). First we show that\begin{equation}\begin{aligned}\label{65}
\lim_{n\to\infty}\frac{E^{\hat{P}\times K_0}\left[\int_0^{T^n}b(\chi_s,\omega)ds\right]}{E^{\hat{P}}\big[T^n\big]} & =
E^{Q^s}\big[Z_1^s\big]
\end{aligned}
\end{equation}
holds, and then we check that the sequence on the left-hand side is in fact constant and equal to $v$.
Since the measure $\hat{P}\times K_0$ is absolutely continuous
with respect to $P\times K_0,$ it follows from (\ref{293}), by using (1) of Theorem \ref{47} and the fact that
$P\times K_0$-a.s., $W_t/t\longrightarrow 0,$ as $t\to\infty,$ that
\begin{equation*}
\hat{P}\times K_0\mbox{-a.s.,}\qquad \frac{1}{t}\int_0^t b(\chi_s,\omega)ds\overset{\scriptscriptstyle{t\to\infty}}
{\longrightarrow} E^{Q^s}\big[Z_1^s\big].
\end{equation*}
By dominated convergence this limit holds true in $L^1(\hat{P}\times K_0)$ as well.
Because of the ergodicity of $(W{{\cap}}\{0\in\cal{C}\},\hat{\theta}_1,\hat{P})$, which is a
consequence of the ergodicity of $(W,\theta_1,P),$ see (34) on page 357 in \cite{N}, we have:
\begin{equation}\label{68}
\frac{T^n}{n}\overset{\scriptscriptstyle{(\ref{0200})}}{=}\frac{1}{n}\sum_{k=0}^{n-1}T^1\circ\hat{\theta}_k
\overset{\scriptscriptstyle{n\to\infty}}{\longrightarrow}
E^{\hat{P}}[T^1]\overset{\scriptscriptstyle{(\ref{21}),(\ref{24})}}
{<}\infty\qquad \hat{P}\mbox{-a.s. and in }L^1(\hat{P}),
\end{equation}
and we find that $\hat{P}\times K_0$-a.s. and in $L^1(\hat{P}\times K_0),$
\begin{equation*}
\lim_{n\to\infty}\frac{1}{n}\int_0^{T^n}b(\chi_s,\omega)ds=
\lim_{n\to\infty}\frac{T^n}{n}\ \frac{1}{T^n}\int_0^{T^n}b(\chi_s,\omega)ds=E^{Q^s}\big[Z_1^s\big]E^{\hat{P}}\big[T^1\big].
\end{equation*}Together with (\ref{0200}) and (\ref{23}), (\ref{65}) now follows.
For a fixed $(w,\lambda_.)\in W\cap\{0\in\cal{C}\}$ and $k\geq 1,$
we find by an application of (\ref{257}) with $n=T^k$ and then with $n=T^k-1$
and similar considerations to those leading to (\ref{701}) that
\begin{eqnarray*}
&& E^{K_0}\left[\int_{T^k}^{T^{k+1}}b(\chi_u,\omega)du\right]= E^{K_0}\left[\int_{0}^{T^{1}\circ\hat{\theta}_k}b(\chi_{T^k+u},\omega)du\right]\\
&&=\int_{\bb{R}^{d_2}}\frac{dy}{vol(d_2)}\bb{E}\bigg[
E^{K_{0,\omega}}\bigg[\bbm{1}_{\{y\in B^{d_2}_{1}(X^2_{T^k-1})\}}\bigg]
E^{K_{0,\bar{\omega}}\circ\hat{\theta}_k}
\bigg[\int_0^{T^1\circ\hat{\theta}_k}b\big((X^1_u\circ\hat{\theta}_k,X^2_u),\bar{\omega}\big)du\bigg]\bigg]
\end{eqnarray*}with $\bar{\omega}=\tau_{(w(T^k),y)}(\omega).$ By an independence argument as the one above (\ref{702}) and
stationarity of the environment we finally obtain that\begin{equation}\label{260}
E^{K_0}\left[\int_{T^k}^{T^{k+1}}b(\chi_u,\omega)du\right]=
E^{K_0}\left[\int_{0}^{T^{1}}b(\chi_u,\omega)du\right]\circ\hat{\theta}_k.
\end{equation}
Recalling that the measure $\hat{P}$ is invariant under $\hat{\theta}_k,$ see (\ref{23}), we thus find\begin{eqnarray*}
E^{\hat{P}\times K_0}\left[\int_0^{T^n}b(\chi_u,\omega)du\right] & = &
\sum_{k=0}^{n-1}E^{\hat{P}\times K_0}\left[\int_{T^k}^{T^{k+1}}b(\chi_u,\omega)du\right]\\
& \overset{(\ref{23}),(\ref{260})}{=} & n E^{\hat{P}\times K_0}\left[\int_0^{T^1}b(\chi_u,\omega)du\right],
\end{eqnarray*}and \begin{equation*}E^{\hat{P}}[T^n]\overset{(\ref{0200})}{=}E^{\hat{P}}\bigg[\sum_{k=0}^{n-1}
T^1\circ\hat{\theta}_k\bigg]\overset{(\ref{23})}{=}
nE^{\hat{P}}[T^1],\end{equation*} which shows that the sequence in (\ref{65}) is indeed constant and equal to
\begin{equation}\label{340}
v\stackrel{\mathrm{def}}{=}\frac{E^{\hat{P}\times K_0}[\int_0^{T^1}b(\chi_u^{},\omega)du]}{E^{\hat{P}}[T^1]}=
\frac{E^{\hat{Q}^s}[\int_0^{T^1}b(Z^s_u,\omega)du]}{E^{\hat{P}}[T^1]}{=}
\frac{E^{\hat{Q}^s}[\int_0^{T^1}b(\alpha_u^s)du]}{E^{\hat{P}}[T^1]},
\end{equation}
where we used Remark \ref{540} in the first equality, and definition (\ref{45}) together with the fact that $Z^s_{T^0}=Z^s_0=0,$
$\hat{Q}^s$-a.s.,
in the second equality in (\ref{340}). The second and the third equality in (\ref{55}) then follow from
(\ref{340}) by applying (\ref{53}) and (\ref{234}) to the last expression in (\ref{340}).
\begin{flushright}$\Box$\end{flushright}
{\rem{The formula for the limiting
velocity, see (\ref{55}), is reasonably explicit and depends only on a finite piece of trajectory up to the first
cut time after time 0 and its first moment.}}
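As an illustrative aside, the renewal-reward structure of the formula for $v$ in (\ref{55}) (mean displacement accumulated over one regeneration cycle, divided by the mean cycle length) can be checked on a toy model. The following sketch is unrelated to the actual diffusion; all names and distributions in it are hypothetical.

```python
import random

rng = random.Random(42)

# Toy renewal-reward model: i.i.d. cycle lengths T_i and cycle rewards
# R_i = c * T_i + centered noise.  The long-run velocity, i.e. total
# reward divided by total elapsed time, then converges to
# E[reward per cycle] / E[cycle length] = c.
c = 0.7
n_cycles = 20000
T = [1 + rng.randint(1, 5) for _ in range(n_cycles)]   # cycle lengths in {2,...,6}
R = [c * t + rng.gauss(0.0, 0.5) for t in T]           # noisy cycle rewards

v_hat = sum(R) / sum(T)                                # renewal-reward estimate
```

If the noise is dropped, so that $R_i=c\,T_i$ exactly, the ratio equals $c$ for every realization; this is the deterministic skeleton behind the cycle decomposition used in the proof of Theorem \ref{89}.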
\section{Two invariance principles under the annealed measure}\label{c}
In this section we provide two central limit theorems under the annealed measure. The first one is shown under a
symmetry assumption on the drift and $d_1\geq 7$, see Theorem \ref{103}, whereas for the second theorem there is no
symmetry assumption but we need to assume that $d_1\geq 13.$\\
For integer $n\geq 1$ we denote by $I_{n}$ the $n\times n$ identity matrix. We further introduce the reflection
\begin{eqnarray*}
\cal{R}:\bb{R}^{d_1}\times\bb{R}^{d_2}&\longrightarrow &\bb{R}^{d_1}\times\bb{R}^{d_2}\\
(x,y) & \longmapsto & (x,-y).
\end{eqnarray*}
For the first central limit theorem we assume the following antipodal symmetry in the last $d_2$ components
of the drift under the measure $\bb{P}:$
\begin{equation}\label{100}
\left(\cal{R}(b(z,\omega))\right)_{z\in\bb{R}^{d}}\mbox{ has the same law as }\left(b(\cal{R}(z),\omega)\right)_{z\in\bb{R}^{d}}.
\end{equation}
Since the first $d_1$ components of the drift $b(\cdot,\cdot)$ vanish, we have that $\cal{R}\left(b(\cdot,\cdot)\right)$ equals $-b(\cdot,\cdot).$
Note that when (\ref{100}) holds, then $\cal{R}(X_.)$ has the same law under $P_0$ as $X_.,$ and $E_0[X_t]=0$ for all $t\geq 0.$
By definition of $(W^{'}_t)_{t\geq 0},$ see below (\ref{56}), we have that $P_{0,\omega}\mbox{-a.s., }X_t=X_0+\int_0^t b(X_s,\omega)ds+W^{'}_t,
$ for each $\omega\in \Omega.$
The strong law of large numbers for Brownian motion, see Problem 9.3 in \cite{KS}, and Theorem \ref{89} imply that
$P_0\mbox{-a.s., }\frac{1}{t}\int_0^t b\left(X_s,\omega\right)ds\longrightarrow v,$
and hence with dominated convergence the convergence holds in $L^1(P_0)$ as well. So $E_0[X_t]=E_0[\int_0^t b(X_s,\omega)ds]=0,$
and we deduce that
the limiting velocity in (\ref{55}) vanishes under the assumption (\ref{100}).\\
{\rem{\label{0150}
A possible example of a drift $b^*(z,\omega),$ with $z\in\bb{R}^d,\omega\in\Omega,$ see (\ref{77}), such that (\ref{100}) is satisfied
can be constructed as follows. We consider a canonical Poisson point process on $\bb{R}^d$ with constant intensity
as the random environment. Pick an $\bb{R}^{d_2}$-valued
measurable function $\varphi(z),z\in\bb{R}^d,$ which is supported in a ball
of radius $R/4$ and such that $\varphi(\cal{R}(z))=-\varphi(z)$ holds for all $z\in\bb{R}^d.$
Then convolve the Poisson point process
with the function $\varphi$ and truncate the resulting function. After smoothing with a
Lipschitz continuous real-valued mollifier $\rho(z), z\in\bb{R}^d,$
supported in a ball of radius $R/4$ and such that $\rho(\cal{R}(z))=\rho(z)$ for all $z\in\bb{R}^d,$ one obtains an
example of a possible $b^*(z,\omega).$
}}\\
For two $C(\bb{R}_+,\bb{R}^d)$-valued sequences $\xi^n_.$ and
$\zeta_.^n,\ n\geq 1,$ respectively defined on the probability spaces $(\Xi_1,\cal{D}_1,\mu_1)$ and $(\Xi_2,\cal{D}_2,\mu_2)$
we say that $(\xi^n_.)_{n\geq 1}$ under $\mu_1$ is weak convergence equivalent (abbreviated by wce) to $(\zeta^n_.)_{n\geq 1}$ under
$\mu_2,$ if the weak convergence of the law of $\xi_.^n$ under $\mu_1$ is equivalent to the weak convergence of the law of
$\zeta_.^n$ under $\mu_2,$ and if both limits are the same, when weak convergence holds true.\\
Before we come to the main results of this section we briefly discuss some integrability properties stated in the
following
{\lemma{\begin{eqnarray}
\label{032}&& \mbox{For all }\eta \geq 1:\ T^1\in L^{\eta}(P)\Leftrightarrow T^1\in L^{\eta+1}(\hat{P}).\\
\label{111}&& T^1\in L^2(\hat{P}),\mbox{ when }d_1\geq 7.\\
\label{600}&& T^1\in L^4({P})\mbox{ and } T^1\in L^5(\hat{P}),\mbox{ when }d_1\geq 13.\\
\label{601}&& \sup_{{{a\in [0,T^1]}}}|\chi_{a}^{}|\in L^2(\hat{P}\times K_0),\mbox{ when }d_1\geq 7.\\
\label{602}&& \sup_{{{a\in [0,T^1]}}}|\chi_{a}^{}|\in L^4(\hat{P}\times K_0),\mbox{ when }d_1\geq 13.
\end{eqnarray}
}}\\
{\textbf{Proof:}} The equivalence (\ref{032}) is an easy consequence of (\ref{25}). With the help of (\ref{26}) we
find that $T^1\in L^1(P)$ when $d_1\geq 7$ and $T^1\in L^4(P)$ when $d_1\geq 13,$ and so (\ref{032}) yields
(\ref{111}) and (\ref{600}). With the help of the integral representation of $\chi_.^{},$ see (\ref{150}),
and of (\ref{2}), and noting that $\hat{P}\times K_{0,\omega}\ll {P}\times K_{0,\omega},$ we see that for each $\omega\in\Omega,\
\hat{P}\times K_{0,\omega}$-a.s.,\begin{equation}\label{603}
\sup_{{{a\in [0,T^1]}}}|\chi_{a}^{}|^2\leq 2\kappa^2 (T^1)^2+2\sup_{{{a\in [0,T^1]}}}|W_{a}|^2.
\end{equation}Taking the $\hat{P}\times K_{0,\omega}$-expectation on both sides of (\ref{603}) we observe that (\ref{601})
follows from (\ref{111}) once we show that, uniformly in $\omega,$\begin{equation}\label{604}
E^{\hat{P}\times K_{0,\omega}}\left[\sup_{{{a\in [0,T^1]}}}|W_{a}|^2\right]\leq c_5(\varepsilon)<\infty.
\end{equation}The left-hand side of (\ref{604}) is equal to\begin{equation}\label{033}
\sum_{n\geq 1}E^{\hat{P}\times K_{0,\omega}}
\left[\sup_{{\scriptscriptstyle{a\in [0,n]}}}|W_{a}|^2, T^1=n\right]
\overset{{\mbox{\tiny{H\"older}}}}{\leq}\sum_{n\geq 1}E^{\hat{P}\times K_{0,\omega}}
\left[\sup_{{\scriptscriptstyle{a\in [0,n]}}}|W_{a}|^{2p}\right]^{1/p}\hat{P}[T^1=n]^{1/q},
\end{equation}with $1<q<\frac{6}{5}$ and $p$ the conjugate exponent. From (\ref{21}) and the definition of $\hat{P},$ see (\ref{23}),
we see that\begin{equation}\label{620}
\hat{P}[\,\cdot\,]\leq c_1(\varepsilon)^{-1}P[\,\cdot\,].
\end{equation}An application of the Burkholder--Davis--Gundy inequality, see p.~166 of \cite{KS}, yields
\begin{equation*}
E^{\hat{P}\times K_{0,\omega}}\left[\sup_{{\scriptscriptstyle{a\in [0,n]}}}|W_{a}|^{2p}\right]^{1/p}\overset{\mbox{\tiny{(\ref{620})}}}{\leq}
c(\varepsilon,q)E^{{P}\times K_{0,\omega}}\left[\sup_{{\scriptscriptstyle{a\in [0,n]}}}|W_{a}|^{2p}\right]^{1/p}\leq c(\varepsilon,q)n\ ,
\end{equation*}and hence the right-hand side of (\ref{033}) is less than or equal to\begin{equation}\label{605}
c(\varepsilon,q)\sum_{n\geq 1}n\hat{P}[T^1=n]^{1/q}
= c(\varepsilon,q)\sum_{n\geq 1}n\hat{P}[T^1=n]^{1/2}\hat{P}[T^1=n]^{1/q-1/2}.
\end{equation}An application of the Cauchy--Schwarz inequality shows that (\ref{605}) is dominated
by\begin{equation}\label{522}
c(\varepsilon,q)\bigg\{E^{\hat{P}}[(T^1)^2]^{1/2}\bigg(\sum_{n\geq 1}\hat{P}[T^1=n]^{2/q-1}\bigg)^{1/2}\bigg\}.
\end{equation}Since $\hat{P}[T^1=n]\leq c_1(\varepsilon)^{-1} P[T^1>n-1]$ holds and $d_1\geq 7,$ one easily checks by using
(\ref{26}) that the sum in (\ref{522}) with $1<q<\frac{6}{5}$ is bounded by a constant $c(\varepsilon,q);$ together with
(\ref{111}), this yields (\ref{604}). The claim (\ref{602}) is shown analogously to (\ref{601}) with $1<q<\frac{6}{5},$
using now (\ref{600}) instead of (\ref{111}).\begin{flushright}$\Box$\end{flushright}
We are now ready to state our first invariance principle.
{\thm{\label{103}Let us assume $d_1\geq 7$ and (\ref{100}). Under the measure $P_0$, the $C(\bb{R}_+,\bb{R}^{d})$-valued
random variables
\begin{equation}\label{900}
B^r_.\stackrel{\mathrm{def}}{=} \frac{1}{\sqrt{r}}X_{r\cdot},\quad r>0,
\end{equation}
converge in law to a $d$-dimensional Brownian motion $B_.$ with covariance matrix
\begin{equation}\label{101}
{A}=E^{\hat{P}}[T^1]^{-1}\left(\begin{array}{cc}
E^{\hat{P}}[T^1]I_{d_1} & 0 \\
0 & E^{\hat{Q}^s}[(Y^s_{T^1})(Y^s_{T^1})^t]
\end{array}\right)\in \bb{R}^{d\times d},
\end{equation} as $r\to\infty.$}}\\
{\rem{\label{302}Before giving the proof of the theorem, let us recall some classical facts about weak convergence on
$C(\bb{R}_+,\bb{R}^d),$ that will be used several times throughout Section \ref{c}. More details on the following results
can be found in Chapter 3 of \cite{EK} and in Section 3.1 of \cite{S}. Let us consider the space $C(\bb{R}_+,\bb{R}^{d})$
and the metric
\begin{equation*}
d(\xi_.,\zeta_.)\stackrel{\mathrm{def}}{=}\sum_{m=1}^{\infty}2^{-m}\sup_{0\leq t\leq m}(|\xi_t-\zeta_t|\wedge 1)\leq 1,\quad \xi_.,\zeta_.\in
C(\bb{R}_+,\bb{R}^d).
\end{equation*}
Then $C(\bb{R}_+,\bb{R}^d)$ with the topology induced by $d(\cdot,\cdot)$ is a Polish space. Suppose $\xi_.^n$ and $\zeta_.^n, n\geq 1,$ are
two $C(\bb{R}_+,\bb{R}^d)$-valued sequences on some probability space $(\Xi,\cal{D},\mu).$ If $d(\xi_.^n,\zeta_.^n)$
converges in $\mu$-probability to 0, then $(\xi^n_.)_{n\geq 1}$ under $\mu$ is wce to $(\zeta^n_.)_{n\geq 1}$ under
$\mu,$ see below Remark \ref{0150} for the meaning of wce. Note that in order to verify the convergence in $\mu$-probability
of the distance $d(\xi_.^n,\zeta_.^n)$ to 0, it suffices to check that for any $T>0,\varepsilon>0,$
\begin{equation}\label{102}
\mu\left(\sup_{0\leq t\leq T}|\xi_t^n-\zeta_t^n|>\varepsilon\right)\overset{\scriptscriptstyle{n\to\infty}}
{\longrightarrow}0.
\end{equation}}}
{\textbf{Proof of Theorem \ref{103}}:}
Observe that Theorem \ref{103} follows if we show that for $n\geq 1$ integer,
\begin{equation}\label{301}B^n_.\longrightarrow B_.\mbox{ in law under }P_0,\mbox{ as }n\to\infty.
\end{equation}
Indeed, (\ref{301}) implies that for $s_n\nearrow\infty,$ the sequence
$[s_n]^{-1/2}X_{[s_n]\cdot},$ and thus also $s_n^{-1/2}X_{[s_n]\cdot},$ converges in law to $B_.,$ recall (\ref{300}).
Therefore, the laws of $s_n^{-1/2}X_{[s_n]\cdot}$ are tight and hence, by Theorem 2.4.10 of \cite{KS}, for all
$T>0,\varepsilon>0,$ there exists an $\eta>0$
such that
\begin{equation*}
\sup_{n\geq 1}P_0\left[\sup_{\substack{\scriptscriptstyle{|s-t|\leq\eta}\\
\scriptscriptstyle{0\leq s,t\leq T}}}\frac{1}{\sqrt{s_n}}|X_{[s_n]t}-X_{[s_n]s}|\geq\varepsilon\right]
\leq\varepsilon.
\end{equation*}
Since $\sup_{t\leq T}|t-\frac{s_n}{[s_n]}t|\overset{\scriptscriptstyle n\to\infty}{\longrightarrow}0,$ we obtain that for large $n,$
\begin{equation*}
P_0\left[\sup_{0\leq t\leq T}\frac{1}{\sqrt{s_n}}|X_{[s_n]t}-X_{s_nt}|\geq\varepsilon\right]\leq \varepsilon.
\end{equation*}
In view of Remark \ref{302}, this shows that $B_.^{s_n}$ converges in law to $B_.$ for any $s_n\nearrow\infty,$ which
proves Theorem \ref{103}. For integer $n\geq 1$ we introduce the following piece-wise linear processes
(recall the definitions (\ref{40}) and (\ref{43})):\begin{equation}\label{09}\begin{array}{rcl}
\bar{B}_.^n & \stackrel{\mathrm{def}}{=} & \frac{1}{\sqrt{n}}\Big\{X_{[n\cdot]}+(n\cdot-[n\cdot])\left(X_{[n\cdot]+1}^{}
-X_{[n\cdot]}^{}\right)\Big\},\\
\bar{Z}_n(\cdot) & \stackrel{\mathrm{def}}{=} & Z_{[n\cdot]}+(n\cdot-[n\cdot])\left(Z_{[n\cdot]+1}^{}-Z_{[n\cdot]}^{}\right),\\
\bar{Z}_n^s(\cdot) & \stackrel{\mathrm{def}}{=} & Z^s_{[n\cdot]}+(n\cdot-[n\cdot])\left(Z^s_{[n\cdot]+1}-Z^s_{[n\cdot]}\right).
\end{array}\end{equation}Note that the processes $\bar{B}^n_.,\frac{1}{\sqrt{n}}\bar{Z}_n(\cdot)$ and $\frac{1}{\sqrt{n}}\bar{Z}_n^s(\cdot)$ are the
polygonal interpolations of $(X_k)_{k\geq 0},\ (Z_k)_{k\geq 0}$ and $(Z_k^s)_{k\geq 0}$ respectively,
which are then rescaled in time and space as in the definition of $B^n_.$ for integers $n\geq 1,$ see (\ref{900}).
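Note in particular that $\bar{B}^n_.$ agrees with $B^n_.$ at the grid times: for integers $k\geq 0,$
\begin{equation*}
\bar{B}^n_{k/n}=\frac{1}{\sqrt{n}}X_k=B^n_{k/n},
\end{equation*}
so that the two processes can only differ through the behavior of $X_.$ between consecutive integer times; it is precisely this difference that is controlled in (\ref{321}) below.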
{\lemma{\label{510}$(B^n_.)_{n\geq 1}$ under $P_0$ is wce to $\left(\frac{1}{\sqrt{n}}\bar{Z}^s_n(\cdot)\right)
_{n\geq 1}$ under $Q^s.$}}\\
{\textsc{Proof:}} As a first step we show that\begin{equation}\label{321}
(B^n_.)_{n\geq 1}\mbox{ under }P_0\mbox{ is wce to }(\bar{B}^n_.)_{n\geq 1}\mbox{ under }P_0.
\end{equation} In view of Remark \ref{302}, see in particular (\ref{102}), it suffices to prove that for any $T>0$ the sequence of random variables
$\sup_{0\leq t\leq T}|B^n_t-\bar{B}^n_t|$ converges in $P_0$-probability to 0, as $n\to\infty.$ Indeed, the process $(W'_t)_{t\geq 0}$
defined below (\ref{56}) is a $d$-dimensional Brownian motion
under $P_{0,\omega},$ and so for $T>0$ and $\varepsilon>0,$ for $n$ large enough, uniformly in $\omega,$
\begin{eqnarray*}
P_{0,\omega}\left[\sup_{0\leq t\leq T}|B^n_t- \bar{B}^n_t|\geq 4\varepsilon\right]&\leq &
P_{0,\omega}\left[\sup_{\substack{\scriptscriptstyle{k=0,\ldots,[Tn]}\\
\scriptscriptstyle{0\leq a\leq 1}}}|X_{k+a}-X_{k}|\geq 2\varepsilon\sqrt{n}\right]\\
&=& P_{0,\omega}\left[\sup_{\substack{\scriptscriptstyle{k=0,\ldots,[Tn]}\\
\scriptscriptstyle{0\leq a\leq 1}}}
\bigg|\int_k^{k+a}b(X_s,\omega)ds+W^{'}_{k+a}-W^{'}_{k}\bigg|\geq 2\varepsilon\sqrt{n}\right]\\
&\leq & c(1+Tn)\exp\{-\frac{\varepsilon^2}{2d^2}n\},\end{eqnarray*}
where we used (\ref{2}) and Bernstein's inequality in the last line, and (\ref{321}) follows. From the identities
in law stated in (1) of Theorem \ref{47} and Proposition \ref{59} we immediately deduce that
\begin{equation}\label{106}
(\bar{B}^n_.)_{n\geq 1}\mbox{ under }P_0\mbox{ is identical in law to }\left(\frac{1}{\sqrt{n}}\bar{Z}_n(\cdot)\right)_{n\geq 1}
\mbox{ under }Q^0.
\end{equation} A combination of (\ref{321}) and (\ref{106}) yields Lemma \ref{510} once we show that
\begin{equation}\label{810}
\left(\frac{1}{\sqrt{n}}\bar{Z}_n(\cdot)\right)_{n\geq 1}\mbox{ under }Q^0\mbox{ is wce to }
\left(\frac{1}{\sqrt{n}}\bar{Z}_n^s(\cdot)\right)_{n\geq 1}\mbox{ under }Q^s.
\end{equation}As in the proof of Theorem \ref{89} we define the processes $\bar{Z}_n(\cdot)$ and $\bar{Z}_n^s(\cdot)$
on a common probability space, see (\ref{60}) and below. Then we can again use the strategy discussed in Remark
\ref{302} to prove (\ref{810}). In the notation (\ref{60})-(\ref{107}), using the fact that
(\ref{63}) holds true, we find for $T>0$ that $Q$-a.s.,\begin{equation}\label{511}
\sup_{0\leq t\leq T}\frac{1}{\sqrt{n}}|\bar{Z}_n(t)\circ\pi^0-\bar{Z}_n^s(t)\circ\pi^s|\leq
\sup_{0\leq t\leq \frac{T^1}{n}}\frac{1}{\sqrt{n}}|\bar{Z}_n(t)\circ\pi^0-\bar{Z}_n^s(t)\circ\pi^s|.
\end{equation}Since $\bar{Z}_n(t)\circ\pi^0-\bar{Z}_n^s(t)\circ\pi^s,\ t\in [0,\frac{T^1}{n}],$ is a continuous
process which is piece-wise linear between the times $0,\frac{1}{n},\frac{2}{n},\ldots,\frac{T^1}{n},$ we find
that the right-hand side of (\ref{511}) is equal to \begin{equation*}
\sup_{k=0,\frac{1}{n},\ldots,\frac{T^1}{n}}\frac{1}{\sqrt{n}}|\bar{Z}_n(k)\circ\pi^0-\bar{Z}_n^s(k)\circ\pi^s|=
\sup_{k=0,\ldots,T^1}\frac{1}{\sqrt{n}}|{Z}(k)\circ\pi^0-{Z}^s(k)\circ\pi^s|,
\end{equation*}which converges $Q$-a.s. to zero, as $n\to\infty,$ since $Q[T^1<\infty]=1,$ see (\ref{290}) and
(\ref{22}). This concludes the proof of (\ref{810}) and thus of Lemma \ref{510}.\begin{flushright}$\Box$\end{flushright}
\pagebreak
Let us define a nonnegative integer-valued function $\varphi(t)$ tending to infinity $P$-a.s., such that
\begin{equation}\label{108}
T^{\varphi(t)}\leq t<T^{\varphi(t)+1}\quad\mbox{for all }t\geq 0,
\end{equation}and
\begin{equation}\label{109}
\Sigma_m\stackrel{\mathrm{def}}{=} Z^s_{T^m}-Z^s_{T^0},\quad m\geq 0.
\end{equation} Furthermore, let us introduce the polygonal interpolation of $\Sigma_m,m\geq 0:$\begin{equation}\label{02}
\bar{\Sigma}_{\cdot}\stackrel{\mathrm{def}}{=}\Sigma_{[\cdot]}+(\cdot-[\cdot])\left(\Sigma_{[\cdot]+1}-\Sigma_{[\cdot]}\right),
\end{equation}and for integer $n\geq 1,$
\begin{equation}\label{800}
\bar{\Sigma}^{\varphi}_{n}(\cdot)\stackrel{\mathrm{def}}{=}\Sigma_{\varphi(n\cdot)}+(n\cdot-[n\cdot])\left(\Sigma_{\varphi(n\cdot+1)}-\Sigma_{\varphi(n\cdot)}\right),
\end{equation}which is constant and equal to $\Sigma_{\varphi(T^k)}=\Sigma_k$ on the time interval
$[\frac{T^k}{n},\frac{T^{k+1}}{n}-\frac{1}{n}),\ k\geq 0,$
and linear on the interval $[\frac{T^{k+1}}{n}-\frac{1}{n},\frac{T^{k+1}}{n}),$ interpolating the points $\Sigma_k$ and $\Sigma_{k+1}.$
{\lemma{\label{530}$\left(\frac{1}{\sqrt{n}}\bar{Z}_n^s(\cdot)\right)_{n\geq 1}$ under $Q^s$ is wce to
$\left(\frac{1}{\sqrt{n}}\bar{\Sigma}_n^{\varphi}(\cdot)\right)_{n\geq 1}$ under $Q^s.$}}\\
{\textsc{Proof:}} In view of Remark \ref{302} it suffices to show that for
any $T>0$ and $\varepsilon>0$ the following probability converges to 0, as $n\to\infty:$
\begin{equation}\label{535}
Q^s\left[\sup_{0\leq t\leq T}\frac{1}{\sqrt{n}}|\bar{Z}^s_n(t)-\bar{\Sigma}^{\varphi}_n(t)|>4\varepsilon\right]
\leq Q^s\Bigg[\underbrace{\sup_{\substack{\scriptscriptstyle{k=0,\ldots,[Tn]+1}\\
\scriptscriptstyle{a\in [0,T^{k+1}-T^k]}}}|Z^s_{T^k+a}-Z^s_{T^k}|>
\varepsilon{\sqrt{n}}}_
{\stackrel{\mathrm{def}}{=} A_n}\Bigg].
\end{equation}
Since the event $A_n$ is invariant under the shift $\Theta_{T^0}$ and the image of $Q^s$ under $\Theta_{T^0}$ is $E^{\hat{Q}^s}
[\, \cdot\, ,T^1]/E^{\hat{P}}[T^1],$ see (\ref{234}), it follows with the help of the Cauchy-Schwarz inequality that
\begin{equation}\label{128}
Q^s[A_n]=Q^s[\Theta_{T^0}^{-1}(A_n)]\leq E^{\hat{P}}[(T^1)^2]^{1/2}\hat{Q}^s[A_n]^{1/2}/E^{\hat{P}}[T^1],
\end{equation}where $E^{\hat{P}}[(T^1)^2]<\infty,$ see (\ref{111}). Thus, Lemma \ref{530} will follow once we show that
\begin{equation}\label{113}
\lim_{n\to\infty}\hat{Q}^s[A_n]=0.
\end{equation}Using (\ref{141}) and the fact that $\hat{\Theta}_k$ preserves $\hat{Q}^s,$ see the proof of Proposition \ref{88},
we find that\begin{equation}\label{533}
\hat{Q}^s[A_n]\leq (2+Tn)\hat{Q}^s\left[\sup_{a\in [0,T^1]}|Z_a^s|>\varepsilon\sqrt{n}\right]\leq\frac{2+Tn}{\varepsilon^2 n}
E^{\hat{Q}^s}\left[\sup_{a\in [0,T^1]}|Z_a^s|^2,\sup_{a\in [0,T^1]}|Z_a^s|>\varepsilon\sqrt{n}\right].
\end{equation}From (\ref{601}) and Remark \ref{540} it follows that the last expression vanishes, as $n\to\infty,$ and hence (\ref{113})
holds true. This finishes the proof of Lemma \ref{530}.\begin{flushright}$\Box$\end{flushright}
\pagebreak
{\lemma{\label{01}
Under $Q^s,\ \left(\frac{1}{\sqrt{n}}\bar{\Sigma}_{n\cdot}\right)_{n\geq 1}$ converges in law to
$\sqrt{E^{\hat{P}}\left[T^1\right]}B_.,$ as $n\to\infty.$ }}
Before proving Lemma \ref{01}, let us explain how we conclude the proof of Theorem \ref{103}. Once we show that
\begin{equation}\label{03}\left(\frac{1}{\sqrt{n}}\bar{\Sigma}_n^{\varphi}(\cdot)\right)_{n\geq 1}
\mbox{ under $Q^s$ is wce to }
\left(\frac{1}{\sqrt{n}}\bar{\Sigma}_{n\cdot/E^{\hat{P}}[T^1]}\right)_{n\geq 1}\mbox{ under }Q^s,
\end{equation}we find with Lemma \ref{01} and a transformation of time that the first sequence in (\ref{03}) converges
weakly to $B_.,$ as $n\to\infty,$ and hence with Lemmas \ref{530} and \ref{510} we deduce that (\ref{301}) holds, which finishes
the proof of Theorem \ref{103}.\\
For the proof of (\ref{03}) first note that since $T^n-T^0$ is invariant under
$\Theta_{T^0}$ we find by similar arguments to those leading to (\ref{128})
that the convergence in (\ref{68}) holds true $Q^s$-a.s. and not only $\hat{P}$-a.s. It follows that $\frac{\varphi(t)}{t}\longrightarrow
E^{\hat{P}}[T^1]^{-1},\ Q^s$-a.s. and hence with Lemma 9.2 on page 572 of \cite{G}:\begin{equation}\label{114}
\mbox{for all }T\geq 0,\ Q^s\mbox{-a.s., }\sup_{0\leq t\leq T}\bigg|\frac{\varphi(nt)}{n}-\frac{t}{E^{\hat{P}}[T^1]}
\bigg|\overset{\scriptscriptstyle{n\to\infty}}{\longrightarrow}0,
\end{equation}and so for $\varepsilon>0,\eta>0,T>0$ and $n$ large enough,\begin{equation}\label{04}
Q^s\left[\sup_{0\leq t\leq T}\left|\frac{\varphi(nt)}{n}-\frac{t}{E^{\hat{P}}[T^1]}\right|\geq \eta\right]\leq \varepsilon.
\end{equation}Furthermore, from Lemma \ref{01} we infer that the laws of $n^{-1/2}\bar{\Sigma}_{n\cdot}$ under
$Q^s$ are tight and hence for all $T>0,\varepsilon >0,$ there exists an $\eta>0$ such that
\begin{equation}\label{05}
\sup_{n\geq 1}Q^s\left[\sup_{\substack{|s-t|\leq \eta\\ 0\leq s,t\leq T}}\frac{1}{\sqrt{n}}
\left|\bar{\Sigma}_{nt}-\bar{\Sigma}_{ns}\right|\geq \varepsilon\right]\leq\varepsilon,
\end{equation}see Theorem 2.4.10 of \cite{KS}. Together with (\ref{04}) we thus obtain that for arbitrary $\varepsilon>0$ and $ T>0,
$\begin{equation}\label{06}
Q^s\left[\sup_{ 0\leq t\leq T}\frac{1}{\sqrt{n}}
\left|\bar{\Sigma}_{nt/E^{\hat{P}}[T^1]}-\bar{\Sigma}_{\varphi(nt)}\right|\geq \varepsilon\right]\leq2\varepsilon
\end{equation}for sufficiently large $n.$ In order to prove (\ref{03}) it suffices to show that for $T>0$ and $\varepsilon>0$ the following probability
tends to zero with $n,$ see Remark \ref{302}:
\begin{eqnarray*}
Q^s\left[\sup_{0\leq t\leq T}\Big|\bar{\Sigma}_{{nt}/{E^{\hat{P}}[T^1]}}-\bar{\Sigma}^{\varphi}_n(t)\Big|>2\varepsilon\sqrt{n}\right]
&\leq& Q^s\left[\sup_{0\leq t\leq T}\Big|\bar{\Sigma}_{{nt}/{E^{\hat{P}}[T^1]}}-\bar{\Sigma}_{\varphi(nt)}\Big|>\varepsilon\sqrt{n}\right]\\
&& +Q^s\left[\sup_{0\leq t\leq T}\Big|\bar{\Sigma}_{\varphi(nt)}-\bar{\Sigma}^{\varphi}_n(t)\Big|>\varepsilon\sqrt{n}\right].
\end{eqnarray*}The first expression on the right-hand side vanishes, as $n\to\infty,$ due to (\ref{06}). Moreover, it can easily be seen
that the second expression on the right-hand side of the above inequality is less than or equal to ${Q}^s[A_n],$ see
(\ref{535}) for the definition of $A_n,$ which tends to 0, as
$n\to\infty$, see (\ref{128}) and (\ref{113}). This finishes the proof of (\ref{03}) and hence of Theorem \ref{103}.\begin{flushright}$\Box$\end{flushright}
{\textsc{Proof of Lemma \ref{01}:}} Let us denote by $\bar{\Sigma}^{d_1}_.$ and $\bar{\Sigma}^{d_2}_.$ the first $d_1$ and the last $d_2$
components of the process $\bar{\Sigma}_.$ defined in (\ref{02}), respectively. Note that Lemma \ref{01} follows from the next two statements:
\begin{equation}\label{118}\begin{array}{l}
\mbox{under }P,\mbox{ the sequence } \frac{1}{\sqrt{n}}\bar{\Sigma}^{d_1}_{n\cdot}, n\geq 1,
\mbox{ converges in law to a $d_1$-dimensional}\\
\mbox{Brownian motion with covariance matrix }E^{\hat{P}}[T^1]I_{d_1},\mbox{ as }n\to\infty,
\end{array}\end{equation}and for ${P}$-a.e. $(w,\lambda_.)\in W,$
\begin{equation}\label{116}\begin{array}{l}
\mbox{under the measure }M^s,\mbox{ the sequence }\ \frac{1}{\sqrt{n}}\bar{\Sigma}^{d_2}_{n\cdot},n\geq 1,
\mbox{ converges in}\\ \mbox{law to a }d_2\, \mbox{-dimensional Brownian motion with covariance matrix}\\
E^{\hat{Q}^s}[(Y^s_{T^1})(Y^s_{T^1})^t]\in\bb{R}^{d_2\times d_2}\mbox{ (independent of }(w,\lambda_.)),\mbox{ as }n\to\infty.\hspace{2.2cm}
\end{array}\end{equation}
Indeed, from (\ref{118}) and (\ref{116}) we can easily deduce that under $Q^s,$ the laws of
$n^{-1/2}\bar{\Sigma}_{n\cdot},$ $n\geq 1,$ are tight,
see Theorems 2.4.7 and 2.4.10 of \cite{KS}. Therefore, in order to prove Lemma \ref{01}, it suffices to show weak convergence of all finite-dimensional
distributions of $n^{-1/2}\bar{\Sigma}_{n\cdot}$ to the finite-dimensional distributions of
${\scriptstyle{\sqrt{E^{\hat{P}}[T^1]}}}B_.,$ as $n\to\infty,$ see Theorem 2.4.15 in \cite{KS}. But this can easily be inferred
from (\ref{118}) and (\ref{116}) with the help of characteristic functions.\\
Now, let us explain how to see (\ref{118}). Similarly to the
proof of (\ref{03}), we first note that for $\varepsilon>0,\eta>0,T>0$ and $n$ large enough,\begin{equation}\label{07}
P\left[\sup_{0\leq t\leq T}\left|\frac{T^{[nt]}}{E^{\hat{P}}[T^1]n}-t\right|\geq \eta\right]\leq \varepsilon.
\end{equation}Furthermore, by definition and self-similarity of Brownian motion we know that under $P,$ the processes
$\ n^{-1/2}X^1_{{\scriptscriptstyle{E^{\hat{P}}[T^1]}}n\cdot}, n\geq 1,$ are distributed as a $d_1$-dimensional Brownian motion with covariance matrix
$E^{\hat{P}}[T^1]I_{d_1},$ and hence their laws are tight. So, we can derive the same estimate as in (\ref{05}) but for the process
$X^1_{{\scriptscriptstyle{E^{\hat{P}}[T^1]}}n\cdot}.$ Together with
(\ref{07}) we thus obtain that\begin{equation}\label{08}
P\left[\sup_{0\leq t\leq T}\left|X^1_{T^{[nt]}}-X^1_{E^{\hat{P}}[T^1]nt}\right|\geq \varepsilon\sqrt{n}\right]\leq 2\varepsilon
\end{equation}for sufficiently large $n.$ Pick $T>0,\ \varepsilon>0$ and then observe that\begin{eqnarray*}
&&P\left[\sup_{0\leq t\leq T}\left|\bar{\Sigma}_{nt}^{d_1}-
X^1_{E^{\hat{P}}[T^1]nt}\right|> 3\varepsilon\sqrt{n}\right]\\
&&\leq P\left[\sup_{0\leq t\leq T}\left|X^1_{T^{[nt]}}-X^1_{E^{\hat{P}}[T^1]nt}\right|> \varepsilon\sqrt{n}\right] +
P\Bigg[\sup_{\substack{\scriptscriptstyle{k=0,\ldots,[Tn]}\\
\scriptscriptstyle{a\in [0,T^{k+1}-T^k]}}}\left|X^1_{T^{k}+a}-X^1_{T^{k}}\right|> \varepsilon\sqrt{n}\Bigg].
\end{eqnarray*}The first term on the right-hand side of the above inequality tends to zero with $n$ due to (\ref{08}), whereas the second term is dominated by
$Q^s[A_n]$ and thus converges to zero as well, as $n\to\infty.$ In view of Remark \ref{302} this proves (\ref{118}).\\
We now prove (\ref{116}). Note that for $\hat{P}$-a.e. $(w,\lambda_.)\in W
{{\cap}}\{0\in \cal{C}\},$ under $M^s,$ the increments $Y^s_{T^n}-Y^s_{T^{n-1}},\ n\geq 1,$ are independent,
see (\ref{39}) and (\ref{44}), with mean zero, which is a consequence of the symmetry assumption (\ref{100}). Indeed, for $\hat{P}$-a.e. $(w,\lambda_.)\in W
{\cap}\{0\in \cal{C}\},$\begin{equation}\label{832}
E^{M^s}[Y^s_{T^n}-Y^s_{T^{n-1}}]=\left(E^{K_0}[X_{T^1}^2]\right)\circ\hat{\theta}_{n-1}\overset{\scriptstyle{(\ref{190})}}{=}
\bb{E}\left[E^{K_{0,\omega}}[X_{T^1}^2] \right]\circ\hat{\theta}_{n-1}.
\end{equation} In Remark \ref{034} we have seen that we can write\begin{equation}\label{035}
E^{K_{0,\omega}}[X_{T^1}^2]=
\int_{\bb{R}^{d_2}}\cdots\int_{\bb{R}^{d_2}}dy_1\cdots dy_{T^1}\prod_{k=0}^{T^1-1}h\left(w(k+\cdot)-w(k),
\lambda_k,y_k,y_{k+1},\hat{\omega}_k\right)y_{T^1},
\end{equation}with $\hat{\omega}_k=\tau_{(w(k),0)}(\omega),\ k=0,\ldots, T^1-1$ and $y_0:=0.$
Let us denote by $h^{(\cal{R})}$ the analogue of $h,$ see (\ref{28}), defined via the transition density
$p^{(\cal{R})}_{w,\omega}(1,\cdot,\cdot)$
for $\omega\in\Omega,\ w\in W^{d_1}_0,$ attached by (\ref{13}) and (\ref{255}) to the drift
$-b^{*}((w(\cdot),-\,\cdot),\omega),$
see also (\ref{77}).
Since $(X^2_t)_{t\in [0,1]}$ under $\tilde{P}_{y_k,y_{k+1}}$ has the same law
as $(-X^2_t)_{t\in [0,1]}$ under $\tilde{P}_{-y_k,-y_{k+1}},$ see below (\ref{79}) for the definition of
$\tilde{P}_{\cdot,\cdot},$
we see that for $k=0,\ldots,T^1-1,$\begin{equation*}
p^{(\cal{R})}_{w,\omega}(1,-y_k,-y_{k+1})=p_{w,\omega}(1,y_k,y_{k+1})
\end{equation*}and hence
\begin{equation}\label{0101}
h^{(\cal{R})}\left(w(k+\cdot)-w(k),\lambda_k,-y_k,-y_{k+1},\hat{\omega}_k\right)=
h\left(w(k+\cdot)-w(k),\lambda_k,y_k,y_{k+1},\hat{\omega}_k\right).
\end{equation} With the help of Theorem 44 on page 158 in \cite{P} one can see that for fixed $(w,\lambda_.)\in W$
and $y_k,y_{k+1}\in\bb{R}^{d_2}$ the expression on the left-hand side of (\ref{0101}) is a measurable function of
$\cal{R}(b(\cal{R}(\,\cdot\,) ,\omega)),$ see (\ref{77}).
So, (\ref{0101}) together with our symmetry assumption (\ref{100})
implies that under $\bb{P},$ the product in (\ref{035}) is identical in law to
\begin{equation*}
\prod_{k=0}^{T^1-1}h\left(w(k+\cdot)-w(k),\lambda_k,-y_k,-y_{k+1},
\hat{\omega}_k\right)
\end{equation*}and so, a transformation of variables $(y_1,\ldots,y_{T^1})\mapsto (-y_1,\ldots,-y_{T^1})$ and
Fubini's Theorem then yield
$E^{K_{0}}[X_{T^1}^2]=-E^{K_{0}}[X_{T^1}^2]=0.$ Hence, in view of (\ref{832}), for $\hat{P}$-a.e. $(w,\lambda_.)\in W
{{\cap}}\{0\in \cal{C}\},$
\begin{equation*}
E^{M^s}[Y^s_{T^n}-Y^s_{T^{n-1}}]=0,
\end{equation*}since $\hat{\theta}_{n-1}$ preserves $\hat{P}.$ Furthermore,\begin{equation}\label{831}\begin{array}{rcl}
E^{\hat{P}}\left[E^{M^s}\left[|Y^s_{T^n}-Y^s_{T^{n-1}}|^2\right]\right]&\overset{\scriptstyle{(\ref{37}),(\ref{43})}}{\leq}&
E^{\hat{Q}^s}\left[|Z^s_{T^n}-Z^s_{T^{n-1}}|^2\right]\\[11pt]
&\overset{\scriptstyle{(\ref{141})}}{=}&E^{\hat{Q}^s}\left[|Z^s_{T^1}|^2\circ\hat{\Theta}_{n-1}\right]<\infty,
\end{array}\end{equation}since $\hat{\Theta}_{n-1}$ preserves $\hat{Q}^s$ and because of the integrability property (\ref{601}) and Remark \ref{540}.
In particular it follows that for $\hat{P}$-a.e. $(w,\lambda_.)\in W{{\cap}}\{0\in \cal{C}\},$\begin{equation}
\label{0100}
Y^s_{T^n}-Y^s_{T^{n-1}}\in L^2(M^s(w,\lambda_.)).
\end{equation}
Note that $(W{{\cap}}\{0\in\cal{C}\},\hat{\theta}_1,\hat{P})$ is ergodic as a consequence of the
ergodicity of $(W,{\theta}_1,{P}),$ see (34) on page 357 in \cite{N}.
An application of an invariance principle for vector-valued, square-integrable martingale
differences, see Theorem \ref{080}, shows that for $\hat{P}$-a.e.
$(w,\lambda_.)\in W{{\cap}}
\{0\in\cal{C}\},$ under the measure $M^s(w,\lambda_.),$ the $C(\bb{R}_+,\bb{R}^{d_2})$-valued random variables $n^{-1/2}{\bar{\Sigma}}_{n\cdot}^{d_2},\ n\geq 1$
converge weakly to a $d_2$-dimensional Brownian motion with covariance matrix as in (\ref{116}), as $n\to\infty.$
Note that in fact under the measure $M^s(w,\lambda_.)$ the increments $Y^s_{T^n}-Y^s_{T^{n-1}},\
n\geq 1,$ are independent and hence the standard functional central limit theorem for independent increments,
which is an immediate consequence of Theorem \ref{080}, could be applied. The ergodicity of $(W{{\cap}}\{0\in\cal{C}\},\hat{\theta}_1,\hat{P})$ and the integrability
property (\ref{831}) are used to show that the conditions (\ref{081}) and (\ref{082}) are satisfied.
Since for a continuous, bounded function $f$ on $W^{d_2}_+,$ the random variable
{$E^{M^s}[f({n^{-1/2}}\bar{\Sigma}_{n\cdot}^{d_2})]$} is invariant under $\theta_{T^0}$ and since the image of
$P$ under $\theta_{T^0}$ is absolutely continuous with respect to $\hat{P},$ see (\ref{25}),
it follows that (\ref{116}) holds in fact for $P$-a.e. $(w,\lambda_.)\in W.$ This finishes the proof of Lemma \ref{01}.
\begin{flushright}$\Box$\end{flushright}
The next theorem shows that our model also contains examples of diffusions in random environment with possibly ballistic
behavior when $d_1\geq 13,$ satisfying an invariance principle, recall (\ref{77})-(\ref{78}).
{\thm{\label{120} Let $d_1\geq 13$ and recall the definition of $v$ in (\ref{55}). Under the measure $P_0,$
the $C(\bb{R}_+,\bb{R}^{d})$-valued random variables
\begin{equation*}
B^r_.\stackrel{\mathrm{def}}{=}\frac{1}{\sqrt{r}}\left(X_{r\cdot}-vr\cdot\right),\quad r>0,
\end{equation*}converge in law to a $d$-dimensional Brownian motion $B_.$ with covariance matrix $A$ given
in (\ref{125}), as $r\to\infty.$}}\\
{\textbf{Proof:}} As in the proof of Theorem \ref{103} we can show that it suffices to prove that for
$n\geq 1$ integer, \begin{equation}\label{801}
B^n_.\longrightarrow B_. \mbox{ in law under }P_0,\mbox{ as }n\to\infty,
\end{equation}see (\ref{301}). By similar arguments as in the proof of Lemma \ref{510}
we find that \begin{equation}\label{060}\begin{array}{l}
\left(\bar{B}_.^n-v\sqrt{n}\,\cdot\right)_{n\geq 1}\mbox{ under }P_0
\mbox{ is wce to }\left(\frac{1}{\sqrt{n}}\left(\bar{Z}^s_n(\cdot)-vn\cdot\right)\right)_{n\geq 1}\mbox{ under }Q^s,
\end{array}\end{equation}see (\ref{106}) and (\ref{810}). Together with (\ref{321}) we see that (\ref{801})
follows once we show that\begin{equation}\label{122}\begin{array}{l}
\frac{1}{\sqrt{n}}\left(\bar{Z}^s_n(\cdot)-vn\cdot\right)\longrightarrow B_.\mbox{ in law under }Q^s,\mbox{ as }n\to\infty,
\end{array}\end{equation}
see (\ref{09}) for the definition of $\bar{Z}^s_n(\cdot)$ and $\bar{B}_.^n.$ In the notation\begin{equation}\label{121}
\cal{Z}\stackrel{\mathrm{def}}{=} Z^s_1-E^{Q^s}\left[Z^s_1\right]=Z^s_1-v,\quad\mbox{ we have that }\quad{Z}^s_n-vn
\overset{\scriptstyle{(\ref{52})}}{=}\sum_{k=0}^{n-1}\cal{Z}\circ\Theta_k,
\end{equation}recall (\ref{55}). We know from (\ref{252}) that\begin{equation}\label{040}
\cal{Z}\in L^m(Q^s)\qquad\mbox{ for all } m\in [1,\infty).
\end{equation}
For integers $k\geq 0,$ on the space $\Gamma^s$ we introduce the filtration\begin{equation}\label{123}
\cal{G}_k\stackrel{\mathrm{def}}{=}\sigma\left(Z^s_{n+1}-Z^s_n,\mbox{ for all } n\in\bb{Z}\mbox{ with }n<k\right).
\end{equation}The identity (\ref{141}) implies that for $k\geq 0:$\begin{equation}\label{084}
f\mbox{ is }\cal{G}_0\mbox{-measurable}\Longleftrightarrow f\circ\Theta_k\mbox{ is }\cal{G}_k\mbox{-measurable,}
\end{equation}and thus by stationarity, see (\ref{54}), we have that for $g\in L^1(Q^s),$\begin{equation}\label{092}
Q^s\mbox{-a.s. }E^{Q^s}\left[g\circ\Theta_k\,|\,\cal{G}_k\right]=E^{Q^s}\left[g\,|\,\cal{G}_0\right]\circ\Theta_k.
\end{equation}
The following adaptation of Gordin's method will play the key role in the proof.
{\lemma{\label{124}There is a $G\in L^2(\Gamma^s,\cal{G}_0,Q^s)$ such that\begin{equation*}
M_n\stackrel{\mathrm{def}}{=} G\circ\Theta_n-G+Z_n^s-vn=\sum_{k=0}^{n-1}\left(G\circ\Theta_1-G+\cal{Z}\right)\circ\Theta_k,\ n\geq 0,
\mbox{ is a }(\cal{G}_n)\mbox{-martingale.}
\end{equation*}}}Before we prove Lemma \ref{124}, let us explain how we conclude the proof of Theorem \ref{120} from it.
Using stationarity of $\Theta_1$ under $Q^s,$ see (\ref{54}), and applying Chebyshev's inequality we obtain
for $T>0,\varepsilon>0,$
\begin{equation*}
Q^s\left[\sup_{\scriptscriptstyle{k=0,\ldots,[Tn]+1}}|G\circ\Theta_k|>\varepsilon\sqrt{n}\right]\leq
\frac{Tn+2}{\varepsilon^2 n}E^{Q^s}\left[|G|^2,|G|>\varepsilon\sqrt{n}\right]
\overset{\scriptscriptstyle{n\to\infty}}{\longrightarrow}0,
\end{equation*}
so that, in view of Remark \ref{302}, one easily finds that
$n^{-1/2}\left(\bar{Z}^s_n(\cdot)-vn\cdot\right)_{n\geq 1}$ under $Q^s$ is wce to the rescaled polygonal
interpolation of the process $M_k, k\geq 1,$ defined analogously to $\bar{B}_.^n$ in (\ref{09}), under $Q^s.$
Since $M_n$ is a martingale with ergodic, square-integrable increments, it follows from Theorem \ref{140}, see Appendix,
that under the measure $Q^s,$ the rescaled polygonal interpolation of $M_k, k\geq 1,$ converges in law to a $d$-dimensional Brownian motion
with covariance matrix\begin{equation}\label{125}
A=E^{Q^s}\left[\left(G\circ\Theta_1-G+\cal{Z}\right)\left(G\circ\Theta_1-G+\cal{Z}\right)^t\right],
\end{equation}
as $n\to\infty.$ This concludes the proof of (\ref{122}).
\begin{flushright}$\Box$\end{flushright}
{\textsc{Proof of Lemma \ref{124}:} First we explain how our
claim follows once we show that\begin{equation}\label{126}
\sum_{k\geq 0}\big{\|}E^{Q^s}\left[\left(H\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\Theta_k\ \big |\ \cal{G}_0\right]\big{\|}_{2}
<\infty,
\end{equation}
where in the previous notation, see (\ref{121}),\begin{equation}\label{127}
H\stackrel{\mathrm{def}}{=}\sum_{k=0}^{T^1-1}\cal{Z}\circ\Theta_k=\sum_{k=0}^{T^1-1}Z_1^s\circ\Theta_k-vT^1\overset{(\ref{52})}{=}
Z_{T^1}^s-vT^1.
\end{equation}
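The last equality in (\ref{127}) is the identity (\ref{121}) evaluated at the random time $T^1:$ reading (\ref{52}) as the shift relation $Z^s_{k+1}-Z^s_k=Z^s_1\circ\Theta_k,\ k\geq 0,$ and using that $Z^s_0=0,$ one finds by telescoping
\begin{equation*}
\sum_{k=0}^{T^1-1}Z^s_1\circ\Theta_k=\sum_{k=0}^{T^1-1}\left(Z^s_{k+1}-Z^s_k\right)=Z^s_{T^1}.
\end{equation*}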
Note that $H\in L^2(Q^s).$ Indeed, \begin{eqnarray*}
E^{Q^s}\left[|H|^2\right]&\leq &E^{Q^s}\left[(T^1)^2\sum_{k=0}^{T^1-1}|\cal{Z}\circ\Theta_k|^2\right]=
\sum_{n\geq 1}n^2\sum_{k=0}^{n-1}E^{Q^s}\left[|\cal{Z}\circ\Theta_k|^2, T^1=n\right]\\
&\overset{{{\mbox{{\tiny{H\"older}}}}}}{\leq} &
\sum_{n\geq 1}n^2\sum_{k=0}^{n-1}E^{Q^s}\left[|\cal{Z}\circ\Theta_k|^{2p}\right]^{1/p}
P[T^1=n]^{1/q},\end{eqnarray*}with $1<q<9/8$ and $p$ the conjugate exponent. Since $P[T^1=n]\leq
P[T^1> n-1]$ and $E^{Q^s}[|\cal{Z}\circ\Theta_k|^{2p}]\leq c(p)<\infty$ by (\ref{54}) and (\ref{040}), we conclude
with the help of (\ref{26})
that the right-hand side of the above inequality is finite when $d_{1}\geq 13.$\\
For $m\geq 1$ we define
\begin{equation}\label{085}
G^m\stackrel{\mathrm{def}}{=} E^{Q^s}\left[H\ |\ \cal{G}_0\right]+\sum_{k=1}^{m-1}E^{Q^s}\left[\left(H\bbm{1}_{\{0\in\cal{C}\}}\right)
\circ\Theta_k\ |\ \cal{G}_0\right].
\end{equation}
Then $G^m$ converges in $L^2(Q^s)$ towards a $G\in L^2(\Gamma^s,\cal{G}_0,Q^s)$ because of (\ref{126}). Moreover, for
$m\geq 1$ we define $N_m=N((w,\lambda_.);[1,m-1])+1$ in the notation of (\ref{20}), so that\begin{equation}\label{330}
\sum_{k=0}^{T^{N_m}-1}\cal{Z}\circ\Theta_k=H+\sum_{k=1}^{m-1}(H\bbm{1}_{\{0\in\cal{C}\}})\circ\Theta_k.
\end{equation}By stationarity, see (\ref{54}), we find that for $n\geq 0,$
\begin{equation}\label{129}
G\circ\Theta_n=\lim_{m\to\infty}G^m\circ\Theta_n\overset{\scriptstyle{(\ref{092}),(\ref{330})}}{=}
\lim_{m\to\infty}\ E^{Q^s}\left[\left(\sum_{k=0}^{T^{N_m}-1}\cal{Z}\circ\Theta_k\right)\circ\Theta_n
\ \Bigg|\ \cal{G}_n\right],
\end{equation}where the above limits are in $L^2(\Gamma^s,\cal{G}_n,Q^s).$
This yields for $n\geq 1,$\\[11pt]
$E^{Q^s}\left[M_{n+1}-M_n\ |\ \cal{G}_n\right]$
\begin{eqnarray*}
&&=\lim_{m\to\infty}\ E^{Q^s}\left[\left(\sum_{k=0}^{T^{N_m}-1}\cal{Z}\circ\Theta_k\right)\circ\Theta_{n+1}+\cal{Z}\circ\Theta_n
-\left(\sum_{k=0}^{T^{N_m}-1}\cal{Z}\circ\Theta_k\right)\circ\Theta_{n}\ \Bigg|\ \cal{G}_n\right],
\end{eqnarray*}where the limit is in $L^2(\Gamma^s,\cal{G}_n,Q^s).$ With the observation that\begin{equation*}
T^{N_m}\circ\theta_1=\left\{\begin{array}{ll}
T^{N_m}-1,& \mbox{ on }\{m\notin\cal{C}\},\\
T^{N_m+1}-1,& \mbox{ on }\{m\in\cal{C}\},
\end{array}\right.
\end{equation*}
we find that the quantity under the conditional expectation is equal to
\begin{equation*}
\left(\sum_{k=0}^{T^{N_m}\circ\theta_1}\cal{Z}\circ\Theta_k-\sum_{k=0}^{T^{N_m}-1}\cal{Z}\circ\Theta_k\right)\circ\Theta_n=
\left(\bbm{1}_{\{m\in\cal{C}\}}H\circ\Theta_m\right)\circ\Theta_n=\left(H\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\Theta_{n+m}.
\end{equation*}As an $L^2$-limit,
\begin{equation*}
\lim_{m\to\infty}\ E^{Q^s}\left[\left(H\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\Theta_{n+m}
\,\Bigg|\,\cal{G}_n\right]\overset{\scriptstyle{(\ref{092})}}{=}
\lim_{m\to\infty}\ E^{Q^s}\left[\left(H\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\Theta_{m}
\,\Bigg|\,\cal{G}_0\right]\circ\Theta_n\overset{\scriptstyle{(\ref{126})}}{=}0,
\end{equation*}
thus proving that $M_n$ is a $(\cal{G}_n)$-martingale.\\
It now remains to prove (\ref{126}). We consider $B\in L^2(\Gamma^s,
\cal{G}_0,Q^s)$ with $L^2$-norm $\|B\|_2= 1.$ Note that $B$ can be considered as a function of $(w,\lambda_.)$ and
$(u_m,\omega_m)_{m\leq 0}.$ Then it follows that for fixed $(w,\lambda_.)\in W,$ see (\ref{290}), the random vectors $B$ and
\begin{equation*}
\sum_{k=T^m}^{T^{m+1}-1}\cal{Z}\circ\Theta_k=(X^1_{T^{m+1}}-X^1_{T^m},u_m(T^{m+1}-T^m))-v(T^{m+1}-T^m)\mbox{ for }m\geq 1,
\end{equation*}
are independent under the measure $M^s,$ see (\ref{39}). With these considerations in mind we find that for integer
$p\geq 1,$
\begin{equation}\label{088}\begin{array}{rcl}
E^{Q^s}\left[\left(H\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\Theta_p \cdot B\right]
&=&\sum_{m\geq 1}E^{Q^s}\left[\left(\sum_{k=T^m}^{T^{m+1}-1}\cal{Z}\circ\Theta_k\right)\cdot B,T^m=p\right]\\[11pt]
&=&\sum_{m\geq 1}E^{P}\left[E^{M^s}\left[\sum_{k=T^m}^{T^{m+1}-1}\cal{Z}\circ\Theta_k\right]
E^{M^s}\left[B\right],T^m=p\right]\\[11pt]
&=& E^{P}\left[\left(E^{M^s}\left[H\right]\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\theta_p\ E^{M^s}\left[B\right]\right].
\end{array}\end{equation}
Then observe that we can find measurable functions $\varphi$ and $\psi$ such that
\begin{equation}\label{131}\begin{array}{l}
E^{M^s}\left[H\right]\bbm{1}_{\{0\in\cal{C}\}}=\varphi\left(T^1,(X^1_t)_{t\geq 0},(\Lambda_n)_
{n\geq 0}\right)\bbm{1}_{\{0\in\cal{C}\}},\\
E^{M^s}\left[B\right]=\psi\left(T^0,(X^1_t)_{t\leq 0},(\Lambda_n)_{n\leq -1}\right),
\end{array}\end{equation}recall the definition of $\Lambda_n$ above (\ref{16}).
The reason why $E^{M^s}[B]$ depends only on $T^0,\ (X^1_t)_{t\leq 0},\ (\Lambda_n)_{n\leq -1},$
whereas the involved cut times $T^k,k\leq -1,$ are based on the whole trajectory $(X^1_t),\ t\in\bb{R},$ is
that the information about intersections needed to determine $T^k,k\leq -1,$ can be expressed in terms of
$T^0$ and $(X^1_t), t\leq T^0\leq 0,$ since by definition of $T^0,$ we have that
$(X^1_{(-\infty,k-1]})^R\cap(X^1_{[T^0,\infty)})^R=\emptyset,$ for all $k\leq T^0.$
In the sequel we will slightly abuse notation.
One should think of the following objects as being defined on an extension of the probability space $(W,\cal{W},P),$ see (\ref{290}) and
below. Recall that under the measure $P=\bar{P}{\otimes}\bf{\Lambda}^{\varepsilon},$ see (\ref{17}), the
process $(X^1_t)_{t\in\bb{R}}$ is a two-sided $d_1$-dimensional Brownian motion with $P[X^1_0=0]=1$ which is independent of
$(\Lambda_n)_{n\in\bb{Z}},$
a two-sided sequence of i.i.d. Bernoulli random variables with success parameter $\varepsilon>0,$ see (\ref{92}).
We are interested in large values of $p$ and set
\begin{equation}
L=\left[\frac{p}{3}\right].
\end{equation}
We introduce a copy $((X^+_t)_{t\in\bb{R}},(\Lambda_j^+)_{j\in\bb{Z}})$ of $((X_t^1)_{t\in\bb{R}},
(\Lambda_j)_{j\in\bb{Z}})$ evolving according to $P$ such that
$X_t^+=X^1_{t+p}-X^1_p$ for $t\in [-L,\infty),$ and $\Lambda_j^+=\Lambda_{j+p}$ for $\ j\geq -L,$ and such that
$((X^+_t)_{t\in(-\infty,-L)},(\Lambda_j^+)_{j<-L})$ evolves
independently of $((X^1_{t+p}-X^1_p)_{t\in(-\infty,-L)},(\Lambda_{j+p})_{j<-L}).$ Moreover, we consider another
copy $((X_t^-)_{t\in\bb{R}},(\Lambda_j^-)_{j\in\bb{Z}})$ of $((X_t^1)_{t\in\bb{R}},(\Lambda_j)_{j\in\bb{Z}})$
which is independent of $((X_t^+)_{t\in\bb{R}},(\Lambda_j^+)_{j\in\bb{Z}})$ and evolves according to $P$
such that $X^-_t=X^1_t$ for $t\in (-\infty,L],$ and $\Lambda^-_j=\Lambda_j$ for $j\leq L-1,$ and such that
$((X_t^-)_{t\in(L,\infty)},(\Lambda_j^-)_{j\geq L})$ evolves independently
of $((X_t^1)_{t\in(L,\infty)},(\Lambda_j)_{j\geq L}).$ Note that \begin{equation}\label{089}
\left((X_t^1)_{t\in\bb{R}},(\Lambda_j)_{j\in\bb{Z}}\right)\overset{\mbox{\scriptsize{law}}}{=}\left((X_t^+)_{t\in\bb{R}},
(\Lambda_j^+)_{j\in\bb{Z}}\right)
\overset{\mbox{\scriptsize{law}}}{=}\left((X_t^-)_{t\in\bb{R}},(\Lambda_j^-)_{j\in\bb{Z}}\right)
\end{equation}and \begin{equation}\label{090}
\left((X_t^+)_{t\in\bb{R}},(\Lambda_j^+)_{j\in\bb{Z}}\right)\mbox{ is independent of }\left((X_t^-)_{t\in\bb{R}},
(\Lambda_j^-)_{j\in\bb{Z}}\right).
\end{equation}The random time $T^-$ is defined like $T^0,$ but relative to $((X^-_t)_{t\in\bb{R}},(\Lambda_j^-)_{j\in\bb{Z}}),$ and
$T^+$ is the analogue of $T^1$ attached to $((X^+_t)_{t\in\bb{R}},(\Lambda_j^+)_{j\in\bb{Z}}).$ The random set $\cal{C}^+$ is defined analogously
to $\cal{C}$ with $((X^+_t)_{t\in\bb{R}},(\Lambda_j^+)_{j\in\bb{Z}})$ in place of $((X_t^1)_{t\in\bb{R}},(\Lambda_j)_{j\in\bb{Z}}),$
see (\ref{19}). We then define
\begin{equation}\label{132}\begin{array}{l}
U=E^{M^s}\left[B\right],\ U^-=\psi\left(T^-,(X^-_t)_{t\leq 0},(\Lambda_n^-)_{n\leq -1}\right),\\
V=\left(E^{M^s}\left[H\right]\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\theta_p
=\varphi\left(T^1\circ\theta_p,(X_{t+p}^1-X^1_p)_{t\geq 0},(\Lambda_{n+p})_{n\geq 0}\right)\bbm{1}_{\{p\in\cal{C}\}},
\\
V^+=\varphi\left(T^+,(X^+_t)_{t\geq 0},(\Lambda_n^+)_{n\geq 0}\right)\bbm{1}_{\{0\in\cal{C}^+\}}.
\end{array}\end{equation}
By construction, see in particular (\ref{089}) and (\ref{090}), we have that
$U\overset{\mbox{\tiny{law}}}{=}U^-$ and due to the invariance of $P$ under the shift
$\theta_p,$ also $V\overset{\mbox{\tiny{law}}}{=}V^+,$ but
$U^-$ and $V^+$ are now independent. For $p\geq 1,$
\begin{eqnarray*}
E^{Q^s}\left[\left(H\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\Theta_p\cdot B\right]&\overset{\scriptstyle{(\ref{088})}}{=}&E^{P}\left[VU\right]
\\&=&E^{P}\left[V^+U^-\right]+E^{P}\left[V^+(U-U^-)\right]+E^{P}\left[(V-V^+)U\right].
\end{eqnarray*}
Note that the first term in the last line vanishes because of the independence mentioned above and the fact that
\begin{equation*}
E^{P}\left[V^+\right]=E^{P}\left[V\right]\overset{\mbox{\scriptsize{(\ref{088})}}}{=}
E^{Q^s}\left[H\bbm{1}_{\{0\in\cal{C}\}}\right]\overset{\mbox{\scriptsize{(\ref{24})}}}{=}
E^{\hat{Q}^s}\left[H\right]{E^{\hat{P}}}\left[T^1\right]^{-1}
\overset{{(\ref{234})}}{=}\ E^{Q^s}\left[\cal{Z}\right]\overset{\scriptstyle{(\ref{121})}}{=}0.
\end{equation*}
Therefore, after recalling that
$\|U\|_2\leq\|B\|_2= 1,$ we find with H\"older's inequality:
\begin{equation}\label{133}
E^{Q^s}\left[\left(H\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\Theta_p\cdot B\right]\leq
\|V^+\|_4\|U-U^-\|_{4/3}+\|V-V^+\|_2.
\end{equation}
Due to stationarity of $\theta_1$ under $P$ and Jensen's inequality we easily obtain that
\begin{equation*}
\|V^+\|_4=\|V\|_4\leq E^{Q^s}\left[|H|^4\bbm{1}_{\{0\in\cal{C}\}}\right]^{1/4}=
E^{\hat{Q}^s}\left[|H|^4\right]^{1/4}P\left[0\in\cal{C}\right]^{1/4}\leq
E^{\hat{Q}^s}\left[|H|^4\right]^{1/4}.
\end{equation*}From the definition of $H,$ see (\ref{127}), and Remark (\ref{540}) it then follows that\begin{equation}\label{134}
\|V^+\|_4=\|V\|_4\leq E^{\hat{P}\times K_{0}}\left[|\chi_{T^1}|^4\right]^{1/4}+vE^{\hat{P}}
\left[(T^1)^4\right]^{1/4}\overset{{\scriptstyle{(\ref{600}), (\ref{602})}}}{<}\infty.
\end{equation}
In view of the definitions (\ref{132}) we see that
\begin{equation}\label{050}
\|V-V^+\|_2\leq \left\|\left(|V|+|V^+|\right)\left(\bbm{1}_{\{T^+\ne T^1\circ\theta_p\}}+
|\bbm{1}_{\{p\in\cal{C}\}}-\bbm{1}_{\{0\in\cal{C}^+\}}|\right)\right\|_2.
\end{equation}
Since by stationarity of $\theta_1$ under $P$ and the identity in law (\ref{089}),
$\ P\left[\{p\in\cal{C}\}\smallsetminus\{0\in\cal{C}^+\}\right]$ is equal to
$P\left[\{0\in\cal{C}^+\}\smallsetminus\{p\in\cal{C}\}\right],$ an application of the Cauchy--Schwarz inequality to the
right-hand side of (\ref{050}) shows that
\begin{equation}\label{051}
\|V-V^+\|_2\leq 2\|V\|_4\left(P\left[T^+\ne T^1\circ\theta_p\right]^{1/4}+2P\left[\{p\in\cal{C}\}\smallsetminus
\{0\in\cal{C}^+\}\right]^{1/4}\right).
\end{equation}
Since $(X^+_t,\Lambda^+_n)$ and $(X^1_t,\Lambda_n)\circ\theta_p$ coincide for $t\in [-L,\infty),\ n\geq -L,$
with (\ref{19}) we see that for large $p,$ the events $\{T^+\ne T^1\circ\theta_p\}$ and $\{p\in\cal{C}\}\smallsetminus\{0\in\cal{C}^+\}$ are both included in
\begin{equation*}
\left\{\left(X^+_{(-\infty,-L]}\right)^{R}\cap\left(X^+_{[0,\infty)}\right)^{R}\ne\emptyset\right\}\cup
\left\{\left(\left(X^1_.\circ\theta_p\right)_{(-\infty,-L]}\right)^{R}\cap\left(\left(X^1_.\circ\theta_p\right)_
{[0,\infty)}\right)^{R}\ne\emptyset\right\},
\end{equation*}and so, together with (\ref{051}) we find using stationarity once again that
\begin{equation}\label{135}
\|V-V^+\|_2\leq c\|V\|_4P\left[\left(X^1_{(-\infty,0]}\right)^{R}\cap \left(X^1_{[L,\infty)}\right)^{R}
\ne\emptyset\right]^{1/4}.
\end{equation}
By arguments analogous to those above, we also find that
\begin{equation}\label{136}\begin{array}{rcl}
&&\|U-U^-\|_{4/3} \\[5pt]
&&\leq \left\|\left(|U|+|U^-|\right)\bbm{1}_{\{T^0\ne T^-\}}\right\|_{4/3}\\[5pt]
&&\leq \Big\|\left(|U|+|U^-|\right)\Big(\bbm{1}_
{\scriptscriptstyle{\left\{\left(X^-_{(-\infty,0]}\right)^{R}\cap\left(X^-_{[L,\infty)}\right)^{R}\ne\emptyset\right\}}}
+\bbm{1}_
{\scriptscriptstyle\left\{\left(X^1_{(-\infty,0]}\right)^{R}\cap\left(X^1_{[L,\infty)}\right)^{R}\ne\emptyset\right\}}\Big)
\Big\|_{4/3}\\
&& \leq cP\left[\left(X^1_{(-\infty,0]}\right)^{R}\cap \left(X^1_{[L,\infty)}\right)^{R}
\ne\emptyset\right]^{1/4},
\end{array}\end{equation}
where we used H\"older's inequality and $\|U^-\|_2=\|U\|_2\leq\|B\|_2=1$ in the last inequality. Collecting (\ref{133}), (\ref{134}),
(\ref{135}) and (\ref{136}), we finally find\begin{eqnarray*}
\left\|E^{Q^s}\left[\left(H\bbm{1}_{\{0\in\cal{C}\}}\right)\circ\Theta_p\ |\ \cal{G}_0\right]\right\|_2 & \leq &
c\|V\|_4P\left[\left(X^1_{(-\infty,0]}\right)^{R}\cap \left(X^1_{[L,\infty)}\right)^{R}
\ne\emptyset\right]^{1/4}\\
&\overset{\scriptstyle{(\ref{33})}}{\leq}&c\|V\|_4\ p^{-\frac{d_1-4}{8}}.
\end{eqnarray*}This quantity is summable in $p,$ since $d_1\geq 13.$ This finishes the proof of (\ref{126}) and thus
of Theorem \ref{120}.
\begin{flushright}$\Box$\end{flushright}
\pagebreak
{\rem{
In the next section we will strengthen Theorems \ref{103} and \ref{120} to central limit theorems under the quenched measure. Only a few
results on quenched invariance principles for diffusions in random environment are available in the literature. One result is due to
Sznitman and Zeitouni \cite{SZNZEIT}, who consider small perturbations of Brownian motion; a second situation in which a
quenched central limit theorem holds is discussed by Osada \cite{O}. The latter result is obtained with the technique
of the {\textit{environment viewed from the particle}}.
}}
\section{Central limit theorem under the quenched measure}\label{d}
We are going to show how one can improve the results of Section \ref{c} to central limit theorems under the
quenched measure $P_{0,\omega}$. We use an idea of Bolthausen and Sznitman, see Lemma 4 in \cite{BS}, to turn the {\textit{annealed}} invariance principle into a {\textit{quenched}} invariance principle,
by bounding certain variances through the control of intersections of two independent paths. For this
purpose we do not require an explicit invariant measure for the process of the environment viewed from the particle or
the control of moments of certain regeneration times, see for instance \cite{berzei}, \cite{RS2}, \cite{RS4} in
the discrete setting. We recall the definition of $v$ in (\ref{55}).\\
{\thm{\label{200}Assume $d_1\geq 7$ and (\ref{100}), or $d_1\geq 13.$ Then for $\bb{P}$-a.e.$\,\omega,$ under the measure $P_{0,\omega},$
the $C(\bb{R}_+,\bb{R}^d)$-valued random variables\begin{equation*}
B^r_.\stackrel{\mathrm{def}}{=}\frac{1}{\sqrt{r}}\left(X_{r\cdot}-vr\cdot\right),\qquad r>0,
\end{equation*} converge weakly to a Brownian motion $B_.$ with covariance matrix $A$ given in Theorems \ref{103} and
\ref{120} respectively, as $r\to\infty.$}}\\
{\textbf{Proof:}} By similar arguments as at the beginning of the proof of Theorem \ref{103}, see (\ref{301}) and
(\ref{321}), and the identity in law in (1) of Theorem \ref{47} we can see that it suffices to show that
\begin{equation}\label{161}\begin{array}{l}\mbox{for }\bb{P}\mbox{-a.e. }\omega,
\mbox{ under the measure }P\times K_{0,\omega},\mbox{ the }C(\bb{R}_+,\bb{R}^d)\mbox{-valued\hspace{2pt} random}\\
\mbox{variables }\beta^n_.\stackrel{\mathrm{def}}{=}\frac{1}{\sqrt{n}}\left\{\chi_{[n\cdot]}^{}+\left(n\cdot-[n\cdot]\right)\left(
\chi_{[n\cdot]+1}^{}-\chi_{[n\cdot]}^{}\right)-vn\cdot\right\},\mbox{\,for\,integers}\\
n\geq 1,\mbox{ converge weakly to }B_.,\mbox{ as }n\to\infty.
\end{array}\end{equation}
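The polygonal interpolation in (\ref{161}) can be made concrete with a small sketch (a toy scalar path with drift stands in for $\chi$; nothing model-specific enters, and all names are ours). It checks that $\beta^n$ agrees with the centered, rescaled path at the lattice points and interpolates linearly in between:

```python
import numpy as np

rng = np.random.default_rng(1)
n, v = 400, 0.3          # toy values: n plays the role of [xi^m], v the drift
# toy discrete path chi_0, ..., chi_n with drift v (a scalar stand-in for chi)
chi = np.concatenate([[0.0], np.cumsum(v + rng.standard_normal(n))])

def beta(t):
    """beta^n_t = (chi_[nt] + (nt - [nt])(chi_{[nt]+1} - chi_[nt]) - v*n*t) / sqrt(n)."""
    k = int(np.floor(n * t))
    frac = n * t - k
    return (chi[k] + frac * (chi[k + 1] - chi[k]) - v * n * t) / np.sqrt(n)
```

At $t=k/n$ this reduces to $(\chi_k - vk)/\sqrt{n}$, and between lattice points it is affine.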
From the proofs of Theorems \ref{103} and \ref{120}, see in particular (\ref{301}), (\ref{321}), (\ref{060}) and (\ref{122}),
we know that \begin{equation}\label{061}
\beta_.^n\longrightarrow B_.\mbox{ in law under }P\times K_0,\mbox{ as }n\to\infty.
\end{equation}
From the proof of Lemma 4.1 in \cite{BS} we see that (\ref{161}) follows from (\ref{061}) and a variance calculation.
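The logic of this variance calculation can be illustrated with a toy computation (a sketch: the ``quenched means'' below are plain averages of i.i.d. Gaussians standing in for $E^{P\times K_{0,\omega}}[F(\beta_.^{[\xi^m]})]$; none of the objects of the actual model appear). Summability of the variances along the geometric subsequence $n=[\xi^m]$ is exactly what Chebyshev's inequality and the Borel--Cantelli lemma need to upgrade convergence in law to almost sure convergence along that subsequence:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = 1.5                               # ratio of the geometric subsequence n = [xi^m]

def quenched_mean(m, rng):
    # toy stand-in for E^{P x K_{0,omega}}[F(beta^{[xi^m]})]: an average over
    # n = [xi^m] i.i.d. draws, whose variance ~ 1/n is summable along the subsequence
    n = int(xi**m)
    return rng.standard_normal(n).mean()

# estimate the variance over many independent "environments" for each m
variances = [np.var([quenched_mean(m, rng) for _ in range(400)])
             for m in range(1, 15)]
total = sum(variances)                 # finite, as in the summability condition (160)
```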
Let us introduce for $T>0$ the space of continuous, $\bb{R}^d$-valued functions on
$[0,T],$ denoted by $C([0,T],\bb{R}^d),$
which we equip with the distance\begin{equation}\label{304}
d_T(g,g')\stackrel{\mathrm{def}}{=}\sup_{t\leq T}|g(t)-g'(t)|\wedge 1.
\end{equation}The proof of Lemma 4.1 in \cite{BS} shows us that (\ref{161}) follows once we prove that for all
$T>0,\ \xi\in (1,2]$ and all bounded
Lipschitz functions $F$ on $C([0,T],\bb{R}^d),$\begin{equation}\label{160}
\sum_m \mathrm{Var}_{\bb{P}}\left(E^{P\times K_{0,\omega}}\left[F\left(\beta_.^{[\xi^m]}\right)\right]\right)<\infty
\end{equation} (with a slight abuse of notation). For this purpose we need some further notation. Given an environment
$\omega,$ we consider two independent copies $((\chi_t^{})_{t\geq 0}^{},(\Lambda_n)_{n\geq 0}^{})$ and
$((\tilde{\chi}_t^{})_{t\geq 0}^{},(\tilde{\Lambda}_n)_{n\geq 0}^{})$ evolving according to $P\times K_{0,\omega}.$
The corresponding first $d_1$ components of
$\chi_.^{}$ and $\tilde{\chi}_.^{}$ denoted with
$X^1_.$ and $\tilde{X}_.^1$ are then two independent $d_1$-dimensional Brownian motions. We also
introduce the corresponding polygonal interpolations $\beta_.^n$ and $\tilde{\beta}_.^n$ defined as in (\ref{161}). With $\cal{D}$ we denote the
set of one-sided cut times attached to $((X^1_t)_{t\in\bb{R}},(\Lambda_j)_{j\in\bb{Z}})$ defined via \begin{equation}\label{091}
\cal{D}=\left\{k\geq 1\ \Big|\ \left(X^1_{[0,k-1]}\right)^{R}{{\cap}}
\left(X^1_{[k,\infty)}\right)^{R}=\emptyset\mbox{ and }\Lambda_{k-1}=1\right\}.
\end{equation}$\tilde{\cal{D}}$ is defined analogously and attached to $((\tilde{X}_t^1)_{t\in\bb{R}},
(\tilde{\Lambda}_j)_{j\in\bb{Z}}).$ We then pick
\begin{equation*}\xi\in (1,2],\quad 0<\mu<\nu<\frac{1}{2},\end{equation*}and for $m\geq 1$ we define $n=[\xi^m],$
as well as
\begin{equation*}\rho_m\stackrel{\mathrm{def}}{=}\inf\left\{\cal{D}\,{{\cap}}\,[n^{\mu},\infty)\right\}<\infty,\quad P\mbox{-a.s.
(see (\ref{22}))},\end{equation*} and $\tilde{\rho}_m$ as the corresponding variable attached to $((\tilde{X}_t^1)_{t\in\bb{R}},
(\tilde{\Lambda}_j)_{j\in\bb{Z}}).$
In order to take advantage of decoupling effects we will consider the event\begin{equation*}
\cal{A}_m=\left\{\rho_m\vee\tilde{\rho}_m\leq n^{\nu},\ \left(X^1_{[0,\infty)}\right)^{R}{{\cap}}
\left(\tilde{X}^1_{[n^{\mu},\infty)}\right)^{R}=\emptyset,\ \left(X^1_{[n^{\mu},\infty)}\right)^{R}{{\cap}}
\left(\tilde{X}^1_{[0,\infty)}\right)^{R}=\emptyset\right\}.
\end{equation*}
We are now ready to prove (\ref{160}). Without loss of generality, we assume the Lipschitz constant and the absolute
value of $F$ to be bounded
by 1. For the remainder of the proof we write $E$ and $E_{\omega}$ for the expectation under the measure $P\times K_0$ and
$P\times K_{0,\omega}$ respectively. For $m\geq 1,$ we have
\begin{eqnarray*}
{\mathrm{Var}}_{\bb{P}}\left(E_{\omega}\left[F\left(\beta_.^n\right)\right]\right)&=&
\bb{E}\left[E_{\omega}{\otimes}E_{\omega}\left[F\left(\beta_.^n\right)F(\tilde{\beta}_.^n)
\right]\right]-E{{\otimes}}E\left[F\left(\beta_.^n\right)F(\tilde{\beta}_.^n)\right]\\
&=& \bb{E}\left[E_{\omega}{{\otimes}}E_{\omega}\left[F\left(\beta_.^n\right)F(\tilde{\beta}_.^n),
\cal{A}_m\right]\right]-E{{\otimes}}E\left[F\left(\beta_.^n\right)F(\tilde{\beta}_.^n),
\cal{A}_m\right]+d_m
\end{eqnarray*} with \begin{equation}\label{142}|d_m|\leq 2P{{\otimes}}P\left[\cal{A}_m^c\right].
\end{equation} Using that $F$ is bounded and Lipschitz and $d_T(\cdot,\cdot)\leq 1$ we obtain that the difference of the first
two terms in the last line above (\ref{142}) is equal to\begin{equation}\label{143}\begin{array}{l}
\bb{E}\left[E_{\omega}{{\otimes}}E_{\omega}\left[F\left(\beta_{\cdot+\frac{\rho_m}{n}}^n-\beta^n_{\frac{\rho_m}{n}}\right)
F\left(\tilde{\beta}_{\cdot+\frac{\tilde{\rho}_m}{n}}^n-\tilde{\beta}^n_{\frac{\tilde{\rho}_m}{n}}\right),\cal{A}_m\right]\right]\\[11pt]
-E{{\otimes}}E\left[F\left(\beta_{\cdot+\frac{\rho_m}{n}}^n-\beta^n_{\frac{\rho_m}{n}}\right)
F\left(\tilde{\beta}_{\cdot+\frac{\tilde{\rho}_m}{n}}^n-\tilde{\beta}^n_{\frac{\tilde{\rho}_m}{n}}\right),\cal{A}_m\right]
+\Delta^3_m\\[11pt]
=:\Delta_m^1-\Delta^2_m+\Delta^3_m,\end{array}\end{equation}
with \begin{equation*}
\Delta^3_m\leq 6E{{\otimes}}E\left[d_T\left(\beta_{\cdot+\frac{\rho_m}{n}}^n-\beta^n_{\frac{\rho_m}{n}}\,,\,
\beta_.^n\right),\cal{A}_m\right].
\end{equation*}We first want to show that\begin{equation}\label{805}\Delta_m^1=\Delta_m^2.
\end{equation}For each $\omega\in\Omega$ and fixed samples $(w,\lambda_.)$ and $(\tilde{w},\tilde{\lambda}_.)$ of
$(X^1_.,\Lambda_.)$ and $(\tilde{X}^1_.,\tilde{\Lambda}_.)$ respectively,
under $K_{0,\omega}(w,\lambda_.){{\otimes}}
K_{0,\omega}(\tilde{w}_.,\tilde{\lambda}_.),$ the processes $\beta^n_.$ and $\tilde{\beta}_.^n$
are independent and hence with the help of Fubini's Theorem we can write
\begin{equation*}
\Delta_m^1=E^{P}{{\otimes}}E^{P}\left[\bb{E}\left[E^{K_{0,\omega}}\left[
F\left(\beta_{\cdot+\frac{\rho_m}{n}}^n-\beta^n_{\frac{\rho_m}{n}}\right)\right]
E^{K_{0,\omega}}\left[F\left(\tilde{\beta}_{\cdot+\frac{\tilde{\rho}_m}{n}}^n-\tilde{\beta}^n_{\frac{\tilde{\rho}_m}{n}}\right)\right]\right],
\cal{A}_m\right].
\end{equation*}By similar arguments to those leading to (\ref{701}) we obtain that for each $(w,\lambda_.)\in W,$
\begin{equation}\label{071}\begin{array}{l}
E^{K_{0,\omega}}\left[F\left(\beta_{\cdot+\frac{\rho_m}{n}}^n-\beta^n_{\frac{\rho_m}{n}}\right)\right]\\[11pt]
={\displaystyle{\int_{\bb{R}^{d_2}}\frac{dy}{vol(d_2)}}}E^{K_{0,\omega}}\left[\bbm{1}_{\{y\in B^{d_2}_1(X^2_{\rho_m-1})\}}\right]
E^{K_{0,\bar{\omega}}\circ\theta_{\rho_m}}\Big[F\left(\beta^n_.-\beta^n_0\right)\circ\theta_{\rho_m}\Big],
\end{array}\end{equation}with $\bar{\omega}=\tau_{(w(\rho_m),y)}(\omega).$ From (4) of Theorem \ref{47} it follows that the first
expectation in the second line of (\ref{071}) is measurable with respect to $\cal{H}_{(w([0,\rho_m-1]))\times\bb{R}^{d_2}},$ see (\ref{4}),
whereas the second expectation is $\cal{H}_{(w([\rho_m,\infty)))\times\bb{R}^{d_2}}$-measurable. With these considerations in mind
we find with Fubini's Theorem and finite range dependence, see (\ref{5}), that on $\cal{A}_m,$
\begin{eqnarray}
\label{070} && \bb{E}\left[E^{K_{0,\omega}}\left[F\left(\beta_{\cdot+\frac{\rho_m}{n}}^n-\beta^n_{\frac{\rho_m}{n}}\right)\right]
E^{K_{0,\omega}}\left[F\left(\tilde{\beta}_{\cdot+\frac{\tilde{\rho}_m}{n}}^n-\tilde{\beta}^n_{\frac{\tilde{\rho}_m}{n}}\right)\right]\right]\\[5pt]
\nonumber && =\int_{\bb{R}^{d_2}}\int_{\bb{R}^{d_2}}\frac{dy_1dy_2}{vol(d_2)^2}
\bb{E}\left[E^{K_{0,\omega}}\left[\bbm{1}_{\{y_1\in B^{d_2}_1(X^2_{\rho_m-1})\}}\right]
E^{K_{0,\omega}}\left[\bbm{1}_{\{y_2\in B^{d_2}_1(\tilde{X}^2_{\tilde{\rho}_m-1})\}}\right]\right]\\[5pt]
\nonumber &&\hspace{2cm}\times \bb{E}\left[E^{K_{0,\bar{\omega}}\circ\theta_{\rho_m}}\Big[F\Big(\beta^n_.-\beta^n_0\Big)\circ\theta_{\rho_m}\Big]\right]
\bb{E}\left[E^{K_{0,\tilde{\omega}}\circ\theta_{\tilde{\rho}_m}}\Big[F\left(\tilde{\beta}^n_.-\tilde{\beta}^n_0\right)\circ\theta_{\tilde{\rho}_m}\Big]\right],
\end{eqnarray}with $\bar{\omega}=\tau_{(w(\rho_m),y_1)}(\omega),\ \tilde{\omega}=\tau_{(w(\tilde{\rho}_m),y_2)}(\omega).$ Because of the
stationarity of the environment, the last two $\bb{P}$-expectations above are in fact independent of $y_1$ and $y_2$ respectively, so that an
application of Fubini's Theorem shows us that (\ref{070}) equals\begin{equation*}
E^{K_{0}\circ\theta_{\rho_m}}\Big[F\Big(\beta^n_.-\beta^n_0\Big)\circ\theta_{\rho_m}\Big]
E^{K_{0}\circ\theta_{\tilde{\rho}_m}}\Big[F\left(\tilde{\beta}^n_.-\tilde{\beta}^n_0\right)\circ\theta_{\tilde{\rho}_m}\Big].
\end{equation*} Analogously we also find that\begin{equation*}
E^{K_{0}}\Big[F\Big(\beta^n_{\cdot+\frac{\rho_m}{n}}-\beta^n_{\frac{\rho_m}{n}}\Big)\Big]=
E^{K_{0}\circ\theta_{\rho_m}}\Big[F\Big(\beta^n_.-\beta^n_0\Big)\circ\theta_{\rho_m}\Big],
\end{equation*}and the same holds true if we replace $\beta^n_.,\rho_m$ by $\tilde{\beta}^n_.,\tilde{\rho}_m.$
This concludes the proof of (\ref{805}). We now come to the control of $\Delta_m^3.$ Noting that
on $\cal{A}_m,\ E_{\omega}$-a.s.\begin{equation*}
d_T\left(\beta_{\cdot+\frac{\rho_m}{n}}^n-\beta^n_{\frac{\rho_m}{n}}\,,\,\beta_.^n\right)\leq
\sup_{\scriptscriptstyle{\substack{0\leq s< t\leq Tn+1\\ \,|t-s|\leq n^{\nu}}}}\frac{1}{\sqrt{n}}\big|
\chi_t^{}-\chi_s^{}\big|+\sup_{\scriptscriptstyle{t\leq n^{\nu}}}\frac{1}{\sqrt{n}}\big|\chi_t^{}\big|,
\end{equation*}
we find by using (\ref{150}) and the fact that \begin{equation*}
E_{\omega}\Bigg[\sup_{\scriptscriptstyle{\substack{0\leq s< t\leq Tn+1\\ \,|t-s|\leq n^{\nu}}}}
\big|W_t-W_s\big|\Bigg]\leq
c(T)n^{1/4+\nu/4}E_{\omega}\Bigg[\sup_{0\leq s\leq t\leq 1}\frac{|W_t-W_s|}{|t-s|^{1/4}}\Bigg]
\leq c(T)n^{1/4+\nu/4},
\end{equation*}where the last inequality follows from an application of Fernique's Theorem, see \cite{DS} on page 14, which ensures the
existence of the exponential moment of $\eta\sup_{0\leq s\leq t\leq 1}|W_t-W_s|/|t-s|^{1/4}$ for a certain constant $\eta>0,$
that\begin{equation*}
\Delta^3_m\leq \frac{c(T)}{\sqrt{n}}(n^{\nu}+n^{1/4+\nu/2})\leq c(T)n^{\nu/2-1/4},
\end{equation*} and hence $\sum_m \Delta^3_m<\infty,$ recall $n=[\xi^m].$ It remains to show that\begin{equation}\label{144}
\sum_m P{{\otimes}}P\left[\cal{A}_m^c\right]<\infty.
\end{equation}
Indeed, we find that\begin{equation}\label{145}
P{{\otimes}}P\left[\left(X^1_{[0,\infty)}\right)^{R}{{\cap}}
\left(\tilde{X}^1_{[n^{\mu},\infty)}\right)^{R}\ne\emptyset\right]\overset{\scriptstyle{(\ref{33})}}{\leq} cn^{-\mu\frac{d_1-4}{2}},
\end{equation}
and moreover, since the random set $\cal{C}\,{{\cap}}\,\bb{N}$ is contained in $\cal{D},$ see (\ref{19}) and
(\ref{091}), we have that $P$-a.s., $\rho_m-n^{\mu}\leq T^1\circ\theta_{[n^{\mu}]}$ and hence from stationarity of
$\theta_1$ under $P$ it follows that for large $m,$
\begin{equation}\label{146}
P\left[\rho_m>n^{\nu}\right]\leq P[T^1>n^{\nu}-n^{\mu}]\overset{\scriptstyle{(\ref{26})}}{\leq}
c(\varepsilon)(\log n^{\nu})^{1+\frac{d_1-4}{2}}n^{-\nu\frac{d_1-4}{2}}\leq e^{-c(\varepsilon)m}.
\end{equation} Combining (\ref{145}) and (\ref{146}) we deduce (\ref{144}).
\begin{flushright}$\Box$\end{flushright}
\section{Introduction}
A great deal of progress has been made in the study of string compactification using the
ten-dimensional supergravity approximation
(for a review, see \cite{Douglas:2006es}). However, it has become clear that certain interesting
physical features of our world are difficult (if not impossible) to realize when this
description is valid. Examples which come to mind include a period of slow-roll inflation
\cite{Hertzberg:2007wc, Dimopoulos:2005ac, Grimm:2007hs}, certain models of dynamical
supersymmetry breaking \cite{Florea:2006si}, chiral matter with stabilized moduli
\cite{Blumenhagen:2007sm} and parametrically-small perturbatively-stabilized extra dimensions
\cite{Douglas:2006es}. This strongly motivates attempts to find descriptions of
moduli-stabilized string vacua which transcend the simple geometric description.
One approach to vacua outside the domain of validity of 10d supergravity is to rely only on the
4d gravity description, as in {\it e.g.\ } \cite{Shelton:2005cf, Silverstein:2007ac}. This can be
combined with insight into the microscopic ingredients to give a description of much more
generic candidate string vacua. A drawback of this approach is that it is difficult to control
systematically the interactions between the ingredients. Another promising direction is
heterotic constructions, which do not require RR flux and hence are more amenable to a
worldsheet treatment \cite{Adams:2006kb, Adams:2007vp}. However, stabilization of the dilaton in
these constructions requires non-perturbative physics.
A third technique, which is at an earlier state of development, was implemented in
\cite{Hellerman:2002ax}, and was inspired by \cite{Greene:1989ya, Vafa:1996xn}. The idea is to
build a compactification out of locally ten-dimensional geometric descriptions, glued together
by transition functions which include large gauge transformations, such as stringy dualities.
This technique is uniquely adapted to construct examples with no global geometric description.
In this paper, we build on the work of \cite{Hellerman:2002ax} to give 4d ${\cal N} = 1$
examples.
With S. Hellerman and B. Williams \cite{Hellerman:2002ax}, one of us constructed early examples
of vacua involving such `non-geometric fluxes'. These examples were constructed by compactifying
string theory on a flat $n$-torus, and allowing the moduli of this torus to vary over some base
manifold. The description of these spaces where the torus fiber is flat is called the {\it
semi-flat approximation} \cite{Strominger:1996it}. Allowing the torus to degenerate at real
codimension two on the base reduces the construction of interesting spaces to a Riemann-Hilbert
problem; the relevant data is in the monodromy of the torus around the degenerations
\cite{Greene:1989ya}. Generalizing this monodromy group to include not just modular
transformations of the torus, but more general discrete gauge symmetries of string theory
(generally known as string dualities) allows the construction of vacua of string theory which
have no global geometric description \cite{Hellerman:2002ax}. The examples studied in detail in
\cite{Hellerman:2002ax} had two-torus fibers, which allowed the use of complex geometry.
A natural explanation of mirror symmetry is provided by the conjecture \cite{Strominger:1996it}
that any CY has a description as a three-torus $(T^3)$ fibration, over a 3-manifold base. In the
large complex structure limit, the locus in the base where the torus degenerates is a trivalent
graph; the data of the CY is encoded in the monodromies experienced by the fibers in
circumnavigating this graph. Further, the edges of the graph carry energy and create a deficit
angle -- in this description a compact CY is a self-gravitating cosmic string network whose
back-reaction compactifies the space around itself. In this paper, our goal is to use this
description of ordinary CY manifolds to construct non-geometric vacua, again by enlarging the
monodromy group. We find a number of interesting new examples of non-geometric vacua with 4d
${\cal N}=1$ supersymmetry. In a limit, they have an exact CFT description as asymmetric
orbifolds, and hence can be considered `blowups' thereof. We study the spectrum, particularly
the massless scalars, and develop some insight into how these vacua fit into the web of known
constructions.
We emphasize at the outset two limitations of our analysis. First, the examples constructed so
far are special cases which have arbitrarily-weakly-coupled perturbative descriptions and
(therefore) unfixed moduli. Our goal is to use them to develop the semiflat techniques in a
controllable context. Generalizations with nonzero RR fluxes are naturally incorporated by
further enlarging the monodromy group to include large RR gauge transformations, as in F-theory
\cite{Vafa:1996xn}. There one can hope that all moduli will be lifted. This is the next step
once we have reliable tools for understanding such vacua using the fibration description.
The second limitation is that we have not yet learned to describe configurations where the base
of the $T^3$-fibration is not flat away from the degeneration locus.
The examples of SYZ fibrations we construct
(analogous to F-theory at constant coupling \cite{Dasgupta:1996ij}) all involve
composite degenerations which we do not know how to resolve. The set of rules we find for
fitting these composite degenerations into compact examples will be a useful guide to the more
difficult general case.
A number of intriguing observations arise in the course of our analysis. One can ``geometrize''
these non-geometric compactifications by realizing the action of the T-duality group as a
geometric action on a $T^4$ fiber. The semi-flat metric on the fiber contains the original
metric and the Hodge dual of the B-field. Hence, we are led to study seven-manifolds ${\cal
X}_7$ which are $T^4$ fibrations over a 3d base. They can be embedded into flat $T^4$
compactifications of M-theory down to seven dimensions where the reduced theory has an $SL(5)$
U-symmetry. U-duality then suggests that ${\cal X}_7$ may be a $G_2$ manifold since the
non-geometric Type IIA configuration can be rotated into a purely geometric solution of maximal
supergravity in seven dimensions. Whether or not these solutions can in general be lifted to
eleven dimensions is a question for further investigation. In this paper, we study explicit
examples of $G_2$ (and Calabi-Yau) manifolds and show that they do provide perturbative
non-geometric solutions to Type IIA in ten dimensions through this correspondence. The spectrum
of these spaces can be computed by noticing that they admit an asymmetric orbifold description,
and it matches that computed from M-theory when a comparison is possible.
The paper is organized as follows. In the next section we review the semiflat approximation to
geometric compactification in various dimensions. We describe in detail the semiflat
decomposition of an orbifold limit of a Calabi-Yau threefold; this will be used as a starting
point for nongeometric generalizations in section four. In section three we describe the
effective field theory for type II strings on a flat $T^3$. We show that the special class of
field configurations which participate in $T^3$-fibrations with {\it perturbative} monodromies
can alternatively be described in terms of geometric $T^4$-fibrations. We explain the U-duality
map which relates these constructions to M-theory on $T^4$-fibered $G_2$-manifolds. In sections
four and five we put this information together to construct nongeometric compactifications. In
section six we consider generalizations where the fiber theory involves discrete Wilson lines.
Hidden after the conclusions are many appendices. Appendix A gives more detail of the reduction
on $T^3$. The purpose of Appendices B--D is to build confidence in and intuition about the
semiflat approximation: Appendix B is a check on the relationship between the semiflat
approximation and the exact solution which it approximates; Appendix C is a derivation of the
Hanany-Witten brane-creation effect using the semiflat limit; Appendix D derives a known duality
using the semiflat description. In Appendix E we record asymmetric orbifold descriptions of the
nongeometric constructions of section four. In Appendices F through H, we study in detail the
massless spectra of many of our constructions, and compare to the spectra of M-theory on the
corresponding $G_2$-manifolds when we can. Appendix I contains templates to help the reader to
build these models at home.
\newpage
\section{Semi-flat limit}
Since we want to construct non-geometric spaces by means of T-duality, we exhibit the spaces as
torus fibrations. We need isometries in the fiber directions in which the dualities act. Hence,
we wish to study manifolds in a {\it semi-flat limit} where the fields do not depend on the
fiber coordinates. This is the realm of the SYZ conjecture \cite{Strominger:1996it}. Mirror
symmetry of Calabi-Yau manifolds implies that they have a special Lagrangian $T^n$ fibration.
Branes can be wrapped on the fibers in a supersymmetric way and their moduli space is the mirror
Calabi-Yau. At tree level, this moduli space is a semi-flat fibration, {\it i.e.\ } the metric has a
$U(1)^n$ isometry along the fiber. However, there are world-sheet instanton corrections to this
tree-level metric. Such corrections are suppressed (away from singular fibers) in the {\it large
volume limit}. The mirror Calabi-Yau is then in the {\it large complex structure limit}. In this
limit the metric is semi-flat and mirror symmetry boils down to T-duality along the fiber
directions\footnote{It is best to think of the fiber as being very small compared to the size of
the base. It is thought that in the large complex structure limit, the total space of the CY
collapses to a metric space homeomorphic to $S^n$ which is the base of the fibration (see {\it e.g.\ }
\cite{GrossWilson}). }.
As a warm-up, we will now briefly review the one-{\it complex}-dimensional case of a torus, and
the two-dimensional case of stringy cosmic strings \cite{Greene:1989ya}. These sections may be
skipped by experts. In Section \ref{threedim}, we construct a fibration for a three-dimensional
orbifold that will in later sections be modified to a non-geometric compactification.
\subsection{One dimension}
\label{onedimsec}
The simplest example is the flat two-torus. Its complex structure is given by modding out the
complex plane by a lattice generated by 1 and $\tau = \tau_1 + i \tau_2 \in \mathbb{C}$ (with
$\tau_2>0$). The K\"{a}hler\, structure is $\rho = b +iV/2$ where $b=\int_{T^2} B$ and $V$ the area of
the torus (again, $V>0$).
There is an $SL(2,\mathbb{Z})_\tau$ group acting on the complex modulus $\tau$. This is a redundancy in
defining the lattice. The group action is generated by $\tau \mapsto \tau + 1$ and $\tau \mapsto
-1/\tau$. Another $SL(2,\mathbb{Z})_\rho$ group acts on $\rho$. This is generated by the shift in the
B-field $b \mapsto b+1$ and a double T-duality combined with a $90^\circ$ rotation that is $\rho
\mapsto -1/\rho$. The fundamental domain for the moduli is shown in \fref{fundom}.
The torus can naturally be regarded as a semi-flat circle fibration over a circle. For special
Lagrangian fibers, we choose the real slices in the complex plane. In the $\tau_2 \rightarrow
\infty$ large complex structure limit, these fibers are small compared to the base $S^1$ which
is along the imaginary axis.
Mirror symmetry exchanges the complex structure $\tau$ with the K\"{a}hler\, structure $\rho$. This
boils down to T-duality along the fiber direction according to the Buscher rules
\cite{Buscher:1987sk, Buscher:1987qj}. It maps the large complex structure into large K\"{a}hler\,
structure that is $\rho_2 = V \rightarrow \infty$.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=6cm,angle=0,origin=c]{keyhole.eps}
\caption{A possible fundamental domain (gray area) for the action of the $SL(2,\mathbb{Z})$ modular group on the upper half-plane.
The upper-half plane parametrizes the possible values of $\tau$ (or $\rho$): the moduli of a two-torus.
The gray domain can be folded into an $S^2$ with three special points (the two orbifold points: $\tau_{\mathbb{Z}_6} = e^{2\pi i /6}$ and $\tau_{\mathbb{Z}_4} =
i$, and the decompactification point: $\tau\rightarrow i \infty$).}
\label{fundom}
\end{center}
\end{figure}
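The special points in the caption can be checked numerically with the modular $j$-function (a sketch using the $q$-expansion $j=E_4^3/\Delta$; we use the standard normalization in which $j(i)=1728$ and $j(e^{2\pi i/6})=0$, so that $j/1728$ takes the values $0$ and $1$ at the two orbifold points):

```python
import cmath

def j_invariant(tau, terms=80):
    """j = E4^3 / Delta with E4 = 1 + 240 sum_n sigma_3(n) q^n,
    Delta = q prod_n (1 - q^n)^24, and q = exp(2 pi i tau)."""
    q = cmath.exp(2j * cmath.pi * tau)
    E4 = 1 + 240 * sum(
        sum(d**3 for d in range(1, n + 1) if n % d == 0) * q**n
        for n in range(1, terms))
    prod = 1 + 0j
    for n in range(1, terms):
        prod *= 1 - q**n
    delta = q * prod**24
    return E4**3 / delta

j_z4 = j_invariant(1j)                             # the Z4 point tau = i
j_z6 = j_invariant(cmath.exp(2j * cmath.pi / 6))   # the Z6 point
```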
\subsection{Two dimensions}
\label{twodimsec}
In order to construct semi-flat fibrations in two dimensions, let us consider the dynamics
first. Type~IIA on a flat two-torus can be described by the effective action in Einstein frame
\begin{equation}
S=\int d^8 x \sqrt{g} \left( R + \frac{\partial_\mu \tau \partial^\mu \bar\tau}{\tau_2^2} +
\frac{\partial_\mu \rho \partial^\mu \bar\rho}{\rho_2^2} \right)
\label{acti}
\end{equation}
where $\tau$ is the complex structure of the torus, and $\rho=b+iV/2$ is the K\"{a}hler\, modulus as
described earlier.
The action is invariant under the
$SL(2,\mathbb{Z})_\tau \times SL(2,\mathbb{Z})_\rho$ perturbative duality group,
which acts on $\tau$ and $\rho$ by fractional linear transformations.
Variation with respect to $\tau$ gives
\begin{equation}\label{einsteineqn}
\partial \bar\partial \tau + \frac{2\partial\tau\bar\partial\tau}{\bar\tau-\tau}=0 ;
\end{equation}
and $\rho$ obeys the same equation. Stringy cosmic string solutions to the EOM can be obtained
by choosing a complex coordinate $z$ on two of the remaining eight dimensions, and taking
$\tau(z)$ a holomorphic section of an $SL(2, \mathbb{Z})$ bundle. Such solutions are not modified by
considering the following ansatz for the metric around the string\footnote{ By an appropriate
coordinate transformation of the base coordinate, this metric can be recast into a symmetric $g
\oplus g$ form (see \cite{Strominger:1996it,Loftin:2004qu}).}
\begin{equation}
ds^2=ds_\textrm{Mink}^2+e^{\psi(z,\bar z)} dz d\bar z + ds_\textrm{fiber}^2
\label{flateq1}
\end{equation}
where, in the $(dx,dy)$ basis of the fiber coordinates,
\begin{equation}
ds_\textrm{fiber}^2 = \frac{1}{\tau_2} \left( \begin{array}{cc}
1 & \tau_1 \\
\tau_1 & \ |\tau|^2
\end{array}
\right)
\end{equation}
The Einstein equation is the Poisson equation,
\begin{equation}
\partial \bar\partial \psi = \partial \bar\partial \textrm{log} \, \tau_2
\label{flateq2}
\end{equation}
Far away from the strings, the metric of the base goes like \cite{Greene:1989ya}
\begin{equation}
ds^2_{2D} \sim | z^{-N/12} dz |^2
\end{equation}
where $N$ is the number of strings. This can be coordinate transformed by $\tilde z =
z^{1-N/12}$ to a flat metric with $2\pi N/12$ deficit angle.
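To make the deficit angle explicit, one can track the coordinate change:
\begin{equation}
\left| z^{-N/12}\, dz \right|^2 = \frac{|d\tilde z|^2}{(1-N/12)^2} , \qquad \tilde z = z^{1-N/12} ,
\end{equation}
so a full $2\pi$ circuit in $z$ sweeps out only an angle $2\pi(1-N/12)$ in the flat $\tilde z$-plane,
leaving a deficit angle of $2\pi N/12$. For a single string ($N=1$) this is $30^\circ$; for $N=24$ the
total deficit is $4\pi$ and the base closes up into a sphere.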
\vskip 0.5cm \noindent {\bf Solutions and orbifold points.} One could in principle write down
solutions by means of the modular $j$-function,
\begin{equation}
j(\tau) \propto \eta(\tau)^{-24} \left(\theta^8_2(\tau)+\theta^8_3(\tau)+\theta^8_4(\tau)\right)^3
\end{equation}
normalized so that it maps the $\tau_{\mathbb{Z}_6} = e^{2\pi i /6}$ and $\tau_{\mathbb{Z}_4} = i$ orbifold points to 0 and 1,
respectively. The $\tau_2 \rightarrow \infty$ degeneration point gets mapped to $j\rightarrow
\infty$. A simple solution would then be
\begin{equation}
j(\tau) = \frac{1}{z-z_0} + j_0
\end{equation}
At infinity, the shape of the fiber is constant, {\it i.e.\ } $\tau_\infty = j^{-1}(j_0)$ and thus this
non-compact solution may be glued to any other solution with constant $\tau$ at infinity.
However, since $\tau$ covers the entire fundamental domain once, there will be two points in the
base where $\tau(z) = \tau_{\mathbb{Z}_6}$ or $\tau_{\mathbb{Z}_4}$. Over these points, the fiber is an
orbifold of the two-torus. These singular points cannot be resolved in a Ricci-flat way, so this
solution cannot be used for superstrings.
There is, however, a six-string solution which evades this problem \cite{Greene:1989ya}. It is
possible to collect six strings together in such a way that $\tau$ approaches a constant value at
infinity. The modular parameter can be given implicitly by, {\it e.g.\ }, the elliptic curve
\begin{equation}
y^2 = x (x-1)(x-2)(x-z)
\end{equation}
There are no orbifold points now because $\tau$ can be written as a holomorphic function over
the base. The above equation describes three double degenerations, that is, three strings of
tension twice the basic unit. In the limit when the strings are on top of one another, we obtain
what is known (according to the Kodaira classification) as a $D_4$ singularity with deficit
angle $180^\circ$.
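The degeneration structure of this six-string solution can be confirmed by a quick computation: the fiber degenerates where two roots of the quartic collide, i.e.\ at zeros of its discriminant in $x$. A minimal sketch (using the \texttt{sympy} computer-algebra library; not part of the original discussion):

```python
import sympy as sp

x, z = sp.symbols('x z')

# Six-string solution: y^2 = x (x - 1)(x - 2)(x - z)
p = x * (x - 1) * (x - 2) * (x - z)

# The torus fiber degenerates where two roots of the quartic collide,
# i.e. at zeros of the discriminant with respect to x.
disc = sp.discriminant(p, x)

# Double zeros at z = 0, 1, 2: three strings of twice the basic tension.
assert sp.simplify(disc - 4 * z**2 * (z - 1)**2 * (z - 2)**2) == 0
```

The double zeros at $z=0,1,2$ are precisely the three strings of twice the basic tension.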
The monodromy of the fiber around this singularity
is described by
\begin{equation}
\mathcal{M}_{D_4} = \left( \begin{array}{rr}
-1 & 0 \\
0 & \ -1
\end{array}
\right)
\end{equation}
acting on $\binom{\omega_1}{\omega_2}$ with $\tau \equiv \omega_1/\omega_2$. This
monodromy decomposes into that of six elementary strings which are mutually
non-local\footnote{For explicit monodromies for the six strings, see \cite{Gaberdiel:1997ud}.}.
This can be generalized to more than six strings using the Weierstrass equation
\begin{equation}\label{weierstrasseqn}
y^2 = x^3 + f(z)x + g(z)
\end{equation}
The modular parameter of the torus is determined by
\begin{equation}
j(\tau(z)) = \frac{4 f^3}{4f^3+27g^2}
\end{equation}
Whenever the numerator vanishes, $\tau=\tau_{\mathbb{Z}_6}$ and we are at an orbifold point. Note,
however, that a simple zero of $f$ is a triple zero of $f^3$, so $j$ has a triple root there and
no orbifolding of the fiber is necessary. The same applies for the $\mathbb{Z}_4$ points. The
strings are located where $\tau_2 \rightarrow \infty$, that
is where the {\it modular discriminant} $\Delta\equiv 4f^3+27g^2$ vanishes.
Note that the monodromy of the fibers around a smooth point is automatically the identity
in such a construction.
\vskip 0.5cm \noindent {\bf Kodaira classification.} Degenerations of elliptic fibrations have
been classified according to
their monodromy
by Kodaira.
For convenience, we summarize the result
in the following table
\cite{Bershadsky:1996nh}:
\vskip 0.5cm \vskip 0cm
\begin{center}
\begin{tabular}{|c | c | c | c | c | }
\hline {\bf ord(f)} & {\bf ord(g)} & {\bf ord($\Delta$)} & {\bf monodromy} & {\bf
singularity}
\\ \hline
$\ge 0$ & $\ge 0$ & 0 & {\footnotesize $\mtwo{1}{0}{0}{1}$} & none \\ \hline
$0 $ & $ 0$ & $n$ & {\footnotesize $\mtwo{1}{n}{0}{1}$} & $A_{n-1}$ \\ \hline
$\ge 1 $ & $1$ & $2$ & {\footnotesize $\mtwo{1}{1}{-1}{0}$} & none \\ \hline
$1$ & $\ge 2$ & $3$ & {\footnotesize $\mtwo{0}{1}{-1}{0}$} & $A_{1}$ \\ \hline
$\ge 2$ & $2$ & $4$ & {\footnotesize $\mtwo{0}{1}{-1}{-1}$} & $A_{2}$ \\ \hline
$2$ & $\ge 3$ & $n+6$ & {\footnotesize $\mtwo{-1}{-n}{0}{-1}$} & $D_{n+4}$ \\ \hline
$\ge 2$ & $3$ & $n+6$ & {\footnotesize $\mtwo{-1}{-n}{0}{-1}$} & $D_{n+4}$ \\ \hline
$\ge 3$ & $4$ & $8$ & {\footnotesize $\mtwo{-1}{-1}{1}{0}$} & $E_{6}$ \\ \hline
$3$ & $\ge 5$ & $9$ & {\footnotesize $\mtwo{0}{-1}{1}{0}$} & $E_{7}$ \\ \hline
$\ge 4$ & $5$ & $10$ & {\footnotesize $\mtwo{0}{-1}{1}{1}$} & $E_{8}$ \\ \hline
\end{tabular}
\end{center}
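The finite-order monodromies in the table can be checked numerically: a fiber with $\textrm{ord}(\Delta)=e$ carries a deficit angle $2\pi e/12$, and its monodromy is (up to conjugation) a rotation by that angle, hence of order $12/\gcd(e,12)$. A small NumPy sketch (the $I_n$ and $D_{n+4}$, $n>0$, rows have infinite-order monodromy and are omitted; the Kodaira fiber names are added here for orientation):

```python
import numpy as np
from math import gcd

def order(M, nmax=12):
    # smallest k <= nmax with M^k = 1 (None would signal infinite order)
    A = np.eye(2, dtype=int)
    for k in range(1, nmax + 1):
        A = A @ M
        if np.array_equal(A, np.eye(2, dtype=int)):
            return k
    return None

# (Kodaira fiber, monodromy, ord(Delta)) for the finite-order rows of the table
fibers = [
    ("II",        [[1, 1], [-1, 0]],   2),
    ("III / A1",  [[0, 1], [-1, 0]],   3),
    ("IV / A2",   [[0, 1], [-1, -1]],  4),
    ("I0* / D4",  [[-1, 0], [0, -1]],  6),
    ("IV* / E6",  [[-1, -1], [1, 0]],  8),
    ("III* / E7", [[0, -1], [1, 0]],   9),
    ("II* / E8",  [[0, -1], [1, 1]],  10),
]
for name, M, e in fibers:
    # deficit angle 2*pi*e/12  <=>  monodromy of order 12 / gcd(e, 12)
    assert order(np.array(M)) == 12 // gcd(e, 12), name
```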
\clearpage
\vskip 0.5cm \noindent {\bf Constructing K3.} One can construct a compact example
where the fiber experiences 24 $A_0$ degenerations.
In the Weierstrass description (\ref{weierstrasseqn}),
this means that $f$ has degree 8, $g$ has degree 12, and $\Delta$ has degree 24.
This is the semi-flat description of a $K3$ manifold.
In a certain limit where we group the strings
into four composite $D_4$ singularities, the base is flat and the total space becomes
$T^4/\mathbb{Z}_2$. The base can be obtained by gluing four flat triangles as seen in \fref{tetrah}. At
each $D_4$ degeneration, the base has $180^\circ$ deficit angle which adds up to $4\pi$ and
closes the space into a flat sphere with the curvature concentrated at four points.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=3.5cm,angle=0,origin=c]{t4z2.eps}
\caption{Base of the $T^4/\mathbb{Z}_2$ orbifold. The $\mathbb{Z}_2$ action inverts the base coordinates and has four fixed points denoted by red stars.
They have $180^\circ$ deficit angle. As the arrows show, one has to fold the diagram and this gives an $S^2$. }
\label{t4fund}
\end{center}
\end{figure}
\vskip -0.5cm
As we have seen, in two dimensions the Weierstrass equation solves the problem of orbifold
points. In higher dimensions, we do not have this tool, but we can still try to glue patches of
spaces in order to get compact solutions. Gluing is especially easy if the base is flat.
However, generically this is not the case. Looking at the equations of motion
(\ref{einsteineqn}) and (\ref{flateq2}), we see that a flat base can be obtained if $\tau(z)$ is
constant. This happens in the case of $D_4$ and $E_n$ singularities.
Our discussion in this paper will (unfortunately) be restricted to these singularities.
The cosmic string metric is singular in the above semi-flat description. It must be slightly
modified in order to get a smooth Calabi-Yau metric for the total space. This will be discussed
in Appendix \ref{sfvs}.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=3.0cm,angle=0,origin=c]{k3.eps}
\caption{Flat $S^2$ base constructed from four triangles: base of $K3$ in the $\mathbb{Z}_2$ orbifold limit.}
\label{tetrah}
\end{center}
\end{figure}
\newpage
\subsection{Three dimensions}
\label{threedim}
In two dimensions, the only smooth compact Calabi-Yau is the $K3$ surface. In three dimensions,
there are many different spaces and therefore the situation is much more complicated. The SYZ
conjecture \cite{Strominger:1996it} says that every Calabi-Yau threefold which has a geometric
mirror is a special Lagrangian $T^3$ fibration with possibly degenerate fibers at some points.
For the generic case, the base is an $S^3$. Without the special Lagrangian condition, the
conjecture has been well understood in the context of topological mirror symmetry
\cite{Gross:1999hc, Tomasiello:2005bp}. There, the degeneration loci form a (real) codimension
two subset in the base. A graph $\Gamma$ is formed by edges and trivalent vertices. The fiber
suffers from monodromy around the edges. This is specified by a homomorphism
\begin{equation}
M: \ \pi_1(S^3 \setminus \Gamma) \longrightarrow SL(3,\mathbb{Z})
\end{equation}
There are two types of vertices which contribute $\pm 1$ to the Euler character of the total
space\footnote{These positive and negative vertices are also called type (1,2) / type (2,1)
\cite{Gross:1999hc} or type III / type II \cite{Ruan1} vertices by different authors. For an
existence proof of metric on the vertex, see \cite{Loftin:2004qu}.}. At the vertices, the
topological junction condition relates the monodromies of the edges.
One of the most studied non-trivial Calabi-Yau spaces is the quintic in $\mathbb{P}^4$. However, even
the topological description of this example is fairly complicated \cite{Gross:1999hc}. The
topological construction contains $250+50$ vertices and $450$ edges in the $S^3$ base.
Constructing not only topological, but {\it special Lagrangian} SYZ fibrations is a much harder
task. In fact, it is expected that away from the semi-flat limit, the real codimension two
singular loci in the base get promoted to codimension one singularities, {\it i.e.\ } surfaces in three
dimensions. These were termed ribbon graphs \cite{Joyce:2000ru} and their description remains
elusive.
\vskip 0.5cm \noindent {\bf A compact orbifold example.} In the following, we will describe the
singular $T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2$ orbifold in the SYZ fibration picture. One starts with $T^6$,
a product of three tori with complex coordinates $z_i$. Without discrete torsion, the
orbifold action is generated by the geometric transformations,
\begin{eqnarray}
\alpha: (z_1, z_2, z_3) \mapsto (-z_1, -z_2, z_3) \\
\beta: (z_1, z_2, z_3) \mapsto (-z_1, z_2, -z_3)
\end{eqnarray}
These transformations have unit determinant and thus the resulting space may be resolved into a
smooth Calabi-Yau manifold.
In order to obtain a fibration structure, we need to specify the base and the fibers. For the
base coordinates, we choose $x_i \equiv \textrm{Re}(z_i)$ and for the fibers $y_i \equiv
\textrm{Im}(z_i)$. Under the orbifold action, fibers are transformed into fibers and they do not
mix with the base\footnote{It is much harder in the general case to find a fibration that
commutes with the group action.}.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=4.5cm,angle=0,origin=c]{t6lines.eps}
\caption{Singularities in the base of $T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2$. The big dashed cube is the original $T^3$ base.
The orbifold group generates the singular lines as depicted in the figure. The red dots show the intersection points of these edges.}
\label{singut6z2z2}
\end{center}
\end{figure}
\vskip 0.5cm \noindent {\bf Degeneration loci in the base.} The base originally is a $T^3$. What
happens after orbifolding? If we fix, for instance, the $x_3$ coordinate, then the orbifold
action locally reduces to $\alpha$ (since the other two non-trivial group elements change
$x_3$). This means that we simply obtain four fixed points in this slice of the base. This is
exactly analogous to the $T^4/\mathbb{Z}_2$ example. The fixed points correspond to $D_4$ singularities
with a deficit angle of $180^\circ$. As we change $x_3$, we obtain four parallel edges in the
base. By keeping instead $x_1$ or $x_2$ fixed, we get perpendicular lines corresponding to
conjugate $D_4$s whose monodromies act on another $T^2$ in the $T^3$ fiber. Altogether, we get
$3\times 4$ lines of degeneration as depicted in \fref{singut6z2z2}. These edges meet at
(half-)integer points in the $T^3$ base.
Some parts of the base have been identified by the orbifold group. We can take this into account
by a folding procedure which we have already seen for $T^4 / \mathbb{Z}_2$. The degeneration loci are
the edges of a cube. The volume of this cube is $\frac{1}{8}$ of the volume of the original
$T^3$. The base can be obtained by gluing six pyramids on top of the faces (see
\fref{cube_fold}). The top vertices of these pyramids are the reflections of the center of the
cube across its faces, and thus the total volume is twice that of the cube. This polyhedron is a
Catalan solid\footnote{Catalan solids are duals to Archimedean solids which are convex polyhedra
composed of two or more types of regular polygons meeting in identical vertices. The dual of the
rhombic dodecahedron is the cuboctahedron.}: the {\it rhombic dodecahedron}. (Note that one can
also construct the same base by gluing two separate cubes together along their faces.)
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=6.5cm,angle=0,origin=c]{rhombic.eps} \qquad
\includegraphics[totalheight=6.5cm,origin=c]{cube_fold.eps}
\caption{(i) Rhombic dodecahedron: fundamental domain for the base of $T^6/\mathbb{Z}_2\times\mathbb{Z}_2$. Six pyramids are glued on top of the faces of a cube.
Neighboring pyramid triangles give rhombi since the vertices are coplanar ({\it e.g.\ } $ABCD$). (ii) The $S^3$ base can be constructed by
identifying triangles as shown by the arrows. After gluing, the deficit angle around cube edges is $180^\circ$ which is appropriate for a
$D_4$ singularity. The dihedral angles of the dashed lines are $120^\circ$ and since three of
them are glued together, there is no deficit angle. The tips of the
pyramids get identified and the space finally becomes an $S^3$.}
\label{cube_fold}
\end{center}
\end{figure}
In order to have a compact space, we finally glue the faces of the pyramids to neighboring faces
(see the right-hand side of \fref{cube_fold}). This is analogous to the case of $T^4 / \mathbb{Z}_2$
where triangles were glued along their edges (\fref{t4fund}).
\newpage
\vskip 0.5cm \noindent {\bf The topology of the base.} The base is an $S^3$ which can be seen as
follows\footnote{We thank A. Tomasiello for help in proving this.}. First fold the three rhombi
$ABCD$, $AFGD$ and $ABEF$, and the corresponding three on the other side of the fundamental
domain. Then, we are still left with six rhombi that we need to fold. It is not hard to see that
the problem is topologically the same as having a $B^3$ ball with boundary $S^2$. Twelve
triangles cover the $S^2$ and we need to glue them together as depicted in \fref{cube_s3}. This
operation is the same as taking the $S^2$ and identifying its points by an $x\mapsto -x$ flip.
This, in turn, exhibits the space as an $S^1$ fibration over $D^2$. The fiber vanishes
at the boundary of the disk. This is further equivalent to an $S^2$ fibration over an interval
where the fiber vanishes at both endpoints. This space is simply an $S^3$. The degeneration loci
are on the $S^2$ equator of this $S^3$ base and form the edges of the cube.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=4cm,angle=0,origin=c]{s3.eps}
\caption{The base of $T^6/\mathbb{Z}_2\times\mathbb{Z}_2$ is homeomorphic to a three-ball with an $S^2$ boundary which has to be folded as shown in the figure.}
\label{cube_s3}
\end{center}
\end{figure}
\vskip 0.5cm \noindent {\bf Edges and vertices.} The monodromies of the edges are shown in
\fref{cube_mono}. The letters on the degeneration edges denote the following $SL(3)$
monodromies:
\begin{equation}
x =
\left( \begin{array}{rrr}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{array}
\right)
\quad
y =
\left( \begin{array}{rrr}
-1 & 0 & 0 \\
0 & \ 1 & 0 \\
0 & 0 & -1
\end{array}
\right)\quad
z =
\left( \begin{array}{rrr}
-1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & \ 1
\end{array}
\right)
\end{equation}
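It is straightforward to check that these matrices realize the expected relations: each $D_4$ edge monodromy is an involution, any two multiply to the third (closing, together with the identity, into $\mathbb{Z}_2 \times \mathbb{Z}_2$), and the three edges meeting at a composite vertex have trivial total monodromy. A minimal NumPy sketch:

```python
import numpy as np

x = np.diag([1, -1, -1])
y = np.diag([-1, 1, -1])
z = np.diag([-1, -1, 1])
I = np.eye(3, dtype=int)

# each D4 edge monodromy is an involution ...
for m in (x, y, z):
    assert np.array_equal(m @ m, I)

# ... any two multiply to the third, closing into Z2 x Z2 ...
assert np.array_equal(x @ y, z)

# ... and the three edges meeting at a vertex have trivial total monodromy
assert np.array_equal(x @ y @ z, I)
```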
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=3.5cm,angle=0,origin=c]{cube.eps}
\caption{Monodromies for the edges.}
\label{cube_mono}
\end{center}
\end{figure}
This orbifold example contained $D_4$ strings. These are composite edges made out of six
``mutually non-local'' elementary edges. The edges have $180^\circ$ deficit angle around them
which is $6\times \frac{\pi}{6}$ where $\frac{\pi}{6}$ is the deficit angle of the elementary
string.
\comment{ This can be easily seen as the dihedral angles of the cube are $90^\circ$ and we glued
two cubes together ($180^\circ = 360^\circ - 2\times 90^\circ$). There are eight trivalent
(composite) vertices. Using the fact that the solid angle of a vertex can be obtained from the
neighboring dihedral angles as
\begin{equation}
\theta = \alpha + \beta + \gamma - \pi
\end{equation}
or just simply noticing that the cube extends into an octant from any of its vertices, we arrive
at the conclusion that the {\it solid deficit angle} of a vertex is $3\pi$. }
Note that the base is flat. This made it possible to easily glue the fundamental cell to itself
yielding a compact space. Since the edges around any vertex meet in a symmetric way, the
cancellation of forces is automatic.
There are other spaces that one can describe using $D_4$ edges and the above mentioned composite
vertices. Some examples are presented in Section \ref{examplesec}. The strategy is to make a
compact space by gluing polyhedra like the above described cubes, then make sure that the
dihedral deficit angles are appropriate for the $D_4$ singularity.
\newpage
\subsection{Flat vertices}
Codimension two degeneration loci meet at vertices in the base. In the generic case, these are
trivalent vertices of elementary strings. Such strings have $30^\circ$ deficit angle around them
measured at infinity. This creates a solid deficit angle around the vertex.
In some cases when composite singularities meet, the base is flat and the vertex is easier to
understand. In particular, the total deficit angle arises already in the vicinity of the
strings. An example was given in Section \ref{threedim} where composite vertices arise from the
``collision'' of three $D_4$ singularities (see \fref{cube_fold}). The singular edges have a
deficit angle $\pi$. The vertex can be constructed by taking an octant of three dimensional
space and gluing another octant to it along the boundary walls. The curvature is then
concentrated in the axes. The solid angle can be computed as twice the solid angle of an octant.
This gives $\pi$ (or a deficit solid angle of $3\pi$).
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=4cm,angle=0,origin=c]{cone.eps}
\caption{The solid angle at the apex is determined by the dihedral angles between the planes.}
\label{conepic}
\end{center}
\end{figure}
In the general (flat) case, a composite vertex may be described by gluing two identical cones
(the analogs of octants). Such a cone is shown in \fref{conepic}. Note that the solid angle
spanned by three vectors is given by the formula
\begin{equation}
\theta = \alpha + \beta + \gamma - \pi
\end{equation}
where $\alpha$, $\beta$ and $\gamma$ are the dihedral angles at the edges. This can be used to
compute the solid angle around a composite vertex.
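This dihedral-angle formula is Girard's spherical-excess theorem for the spherical triangle cut out by the cone. As a sanity check, it can be compared with an independent expression for the same solid angle, the Van Oosterom--Strackee formula; a NumPy sketch with the cone specified by three unit edge vectors:

```python
import numpy as np

def solid_angle_excess(a, b, c):
    # Girard: theta = alpha + beta + gamma - pi; the dihedral angle at each
    # edge follows from the spherical law of cosines.
    def dihedral(u, v, w):
        cu, cv, cw = np.dot(v, w), np.dot(u, w), np.dot(u, v)
        return np.arccos((cu - cv * cw) /
                         (np.sqrt(1 - cv**2) * np.sqrt(1 - cw**2)))
    return dihedral(a, b, c) + dihedral(b, c, a) + dihedral(c, a, b) - np.pi

def solid_angle_vos(a, b, c):
    # independent check: Van Oosterom-Strackee formula for the same cone
    num = abs(np.dot(a, np.cross(b, c)))
    den = 1 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2 * np.arctan2(num, den)

# octant: alpha = beta = gamma = pi/2 gives theta = pi/2, as used in the text
assert np.isclose(solid_angle_excess(*np.eye(3)), np.pi / 2)

# a generic cone: the two expressions agree
rng = np.random.default_rng(1)
v = rng.normal(size=(3, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
assert np.isclose(solid_angle_excess(*v), solid_angle_vos(*v))
```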
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=3cm,angle=0,origin=c]{balance.eps}
\caption{Flat vertex. $A$, $B$ and $C$ are singular edges. $C$ is pointing towards the reader. The dashed lines must be glued together
to account for the deficit angle around $C$.}
\label{balpic}
\end{center}
\end{figure}
The singular edges have a tension which is proportional to the deficit angle around them. This
leads to the problem of force balance. In \fref{balpic}, a flat vertex is shown. The two solid
lines ($A$ and $B$) are degeneration loci. The third edge ($C$) is pointing towards the reader
as indicated by the arrow head. The deficit angle around $C$ is shown by the shaded area. In the
weak tension limit (where we rescale the deficit angles by a small number), one condition for
force balance is that these edges are in a plane. (Otherwise, energy could be decreased by
moving the vertex.) This can be generalized for almost flat spaces by ensuring that $\alpha +
\beta = \gamma$. This is automatic when we construct the neighborhood of a vertex by gluing two
identical cones\footnote{In the weak tension limit, the two identical cones almost fill two
half-spaces. The slopes of the edges are dictated by the tensions as in \cite{Sen:1997xi}. We
leave the proof to the interested reader.}.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=3.0cm,angle=0,origin=c]{junction.eps}
\caption{Junction condition for monodromies. The red loop around $A$ can be smoothly deformed into two loops around $B$ and $C$.}
\label{jconpic}
\end{center}
\end{figure}
Another problem to be solved is related to the fiber monodromies. These can be described by
matrices $A$, $B$ and $C$ (see \fref{jconpic}). The loop around one of the edges (say $A$) can
be smoothly deformed into the union of the other two ($B, C$). This gives the monodromy
condition\footnote{Since monodromy matrices do not generically commute, it is important to keep
track of the branch cut planes.} $ABC=1$.
Some composite strings are easier to describe than elementary ones because the base metric can
be flat around them. Such singularities are $D_4$, $E_6$, $E_7$ and $E_8$ with deficit angles
$\pi$, $4\pi/3$, $3\pi/2$ and $5\pi/3$, respectively \cite{Greene:1989ya}. Vertices where
composite lines meet can also be easily found by studying flat $\mathbb{C}^3$ orbifolds. Here we list
some of the vertices that will later arise in the examples.
\vskip 0.5cm \vskip 0cm
\begin{table}[htdp]
\begin{center}
\begin{tabular}{|c | c | c |}
\hline {\bf orbifold group} & {\bf colliding singularities} & {\bf solid angle} \\
\hline
$\mathbb{Z}_2 \times \mathbb{Z}_2$ & $D_4 - D_4 - D_4$ & $\pi$ \\
$\mathbb{Z}_2 \times \mathbb{Z}_4$ & $D_4 - D_4 - E_7$ & $\pi/2$ \\
$\Delta_{12}$ & $D_4 - E_6 - E_6$ & $\pi/3$ \\
$\Delta_{24}$ & $D_4 - E_6 - E_7$ & $\pi/6$ \\
\hline
\end{tabular}
\end{center}
\caption{Examples for composite vertices.}
\end{table}
\newpage
We have already seen the $\mathbb{Z}_2 \times \mathbb{Z}_2$ vertex in Section \ref{threedim}. If the vertex
is located at the origin, then the strings are stretched along the coordinate axes,
\begin{equation}
D^{(1)}_4 : (1,0,0) \qquad D^{(2)}_4 : (0,1,0) \qquad D^{(3)}_4 : (0,0,1)
\end{equation}
The second example is generated by
\begin{eqnarray*}
\alpha : (z_1, z_2, z_3) & \mapsto & (-z_3, z_2, z_1) \\
\beta : (z_1, z_2, z_3) & \mapsto & (z_1, -z_2, - z_3)
\end{eqnarray*}
It contains different colliding singularities. Their directions are given by
\begin{equation}
D^{(1)}_4 : (1,0,0) \qquad D^{(2)}_4 : (1,0,1) \qquad E_7 : (0,1,0)
\end{equation}
The $\Delta_{12}$ group has $(\mathbb{Z}_2)^2$ and $\mathbb{Z}_3$ subgroups. It is generated by
\begin{eqnarray*}
\alpha: \, (z_1, z_2, z_3) &\mapsto & (z_2, z_3, z_1) \\
\beta: \, (z_1, z_2, z_3) &\mapsto & (-z_1, -z_2, z_3)
\end{eqnarray*}
The string directions are
\begin{equation}
D_4 : (1,0,0) \qquad E^{(1)}_6 : (1,1,1) \qquad E^{(2)}_6 : (1,1,-1)
\end{equation}
The last example is generated by combining $\mathbb{Z}_3$ and $\mathbb{Z}_4$ generators,
\begin{eqnarray*}
\alpha : (z_1, z_2, z_3) & \mapsto & (z_2, z_3, z_1) \\
\beta : (z_1, z_2, z_3) & \mapsto & (-z_2, z_1, z_3)
\end{eqnarray*}
which generate the $\Delta_{24}$ group. The directions of the strings are the following,
\begin{equation}
D_4 : (1,1,0) \qquad E_6 : (1,1,1) \qquad E_7 : (1,0,0)
\end{equation}
This is not an exhaustive list; a thorough study based on the finite subgroups of $SU(3)$
\cite{Fairbairn} would be interesting.
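The orders of these orbifold groups are easy to confirm by brute force: generate the closure of the integer matrix group spanned by $\alpha$ and $\beta$, acting geometrically on $(z_1,z_2,z_3)$. A sketch for $\Delta_{12}$ (order 12) and $\Delta_{24}$ (order 24):

```python
import numpy as np

def closure(gens):
    # brute-force closure of the matrix group generated by `gens`
    seen = {tuple(np.eye(3, dtype=int).ravel())}
    frontier = list(seen)
    while frontier:
        new = []
        for t in frontier:
            A = np.array(t, dtype=int).reshape(3, 3)
            for g in gens:
                h = tuple((A @ g).ravel())
                if h not in seen:
                    seen.add(h)
                    new.append(h)
        frontier = new
    return seen

# alpha: (z1,z2,z3) -> (z2,z3,z1) as a matrix acting on column vectors
cycle = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])

# Delta_12: cyclic permutation together with (z1,z2,z3) -> (-z1,-z2,z3)
assert len(closure([cycle, np.diag([-1, -1, 1])])) == 12

# Delta_24: cyclic permutation together with (z1,z2,z3) -> (-z2,z1,z3)
beta24 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
assert len(closure([cycle, beta24])) == 24
```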
\newpage
\section{Stringy monodromies}
In this section, we wish to extend the discussion by including the full perturbative duality
group of type II string theory on $T^3$ in the possible set of monodromies. We will find that
this duality group can be interpreted as the geometric duality group of an auxiliary $T^4$. The
extra circle is to be distinguished from the M-theory circle but it is related to it by a
U-duality transformation.
For simplicity, the Ramond-Ramond field strengths will be turned off. This allows us to use
perturbative dualities only. However, in moduli stabilization these fields play an important
role. In fact, in the Appendices \ref{hwapp} and \ref{t5app}, we use U-duality
\cite{Hull:1994ys} monodromies which act on RR-fields in order to describe two familiar
phenomena.
From the worldsheet point of view, string compactifications are typically expected to be
non-geometric, since the 2d CFT does not necessarily have a geometric target space. Even though
we construct our examples directly based on intuition from supergravity, they will have a
worldsheet description as modular invariant asymmetric orbifolds.
For other related works on non-geometric spaces, see \cite{Kumar:1996zx, Hellerman:2002ax,
Kaloper:1999yr, Hitchin:2004ut, Gualtieri:2003dx, Grana:2004bg, Kachru:2002sk, Gurrieri:2002wz,
Grana:2006kf, Lawrence:2006ma, Dabholkar:2005ve, Hull:2004in, Flournoy:2004vn, Hull:2006va,
Shelton:2005cf, Shelton:2006fd, Becker:2006ks} and references therein.
In the following, we study the perturbative duality group in Type IIA string theory compactified
on a flat three-torus. We gain intuition by studying the reduced 7d Lagrangian of the
supergravity approximation. Finally we discuss how U-duality relates non-geometric
compactifications to $G_2$ manifolds in M-theory which will be fruitful when constructing
examples in the next section.
\subsection{Reduction to seven dimensions}
\noindent {\bf Action and symmetries.} Let us consider the bosonic sector of (massless) 10d Type
IIA supergravity,
\begin{equation}
S_\textrm{IIA} = S_\textrm{NS}+S_\textrm{R}+S_\textrm{CS}
\end{equation}
where
\begin{equation}
S_\textrm{NS} = \frac{1}{2\kappa^2_{10}} \int d^{10}x \sqrt{-g} \, e^{-2\phi} (R+4\partial_\mu
\phi \partial^\mu \phi -\frac 1 2 |H_3|^2)
\end{equation}
\begin{equation}
S_\textrm{R} = -\frac{1}{4\kappa^2_{10}} \int d^{10}x \sqrt{-g} \, (|F_2|^2+|\tilde F_4|^2)
\end{equation}
and the Chern-Simons term is
\begin{equation}
S_\textrm{CS} = -\frac{1}{4\kappa^2_{10}} \int B \wedge F_4 \wedge F_4
\end{equation}
with $\tilde F_4 = dA_3 -A_1 \wedge dB$ and $\kappa^2_{10}=\kappa^2_{11}/(2\pi R)$.
First we set the RR fields to zero\footnote{This can be done consistently since the
$(-1)^{F_L}$ symmetry forbids a tadpole for any RR field.}. This truncates the theory to the NS
part which is identical to the IIB $S_\textrm{NS}$ action. Compactifying Type IIA on a flat
$T^3$ yields the perturbative T-duality group $SO(3,3,\mathbb{Z})$ which acts on the coset
$SO(3,3,\mathbb{R})/SO(3)^2$.
\newpage
The equivalences of Lie algebras
\begin{equation}
\mathfrak{so}(3,3) \cong \mathfrak{sl}(4)
\end{equation}
\begin{equation}
\mathfrak{so}(3) \oplus \mathfrak{so}(3) \cong \mathfrak{su}(2) \oplus \mathfrak{su}(2) \cong \mathfrak{so}(4)
\end{equation}
enable us to realize the T-duality group as an $SL(4,\mathbb{Z})$ action on $SL(4,\mathbb{R})/SO(4)$. This
latter space is simply the moduli space of a flat $T^4$ with constant volume. Therefore,
we can think of the T-duality group as the mapping class group of an auxiliary four-torus of
unit volume. What is the metric on this $T^4$
in terms of the data of the $T^3$? To answer this question, we have to study the
Lagrangian.
\vskip 0.5cm \noindent {\bf Reduction to seven dimensions.} One obtains the following terms
after reduction on $T^3$ \cite{Maharana:1992my} (see Appendix \ref{redapp} for more details and
notation)
\begin{equation}
S=\int dx \sqrt{-g} e^{-\phi} \mathcal{L}
\end{equation}
with $\mathcal{L}=\mathcal{L}_1+\mathcal{L}_2+\mathcal{L}_3+\mathcal{L}_4$ and
\begin{eqnarray}
\mathcal{L}_1 &=& R + \partial_\mu \phi \partial^\mu \phi \\
\mathcal{L}_2 &=& \frac{1}{4}(\partial_\mu G_{\alpha\beta} \partial^\mu G^{\alpha\beta} -
G^{\alpha\beta}G^{\gamma\delta} \partial_\mu B_{\alpha\gamma} \partial^\mu B_{\beta\delta}) \\
\mathcal{L}_3 &=& -\frac 1 4 g^{\mu\rho}g^{\nu\lambda} (G_{\alpha\beta} F^{(1)\alpha}_{\mu\nu}
F^{(1)\beta}_{\rho\lambda} + G^{\alpha\beta} H_{\mu\nu\alpha} H_{\rho\lambda\beta}) \\
\mathcal{L}_4 &=& -\frac{1}{12}H_{\mu\nu\rho} H^{\mu\nu\rho}
\end{eqnarray}
The relation of these fields to the ten-dimensional fields is presented in Appendix
\ref{redapp}. In order to see the $SO(d,d,\mathbb{Z})$ symmetry, one introduces the symmetric positive
definite $2d\times 2d$ matrix
\begin{equation}
M=\left( \begin{array}{cc}
G^{-1} & -G^{-1}B \\
BG^{-1} & \ \ G-BG^{-1}B
\end{array}
\right) \in SO(3,3)
\end{equation}
(The relative sign of the off-diagonal blocks ensures that $M$ is indeed symmetric, since $B^T = -B$.)
The kinetic terms $ \mathcal{L}_2$ can be written as the $\sigma-$model Lagrangian
\begin{equation}
\mathcal{L}_2 = \frac 1 8 \textrm{Tr}(\partial_\mu M^{-1} \partial^\mu M)
\end{equation}
The other terms in the Lagrangian are also invariant under $SO(3,3)$.
\vskip 0.5cm \noindent {\bf The SL(4) duality symmetry and ``N-theory''.} Let us now put the
bosonic action in a manifestly $SL(4)$ invariant form (see \cite{Brace:1998xz}). Rewrite
$\mathcal{L}_2$ as
\begin{equation}
\mathcal{L}_2= \frac 1 8 \textrm{Tr}(\partial_\mu M^{-1} \partial^\mu M) = \frac 1 4 \textrm{Tr}(\partial_\mu N^{-1} \partial^\mu
N),
\end{equation}
where we introduced the symmetric $SL(4)$ matrix\footnote{This matrix parametrizes the eight
complex structure moduli, and one K\"{a}hler\, modulus of $T^4$. }
\begin{equation}
N_{4\times 4}=(\textrm{det} \, G)^{-1/2} \left( \begin{array}{cc}
G & G\vec b \\
\vec b^T G & \ \ \textrm{det} \, G + \vec b^T G \vec b
\end{array}
\right)
\label{nmetric}
\end{equation}
\begin{equation}
B_{ij}=\epsilon_{ijk} b_k \qquad b_i=\frac 1 2 \epsilon_{ijk} B_{jk}.
\end{equation}
The equality of the Lagrangians can be checked by lengthy algebraic manipulation (or with
computer algebra software). We included the Hodge-dualized B-field in the metric as a
Kaluza-Klein vector. The inverse of $N$ is
\begin{equation}
N^{-1}=(\textrm{det} \, G)^{-1/2} \left( \begin{array}{cc}
(\textrm{det} \, G)\,G^{-1} + \vec b\, \vec b^T & \ \ -\vec b \\
-\vec b^T & \ \ 1
\end{array}
\right).
\label{nmetrici}
\end{equation}
Keeping $N$ symmetric, the Lagrangian is invariant under the global transformation,
\begin{equation}
N(x) \mapsto U^T N(x) U, \ \ \textrm{with} \ U\in SL(4).
\end{equation}
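Both the displayed inverse and the equality of the two kinetic terms can be verified numerically: build $M$ and $N$ from a random metric and B-field (with the relative sign of the off-diagonal blocks of $M$ chosen so that $M$ is symmetric), and compare $\frac18\textrm{Tr}(\partial M^{-1}\partial M)$ with $\frac14\textrm{Tr}(\partial N^{-1}\partial N)$ along a random direction in field space. A NumPy finite-difference sketch (not a substitute for the algebraic proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def M_mat(G, B):
    # SO(3,3) matrix from the metric and B-field (sign chosen so that M = M^T)
    Gi = np.linalg.inv(G)
    return np.block([[Gi, -Gi @ B], [B @ Gi, G - B @ Gi @ B]])

def N_mat(G, b):
    # symmetric SL(4) matrix with the dualized B-field as a Kaluza-Klein vector
    d = np.linalg.det(G)
    Gb = (G @ b)[:, None]
    corner = np.array([[d + b @ G @ b]])
    return np.block([[G, Gb], [Gb.T, corner]]) / np.sqrt(d)

def kinetic(fun, args, dargs, h=1e-6):
    # Tr(dX^{-1} dX) = -Tr(X^{-1} X' X^{-1} X') along a direction in field space
    Xp = (fun(*[a + h * da for a, da in zip(args, dargs)]) -
          fun(*[a - h * da for a, da in zip(args, dargs)])) / (2 * h)
    A = np.linalg.inv(fun(*args)) @ Xp
    return -np.trace(A @ A)

A0 = rng.normal(size=(3, 3))
G = A0 @ A0.T + 3 * np.eye(3)                # random positive-definite metric
C0 = rng.normal(size=(3, 3)); B = C0 - C0.T  # random antisymmetric B-field
b = np.array([B[1, 2], B[2, 0], B[0, 1]])    # b_i = (1/2) eps_ijk B_jk

# N is in SL(4) and its inverse is the one displayed above
N = N_mat(G, b)
assert np.isclose(np.linalg.det(N), 1.0)
blk = np.linalg.det(G) * np.linalg.inv(G) + np.outer(b, b)
Ninv = np.block([[blk, -b[:, None]],
                 [-b[None, :], np.array([[1.0]])]]) / np.sqrt(np.linalg.det(G))
assert np.allclose(N @ Ninv, np.eye(4))

# the two kinetic terms agree: (1/8) Tr(dM^-1 dM) = (1/4) Tr(dN^-1 dN)
dG0 = rng.normal(size=(3, 3)); dG = dG0 + dG0.T
dC0 = rng.normal(size=(3, 3)); dB = dC0 - dC0.T
db = np.array([dB[1, 2], dB[2, 0], dB[0, 1]])
L2_M = kinetic(M_mat, (G, B), (dG, dB)) / 8
L2_N = kinetic(N_mat, (G, b), (dG, db)) / 4
assert np.isclose(L2_M, L2_N, rtol=1e-4)
```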
A useful device for interpreting $N$ is the following. Note that we would get the exact same
bosonic terms of $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$, if we were to reduce an eleven
dimensional classical theory to seven dimensions. This theory is given by the Einstein-Hilbert
action plus a scalar, the ``11d dilaton''\footnote{We will denote the extra dimension by
$x^{10}$. This is not to be confused with the M-theory circle denoted by $x^{11}$.}
\begin{equation}
S=\int d^{11}x \sqrt{-\tilde g} \, e^{-\phi} (R(\tilde g) + \partial_\mu \phi\partial^\mu \phi)
\label{ntheory}
\end{equation}
This Lagrangian contains no B-field. The description in terms of (\ref{ntheory}) is only useful
when $\mathcal{L}_3$ and $\mathcal{L}_4$ vanish. This means that $ F^{(1)\alpha}_{\mu\nu} =
H_{\mu\nu\alpha} = H_{\mu\nu\rho} = 0$. Since the size of $T^4$ is constant, its dimensions are
not treated on the same footing as the three geometric fiber dimensions. It is similar to the
situation in F-theory \cite{Vafa:1996xn}, where the area of the $T^2$ is fixed and the K\"{a}hler\,
modulus of the torus is not a dynamical parameter.
We have seen that the matrix $N$ can be interpreted as a semi-flat metric on a $T^4$ torus
fiber. Part of this torus is the original $T^3$ fiber and the overall volume is set to one. The
T-duality group $SO(3,3,\mathbb{Z})$ acts on $T^4$ in a geometric way. This means that we can hope to
study non-geometric compactifications by studying purely geometric ones in higher dimension.
\subsection{The perturbative duality group}
In the previous section, we have transformed the coset space $SO(3,3)/SO(3)^2$ into
$SL(4)/SO(4)$ via Eq. (\ref{nmetric}). We also would like to see how the discrete T-duality
group $SO(3,3,\mathbb{Z})$ maps to $SL(4,\mathbb{Z})$. We will denote the $SO(3,3)$ matrices by $\mathcal{Q}$,
and the $SL(4)$ matrices by $\mathcal{W}$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c | c | c | c |}
\hline {\bf SO(3,3)} & {\bf SL(4)} & {\bf dim} & {\bf examples} \\ \hline
\textrm{spinor} & \textrm{fundamental} & 4 & \textrm{RR fields} \\
\textrm{fundamental} & \textrm{antisym. tensor} & 6 & \textrm{momenta \& winding} \\
\hline
\end{tabular}
\end{center}
\caption{The two basic representations of the duality group.}
\end{table}
\vskip 0.5cm \noindent {\bf Generators of SO(3,3,Z).} It was shown in \cite{Schwarz:1998qj} that
the following $SO(3,3,\mathbb{Z})$ elements generate the whole group
\begin{equation}
\mathcal{Q}_1(n) = \left( \begin{array}{c|c}
\mathbbm{1}_{3\times 3} & n \\ \hline
0 & \ \mathbbm{1}_{3\times 3}
\end{array}
\right)
\qquad
\mathcal{Q}_2(R) = \left( \begin{array}{c|c}
R \ & 0 \\ \hline
0 \ & \ (R^{-1})^T
\end{array}
\right)
\qquad
\mathcal{Q}_3 = \left( \begin{array}{ccc|ccc}
0 & & & \ 1 & & \\
& 0 & & & 1 & \\
& & 1 \ & & & 0 \\
\hline
1 & & & \ 0 & & \\
& 1 & & & 0 & \\
& & 0 \ & & & 1
\end{array}
\right)
\end{equation}
where $n^T = -n$ and $\textrm{det}\, R = \pm 1$. The first matrix shifts the B-field by an
integral two-form, the second corresponds to a change of basis of the compactification lattice,
and the last matrix is T-duality along the $x^7-x^8$
coordinates. Instead of using $\mathcal{Q}_3$ directly, we combine double T-duality with a
$90^\circ$ rotation. This gives the $SO(3,3)$ matrix
\begin{equation}
\mathcal{ \widetilde Q}_3 =
\left( \begin{array}{ccc|ccc}
& & & & -1 & \\
& & & \ 1 & & \\
& & 1 & & & \\ \hline
& 1 & & & & \\
-1 & & & & & \\
& & & & & 1
\end{array}
\right)
\end{equation}
\vskip 0.5cm \noindent {\bf Generators of SL(4,Z).} In the Appendix of \cite{Brace:1998ku}, it
was shown that the above matrices have an integral $4\times 4$ spinor representation and in fact
generate the entire $SL(4,\mathbb{Z})$. We now list the spinor representations corresponding to these
generators\footnote{Note that \cite{Brace:1998ku} uses a different basis for the spinors.}.
\begin{itemize}
\item
$\mathcal{ Q}_1(n)$ is mapped to the matrices
\begin{equation}
\mathcal{ W}_1 (n) =
\left( \begin{array}{rr|rc}
1 & & & \ n_{23} \\
& 1 \ & & \ n_{31} \\ \hline
& & \ 1 & \ n_{12} \\
& & & 1
\end{array}
\right)
\end{equation}
These are the generators corresponding to ``T'' transformations of various $SL(2)$
subgroups.
\item
$\mathcal{\widetilde Q}_3$ is mapped to
\begin{equation}
\mathcal{ \widetilde W}_3 =
\left( \begin{array}{rr|rr}
1 & & & \\
& 1 \ & & \\ \hline
& & & -1 \\
& & \ 1 &
\end{array}
\right)
\end{equation}
This corresponds to a modular ``S'' transformation. Note that $(\mathcal{ \widetilde W}_3)^2
\ne \mathbbm{1}$.
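This is easy to verify directly; a short NumPy check (illustrative) confirms that $\mathcal{\widetilde W}_3$ has order four, with $(\mathcal{\widetilde W}_3)^2 = \textrm{diag}(1,1,-1,-1)$:

```python
import numpy as np

# spinor representative of double T-duality combined with a 90-degree rotation
W3 = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, -1],
               [0, 0, 1, 0]])

W3_sq = W3 @ W3
assert not np.array_equal(W3_sq, np.eye(4))            # (W~_3)^2 != 1 ...
assert np.array_equal(W3_sq, np.diag([1, 1, -1, -1]))  # ... it is diag(1,1,-1,-1)
assert np.array_equal(np.linalg.matrix_power(W3, 4), np.eye(4))  # order four
```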
\item
When $\textrm{det} \, R = + 1$, the matrix $\mathcal{Q}_2(R)$ is mapped to the $SL(4,\mathbb{Z})$
matrix
\begin{equation}
\mathcal{W}_2(R) = \left( \begin{array}{cc}
R \ & 0 \\
0 \ & 1
\end{array}
\right)
\end{equation}
For symmetric $R$ matrices it coincides with the prescription of Eq. (\ref{nmetric}).
\item
The $\textrm{det} \, R = -1$ case is more subtle. Even though Type IIA string theory is parity
invariant, in the microscopic description reflecting an odd number of coordinates does not give
a symmetry by itself. Since this transformation flips the spinor representations $16
\leftrightarrow 16'$, it must be accompanied by an internal symmetry $\Omega$ which changes the
orientation of the world-sheet and thus exchanges the left-moving and right-moving spinors.
$SO(3,3)$ has maximal compact subgroup $S(O(3)\times O(3))$ and hence has two connected components
\cite{Gualtieri:2003dx}. Inversion of an odd number of coordinates is not in the identity
component. $SL(4,\mathbb{Z})$ is the double cover of the connected component of $SO(3,3,\mathbb{Z})$ only. We
must allow for $\textrm{det} \, \mathcal{W} = \pm 1$ to obtain $Spin(3,3, \mathbb{Z})$, the double
cover of the full $SO(3,3,\mathbb{Z})$. Then, the reflections of the $x^7$, $x^8$ or $x^9$ coordinates
have the following representations\footnote{The only non-trivial element in the center of
$SL(4)$ is $-\mathbbm{1}$. This sign may be attached to all the group elements not in the
identity component, giving an automorphism of $Spin(3,3, \mathbb{Z})$.}
\begin{equation}
\mathcal{W}_{I_7}=\textrm{diag}(-1,1,1,1) \qquad \mathcal{W}_{I_8}=\textrm{diag}(1,-1,1,1) \qquad \mathcal{W}_{I_9}=\textrm{diag}(1,1,-1,1)
\end{equation}
Upon restriction to $GL(3) \subset SO(3,3)$, the covering by $Spin(3,3)$ becomes trivial.
Ramond-Ramond fields transform in the spinor representation of the T-duality group\footnote{As
discussed in \cite{Brace:1998xz}, the fields that have simple transformation properties are
$C^{(3)} = A^{(3)} + A^{(1)} \wedge B$.}. Therefore they form fundamental $SL(4)$ multiplets,
for instance $ (C_7, C_8, C_9, C_{789})$. We can check the above representation for the
coordinate reflections. Reflection of say $x^7$ combined with a flip of the three-form field
gives
\begin{equation}
(C_7, C_8, C_9, C_{789}) \mapsto (-C_7, C_8, C_9, C_{789})
\end{equation}
which is precisely the action of $\mathcal{W}_{I_7}$.
\end{itemize}
\subsection{Embedding $SL(2)^2$ in $SL(4)$}
In order to get some intuition for the $SL(4)$ duality group that we discussed in the previous
section, we first look at the simpler case of $T^2$ compactifications. In this section we
describe how the T-duality group of $T^2$ compactifications can be embedded into the bigger
$SL(4)$ group.
In eight dimensions, the duality group is $SL(2)_\tau\times SL(2)_\rho$ with the first factor
acting on the $\tau$ complex structure of the torus and the second factor acting on
$\rho=b+iV$ where $b=\int_{T^2} B$ and $V$ is the volume of $T^2$. If we consider a two
dimensional base with complex coordinate $z$, then the equations of motion are satisfied if
$\tau(z)$ and $\rho(z)$ are holomorphic sections of $SL(2,\mathbb{Z})$ bundles. Monodromies of $\tau$
around branch points describe the geometric degenerations of the fibration. Monodromies
of $\rho$, however, correspond to T-dualities and to the semi-flat description of NS5-branes. In
particular, if there is a monodromy $\rho \mapsto \rho+1$ around a degeneration point in the
base, then it implies $b\mapsto b+1$ which describes a unit magnetic charge for the B-field, {\it i.e.\ }
an NS5-brane. The $\rho \mapsto -1/\rho$ monodromy on the other hand is a double T-duality along
the $T^2$ combined with a $90^\circ$ rotation.
Let us denote the two-torus coordinates by $x^{7,8}$. In order to embed this $SL(2)\times SL(2)$ duality
group into the $SL(4)$ of $T^3$ compactifications, we need to further compactify on a
``spectator'' circle of size $L$. We denote its coordinate by $x^9$. The metric on $T^3$
($x^7$-$x^8$-$x^9$) is now
\begin{equation}
G_{3\times 3}=\left( \begin{array}{cc|c}
g_{11} & g_{12} & \ \\
g_{21} & g_{22} & \ \\
\hline & & \ L^2
\end{array}
\right)
\end{equation}
Then, one can construct the $4\times 4$ metric on $T^4$ by the prescription of (\ref{nmetric})
which gives
\begin{equation}
N=(\textrm{det} \, g)^{-1/2}\left( \begin{array}{c|cc}
\frac{1}{L} g_{2\times 2} & & \\
\hline
& \ L & Lb \\
& \ Lb & L(\textrm{det} \, g + b^2) \\
\end{array}
\right) \equiv
\left( \begin{array}{cc}
\frac{1}{L}\mathcal{T}_{2\times 2} & \ \\
\ & L \mathcal{R}_{2\times 2}
\end{array}
\right)
\end{equation}
with
\begin{equation}
\mathcal{T}=\frac{1}{\tau_2}\left( \begin{array}{cc}
1 & \tau_1 \\
\tau_1 & |\tau|^2
\end{array}
\right)
\qquad
\mathcal{R}=\frac{1}{\rho_2}\left( \begin{array}{cc}
1 & \rho_1 \\
\rho_1 & |\rho|^2
\end{array}
\right)
\end{equation}
The $\sigma$-model Lagrangian
\begin{equation}
\textrm{Tr}(\partial_\mu N^{-1} \partial^\mu
N)= -2\left( \frac{\partial_\mu \tau\partial^\mu \bar\tau}{\tau_2^2} + \frac{\partial_\mu \rho\partial^\mu \bar\rho}{\rho_2^2}\right)
\end{equation}
indeed gives the familiar kinetic terms for the torus moduli (in seven dimensions).
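The identity can be checked symbolically; the following SymPy sketch (our notation, with the moduli promoted to functions of a single parameter $t$; not from the original text) verifies it and shows that the spectator radius $L$ drops out:

```python
import sympy as sp

t = sp.symbols('t')
t1, t2, r1, r2 = (sp.Function(n)(t) for n in ('tau_1', 'tau_2', 'rho_1', 'rho_2'))
L = sp.symbols('L', positive=True)

# the two SL(2)/SO(2) blocks and the block-diagonal coset representative N
T = sp.Matrix([[1, t1], [t1, t1**2 + t2**2]]) / t2
R = sp.Matrix([[1, r1], [r1, r1**2 + r2**2]]) / r2
N = sp.diag(T / L, L * R)

lhs = (N.inv().diff(t) * N.diff(t)).trace()
rhs = -2 * ((t1.diff(t)**2 + t2.diff(t)**2) / t2**2
            + (r1.diff(t)**2 + r2.diff(t)**2) / r2**2)
assert sp.simplify(lhs - rhs) == 0   # the spectator radius L cancels out
```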
We have seen how the metric and the B-field parametrize the relevant subset of the
$SL(4,\mathbb{R})/SO(4)$ coset space. The generators of the $SL(2,\mathbb{Z})\times SL(2,\mathbb{Z})$ duality group
are also mapped to elements in $SL(4,\mathbb{Z})$. We now verify that these images in fact give the
transformations that we expect.
\begin{itemize}
\item {\bf Geometric transformations}
These are simply generated by
\begin{equation}
T=\left( \begin{array}{cc}
1 & 1 \\
0 & 1
\end{array}
\right) \oplus
\mathbbm{1}_{2\times 2}
\quad \textrm{and} \quad
S=\left( \begin{array}{cc}
0 & -1 \\
1 & 0
\end{array}
\right) \oplus
\mathbbm{1}_{2\times 2}
\end{equation}
They act on $g_{2\times 2}$ by conjugation with the non-trivial $SL(2)$ part as expected. The
determinant of $g$ stays the same. The first one is a Dehn-twist and the second one is a
$90^\circ$ rotation.
\item {\bf Non-geometric transformations}
The generators
\begin{equation}
T'=\mathbbm{1}_{2\times 2} \oplus \left( \begin{array}{cc}
1 & 1 \\
0 & 1
\end{array}
\right)
\quad \textrm{and} \quad
S'=\mathbbm{1}_{2\times 2} \oplus \left( \begin{array}{cc}
0 & -1 \\
1 & 0
\end{array}
\right)
\end{equation}
correspond respectively to the shift of the B-field and to a double T-duality on $x^{7,8}$
combined with a $90^\circ$ rotation. The latter one has the $SL(4)$ monodromy
\begin{equation}
M=\left( \begin{array}{rr|rr}
1 & \ 0 & \ 0 & 0 \\
0 & 1 & 0 & 0 \\ \hline
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0
\end{array}
\right)
\end{equation}
This is basically an exchange of the $x^{9}-x^{10}$ coordinates and it transforms the
$\mathcal{R}_{2\times 2}$ submatrix of $N$ into its inverse
\begin{equation}
\mathcal{R}^{-1}=(\textrm{det} \, g)^{-1/2}\left( \begin{array}{cc}
\textrm{det} \, g + b^2 & -b \\
-b & 1
\end{array}
\right)
\end{equation}
After this double T-duality, the (geometric) metric on $T^3$ becomes
\begin{equation}
G_{3\times 3} \mapsto \widetilde G_{3\times 3}=\left( \begin{array}{c|r}
\frac{1}{\textrm{det} \, g + b^2}\ g_{2\times 2} & 0 \\
\hline 0 & \ L^2
\end{array}
\right)
\end{equation}
The B-field transforms as
\begin{equation}
b \mapsto \tilde b = -\frac{b}{\textrm{det} \, g + b^2}
\end{equation}
The metric $g$ on $T^2$ changes, in particular if $b=0$, then the volume gets inverted.
Since we exchanged the $x^9-x^{10}$ coordinates, one might have expected that this affects the
metric on $x^9$. However, we see that it remains the same as it should since it was only a
spectator circle.
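These transformation rules can be confirmed numerically. The snippet below (illustrative, with arbitrary sample values) checks that conjugating $\mathcal{R}$ by the non-trivial $2\times 2$ block of $M$ inverts it, that this equals the modular map $\rho \mapsto -1/\rho$ with $\rho = b + i\sqrt{\textrm{det}\, g}$, and that the B-field transforms as stated:

```python
import numpy as np

g = np.array([[2.0, 0.3], [0.3, 1.5]])   # sample T^2 metric (values are arbitrary)
b = 0.7
detg = np.linalg.det(g)

rho = b + 1j * np.sqrt(detg)             # rho_1 = b, rho_2 = volume of T^2
R = np.array([[1, rho.real],
              [rho.real, abs(rho)**2]]) / rho.imag

# the lower-right 2x2 block of the monodromy M (the x^9-x^10 flip)
M2 = np.array([[0, -1], [1, 0]])
assert np.allclose(M2.T @ R @ M2, np.linalg.inv(R))

# the same operation as the modular transformation rho -> -1/rho
rho_new = -1 / rho
R_new = np.array([[1, rho_new.real],
                  [rho_new.real, abs(rho_new)**2]]) / rho_new.imag
assert np.allclose(R_new, np.linalg.inv(R))

# the transformed B-field agrees with the formula in the text
assert np.isclose(rho_new.real, -b / (detg + b**2))
```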
\newpage
\item {\bf Left-moving spacetime fermion number: $(-1)^{F_L}$}
This is a {\it global} transformation which inverts the sign of the Ramond-Ramond fields. It
acts trivially on the vector representation of $SO(3,3)$ (which is the antisymmetric tensor of
$SL(4)$). It will be important since T-duality squares to $(-1)^{F_L}$. In
\cite{Hellerman:2002ax}, its representation was determined,
\begin{equation}
\mathcal{M}_{(-1)^{F_L}}=\left( \begin{array}{cc}
-1 & 0 \\
0 & -1
\end{array}
\right) \oplus
\left( \begin{array}{cc}
-1 & 0 \\
0 & -1
\end{array}
\right) \in SL(2)\times SL(2)
\end{equation}
that is a $D_4$ monodromy combined with a $D'_4$ ({\it i.e.\ } a conjugate $D_4$). This statement can be
proven as follows. Let us define complex coordinates
\begin{equation}
z_L = x^7_L + i x^8_L
\end{equation}
\begin{equation}
z_R = x^7_R + i x^8_R
\end{equation}
where $x_L$ and $x_R$ are the left- and right-moving components of the bosonic coordinates. We
denote a transformation
\begin{equation}
(z_L, z_R) \mapsto (e^{\theta_L} z_L, \ e^{\theta_R} z_R)
\end{equation}
by $\theta=(\theta_L, \theta_R)$. Then,
\begin{equation}
\theta_{D_4} = (-\pi, -\pi)
\end{equation}
as it is a reflection of the bosonic coordinates. Moreover, we can use $D'_4 = S^2$ where $S$ is
a double T-duality with a $90^\circ$ rotation. We have
\begin{equation}
\theta_{S} = \underbrace{(-\pi,0)}_\textrm{double T-duality} +
\underbrace{(\frac{\pi}{2},\frac{\pi}{2})}_{\textrm{$90^\circ$ rotation}} = (-\frac{\pi}{2},\frac{\pi}{2})
\end{equation}
from which we obtain
\begin{equation}
\theta_{D'_4} = 2\times \theta_{S} = (-\pi, \pi)
\end{equation}
Finally,
\begin{equation}
\theta_{D_4+D'_4} = \theta_{D_4} + \theta_{D'_4} = (-2\pi,0)
\end{equation}
which acts trivially on the bosons. However, it inverts the sign of the left-moving spinors,
which is precisely the action of $(-1)^{F_L}$. This transformation can be embedded into $SL(4)$ simply
as
\begin{equation}
\mathcal{M}_{(-1)^{F_L}} = \textrm{diag}(-1,-1,-1,-1)
\end{equation}
\end{itemize}
\subsection{U-duality and $G_2$ manifolds}
\label{UdualityandGtwo}
We have seen that upon compactifying Type IIA on $T^3$, a $T^4$ torus emerges. We will be
eventually interested in compactifications to four dimensions. For vacua without fluxes and
T-dualities, the total space of the $T^3$ fibration is a Calabi-Yau threefold. What can we say
about the total space of the $T^4$ fibration?
Note that there is an analogous (more general) story in M-theory. Reducing eleven dimensional
supergravity on a flat $T^4$ yields a Lagrangian that is symmetric under the $SL(5,\mathbb{R})$
U-duality group \cite{Hull:1994ys, Cremmer:1979up, Sezgin:1982gi, Cremmer:1997ct}. By
Hodge-dualizing the three-form $A_{IJK}=:\epsilon_{IJKL}X^L$ ($I,J,K,L=7,8,9,11$), one can
define a $5\times 5$ matrix\footnote{
The relation to F-theory \cite{Vafa:1996xn} can roughly be understood as follows. In
the lower right corner of the $5\times 5$ metric there is a $2\times 2$ submatrix (with
coordinates $x^{11,10}$). In the ten dimensional language, this matrix contains the dilaton and
the three-form $X^{11} \sim C^{(3)}$ which is ``mirror'' to the $C^{(0)}$ axion in Type IIB.
Roughly speaking, (conjugate) S-duality acts on this $T^2 \subset T^5$. }
\begin{equation}
G^{-1}=\left( \begin{array}{c|c}
\omega g^{IJ} + \frac{1}{\omega} X^I X^J & \ -\frac{1}{\omega}X^I \\
\hline
-\frac{1}{\omega}X^I & \frac{1}{\omega}
\end{array}
\right)
\end{equation}
which contains the geometric metric $g$ on $T^4$ as well. We denote the dimensions\footnote{Note
that $x^{10}$ and $x^{11}$ are switched. This is because we want to denote the extra M-theory
dimension by $x^{11}$. We stick to this notation throughout the paper.} by $x^7$, $x^8$, $x^9$,
$x^{11}$, $x^{10}$, respectively. The bosonic kinetic terms can be written as a manifestly
$SL(5)$ invariant $\sigma$-model in terms of this metric \cite{Cremmer:1997ct}.
We can embed the $4\times 4$ unit-determinant matrix $N^{-1}$ (see Eq. (\ref{nmetrici})) into the
$5\times 5$ unit-determinant matrix $G^{-1}$ as follows
\begin{equation}
G^{-1}=\left( \begin{array}{cc|c}
\delta g^{ij} + \frac{1}{\delta} b^i b^j & \ 0 \ & -\frac{1}{\delta}b^i \\
0 & 1 & 0 \\
\hline
-\frac{1}{\delta}b^i & 0 & \frac{1}{\delta}
\end{array}
\right)
\end{equation}
with $\delta\equiv(\textrm{det} \ g_{ij})^{1/2}$. By setting $\omega :=\delta$, we arrive at the
previous form of the metric. If we now perform a U-duality corresponding to the $x^{10}-x^{11}$
flip, then the solution is transformed into pure geometry in the 11d picture,
\begin{equation}
G^{-1}=\left( \begin{array}{cc|c}
\delta g^{ij} + \frac{1}{\delta} b^i b^j & \ -\frac{1}{\delta}b^i \ & \\
-\frac{1}{\delta}b^i & \frac{1}{\delta} & \\
\hline
& & \ 1
\end{array}
\right) \equiv
\left( \begin{array}{c|c}
g^{IJ}_\textrm{new} \ & \\
\hline
& \ 1
\end{array}
\right)
\end{equation}
In 10d Type IIA language, this flip roughly corresponds to the exchange of the Ramond-Ramond
one-form and the Hodge-dual of the B-field in the fiber directions.
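The embedding and the effect of the flip can again be checked numerically (illustrative NumPy sketch with a random sample metric; the coordinate ordering $x^7, x^8, x^9, x^{11}, x^{10}$ follows the text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
g = A @ A.T + 3 * np.eye(3)              # sample fiber metric (arbitrary values)
b = rng.standard_normal(3)
delta = np.sqrt(np.linalg.det(g))
ginv = np.linalg.inv(g)

# the 5x5 inverse metric with N^{-1} embedded, coordinates (x^7,x^8,x^9,x^11,x^10)
Ginv = np.block([
    [delta * ginv + np.outer(b, b) / delta, np.zeros((3, 1)), -b[:, None] / delta],
    [np.zeros((1, 3)),                      np.ones((1, 1)),  np.zeros((1, 1))],
    [-b[None, :] / delta,                   np.zeros((1, 1)), np.array([[1 / delta]])],
])
assert np.isclose(np.linalg.det(Ginv), 1.0)      # a unit-determinant SL(5) element

# U-duality flipping x^10 and x^11 permutes the last two slots ...
P = np.eye(5)[[0, 1, 2, 4, 3]]
flipped = P @ Ginv @ P.T

# ... leaving pure geometry: N^{-1} in the upper 4x4 block plus a trivial circle
Ninv = np.block([[delta * ginv + np.outer(b, b) / delta, -b[:, None] / delta],
                 [-b[None, :] / delta,                    np.array([[1 / delta]])]])
assert np.allclose(flipped[:4, :4], Ninv)
assert np.allclose(flipped[4, :4], 0) and np.allclose(flipped[:4, 4], 0)
assert np.isclose(flipped[4, 4], 1.0)
```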
In order to preserve minimal supersymmetry in four dimensions, one compactifies M-theory on a
$G_2$ manifold. Semi-flat limits of $G_2$ manifolds are expected to exist by an SYZ-like
argument \cite{Gukov:2002jv}. Then, by the above U-duality in seven dimensions, a solution is
obtained which is non-geometric from a 10d point of view as shown in this diagram
\begin{displaymath}
\xymatrix{ \textrm{\framebox{\ \ M-theory on semi-flat $G_2$ \ }}
\ar@{=>}[d]_{\textrm{reduction}}^{\textrm{on flat $T^4$}} & \textrm{\framebox{\ \ Type IIA on ``non-geometric space'' \ }} \\
\textrm{\framebox{\ \ 7d theory on $S^3$ (the base of $G_2$) \ }}
\ar@{<=>}[r]^{\textrm{\qquad\quad \ $SL(5)$ }}_{\textrm{\qquad\quad \ U-duality }} &
\textrm{\framebox{\ \ dual 7d theory on $S^3$ \ }} \ar@{~>}[u]_{\textrm{oxidation?}} }
\end{displaymath}
``Oxidation'' seems obscure in this context since we only have the 7d spacetime equations of
motion. However, for the special case of $D_4$ singularities, we will be able to ``lift the
solutions'' to 10d: they turn out to be asymmetric orbifolds, similar to some examples in
\cite{Hellerman:2002ax}.
\section{Compactifications with ${\bf D_4}$ singularities}
\label{examplesec}
In the previous sections, we studied the semi-flat limit of various geometries which had a
fibration structure. This corner of the moduli space is a natural playground for T-duality since
isometries appear along the fiber directions. Almost everywhere the space locally looks like
$\mathbb{R}^n \times T^n$ and the duality group can simply be studied by a torus reduction of the
supergravity Lagrangian. The idea is then to glue patches of the base manifold by also including
the T-duality group in the transition functions. Since the duality group is discrete, such
deformations are ``topological'' and {\it a priori} cannot be achieved continuously. From the
10d point of view, the total space becomes non-geometric in general. In seven dimensions, the
$SO(3,3,\mathbb{Z})$ group can be realized as the mapping class group of a $T^4$ of unit volume. This
geometrizes the non-geometric space by going one real dimension higher. Considering such
compactifications to four dimensions which preserve ${\mathcal{N}}=1$ supersymmetry, U-duality suggests
that the total space of the geometrized internal non-geometric space is a $G_2$ manifold.
In this section, we use these ideas to build non-geometric compactifications. We deform
geometric orbifold spaces by hand and also study particular examples of $G_2$ manifolds. These
examples will only contain (conjugate) $D_4$ singularities. This allows for a constant arbitrary
shape for the fiber and the base is also locally flat. Even though the examples are singular and
supergravity breaks down at the orbifold points, we can embed the solutions into Type IIA string
theory where they give consistent non-geometric vacua realized as modular invariant asymmetric
orbifolds.
\subsection{Modified $K3\times T^2$}
\label{modk3}
Let us first consider $K3$. The base of an elliptic fibration of $K3$ is an $S^2$. At the
$T^4/\mathbb{Z}_2$ orbifold point, there are four $D_4$ singularities in the base (see \fref{tetrah}).
The purely geometric $D_4$ monodromies are
\begin{equation}
\mathcal{M}_{D_4} = (-\mathbbm{1}_{2\times 2}) \oplus \mathbbm{1}_{2\times 2} \ \in \ SL(2)_\tau \times SL(2)_\rho
\end{equation}
By changing the monodromies by hand, it is possible to construct non-geometric spaces. In
\cite{Hellerman:2002ax}, $K3$ was modified into the union of two half K3's which we denote by
$\widetilde{K3}$. This non-geometric space has two ordinary $D_4$'s and two non-geometric $D'_4$
singularities with monodromies
\begin{equation}
\mathcal{M}_{D'_4} = \mathbbm{1}_{2\times 2} \oplus (-\mathbbm{1}_{2\times 2})
\end{equation}
If we had changed one or three $D_4$s into $D'_4$, then the monodromy at infinity would not be
trivial. In fact, it would be $\mathcal{M}_{D_4} \cdot \mathcal{M}_{D'_4}
=\mathcal{M}_{(-1)^{F_L}}$. This means that the $T^2 \times T^2$ fiber is orbifolded everywhere
in the base by the $\mathbb{Z}_2$ action which inverts the fiber coordinates. In principle, this could
be interpreted as an overall orbifolding by $(-1)^{F_L}$ which moves us from Type IIA to IIB.
However, it is not clear what should happen to the odd number of $D_4$ and $D'_4$ singularities
as they don't have a trivial monodromy at infinity in IIB either. Therefore, we do not consider
such examples any further.
Let us now compactify further and consider $K3\times T^2$ or $\widetilde{K3}\times T^2$. The
base is $S^2\times S^1$ where the second factor is the base of the two-torus as described in
Section \ref{onedimsec}. The relevant monodromies are embedded in the $SL(4)$ duality group as
follows
\begin{equation}
\mathcal{M}_{D_4} = \textrm{diag}(-1,-1,1,1) \qquad \mathcal{M}_{D'_4} = \textrm{diag}(1,1,-1,-1)
\end{equation}
Since in lower dimension the duality group is larger, one can consider another $D_4$-like
monodromy
\begin{equation}
\mathcal{M}_{D''_4} = \textrm{diag}(1,-1,-1,1)
\end{equation}
which is not in the $SL(2)\times SL(2)$ subgroup of $SL(4)$ and was thus not available in the
case of $T^2$ compactifications. In principle, we can have spaces with monodromies
\begin{equation}
(2 \times D_4) + (2 \times D''_4) \quad \textrm{or} \quad (2 \times D'_4) + (2 \times D''_4)
\end{equation}
These are T-dual to each other by an $x^7 - x^{10}$ flip. Thus, it is enough to consider the
first one which is geometric since the monodromies act only in the upper-left $SL(3)$ subsector
of $SL(4)$. However, this space is not Calabi-Yau. Supersymmetry suggests that in the base,
parallel lines\footnote{Parallelism makes sense in the context of $D_4$ singularities since the
base has a flat metric.} of singularities should have the same monodromies (possibly up to a
factor of $(-1)^{F_L}$ as in the case of $\widetilde{K3}\times T^2$). This is not the case for
this space. A way to explicitly see the absence of supersymmetry is to exhibit the total space
as the $(\mathbb{R} \times T^5) / \langle \alpha, \beta \rangle$ orbifold,
\begin{eqnarray}
\alpha: && (x,\theta_1,\theta_2 \, | \, \theta_3,\theta_4,\theta_5) \mapsto
(L-x,\theta_1,-\theta_2 \, | \, {-\theta_3},-\theta_4,\theta_5) \\
\beta: && (x,\theta_1,\theta_2 \, | \, \theta_3,\theta_4,\theta_5) \mapsto ( -x,\theta_1,-\theta_2 \, | \, \theta_3, -\theta_4,-\theta_5)
\end{eqnarray}
Here $x$, $\theta_{1,2}$ are coordinates on the base and $\theta_{3,4,5}$ are coordinates on the
fiber. $x$ is non-compact and $\theta_i$ are periodic. The orbifold group $\langle \alpha, \beta
\rangle$ also contains the element
\begin{equation}
\alpha \beta: \ (x,\theta_1,\theta_2 \, | \, \theta_3,\theta_4,\theta_5) \mapsto
(x+L,\theta_1,\theta_2 \, | \, {-\theta_3},\theta_4,-\theta_5)
\end{equation}
which breaks supersymmetry because it projects out the gravitini.
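The composition can be checked mechanically by coding the generators as affine maps; for exactness we use integer coordinates, with the shift length $L$ in the same units (illustrative):

```python
L = 10  # shift length in integer units

def alpha(p):
    # (x, th1, th2 | th3, th4, th5) -> (L - x, th1, -th2 | -th3, -th4, th5)
    x, t1, t2, t3, t4, t5 = p
    return (L - x, t1, -t2, -t3, -t4, t5)

def beta(p):
    # (x, th1, th2 | th3, th4, th5) -> (-x, th1, -th2 | th3, -th4, -th5)
    x, t1, t2, t3, t4, t5 = p
    return (-x, t1, -t2, t3, -t4, -t5)

p = (3, 1, 2, 4, 5, 6)
# alpha . beta is the shift x -> x + L combined with two fiber sign flips
assert alpha(beta(p)) == (p[0] + L, p[1], p[2], -p[3], p[4], -p[5])
# it has infinite order, translating freely along x
assert alpha(beta(alpha(beta(p))))[0] == p[0] + 2 * L
```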
We see that by considering conjugate $D_4$ singularities, in the above reducible case we do not
obtain any other supersymmetric examples than those already considered in
\cite{Hellerman:2002ax} even if the duality group is extended. Hence, we move on to threefolds
in the next section.
\subsection{Non-geometric $T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2$}
\label{nongeot6}
Let us consider the orbifold $T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2$ that we described in detail in Section
\ref{threedim}. \fref{cube_mono} shows the monodromies of the singular edges. These monodromies
have the following $SL(4)$ representations,
\begin{equation}
x = \textrm{diag}(1,-1,-1,1) \quad y = \textrm{diag}(-1,1,-1,1) \quad z = \textrm{diag}(-1,-1,1,1)
\end{equation}
These are of course geometric since they only act on the first three coordinates. How can we
deform the orbifold into something non-geometric? There are three more $D_4$ type singularities
that we can use. They have the following monodromies,
\begin{equation}
\bar x \equiv -x
\qquad
\bar y \equiv -y \qquad
\bar z \equiv -z
\end{equation}
These all invert the $x^{10}$ coordinate. A simple modification of $T^6/\mathbb{Z}_2\times\mathbb{Z}_2$ is
possible by replacing the original monodromies by $\bar x,\bar y$ or $\bar z$. The junction
condition says that an even number of negative signs should meet at each vertex. Therefore,
consistent monodromy assignments are given by switching signs along loops. There are five
theories obtained this way as shown in \fref{allcubes}. Since these simple spaces have a
geometric total space at this orbifold point of their moduli space, we call them ``almost
non-geometric''.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=7.0cm,angle=0,origin=c]{allcubes.eps}
\caption{Almost non-geometric $T^6/\mathbb{Z}_2\times\mathbb{Z}_2$ spaces. Monodromies are modified along the red loops. We refer to the models as one-plaquette,
two-plaquette, ``L'', ``U'' and ``X'', respectively.}
\label{allcubes}
\end{center}
\end{figure}
\subsection{Asymmetric orbifolds}
\label{asod}
In the previous section, we changed the monodromies by hand and obtained ``almost
non-geometric'' spaces. In particular, monodromies in the loops contained the extra action of
$(-1)^{F_L}$, which reverses the signs of all RR-charges,
\begin{equation}
x \cdot \mathcal{M}_{(-1)^{F_L}} = \bar x \qquad y \cdot \mathcal{M}_{(-1)^{F_L}} = \bar y \qquad
z \cdot \mathcal{M}_{(-1)^{F_L}}= \bar z
\end{equation}
where
\begin{equation}
\mathcal{M}_{(-1)^{F_L}} = \textrm{diag}(-1,-1,-1,-1)
\end{equation}
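These relations, together with the vertex condition, are immediate to verify (illustrative NumPy snippet):

```python
import numpy as np

x = np.diag([1, -1, -1, 1])
y = np.diag([-1, 1, -1, 1])
z = np.diag([-1, -1, 1, 1])
FL = np.diag([-1, -1, -1, -1])                 # the global (-1)^{F_L} element

for m in (x, y, z, FL):
    assert np.isclose(np.linalg.det(m), 1.0)   # all monodromies lie in SL(4)

assert np.array_equal(x @ y @ z, np.eye(4))    # geometric monodromies close at a vertex
assert np.array_equal(x @ FL, -x)              # xbar = x . (-1)^{F_L}, and similarly for y, z
assert np.array_equal((-x) @ (-y) @ (-z), FL)  # an odd number of bars leaves (-1)^{F_L} behind
```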
Hence, we can realize the non-geometric spaces of the previous section as asymmetric orbifolds
\cite{Narain:1986qm, Mueller:1986yr} (see also \cite{Dine:1997ji, Dabholkar:1998kv,
Blumenhagen:2000fp, Gaberdiel:2002jr, Aoki:2004sm, Kakushadze:1996hi}). We consider the simple
example of \fref{sngex}: the
one-plaquette model.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=5cm,angle=0,origin=c]{cube3.eps}
\caption{Simple non-geometric $T^6/\mathbb{Z}_2\times\mathbb{Z}_2$.}
\label{sngex}
\end{center}
\end{figure}
If we parametrize the $T^6$ torus by angles $\theta_i$, then the original $\mathbb{Z}_2\times\mathbb{Z}_2$
orbifold group action is generated by
\begin{eqnarray}
\alpha : (\theta_1,\theta_2,\theta_3,\theta_4,\theta_5,\theta_6) \mapsto
(-\theta_1,-\theta_2,-\theta_3,-\theta_4,\theta_5,\theta_6) \\
\beta : (\theta_1,\theta_2,\theta_3,\theta_4,\theta_5,\theta_6) \mapsto
(-\theta_1,-\theta_2,\theta_3,\theta_4,-\theta_5,-\theta_6)
\end{eqnarray}
The base coordinates can be chosen to be $(\theta_1,\theta_3,\theta_5)$. The singular edges
along these directions have monodromies $(x,y,z)$, respectively.
Now the example of \fref{sngex} has modified monodromies. In particular, edges on the top of the
cube have monodromies which include $(-1)^{F_L}$. We use the same trick as in Section
\ref{modk3}: let us choose the vertical $x_5$ coordinate to be non-compact and then compactify
it with an asymmetric action,
\begin{eqnarray}
& \alpha :& \ (\theta_1,\theta_2,\theta_3,\theta_4,x_5,\theta_6) \mapsto
(-\theta_1,-\theta_2,-\theta_3,-\theta_4,x_5,\theta_6) \\
& \beta_1 :& \ (\theta_1,\theta_2,\theta_3,\theta_4,x_5,\theta_6) \mapsto
(-\theta_1,-\theta_2,\theta_3,\theta_4,-x_5,-\theta_6) \\
& \beta_2 :& \
(\theta_1,\theta_2,\theta_3,\theta_4,x_5,\theta_6) \mapsto
(-\theta_1,-\theta_2,\theta_3,\theta_4,L-x_5,-\theta_6) \ \times \ (-1)^{F_L}
\end{eqnarray}
This realizes the example as an asymmetric orbifold. The Type IIA spectrum is computed in
Appendix~\ref{aspectrum2}. It has $\mathcal{N}=1$ supersymmetry with a gravity multiplet, 16
vector multiplets and 71 chiral multiplets.
The theory is consistent since decorating $D_4$ singularities with $(-1)^{F_L}$ does not destroy
modular invariance. In the Green-Schwarz formalism, adding $(-1)^{F_L}$ changes the boundary
conditions for the four complex left-moving fermionic coordinates as
\begin{equation}
D_4: (++ --) \quad \longrightarrow \quad D_4 \times (-1)^{F_L}: (-- ++)
\end{equation}
Hence, the energy of the twisted sector ground state does not change and thus level-matching is
satisfied \cite{Vafa:1986wx}. In the RNS formalism, $(-1)^{F_L}$ does not act on the world-sheet
fields and therefore the moding does not change. However, the left-moving GSO projection changes
and various generalized discrete torsion signs show up in the twisted sectors as discussed in
the Appendices. (See also related literature \cite{Aoki:2004sm, Hellerman:2006tx}.) For Abelian
orbifolds, one-loop modular invariance implies higher loop modular invariance
\cite{Vafa:1986wx}. Here we are actually considering a non-Abelian orbifold\footnote{\ldots
since $x \mapsto -x$ and $x \mapsto L-x$ do not commute.} for which level-matching is not
sufficient for consistency. Further constraints may arise if a modular transformation takes a
pair of commuting group elements $(g,h)$ into their own conjugacy class \cite{Freed:1987qk},
\begin{equation}
(g,h) \longrightarrow (g^a h^b, g^c h^d) = (p g p^{-1}, p h p^{-1})
\end{equation}
where $a,b,c$ and $d$ are the elements of an $SL(2,\mathbb{Z})$ matrix. In this case, the path integral
with boundary conditions $(g,h)$ and $(p g p^{-1}, p h p^{-1})$ for the torus world-sheet should
give the same result. Since we only consider $D_4$ singularities, the twists of world-sheet
fermions by orbifold group elements do commute and thus non-commutativity can only come from the
action on the bosons. However, left-moving and right-moving bosons are treated symmetrically and
thus we do not get any further constraints. Therefore, one expects this model to be modular
invariant. Moreover, this theory has an alternative presentation as a $(\mathbb{Z}_2)^3$ Abelian
orbifold of $T^6$ as we will see in Section \ref{dualitysec}.
The rest of the modified $T^6/\mathbb{Z}_2\times\mathbb{Z}_2$ spaces (\fref{allcubes}) have asymmetric
orbifold descriptions as well. These are listed in Appendix \ref{alist}. The modular invariance
argument of the previous paragraph applies to these as well. Some of the models are dual to each
other. This will be discussed in Section \ref{dualitysec}.
\subsection{Joyce manifolds}
\label{joycesec}
In Section \ref{UdualityandGtwo}, we saw how a class of non-geometric spaces can be transformed
into geometric M-theory compactifications by U-duality. Naturally, one can try to interpret
existing $G_2$ spaces from the literature as ``non-geometric'' Type IIA string theory vacua.
Let us denote the coordinates on $\mathbb{R}^7$ (and $T^7$) by $x_1, x_2, x_3$ (base), $ y_1, y_2, y_3,
y_4$ (fiber). The exceptional group $G_2$ is the subgroup of $GL(7,\mathbb{R})$ which preserves the
form
\begin{eqnarray*}
\varphi = dx_1 \wedge dy_1 \wedge dy_2 + dx_2 \wedge dy_1 \wedge dy_3 + dx_3 \wedge dy_2 \wedge
dy_3 + dx_2 \wedge dy_2 \wedge dy_4 \\
- dx_3 \wedge dy_1 \wedge dy_4 - dx_1 \wedge dy_3 \wedge
dy_4 - dx_1 \wedge dx_2 \wedge dx_3 \qquad
\end{eqnarray*}
It also preserves the orientation and the Euclidean metric on $\mathbb{R}^7$ and so it is a subgroup of
$SO(7)$. In this section, we consider particular compact examples. Joyce manifolds \cite{Joyce1,
Joyce2} are (resolved) $T^7/(\mathbb{Z}_2)^3$ orbifolds which preserve the calibration. We consider the
following action,
\begin{eqnarray*}
\alpha &:& (x_1, x_2, x_3 \, | \, y_1, y_2, y_3, y_4) \mapsto (x_1, -x_2, -x_3 \, | \, y_1, y_2, -y_3, -y_4) \\
\beta &:& (x_1, x_2, x_3 \, | \, y_1, y_2, y_3, y_4) \mapsto (-x_1, x_2, A_1-x_3 \, | \, y_1, -y_2, y_3, -y_4) \\
\gamma &:& (x_1, x_2, x_3 \, | \, y_1, y_2, y_3, y_4) \mapsto (A_2-x_1, A_3-x_2, x_3 \, | \, {-y_1}, y_2, y_3, -y_4)
\end{eqnarray*}
where $A_i \in \{0, \frac{1}{2}\}$. Note that $\alpha^2=\beta^2=\gamma^2=1$ and $\alpha$,
$\beta$ and $\gamma$ commute. Some of the choices of $\vec{A}\equiv (A_1,A_2,A_3)$ are
equivalent to others by a change of coordinates. Only shifts for the base coordinates are
included since fiber shifts can't be realized by a linear transformation. (We comment on this
later in Section \ref{fibershift}.) The blow-ups of these spaces are described in \cite{Joyce2,
Joyce:book}.
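As a consistency check, the algebra of these generators can be verified mechanically. The sketch below (Python, used purely for illustration) encodes each generator as a sign vector together with a shift vector on $(\mathbb{R}/\mathbb{Z})^7$, in the coordinate ordering $(x_1,x_2,x_3\,|\,y_1,\dots,y_4)$, and checks that all three maps are involutions, commute, and preserve every term of the calibration $\varphi$:

```python
from fractions import Fraction

H = Fraction(1, 2)  # a half-shift on the unit torus

def make(signs, shifts):
    """An affine map x -> S x + t on (R/Z)^7, with S a sign-diagonal matrix."""
    return tuple(signs), tuple(Fraction(t) % 1 for t in shifts)

def compose(g, h):
    """g after h: S_g (S_h x + t_h) + t_g, shifts taken mod 1."""
    (sg, tg), (sh, th) = g, h
    return (tuple(a * b for a, b in zip(sg, sh)),
            tuple((a * t + u) % 1 for a, t, u in zip(sg, th, tg)))

IDENTITY = make([1] * 7, [0] * 7)

def joyce_generators(A1, A2, A3):
    # coordinates ordered (x1, x2, x3 | y1, y2, y3, y4)
    alpha = make([ 1, -1, -1,  1,  1, -1, -1], [0] * 7)
    beta  = make([-1,  1, -1,  1, -1,  1, -1], [0, 0, A1, 0, 0, 0, 0])
    gamma = make([-1, -1,  1, -1,  1,  1, -1], [A2, A3, 0, 0, 0, 0, 0])
    return alpha, beta, gamma

# the seven terms of phi as index triples (x1..x3 -> 0..2, y1..y4 -> 3..6),
# with their signs, read off from the expression for phi above
PHI = [((0, 3, 4), +1), ((1, 3, 5), +1), ((2, 4, 5), +1), ((1, 4, 6), +1),
       ((2, 3, 6), -1), ((0, 5, 6), -1), ((0, 1, 2), -1)]

def preserves_phi(g):
    # a sign-diagonal map sends dx_i -> s_i dx_i, so each monomial of phi
    # must pick up a net plus sign
    signs = g[0]
    return all(signs[i] * signs[j] * signs[k] == +1 for (i, j, k), _ in PHI)
```

For $\vec{A}=(0,0,0)$ the same machinery shows that $\alpha\beta\gamma$ fixes the base and acts as $-1$ on all four fiber coordinates, a fact used below.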
These orbifolds can be interpreted as non-geometric Type II backgrounds as follows. The $T^4$
fiber coordinates are already chosen to be $\{ y_i \}$. One needs to pick a direction for the
extra $x^{10}$ circle. Theories that differ in this choice are T-dual to each other. Then,
whenever a generator contains a minus sign for the $x^{10}$ circle, a $(-1)^{F_L}$ must be
separated from its action. The geometric action is then given by inverting the fiber signs (and
omitting the extra circle). For instance, if $y_4$ is the $x^{10}$ circle, then $\alpha$ will
become
\begin{equation}
\alpha_0 : (x_1, x_2, x_3 \, | \, y_1, y_2, y_3) \mapsto (x_1, -x_2, -x_3 \, | \, {-y_1}, -y_2,
y_3)
\end{equation}
and this geometric action will be accompanied by $(-1)^{F_L}$.
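The bookkeeping of this reinterpretation is simple enough to automate. The following sketch (Python; index 6 stands for $y_4$ in the ordering used above) strips off the sign on the chosen $x^{10}$ circle, records it as a $(-1)^{F_L}$ factor, and inverts the remaining fiber signs whenever that factor is present:

```python
def split_off_flz(signs7, x10=6):
    """Turn a 7d sign action on (x1,x2,x3|y1..y4) into a 6d geometric
    action plus a possible (-1)^{F_L} factor, taking the fiber circle
    `x10` (default y4, index 6) as the M-theory circle."""
    has_flz = (signs7[x10] == -1)
    rest = [s for i, s in enumerate(signs7) if i != x10]
    if has_flz:
        # reinterpreting the x^{10} reflection as (-1)^{F_L} inverts
        # the signs on the remaining fiber directions
        rest[3:] = [-s for s in rest[3:]]
    return tuple(rest), has_flz
```

Applied to $\alpha$ with $x^{10}=y_4$, this reproduces the action $\alpha_0$ quoted above, accompanied by a $(-1)^{F_L}$ factor.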
In the following, we list the spaces of different shifts and discuss their singularity
structure.
\begin{itemize}
\item
$\vec{A}=(0,0,0)$
(i) Let us first consider the $\mathbb{Z}_2 \times \mathbb{Z}_2$ orbifold generated by only $\alpha$ and
$\beta$. Then, by identifying $y_4$ with the extra $x^{10}$ coordinate, we obtain the model in
\fref{topbottom}. This is U-dual to the pure geometry $T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2$ by a $y_1-y_4$
flip.
(ii) Let us now include $\gamma$. This gives the most singular example of Joyce
manifolds. The $x_i$ and $y_i$ coordinates parametrize the $S^3$ base and the $T^4$ fiber,
respectively. The $(\mathbb{Z}_2)^3$ orbifold group is equally well generated by $\langle \alpha,
\beta, \alpha\beta\gamma \rangle$. It is important to note that the product $\alpha\beta\gamma$ does not act on the
base coordinates. In principle, this could be interpreted as globally orbifolding\footnote{This
interpretation would give $T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2$ in Type IIB. This is mirror to Type IIA on
$T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2$ with discrete torsion turned on \cite{Vafa:1994rv}.} by $(-1)^{F_L}$.
However, this leads to problems similar to those in our earlier discussion in Section
\ref{modk3}.
It is also easy to see that U-duality does not work in this case\footnote{The general fiber in a
Lagrangian fibration on any symplectic manifold is a torus. However, the general fiber for a
coassociative $G_2$ fibration is expected to be $T^4$ or $K3$ \cite{Lee:2002fa}. The adiabatic
argument for U-duality only works for the $T^4$ case \cite{Vafa:1995gm, Sen:1996na} which must
be taken into account when choosing the fiber coordinates.}. Compactifying M-theory on a $G_2$
manifold gives $\mathcal{N}=1$ supersymmetry in 4d. However, the above configuration in Type II
has $\mathcal{N}=2$ supersymmetry\footnote{Although one of the gravitini is projected out by
$(-1)^{F_L}$, it comes back in the twisted sector to give extended supersymmetry.}, and
therefore cannot be equivalent to the M-theory configuration. Thus we will not discuss this
example any further.
\item
$\vec{A}=(0,0,\frac{1}{2}) \sim (0,\frac{1}{2},0) \sim (\frac{1}{2},0,0)$
The extra identification by $\gamma$ cuts the fundamental cell of $T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2$ in
half. The resulting base is again an $S^3$ which can be constructed as shown in \fref{halfcube}.
The non-geometric space has the same monodromies as the model in \fref{fourthex} that we already
constructed by directly modifying the monodromies of $T^6/\mathbb{Z}_2\times\mathbb{Z}_2$.
\begin{figure}[ht]
\begin{center}
\hskip -5cm
\includegraphics[totalheight=5.5cm,origin=c]{cube_fold_half.eps}
\vskip -4.5cm
\hskip 8cm
\includegraphics[totalheight=3.0cm,origin=c]{cube.eps}
\vskip 1.5cm
\caption{(i) Fundamental domain of the base after modding by $\gamma$: half of a rhombic dodecahedron. The arrows
show how the faces are identified.
\ (ii) Schematic picture indicating the structure of the degenerations.}
\label{halfcube}
\end{center}
\end{figure}
\item
$\vec{A}=(0,\frac{1}{2},\frac{1}{2}) \sim (\frac{1}{2},0,\frac{1}{2}) \sim
(\frac{1}{2},\frac{1}{2},0)$
Let us consider $\vec{A}=(0,\frac{1}{2},\frac{1}{2})$, as the others are equivalent by a
coordinate transformation. The action of $\alpha$ and $\beta$ generates $T^6 / \mathbb{Z}_2 \times
\mathbb{Z}_2$ as usual. The third $\mathbb{Z}_2$ is generated by $\gamma$. It has a fixed edge which goes
through two parallel faces of the cube (see \fref{otherhalfcube}). The base is again an $S^3$.
(The proof of this statement goes roughly like that for $T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2$.)
\clearpage
\begin{figure}[ht]
\begin{center}
\hskip -5cm
\includegraphics[totalheight=5.5cm,origin=c]{cube_fold_otherhalf.eps}
\vskip -4.5cm
\hskip 8cm
\includegraphics[totalheight=3.5cm,origin=c]{links3.eps}
\vskip 1.5cm
\caption{(i) Half of the fundamental domain after modding by $\gamma$. \ (ii) Schematic picture.}
\label{otherhalfcube}
\end{center}
\end{figure}
\item
$\vec{A}=(\frac{1}{2},\frac{1}{2},\frac{1}{2})$
(i) Let us first omit the action of $\gamma$. This gives a somewhat simpler space with base
depicted in \fref{trunc1}. It is the union of a truncated tetrahedron and a small tetrahedron.
This base can be obtained as the intersection of fundamental domains of the two commuting
$\mathbb{Z}_2$ actions. Both of these domains are $S^1$ times the square (with solid edges) depicted in
\fref{t4fund}. The identification of the faces and the schematic structure of the degenerations
are shown in \fref{trunc11}.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=7.5cm,angle=270,origin=c]{truncated_th.eps}
\caption{ The base of $T^6/(\mathbb{Z}_2)^2$ where the generators of $\mathbb{Z}_2$'s include coordinate shifts.
Four non-intersecting $D_4$ strings (dashed green lines in the middle of hexagons) curve the space into an $S^3$.
See Figure 36 in Appendix H for a pattern that can be cut out.}
\label{trunc1}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\hskip -7cm
\includegraphics[totalheight=5.5cm,origin=c]{truncated.eps}
\vskip -5cm
\hskip 8cm
\includegraphics[totalheight=3.5cm,origin=c]{links.eps}
\vskip 2.0cm
\caption{(i) The base can be constructed by gluing the truncated tetrahedron (dashed lines) to itself, together with a small tetrahedron.
It is easy to check that the $D_4$ strings (solid lines) have $180^\circ$ deficit angle whereas the dashed lines are non-singular.
\ (ii) Schematic picture. The truncated tetrahedron example can roughly be understood as four
linked rings of $D_4$ singularities.
Each ring is penetrated by two other rings which, having total tension 12, curve the surrounding space into a cylinder.
This forces the string to come back to itself.}
\label{trunc11}
\end{center}
\end{figure}
(ii) Let us now include $\gamma$ as well. The coordinate shifts in the $\mathbb{Z}_2$ actions make sure
that the fixed edges do not intersect. The structure of the base is shown in \fref{kocka}.
\begin{figure}[ht]
\begin{center}
\hskip -5cm
\includegraphics[totalheight=4.5cm,origin=c]{kocka.eps}
\vskip -4cm
\hskip 7cm
\includegraphics[totalheight=3.5cm,origin=c]{links4.eps}
\vskip 1cm
\caption{(i) The base of the $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ Joyce orbifold. There are
six strings located on the faces of a cube. These faces are folded up which generates the
$180^\circ$ deficit angles.
\ (ii) Schematic picture. The degenerations form three rings of $D_4$ singularities.}
\label{kocka}
\end{center}
\end{figure}
\end{itemize}
\clearpage
\subsection{Dualities between models}
\label{dualitysec}
The two-plaquette model can be realized as $T^6/\mathbb{Z}_2\times\mathbb{Z}_2$ by the following orbifold
action\footnote{The action of $\alpha$ creates four parallel edges of the singular cube in the
base. Then, $\beta$ and $\alpha\beta$ generate $4+4$ edges with $(-1)^{F_L}$. These give the two
``red plaquettes'' (see \fref{allcubes}).},
\begin{eqnarray*}
\alpha : (\theta_1,\theta_2,\theta_3,\theta_4,\theta_5,\theta_6) & \mapsto &
(-\theta_1,-\theta_2,-\theta_3,-\theta_4,\theta_5,\theta_6) \\
\beta :
(\theta_1,\theta_2,\theta_3,\theta_4,\theta_5,\theta_6) & \mapsto &
(-\theta_1,-\theta_2,\theta_3,\theta_4,-\theta_5,-\theta_6) \ \times \ (-1)^{F_L}
\end{eqnarray*}
Performing a single T-duality on $\theta_6$ turns $\beta$ into
\begin{equation}
\tilde\beta :
(\theta_1,\theta_2,\theta_3,\theta_4,\theta_5,\theta_6) \mapsto
(-\theta_1,-\theta_2,\theta_3,\theta_4,-\theta_5,-\theta_6)
\end{equation}
and keeps $\alpha$ intact\footnote{In the $T^2$ fiber language, the duality exchanges $\tau$ and
$\rho$ and therefore takes a $D'_4$ singularity into $D_4$.}. We thus learn that Type IIA on the
two-plaquette model is dual to Type IIB on $T^6/\mathbb{Z}_2\times\mathbb{Z}_2$. The details of the spectrum
computation are presented in Appendix \ref{aspectrum}.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=4.5cm,origin=c]{cube_duality.eps}
\caption{Monodromies of the one-shift Joyce orbifold.}
\label{cube_dual}
\end{center}
\end{figure}
Another duality is provided by considering the $\vec A=(\frac{1}{2},0,0)$ Joyce orbifold,
\begin{displaymath}
\begin{array}{llllrrrrrrrr}
\alpha &:& (x_1, x_2, x_3 \, | \, y_1, y_2, y_3, y_4) & \mapsto ( & x_1, & -x_2, & -x_3 & \, | \, & y_1, & y_2, & -y_3, & -y_4) \\
\beta &:& (x_1, x_2, x_3 \, | \, y_1, y_2, y_3, y_4) & \mapsto ( & -x_1, & x_2, & \frac{1}{2}-x_3 & \, | \, & y_1, & -y_2, & y_3, &-y_4) \\
\gamma &:& (x_1, x_2, x_3 \, | \, y_1, y_2, y_3, y_4) & \mapsto ( & -x_1, & -x_2, & x_3 & \, | \, & {-y_1}, & y_2, & y_3, & -y_4)
\end{array}
\end{displaymath}
The monodromies of the singularities in the base are shown in \fref{cube_dual} (see also
\fref{halfcube}). The action of $\alpha$ and $\gamma$ creates the usual cubic structure and
$\beta$ cuts the cube in half.
This $G_2$ orbifold can be interpreted as a Type IIA background in more than one way depending
on which coordinate we choose for the $x^{10}$ circle. As discussed in the previous section, a
minus sign in the $x^{10}$ direction is interpreted as $(-1)^{F_L}$ (this interpretation is
accompanied by an inversion of fiber signs). From \fref{cube_dual} it is clear that $x^{10}=y_2$
or $y_3$ gives the one-plaquette model since in these cases $\beta$ or $\alpha$, respectively,
will contain $(-1)^{F_L}$. On the other hand, choosing $x^{10}=y_1$ or $y_4$ gives model ``U''.
Since relabeling $x^{10}$ is an element of the $SL(4)$ T-duality group, these backgrounds are
T-dual to each other. The spectrum is computed in Appendix \ref{aspectrum2}.
\clearpage
\subsection{U-duality and affine monodromies}
\label{fibershift}
For usual orbifolds, it is known that the untwisted sector contains information about the
singular space, whereas the twisted sectors describe resolutions (or deformations
\cite{Vafa:1994rv, Gaberdiel:2004vx}) thereof. It is typically said that string theory ``knows''
about the non-singular resolution, and that the numbers of the various particles are determined by the
Hodge numbers. Here we can see this happening in a more general setup. In M-theory, the numbers
of $\mathcal{N}=1$ vector and chiral multiplets are respectively determined by the $b_2$ and
$b_3$ Betti numbers of the $G_2$-manifold. When U-duality works, one should obtain the same
massless spectrum from the asymmetric (non-geometric) orbifold of Type IIA.
Joyce \cite{Joyce1, Joyce2} computed Betti numbers for blown-up $T^7/(\mathbb{Z}_2)^3$ examples. These
examples, however, contained $1/2$ shifts also in directions that were interpreted as fiber
coordinates in the previous section\footnote{The notation $b_i$ and $c_i$ is from \cite{Joyce2}.
These constants should not be confused with the Betti numbers.},
\begin{displaymath}
\begin{array}{llllrrrrrrrr}
\alpha &:& (x_1, x_2, x_3 \, | \, y_1, y_2, y_3, y_4) & \mapsto ( & x_1, & -x_2, & -x_3 & \, | \, & y_1, & y_2, & -y_3, & -y_4) \\
\beta &:& (x_1, x_2, x_3 \, | \, y_1, y_2, y_3, y_4) & \mapsto ( & -x_1, & x_2, & {b_2}-x_3 & \, | \, & y_1, & -y_2, & y_3, & b_1-y_4) \\
\gamma &:& (x_1, x_2, x_3 \, | \, y_1, y_2, y_3, y_4) & \mapsto ( & c_5-x_1, & c_3-x_2, & x_3 & \, | \, & {-y_1}, & y_2, & y_3, & c_1-y_4)
\end{array}
\end{displaymath}
These shifts are recommended since otherwise one encounters ``bad singularities'' which cannot easily
be resolved. If interpreted as a fibration, the monodromies acting on $T^4$ are affine
transformations which also include half-shifts for some of the fiber coordinates. Although these
orbifolds can readily be interpreted as non-geometric backgrounds for Type IIA, the naive
U-duality map does not necessarily work and the spectrum does not match that of M-theory.
In Appendix \ref{joycespectrum}, we discuss the cases of two Joyce manifolds, with two and three
shifts $(b_1, b_2, c_1, c_3, c_5) = (0,\frac{1}{2},\frac{1}{2},0,0)$ and
$(0,\frac{1}{2},\frac{1}{2},\frac{1}{2},0)$. Naive U-duality works well for the three shift
example and one obtains the same spectrum from the non-geometric compactification. However, the
two shift example gives a different spectrum from what we expect from the Betti numbers of the
$G_2$-manifold\footnote{An ambiguity is immediately discovered by noticing that a redefinition
of the fiber coordinates, $\tilde y \equiv y+1/4$, changes the naive interpretation of $(-1)^{F_L}$
as $\textrm{diag}(-1,-1,-1,-1)$. The new monodromy action for $(-1)^{F_L}$ will now include
$1/2$ shifts in the fiber. In some cases, this ambiguity can be exploited to match the IIA and
M-theory spectra.}. The puzzle can simply be resolved by choosing a different (coassociative)
fiber. Taking $\{ x_1, x_2, y_2, y_3\}$ for fiber coordinates, the $\mathbb{Z}_2$ transformations have
no shifts in these directions and the non-geometric Type IIA spectrum indeed matches the
M-theory spectrum.
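The shift bookkeeping behind this resolution can be made explicit. In the sketch below (Python; the parameter-to-coordinate assignment is read off from the generators displayed above), a good fiber choice is simply one containing none of the shifted coordinates:

```python
from fractions import Fraction
H = Fraction(1, 2)

# which coordinate each shift parameter displaces, read off from the
# generators: beta shifts x3 by b2 and y4 by b1; gamma shifts x1 by c5,
# x2 by c3 and y4 by c1
SHIFT_COORD = {'b1': 'y4', 'b2': 'x3', 'c1': 'y4', 'c3': 'x2', 'c5': 'x1'}

def shifted_coords(params):
    """The set of coordinates receiving a nonzero half-shift."""
    return {SHIFT_COORD[k] for k, v in params.items() if v != 0}

two_shift = {'b1': 0, 'b2': H, 'c1': H, 'c3': 0, 'c5': 0}

naive_fiber  = {'y1', 'y2', 'y3', 'y4'}
better_fiber = {'x1', 'x2', 'y2', 'y3'}
```

For the two-shift example only $x_3$ and $y_4$ are shifted, so the fiber $\{x_1, x_2, y_2, y_3\}$ is shift-free while the naive fiber $\{y_1,\dots,y_4\}$ is not.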
\newpage
\section{Compactifications with ${\bf E_n}$ singularities}
In this section, we list geometric orbifolds containing singularities other than $D_4$.
Non-geometric modifications of these orbifolds may be done similarly to the previous section.
For $D_4$ singularities, the constant shape of the fiber can be arbitrary. The main difference
in the $E_n$ case is that the fiber shape is determined by the symmetry group. In practice, this
means that for a two-torus fiber $\tau=i$ or $\tau = e^{i\pi/3}$.
\subsection{Orbifold limits of $K3$}
Simple warm-up examples are provided by considering $T^4 / \mathbb{Z}_n$ orbifolds. These have been
analyzed from the F-theory point of view in \cite{Dasgupta:1996ij}.
\vskip 0.5cm \noindent {\bf The ${\bf T^4 / \mathbb{Z}_3}$ orbifold.} The action of the generator of
the orbifold group is given by
\begin{equation}
\alpha: \, (z_1, z_2) \mapsto (e^{2\pi i/3}z_1, \, e^{-2\pi i/3}z_2)
\end{equation}
which respects the torus identifications
\begin{equation}
z_i \sim z_i+1 \sim z_i+e^{i\pi/3}
\end{equation}
The base is $T^2 / \mathbb{Z}_3$ and can be parametrized by $z_1$. It contains three $E_6$
singularities of deficit angle $4\pi / 3$. The monodromy around each of these is given by
\begin{equation}
\mathcal{M}_{E_6} = (ST)^2 = \left( \begin{array}{cc}
-1 & -1 \\
1 & 0
\end{array}
\right)
\end{equation}
A fundamental cell is shown in \fref{t4z3fund}.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=3.5cm,angle=0,origin=c]{t4z3.eps}
\caption{The base of the $T^4/\mathbb{Z}_3$ orbifold contains three $E_6$ singularities. }
\label{t4z3fund}
\end{center}
\end{figure}
\vskip 0.5cm \noindent {\bf The ${\bf T^4 / \mathbb{Z}_4}$ orbifold.} The generator of $\mathbb{Z}_4$ is given
by
\begin{equation}
\alpha: \, (z_1, z_2) \mapsto (i z_1, \, -i z_2)
\end{equation}
with the torus identifications
\begin{equation}
z_i \sim z_i+1 \sim z_i+i
\end{equation}
The base is $T^2 / \mathbb{Z}_4$. This orbifold contains two $E_7$ singularities and one $D_4$ singularity. They have
deficit angles $3\pi/2$ and $\pi$, respectively. The $E_7$ and $D_4$ monodromies are given by
\begin{equation}
\mathcal{M}_{E_7} = S = \left( \begin{array}{cc}
0 & -1 \\
1 & 0
\end{array}
\right)\qquad \mathcal{M}_{D_4} = \left( \begin{array}{cc}
-1 & 0 \\
0 & -1
\end{array}
\right)
\end{equation}
A fundamental cell is shown in \fref{t4z4fund}.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=4cm,angle=0,origin=c]{t4z4.eps}
\caption{The base of the $T^4/\mathbb{Z}_4$ orbifold contains two $E_7$ singularities and one $D_4$ singularity. }
\label{t4z4fund}
\end{center}
\end{figure}
\vskip 0.5cm \noindent {\bf The ${\bf T^4 / \mathbb{Z}_6}$ orbifold.} The base is $T^2 / \mathbb{Z}_6$. This
orbifold contains $E_8$, $E_6$ and $D_4$ singularities. The $E_8$ monodromy is given by
\begin{equation}
\mathcal{M}_{E_8} = ST = \left( \begin{array}{cc}
0 & -1 \\
1 & 1
\end{array}
\right)
\end{equation}
A fundamental cell is shown in \fref{t4z6fund}.
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=3.5cm,angle=0,origin=c]{t4z6.eps}
\caption{The base of the $T^4/\mathbb{Z}_6$ orbifold contains $E_8$, $E_6$ and $D_4$ singularities.
The three black dots denote one non-singular point.}
\label{t4z6fund}
\end{center}
\end{figure}
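The monodromy assignments of these three orbifold limits can be cross-checked with elementary $SL(2,\mathbb{Z})$ arithmetic: each monodromy must have the order of the corresponding rotation, and (in a suitable ordering) the product of the monodromies around all singular points of the compact base must be trivial. A Python sketch:

```python
def mul(A, B):
    """Product of two 2x2 integer matrices, stored as nested tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def power(A, n):
    out = ((1, 0), (0, 1))
    for _ in range(n):
        out = mul(out, A)
    return out

S = ((0, -1), (1, 0))
T = ((1, 1), (0, 1))
I2 = ((1, 0), (0, 1))

M_E8 = mul(S, T)          # ST
M_E6 = power(M_E8, 2)     # (ST)^2
M_E7 = S
M_D4 = ((-1, 0), (0, -1))
```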
\subsection{Example: $T^6 / \mathbb{Z}_3$}
We continue by discussing three dimensional examples. The simplest one is $T^6 / \mathbb{Z}_3$. This is
created by orbifolding the square $T^6$ by cyclic permutations of (complex) coordinates
\begin{equation}
\alpha: \, (z_1, z_2, z_3) \mapsto (z_2, z_3, z_1)
\end{equation}
Clearly, this action preserves the holomorphic volume form,
\begin{equation}
\Omega = dz_1 \wedge dz_2 \wedge dz_3
\end{equation}
and the K\"{a}hler\, form
\begin{equation}
\omega = \sum_i dz_i \wedge d\bar z_i
\end{equation}
Let us now choose the real parts of $z_i$ for the base coordinates. Before orbifolding, the base
is a cube as shown in \fref{kocka_e6}. The fixed loci of $\alpha$ are at $z_1=z_2=z_3$ that is
along a diagonal. The cube has a $\mathbb{Z}_3$ symmetry about this diagonal, and thus the orbifolding
procedure respects the torus identifications.
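The invariance of $\Omega$ and $\omega$ follows because the cyclic permutation is a real orthogonal map of determinant one. A minimal check (Python, for illustration):

```python
P = ((0, 1, 0),
     (0, 0, 1),
     (1, 0, 0))   # the cyclic map (z1, z2, z3) -> (z2, z3, z1)

def mul3(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def transpose(M):
    return tuple(zip(*M))

I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
```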
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=5.5cm,origin=c]{kocka_e6_3.eps}
\caption{The base of $T^6 / \mathbb{Z}_3$. The green line shows the $E_6$ singularity. Six triangles bound the domain.
The two triangles touching the singular green line are identified by folding. Two further triangles are identified according to the orientation
given by the arrows. The remaining two triangles are identified in a similar fashion.}
\label{kocka_e6}
\end{center}
\end{figure}
Since $\mathbb{Z}_3 \subset SU(2)$, this example preserves $\mathcal{N}=4$ supersymmetry in four
dimensions. By making the identifications of the bounding triangles, one can check that the only
singularity is $E_6$. It lies along the diagonal, which forms a closed loop in the base. Since
there are no other gravitating strings to curve the space, this is a good sign that the space
factorizes. In particular, we do not expect it to be an $S^3$.
\clearpage
\subsection{Example: $T^6 / \Delta_{12} $}
\label{d12}
A more complicated example is obtained by orbifolding $T^6 /(\mathbb{Z}_2)^2$ by the cyclic
permutations described above. These permutations do not commute with the sign flips and together they
give $\Delta_{12} \subset SU(3)$. This group has the faithful representation described by the
following matrices (see \cite{Greene:1998vz}, and also \cite{Berenstein:2000mb, Hanany:1998sd,
Feng:2000af}) which act on the $(z_1, z_2, z_3)$ complex coordinates
\begin{equation}
\left( \begin{array}{ccc}
(-1)^p & 0 & 0 \\
0 & (-1)^q & 0 \\
0 & 0 & (-1)^{p+q}
\end{array}
\right) \qquad
\left( \begin{array}{ccc}
0 & 0 & (-1)^p \\
(-1)^q & 0 & 0 \\
0 & (-1)^{p+q} & 0
\end{array}
\right) \qquad
\left( \begin{array}{ccc}
0 & (-1)^p & 0 \\
0 & 0 & (-1)^q \\
(-1)^{p+q} & 0 & 0
\end{array}
\right)
\end{equation}
It can be generated by two elements,
\begin{eqnarray*}
\alpha: \, (z_1, z_2, z_3) &\mapsto & (z_2, z_3, z_1) \\
\beta: \, (z_1, z_2, z_3) &\mapsto & (-z_1, -z_2, z_3)
\end{eqnarray*}
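The group structure is easy to verify by brute force: closing the two generators under matrix multiplication should reproduce exactly twelve matrices, all of determinant one. A Python sketch:

```python
def mul3(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def closure(gens):
    """Multiplicative closure of a finite set of invertible matrices."""
    group, frontier = set(gens), set(gens)
    while frontier:
        new = ({mul3(a, b) for a in group for b in frontier}
               | {mul3(a, b) for a in frontier for b in group})
        frontier = new - group
        group |= frontier
    return group

alpha = ((0, 1, 0), (0, 0, 1), (1, 0, 0))    # cyclic permutation
beta  = ((-1, 0, 0), (0, -1, 0), (0, 0, 1))  # two sign flips
Delta12 = closure({alpha, beta})
```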
The fundamental domain is shown in \fref{rhombic_e6}. There are two $E_6$ and four $D_4$
singularities in the base. They meet in $E_6$-$E_6$-$D_4$ and $D_4$-$D_4$-$D_4$ vertices. The
solid angles around these vertices are $\pi/3$ and $\pi$, respectively. The base is topologically
an $S^3$.
\begin{figure}[ht]
\begin{center}
\hskip -6cm
\includegraphics[totalheight=7cm,origin=c]{rhombic_e6.eps}
\vskip -5.5cm
\hskip 8cm
\includegraphics[totalheight=3.5cm,origin=c]{links5.eps}
\vskip 2cm
\caption{(i) The base of $T^6 /\Delta_{12}$. The red and green lines indicate $E_6$, $D_4$ singularities, respectively. The other edges are non-singular.
The solid green cube indicates the $D_4$ singularities of the original $T^6 / (\mathbb{Z}_2)^2$ orbifold. (ii) Schematic picture describing
the topology of the singular lines. See Appendix H for building this polyhedron at home.}
\label{rhombic_e6}
\end{center}
\end{figure}
\clearpage
\subsection{Example: $T^6 / (\mathbb{Z}_2)^2 \times \mathbb{Z}_4 $}
Another example is obtained from $T^6 / (\mathbb{Z}_2)^2$ by further orbifolding it by $\mathbb{Z}_4$. This
is possible because the rhombic dodecahedron has fourfold symmetry axes. These are the axes of
the green cube in \fref{cube_fold}.
\begin{figure}[ht]
\begin{center}
\hskip -7cm
\includegraphics[totalheight=6cm,origin=c]{rhombic_e7.eps}
\vskip -4.8cm
\hskip 8cm
\includegraphics[totalheight=3.0cm,origin=c]{links6.eps}
\vskip 2cm
\caption{(i) The base of $T^6 / (\mathbb{Z}_2)^2 \times \mathbb{Z}_4 $. The red and green lines indicate $E_7$, $D_4$ singularities, respectively. The other edges are non-singular.
The solid green cube indicates the $D_4$ singularities of the original $T^6 / (\mathbb{Z}_2)^2$ orbifold. (ii) Schematic picture describing
the topology of the singular lines.}
\label{rhombic_e7}
\end{center}
\end{figure}
The resulting base is shown in \fref{rhombic_e7}. There is one $E_7$ line which is topologically
a circle. In contrast to the $T^6 / \mathbb{Z}_3$ example, this happens because the other $D_4$
singularities curve the base and make this contractible loop a geodesic. The base only contains
familiar $D_4$-$D_4$-$D_4$ vertices.
\subsection{Example: $T^6 / \Delta_{24} $ }
Our final example can be constructed by first taking $T^6$. Its base is a cube with opposite
faces identified. We now place $E_7$ singularities on the twelve edges of the cube. We also add
diagonal $E_6$ singularities as in Section \ref{d12}. These are realized by the following
matrices which act on $(z_1, z_2, z_3)$ complex coordinates
\begin{equation}
\alpha_{E_6} = \left( \begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{array}
\right) \qquad \qquad
\beta_{E_7} = \left( \begin{array}{ccc}
0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{array}
\right)
\end{equation}
These generate the $\Delta_{24}$ group. Compared to $\Delta_{12}$, it also contains odd
permutations of the coordinates. Since odd permutations come with an odd number of minus signs,
the volume form is again invariant.
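As before, the claimed group can be checked by closing the generators under multiplication: one finds 24 signed permutation matrices, all of determinant one, containing the $\Delta_{12}$ of the previous example (since $\beta_{E_7}^2$ is the earlier two-sign-flip generator). A Python sketch:

```python
def mul3(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def closure(gens):
    """Multiplicative closure of a finite set of invertible matrices."""
    group, frontier = set(gens), set(gens)
    while frontier:
        new = ({mul3(a, b) for a in group for b in frontier}
               | {mul3(a, b) for a in frontier for b in group})
        frontier = new - group
        group |= frontier
    return group

alpha_E6 = ((0, 1, 0), (0, 0, 1), (1, 0, 0))   # cyclic permutation
beta_E7  = ((0, -1, 0), (1, 0, 0), (0, 0, 1))  # odd permutation, one sign
Delta24 = closure({alpha_E6, beta_E7})
```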
\begin{figure}[ht]
\begin{center}
\hskip -7cm
\includegraphics[totalheight=6.5cm,origin=c]{rhombic_d24f.eps}
\vskip -4.8cm
\hskip 8cm
\includegraphics[totalheight=3.0cm,origin=c]{links7.eps}
\vskip 2cm
\caption{(i) The base of $T^6 / \Delta_{24} $. The cyan, red and green lines indicate $E_7$, $E_6$ and $D_4$ singularities, respectively.
(ii) Schematic picture describing the topology of the singular lines. }
\label{rhombic_d24}
\end{center}
\end{figure}
In \fref{rhombic_d24}, the resulting base is shown. The green cube around the base is $1/8$ of
the original base of $T^6$. The faces should be folded as indicated by the arrows. The rear
faces touching $E_6$ should also be folded. This gives an $S^3$ with curvature concentrated along
the singular lines (see the right-hand side of the figure). The base contains two types of
composite vertices. One is an intersection of $E_7$, $E_6$ and $D_4$ edges. The other one comes
from the collision of an $E_7$ and two $D_4$ singularities.
\subsection{Non-geometric modifications}
Having discussed the geometric structure of the fibrations with exceptional singularities, we
can try to modify them into non-geometric spaces. Similarly to the examples in Section
\ref{nongeot6}, closed loops of $D_4$, $E_7$ and $E_8$ singularities\footnote{Since the
monodromy of $E_6$ is an order three modular transformation, adding a sign would make it order
six.} may be decorated with the action of $(-1)^{F_L}$. For example, $E_8={\tiny
\mtwo{0}{-1}{1}{1} \oplus \mtwo{1}{0}{0}{1}}$ with $(-1)^{F_L}$ has the same monodromy as a
composite of $A_2={\tiny \mtwo{0}{1}{-1}{-1}}$ and a $D'_4$ (which acts on the other $T^2
\subset T^4$). The tension four $A_2$ and the tension six $D'_4$ give the original deficit angle
of the tension ten $E_8$ (see Section \ref{twodimsec} for the Kodaira classification of singularities).
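The matrix content of this decomposition is a one-line check: representing the geometric part of $(-1)^{F_L}$ on the relevant $T^2$ as $-\mathbb{1}$, the $E_8$ monodromy is converted into exactly the $A_2$ monodromy, which is again of order three; the leftover $-\mathbb{1}$ on the other $T^2$, together with the $(-1)^{F_L}$ factor, is the $D'_4$. In Python:

```python
def mul(A, B):
    """Product of two 2x2 integer matrices, stored as nested tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

M_E8  = ((0, -1), (1, 1))    # ST, acting on the first T^2
M_A2  = ((0, 1), (-1, -1))
MINUS = ((-1, 0), (0, -1))   # geometric part of (-1)^{F_L} on that T^2
I2    = ((1, 0), (0, 1))
```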
The simplest example is to add $(-1)^{F_L}$ to the $D_4$ and one of the $E_7$ singularities of
the $T^4/\mathbb{Z}_4$ orbifold (\fref{t4z4fund}), or instead decorate both $E_7$ singularities. The
$T^4/\mathbb{Z}_6$ orbifold (\fref{t4z6fund}) can similarly be modified by adding $(-1)^{F_L}$ to the
$D_4$ and the $E_8$ singularities. By performing a single T-duality in the fiber, the
$T^4/\mathbb{Z}_4$ monodromies can be changed to act on $SL(2)_\rho$ instead of $SL(2)_\tau$. The
resulting Type IIB theory has a $D'_4=D_4 \times (-1)^{F_L}$ and two $E'_7$ singularities. The
$E'_7$ corresponds to a double T-duality and thus the background is globally non-geometric, even
though it has a geometric dual.
Turning to the three dimensional examples, $(-1)^{F_L}$ can be added to the $D_4$ loop of $T^6
/\Delta_{12}$ as shown in \fref{rhombic_e6_red}. This is obtained by orbifolding the last
example in \fref{allcubes}. The $D_4$ loops or the $E_7$ loop of $T^6 / (\mathbb{Z}_2)^2 \times \mathbb{Z}_4 $
can similarly be modified. An example is shown in \fref{rhombic_e7_red} where a single $D_4$ has
been changed into $D'_4$ corresponding to the first example of \fref{allcubes}. A single
T-duality on the geometric $T^6 / (\mathbb{Z}_2)^2 \times \mathbb{Z}_4$ gives Type IIB with a circle of $E'_7$
and thus the dual background is non-geometric. $T^6 / \Delta_{24} $ can similarly be modified
(\fref{rhombic_d24_red}).
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=4cm,origin=c]{links5red.eps}
\caption{Non-geometric $T^6 /\Delta_{12}$. The red lines indicate extra $(-1)^{F_L}$ factors.}
\label{rhombic_e6_red}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=3.5cm,origin=c]{links6red.eps}
\caption{Non-geometric $T^6 / (\mathbb{Z}_2)^2 \times \mathbb{Z}_4 $.}
\label{rhombic_e7_red}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=3.5cm,origin=c]{links7red.eps}
\caption{Non-geometric $T^6 / \Delta_{24}$. }
\label{rhombic_d24_red}
\end{center}
\end{figure}
These spaces can serve as perturbative string backgrounds. The consistency of these vacua,
however, needs further investigation.
\newpage
\section{Chiral Scherk-Schwarz reduction}
\label{chirals}
In previous sections, we studied non-geometric spaces mainly by using a $(-1)^{F_L}$ monodromy
around singular loci in the base. Another possibility is to have this transformation in the
fiber as a Wilson line. Fields still do not depend on the fiber coordinates, and in this sense
this is a (chiral) Scherk-Schwarz reduction.
\subsection{One dimension}
Let us consider Type IIA compactified on a circle with $(-1)^{F_L}$ Wilson line. This will be a
one-dimensional fiber. The configuration breaks half of the supersymmetry keeping sixteen
right-moving supercharges. In M-theory, $(-1)^{F_L}$ is described as reflection of $x^{11}$.
Hence, the background lifts to M-theory as compactification on a Klein bottle
\cite{Dabholkar:1996pc}.
An important feature of the background that one can try to exploit in the construction of
non-geometric spaces is that T-duality on the circle takes Type IIA to IIA (not IIB as usual)
\cite{Gutperle:2000bf, Hellerman:2005ja, Aharony:2007du}. Although the duality switches between
the $SO(8)$ spinor and conjugate spinor representations in the right-moving sector, it also
exchanges the untwisted and twisted R/NS sectors \cite{Hellerman:2005ja}. Therefore, when the
circle decompactifies, the two massless 10d gravitini have different chiralities and thus the
theory is still Type IIA.
At the self-dual radius, the bosonic string has additional massless states and one obtains the gauge
group $SU(2)\times SU(2)$. In Type II strings, these extra states are destroyed by the GSO
projection and one is left with $U(1)\times U(1)$ only. With the above Wilson line, however, an
extended $SU(2)\times U(1)$ gauge symmetry is obtained. In the effective theory, T-duality is
part of the $SU(2)$ gauge group and thus a T-duality monodromy can be regarded as a Wilson line.
A simple two-dimensional non-geometric space is obtained by compactifying on another base circle
with a monodromy that is a T-duality on the fiber circle. The consistency of this model has to
be further investigated.
\subsection{Two dimensions}
These ideas can be generalized by considering $T^n$ compactifications and turning on a
$(-1)^{F_L}$ Wilson line. This still preserves half of the supersymmetry. In order to glue
spaces, only those monodromies can be considered which preserve the Wilson lines, that is, the ``spin
structure'' of the $T^n$ fiber. Therefore, the perturbative duality group will be a proper
subgroup of $O(n,n,\mathbb{Z})$.
In the following, we consider the simplest examples where the base is taken to be two
dimensional and is parametrized by the complex coordinate $z$. The shape of the two-torus fiber
is described by the $\tau$ complex parameter. We take the Wilson line\footnote{ The case of
Wilson lines turned on for both fiber circles is the same since a modular $T$ transformation
converts the $(-,-)$ spin structure into $(-,+)$. } to be along the real direction denoted by
$x^9$. Then, along this coordinate axis, a single T-duality is possible. Applying the Buscher
rules, this duality is mirror symmetry for the two-torus fiber.
Let us denote the components of an arbitrary $SL(2,\mathbb{Z})$ element $M$ by
\begin{equation}
M=\left( \begin{array}{cc}
a & b \\
c & d
\end{array}
\right), \qquad ad-bc=1
\end{equation}
Geometric transformations must preserve the $(-,+)$ spin structure. If $(x,y)\in \mathbb{Z}^2$ denotes
the homotopy class of a one-cycle, then this constraint is equivalent to
\begin{equation}
(-1)^{x} = (-1)^{ax+by}
\end{equation}
that is
\begin{equation}
(a-1)x+by = 0 \quad (\textrm{mod} \ 2)
\end{equation}
Since $y$ is arbitrary, $b$ must be even. Then, $\textrm{det}\, M=1$ forces $a$ (and $d$) to be
odd and the above equation is satisfied. Therefore, the geometric part of the duality group is
the $\Gamma_0(2) \subset SL(2)$ congruence subgroup of index three. A maximal subgroup of it is
$\Gamma(2)$, which contains the matrices with even off-diagonal elements. $\Gamma_0(2)$ can be
generated by $\Gamma(2)$ and the $TST^{-1}$ transformation which exchanges two cycles in the
fiber. Its fundamental domain is shown in \fref{fundomgamma}. The full duality group contains
another copy of $\Gamma_0(2)$ for $\rho$, and a single T-duality along $x^9$.
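The claim that the spin-structure condition singles out the matrices with even $b$ can be tested by enumerating $SL(2,\mathbb{Z})$ matrices with bounded entries (a sketch; the bound is arbitrary):

```python
from itertools import product

def preserves_spin(M):
    """Check (-1)^x == (-1)^(a x + b y) for every homotopy class (x, y)."""
    (a, b), (c, d) = M   # c, d do not enter the parity condition
    return all(x % 2 == (a * x + b * y) % 2
               for x, y in product(range(2), repeat=2))

def sl2z(bound):
    """All SL(2,Z) matrices with entries in [-bound, bound]."""
    rng = range(-bound, bound + 1)
    for a, b, c, d in product(rng, repeat=4):
        if a * d - b * c == 1:
            yield ((a, b), (c, d))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

T, S, Tinv = ((1, 1), (0, 1)), ((0, -1), (1, 0)), ((1, -1), (0, 1))
TST = mul(mul(T, S), Tinv)
```

One also checks directly that $TST^{-1}$ preserves the spin structure, as used above, while $S$ alone does not.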
\begin{figure}[ht]
\begin{center}
\includegraphics[totalheight=6cm,angle=0,origin=c]{gamma02.eps}
\caption{Fundamental domain (gray area) for the action of the $\Gamma_0(2)$ on the upper half-plane.}
\label{fundomgamma}
\end{center}
\end{figure}
A geometric $K3$ fibration\footnote{ This $K3$ fibration has been used in the literature
(\cite{Berglund:1998va}, see also \cite{Bershadsky:1998vn}) to describe F-theory duals of 8d CHL
strings \cite{Chaudhuri:1995fk, Chaudhuri:1995bf}. Nine dimensional CHL strings are defined by
taking $E_8 \times E_8$ heterotic strings and orbifolding by a $\mathbb{Z}_2$ action which shifts the
ninth coordinate and interchanges the two $E_8$ factors. For a recent study of the moduli space
of nine dimensional theories with sixteen supercharges, see \cite{Aharony:2007du}. } with such
restricted transformations can be described by \cite{Berglund:1998va}
\begin{equation}
y^2 + x^4 +x^2 w^2 f_4(z) + w^4 g_8(z) =0
\end{equation}
where $(x,y,w) \in \mathbb{C} P^2_{1,2,1}$ and $f_4, g_8$ are holomorphic sections of degree 4 and 8,
respectively. The $j$-function is given by
\begin{equation}
j(\tau) = \frac{(f_4^2+12g_8)^3}{108 \, g_8(-f_4^2+4g_8)^2}
\end{equation}
The discriminant of the elliptic fibration vanishes generically at 16 points out of which 8 are
double zeros. The moduli space is ten dimensional, in contrast to the 18 dimensional space of
the cubic Weierstrass equation.
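The zero count of the discriminant can be verified on explicit sections. The sketch below uses the sample choices $f_4 = z^4$ and $g_8 = z^8 - 1$ (our own, generic enough for the count) and checks that the denominator of $j$, proportional to the discriminant locus, has 24 zeros with multiplicity, of which 16 are distinct and 8 are double:

```python
import sympy as sp

z = sp.symbols('z')

# Sample sections (our choice, generic enough for the zero count):
f4 = sp.Poly(z**4, z)          # degree-4 section
g8 = sp.Poly(z**8 - 1, z)      # degree-8 section

# Denominator of j (up to the constant 108): g8 * (4 g8 - f4^2)^2.
disc = g8 * (4 * g8 - f4**2) ** 2

# Distinct zeros = deg(disc) - deg(gcd(disc, disc')); the gcd collects the
# repeated factor 4 g8 - f4^2 responsible for the 8 double zeros.
repeated = sp.gcd(disc, disc.diff(z))
distinct = disc.degree() - repeated.degree()
print(disc.degree(), distinct, repeated.degree())  # 24 16 8
```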
As explained in Appendix B of \cite{Berglund:1998va}, the types of possible degenerations are
$A_n$, $D_n$ and $E_7$. The $K3$ geometry can reach the $T^4/\mathbb{Z}_2$ orbifold limit where four
$D_4$ singularities close the base into an $S^2$. The orbifold is then generated by
\begin{displaymath}
\begin{array}{llllrrrr}
\alpha &:& (x_1, x_2 \, | \, y_1, y_2) & \mapsto ( & {-x_1}, & -x_2 \, | \, & -y_1, & -y_2) \\
\beta &:& (x_1, x_2 \, | \, y_1, y_2) & \mapsto ( & {x_1}, & x_2 \, | \, & y_1, & \frac{1}{2}+y_2)
\end{array}
\end{displaymath}
with $\beta$ containing $(-1)^{F_L}$. This is the same theory as the asymmetric orbifold limit
of the $12+12'$ model of \cite{Hellerman:2002ax}. The anomaly free $\mathcal{N}=1$ 6d spectrum
contains a supergravity multiplet, nine tensor multiplets, eight vector multiplets and twenty
hypermultiplets. The strong coupling limit is M-theory on a $\mathbb{Z}_2$ orbifold\footnote{ This is
to be compared with the CHL string in six dimensions which is dual \cite{Bershadsky:1998vn} to
M-theory on
\begin{equation}
(K3 \times S^1)/ \left\{\sigma \cdot (y\rightarrow y+1/2) \right\}
\end{equation}
by utilizing the heterotic-Type II duality \cite{Witten:1995ex}.}
\begin{equation}
(K3 \times S^1)/ \left\{ \sigma \cdot (y\rightarrow -y) \right\}
\end{equation}
where $y\in[0,1)$ is the $S^1$ coordinate and $\sigma$ is an involution on $K3$ that acts with
eight fixed points. It preserves twelve of the harmonic $(1,1)$ forms and changes the sign of
the other eight harmonic $(1,1)$ forms. The spectrum computation \cite{Sen:1996tz} matches that
of the asymmetric orbifold.
The resolved $12+12'$ model used a doubly elliptic Weierstrass fibration over an $S^2$ base,
\begin{equation}
y^2=x^3+p_4(z)x+q_6(z) \qquad \tilde y^2=\tilde x^3+\tilde p_4(z)\tilde x+\tilde q_6(z)
\end{equation}
The constants in the polynomials give a 19 dimensional moduli space. In the above orbifold
limit, the complex base coordinate includes $y_2$ (which has the Wilson line). The $\Gamma_0(2)$
construction resolves the orbifold in a different `frame': it chooses a different set of base
coordinates, namely $x_1$ and $x_2$. It presumably slices out a different subspace in the full
moduli space of the model.
Finally, T-duality along the $x^9$ circle can also be considered. The $\tau(z)$ and $\rho(z)$
sections can be described by considering a doubly elliptic fibration over the base. In
\cite{Hellerman:2002ax}, the fiber tori were independent and thus $\tau(z)$ and $\rho(z)$ were
unrelated. For the present configuration with a Wilson line, however, a single T-duality can
exchange them and result in more complicated non-geometric spaces. The construction of such
backgrounds is left for future work.
\clearpage
\section{Conclusions}
\label{section_conclusions}
A perturbative vacuum of string theory is specified by a conformal field theory on the
worldsheet. Only in special cases will the CFT have a geometric description. Such cases include
flat space, Calabi-Yau and flux compactifications, which have been studied in great detail. The
development of a more systematic understanding of the set of consistent string vacua will
inevitably require the study of non-geometric compactifications.
String dualities allow for the construction of string vacua that are locally geometric but not
necessarily manifolds globally. Using this idea, we have constructed non-geometric
compactifications preserving $\mathcal{N} = 1$ supersymmetry in four dimensions. In the two
dimensional case, the Weierstrass equation with holomorphic coefficients solves the equation of
motion and allows for sharing the $\mathbb{Z}_4$ and $\mathbb{Z}_6$ orbifold points which is necessary for
$SU(2)$ holonomy. Since an appropriate generalization of the Weierstrass equation was not at our
disposal, we were only able to describe such spaces at the asymmetric orbifold point in their
moduli space. A strong motivation for departing the flat-base limit is that it presumably
generates a non-trivial potential for the overall volume modulus. Note that for $D_4$
singularities, the size of the fiber is an arbitrary free parameter which (typically) runs to
large volume.
Although our explicit examples were all orbifolds, in principle, it is possible to build
non-orbifold examples by means of $D_4$ and $E_n$ singularities. Since the base in this case is
flat, it could be obtained by gluing various polyhedra along their faces. By carefully choosing
the dihedral angles of the building blocks, one can create the appropriate deficit angles for
the edges. However, it is not easy to satisfy the constraints on monodromies coming from
supersymmetry and the constructions quickly get complicated.
A first step in this direction would be to find a basis of building blocks
sufficient to reconstruct even the orbifold examples.
By the relation discussed in
Section \ref{UdualityandGtwo}, such spaces would presumably give new examples of $G_2$
manifolds.
In Appendix \ref{aspectrum2}, Type IIA string theory has been compactified in a non-geometric
way on the ``one-shift'' $T^7/(\mathbb{Z}_2)^3$ orbifold down to four dimensions. The massless spectrum
is equivalent to that of the M-theory compactification on a particular resolution of this
orbifold with $(b_2, b_3)=(16,71)$. The orbifold has, however, numerous other resolutions with
very different Betti numbers \cite{Joyce:book}. It would be interesting to see whether these
other resolutions arise in Type IIA by the introduction of discrete torsion (and possibly
NS5-branes).
In the other direction, the result of Section 3.4 shows that a general $T^3$-fibration with
$SO(3,3,\mathbb{Z})$ T-duality monodromies has a globally geometric M-theory dual. This is striking
given the difficulty of describing such creatures from the string theory point of view. More
generic constructions with $SL(5)$ monodromies presumably have no duality frame where they are
globally geometric.
In this paper, we have focussed on compactifications where the monodromy group was a subgroup of
the perturbative duality group. There is no obstacle in principle to the extension of the
monodromy group to include the full $SL(5)$ U-duality group\footnote{An early attempt to
geometrize such examples was made in \cite{Kumar:1996zx}.}. In this manner one can extend these
techniques to include in the compactification Ramond-Ramond fields, D-branes and orientifolds,
and presumably to find vacua with no massless scalars. In Appendices C and D we build confidence
that such objects can be treated consistently in the semiflat approximation by rederiving from
this viewpoint the Hanany-Witten brane-creation effect and the duality between M-theory on
$T^5/\mathbb{Z}_2$ and type IIB on K3. Although we studied vacua of Type II string theory, the
discussion can be applied to heterotic strings as well where the duality group $O(16+d, d)$ is
much larger \cite{Flournoy:2004vn}.
Another interesting direction is to study what happens upon leaving the large complex structure limit. Our
special flat-base examples had a worldsheet description as modular invariant asymmetric
orbifolds. However, in the generic case, this powerful tool is missing. Any available tools,
such as the gauged linear sigma model \cite{Witten:1993yc}, should be brought to bear on this
problem.
In \cite{Greene:1989ya} it is proved for the $T^2$-fibered case that a solution in the semiflat
approximation determines an exact solution. While the power of holomorphy is lacking in the
$T^3$-fibered case, the physical motivation for this statement \cite{Hellerman:2002ax} remains.
The idea is that the violations of the semiflat approximation are localized in the base, and we
have a microscopic description of the degenerations, as D-branes or NS-branes or as parts of
well-understood CY manifolds or orbifolds or U-duality images of these things.
It is expected that the singular edges in the base transform into ribbon graphs as we move away
from the semi-flat limit \cite{Joyce:2000ru, dave}. It seems possible that one can construct
local (in the base) invariants of the fibration which give `NUT charges' \cite{Hull:1997kt}.
These invariants, which are analogous to the number of seven-branes in the stringy cosmic
strings construction, appear in the \cite{Shelton:2005cf} mirror-symmetry-covariant
superpotential.
\vskip 0.5cm
{\bf Acknowledgements}
We thank Allan Adams, Henriette Elvang, Mark Hertzberg, Bal\'azs K\H{o}m\H{u}ves, Vijay Kumar,
Albion Lawrence, Ruben Minasian, Dave Morrison, Washington Taylor and Alessandro Tomasiello for
discussions and comments on the draft. JM acknowledges early conversations on related matters
with S. Hellerman in 2003. This work is supported in part by funds provided by the U.S.
Department of Energy (D.O.E.) under cooperative research agreement DE-FG0205ER41360.
\section*{Introduction} We consider the following problem. Let $X$ be a smooth complex projective variety of dimension $n>1$, with an ample divisor $L$. For each positive integer $s<n$,
describe the set $\mathcal P_{X,s}$ of geometric genera of irreducible subvarieties $V\subset X$ of dimension $s$, and, in particular, the subset $\mathcal P_{X,L,s}\subseteq \mathcal P_{X,s}$
of geometric genera of irreducible complete intersections of $n-s$ hypersurfaces from $\bigcup_{m\geqslant1} |mL|$. The
complement of each of these sets in $\{0\}\cup\N$ is the corresponding
\emph{set of $s$--gaps}, and its maximal intervals are called \emph{$s$--gap
intervals}. For curves on a very general surface $X$ in $\PP^3$ of degree $d$ (i.e., $n=2$, $s=1$) with a natural polarization $\mathcal O_X(1)$ the two sets $\mathcal P_{X,\mathcal O_X(1),1}$ and $\mathcal P_{X,1}$
coincide; the initial gap interval was found in \cite{Xu1} and the next one in \cite{CFZ}. In this case
there exists a maximum $G_d$ for the set of gaps (\cite[Thm.~2.4]{CFZ} and Remark \ref {rem:compareflam2} below). This means that a very general surface of degree $d$ in $\PP^3$ carries a curve of geometric genus $g$ for any $g> G_d$. In the present note we show that the latter remains true for any smooth projective variety, and in particular, for \emph{any} (not just for a very general) smooth surface of degree $d$ in $\PP^3$. One of our main results is the following:
\begin {thm}\label{thm:main} Let $X$ be an irreducible, smooth, projective variety of dimension $n>1$, let $L$ be a very ample divisor on $X$ and let $s\in\{1,\ldots,n-1\}$.
Then there is an integer $p_{X,L,s}$ (depending on $X$, $L$ and $s$) such that
for any $p\geqslant p_{X,L,s}$ one can find an irreducible subvariety $Y$ of $X$ of dimension $s$ with at most ordinary points of multiplicity $s+1$ as singularities such that $p_g(Y)=p$. Moreover, one can choose $Y$ to be a complete intersection $Y=D_1\cap\ldots\cap D_{n-s}$, where $D_i\in |L|$ for $i=1,\ldots,n-s-1$ are smooth and transversal and $D_{n-s}\in |mL|$ for some $m\geqslant1$ is such that $Y$ has ordinary singularities of multiplicity $s+1$.
\end{thm}
Let $Y$ be an irreducible variety of dimension $s$. A point $y\in Y$ is \emph{ordinary of multiplicity} $m$ ($m>1$), if\\
\begin{inparaenum}
\item [(i)] the Zariski tangent space of $Y$ at $y$ has dimension $s +1$, and\\
\item [(ii)] the (affine) tangent cone to $Y$ at $y$ is a cone with vertex $y$ over a smooth hypersurface of degree $m$ in $\mathbb P^ {s}$.
\end{inparaenum}
An ordinary point of $Y$ is an isolated hypersurface singularity, hence, it is Gorenstein.
The proof of Theorem \ref{thm:main} is done in Section \ref{S:nfolds}. In Section \ref{S:surf} we deduce an effective upper bound for gaps in the surface case. In Section \ref{S:surfP3}, we focus on smooth surfaces in $\PP^3$, proving in particular that in this case there is no \emph{absolute gap} for geometric genera of curves. That is, for all $d>0$, all non--negative integers are geometric genera for some curves lying on some smooth surfaces of degree $d$ in $\mathbb P^ 3$.
\subsection* {Notation and conventions} We work over the field of complex numbers and use standard notation and terminology.
In particular, for $X$ a reduced, irreducible, projective variety,
we denote by $\omega_X$ its dualizing sheaf. We will sometimes
abuse notation and use the same symbol to denote a divisor $D$ on
$X$ and its class in ${\rm Pic}(X)$. Thus $K_X$ will denote a
canonical divisor or the canonical sheaf $\omega_X$. When $Y \subset X$ is a closed subscheme,
$\mathcal I_{Y/X}$ will denote its ideal sheaf.
\section{Upper bound for gaps}\label{S:nfolds}
\subsection{Preliminaries}
In the sequel, $X$ is an irreducible, complex projective variety of dimension $n\geqslant 2$.
We assume usually that $X$ is Gorenstein, so that $\omega_X$ is a line bundle. This holds, in particular, if $X$ has only ordinary singularities.
We set
\[
p(X):=h^ 0(X,\omega_{X})\,\,\,\mbox{and}\,\,\, q(X):=h^ 1(X,\omega_{X})\,.
\]
For smooth varieties, both $p(X)$ and $q(X)$ are birational invariants. Note that, if $X$ is a smooth surface, then $q(X)$ is the \emph{irregularity} of $X$.
The \emph{geometric genus} of $X$ is defined as
\[
p_g(X):= p(X'),
\]
where $X'\to X$ is any desingularization of $X$.
\begin{lem}\label{lem:pg} Let $X$ be an irreducible, smooth projective variety of dimension $n$, and let $Y$ be an irreducible, effective divisor on $X$. Assume that $h^ i(X,\omega_X\otimes \mathcal O_X(Y))=0$ for all $i\geqslant 1$ \footnote{By the Kawamata--Viehweg vanishing theorem this holds provided
$Y$ is nef and big (in particular, for $Y$ ample).}. Then:\\
\begin{inparaenum}
\item [(i)] one has
\[
p(Y)=h^ 0(X,\omega_X\otimes \mathcal O_X(Y))+q(X)-p_g(X)\,,
\]
which is the geometric genus if $Y$ is smooth;\\
\item [(ii)] suppose that ${\rm Sing}(Y)=\{x_1,\ldots,x_k\}$, where $x_1,\ldots,x_k$ are ordinary points of $Y$ of multiplicity $n$. Then
\[
p_g(Y)\geqslant p(Y) -k,
\]
and the equality holds if and only if $x_1,\ldots,x_k$ impose $k$ independent conditions to the linear system $|\omega_X\otimes \mathcal O_X(Y)|$, i.e., if and only if the restriction map
\begin{equation}\label{eq:pp}
H^ 0(X,\omega_X\otimes \mathcal O_X(Y))\longrightarrow \bigoplus_{i=1}^ k \mathcal O_{x_i}
\end{equation}
is surjective.
\end{inparaenum}
\end{lem}
\begin{proof} Part (i) follows from the \emph{adjunction sequence}
\[
0\longrightarrow \omega_X \longrightarrow \omega_X\otimes \mathcal O_X(Y)\longrightarrow
\omega_X\otimes \mathcal O_X(Y)\otimes \mathcal O_Y\cong \omega_Y\longrightarrow 0.
\]
As for part (ii), let $\pi: X'\to X$ be the blow-up of $X$ at $x_1,\ldots,x_k$ with exceptional divisors $E_1,\ldots, E_k$. Set $E=\sum_{i=1}^ k E_i$. The union $x$ of $x_1,\ldots,x_k$ is a 0--dimensional subscheme of $X$. The proper transform $Y'$ of $Y$ in $X'$ is smooth and belongs to the linear system $|\pi^ *(\mathcal O_X(Y))\otimes \mathcal O_{X'}(-nE)|$, whereas $\omega_{X'}=\pi^ *(\omega_X)\otimes \mathcal O_{X'}((n-1)E)$. Hence, by (i), one has
\[
\begin{split}
p_g(Y)=p_g(Y')&= h^ 0(X',\omega_{X'}\otimes \mathcal O_{X'}(Y'))+q(X')-p_g(X') \\
\, &=h^ 0(X',\pi^ *(\omega_{X}\otimes \mathcal O_{X}(Y))\otimes \mathcal O_{X'}(-E))+q(X)-p_g(X)\\
\, &=h^ 0(X,\omega_{X}\otimes \mathcal O_{X}(Y)\otimes \mathcal I_{x/X})+q(X)-p_g(X)\,.
\end{split}
\]
Now the assertion follows.\end{proof}
\begin{lem}\label{lem:sec} Let $X\subset \mathbb P^ r$ be a non--degenerate, irreducible projective variety of dimension $n$. Let $x_1,\ldots, x_k\in X$ be general points.
If
$k\leqslant r-n=\codim_{\PP^ r} (X)$,
then the scheme theoretical intersection of the linear space $\langle x_1,\ldots, x_k\rangle$ with $X$ is the reduced $0$--dimensional scheme consisting of $x_1,\ldots, x_k$.
\end{lem}
\begin{proof} The assertion is trivial for $k=1$, so we assume $k\geqslant 2$. For $n=1$ and $k=2$, this is the classical \emph{trisecant lemma}, to the effect that a general chord of a non--degenerate curve in $\mathbb P^ r$, where $r\geqslant 3$, is not a trisecant (see, e.g., \cite [Example 1.8]{CC} for a simple proof). If $n=1$ and $k>2$, one proceeds by applying induction on $k$ to the projection of $X$ to $\mathbb P^ {r-1}$ from one of the points $x_1,\ldots, x_k$.
If $n>1$, one proceeds by applying induction on $n$ to the section of $X$ with a general hyperplane containing $\langle x_1,\ldots, x_k\rangle$. \end{proof}
\subsection{The theorem}
\begin {thm}\label{thm:main-1} Let $X$ be an irreducible, smooth, projective variety of dimension $n>1$, and let $L$ be a very ample line bundle on $X$.
Then there is an integer $p_{X,L}$ (depending on $X$ and $L$) such that for all $p\geqslant p_{X,L}$ one can find an irreducible hypersurface $Y\in \bigcup_{m \geqslant1}|mL|$ with at most ordinary points of multiplicity $n$ as singularities and with $p_g(Y)=p$.
\end{thm}
\begin{proof}
Set $ d:= L^ n$. For a positive integer $m$ we denote by $p_m$ the geometric genus of smooth elements in $|mL|$ (which is of course a non--gap). We show that for $m$ sufficiently large, any integer $p$ in the interval $[p_{m-1}+1, p_m-1]$ is the geometric genus of a hypersurface in $|mL|$ with $p_m-p$ ordinary points of multiplicity $n$ as singularities, which can be taken generically on $X$.
Since $L$ is very ample, by Lemma \ref {lem:pg}--(i) and by the asymptotic Riemann--Roch Theorem \cite [Vol.\;I, p. 21]{L}, we have
\begin{eqnarray}\label{eq:pd}
p_m & = & \chi(\omega_{X}\otimes \mathcal O_{X}(mL))+q(X)-p_g(X)\\
& = & h^0(\omega_{X}\otimes \mathcal O_{X}(mL))+q(X)-p_g(X)\nonumber\\
& = & \frac {m^ n}{n!} d+ O(m^ {n-1})\,. \nonumber
\end{eqnarray}
Hence
\begin{equation}\label{eq:delta}
\delta_m:=p_m-p_{m-1}-1=\frac {m^ {n-1}}{(n-1)!} d + O(m^ {n-2})\,.
\end{equation}
Theorem \ref{thm:main-1} follows from the:
\medskip
\noindent {\bf Claim 1}. \emph{There is an integer $m_{X,L}$ (depending on $X$ and $L$) such that for all $m\geqslant m_{X,L}$, for all positive integers $k\leqslant \delta_m$, and for general points $x_1,\ldots, x_k$
in $X$, one can find an irreducible element $Y\in |mL|$ with ordinary points of multiplicity $n$ at
$x_1,\ldots, x_k$ and no other singularity.}
\medskip
Indeed, suppose that Claim 1 holds. Then the map \eqref{eq:pp} is surjective by the generality of $x_1,\ldots, x_k$. Thus Lemma \ref {lem:pg}--(ii) implies Theorem \ref{thm:main-1} with
\begin{equation*}\label{pX}
p_{X,L}:=p_{m_{X,L}-1}.
\end{equation*}
In turn, Claim 1 is a consequence of the following
\medskip
\noindent {\bf Claim 2}. \emph{There is an integer $m_{X,L}\geqslant n$ such that for all $m\geqslant m_{X,L}$, one has}
\begin{equation}\label{eq:r1}
\delta_m \leqslant\dim(|\nu L|)-n\,,
\quad\mbox{where}\quad m=n\nu + \mu\quad \mbox {with}\quad \mu \in \{0, \ldots, n-1\}\, .
\end{equation}
Indeed, assuming that Claim 2 holds, let $x$ be the reduced $0$--dimensional scheme formed by the points $x_1,\ldots, x_k$, and let $\Lambda: =\nu L\otimes \mathcal I_{x/X}$.
By Lemma \ref {lem:sec}, \eqref{eq:r1} ensures that $x$ is the base locus scheme of the linear system $|\Lambda|$. Therefore, by Bertini's theorem the general $Y\in |\Lambda^{\otimes n}\otimes \mathcal O_X(\mu L)|\subset |mL|$ is irreducible, having $x_1,\ldots, x_k$ as ordinary points of multiplicity $n$ and no other singularity.
Thus, Claim 2 implies Claim 1.
\medskip
Finally, we prove Claim 2.
\begin{proof}[Proof of Claim 2]
By the asymptotic Riemann--Roch Theorem (cf.\;\eqref{eq:pd}), one has
\[
\dim(|\nu L|)=\frac {\nu^ n}{n!}d + O(\nu^ {n-1})\,.
\] Hence, by \eqref {eq:delta}, Claim 2 holds if, for $m \gg 0$, one has
\begin{equation}\label{eq:nu}
n\,m^ {n-1}<\nu^ n\,.
\end{equation}
Since $\frac{m}{n}<\nu+1$, \eqref {eq:nu}
is true for $m\gg 0$.
\end{proof}
This ends the proof of Theorem \ref{thm:main-1}.
\end{proof}
\begin{rem}\label{rem:asymreal} As follows from the proof, the upper bound $p_{X,L}$ depends only on the Hilbert polynomial of $\bigoplus_{m\geqslant1} H^0(X, \omega_X \otimes \cO_X(mL))$ and of $\bigoplus_{m\geqslant1} H^0(X,\cO_X(mL))$. The former coincides with the Hilbert function by Kodaira's Theorem.
Assuming that $h^i(X,\cO_X(mL))=0$ for all positive integers $m$ and $i$, it is possible to replace the asymptotic Riemann--Roch theorem with the true Riemann--Roch, which is then purely numerical. This gives in principle an effective bound on the integers $m_{X,L}$ and $p_{X,L}$ in Theorem \ref {thm:main-1} (cf.\; Section \ref {S:surf} for a particular case).
\end{rem}
\noindent \emph{Proof of Theorem \ref{thm:main}.}
With $X$, $L$, $n$, and $s$ as in Theorem \ref{thm:main}, it
suffices to apply Theorem \ref{thm:main-1} to $X'=D_1\cap\ldots\cap D_{n-s-1}$ instead of $X$ and $L|_{X'}$ instead of $L$, where $D_1,\ldots, D_{n-s-1}\in |L|$ are general.
\qed
\section{
Genera of curves on smooth surfaces}\label{S:surf} In this section we compute an effective upper bound for gaps of geometric genera of curves on surfaces.
\bthm\label{thm:upper-bound} Let $S$ be a smooth, irreducible, projective surface, and $L$ a very ample line bundle on $S$. Set
\begin{equation*}\label{eq: notation} p:=p_g(S),\quad q:=q(S),\quad d:=L^2,\quad\mbox{and}\quad e:=K_S\cdot L\,.\end{equation*} For $\epsilon \in \{0,1\}$, set
\begin{equation}\label{eq:Deltae}
\Delta(\epsilon):= 4 (3 + 2 \epsilon) d^2 + 12 de + e^2 - 8 d (p-q),
\end{equation}
\be\label{eq:na} n_1 =n_1(\epsilon):=
\begin{cases} \, \, 2 & \,\,\,\mbox{if}\,\,\,\Delta(\epsilon) < 0,\\
\left\lceil 4 + \epsilon + \frac{e}{d} + \sqrt{\frac{\Delta(\epsilon)}{d^2}} \right\rceil& \,\,\,\mbox{if\,} \,\,\,\Delta(\epsilon) \geqslant 0,\end{cases}
\ee
\be\label{eq:nb} n_2 =n_2(\epsilon):=
\left\lceil \frac{6(p-q) + d (1+\epsilon) + e (2 \epsilon -1)-12}{e + 2d(1+\epsilon)} \right\rceil\,,
\ee
\be\label{eq:nbb} n_3 := \min\left\{n\in \mathbb N\,| \left\lfloor \frac n2\right\rfloor^ 2d>nd-\frac {d-e}2-1\right\}\,,
\ee
\be\label{eq:vanishing}
n_4:=\min\left\{n\in \mathbb N\,\vert\, h^1(S,\cO_S(nL))=h^2(S,\cO_S(nL))=0\right\}\, ,
\ee
and
\be\label{eq:n0} n_0 =n_0(\epsilon):= {\rm max} \{n_1(\epsilon), n_2(\epsilon),n_3, n_4\} \,.
\ee
Set finally
\be\label{eq:phi}
\varphi(d,e,n_0)= \frac{1}{2} \left[(n_0-1)((n_0-1)d + e)\right] + 1\,.\ee
Then for any $g \geqslant \varphi(d,e, n_0)$ the surface $S$ carries
a reduced, irreducible curve $C$ of geometric genus $g$ with only nodes as singularities.
\ethm
The proof of Theorem \ref{thm:upper-bound} is basically the same as the one of Theorem \ref {thm:main-1} in the case of surfaces, with a slight improvement, based upon the following:
\bthm\label{thm:Ter-lemma} {\rm (\cite[Thm.\ 1.4]{CC},
\cite[Thm.\ 1.3]{CR})} Let $X \subset \PP^r$ be an irreducible,
projective, non--degenerate variety of dimension $m$.
Assume $X$ is not $k$--weakly defective for a given $k \geqslant 0$ such that
\be\label{eq:ter-lem}
r \geqslant(m+1)(k+1)\,.
\ee Then, given general points $p_0,\ldots,p_k$ on $X$, the general hyperplane $H$
containing $T_{X,p_0,\ldots,p_k}$\footnote{$T_{X,p_0,\ldots,p_k}$ stands for the linear span of the union of the embedded tangent spaces $T_{X,p_i}$, $i=0,\ldots,k$.} is
tangent to $X$ only at $p_0,\ldots,p_k$. Such a hyperplane $H$
cuts out on $X$ a divisor with ordinary double points at
$p_0,\ldots,p_k$ and no further singularities.
\ethm
Recall (see \cite[p.\,152]{CC1}) that a variety $X$ as in Theorem \ref{thm:Ter-lemma} is
said to be \emph{$k$-weakly defective} if, given $p_0,\ldots,p_k\in X$ general points and
a general hyperplane $H$ containing $T_{X,p_0,\ldots,p_k}$ (i.e., \emph{tangent} to $X$ at $p_0,\ldots,p_k$),
then $H$ cuts out on $X$ a divisor $H_X$ such that there is a positive dimensional subvariety
$\Sigma \subseteq {\rm Sing}(H_X)$ containing $p_0,\ldots,p_k$
($\Sigma$ is then called the {\em contact variety of} $H$).
\subsection{Proof of Theorem \ref{thm:upper-bound}}\label{ss:nota}
The arithmetic genus of curves in $|nL|$ is
\begin{equation}\label{eq:pa}
p(d,e,n):=\frac{1}{2}n(nd+e)+1\,.
\end{equation}
For $n\geqslant n_0$ set
\be\label{eq:l}
l(d,e,n):=\dim(|nL|) =\frac{1}{2}n(nd-e)+p-q\,,\ee
where the latter equality follows by the Riemann--Roch Theorem
and \eqref {eq:vanishing}, since we assume $n\geqslant n_0\geqslant n_4$. Consider the
embedding $$\phi_{|nL|}\colon S\hookrightarrow \PP^{l(d,e,n)}\,.$$ Since $\phi_{|nL|}$ is an isomorphism of $S$ to its image $S_n$, we may identify $S$ with
$S_n$.
Set
\be\label{eq:delta2}
\delta(d,e,n):=p(d,e,n)-p(d,e,n-1)-1=nd-\frac{1}{2}(d-e)-1\,.
\ee
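The second equality in \eqref{eq:delta2} follows by a one-line computation; a symbolic sanity check (not part of the proof):

```python
import sympy as sp

d, e, n = sp.symbols('d e n')

p = sp.Rational(1, 2) * n * (n * d + e) + 1   # arithmetic genus p(d, e, n)
delta = p - p.subs(n, n - 1) - 1              # first expression in (eq:delta2)
closed_form = n * d - sp.Rational(1, 2) * (d - e) - 1

print(sp.simplify(delta - closed_form))  # 0
```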
As in the proof of Theorem \ref {thm:main-1}, we show that for any $n \geqslant n_0$ and any positive integer $k\leqslant\delta(d,e,n)-1$, one can find an irreducible curve $C\in
|nL|$ with exactly $k+1$ nodes at general points of $S$ as its only singularities. Then, for any $n \geqslant n_0$, all the integers in the interval
$J_n=[p(d,e,n-1),\,p(d,e,n)]$ are non-gaps.
Since the intervals $J_n$ and $J_{n+1}$ overlap, this proves
Theorem \ref{thm:upper-bound}, because
\begin{equation}\label{eq:phi2}
\varphi(d,e, n_0):=\min (J_{n_0})=p(d,e,n_0-1)\,
\end{equation} is exactly \eqref{eq:phi}.
The proof follows by Proposition \ref{prop:proof-thm} and Lemma \ref{lem:inequality} below (which are of independent interest).
\bprop\label{prop:proof-thm} Let $S$ be a smooth, irreducible, projective surface, and $L$ a very ample line bundle on $S$. Assume that $n \geqslant \max \{n_3,2\}$ and that (with the above notation) the following inequalities hold
\begin{equation}\label{eq: bound} l(d,e,n)\geqslant3(\delta(d,e,n)-1)
\,\end{equation} and
\begin{equation}\label{eq: bound-1} l(d,e,\lfloor n/2\rfloor)\geqslant\delta(d,e,n)+1\,.\end{equation}
Then for any
$k\in\{0,\ldots,\delta(d,e,n)-1\}$,\\ \begin{inparaenum}
\item[(a)] the smooth surface $S_n\subset \PP^{l(d,e,n)}$ is not $k$-weakly defective, and\\
\item[(b)] there exists a reduced, irreducible curve $C\in |nL|$
in $S$ with nodes at $k+1$ general points of $S$ and no other singularity.
\end{inparaenum}
\eprop
\bproof Let $x_0, \ldots, x_k$ be general points of $S$.
Inequality (\ref{eq: bound-1}) guarantees that, for any $k\in\{0,\ldots,\delta(d,e,n)-1\}$, one has
$$\dim\,|\cO_S(\lfloor n/2\rfloor L)\otimes \mathcal I_{\{x_0,\ldots, x_k\}/S}|=l(d,e,\lfloor n/2\rfloor)-k-1\geqslant
l(d,e,\lfloor n/2\rfloor)-\delta(d,e,n)
\geqslant1\,.$$
The general curve in $|\cO_S(\lfloor n/2\rfloor L)\otimes \mathcal I_{\{x_0,\ldots, x_k\}/S}|$
is reduced and irreducible. Letting $C_1$ and $C_2$ be two different such general curves, and $C_0$ a general member of $L$, we obtain a divisor
$$C=\varepsilon C_0+C_1+C_2\in |\cO_S(nL)\otimes \mathcal I_{T_{S,x_0,\ldots, x_k}/\mathbb P^ {l(d,e,n)}}|\,,$$ where $\varepsilon\in\{0,1\}$, $\varepsilon\equiv n\mod 2$.
Since $C$ is reduced, with nodes at $x_0,\ldots,x_k$, this shows that (a) holds.
Now (b) follows. Indeed, since $k+1\leqslant \delta(d,e,n)$,
(\ref{eq: bound}) yields (\ref{eq:ter-lem}) with $m=2$ and $r=l(d,e,n)$. Hence Theorem \ref{thm:Ter-lemma} applies, and so, the general curve in $|\cO_S(nL)\otimes \mathcal I_{T_{S,x_0,\ldots, x_k}/\mathbb P^ {l(d,e,n)}}|$ has nodes at $x_0,\ldots, x_k$ and is elsewhere smooth. This curve is irreducible by Bertini's theorem. Indeed, if $n$ is odd, then $|\cO_S(nL)\otimes \mathcal I_{T_{S,x_0,\ldots, x_k}/\mathbb P^ {l(d,e,n)}}|$ has no fixed component and is not composed with a pencil. Assume that $n$ is even. By \eqref {eq:nbb},
$$C_1\cdot C_2= \frac {n^ 2}4 d>\delta(d,e,n)\geqslant k+1\,,$$
which motivates \eqref {eq:nbb}.
So, the general curve in $|\cO_S(nL)\otimes \mathcal I_{T_{S,x_0,\ldots, x_k}/\mathbb P^ {l(d,e,n)}}|$, being singular only at $x_0,\ldots, x_k$, cannot be of the form $C_1+C_2$, hence it must be irreducible. \eproof
\blem\label{lem:inequality} Let $\epsilon \in \{0,1\}$ be such that $\epsilon \equiv n \pmod{2}$, and let $\Delta(\epsilon)$ be as in \eqref{eq:Deltae}. Then\\
\begin{inparaenum}\item[(a)] {\rm (\ref{eq: bound-1})} holds for any
$n\geqslant n_1$, with $n_1$ as in {\rm \eqref{eq:na}}; \\
\item[(b)] if {\rm (\ref{eq: bound-1})} holds, then also {\rm (\ref{eq: bound})} holds, provided
that $n \geqslant n_2$, with $n_2$ as in {\rm \eqref{eq:nb}}.
\end{inparaenum}\elem
\bproof (a) Write $n = 2 t + \epsilon$, with $t \geqslant 1$ since $n \geqslant n_1\geqslant 2$. From \eqref{eq:l} and \eqref{eq:delta2},
\eqref{eq: bound-1} reads
\begin{equation}\label{eq:aiuto1}
t^2 d - t(4d+e) + 2(p- q) - e + (1-2\epsilon) d\geqslant 0\,.
\end{equation}
The discriminant of the left hand side is $\Delta(\epsilon)$ as in \eqref{eq:Deltae}.
When $\Delta(\epsilon) \geqslant 0$, \eqref{eq:aiuto1} holds for $t \geqslant \frac{4d+e + \sqrt{\Delta(\epsilon)}}{2d}$, and so (\ref{eq: bound-1}) holds for
{\small
$$n \geqslant 4 + \epsilon + \frac{e}{d} +\sqrt{\frac{\Delta(\epsilon)}{d^2}}\,.$$
}If $\Delta(\epsilon) < 0$, then (\ref{eq: bound-1}) holds for any $n \geqslant 2$. This motivates the definition of $n_1$ in \eqref{eq:na} and proves (a).
\smallskip
\noindent
(b) As above, (\ref{eq: bound}) reads
\begin{equation}\label{eq:aiuto2}
n^2 d - n(6d+e) + 2 (p - q) + 3(d - e) + 12 \geqslant 0\,.
\end{equation} Moreover, \eqref{eq:aiuto1} reads
\begin{equation}\label{eq:aiuto1b}
n^2 d - 8nd - 2ne + 8(p- q) + 4(d - e) + \epsilon (\epsilon d - 2 nd + 2 e) \geqslant 0\,.
\end{equation} The difference between the
left hand side in \eqref{eq:aiuto2} and that of \eqref{eq:aiuto1b} is
$$2nd + ne - 6(p - q) - (d - e) - \epsilon (\epsilon d - 2 nd + 2 e)+12\,,$$ which is non--negative as soon as
$$n \geqslant \frac{6(p-q) + d (1+\epsilon) + e (2 \epsilon -1)-12}{e + 2d(1+\epsilon)}\,.$$Assuming (a), this motivates the definition of
$n_2$ in \eqref{eq:nb} and proves (b).
\eproof
\begin{proof}[Proof of Theorem \ref{thm:upper-bound}] The integer $n_0$ in \eqref{eq:n0} satisfies both (a) and (b) in Lemma \ref{lem:inequality}. Hence \eqref{eq: bound} and \eqref{eq: bound-1} hold, and we can conclude by Proposition \ref{prop:proof-thm}.
\end{proof}
\section{Genera of curves on smooth surfaces in $\PP^3$}\label{S:surfP3} Here we focus on the case where $S$ is a smooth surface of degree
$d \geqslant 4$ in $\PP^3$. In \cite{CFZ} we considered the case of a very general $S\in |\mathcal O_{\PP^ 3}(d)|$; here we drop this assumption, and simply assume $S$ smooth and $d\geqslant 4$ (the case $d<4$ being trivial for our considerations, because then $S$ carries curves of any genus). As a direct consequence of Theorem \ref{thm:upper-bound}, we have:
\bcor\label{cor: upper-bound} For any integer $d \geqslant 4$ there exists an integer $c_d$ such that, for any smooth surface $S$ in $\PP^3$ of degree $d$
and any integer $g \geqslant c_d$, $S$ carries
a reduced, irreducible nodal curve of geometric genus $g$, whose nodes can be prescribed generically on $S$.
\ecor
One can give an effective upper bound for $c_d$. We keep here the notation of Section \ref {S:surf}. Letting $L=\mathcal O_S(1)$
we obtain $$e = d (d-4),\quad q=q(S)= 0, \;\;\; {\rm and} \;\;\; p =p_g(S)= \frac{1}{6}(d-1)(d-2)(d-3)\,.$$ By Theorem \ref{thm:upper-bound}
one has
\begin{equation}\label{eq:phi-1}
c_d \leqslant \varphi (d, d(d-4), n_0)\,,
\end{equation} cf.\ (\ref{eq:phi}).
Thus, we are left to compute $n_0$ as in \eqref{eq:n0}. Since, by Serre duality, $n_4=d-3$, this amounts to computing $n_1$, $n_2$, and $n_3$ as in \eqref{eq:na}, \eqref{eq:nb}, and \eqref{eq:nbb}.
From \eqref{eq:Deltae}
we get $$\Delta({\epsilon}) = d \left( - \frac{1}{3} d^3 + 12 d^2 - \frac{1}{3} (104 - 24 \epsilon) d + 8\right)\,.$$
The polynomial $\Delta({\epsilon})/d$ has three positive roots
$$d_1,d_2,d_3 \sim \begin{cases} 0.25, \;2.89, \; 32.86 &\quad\mbox{if}\quad \epsilon=0,\\ 0.36, \; 2, \;\;\;\;\;\; 33.64&\quad\mbox{if}\quad \epsilon=1\,.\end{cases}$$
Thus,
$\Delta(0) \geqslant 0$ for $4 \leqslant d \leqslant 32$ and $\Delta(0) \leqslant 0$ for $d \geqslant 33$, while $\Delta(1) \geqslant 0$ for $4 \leqslant d \leqslant 33$ and $\Delta(1) \leqslant 0$ for $d \geqslant 34$.
Now
\eqref{eq:na}, \eqref{eq:nb}, and (\ref{eq:nbb}) give, respectively,
$$n_1(0)=
\begin{cases} 2 & \,\,\,\mbox{if}\,\,\, d \geqslant 33\,,\\
\left\lceil d + \sqrt{\frac{\Delta(0)}{d^2}} \right\rceil & \,\,\,\mbox{if}\; 4 \leqslant d \leqslant 32\,,\end{cases}\quad\qquad n_1(1) =
\begin{cases} 2 & \,\,\,\mbox{if}\,\,\, d \geqslant 34\,,\\
\left\lceil d + 1 + \sqrt{\frac{\Delta(1)}{d^2}} \right\rceil & \,\,\,\mbox{if}\; 4 \leqslant d \leqslant 33\,,\end{cases}$$
$$n_2(0)=
\left\lceil d-5 + \frac{6 (d-3)}{d(d-2)} \right\rceil,\quad\qquad n_2(1) =
\left\lceil d-5 + \frac{9(d-2)}{d^2} \right\rceil\,,$$
\noindent and
\[
n_3(0)= 3 + \left \lfloor \sqrt {2d-6-(4/d)} \right \rfloor, \quad\qquad n_3(1)= 4 + \left \lfloor\sqrt {2d-2-(4/d)} \right \rfloor\,.
\]
\noindent In particular, for $d\gg 0$, one has $$n_1=2,\quad n_2\sim d-5, \quad n_3\sim\sqrt{d}, \quad\mbox{hence}\quad n_0=n_4=d-4\,.$$ So, by (\ref{eq:phi}) and (\ref{eq:phi-1}),
\[
c_d \leqslant \varphi (d, d(d-4), d-4) = \frac{d(d-5) (2d -9)}{2}\sim d^ 3\,.
\]
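As a numerical companion to the formulas above (a sketch of ours; the function names are not from the text, and the large-$d$ bound hard-codes $n_0 = d-4$ as in the display above):

```python
import math

def Delta(d, eps):
    # Delta(eps), as displayed above
    return d * (-(d**3) / 3 + 12 * d**2 - (104 - 24 * eps) * d / 3 + 8)

def n1(d, eps):
    if d >= (33 if eps == 0 else 34):
        return 2
    return math.ceil(d + eps + math.sqrt(Delta(d, eps)) / d)

def n2(d, eps):
    if eps == 0:
        return math.ceil(d - 5 + 6 * (d - 3) / (d * (d - 2)))
    return math.ceil(d - 5 + 9 * (d - 2) / d**2)

def n3(d, eps):
    # unifies the two displayed cases: eps = 0 or 1
    return (3 + eps) + math.floor(math.sqrt(2 * d - 6 + 4 * eps - 4 / d))

# For d large the text takes n0 = n4 = d - 4, giving
# c_d <= phi(d, d(d-4), d-4) = d (d-5) (2d-9) / 2:
def cd_bound(d):
    return d * (d - 5) * (2 * d - 9) // 2

print(n1(40, 0), n2(40, 0), n3(40, 0), cd_bound(40))
```

For $d=40$ this returns $n_1=2$, $n_2=36$, $n_3=11$, consistent with $n_1=2$, $n_2\sim d-5$, $n_3\sim\sqrt d$ above.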
\begin{rem}\label{rem:compareflam2}\normalfont{ Let ${\rm Gaps}(d)$ be the set of gaps for geometric genera of irreducible curves on $S\in |\mathcal O_{ \PP^3}(d)|$ very general. By \cite[Theorem 2.4] {CFZ}, one has $${\rm Gaps}(4) = \emptyset, \;\; {\rm Gaps}(5) = \{0,1,2\},\quad\mbox{and}\quad {\rm Gaps}(d) \subset \left[0, \; \frac{d(d-1)(5d-19)}{6} -1\right]\;\; \mbox{for}\;\; d \geqslant 6\,.$$
This is compatible with the results of the present section. A more refined analysis based on \cite[Remark 2.5] {CFZ}, shows that the maximum $G_d$ of ${\rm Gaps}(d)$ goes
like $G_d=O(d^ {\frac 83})$. It is an open problem to see if this is sharp.
}
\end{rem}
\subsection{Absence of absolute gaps for curves on smooth surfaces in $\PP^3$} We say that an integer $g$ is a \emph{$d$--absolute gap} if there is no irreducible curve with geometric genus $g$ on any smooth surface of degree $d$. We show here that there is no absolute gap at all.
\begin{thm}\label{thm:abs-gaps} For any positive integer $d$ and for any non-negative integer $g$, there is a smooth surface $S\subset \PP^ 3$ of degree $d$ and an irreducible, nodal curve $C$ on $S$ with geometric genus $g$.\end{thm}
\begin{proof} We may assume $d \geqslant 5$, otherwise the result is well known (cf. e.g. \cite[Prop.\;1.2 and Cor.\;2.2]{CFZ}).
We set
\be\label{4000} \ell_{d,n} := l (d, d(d-4), n) =
\begin{cases}
\frac {n (n^ 2+6n+11)}6 &\text {if}\,\,\, n<d\,, \\[1mm]
\frac {d\big ( 3n^ 2-3n (d-4)+(d^2-6d+11)\big)}{6} -1 &\text {if}\,\,\, n\geqslant d\,,
\end{cases}
\ee and
\be\label{4000b} p_{d,n}:= p(d,d(d-4),n) =\frac {dn(d+n-4)}2+1\,.\ee
By \cite[Thm.\;2.4 and Rem.\;2.5] {CFZ}, for $S \subset \PP^3$ very general one has
$${\rm Gaps}(d) \subset \left[0,\; p_{d,n-1}-\ell_{d,n-1}-1 \right]= \left[0,\;
\frac{d(d-1)(5d-19)}{6} -1\right]\,\,\, \text{if}\,\,\, d>n\geqslant
\sqrt[3]{12d^ 2}\,.$$Plugging $n=d-1$ in this formula, we obtain the desired result
for all
\[
g\geqslant p_{d,d-2}-\ell_{d,d-2}\,.
\]Take now $n\leqslant d-2$. By \cite [Theorem 3.1]{CC0}, for a general surface $\Sigma\subset \PP^ 3$ of degree $n$
with $4\leqslant n\leqslant d-2$ and for any $g\in [p_{n,d}-
\ell_{n,d}, p_{n,d}]$ there is a reduced, irreducible component
$\mathcal V$ of the Severi variety of complete intersections of $\Sigma$ with surfaces of degree $d$ having
$\delta= p_{n,d}-g$ nodes as the only singularities.
Notice that the union of integers in the non--gap intervals
$$J_{d-1} (n) = [p_{n,d-1} - \ell_{n,d-1}, p_{n,d-1}] \;\; {\rm and} \;\; J_{d} (n) = [p_{n,d} - \ell_{n,d}, p_{n,d}]$$ is an integer interval
for $n \leqslant d-2$. To see this, it suffices to observe that
$$p_{n,d} - \ell_{n,d} \leqslant p_{n,d-1} + 1 < p_{n,d}\,.$$
The inequality on the right is trivial. To show the other inequality
$$\ell_{n,d} \geqslant p_{n,d} - p_{n, d-1} -1\,,$$ using \eqref{4000}, \eqref{4000b}, and the fact that $ d \geqslant n+2 > n$, we can rewrite it as
$$3 d (d - n +2) + (n^2 - 9 n + 26) \geqslant 0\,,$$which holds if $d \geqslant n+2$.
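The overlap of the non--gap intervals $J_{d-1}(n)$ and $J_d(n)$ claimed above can be checked numerically (a sketch of ours, using \eqref{4000} and \eqref{4000b} in the case $d \geqslant n$):

```python
def p(a, b):
    # (4000b): p_{a,b} = a b (a + b - 4)/2 + 1 (always an integer)
    return a * b * (a + b - 4) // 2 + 1

def ell(a, b):
    # (4000), case b >= a; the product is always divisible by 6
    return a * (3*b*b - 3*b*(a - 4) + (a*a - 6*a + 11)) // 6 - 1

# Check ell_{n,d} >= p_{n,d} - p_{n,d-1} - 1, equivalently
# 3d(d - n + 2) + (n^2 - 9n + 26) >= 0, for 4 <= n <= d - 2:
for d in range(6, 60):
    for n in range(4, d - 1):
        assert ell(n, d) >= p(n, d) - p(n, d - 1) - 1
        assert 3*d*(d - n + 2) + (n*n - 9*n + 26) >= 0
        # hence the two non-gap intervals overlap:
        assert p(n, d) - ell(n, d) <= p(n, d - 1) + 1 < p(n, d)
```

Note also that $n^2 - 9n + 26$ has negative discriminant, so the displayed inequality in fact holds for all $d \geqslant n+2$ without exception.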
Any curve $C\in \mathcal V$ is cut out on $\Sigma$ by a surface $S$ of degree $d$. We claim that $S$ can be taken to be smooth. Since the linear system $|\mathcal I_{C/\PP^3}(d)|$ is base point free outside $C$, by Bertini's theorem $S$ can be chosen to be smooth off $C$. Suppose $S$ is singular at a point $p\in C$. Since $C$ has $\delta$ nodes and no other singularities, and it is the complete intersection of $S$ and $\Sigma$, then $p$ is a node of $C$. But the general surface in
$|\mathcal I_{C/\PP^3}(d)|$ is non--singular at the nodes of $C$, because $|\mathcal I_{C/\PP^3}(d)|$ contains all surfaces of the form $\Sigma+\Phi$, where $\Phi$ is a general surface of degree $d-n$ (so it does not contain the nodes of $C$), and $\Sigma$ is smooth (thus $\Sigma+\Phi$ is smooth at the nodes of $C$).
In this way we find nodal curves of any geometric genus
$g\geqslant p_{4,d}-\ell_{4,d}=0$ on smooth surfaces of degree $d$, proving the assertion.
\end{proof}
Let us conclude with the following conjecture.
\medskip
\noindent {\bf Conjecture.} {\em For any smooth, rational variety $X$
of dimension $n+1$, any very ample line bundle $L$ on $X$, any $s\in\{1,\ldots, n-1\}$, and any integer $g\geqslant 0$ there is a smooth hypersurface $D\in|\cO_X(L)|$ carrying an $s$-dimensional subvariety $S\subset D$ of geometric genus $g$.
In particular, for any $n \geqslant 3$, $d \geqslant 1$, $s\in\{1,\ldots, n-1\}$, and $g\geqslant 0$ there is a smooth hypersurface $D\in |\mathcal O_{\PP^{n+1}}(d)|$
and a subvariety $S\subset D$ as before. }
\smallskip
One can ask whether the same holds, more generally, for any smooth Fano variety $X$.
\section{Introduction}
Potassium-40 is a common, natural isotope. It decays mainly by $\beta^-$ decay, less frequently by electron-capture to an excited state of argon-40 (\mbox{EC$^{*}$}), and very rarely by $\beta^+$ decay (see Fig.~\ref{Fig:K40_Decay_Scheme_Update_wColor.pdf}). It is a frequent contaminant in various particle detectors,
and a source of radioactive background in rare-event searches for dark matter~\cite{schumann_direct_2019,pradler_unverified_2013, ANTONELLO20191, adhikari2018background, angloher2022simulation, amare2021annual} and neutrinoless double-beta decay~\cite{EJIRI20191}. %
Several experiments looking for dark matter are taking draconian steps to purify their NaI detectors of K, and/or deploy vetos to tag the problematic low-energy radiation from {$^{40}$K}\ electron-capture which falls in the expected dark matter signal region~\cite{adhikari2018background,ANTONELLO20191,amare2021annual}. This veto method relies on identification of the high-energy $\gamma$-ray from the de-excitation of {$^{40}$Ar}\ following an \mbox{EC$^{*}$}\ decay. In addition, the long half-life (1.27~Ga) of this isotope makes it a useful geochronometer via the K/Ar and \mbox{$^{40}$Ar/$^{39}$Ar}\ dating techniques~\cite{begemann_call_2001,carter2020production,min2000test,renne2010joint}. Finally, the presence of all three modes of $\beta$ decay, some of which are extremely rare third-forbidden unique transitions, make this isotope of particular interest to nuclear structure theory~\cite{EJIRI20191,mougeot_improved_2018,MOUGEOT2019108884}.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figs/K40_Decay_Scheme_PRC.pdf}
\caption{\label{Fig:K40_Decay_Scheme_Update_wColor.pdf}{$^{40}$K}\ decay scheme, with branching ratios and half-life calculated from our determination of $\mbox{$I_\text{EC$^{0}$}$} / \mbox{$I_\text{EC*}$}$ (this work and~\cite{prl}) and from literature values for $\mbox{$T^-$}$ (partial half-life of the $\beta^-$ decay) and $\mbox{$T^*$}$ (partial half-life of \mbox{EC$^{*}$})~\cite{kossert2022activity} (also shown: transition energy~\cite{chen_nuclear_2017}, Q$_{\text{EC}^0}$~\cite{wang2021ame} and Q$^{-}$~\cite{AUDI2003337}).}
\end{figure}
Aside from the branches mentioned previously, an electron-capture decay directly to the ground state of {$^{40}$Ar}\ (\mbox{EC$^{0}$}) has also been predicted by some~\cite{engelkemeir_positron_1962,MOUGEOT2019108884,carter2020production,kossert2022activity}, ignored or disputed by others~\cite{min2000test,renne2010joint}, but never observed. This branch forms a particularly challenging background in rare-event searches as there is no high-energy $\gamma$-ray from de-excitation that could be used to tag the low-energy radiation from the electron capture. Such a background would evade the veto technique mentioned previously. In addition, this added background has been proposed~\cite{pradler_unverified_2013} as a way to constrain the dark-matter interpretation of the longstanding, but controversial, DAMA/LIBRA claim for discovery of dark matter~\cite{bernabei2018first}. From the standpoint of geochronology, the existence of this decay to ground state could mean that samples are up to tens of millions of years older than commonly believed~\cite{carter2020production}. An empirical frequency of this branch would also inform calculations in nuclear structure theory, including those for neutrinoless double-beta decay half-lives. The KDK (potassium decay) collaboration~\cite{di_stefano_kdk_2017} has carried out the first measurement of \mbox{EC$^{0}$}~\cite{prl}, using a novel detector configuration~\cite{stukel2021novel}, as detailed in what follows.
\section{Detector and analysis}
The fully characterized KDK experimental setup~\cite{stukel2021novel} consists of a {$^{40}$K}\ source close to a cm${}^2$, sensitive X-ray Silicon Drift Detector (SDD), both surrounded by an efficient tagger (the Modular Total Absorption Spectrometer; MTAS), as illustrated in Fig.~\ref{fig:MTAS_SDD_schematic}.
MTAS is a 1-tonne array of NaI scintillators with $> 97\%$ tagging efficiency for the 1.46~MeV $\gamma$-rays of interest (Table~7 in~\cite{stukel2021novel}).
Data with the {$^{40}$K}\ source were acquired over 44~days.
Following offline determination of coincidences between the SDD and MTAS for three nominal time windows of $(1, 2, 4)\ \mu \text{s}$, SDD pulses were fit and energy calibrations were performed. To avoid biases during the analysis, the anti-coincident SDD spectrum had been blinded from (0.88--1.4)~keV (silicon escape peak region) and (2.0--3.8)~keV (electron capture signal region) while cuts and analysis methods were established.
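Schematically, the offline coincidence sorting amounts to tagging each SDD event by whether any MTAS event falls within the nominal window of it. A minimal sketch (ours, not the KDK pipeline; the function name and times are illustrative):

```python
import bisect

def tag_coincidences(sdd_times, mtas_times, window):
    """Tag each SDD event as coincident if any MTAS event lies
    within +/- window of it; times are in the same units."""
    mtas_times = sorted(mtas_times)
    tags = []
    for t in sdd_times:
        i = bisect.bisect_left(mtas_times, t - window)
        coincident = i < len(mtas_times) and mtas_times[i] <= t + window
        tags.append(coincident)
    return tags

# Example with a 2 us window, times in microseconds:
print(tag_coincidences([10.0, 55.3, 120.7], [11.5, 300.0], 2.0))
# [True, False, False]
```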
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{figs/KDK_Experiment_w_Insert_No_Source.pdf}
\caption{\label{fig:MTAS_SDD_schematic}Schematic displaying the cross-section of MTAS, along with the SDD. The SDD housing is centered in MTAS.}
\end{figure*}
Over the course of the run, two types of SDD instabilities appeared: gain drops due to voltage supply failures that were reset by an operator, and noise bursts attributed to power fluctuations in the lab. Both can be identified in the SDD energy range below the blinded region; after this cut, $ 76\%$ of the livetime remains. Minor gain drifts in the SDD were corrected by tracking the coincidence X-ray lines.
Gain drifts of a few percent were also observed in MTAS. A Kolmogorov-Smirnov test comparing the arrival times of the open anti-coincident and coincident SDD events in the 2--5~keV region returned a p-value of 0.63. This implies both data sets are consistent with the same underlying time distribution, and is consistent with the tagging efficiency of MTAS not changing over the run. In addition, before opening the data set, five time regions with MTAS gain changes were identified. After opening the data, the full $\rho$ analysis (detailed in later sections) was carried out on each of these regions, and the values were compared. A fit to a single common value of $\rho$ yielded a $\chi^2$ of 3.6 for 4 degrees of freedom, i.e., $p = 0.45$, implying the data are consistent with a constant tagging efficiency. Consistent results were observed over all three coincidence windows.
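The comparison of arrival-time distributions above relies on a two-sample Kolmogorov-Smirnov test. As an illustration of the statistic involved (a sketch of ours; the analysis itself may well use a library routine), it is the maximum distance between the two empirical CDFs:

```python
import bisect

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the empirical CDFs of samples x and y."""
    xs, ys = sorted(x), sorted(y)
    def ecdf(s, t):
        # fraction of sample s that is <= t
        return bisect.bisect_right(s, t) / len(s)
    points = sorted(set(xs) | set(ys))
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in points)

# Identical samples give 0; disjoint ones give 1:
print(ks_two_sample([1, 2, 3], [1, 2, 3]))   # 0.0
print(ks_two_sample([1, 2], [3, 4]))         # 1.0
```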
\subsection{Physical phenomena visible in the SDD and MTAS spectra}
A coincidence spectrum of the {$^{40}$K}\ source with a 2~$\mu$s nominal window can be seen in Fig.~\ref{Fig:K_40_1us_Coincidence_Spectrum.pdf}, which bins events by SDD and MTAS energies, resulting in various bands and peaks. The foci are near (\keV{3}, \MeV{1.46}), corresponding to \mbox{EC$^{*}$}\ decays involving X-ray detection in the SDD and complete capture of the de-excitation $\gamma$-ray in MTAS. Additional features in Fig.~\ref{Fig:K_40_1us_Coincidence_Spectrum.pdf} involve partial energy depositions of \mbox{EC$^{*}$}\ decay products. The projection of this figure, with nonzero MTAS energy, onto the SDD energy, is the coincident SDD data spectrum in Fig.~\ref{Fig:K40_Anti_vs_Coinc_Histogram.pdf}.
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figs/K40_2us_Coincidence_Spectrum.pdf}
\caption{\label{Fig:K_40_1us_Coincidence_Spectrum.pdf}{$^{40}$K}\ coincidence spectrum between the SDD and MTAS. SDD and MTAS data obtained with the enriched {$^{40}$K}\ source using a nominal 2~$\mu$s coincident window is binned into a 250 $\times$ 250 grid over the displayed energy ranges. }
\end{figure}
In addition to the Ar signal, characteristic X-rays of Cl, K, and Ca are seen in the signal region (see Table~\ref{tab:fluo_counts} for the number of events, as well as Fig.~2 of~\cite{prl}).
K and Cl atoms are numerous enough in the source to be fluoresced by source $\beta^-$. Most $\beta^-$-fluoresced events are anti-coincident, with a small coincident contribution primarily from the $\beta^-$ which make it into MTAS.
Relative coincident and anti-coincident K counts are consistent with this description. Though Cl is also fluoresced by $\beta^-$, Ar X-rays (generally from \mbox{EC$^{*}$}) are energetic enough to fluoresce Cl, contributing additional coincident events.
\begin{table}[ht]
\centering
\caption{\label{tab:fluo_counts}Fluorescence counts observed in SDD spectra. Fluorescence may occur via various sources, described in the text, which affect the coincident sorting of the resulting X-ray detection. The quoted values are obtained using a \mus{2} window between the SDD and MTAS.}
\begin{ruledtabular}
\begin{tabular}{l D{,}{\times}{-1} D{,}{\times}{-1}}
Element & \multicolumn{1}{c}{Coincident counts} & \multicolumn{1}{c}{Anti-coincident counts} \\
\hline
Cl & 3.48(2) , 10^3 & 8.85(5) , 10^3 \\
K & 2.9(3), 10^2 & 5.89(7) , 10^3 \\
Ca & 1.0(5) , 10 & 1.36(5) , 10^3
\end{tabular}
\end{ruledtabular}
\end{table}
The Ar and Ca atoms in the source come from the slow decay of {$^{40}$K}. There are therefore 9--10 orders of magnitude fewer Ar and Ca atoms than K or Cl ones in the source. Based on the limited K fluorescence visible in KDK data, fluorescence of an Ar or Ca atom caused by something other than the atom itself is highly negligible.
Interestingly, there is a small contribution of characteristic Ca X-rays visible in the anti-coincident spectrum. Having ruled out external fluorescence, a product of the {$^{40}$K}\ $\rightarrow $ {$^{40}$Ca}\ decay itself must produce these events; the decay to Ca must occasionally involve \emph{self-fluorescence} of the daughter via the produced $\beta^-$. The exact low-order processes involved in such interactions are beyond the scope of this work, and we note that the Ca contribution has no effect on analysis of the Ar X-rays.
Outside of the SDD signal region, an additional presentation of source \mbox{EC$^{*}$}\ decays is contained at (\keV{1.2}, \MeV{1.46}). This silicon escape peak is formed in the event that a source X-ray fluoresces a detector Si atom prior to detection. The remaining energy of the source X-ray is equivalent to the difference between its initial energy and that of the fluoresced Si X-ray (\keV{1.7}), resulting in a detectable \keV{1.2} event.
Lastly, though it is possible for source $\gamma$-rays to deposit energy in the SDD, this is modelled to occur in $<0.5\%$ of cases. This contribution is considered in the systematic analysis described further, though the effect is negligible relative to the statistical error of our measurement. $\beta^+$ from the source would eventually provide a small contribution to the continuous SDD background, though over the KDK runtime any such events are negligible due to the minute branching ratio of this mode.
\subsection{Coincident and anti-coincident events}\label{sec:CoinUnco}
In the KDK dataset, we expect a total of $\sigma^*$ counts from \mbox{EC$^{*}$}\ decays, and $\sigma$ counts from \mbox{EC$^{0}$}\ decays present in our signal (Ar X-ray) peaks. These are expanded to
\begin{align*}
\sigma^* & = AT\ \text{\mbox{$I_\text{EC*}$} } \ P_K^* \omega_K \ \eta \ ( 1- \eta_\gamma ) \nonumber
\\
\sigma & = AT \ \text{\mbox{$I_\text{EC$^{0}$}$} } \ P_K \omega_K \ \eta ,
\end{align*}
where $AT$ are total source decays over the run duration. K-shell capture probabilities $P_K^* = 0.7564(4), P_K = 0.8908(7)$ using Betashape code V.2.2~\cite{mougeot2017betashape} differ for the two modes, though the fluorescence probability $\omega_K$ is the same. Both modes emit the same X-rays, whose detection probability in the SDD is $\eta$. The $\gamma$-ray accompanying the \mbox{EC$^{*}$}\ decay could deposit some energy in the SDD, with a small probability, thus shifting the event out of the signal region. From simulations, we estimate this probability to be $\eta_\gamma = 0.0048(48)$.
Various factors will sort $\sigma$ and $\sigma^*$ into coincident and anti-coincident events; the main ones are:
\begin{enumerate}
\item{The efficiency with which the $\gamma$s from \mbox{EC$^{*}$}\ are tagged by MTAS is not perfect. We have previously studied this parameter $\epsilon$ and determined it with high precision to be $\epsilon=0.9792 \pm 0.006$ at a 2~$\mu$s coincidence window~\cite{stukel2021novel} using data and Geant4~\cite{collaboration2003geant4} simulations. This will reduce the expected number of \mbox{EC$^{*}$}\ events that are properly tagged to $\sigma^* \epsilon$.}
\item{Conversely, the non-perfect tagging efficiency will lead to $\sigma^* (1 - \epsilon )$ EC$^{*}$s being untagged (false positives).}
\item{In addition, some \mbox{EC$^{0}$}\ events may be in spurious coincidence with the MTAS background; the probability that this occurs in the $\mathcal{O}(\mu \text{s})$ coincidence window is $\beta_M \bar{t} \ll 1$~\cite{stukel2021novel} (false negatives).}
\item{The remaining \mbox{EC$^{0}$}\ events are anti-coincident.}
\end{enumerate}
These subsets of signal counts are summarized below:
\begin{align*}
\Sigma^* & =
\overbrace{
\sigma^* \epsilon
}^\text{1. Observed \mbox{EC$^{*}$}}
+
\overbrace{
\sigma \beta_M \bar{t}
}^\text{3. False negatives} \nonumber
\\
\Sigma & =
\underbrace{
\sigma^* (1 - \epsilon )
}_\text{2. False positives}
+
\underbrace{
\sigma ,
}_\text{4. Observed \mbox{EC$^{0}$}}
\end{align*}
where $\Sigma^*$ are expected coincident counts, and $\Sigma $ are expected anti-coincident counts. We introduce a parameter for total signal counts, $\nu \equiv \sigma^* + \sigma $, and a useful term:
\begin{equation*}
\rho^\prime \equiv \frac{\sigma }{\sigma^* } = \rho \frac{P_K}{P_K^*} \frac{1}{(1 - \eta_\gamma )}.
\end{equation*}
Using these two new expressions, we obtain the final likelihood terms relating $\rho $ to expected coincident and anti-coincident counts:
\begin{align}
\Sigma^* & =\frac{\nu }{ 1 + \rho^\prime } \left( \epsilon + \rho^\prime \beta_M \bar{t} \right) \nonumber
\\
\Sigma & = \frac{\nu }{ 1 + \rho^\prime } \left( 1 - \epsilon + \rho^\prime \right) .
\label{eq:exp_coinc_uncoinc_counts}
\end{align}
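The mapping of Eq.~\eqref{eq:exp_coinc_uncoinc_counts} from $\rho$ to expected coincident and anti-coincident counts is straightforward to evaluate; the sketch below (ours) uses the central values quoted in the text for $P_K$, $P_K^*$, $\eta_\gamma$, and $\epsilon$:

```python
def rho_prime(rho, PK=0.8908, PK_star=0.7564, eta_gamma=0.0048):
    # rho' = rho * (P_K / P_K*) / (1 - eta_gamma), as defined above
    return rho * (PK / PK_star) / (1.0 - eta_gamma)

def expected_counts(nu, rho, eps=0.9792, beta_t=0.0):
    """Expected coincident (Sigma*) and anti-coincident (Sigma)
    signal counts; beta_t stands for beta_M * t_bar."""
    rp = rho_prime(rho)
    coincident = nu / (1.0 + rp) * (eps + rp * beta_t)
    anticoincident = nu / (1.0 + rp) * (1.0 - eps + rp)
    return coincident, anticoincident

# With beta_t = 0 the two pieces sum to nu, as they must:
c, a = expected_counts(nu=4.78e4, rho=0.0095)
```

With these illustrative inputs the split is of the same order as the counts reported later in Table~\ref{tab:Ar_counts_summary}.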
\subsection{Likelihood method}
SDD data are sorted by coincidence, and both subsets are fit simultaneously through minimization of the sum of the negative-log likelihoods:
\begin{equation*}
-\ln \mathcal{L} = - (\ln \mathcal{L}_{coin} + \ln \mathcal{L}_{anti})
\end{equation*}
Each of the two terms is a binned Poisson likelihood ratio~\cite{BAKER1984437}:
\begin{equation*}
- \ln \mathcal{L}_{j} =
\sum_i \left[ f(x_{i, j};\boldsymbol{\theta}) - n_{i, j} + n_{i, j} \ln \biggl( \frac{n_{i, j}}{ f(x_{i, j};\boldsymbol{\theta} ) } \biggr) \right]
\end{equation*}
Above, index $j$ represents either coincident or anti-coincident data, $n_{i, j}$ are total events in bin $i$ and $f(x_{i, j})$ are the model-predicted counts in that bin.
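A minimal implementation of this binned Poisson likelihood ratio (a sketch of ours, not the KDK fitting code; the $n_i \ln(n_i/f_i)$ term is taken as zero for empty bins):

```python
import math

def neg_log_likelihood(model_counts, observed):
    """Binned Poisson likelihood ratio (Baker & Cousins):
    -ln L = sum_i [ f_i - n_i + n_i ln(n_i / f_i) ]."""
    total = 0.0
    for f, n in zip(model_counts, observed):
        total += f - n
        if n > 0:
            total += n * math.log(n / f)
    return total

# The statistic is non-negative and vanishes for a perfect fit:
print(neg_log_likelihood([5.0, 3.0], [5, 3]))  # 0.0
```

Minimizing this quantity over the model parameters yields the maximum-likelihood estimators, and its value at the minimum is asymptotically chi-square distributed, which is what supplies the goodness-of-fit quoted below.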
In addition to providing estimators of the parameters and confidence intervals, this technique returns goodness-of-fit~\cite{BAKER1984437}. Some of the parameters in $\boldsymbol{\theta}$, like our main one $\rho = {\mbox{$I_\text{EC$^{0}$}$}} / {\mbox{$I_\text{EC*}$}}$, and like those pertaining to the shapes of the lines, are shared between the coincident and anti-coincident data, while others are not.
The spectra contain several fluorescent contributions (Cl, K, Ca). For each, we model the associated K$_{\alpha}$\ and K$_{\beta}$\ X-rays with Gaussian distributions, the means of which are fixed to known values~\cite{be_table_2010}. For each such K$_{\alpha}$\ + K$_{\beta}$\ pair, the free Gaussian width is shared across the two peaks, and is the same in both coincident and anti-coincident spectra. A parameter for total K$_{\alpha}$\ + K$_{\beta}$\ counts, not shared across the coincidence-sorted data, is left free. The ratio of intensities $I_{K_\alpha}/ I_{K_\beta}$ is generally fixed to values in~\cite{SCHONFELD1996527}.
The continuous background model, consisting of decaying exponential and flat components, has all associated parameters left free. These parameters are not shared across the coincident and anti-coincident data, as this background contribution has a different shape in each subset.
The Ar K$_{\alpha}$\ and K$_{\beta}$\ X-rays of interest are modelled in a similar manner as the fluorescence lines. In order for these components to directly inform $\rho$, we insert the expression for expected coincident and anti-coincident Ar counts of Eq.~\eqref{eq:exp_coinc_uncoinc_counts} directly into the likelihood. This introduces free parameters $\rho $ and total Ar counts, along with fixed terms including efficiencies, as described earlier in Sec.~\ref{sec:CoinUnco}.
We note that the result $\rho $ is stable to the choice of fixing or freeing the ratio of K$_{\alpha}$\ to K$_{\beta}$\ intensities in the Ar and fluorescent components. Modelling of the latter generally has a negligible effect on the result, which is informed only by the Ar lines. An additional test modelled the signal X-rays with a Voigt profile~\cite{wertheim1974determination}, which yields essentially identical results since it approaches the limiting Gaussian case.
Initial opening of the data led to a value for $\rho$ of $0.008 \stackrel{stat}{\pm} 0.002$ on the single chosen energy range before evaluation of systematic errors.
A thorough analysis of systematics leads to our reported value of $\rho =\mbox{$ \PEC / \PECStar = \ratio \stackrel{\text{stat}}{\pm} \ratiostat \stackrel{\text{sys}}{\pm} \ratiosys $}$.
An example of a fit is shown in Fig.~\ref{Fig:K40_Anti_vs_Coinc_Histogram.pdf}.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figs/coinc_uncoinc_June23_2us.pdf}
\caption{\label{Fig:K40_Anti_vs_Coinc_Histogram.pdf}SDD coincidence and anti-coincidence spectra. Results of simultaneous fit to coincident (top) and anti-coincident (bottom) SDD spectra at a \mus{2} coincidence window. Signal counts are shown in green. Various fluorescence peaks and an exponential background model are included. The total minimization has an associated goodness-of-fit of $p=0.4$. }
\end{figure}
\subsection{Systematics}
The systematic errors pertinent to the experiment fall into two categories: fit characteristics, and physical limitations. The choice of fit range (canonical fit range is 1.5--5.0~keV) pertaining to the mathematical background model is the dominant source of systematic error. The dominant physical error arises from the imperfect $\gamma$-ray tagging efficiency of MTAS. Table~\ref{tab:syst_errors} contains a summary of each source of error and its effect.
\begin{table}[ht]
\centering
\caption{\label{tab:syst_errors}Systematic errors on $\rho$. All sources of error, described in the text, are smaller than the statistical (\ratiostat).}
\begin{ruledtabular}
\begin{tabular}{l D{,}{\times}{-1}}
Source & \multicolumn{1}{c}{Systematic Error} \\
\hline
Fit range & 1 , 10^{-3} \\
MTAS $\gamma$-ray-tagging efficiency & 5 , 10^{-4} \\
Binning & 1 , 10^{-4} \\
SDD $\gamma$-ray-tagging efficiency & 4 , 10^{-5} \\
K-shell capture probabilities & 8 , 10^{-6} \\
Expected MTAS background counts & 3 , 10^{-6} \\
\end{tabular}
\end{ruledtabular}
\end{table}
To account for possible covariances between parameters which contribute to the overall systematic error, their effect is tested simultaneously by randomly drawing their values prior to each likelihood fit. Fit characteristics are drawn from a uniform distribution over the considered range. Physical parameters are drawn from a Gaussian whose width corresponds to the parameter's known uncertainty. This process is repeated 10,000 times to obtain a distribution of $\rho$, whose mean corresponds to our final measurement and whose width gives the systematic error, as shown in Fig.~\ref{fig:syst_check_rho_rho_error}.
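The drawing procedure can be sketched as follows (ours; the fit-range intervals and Gaussian widths are illustrative stand-ins, and `fit` represents the full likelihood minimization):

```python
import random

def systematic_scan(fit, n_trials=10_000, seed=0):
    """Repeat the likelihood fit with systematic inputs drawn at random:
    fit-range edges uniform over the ranges considered, physical inputs
    Gaussian around their central values (widths here are illustrative)."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        lo = rng.uniform(1.3, 1.7)        # keV, lower edge of fit range
        hi = rng.uniform(4.5, 5.5)        # keV, upper edge of fit range
        eps = rng.gauss(0.9792, 0.006)    # MTAS tagging efficiency
        results.append(fit(lo, hi, eps))
    mean = sum(results) / len(results)
    var = sum((r - mean) ** 2 for r in results) / (len(results) - 1)
    return mean, var ** 0.5               # central value, systematic error
```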
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figs/rho_rho_error_2D_fit_2us_10k.pdf}
\caption{ Distributions of $\rho $ and its statistical error from systematic checks.
The value of $\rho$ and its associated statistical error $\sigma_\rho^\text{stat}$ obtained from 10,000 systematic-varying fits are displayed. The width of the $\rho $ distribution is equivalent to the systematic error. Contour lines of the fit function correspond to the colour bar on the right.}
\label{fig:syst_check_rho_rho_error}
\end{figure}
Though the systematic check described above uses a background model consisting of exponential and flat components, other models have been explored in depth. We tested polynomials up to the third degree over various ranges and found them either too simple, prone to unphysical minima and maxima in the data, or carrying extraneous degrees of freedom relative to the canonical (exponential) case.
Lastly, our analysis was performed at 3 coincidence windows: $(1, 2, $ and $4)~\mu \text{s}$, for which we find consistent results of $\rho = (0.0091, 0.0095, 0.0095)$ respectively, all with the same error $\stackrel{stat}{\pm} 0.0022 \stackrel{sys}{\pm} 0.0010$.
These values are consistent with the value initially found upon opening the data and reported in the previous section.
Ar counts obtained in the systematic check at \mus{2} are summarized in Table~\ref{tab:Ar_counts_summary}.
\begin{table}[ht]
\centering
\caption{\label{tab:Ar_counts_summary}{$^{40}$K}\ $\rightarrow$ {$^{40}$Ar}\ electron capture decays visible in \mus{2} KDK data. The total $\sim 3$~keV signal counts pertaining to this transition are sorted by coincidence, with ground-state decays contributing primarily to the anti-coincident subset. Notations in parentheses refer to Sec.~\ref{sec:CoinUnco}.}
\begin{ruledtabular}
\begin{tabular}{l D{,}{\times}{-1}}
Component & \multicolumn{1}{c}{Visible counts} \\
\hline
Total Ar ($\Sigma^* + \Sigma$) & 4.78 , 10^4 \\
Coincident Ar ($\Sigma^*$) & 4.63 , 10^4 \\
Anti-coincident Ar ($\Sigma$) & 1.50 , 10^3 \\
\mbox{EC$^{0}$}\ decay ($\sigma$) & 5.00 , 10^2
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Implications}
The following re-evaluation of the {$^{40}$K}\ decay scheme using our measurement affects a variety of fields, including nuclear structure theory and geochronology. This is discussed below, illustrating impacts on neutrinoless double-beta decay of ${}^{48}$Ca and the thermal history of the Acapulco meteorite.
\subsection{Construction of the decay scheme of \texorpdfstring{{$^{40}$K}}{40K}}
Constructing the full decay scheme for {$^{40}$K}\ requires 4 parameters. Two of these are partial decay constants for the $\beta^-$ and \mbox{EC$^{*}$}\ branches, $\lambda^- = 0.4904 \pm 0.0019$~Ga$^{-1}$ and $\lambda^* = 0.05646 \pm 0.00016$~Ga$^{-1}$, taken from the most recent data evaluation (Sec.~5.2 of~\cite{kossert2022activity}). These absolute measurements are independent of \mbox{EC$^{0}$}, though they depend on factors like the precise {$^{40}$K}\ content of the source and the efficiency of the detectors. Generally speaking, the other two parameters used are an experimental determination of \mbox{$I_{\beta^+}$} /\mbox{$I_{\beta^-}$} ~\cite{engelkemeir_positron_1962} and a theoretical value for \mbox{$I_\text{EC$^{0}$}$} /\mbox{$I_{\beta^+}$}~\cite{mougeot_improved_2018}. The former directly leads to $\lambda^+ = \frac{\mbox{$I_{\beta^+}$}}{\mbox{$I_{\beta^-}$}} \lambda^-$, while the latter then provides $\lambda^0 = \frac{\mbox{$I_\text{EC$^{0}$}$}}{\mbox{$I_{\beta^+}$}} \lambda^+$. From this complete set of partial decay constants, the total decay constant can be obtained: $\lambda = \sum_i \lambda_i$. Partial and total half-lives are then determined ($T_i = \frac{\ln 2}{ \lambda_i}$). Lastly, branching ratios ($P_i = \frac{\lambda_i}{\sum_j \lambda_j}$) are obtained.
Our novel measurement provides an additional, experimental, value: \mbox{$I_\text{EC$^{0}$}$}/\mbox{$I_\text{EC*}$}. In conjunction with $\lambda^*$, it leads directly to $\lambda^0 = \frac{\mbox{$I_\text{EC$^{0}$}$}}{\mbox{$I_\text{EC*}$}} \lambda^*$.
The value of $\lambda^0$ is the same, within uncertainties, for various commonly-used sets of decay constants~\cite{kossert2022activity,min2000test,be_table_2010}.
We keep $\lambda^-$, as before. To complete the set of 4 parameters with $\lambda^+$, we need to choose between \mbox{$I_{\beta^+}$} /\mbox{$I_{\beta^-}$}\ and \mbox{$I_\text{EC$^{0}$}$}/\mbox{$I_{\beta^+}$}. This choice only affects the decay scheme at the level of the small $\beta^+$ branch which varies by a factor of 2, as Table~\ref{tab:New_Decay_Scheme_Measurements} shows. This discrepancy advocates for further experimental and theoretical work on the $\beta^+$ branch.
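The arithmetic of this construction is simple enough to sketch (ours; central values only, uncertainties omitted, and the $\beta^+/\beta^-$ ratio corresponds to the experimental entry of the table below):

```python
import math

# Partial decay constants (per Ga) from the cited evaluation, and the
# KDK central value rho = I_EC0 / I_EC* (numbers illustrative):
lam_minus = 0.4904          # beta-
lam_star = 0.05646          # EC*
rho = 0.0095                # this work
beta_ratio = 1.116e-5       # I_beta+ / I_beta- (experimental)

lam_ec0 = rho * lam_star
lam_plus = beta_ratio * lam_minus
lam_total = lam_minus + lam_star + lam_ec0 + lam_plus

T_half = math.log(2) / lam_total        # total half-life (Ga)
branching = {name: lam / lam_total * 100 for name, lam in
             [('beta-', lam_minus), ('EC*', lam_star),
              ('EC0', lam_ec0), ('beta+', lam_plus)]}
print(f"T1/2 = {T_half:.3f} Ga")        # ~1.266 Ga
```

These central values reproduce the branching ratios and total half-life listed in the table below to the quoted precision.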
\begin{table}[ht]
\centering
\caption{\label{tab:New_Decay_Scheme_Measurements}Re-evaluation of the {$^{40}$K}\ decay scheme. Branching ratios $I$ and total half life $T_{1/2}$ are calculated from our measurement of $\rho=\text{\mbox{$I_{EC}$} / \mbox{$I_{EC^*}$}}$, evaluation of measured partial \mbox{EC$^{*}$}\ and $\beta^-$ half lives~\cite{kossert2022activity}, and either the measured relative $\beta^+/\beta^-$ feeding~\cite{engelkemeir_positron_1962} or the predicted value of \mbox{$I_\text{EC$^{0}$}$}/\mbox{$I_{\beta^+}$}~\cite{mougeot_improved_2018}. The choice only affects the $\beta^+$ branching.}
\begin{ruledtabular}
\begin{tabular}{l D{.}{.}{-1} l}
Quantity & \multicolumn{1}{c}{Value} & Uncertainty \\
\hline
\mbox{$I_\text{EC$^{0}$}$}\ (\%) & \IECBranchingPer & $\pm \IECBranchingStatPer(\text{stat}) \pm \IECBranchingSystPer(\text{syst}) $ \\
\mbox{$I_\text{EC*}$}\ (\%) & 10.31 & $\pm 0.04 $ \\
\mbox{$I_{\beta^-}$}\ (\%) & 89.59 & $\pm 0.05 $ \\
\mbox{$I_{\beta^+}$}\ (\%) (expt) & 0.00100 & $\pm 0.00013 $ \\
\mbox{$I_{\beta^+}$}\ (\%) (theory) & 0.00045 & $\pm 0.00012 $ \\
$T_{1/2}$ (Ga) & 1.266 & $\pm 0.004 $
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{\label{sec:Theory}Nuclear shell-model calculations}
We obtain a theoretical estimate for the third-forbidden unique \mbox{EC$^{0}$}\ branching ratio \mbox{$I_\text{EC$^{0}$}$}, complementary to the main experimental result, as described below. Moreover, the rarity of this decay informs the extent to which such forbidden modes are suppressed, which is relevant to calculations of neutrinoless double-beta decay half-lives, including that of $^{48}$Ca. This commonly overlooked suppression, quantified as quenching of the weak axial-vector coupling, can significantly increase calculated $0\nu\beta\beta $ half-lives.
We calculate a value of $\mbox{$I_\text{EC$^{0}$}$} = 0.058 \pm 0.022$ using the Behrens-B\"uhring formalism (\cite{behrens1982} for the full theory and~\cite{haaranen2017} for a streamlined version) with the nuclear matrix elements calculated in the shell-model framework using the code NuShellX@MSU~\cite{nushellx} with the Hamiltonian \emph{sdpfk}~\cite{sdpfk}. It is consistent within uncertainties with a value obtained prior to the experiment. Our theoretical estimate is compared to other predictions along with the KDK measurement of this work in Fig.~\ref{Fig:BREC_History_W_KDK_Sensitivity.png}.
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figs/KDK_Theory_Comparison_For_Paper.pdf}
\caption{\label{Fig:BREC_History_W_KDK_Sensitivity.png}Measured ground-state electron-capture branching ratio compared to predictions from Engelkemeir (1962) \cite{engelkemeir_positron_1962}, $\log ft$ (1999)~\cite{gove_log-f_1971,noauthor_logft_nodate}, Mougeot (2019)~\cite{mougeot_improved_2018}, Carter (2020)~\cite{carter2020production}, Kossert (2022)~\cite{kossert2022activity}~and this work (Sec.~\ref{sec:Theory}).}
\end{figure}
Using this Hamiltonian, the half-lives of the three decay branches, $\beta^-$, \mbox{EC$^{*}$}, and \mbox{EC$^{0}$}, can be calculated. The corresponding decay amplitudes are proportional to the weak axial-vector coupling $g_{\rm A}$, the value of which is known to be quenched for a wide range of nuclear masses~\cite{Suhonen2017}. Here its effective value can be determined by comparing the computed and experimental half-lives, giving $g_{\rm A}^{\rm eff}=0.34$ for the first-forbidden unique transition \mbox{EC$^{*}$}, and for the two third-forbidden unique transitions $g_{\rm A}^{\rm eff}=0.43$ for the $\beta^-$ branch and $g_{\rm A}^{\rm eff}=0.53$ for the \mbox{EC$^{0}$}\ branch. These values of $g_{\rm A}^{\rm eff}$ are well in line with the values $g_{\rm A}^{\rm eff}=0.43$ -- $0.66$ obtained in the mass range $A=74$ -- $126$ for the $2^{-}\leftrightarrow 0^{+}$ first-forbidden unique $\beta$ and EC transitions in the framework of the proton-neutron quasiparticle random-phase approximation~\cite{EJIRI201427}. These results suggest that the forbidden contributions to the nuclear matrix elements of $0\nu\beta\beta$\ decay are strongly suppressed, and that the resulting half-lives are much longer than expected based on the bare value $g_{\rm A}^{\rm bare}=1.27$ of the axial-vector coupling.
Using values in~\cite{Kortelainen_2004}, we expect the quenching of this axial-vector coupling strength to increase the neutrinoless double-beta decay half-life of $^{48}$Ca by a factor of $7^{+3}_{-2}$.
\subsection{Effect on geochronology}
The branched decay of {$^{40}$K}\ to {$^{40}$Ar}\ is the basis of K/Ar dating and of its variant, the \mbox{$^{40}$Ar/$^{39}$Ar}\ technique. The ubiquity of K and the 1.27~Ga half-life of {$^{40}$K}\ make these geochronometers some of the most useful and versatile isotopic methods available to date geological samples.
The age equation for the {$^{40}$K}/{$^{40}$Ar}\ isotope system is:
\begin{equation*}\label{Eqn:K_Ar_Age}
t_{\small\mbox{K/Ar}} = \frac{1}{\lambda_T} \ln\left[\frac{N_{^{40}\text{Ar}}}{N_{^{40}\text{K}}}\frac{\lambda_T}{\lambda_\text{Ar}} + 1\right],
\end{equation*}
where $t_{\small\mbox{K/Ar}}$ is the age of the sample, $N_x$ is the number of atoms of a given isotope in the sample, $\lambda_T$ is the total decay constant of {$^{40}$K}\ and $\lambda_\text{Ar}$ is the decay constant of {$^{40}$K}\ to {$^{40}$Ar}, including \mbox{EC$^{*}$}, \mbox{EC$^{0}$}\ and $\beta^+$ branches.
In the K/Ar technique, the ratio of {$^{40}$K}\ to {$^{40}$Ar}\ isotopes is measured, yielding an estimate of the age of the sample. The precision of this technique currently reaches 0.5\%~\cite{mcdougall2011calibration}.
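The age equation above inverts directly. As a self-consistency sketch (using the KDK-updated decay constants from Table~\ref{tab:Geo_Decay_Constants_main}), we can synthesize the $^{40}$Ar/$^{40}$K ratio that a sample of known age would carry and recover that age:

```python
import math

def k_ar_age(ratio_ar_k, lam_T, lam_Ar):
    """K/Ar age (Ga) from the 40Ar/40K atom ratio: t = (1/lam_T) ln(R lam_T/lam_Ar + 1)."""
    return (1.0 / lam_T) * math.log(ratio_ar_k * lam_T / lam_Ar + 1.0)

# Decay constants (Ga^-1): KDK with Kossert et al. values
lam_T, lam_Ar = 0.5474, 0.0570

# Round trip: synthesize the ratio for a 1 Ga sample, then invert it
t_true = 1.0
ratio = (lam_Ar / lam_T) * (math.exp(lam_T * t_true) - 1.0)
print(k_ar_age(ratio, lam_T, lam_Ar))  # recovers 1.0 Ga
```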
In the \mbox{$^{40}$Ar/$^{39}$Ar}\ technique, neutron activation is used to transmute $^{39}$K to $^{39}$Ar. This allows mass-spectrometric measurements of the parent proxy ($^{39}$Ar) and the daughter ({$^{40}$Ar}) on the same sample. The efficiency of the activation is hard to estimate, therefore the activation is generally carried out in parallel on a reference sample of known age, providing the age of the sample relative to the reference. The age of the reference must be established independently of the \mbox{$^{40}$Ar/$^{39}$Ar}\ technique. Commonly used references include Fish Canyon ($t_m = 28.201 \pm 0.023$~Ma for a 68\% CL)~\cite{kuiper2008synchronizing}. The age of the sample is then:
\begin{equation*}\label{Eqn:Ar_Ar_Age}
t_{\small\mbox{$^{40}$Ar/$^{39}$Ar}} = \frac{1}{\lambda_T} \ln\left[\frac{N_{^{40}\text{Ar}}}{N_{^{39}\text{Ar}}}J+1 \right],
\end{equation*}
where the irradiation parameter ($J$) is given by:
\begin{equation*}\label{Eqn:Monitor_Flux}
J = \frac{e^{\lambda_T t_m}-1}{M_{^{40}\text{Ar}}/M_{^{39}\text{Ar}}},
\end{equation*}
where $M_{^{40}\text{Ar}}/M_{^{39}\text{Ar}}$ is the measured isotopic ratio in the monitor. Unlike K/Ar, the \mbox{$^{40}$Ar/$^{39}$Ar}\ technique no longer depends on the argon branching ratio. This technique currently reaches precisions of 0.1\%~\cite{niespolo2017intercalibration}.
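The two relations above can be chained in a short sketch. The monitor ratio below is a hypothetical number chosen for illustration; note that, as stated, only $\lambda_T$ enters, not the argon branching ratio. A sample with the same $^{40}$Ar/$^{39}$Ar ratio as the monitor recovers the monitor age exactly:

```python
import math

def j_parameter(t_monitor, lam_T, ratio_40_39_monitor):
    """Irradiation parameter J = (exp(lam_T t_m) - 1) / (M40/M39)."""
    return (math.exp(lam_T * t_monitor) - 1.0) / ratio_40_39_monitor

def ar_ar_age(ratio_40_39_sample, J, lam_T):
    """40Ar/39Ar age (Ga): t = (1/lam_T) ln(N40/N39 * J + 1)."""
    return (1.0 / lam_T) * math.log(ratio_40_39_sample * J + 1.0)

lam_T = 0.5474          # total 40K decay constant (Ga^-1)
t_fc  = 0.028201        # Fish Canyon monitor age (Ga)

M_monitor = 0.10        # hypothetical measured 40Ar/39Ar ratio in the monitor
J = j_parameter(t_fc, lam_T, M_monitor)
print(ar_ar_age(M_monitor, J, lam_T))  # equal ratios -> the monitor age
```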
Certain sets of decay constants widely used in the geological community question the existence of, or ignore, the \mbox{EC$^{0}$}\ decay branch~\cite{min2000test,renne2010joint}. Using data in Table~\ref{tab:Geo_Decay_Constants_main}, we illustrate how adding the \mbox{EC$^{0}$}\ branch to various sets of decay constants~\cite{min2000test,kossert2022activity} affects K/Ar dates throughout geologic time in Fig.~\hl{4a} of~\cite{prl}.
In Fig.~\hl{4b} of~\cite{prl} we display the impact of this reevaluation on the commonly-used Fish Canyon sanidine standard, when the K/Ar age is recalculated from the {$^{40}$K} /{$^{40}$Ar}\ ratio~\cite{jourdan2007age} and various decay constants~\cite{min2000test,kossert2022activity,steiger1977subcommission}.
Using these updated Fish Canyon ages with the same set of decay constants, we recalculate the \mbox{$^{40}$Ar/$^{39}$Ar}\ ages of the Acapulco meteorite~\cite{renne200040ar} in Fig.~\hl{4c} of~\cite{prl}.
Also shown is the Pb/Pb age for phosphates from Acapulco~\cite{gopel2010thermal} ($4.555 \pm 0.005$~Ga), updated to include uncertainties in the uranium isotopic composition~\cite{GOLDMANN2015145} and in the uranium decay constants~\cite{jaffey1971precision}.
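The size of the K/Ar age shift induced by adding the \mbox{EC$^{0}$}\ branch can be sketched from the table values below. Here we hold the measured $^{40}$Ar/$^{40}$K ratio fixed (synthesized for a nominal 1~Ga sample) and recompute the age with the Min et al.\ constants before and after the KDK update; the resulting shift is sub-percent and negative:

```python
import math

def k_ar_age(R, lam_T, lam_Ar):
    # t = (1/lam_T) ln(R * lam_T/lam_Ar + 1), with R the 40Ar/40K ratio
    return (1.0 / lam_T) * math.log(R * lam_T / lam_Ar + 1.0)

# (lam_T, lam_Ar) in Ga^-1, from the table: Min et al. and KDK-updated Min
min_old = (0.546, 0.0580)
min_kdk = (0.547, 0.0585)

# Fixed measured ratio, synthesized so that the old constants give 1 Ga
R = (min_old[1] / min_old[0]) * (math.exp(min_old[0] * 1.0) - 1.0)

t_old = k_ar_age(R, *min_old)
t_new = k_ar_age(R, *min_kdk)
print(f"age shift: {100 * (t_new - t_old) / t_old:.2f} %")
```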
\begin{table*}[ht]
\centering
\caption{\label{tab:Geo_Decay_Constants_main}Effect of adding $\mbox{EC$^{0}$}$ to a commonly used set of decay constants~\cite{min2000test} and the latest evaluation~\cite{kossert2022activity}. $\lambda_T$ is the total decay constant of {$^{40}$K} , and $\lambda_\text{Ar}$ is its partial decay constant to {$^{40}$Ar} .}
\begin{ruledtabular}
\begin{tabular}{l D{,}{\pm}{-1} D{,}{\pm}{-1}}
& \lambda_T , 1\sigma \text{ (Ga}^{-1}\text{)} & \lambda_\text{Ar} , 1\sigma \text{ (Ga}^{-1}\text{)} \\\hline
Min et al.~\cite{min2000test} & 0.546 , 0.005 & 0.0580 , 0.0007 \\
KDK with Min et al.~\cite{min2000test} & 0.547 , 0.005 & 0.0585 , 0.0007 \\
Kossert et al.~\cite{kossert2022activity} & 0.5468 , 0.0019 & 0.0564 , 0.00016 \\
KDK with Kossert et al.~\cite{kossert2022activity} & 0.5474 , 0.0019 & 0.0570 , 0.00021 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
Fig.~\ref{fig:ThermalHistory_main} shows the thermal history of the Acapulco meteorite using the various \mbox{$^{40}$Ar/$^{39}$Ar}\ ages, along with those from Pb/Pb and Sm/Nd~\cite{prinzhofer1992samarium}, with closure temperatures, and updated uncertainties for Sm/Nd, from~\cite{renne200040ar}.
Pb/Pb and Sm/Nd ages are statistically indistinguishable from \mbox{$^{40}$Ar/$^{39}$Ar}\ ages calculated using recent decay constants~\cite{min2000test,kossert2022activity} updated by our measurement of \mbox{EC$^{0}$}\ ($\chi^2$ fits to a common age respectively yield $p=0.37$ for updated Kossert~\cite{kossert2022activity} and $p=0.51$ for updated Min~\cite{min2000test}), consistent with rapid cooling.
The KDK measurement itself tends to decrease \mbox{$^{40}$Ar/$^{39}$Ar}\ ages and therefore reduce apparent cooling rates. The systematic nature of this change may affect studies of past heat flow in Earth's crust.
\begin{figure}[ht]
\centering
\includegraphics[width = 1\linewidth]{figs/Geo_thermal_history_28092022.pdf}
\caption{Effect of KDK on the thermal history of the Acapulco meteorite. \mbox{$^{40}$Ar/$^{39}$Ar}\ ages of the Acapulco meteorite~\cite{renne200040ar} calculated for various sets of decay constants~\cite{steiger1977subcommission,min2000test,kossert2022activity} (Fig.~\hl{4c} in \cite{prl}) are plotted along with the corresponding estimated closure temperature of $300 \pm 25$~$^\circ$C (68\% CL; temperatures shifted for visibility). Age-temperature data obtained using Pb/Pb (updated from~\cite{gopel2010thermal}) and Sm/Nd~\cite{prinzhofer1992samarium} are also shown.
\label{fig:ThermalHistory_main}}
\end{figure}
\section{Conclusion}
As detailed in this document and elsewhere~\cite{prl}, the KDK collaboration has successfully measured the branching ratio of the elusive electron-capture decay of {$^{40}$K}\ to the ground state of {$^{40}$Ar}. The measured branching ratio is in agreement with the theoretical value calculated in this work; it is a factor of two smaller than the traditionally used value, and a factor of five more precise. The improved precision will allow rare-event searches to better understand their low-energy backgrounds. Additionally, this measurement represents the first experimental verification of a third-forbidden unique transition, informing nuclear-structure theory. Finally, our measurement affects K-decay-based geochronological age estimates by up to a percent.
\begin{acknowledgments}
Xavier Mougeot of LNHB drew our attention to his latest evaluation of the decay scheme of {$^{40}$K}.
Engineering support has been contributed by Miles Constable and Fabrice R\'eti\`ere of TRIUMF, as well as by Koby Dering through the NSERC/Queen’s MRS.
Funding in Canada has been provided by NSERC through SAPIN and SAP RTI grants, as well as by the Faculty of Arts and Science of Queen's University, and by the McDonald Institute.
Work was performed at Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy under Contract DE-AC05-00OR22725. Thermal deposition was conducted at the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility.
This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
US support has also been supplied by the Joint Institute for Nuclear Physics and Applications, and by NSF grant EAR-2102788.
This material is based upon work supported by the U.S. Department of Homeland Security under grant no. 2014-DN-077-ARI088-01. Disclaimer: The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.
This draft manuscript is distributed solely for purposes of scientific peer review. Its content is deliberative and predecisional, so it must not be disclosed or released by reviewers. Because the manuscript has not yet been approved for publication by the U.S. Geological Survey (USGS), it does not represent any official USGS finding or policy. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
J.C., L.E.M., and P.R.R. acknowledge support from NSF grant 2102788.
\end{acknowledgments}
\section{Introduction}
One of the main focuses of the theory of quantum information in recent years
has been to understand the powers and limitations of {\em LOCC protocols}.
These are protocols wherein two or more physically separated parties
possess the ability to perform arbitrary operations on local quantum systems
and to communicate with one another, but only classically.
The paradigm of LOCC, short for {\em local operations and classical
communication}, provides a setting in which to address basic questions
about the nature of entanglement and non-locality, generally viewed as
principal characteristics of quantum information.
One question along these lines that has received a great
deal of attention is that of {\em LOCC distinguishability} of sets of states.
In the two-party case, the two parties (Alice and Bob) share one of a known
orthogonal collection of pure states, and their goal is to determine which
of the states it is \cite{BennettD+99, BennettD+99a, ChenL03, Fan04, GhoshK+01,
GhoshK+04, HorodeckiS+03, Nathanson04, WalgateH02, WalgateS+00}.
In some cases it is possible for Alice and Bob to perform this task without
error and in some it is not.
For example, the fundamental result of Walgate, et~al.~\cite{WalgateS+00}
establishes that any two orthogonal pure states can be distinguished without
error.
On the other hand, large sets of maximally entangled states cannot; for
instance, if Alice and Bob's systems each correspond to $n$ dimensional
spaces, then it is impossible for them to perfectly distinguish $n+1$ or more
maximally entangled states \cite{Nathanson04}.
Other examples of sets of orthogonal states that cannot be perfectly
distinguished by LOCC protocols include those of \cite{BennettD+99}
and any set of states forming an unextendable product basis
\cite{BennettD+99a}.
These examples demonstrate that entanglement is not an essential feature
of LOCC indistinguishable sets of states given that these sets contain only
product states.
This paper considers a related question, which is whether there exist
subspaces of bipartite tensor product spaces such that no orthonormal basis
of the subspace has the property that its elements can be perfectly
distinguished by means of an LOCC protocol.
Many examples of LOCC-indistinguishable sets fail to give an example of
such a subspace in that they span subspaces for which one can easily find
a perfectly distinguishable basis.
For example, the four Bell states are not perfectly distinguishable by any
LOCC protocol, but the space spanned by these states obviously does have a
perfectly distinguishable basis---the standard basis.
Indeed, {\em every} subspace of a tensor product space
$\mathcal{A}\otimes\mathcal{B}$ for which
$\op{dim}(\mathcal{A}) = \op{dim}(\mathcal{B}) = 2$ has a basis whose
elements can be perfectly distinguished by some LOCC protocol, and therefore
fails to have the property we are considering.
We prove, however, that if the dimension of both $\mathcal{A}$ and
$\mathcal{B}$ is at least three, then there do exist subspaces of
$\mathcal{A}\otimes\mathcal{B}$ with the property that no basis of
the subspace is LOCC distinguishable.
In particular, it is proved that in the case
$n = \op{dim}(\mathcal{A}) = \op{dim}(\mathcal{B})$ for $n\geq 3$,
the subspace of dimension $n^2 - 1$ that is orthogonal to the canonical
maximally entangled state (or any other fixed maximally entangled state)
has this property.
One motive for investigating this property is to identify quantum channels
having suboptimal classical corrected capacity with respect to the
definition of Hayden and King~\cite{HaydenK04}.
More specifically, Hayden and King considered the situation in which a sender
transmits classical information over a quantum channel to a receiver, who
has the added capability to measure the environment and use the result to
correct the channel's output.
This notion of correcting the output of a quantum channel by measuring
the environment was considered earlier by Gregoratti and
Werner \cite{GregorattiW04}, who focused primarily on the quantum capacity of
such channels.
Based on the result of Walgate, et al.~\cite{WalgateS+00}, Hayden and King
proved that the {\em classical corrected capacity} of any quantum channel
is at least one bit of information.
Many natural examples of channels can easily be seen to in fact have
{\em optimal} classical corrected capacity, meaning that the capacity
is $\log_2 n$ for $n$ the dimension of the input space, and no examples of
channels were previously proved to have less than optimal classical corrected
capacity.
The existence of subspaces having no LOCC distinguishable bases implies the
existence of such channels, even if the definition of Hayden and King is
extended to allow two-way communication between the receiver and the
environment.
The remainder of this paper is organized as follows.
Section~\ref{sec:preliminaries} discusses notation and background information,
Section~\ref{sec:indistinguishability} contains a proof of the main result
of the paper, which is that there exist subspaces of bipartite tensor product
spaces having no LOCC distinguishable bases, and Section~\ref{sec:channel}
discusses the implications of this result to classical corrected capacities
of quantum channels.
The paper concludes with a short list of open questions.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection*{Basic notation}
This paper will use standard mathematical notation rather than Dirac notation
to represent vectors and linear mappings.
All vector spaces discussed in this paper are assumed to be finite dimensional
complex vector spaces.
The standard basis of a vector space $\mathcal{X}$ of the form
$\mathcal{X} = \mathbb{C}^n$ is $\{e_1,\ldots,e_n\}$, where $e_i$ is the
elementary unit vector defined by $e_i[j] = \delta_{ij}$.
The space of linear mappings from a space $\mathcal{Y}$ to a space
$\mathcal{X}$ is denoted $\lin{\mathcal{Y},\mathcal{X}}$, and we write
$\lin{\mathcal{X}}$ as shorthand for $\lin{\mathcal{X},\mathcal{X}}$ and
$\mathcal{X}^{\ast}$ as shorthand for $\lin{\mathcal{X},\mathbb{C}}$.
If $\mathcal{X} = \mathbb{C}^n$ and $\mathcal{Y} = \mathbb{C}^m$, then
elements of $\mathcal{X}$ are identified with $n$ dimensional column vectors,
elements of $\mathcal{X}^{\ast}$ are identified with $n$ dimensional row
vectors, and elements of $\lin{\mathcal{Y},\mathcal{X}}$ are identified with
$n\times m$ matrices in the typical way.
For $x\in\mathcal{X}$ we let $\overline{x} \in \mathcal{X}$ and
$x^{\t},x^{\ast}\in\mathcal{X}^{\ast}$ denote the entry-wise complex
conjugate, transpose, and conjugate transpose of $x$, and similar
for linear mappings; $\overline{X}\in\lin{\mathcal{Y},\mathcal{X}}$ and
$X^{\t},X^{\ast}\in\lin{\mathcal{X},\mathcal{Y}}$ denote the
entry-wise complex conjugate, transpose, and conjugate transpose of
$X\in\lin{\mathcal{Y},\mathcal{X}}$.
The usual inner products on $\mathcal{X}$ and
$\lin{\mathcal{Y},\mathcal{X}}$ are given by $\ip{x}{y} = x^{\ast}y$ and
$\ip{X}{Y} = \op{tr}(X^{\ast}Y)$ for $x,y\in\mathcal{X}$ and
$X,Y\in\lin{\mathcal{Y},\mathcal{X}}$.
The standard basis of the space $\lin{\mathcal{Y},\mathcal{X}}$ consists
of the mappings $E_{i,j} = e_i e_j^{\ast}$ for $1\leq i\leq n$ and
$1\leq j\leq m$.
The identity operator acting on a given space $\mathcal{X}$ is denoted
$I_{\mathcal{X}}$, or just as $I$ when $\mathcal{X}$ is implicit or
otherwise understood.
It is sometimes helpful to give different names to distinct but otherwise
identical spaces; in particular, we assume that $\mathcal{A} = \mathbb{C}^n$
and $\mathcal{B} = \mathbb{C}^n$ are vector spaces referring to Alice's and
Bob's systems, respectively, throughout the paper.
We define $I_{\mathcal{B}, \mathcal{A}}\in\lin{\mathcal{B},\mathcal{A}}$ to
be the linear mapping that identifies vectors in $\mathcal{A}$ with vectors
in $\mathcal{B}$ by identifying the standard bases of these spaces.
Often this mapping is used implicitly.
For instance, if $a\in\mathcal{A}$ and
$b \in \mathcal{B}$ then $\ip{a}{b}$ is shorthand for
$\ip{a}{I_{\mathcal{B},\mathcal{A}}b}$, and when
$X \in \lin{\mathcal{A},\mathcal{B}}$ we write $\op{tr} (X)$ to mean
$\op{tr} (I_{\mathcal{B},\mathcal{A}} X)$.
It is convenient when discussing bipartite quantum states to define a linear
bijection
\[
\op{vec}:\lin{\mathcal{Y},\mathcal{X}}\rightarrow\mathcal{X}\otimes\mathcal{Y}
\]
by the action $\op{vec}(E_{i,j}) = e_i\otimes e_j$ on standard basis elements,
extending by linearity.
It is simple to verify that for any choice of linear mappings $A$, $X$, and $B$
(for which the product $A X B$ is sensible), the equation
\[
(A\otimes B^{\t})\op{vec}(X) = \op{vec}(A X B)
\]
is satisfied.
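Since $\op{vec}(E_{i,j}) = e_i\otimes e_j$, the $\op{vec}$ mapping corresponds to row-major flattening of a matrix, and the identity above can be checked numerically. The following sketch (using NumPy, with random complex matrices) is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def vec(X):
    # vec(E_ij) = e_i (x) e_j corresponds to row-major flattening
    return X.reshape(-1)

n, m, k = 3, 4, 2
A = rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n))
X = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
B = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))

# (A (x) B^T) vec(X) = vec(A X B)
lhs = np.kron(A, B.T) @ vec(X)
rhs = vec(A @ X @ B)
print(np.allclose(lhs, rhs))
```

Note the Kronecker factor ordering $(A\otimes B^{\t})$ matches row-major vectorization; the more familiar column-major convention would instead give $(B^{\t}\otimes A)$.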
For $\mathcal{A} = \mathbb{C}^n$ and $\mathcal{B} = \mathbb{C}^n$, the unit
vector
\[
\frac{1}{\sqrt{n}}\op{vec}(I_{\mathcal{B},\mathcal{A}})
= \frac{1}{\sqrt{n}}\sum_{i = 1}^n e_i\otimes e_i
\in\mathcal{A}\otimes\mathcal{B}
\]
represents the canonical maximally entangled pure state in the
space $\mathcal{A}\otimes\mathcal{B}$.
Let $P\in\lin{\mathcal{A}\otimes\mathcal{B}}$ represent the projection
onto the space spanned by this vector,
\[
P = \frac{1}{n}
\op{vec}(I_{\mathcal{B},\mathcal{A}})
\op{vec}(I_{\mathcal{B},\mathcal{A}})^{\ast},
\]
and let $Q\in\lin{\mathcal{A}\otimes\mathcal{B}}$ denote the projection onto
the orthogonal complement of this space,
\[
Q = I_{\mathcal{A}\otimes\mathcal{B}} - P.
\]
Also let $\mathcal{P}$ and $\mathcal{Q}$ denote the subspaces of
$\mathcal{A}\otimes\mathcal{B}$ onto which $P$ and $Q$ project.
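For concreteness, the projections $P$ and $Q$ can be constructed explicitly in the case $n=3$; the following sketch checks the basic properties used later ($Q$ is a projection annihilating the maximally entangled vector, and $\op{dim}\mathcal{Q} = n^2-1$):

```python
import numpy as np

n = 3
I_n = np.eye(n)
phi = I_n.reshape(-1) / np.sqrt(n)   # (1/sqrt(n)) vec(I): maximally entangled state
P = np.outer(phi, phi.conj())        # projection onto span{phi}
Q = np.eye(n * n) - P                # projection onto the orthogonal complement

print(np.allclose(Q @ Q, Q))         # Q is idempotent
print(np.allclose(Q @ phi, 0))       # phi lies in the kernel of Q
print(round(np.trace(Q).real))       # rank of Q is n^2 - 1 = 8
```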
\subsection*{Separable measurements and perfect distinguishability}
There is no simple characterization known for the set of measurements that can
be realized by means of LOCC protocols.
For this reason it will simplify matters greatly for us to consider the set
of {\em separable measurements}, which does have a simple mathematical
characterization that we now discuss.
Let $\mathcal{A}$ and $\mathcal{B}$ be spaces corresponding to
two parties Alice and Bob.
A {\em separable measurement} on $\mathcal{A}\otimes\mathcal{B}$ with
possible outcomes $\{1,\ldots,N\}$ is a POVM described by a collection
\[
\{A_i\otimes B_i\,:\,i = 1,\ldots,N\}\subset
\lin{\mathcal{A}\otimes\mathcal{B}}.
\]
Similar to ordinary POVMs, $A_i$ and $B_i$ must be positive semidefinite
operators for each $i$, and must satisfy
\[
\sum_{i = 1}^N A_i\otimes B_i = I_{\mathcal{A}\otimes\mathcal{B}}.
\]
If we have that each of the operators $A_i$ and $B_i$ has rank equal to one,
we will say that the measurement is a {\em rank one separable measurement}.
Any measurement that can be realized by means of an LOCC protocol can be
described by a rank one separable measurement in the sense of the following
proposition.
\begin{prop}
Suppose that $\{M_k\,:\,k=1,\ldots,m\}$ is a POVM that describes the classical
output of a given LOCC protocol on $\mathcal{A}\otimes\mathcal{B}$.
Then there exists a rank one separable measurement
\[
\{a_i a_i^{\ast} \otimes b_i b_i^{\ast}\,:\,i = 1,\ldots,N\}
\]
on $\mathcal{A}\otimes\mathcal{B}$ together with a partition
$S_1 \cup \cdots \cup S_m = \{1,\ldots,N\}$,
$S_k\cap S_l = \varnothing$ for $k\not=l$, such that
\[
M_k = \sum_{i\in S_k} a_i a_i^{\ast} \otimes b_i b_i^{\ast}
\]
for $1\leq k \leq m$.
\end{prop}
\noindent
The fact that the classical output of any LOCC protocol can be described
by a separable measurement is well-known and the proof is routine.
It seems to have been first observed by Vedral and Plenio \cite{VedralP98} and
is discussed further in references \cite{BennettD+99, Rains97}.
By considering the spectral decomposition of its POVM elements, any separable
measurement can easily be further resolved to have rank one as claimed by
the proposition.
We note that the converse of the proposition is known to be false, as there exist
separable measurements that cannot be realized by LOCC
protocols \cite{BennettD+99}.
Suppose that $u_1,\ldots,u_m\in\mathcal{A}\otimes\mathcal{B}$ is a collection
of unit vectors.
A separable measurement $\{A_i\otimes B_i\,:\,i = 1,\ldots,N\}$ may be said to
{\em perfectly distinguish} this collection of vectors if there exists a
partition $S_1\cup \cdots \cup S_m = \{1,\ldots,N\}$,
$S_k\cap S_l = \varnothing$ for $k\not=l$, such that
\[
u_k^{\ast} \left(\sum_{i\in S_l} A_i\otimes B_i\right) u_k = \delta_{kl}
\]
for $1\leq k,l\leq m$.
\begin{cor}
If Alice and Bob can perfectly distinguish the states $u_1,\ldots,u_m$
by means of an LOCC protocol, then there exists a rank one separable
measurement
\[
\{a_i a_i^{\ast} \otimes b_i b_i^{\ast}\,:\,i = 1,\ldots,N\}
\]
that perfectly distinguishes $u_1,\ldots,u_m$.
\end{cor}
\noindent
We also note that, without loss of generality, the measurement in this
corollary may be assumed to satisfy the property that $a_i \otimes b_i$ and
$a_j\otimes b_j$ are linearly independent for each choice of $i\not=j$.
\subsection*{Unitary equivalence of realizations of completely positive maps}
The main result of this paper is applied to the question of channel
capacities in Section~\ref{sec:channel}.
It will be helpful in that section to have noted the simple fact below
concerning realizations of completely positive maps.
Let $\trans{\mathcal{X},\mathcal{Y}}$ denote the space of linear mappings
of the form
$\Phi : \lin{\mathcal{X}} \rightarrow \lin{\mathcal{Y}}$.
The {\em Jamio{\l}kowski isomorphism} is the linear mapping
of the form
$J : \trans{\mathcal{X},\mathcal{Y}} \rightarrow
\lin{\mathcal{Y}\otimes\mathcal{X}}$
defined by
\[
J(\Phi) =
\sum_{i,j} \Phi(E_{i,j}) \otimes E_{i,j}
=
(\Phi \otimes I_{\lin{\mathcal{X}}})(\op{vec}(I_{\mathcal{X}})
\op{vec}(I_{\mathcal{X}})^{\ast}).
\]
\begin{prop}\label{prop:realize}
Suppose that $\Phi\in\trans{\mathcal{X},\mathcal{Y}}$ is completely positive,
and further suppose that $\mathcal{Z}$ is a space and
$A,B\in\lin{\mathcal{X},\mathcal{Y}\otimes\mathcal{Z}}$ are linear mappings
that both realize $\Phi$ in the sense that
\[
\Phi(X) = \op{tr}_{\mathcal{Z}} A X A^{\ast} = \op{tr}_{\mathcal{Z}} B X B^{\ast}
\]
for all $X\in\lin{\mathcal{X}}$.
Then there is a unitary operator $U\in\lin{\mathcal{Z}}$ such that
$A = (I \otimes U) B$.
\end{prop}
\begin{proof}
We have
\begin{align*}
J(\Phi) & = (\Phi \otimes I_{\lin{\mathcal{X}}})(
\op{vec}(I_{\mathcal{X}})\op{vec}(I_{\mathcal{X}})^{\ast})\\
& =
\op{tr}_{\mathcal{Z}} (A \otimes I_{\mathcal{X}})
\op{vec}(I_{\mathcal{X}})\op{vec}(I_{\mathcal{X}})^{\ast}
(A \otimes I_{\mathcal{X}})^{\ast}\\
& = \op{tr}_{\mathcal{Z}} \op{vec}(A)\op{vec}(A)^{\ast},
\end{align*}
and so $\op{vec}(A)\in \mathcal{Y}\otimes\mathcal{Z}\otimes\mathcal{X}$ is a
purification of $J(\Phi)$.
Likewise, $\op{vec}(B)$ is a purification of $J(\Phi)$ as well.
It is well-known that two purifications of a given positive semidefinite
operator are equivalent up to a unitary operator on the space that is
traced out.
In the present situation this implies
\[
\op{vec}(A) = (I_{\mathcal{Y}} \otimes U \otimes I_{\mathcal{X}})
\op{vec}(B) = \op{vec}((I_{\mathcal{Y}}\otimes U)B)
\]
for some unitary operator $U\in\lin{\mathcal{Z}}$.
This is equivalent to $A = (I_{\mathcal{Y}}\otimes U) B$, and so the
proposition is proved.
\end{proof}
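The easy converse of Proposition~\ref{prop:realize} --- that two realizations related by a unitary on $\mathcal{Z}$ define the same completely positive map --- is straightforward to verify numerically. The following sketch (illustrative only, with an arbitrary non-isometric $A$, which still defines a completely positive map) checks this for small dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
dx, dy, dz = 2, 2, 3

def partial_trace_z(M, dy, dz):
    # Trace out the Z factor of an operator on Y (x) Z (row-major ordering)
    return M.reshape(dy, dz, dy, dz).trace(axis1=1, axis2=3)

# A realization A : X -> Y (x) Z, and a rotated copy B = (I (x) U) A
A = rng.standard_normal((dy * dz, dx)) + 1j * rng.standard_normal((dy * dz, dx))
H = rng.standard_normal((dz, dz)) + 1j * rng.standard_normal((dz, dz))
U = np.linalg.qr(H)[0]               # a unitary on Z from a QR decomposition
B = np.kron(np.eye(dy), U) @ A

X = rng.standard_normal((dx, dx)) + 1j * rng.standard_normal((dx, dx))
Phi_A = partial_trace_z(A @ X @ A.conj().T, dy, dz)
Phi_B = partial_trace_z(B @ X @ B.conj().T, dy, dz)
print(np.allclose(Phi_A, Phi_B))     # both realize the same map
```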
\section{Two-way indistinguishability}
\label{sec:indistinguishability}
This section contains a proof of the main result of this paper, which is
stated in the following theorem.
\begin{theorem}
\label{theorem:no-distinguishable-basis}
Let $\mathcal{A} = \mathbb{C}^n$ and $\mathcal{B} = \mathbb{C}^n$
for $n\geq 3$.
Then there is no basis of the subspace
$\mathcal{Q}\subseteq\mathcal{A}\otimes\mathcal{B}$ that is perfectly
distinguishable by any LOCC protocol.
\end{theorem}
\noindent
Before giving a formal proof of this theorem, it will be helpful to give a
brief sketch of the proof.
Recall that the operator
\[
Q = I_{\mathcal{A}\otimes\mathcal{B}} -
\frac{1}{n}\op{vec}(I_{\mathcal{B},\mathcal{A}})
\op{vec}(I_{\mathcal{B},\mathcal{A}})^{\ast}
\]
is the projection onto the subspace $\mathcal{Q}$.
If $\{u_1,\ldots,u_{n^2 - 1}\}$ is a basis of $\mathcal{Q}$ whose elements
are perfectly distinguished by some LOCC protocol, then these elements
are also perfectly distinguished by some rank one separable measurement.
Such a measurement may be written as
\[
\left\{a_i a_i^{\ast} \otimes \overline{b_i} b_i^\t\,:\,
i = 1,\ldots, N\right\}
\]
for $a_1,\ldots,a_N\in\mathcal{A}$ and $b_1,\ldots,b_N\in\mathcal{B}$.
As previously noted, we may assume without loss of generality that
the vectors $a_i\otimes\overline{b_i}$ and $a_j\otimes\overline{b_j}$
are linearly independent for $i\not=j$.
Based on the fact that this measurement perfectly distinguishes the elements
in the chosen basis of $\mathcal{Q}$, we will determine that the basis
\[
\left\{u_1,\ldots,u_{n^2 - 1},
\frac{1}{\sqrt{n}}\op{vec}(I_{\mathcal{B},\mathcal{A}})\right\}
\]
of the entire space $\mathcal{A}\otimes\mathcal{B}$ diagonalizes each of the
operators $Q (a_i a_i^{\ast} \otimes \overline{b_i} b_i^{\t}) Q$
for $1\leq i \leq N$.
Because any two operators that are simultaneously diagonalized by a given
basis must commute, we conclude that
the operators $Q (a_i a_i^{\ast} \otimes \overline{b_i} b_i^{\t}) Q$ and
$Q (a_j a_j^{\ast} \otimes \overline{b_j} b_j^{\t}) Q$ commute for every
choice of $i$ and $j$.
However, based on the properties of the projection $Q$ it can be shown
that there must be a choice of $i$ and $j$ for which
$Q (a_i a_i^{\ast} \otimes \overline{b_i} b_i^{\t}) Q$ and
$Q (a_j a_j^{\ast} \otimes \overline{b_j} b_j^{\t}) Q$ do not commute.
This is a contradiction that stems from the assumption that
$\{u_1,\ldots,u_{n^2 - 1}\}$ is an LOCC distinguishable basis of $\mathcal{Q}$,
and so we conclude that such a basis does not exist.
We now give a more formal proof, beginning with a lemma that proves that there
must exist choices of $i$ and $j$ for which the operators
$Q (a_i a_i^{\ast} \otimes \overline{b_i} b_i^{\t}) Q$
and $Q (a_j a_j^{\ast} \otimes \overline{b_j} b_j^{\t}) Q$ do not commute.
\begin{lemma}\label{lemma:commutation}
Suppose that
\[
\left\{a_i a_i^{\ast} \otimes \overline{b_i} b_i^\t\,:\,
i = 1,\ldots, N\right\}
\]
is a rank one separable measurement such that
$a_i \otimes \overline{b_i}$ and $a_j\otimes \overline{b_j}$ are linearly
independent for all $i\not=j$.
Then there exists a choice of $i$ and $j$ such that the operators
\[
Q (a_i a_i^{\ast} \otimes \overline{b_i} b_i^{\t}) Q
\quad\text{and}\quad
Q (a_j a_j^{\ast} \otimes \overline{b_j} b_j^{\t}) Q
\]
do not commute.
\end{lemma}
\begin{proof}
First note that as
$\left\{a_i a_i^{\ast}\otimes\overline{b_i} b_i^\t\,:\,i=1,\ldots, N\right\}$
describes a measurement, we have
\[
\sum_{i=1}^N a_i a_i^{\ast} \otimes \overline{b_i} b_i^\t =
I_{\mathcal{A}\otimes\mathcal{B}}.
\]
It follows that
\[
\op{vec}(I_{\mathcal{B},\mathcal{A}}) =
\left(\sum_{i=1}^N a_i a_i^{\ast} \otimes \overline{b_i} b_i^\t\right)
\op{vec}(I_{\mathcal{B},\mathcal{A}})
= \op{vec}\left(
\sum_{i=1}^N a_i a_i^{\ast} b_i b_i^\ast\right)
= \op{vec}\left(
\sum_{i=1}^N \ip{a_i}{b_i} a_i b_i^\ast\right)
\]
and therefore
\[
\sum_{i=1}^N \ip{a_i}{b_i} a_i b_i^\ast = I_{\mathcal{B},\mathcal{A}}.
\]
Taking the trace of both sides yields
\[
\sum_{i=1}^N \abs{\ip{a_i}{b_i}}^2 = n.
\]
Now, let
\[
\alpha_{i,j} = (a_i^{\ast} \otimes b_i^{\t}) Q (a_j \otimes \overline{b_j})
\]
for all $i$, $j$.
It will be proved that there exists a choice of $i\not=j$ such that
$\alpha_{i,j}\not=0$.
In order to prove this, assume toward contradiction that
$\alpha_{i,j}=0$ for every pair $i\not=j$.
As
\[
\alpha_{i,j} = (a_i^{\ast} \otimes b_i^{\t}) Q (a_j \otimes \overline{b_j})
= \ip{a_i}{a_j}\ip{b_j}{b_i} - \frac{1}{n}\ip{a_i}{b_i}\ip{b_j}{a_j},
\]
we have that
\[
\ip{a_i}{a_j}\ip{b_j}{b_i} = \frac{1}{n}\ip{a_i}{b_i}\ip{b_j}{a_j}
\]
for all choices of $i\not=j$.
Because $\sum_i\abs{\ip{a_i}{b_i}}^2 = n > 0$, we may choose some
value of $i$ for which $\ip{a_i}{b_i} \not=0$.
We then have
\begin{multline*}
\ip{a_i}{b_i} =
a_i^{\ast}\left(\sum_j \ip{a_j}{b_j}a_j b_j^{\ast}\right)b_i
= \sum_j \ip{a_j}{b_j} \ip{a_i}{a_j} \ip{b_j}{b_i}\\
= \sum_{j\not=i} \ip{a_j}{b_j} \ip{a_i}{a_j} \ip{b_j}{b_i}
+ \ip{a_i}{b_i} \norm{a_i}^2 \norm{b_i}^2\\
= \frac{1}{n} \sum_{j\not=i}\ip{a_j}{b_j} \ip{a_i}{b_i} \ip{b_j}{a_j}
+ \ip{a_i}{b_i} \norm{a_i}^2 \norm{b_i}^2\\
= \left( \frac{1}{n}\sum_j \abs{\ip{a_j}{b_j}}^2 - \frac{1}{n}
\abs{\ip{a_i}{b_i}}^2 + \norm{a_i}^2 \norm{b_i}^2\right)\ip{a_i}{b_i}\\
=
\left( 1 - \frac{1}{n}\abs{\ip{a_i}{b_i}}^2 +
\norm{a_i}^2 \norm{b_i}^2\right)\ip{a_i}{b_i}.
\end{multline*}
As $\ip{a_i}{b_i}\not=0$ this implies
\[
\frac{1}{n}\abs{\ip{a_i}{b_i}}^2 = \norm{a_i}^2 \norm{b_i}^2.
\]
But then by the Cauchy-Schwarz Inequality we have
\[
\abs{\ip{a_i}{b_i}}^2 \leq
\norm{a_i}^2 \norm{b_i}^2 = \frac{1}{n}\abs{\ip{a_i}{b_i}}^2,
\]
which implies $\abs{\ip{a_i}{b_i}}^2 = 0$.
This contradicts the fact that $i$ was chosen so that $\ip{a_i}{b_i}\not=0$,
and so it has been proved that $\alpha_{i,j} \not= 0$ for some
choice of $i\not=j$.
Fix such a choice for the remainder of the proof.
Next, let us prove that the two vectors $Q(a_i \otimes \overline{b_i})$ and
$Q(a_j \otimes \overline{b_j})$ are linearly independent.
To this end let $\beta$ and $\gamma$ be scalars that satisfy
\[
\beta\,Q(a_i \otimes \overline{b_i}) +
\gamma\,Q(a_j \otimes \overline{b_j}) = 0.
\]
This implies
\[
\beta\, a_i \otimes \overline{b_i} + \gamma\, a_j \otimes \overline{b_j}
= \frac{1}{n}\left(\beta \ip{b_i}{a_i} + \gamma \ip{b_j}{a_j}\right)
\op{vec}(I_{\mathcal{B},\mathcal{A}}),
\]
or equivalently
\[
\beta\, a_i b_i^{\ast} + \gamma\, a_j b_j^{\ast}
= \frac{1}{n}\left(
\beta \ip{b_i}{a_i} + \gamma \ip{b_j}{a_j}\right)I_{\mathcal{B},\mathcal{A}}.
\]
The left hand side of this equation has rank at most 2.
Because we are assuming that $n\geq 3$ this means that the right hand side
must be 0, for otherwise it would have rank $n \geq 3$.
Thus $\beta\, a_i b_i^{\ast} + \gamma\, a_j b_j^{\ast} = 0$, which is
equivalent to
$\beta\, a_i \otimes \overline{b_i} + \gamma\, a_j \otimes \overline{b_j} = 0$.
As $a_i \otimes \overline{b_i}$ and $a_j \otimes \overline{b_j}$ are
necessarily linearly independent, however, this implies that $\beta=\gamma=0$.
Consequently $Q(a_i \otimes \overline{b_i})$ and
$Q(a_j \otimes \overline{b_j})$ are linearly independent.
To complete the proof, we must show that the two operators
\[
Q (a_i a_i^{\ast} \otimes \overline{b_i} b_i^{\t}) Q
(a_j a_j^{\ast} \otimes \overline{b_j} b_j^{\t}) Q
= \alpha_{i,j} Q(a_i \otimes \overline{b_i}) (a_j^{\ast} \otimes b_j^{\t})Q
\]
and
\[
Q (a_j a_j^{\ast} \otimes \overline{b_j} b_j^{\t}) Q
(a_i a_i^{\ast} \otimes \overline{b_i} b_i^{\t}) Q
= \overline{\alpha_{i,j}} Q(a_j \otimes \overline{b_j})
(a_i^{\ast} \otimes b_i^{\t})Q
\]
are not equal.
Because $\alpha_{i,j} \not= 0$ and the vectors $Q(a_i\otimes \overline{b_i})$
and $Q(a_j\otimes \overline{b_j})$ are nonzero (as they are linearly
independent), neither of these operators is 0.
The images of the two operators are therefore the spaces spanned by the
vectors $Q(a_i\otimes \overline{b_i})$ and $Q(a_j\otimes \overline{b_j})$,
respectively.
The fact that the two operators are not equal therefore follows from the
linear independence of
$Q(a_i\otimes \overline{b_i})$ and $Q(a_j\otimes \overline{b_j})$.
\end{proof}
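Lemma~\ref{lemma:commutation} is easy to sanity-check numerically. The following sketch is an editorial illustration, not part of the proof: it takes $n = 3$ and the standard-basis measurement $\{e_i e_i^{\ast} \otimes e_j e_j^{\t}\}$, whose $n^2$ product vectors $e_i\otimes e_j$ are pairwise linearly independent, and verifies that two of the compressed operators fail to commute.

```python
from math import sqrt

# Editorial sketch: check Lemma "commutation" for n = 3 with the
# standard-basis separable measurement { e_i e_i^* (x) e_j e_j^t }.
n = 3
N = n * n

def kron(a, b):
    return [x * y for x in a for y in b]

def outer(x, y):
    return [[xi * yj for yj in y] for xi in x]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(N)] for i in range(N)]

e = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
# v = vec(I)/sqrt(n); its entries are 1/sqrt(n) at the "diagonal" positions.
v = [1 / sqrt(n) if k // n == k % n else 0.0 for k in range(N)]
I = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
Qp = matsub(I, outer(v, v))          # projection onto Q = (span v)^perp

def compressed(a, b):
    # Q (a a^* (x) b b^t) Q for the (real) product vector a (x) b
    ab = kron(a, b)
    return matmul(Qp, matmul(outer(ab, ab), Qp))

A = compressed(e[0], e[0])
B = compressed(e[1], e[1])
comm = matsub(matmul(A, B), matmul(B, A))
noncommuting = any(abs(x) > 1e-9 for row in comm for x in row)
```

In agreement with the lemma, `noncommuting` evaluates to true for this choice of measurement.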
\begin{proof}[Proof of Theorem~\ref{theorem:no-distinguishable-basis}]
The proof is by contradiction.
To this end, assume
$\{u_1,\ldots,u_m\}\subset \mathcal{A}\otimes\mathcal{B}$, $m = n^2 -1$, is an
orthonormal basis of $\mathcal{Q}$ whose elements are perfectly distinguished
by some LOCC protocol.
Then there exists a rank one separable measurement
\[
\left\{a_i a_i^{\ast} \otimes \overline{b_i} b_i^\t\,:\,
i = 1,\ldots, N\right\}
\]
for which $a_i \otimes \overline{b_i}$ and $a_j \otimes \overline{b_j}$ are
linearly independent for all $i\not=j$,
together with a partition $S_1\cup \cdots \cup S_m = \{1,\ldots,N\}$,
$S_k\cap S_l = \varnothing$ for $k\not=l$, such that
\[
u_k^{\ast} \left(\sum_{i\in S_l}
a_i a_i^{\ast} \otimes \overline{b_i} b_i^\t\right) u_k = \delta_{kl}
\]
for all $1\leq k,l\leq m$.
Now, as
\[
u_k^{\ast} \left(a_i a_i^{\ast} \otimes \overline{b_i}b_i^{\t}\right) u_k
= \abs{\ip{u_k}{a_i\otimes \overline{b_i}}}^2,
\]
it follows that $u_k$ and $a_i\otimes\overline{b_i}$ are orthogonal whenever
$i\not\in S_k$.
Consequently, it holds that
\[
u_k^{\ast}
\left(a_i a_i^{\ast} \otimes \overline{b_i}b_i^{\t}\right) u_l = 0
\]
for $k\not=l$ given that $S_k$ and $S_l$ are disjoint.
The projection $Q$ acts trivially on each of the vectors
$u_1,\ldots,u_m$, and thus
\[
u_k^{\ast} Q
\left(a_i a_i^{\ast} \otimes \overline{b_i}b_i^{\t}\right) Q u_l = 0
\]
for $k\not=l$.
Letting $v = \frac{1}{\sqrt{n}}\op{vec}(I_{\mathcal{B},\mathcal{A}})$
we obviously have $Q v = 0$, and thus
\[
u_k^{\ast} Q \left(a_i a_i^{\ast} \otimes \overline{b_i}b_i^{\t}\right) Q v =
v^{\ast} Q \left(a_i a_i^{\ast} \otimes \overline{b_i}b_i^{\t}\right) Q u_k =
0
\]
for each choice of $k$ as well.
Thus, it has been shown that the orthonormal basis
$\{u_1,\ldots,u_m,v\}$ of $\mathcal{A}\otimes\mathcal{B}$ diagonalizes
each of the operators
$Q \left(a_i a_i^{\ast} \otimes \overline{b_i}b_i^{\t}\right) Q$,
for $1\leq i \leq N$.
As these operators are all simultaneously diagonalized by a common
orthonormal basis, they must therefore commute.
However, by Lemma~\ref{lemma:commutation} this is not the case---for
at least one choice of $i\not=j$ it holds that
$Q \left(a_i a_i^{\ast} \otimes \overline{b_i}b_i^{\t}\right) Q$
and $Q \left(a_j a_j^{\ast} \otimes \overline{b_j}b_j^{\t}\right) Q$
do not commute.
As a contradiction has been reached, this completes the proof of the theorem.
\end{proof}
\subsection*{Impossibility for pairs of qubits}
It should be noted that the assumption $n\geq 3$ in
Theorem~\ref{theorem:no-distinguishable-basis} is necessary.
Indeed, every subspace of a tensor product space
$\mathcal{A}\otimes\mathcal{B}$ where $\mathcal{A} = \mathbb{C}^2$ and
$\mathcal{B} = \mathbb{C}^2$ has a perfectly distinguishable basis.
To see this, let $\mathcal{V}$ be a subspace of $\mathcal{A}\otimes\mathcal{B}$
and let $m = \op{dim}(\mathcal{V})$.
There is nothing to prove for $m = 0$ or $m=1$; the claim for $m = 2$ follows
from Walgate et al.~\cite{WalgateS+00}, and the case $m = 4$ is trivial.
In the remaining case $m = 3$, it must be that $\mathcal{V}$ is the
orthogonal complement of some unit vector $u\in\mathcal{A}\otimes\mathcal{B}$.
By considering the Schmidt decomposition of $u$, it is
straightforward to find two product states $a_1\otimes b_1$ and
$a_2\otimes b_2$ so that the set $\{u, a_1\otimes b_1, a_2\otimes b_2\}$ is
orthonormal.
Letting $v$ be any vector orthogonal to the span of
$\{u, a_1\otimes b_1, a_2\otimes b_2\}$, we have that
$\{v, a_1\otimes b_1, a_2\otimes b_2\}$ is an orthonormal basis of
$\mathcal{V}$.
Walgate and Hardy \cite{WalgateH02} have shown that any such set is
perfectly distinguishable given that at least two members of the set
are product states.
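As a concrete illustration of this construction for $m = 3$ (an editorial sketch: for simplicity the vector $u$ is taken to be real and already written in Schmidt form with respect to the standard bases; the general case reduces to this one by a change of local bases), one can check directly that the two ``crossed'' product states are orthonormal to $u$.

```python
def kron(x, y):
    return [xi * yj for xi in x for yj in y]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

# u = a (x1 (x) y1) + b (x2 (x) y2) in Schmidt form; here the Schmidt bases
# are taken to be the standard basis of C^2 and the coefficients are real
# (assumptions of this sketch).
a, b = 0.6, 0.8                        # Schmidt coefficients, a^2 + b^2 = 1
x1, x2 = [1.0, 0.0], [0.0, 1.0]
u = [a * s + b * t for s, t in zip(kron(x1, x1), kron(x2, x2))]

# Product states obtained by "crossing" the Schmidt bases are orthogonal
# to u and to each other.
p1 = kron(x1, x2)
p2 = kron(x2, x1)
checks = [abs(dot(u, p1)), abs(dot(u, p2)), abs(dot(p1, p2)),
          abs(dot(u, u) - 1.0)]
```

Any unit vector $v$ orthogonal to all three then completes $\{v, p_1, p_2\}$ to an orthonormal basis of $u^{\perp}$ containing two product states, as required.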
\section{Channels with suboptimal classical corrected capacity}
\label{sec:channel}
Hayden and King~\cite{HaydenK04} considered the classical capacity of quantum
channels when the receiver has the capability to measure the channel's
environment and to use the classical result of this measurement when measuring
the output of the channel.
In this section we give examples of channels that have suboptimal capacity
with respect to this definition.
In fact, the capacity of the channels remains suboptimal even when two-way
communication is allowed between the receiver and the environment.
As our aim is to only prove the existence of channels with suboptimal classical
corrected capacity rather than proving quantitative bounds on this capacity,
we will use the following qualitative definition that does not refer to any
specific measure of capacity.
An admissible (i.e., completely positive and trace-preserving) mapping
$\Phi\in\trans{\mathcal{X},\mathcal{A}}$ is said to have {\em optimal
two-way classical corrected capacity} if the following holds.
\begin{mylist}{\parindent}
\item[1.]
There exists a space $\mathcal{B}$ and a unitary embedding
$U\in\lin{\mathcal{X},\mathcal{A}\otimes\mathcal{B}}$ such that
\[
\Phi(X) = \op{tr}_{\mathcal{B}} U X U^{\ast}
\]
for all $X\in\lin{\mathcal{X}}$, and
\item[2.]
there exists an orthonormal basis $\{x_1,\ldots,x_n\}$ of $\mathcal{X}$ such
that the set
\[
Ux_1,\ldots,Ux_n \in \mathcal{A}\otimes\mathcal{B}
\]
is perfectly distinguishable by some LOCC protocol.
\end{mylist}
\noindent
Note that by Proposition~\ref{prop:realize}, a given mapping $\Phi$ fails to
have optimal two-way classical corrected capacity if item 2 above fails to
hold for even a single choice of $U$.
This is because any other choice is equivalent up to a unitary operator
on $\mathcal{B}$, which can simply be absorbed into the LOCC protocol.
The admissible maps that fail to satisfy the above definition are based
on the subspaces considered in the previous section.
Let $n\geq 3$, let $\mathcal{X} = \mathbb{C}^{n^2 - 1}$, and let
$\mathcal{A} = \mathcal{B} = \mathbb{C}^n$.
Choose $u_1,\ldots,u_{n^2 - 1}\in\mathcal{A}\otimes\mathcal{B}$
to be an arbitrary orthonormal basis for the subspace $\mathcal{Q}$
of $\mathcal{A}\otimes\mathcal{B}$.
Define $U\in\lin{\mathcal{X},\mathcal{A}\otimes\mathcal{B}}$
as
\[
U = \sum_{i=1}^{n^2 - 1} u_i e_i^{\ast}.
\]
Obviously $U$ is a unitary embedding, so the mapping
$\Phi\in\trans{\mathcal{X},\mathcal{A}}$ defined by
$\Phi(X) = \op{tr}_{\mathcal{B}} U X U^{\ast}$
for all $X\in\lin{\mathcal{X}}$ is admissible.
\begin{cor}
The mapping $\Phi$ does not have optimal two-way classical corrected capacity.
\end{cor}
\begin{proof}
If $\Phi$ were to have optimal two-way classical corrected capacity, there
would be a choice of an orthonormal basis
$\{x_1,\ldots,x_{n^2 - 1}\}$ of $\mathcal{X}$ such that
$Ux_1,\ldots,Ux_{n^2 - 1}\in \mathcal{A}\otimes\mathcal{B}$
is perfectly distinguishable by an LOCC protocol.
As any such set is necessarily an orthonormal basis of $\mathcal{Q}$, this
is impossible by Theorem~\ref{theorem:no-distinguishable-basis}.
\end{proof}
Although the notion of correctable versus uncorrectable channels does
not require that the input and output spaces have the same dimension,
it is of course simple to adjust such an example to give a channel where
this constraint is satisfied, by viewing the receiver's space
$\mathcal{A}$ as being embedded in $\mathcal{X}$.
One may therefore view the example above for $n=3$ as giving a
three-qubit channel having suboptimal two-way classical corrected capacity.
\section{Conclusion}
\label{sec:conclusion}
It has been proved that there exist subspaces of bipartite tensor product
spaces that have no bases that can be perfectly distinguished by LOCC
protocols, and this fact has been used to construct admissible mappings
having suboptimal two-way classical corrected capacity.
There are several interesting unanswered questions relating to these
results, including the following.
\begin{mylist}{\parindent}
\item[1.]
What is the smallest dimension required for a subspace to have no
bases perfectly distinguishable by LOCC protocols?
(The smallest dimension achieved in the present paper is 8.)
\item[2.]
Do there exist subspaces of $\mathcal{A}\otimes\mathcal{B}$ having
no perfectly distinguishable bases when $\op{dim}(\mathcal{A}) = 2$?
As demonstrated in Section~\ref{sec:indistinguishability} this necessarily
requires $\op{dim}(\mathcal{B})\geq 3$.
\item[3.]
Quantitative bounds on the probability with which bases of the subspaces in
question can be distinguished by LOCC protocols were not considered in this
paper, nor were specific bounds on classical corrected capacities of
the associated channels.
What can be proved about such bounds?
\end{mylist}
\subsection*{Acknowledgments}
I thank Somshubhro Bandyopadhyay, Mehdi Mhalla, and Jonathan Walgate
for several helpful discussions and suggestions.
This research was supported by Canada's NSERC, the Canada Research Chairs
program, and the Canadian Institute for Advanced Research (CIAR).
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
Let $G$ be a finite group,
and define an equivalence relation on the elements of $G$
by declaring that elements $a$ and $b$ of $G$ are equivalent
if, and only if, they have the same order.
The equivalence class of an element $g$ in $G$
is called its \defn{order class} and is denoted by
$\ordclass{g}{G}$.
Thus,
\begin{displaymath}
\ordclass{g}{G} = \{ x\in G : \order{x} = \order{g} \}.
\end{displaymath}
The cardinality of $\ordclass{g}{G}$ is denoted by $\ordcount{g}{G}$.
For a positive integer $k$, we define $\ordcount{k}{G}$
to be the number of elements of order $k$ in $G$.
It is equal to zero if $G$ has no elements of order $k$.
In any group $G$, the order class $\ordclass{1}{G}$ of the identity
element is always a singleton, so $\ordcount{1}{G} = 1$ however the
subscript is interpreted.
We say that $G$ has \defn{perfect order classes} if $\ordcount{g}{G}$
is a divisor of the order of $G$, for each element $g$ in $G$.
Groups with this property seem first to have been investigated
in~\cite{FinchJones2002}.
See also~\cites{FinchJones2003,JonesToppin2011}.
The symmetric group $\sym{3}$ of degree $3$ has perfect order classes
because it has $2$ elements of order $3$, and $2$ is a divisor of the
order $6$ of $\sym{3}$, and also has $3$ elements of order $2$,
and $3$ is again a divisor of $6$.
The alternating group $\alt{4}$ of degree $4$ however, does not have
perfect order classes, as it has a total of $8$ elements of order $3$, and $8$
is not a divisor of the order $12$ of $\alt{4}$.
The object of this note is to describe Hamiltonian groups with
perfect order classes.
We prove the following theorem, in which $\cyclic{n}$ denotes
the cyclic group of order $n$ and $Q$ denotes the quaternion
group of order $8$.
\begin{maintheorem}\label{thm:main}
A finite Hamiltonian group has perfect order classes if,
and only if, it has the form
$Q\times\cyclic{3^k}$ or the form $Q\times\cyclic{2}\times\cyclic{3^k}$,
for some positive integer $k$.
\end{maintheorem}
It follows from this that the smallest Hamiltonian group with
perfect order classes is the group $Q\times\cyclic{3}$ of order $24$.
In particular, despite being involved in every Hamiltonian group,
the quaternion group $Q$ of order $8$ does not itself have perfect
order classes.
(It has a total of $6$ elements of order $4$, and $6$ does not divide $8$.)
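These examples are easily confirmed by machine. The following sketch is an editorial addition: it computes the element orders of $\sym{3}$ and $\alt{4}$ as least common multiples of cycle lengths and tests the perfect-order-classes property.

```python
from itertools import permutations
from math import lcm
from collections import Counter

def perm_order(p):
    # order of a permutation (one-line notation) = lcm of its cycle lengths
    seen, lengths = set(), []
    for s in range(len(p)):
        if s in seen:
            continue
        j, length = s, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        lengths.append(length)
    return lcm(*lengths)

def is_even(p):
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return inversions % 2 == 0

def has_perfect_order_classes(orders):
    counts = Counter(orders)
    return all(len(orders) % c == 0 for c in counts.values())

sym3 = [perm_order(p) for p in permutations(range(3))]
alt4 = [perm_order(p) for p in permutations(range(4)) if is_even(p)]
```

As in the text, `sym3` passes the test while `alt4` fails it on account of its $8$ elements of order $3$.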
\section{Preliminaries}
All groups are supposed to be finite, and will be written multiplicatively.
We use $1$ to denote the trivial group of order $1$,
as well as to denote the identity element of any group.
We denote the quaternion group of order $8$ by $Q$ and,
for a positive integer $n$, the cyclic group of order $n$
is denoted by $\cyclic{n}$.
If $G$ is any group, and $n$ a non-negative integer,
then $G^n$ denotes the direct product of $n$ copies of $G$
(with the understanding that $G^0$ is the trivial group).
For a positive integer $n$, Euler's function $\eulerphi(n)$ counts the
positive integers not exceeding $n$ that are relatively prime to $n$.
Recall that it is a multiplicative function:
$\eulerphi(ab) = \eulerphi(a)\eulerphi(b)$,
for relatively prime positive integers $a$ and $b$.
For a prime number $p$ and a positive integer $n$, we have
\begin{displaymath}
\eulerphi(p^n) = p^{n-1}(p-1),
\end{displaymath}
a formula that we use frequently.
Especially, we have $\eulerphi(p) = p - 1$.
The next result is a key result in the study of groups with perfect order classes.
\begin{proposition}\cite{Das2009b}*{Proposition 2.2}
If $g$ is an element of a finite group $G$,
then $\ordcount{g}{G}$ is divisible by $\eulerphi(\order{g})$.
\end{proposition}
\begin{corollary}\cite{FinchJones2002}*{Proposition 1, Corollary 1}
If $G$ is a finite group with perfect order classes then,
for each prime divisor $p$ of the order of $G$, the order of $G$ is divisible
by $p - 1$.
In particular, every non-trivial finite group with perfect order classes has even order.
\end{corollary}
The structure of Hamiltonian groups is known.
\begin{theorem}\cite{Robinson}*{5.3.7}\label{thm:hamstruct}
A finite Hamiltonian group $G$ is a direct product of $Q$,
an elementary abelian $2$-group, and an abelian
group of odd order.
\end{theorem}
Either or both of the elementary abelian direct factor and the
direct factor of odd order may be trivial, of course, as $Q$
is itself Hamiltonian.
We make some simple remarks about order classes and their lengths.
\begin{lemma}\label{lem:hallorderclosed}
A normal Hall subgroup of a finite group contains the complete
order class of each of its members.
\end{lemma}
\begin{proof}
Let $H$ be a normal Hall subgroup of a finite group $G$,
and let $h$ be a member of $H$.
Since the order $\order{H}$ and the index $[G:H]$ of $H$
are coprime, there are integers $\alpha$ and $\beta$ for
which $\alpha\order{H} + \beta [G:H] = 1$.
If $g$ is any member of $\ordclass{h}{G}$,
then $\order{g} = \order{h}$ is a divisor of $\order{H}$, so we have
$g = g^{\alpha\order{H}}g^{\beta [G:H]} = g^{\beta [G:H]}$.
Then, in the quotient group $G/H$ of order $[G:H]$ we
compute $g^{\beta [G:H]}H = (gH)^{\beta [G:H]} = H$,
whence $g$ belongs to $H$.
Since $g$ was an arbitrary member of $\ordclass{h}{G}$,
we conclude that $\ordclass{h}{G}\subset H$, as claimed.
\end{proof}
The following lemma is proved in essentially the same way.
\begin{lemma}\label{lem:hallsuck}
If $H$ is a normal Hall subgroup of a finite group $G$ then,
for each divisor $d$ of the order of $H$,
we have $\ordcount{d}{G} = \ordcount{d}{H}$.
\end{lemma}
\begin{corollary}\label{cor:coprimedp}
Let $G = A\times B$, where $A$ and $B$ have relatively prime orders.
If $a$ divides $\order{A}$ and $b$ divides $\order{B}$, then
$\ordcount{ab}{G} = \ordcount{a}{A}\ordcount{b}{B}$.
\end{corollary}
The following well-known result on consecutive prime powers
is a consequence of the Bang-Zsigmondy theorem,
and we shall use it several times in what follows.
\begin{lemma}\label{lem:conspp}
Let $p$ and $q$ be prime numbers and let $m$ and $n$ be
positive integers such that $p^m - 1 = q^n$.
Then we have one of the following:
\begin{enumerate}
\item{$p = n = 3$ and $q = m = 2$;}
\item{$(n,p) = (1,2)$, and $q = 2^m -1$ is a Mersenne prime (whence $m$ is prime); or,}
\item{$(m,q) = (1,2)$, and $p = 2^n + 1$ is a Fermat prime (whence $n$ is a power of $2$).}
\end{enumerate}
\end{lemma}
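A brute-force search over small parameters (an editorial sanity check; the search bounds are arbitrary choices) is consistent with this classification.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Collect all solutions of p^m - 1 = q^n with p, q prime, within the bounds.
solutions = []
for p in filter(is_prime, range(2, 50)):
    for m in range(1, 12):
        value = p ** m - 1
        for q in filter(is_prime, range(2, value + 1)):
            if value % q == 0:
                # q is the smallest prime factor of value; value is a prime
                # power exactly when dividing out q leaves 1.
                n, t = 0, value
                while t % q == 0:
                    t //= q
                    n += 1
                if t == 1:
                    solutions.append((p, m, q, n))
                break

# Every solution found falls into one of the three cases of the lemma.
for (p, m, q, n) in solutions:
    assert ((p, m, q, n) == (3, 2, 2, 3)
            or (p == 2 and n == 1)          # q = 2^m - 1 a Mersenne prime
            or (q == 2 and m == 1))         # p = 2^n + 1 a Fermat prime
```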
\section{Hamiltonian Groups with Perfect Order Classes}
Crucial to all that follows are formulae for the number of
elements of each possible order in a finite Hamiltonian group $G$.
We express these in terms of a direct decomposition of the
form $G = Q\times E\times A$, as described in Theorem~\ref{thm:hamstruct}.
\begin{lemma}\label{lem:ordcount}
Let $G = Q\times E\times A$ be a finite Hamiltonian group,
where $E\ensuremath{\simeq}\cyclic{2}^e$, for $e$ a non-negative integer, and $A$ is an abelian group of odd order.
For each odd divisor $d$ of the order of $G$, we have
\begin{enumerate}
\item{$\ordcount{d}{G} = \ordcount{d}{A}$;}
\item{$\ordcount{2d}{G} = (2^{e+1} - 1)\cdot \ordcount{d}{A}$; and,}
\item{$\ordcount{4d}{G} = 3\cdot 2^{e+1}\cdot \ordcount{d}{A}$.}
\end{enumerate}
\end{lemma}
Note especially the cases in which $d = 1$:
\begin{displaymath}
\ordcount{2}{G} = (2^{e+1} - 1)
\end{displaymath}
and
\begin{displaymath}
\ordcount{4}{G} = 3\cdot 2^{e+1}.
\end{displaymath}
\begin{proof}
Every element $g$ of $G$ has an unique expression of the form
\begin{displaymath}
g = qua, \thickspace q\in Q, \thickspace u\in E, \thickspace a\in A.
\end{displaymath}
Since the direct factors $Q$, $E$ and $A$ are mutually centralising, therefore,
\begin{displaymath}
\order{g} = \lcm(\order{q},\order{u})\order{a}.
\end{displaymath}
Note that the order of $u$ is either $1$ or $2$,
and the non-trivial elements of $Q$ have order either $2$ or $4$.
Now $Q$ has a unique central involution $z$ so,
if $g$ has order $2$ then $g = z^{\varepsilon}u$,
where $\varepsilon\in\{0,1\}$, $u\in E$ and $u\neq 1$ if $\varepsilon = 0$.
Thus, every element of order $2$ is contained in the elementary
abelian $2$-subgroup $\langle z\rangle\times E$
so there are $2^{e+1} - 1$ elements of order $2$.
If $g$ has order $4$, then $g = qu$, where $q\in Q$ has order $4$,
and $u\in E$ is arbitrary.
Since $Q$ has $6$ elements of order $4$, it follows that
$\ordcount{4}{G} = 6\cdot 2^e = 3\cdot 2^{e+1}$.
Now let $d > 1$ be any odd divisor of $\order{G}$.
If $g$ has order $2d$, then $g = va$, where $v\in Q\times E$ has order $2$,
and $a\in A$ has order $d$.
Since there are $2^{e+1}-1$ elements of order $2$,
all of which belong to $Q\times E$, and there are, by definition,
$\ordcount{d}{A}$ elements of order $d$ in $A$,
it follows from Corollary~\ref{cor:coprimedp} that
$\ordcount{2d}{G}$ is as stated.
Finally, an element $g$ of order $4d$ has the form $g = va$,
with $a\in A$ of order $d$ and $v\in Q\times E$ of order $4$.
Therefore, the number of elements of order $4d$ is equal to
the product $\ordcount{4}{G}\ordcount{d}{A} = 3\cdot 2^{e+1}\ordcount{d}{A}$,
completing the proof.
\end{proof}
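The counting formulae of Lemma~\ref{lem:ordcount} can be spot-checked mechanically. The following editorial sketch takes $e = 1$ and $A = \cyclic{9}$ (an arbitrary choice) and recomputes the element orders of $G = Q\times\cyclic{2}\times\cyclic{9}$ as least common multiples of component orders.

```python
from math import gcd, lcm
from collections import Counter

Q_ORDERS = [1, 2, 4, 4, 4, 4, 4, 4]        # element orders of Q

def cyclic_orders(m):
    # the order of the residue r in C_m is m / gcd(m, r)
    return [m // gcd(m, r) for r in range(m)]

e = 1                                       # the factor E = C_2
A = Counter(cyclic_orders(9))               # order statistics of A = C_9
G = Counter(lcm(q, u, a)
            for q in Q_ORDERS
            for u in cyclic_orders(2)
            for a in cyclic_orders(9))

for d in (1, 3, 9):                         # the odd divisors of |G|
    assert G[d] == A[d]                             # item 1 of the lemma
    assert G[2 * d] == (2 ** (e + 1) - 1) * A[d]    # item 2
    assert G[4 * d] == 3 * 2 ** (e + 1) * A[d]      # item 3
```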
Since a finite Hamiltonian group $G$ has elements of order $4$,
if it is to have perfect order classes, then $\ordcount{4}{G}$,
which is divisible by $3$,
must be a divisor of the order of $G$.
\begin{corollary}
If $G$ is a Hamiltonian group with perfect order classes,
then the order of $G$ is divisible by $3$.
\end{corollary}
We now turn to the proof of Theorem~\ref{thm:main}.
Let us begin by showing that each of the forms of Hamiltonian groups
in Theorem~\ref{thm:main} has perfect order classes.
\begin{lemma}
If $k$ is a positive integer, then the groups
$G = Q\times\cyclic{2}\times\cyclic{3^k}$,
and
$G = Q\times\cyclic{3^k}$,
have perfect order classes.
\end{lemma}
\begin{proof}
Consider first the case $G = Q\times\cyclic{3^k}$,
and note that
\begin{displaymath}
\order{G} = \order{Q\times\cyclic{3^k}} = 8\cdot 3^k.
\end{displaymath}
Then $G$ has an unique element of order $2$,
and $6$ elements of order $4$.
For each $j$ with $1\leq j\leq k$,
we have
\begin{displaymath}
\ordcount{3^j}{G} = \ordcount{3^j}{\cyclic{3^k}} = \eulerphi(3^j) = 2\cdot 3^{j-1},
\end{displaymath}
which is a divisor of $\order{G}$.
Next,
\begin{displaymath}
\ordcount{2\cdot 3^j}{G} = \ordcount{3^j}{\cyclic{3^k}} = \eulerphi(3^j) = 2\cdot 3^{j-1},
\end{displaymath}
and
\begin{displaymath}
\ordcount{4\cdot 3^j}{G} = 3\cdot 2\cdot \ordcount{3^j}{\cyclic{3^k}} = 3\cdot 2\cdot\eulerphi(3^j) = 3\cdot 4\cdot 3^{j-1} = 4\cdot 3^j,
\end{displaymath}
and these divide the order of $G$.
Now consider the case $G = Q\times\cyclic{2}\times\cyclic{3^k}$.
In this case there are $3$ elements of order $2$ and $12$ of order $4$,
while the number of elements of order a power of $3$ is the same as
in the previous case.
The numbers of elements of orders $2\cdot 3^j$ and $4\cdot 3^j$ are
$3\cdot\eulerphi(3^j) = 2\cdot 3^j$ and $12\cdot\eulerphi(3^j) = 8\cdot 3^j$,
respectively, and these remain divisors of the order of $G$.
This completes the proof.
\end{proof}
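Both families, together with the failure of nearby Hamiltonian groups, can be confirmed by direct computation; in the following editorial sketch the particular test cases are arbitrary choices.

```python
from math import gcd, lcm
from collections import Counter
from itertools import product

Q_ORDERS = [1, 2, 4, 4, 4, 4, 4, 4]         # element orders of Q

def cyclic_orders(m):
    return [m // gcd(m, r) for r in range(m)]

def product_orders(*factors):
    # element orders of a direct product = lcm of the component orders
    return [lcm(*t) for t in product(*factors)]

def perfect_order_classes(orders):
    counts = Counter(orders)
    return all(len(orders) % c == 0 for c in counts.values())

# The two families of the Main Theorem pass, for small k:
for k in (1, 2, 3):
    assert perfect_order_classes(
        product_orders(Q_ORDERS, cyclic_orders(3 ** k)))
    assert perfect_order_classes(
        product_orders(Q_ORDERS, cyclic_orders(2), cyclic_orders(3 ** k)))

# Hamiltonian groups just outside the two families fail:
assert not perfect_order_classes(Q_ORDERS)                  # Q itself
assert not perfect_order_classes(                           # e = 2
    product_orders(Q_ORDERS, cyclic_orders(2), cyclic_orders(2),
                   cyclic_orders(3)))
assert not perfect_order_classes(
    product_orders(Q_ORDERS, cyclic_orders(5)))             # p = 5 > 3
```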
The remainder of this section is devoted to proving the converse.
To that end, we suppose that $G$ is a finite Hamiltonian group
with perfect order classes.
Then
\begin{displaymath}
G = Q\times E\times T\times P,
\end{displaymath}
where $E\ensuremath{\simeq}\cyclic{2}^e$ is an elementary abelian $2$-group of rank $e\geq 0$,
$T$ is a non-trivial abelian $3$-group,
and $P$ is an abelian group of odd order, coprime to $3$.
Note that we have
\begin{displaymath}
\order{G} = \order{Q}\order{E}\order{T}\order{P} = 8\cdot 2^e\cdot 3^k\cdot \order{P} = 2^{e+3}\cdot 3^k\order{P},
\end{displaymath}
for some positive integer $k$.
The argument to follow will show, in a sequence of lemmata,
first that the subgroup $P$ has prime-power order,
then that $P$ is actually trivial,
and finally that the $3$-subgroup $T$ is cyclic
and that the elementary abelian $2$-subgroup $E$ is either
trivial or of order $2$.
Let us first show that our subgroup $P$ has prime power order.
\begin{lemma}\label{lem:step1}
The order of $G$ is divisible by at most one prime number greater than $3$.
In particular, $P$ is a $p$-group, for some prime number $p > 3$.
\end{lemma}
\begin{proof}
Suppose, to the contrary, that the order of $G$ is divisible by two odd
primes $p$ and $q$, both greater than $3$.
Then
\begin{displaymath}
\ordcount{12pq}{G} = 3\cdot 2^{e+1} \cdot \ordcount{3}{G} \cdot \ordcount{p}{G}\cdot \ordcount{q}{G}.
\end{displaymath}
Now each of $\ordcount{3}{G}$, $\ordcount{p}{G}$ and $\ordcount{q}{G}$ is even,
so $\ordcount{12pq}{G}$ is divisible by $2^{e+4}$,
which implies that $2^{e+4}$ divides $\order{G}$,
a contradiction.
\end{proof}
From the lemma, it follows that our group $G$ has the form
\begin{displaymath}
G = Q\times\cyclic{2}^e\times T\times P,
\end{displaymath}
where $P$ is a $p$-group, for some prime $p > 3$.
The next step is to show that $P$ is trivial.
\begin{lemma}\label{lem:step2}
Let $G = Q\times\cyclic{2}^e\times T\times P$,
where $T$ is a non-trivial abelian $3$-group,
$e\geq 0$,
and $P$ is a $p$-group, for some prime $p > 3$.
If $G$ has perfect order classes,
then $P$ is trivial.
\end{lemma}
\begin{proof}
Suppose that $\order{P} = p^m$, where $m\geq 1$;
we shall derive a contradiction.
Then $G$ has an element of order $p$,
so $\ordcount{p}{G}$ is divisible by $p - 1$.
Note the formula
\begin{displaymath}
\order{G} = 2^{e+3}\cdot 3^k\cdot p^m,
\end{displaymath}
and, in particular, the highest powers of $2$ and $3$ that divide
the order of $G$.
Since $\ordcount{3}{G} = \ordcount{3}{T}$, we can write
\begin{displaymath}
\ordcount{3}{G} = 3^{\lambda} - 1,
\end{displaymath}
where $\lambda\geq 1$ is the rank of $T$.
If $\lambda$ is even, then $3^{\lambda}\congruent 1\pmod{8}$,
so $\ordcount{3}{G}$ is divisible by $8$.
Then $\ordcount{12}{G} = 3\cdot 2^{e+1}\cdot\ordcount{3}{G}$
is divisible by $2^{e+4}$.
Since $\ordcount{12}{G}$ divides $\order{G} = 2^{e+3}\cdot 3^k\cdot p^m$,
this is a contradiction.
Therefore, $\lambda$ is odd.
If $\lambda = 1$, then $T\ensuremath{\simeq}\cyclic{3^k}$ is cyclic,
and so has an element of order $3^k$,
and the number of elements of order $3^k$ is equal to
$\eulerphi(3^k) = 2\cdot 3^{k-1}$.
Then the number of elements of $G$ whose order is equal to $2\cdot 3^{k}$ is given by
\begin{displaymath}
\ordcount{2\cdot 3^k}{G} = 3\cdot(2^{e+1} - 1)\eulerphi(3^k) = 2\cdot 3^k\cdot(2^{e+1} - 1).
\end{displaymath}
Now $\ordcount{2}{G} = 2^{e+1} - 1$ is an odd divisor of $\order{G}$,
so $2^{e+1} - 1$ divides $3^{k}p^{m}$,
and hence we may write, for suitable integers $\alpha$ and $\beta$,
\begin{displaymath}
2^{e+1} - 1 = 3^{\alpha}p^{\beta}.
\end{displaymath}
Assume first that $e > 0$, so that $2^{e+1} - 1\neq 1$.
Then at least one of $3$ and $p$ is a divisor of $2^{e+1} - 1$.
If $3$ divides $2^{e+1} - 1$,
then $\ordcount{2\cdot 3^k}{G}$ is divisible by $3^{k+1}$,
which contradicts the assumption that $\ordcount{2\cdot 3^k}{G}$
is a divisor of $\order{G}$.
Therefore, $2^{e+1} - 1 = p^{\beta}$.
This implies that $\beta = 1$, so $p = 2^{e+1} - 1$.
Then
\begin{displaymath}
p - 1 = 2^{e+1} - 2 = 2(2^e - 1),
\end{displaymath}
and $p-1$ divides $\order{G}$,
so $2^e - 1$ divides $\order{G}$ also.
Because $2^e - 1$ is an odd divisor of $\order{G}$,
it follows either that $e = 1$ or that $2^e - 1$ is a power of $3$.
Since $e = 1$ yields $p = 2^2 - 1 = 3$, contrary to the assumption
that $p > 3$, it must be that $2^e - 1$ is a power of $3$.
This then implies that $e = 2$, whence $p = 2^3 - 1 = 7$.
Then $p - 1 = 6 = 2\cdot 3$,
so $\ordcount{2\cdot 3^k}{G}$ is divisible by $3^{k+1}$,
another contradiction.
Now suppose that $e = 0$, so $\order{G} = 8\cdot 3^k\cdot p^m$.
Then
\begin{displaymath}
\ordcount{2\cdot p\cdot 3^k}{G} = \ordcount{3^k}{G}\ordcount{p}{G} = \eulerphi(3^k)\ordcount{p}{G} = 2\cdot 3^{k-1}\ordcount{p}{G}.
\end{displaymath}
Since $p > 3$, we have $p\congruent\pm 1\pmod{6}$.
Suppose first that $p = 6n-1$, for some positive integer $n$.
Then
\begin{displaymath}
p - 1 = 6n - 2 = 2(3n-1)
\end{displaymath}
is a divisor of $\order{G}$, so $3n - 1$ divides $\order{G}$.
Because $3n - 1$ is coprime to both $3$ and $p$, therefore, $3n - 1 = 2^{\alpha}$,
for some non-negative integer $\alpha$.
Since $\alpha = 0$ leads to the impossibility $3n = 2$,
it follows that $\alpha$ is positive.
If $p > 5$, then $\alpha > 2$ and $\ordcount{2\cdot p\cdot 3^k}{G}$
is divisible by
\begin{displaymath}
2\cdot 3^{k-1}(p-1) = 4\cdot 3^{k-1}(3n-1) = 2^{\alpha+2}3^{k-1},
\end{displaymath}
and $2^{\alpha + 2}$ is larger than the largest power $8$ of $2$ that divides $\order{G}$,
a contradiction.
Therefore, $p = 5$.
But then $\order{G}$ is divisible by
$\ordcount{60}{G} = 3\cdot 2\cdot\ordcount{3}{G}\ordcount{5}{G}$,
which is divisible by $3\cdot 2\cdot 2\cdot 4 = 3\cdot 16$,
again, a contradiction.
Therefore, we must have that $p = 6n + 1$, for some positive integer $n$,
and so $p - 1 = 6n$ divides $\ordcount{p}{G}$.
Then
\begin{displaymath}
\ordcount{4\cdot p\cdot 3^k}{G} = 3\cdot 2\cdot\ordcount{3^k}{G}\ordcount{p}{G} = 3\cdot 2\eulerphi(3^k)\ordcount{p}{G} = 2^{2}\cdot 3^k\ordcount{p}{G},
\end{displaymath}
is divisible by $3^{k+1}$, leading to the impossibility that
$3^{k+1}$ divides $\order{G}$.
Therefore, we cannot have $\lambda = 1$, and $T$ is not cyclic.
Since $\lambda > 1$ is odd, we must have $\lambda\geq 3$.
Because $\ordcount{3}{G} = 3^{\lambda} - 1$ is coprime to $3$
and is an even divisor of $\order{G}$,
hence, $3^{\lambda} - 1$ divides $2^{e+3}p^m$.
And, since $\lambda\geq 3$, it follows that $3^{\lambda} - 1$
is not a power of $2$ so, in fact, $2p$ divides $3^{\lambda} - 1$.
Since $\lambda$ is odd, the sum $1 + 3 + \cdots + 3^{\lambda - 1}$ of
$\lambda$ odd terms is odd, so $3^{\lambda} - 1$ is twice an odd number and we have,
\begin{displaymath}
\ordcount{3}{G} = 3^{\lambda} - 1 = 2p^{\alpha},
\end{displaymath}
for some integer $\alpha\geq 1$.
In particular, we have
\begin{displaymath}
3^{\lambda}\congruent 1\pmod{p}.
\end{displaymath}
From this, and from Fermat's little theorem,
we obtain that the order $s$ of $3$ modulo $p$ divides $p-1= \eulerphi(p)$ which,
in turn, is a divisor of $\order{G}$;
and also, $s$ divides $\lambda$.
Since $\gcd(p,p-1) = 1$ it follows that $p - 1$,
and hence also its divisor $s$,
is of the form $2^{\alpha}3^{\beta}$,
for suitable integers $\alpha,\beta\geq 0$.
But $s$ divides the odd integer $\lambda$,
so it must be that $s$ is a (positive) power of $3$.
(Note that $s\neq 1$ since $p$ is odd.)
In particular, $3$ divides $\lambda$, so we can write
$\lambda = 3\sigma$, where $\sigma\geq 1$ is an odd integer.
Now,
\begin{displaymath}
\ordcount{3}{G} = 3^{\lambda} - 1 = 3^{3\sigma} - 1 = 27^{\sigma} - 1 = 26(1 + 27 + \cdots + 27^{\sigma - 1}),
\end{displaymath}
so $13$ divides $\ordcount{3}{G}$, which divides $\order{G}$,
and it follows that $p = 13$.
But then $4$ divides $p - 1 = 12$,
so $\ordcount{12}{G}$ is divisible by $2^{e+4}$,
a final contradiction.
This completes the proof.
\end{proof}
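As an illustrative aside (not part of the proof), the arithmetic facts invoked above are easy to verify computationally; the helper name below is ours, not from the paper:

```python
# Illustrative checks of the number-theoretic facts used in the proof above.

def mult_order(a, n):
    """Multiplicative order of a modulo n (assumes gcd(a, n) == 1)."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

# The order s of 3 modulo 13 is 3, a (positive) power of 3.
assert mult_order(3, 13) == 3

# For lambda = 3*sigma, 3^lambda - 1 = 27^sigma - 1 is divisible by 26 = 2*13.
for sigma in (1, 3, 5, 7):  # odd sigma
    assert (27**sigma - 1) % 26 == 0

# Hence p = 13; but then 4 divides p - 1 = 12, the final contradiction.
assert (13 - 1) % 4 == 0
```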
We now know that $G$ is a $\{2,3\}$-group.
The final step is to prove that the Sylow $3$-subgroup $T$
of $G$ is cyclic, and that the rank of $E$ is either $0$ or $1$.
\begin{lemma}\label{lem:step3}
Let $G = Q\times\cyclic{2}^e\times{T}$,
where $T$ is an abelian group of order $3^k$, for a positive integer $k$.
If $G$ has perfect order classes, then $e\in\{0,1\}$
and $T$ is cyclic.
\end{lemma}
\begin{proof}
Since $\ordcount{2}{G} = 2^{e+1} - 1$ divides the order $\order{G} = 2^{e+3}3^k$
of $G$, and since $\ordcount{2}{G}$ is odd, it follows that
\begin{displaymath}
2^{e+1} - 1 = 3^n,
\end{displaymath}
for some non-negative integer $n$.
The only solutions are $(e,n) = (0,0)$ and $(e,n) = (1,1)$.
Thus, $e\in\{0,1\}$, as claimed.
Next, the number of elements of order $3$ in $G$ has the form
\begin{displaymath}
\ordcount{3}{G} = 3^{\lambda} - 1,
\end{displaymath}
where $\lambda\geq 1$ is the rank of $T$.
As in the proof of Lemma~\ref{lem:step2}, the rank $\lambda$
must be odd.
Since $\ordcount{3}{G} = 3^{\lambda}-1$ is a divisor of $\order{G}$,
and is even, we must have
\begin{displaymath}
3^{\lambda} - 1 = 2^{\alpha}3^{\beta},
\end{displaymath}
for some integers $\alpha$ and $\beta$ with $\alpha\geq 1$,
and $\beta\geq 0$.
However, $3^{\lambda} - 1$ is coprime to $3$,
which implies that $\beta = 0$ and $3^{\lambda} - 1 = 2^{\alpha}$.
The only solutions are $(\lambda,\alpha) = (1,1)$
and $(\lambda,\alpha) = (2,3)$.
Since $\lambda$ is odd, we must have $\lambda = 1$.
This implies that $T$ is cyclic, completing the proof.
\end{proof}
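The two exponential Diophantine equations used in this lemma can be checked by brute force within a finite range (the bound below is an arbitrary illustration; the full claims rest on the elementary arguments given in the text):

```python
# Brute-force verification, within a finite search range, of the two
# exponential Diophantine equations used above.  Illustrative only: the
# bound of 200 is arbitrary.

BOUND = 200
powers_of_2 = {2**a for a in range(BOUND)}
powers_of_3 = {3**n for n in range(BOUND)}

# 2^(e+1) - 1 = 3^n: only e = 0 (n = 0) and e = 1 (n = 1) in this range.
sols1 = {e for e in range(BOUND) if 2**(e + 1) - 1 in powers_of_3}
assert sols1 == {0, 1}

# 3^lam - 1 = 2^alpha: only lam = 1 (alpha = 1) and lam = 2 (alpha = 3).
sols2 = {lam for lam in range(1, BOUND) if 3**lam - 1 in powers_of_2}
assert sols2 == {1, 2}
```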
\end{document}
\section[]{Introduction}
How and why some galaxies cease forming stars and remain quiescent are open questions that bear significantly on our understanding of galaxy evolution.
Contrary to the expectation that a lack of star-formation is the consequence of a paucity of cool gas, observational studies have established that a high fraction of passive galaxies are not gas-poor (see Chen 2017a and references therein). Systematic 21cm surveys have discovered that more than a third of present-day quiescent galaxies contain abundant neutral hydrogen (\ion{H}{1}) gas in their interstellar medium (ISM; e.g., Oosterloo et al.\ 2010; Serra et al.\ 2012). At an earlier epoch, QSO absorption-line surveys of \ion{Mg}{2} absorption features near luminous red galaxies (LRGs) at $z\sim0.5$ have also demonstrated that a significant fraction of these distant massive ellipticals (with total stellar masses of $M_\mathrm{star} \gtrsim 10^{11}\,\mathrm{M}_\odot$) are surrounded by chemically enriched cool gaseous halos on $\sim100$ kpc scales (e.g., Gauthier et al.\ 2009, 2010; Bowen \& Chelouche 2011; Huang et al.\ 2016; Chen et al.\ 2018). The total mass in this cool ($T\sim10^4$ K) circumgalactic medium (CGM) is estimated to be $M_\mathrm{cool} \approx (1-2)\times10^{10}\,\mathrm{M}_\odot$ within projected distance $d<160$ kpc (or as much as $\approx 4\times10^{10}\,\mathrm{M}_\odot$ at $d < 500$ kpc; Zahedy et al.\ 2019), similar to what has been reported for star-forming galaxies (e.g., Chen et al.\ 2010; Stocke et al.\ 2013; Werk et al.\ 2014).
The existence of large reservoirs of cool gas around massive ellipticals challenges simple theoretical expectations that these galaxies are surrounded by predominantly hot ($T\gtrsim10^6$ K) gas on both small ($\lesssim 10$ kpc; ISM) and large ($\sim 100$ kpc; CGM) scales. Furthermore, it indicates that some physical mechanisms are preventing the gas from triggering the resumption of star formation in the central galaxy.
A common feature of the gaseous environment at $d\lesssim 10$ kpc around massive quiescent galaxies is the high $\mathrm{Fe/Mg}$ abundance ratio, $[\mathrm{Fe}/\mathrm{Mg}]\apg0$, that has been observed in every instance cool gas is present (Zahedy et al.\ 2016, hereafter Z16; Zahedy et al.\ 2017a). This Fe enhancement not only indicates that the ISM of massive ellipticals has been significantly enriched by Type Ia supernovae (SNe Ia), but also points to SNe Ia as a potentially important heating source in massive halos (e.g., Conroy et al.\ 2015; Li et al.\ 2020a,b).
One of the galaxies studied in Z16 is a massive ($M_{\rm star}\approx 10^{11}\, \mathrm{M_\odot}$) elliptical lens for QSO HE0047$-$1756 at $z_\mathrm{gal}=0.408\pm0.001$. It exhibits extremely strong and kinematically complex low-ionization metal absorptions with a line-of-sight velocity spread exceeding 600 \mbox{km\ s${^{-1}}$}\ (Figure 1, top) and a velocity shear of $\approx350$ \mbox{km\ s${^{-1}}$}\ between two locations $\approx8$ kpc apart in projection. Long-slit far-ultraviolet (FUV) spectroscopic observations of both lensed QSO images revealed the presence of abundant \ion{H}{1} within the galaxy, with measured \ion{H}{1} column densities of log\,$N$(\ion{H}{1})$/\mbox{${\rm cm^{-2}}$}= 19.6-19.7$ at both locations (Zahedy et al.\ 2017b, hereafter Z17), constraining the gas metallicity to be $\mathrm{[Fe/H]}\gtrsim 0$ for both sightlines after accounting for likely dust depletion. While Z17 also noted the presence of possible absorption features from other metal ions probing a wide range of ionization states, including the highly ionized \ion{O}{6} $\lambda\lambda1031,1037$ doublet, their low-resolution spectra precluded a detailed investigation of these absorption profiles to confirm the presence of high ions. Because $\mathrm{O^{5+}}$ ions are most abundant at temperatures near the peak of the cooling curve for metal-enriched gas ($T\approx 10^{5.5}$\,K; e.g., Gnat \& Sternberg 2007), such warm gas is expected to cool rapidly if left to itself. Therefore, the possible detection of rapidly cooling gas in the ISM implies the presence of an effective heating mechanism in the galaxy. Characterizing the properties of such a transient gas phase and its relationship to cooler atomic/molecular gases offers a unique opportunity to understand the dynamic gas content of ellipticals, in order to gain insight into late-time feedback in massive quiescent galaxies.
In this {\it Letter}, we report the robust detection of highly ionized gas in the ISM of the massive elliptical lens of HE0047$-$1756, traced by the \ion{O}{6} and \ion{N}{5} absorption features. Furthermore, we report the serendipitous discovery of molecular hydrogen ($\mathrm{H_2}$) in the ISM, the first direct detection of $\mathrm{H_2}$ within a passive galaxy beyond the local Universe. We compare the spatial distributions and mass budgets of the molecular ($T\sim100$\,K), cool ($T\sim10^4$\,K), and warm ($T\sim10^5$\,K) ISM phases and discuss their implications for feedback in massive ellipticals. We adopt a $\Lambda$ cosmology with $\Omega_{\rm M}=0.3$, $\Omega_\Lambda = 0.7$, and $H_0 =70 \ {\rm km} \ {\rm s}^{-1}\ {\rm Mpc}^{-1}$.
\section[]{Observations}
New FUV spectra of image {\it A} of the doubly lensed QSO HE\,0047$-$1756 ($z_\mathrm{QSO} = 1.676$; Figure 1 of Z16) were obtained with the Cosmic Origins Spectrograph (COS) onboard the {\it Hubble Space Telescope (HST)} during our {\it HST} Cycle 25 observing program (Program ID: 15250; PI: Zahedy) in December 2018. {\it HST}/COS with the G130M and G160M gratings provides a wavelength coverage from $\lambda\approx1130$ \AA\ to $\lambda \approx 1790$ \AA\ at a medium resolution of ${\rm FWHM}\approx18-20\, \mbox{km\ s${^{-1}}$}$, a fifteenfold increase in resolution from the Z17 spectra. The total integration time of the observations was 9,418 s and 17,722 s for the G130M and G160M gratings, respectively, comprising 22 individual exposures spread over three separate {\it HST} visits. The observations used two (four) central wavelength settings for the G130M (G160M) grating and two or four FP-POS at each central wavelength, to ensure a continuous wavelength coverage and reduce fixed pattern noise over the full spectral range of the data.
The pipeline-reduced COS data were downloaded from the {\it HST} archive and processed further using our custom software. The additional data reduction involved recalibrating the COS wavelength solution using a method described in Chen et al.\ (2018) and Zahedy et al.\ (2019).
These steps resulted in a combined spectrum which was then continuum normalized by fitting a low-order polynomial function to absorption-free spectral regions. The final COS spectrum of HE\,0047$-$1756$A$ has a median signal-to-noise ratio of S/N $\approx 10-20$ per resolution element over the full wavelength range. The wavelength solution is accurate and precise to better than 3 \mbox{km\ s${^{-1}}$}, as evidenced by a comparison between low-ionization absorption features seen in COS and ground-based optical echelle spectra (presented in Z16) and the excellent agreement in line centroids among various $\mathrm{H_2}$ absorption lines spanning $\approx300$ \AA\ in observed wavelength (\S3.1).
We supplement our COS spectrum of sightline $A$ with low-resolution (${\rm FWHM}\approx270\, \mbox{km\ s${^{-1}}$} $) FUV spectra of both images of the lensed QSO taken with the Space Telescope Imaging Spectrograph (STIS) and the G140L grating onboard {\it HST} from Z17. The STIS spectrum of HE\,0047$-$1756$A$ ($B$) has a median S/N $\approx 20-30\,(12-18)$ per resolution element over its full wavelength range of $1150-1720$ \AA.
\section[]{Results}
\begin{figure}
\begin{center}
\hspace{-0.16in}
\includegraphics[width=3.2in]{HE0047A_DLA_withFeII_newold.pdf}
\end{center}
\vspace{-0.15in}
\caption
{{\it Top}: Kinematically complex gas at $d=4.6$ kpc from the massive elliptical lens galaxy ($z_\mathrm{gal}=0.408$, vertical dashed line), seen in \ion{Fe}{2} $\lambda2600$ absorption from ground-based optical echelle spectrum of HE\,0047$-$1756$A$ (adapted from Z16). The absorption profile comprises 15 individual components (blue tick marks) spanning over $600$ \mbox{km\ s${^{-1}}$}\ in line-of-sight velocity. Zero velocity corresponds to the redshift of the $\mathrm{H_2}$ absorption identified in Figure 2, $z_\mathrm{abs}=0.405985$.
{\it Bottom}: New {\it HST}/COS FUV spectrum of the corresponding Ly$\alpha$ absorption associated with the lens galaxy. The COS spectrum is rebinned by three pixels for display purposes. The 1-$\sigma$ error spectrum is included in cyan. Contaminating features are dotted out for clarity. The magenta tick mark above the profile indicates the best-fit centroid of the damped Ly$\alpha$ profile. The solid red and dashed magenta curves show the best-fit $N($\ion{H}{1}) and its uncertainty, log $N($\ion{H}{1})$/\mbox{${\rm cm^{-2}}$} =19.80\pm0.15$.
}
\end{figure}
A prominent feature associated with the massive elliptical lens galaxy is the Ly$\alpha$ absorption with strong damping wings (Figure 1, bottom), confirming the previously reported high $N($\ion{H}{1}) of the gas inferred using low-resolution STIS FUV spectra (Z17). To refine the $N($\ion{H}{1}) measurement, we perform a Voigt profile analysis on the observed damped Ly$\alpha$ profile using custom software (see Zahedy et al.\ 2019) that takes into account the relevant COS line-spread function (LSF; Lifetime Position 4). Our analysis yields a total \ion{H}{1} column density of log $N($\ion{H}{1})$/\mbox{${\rm cm^{-2}}$} =19.80\pm0.15$, which is consistent within uncertainties with the Z17 measurement. We adopt this $N($\ion{H}{1}) throughout the subsequent analysis.
\subsection{Discovery of $\mathrm{H}_2$ in the ISM of the Lens Galaxy}
A visual inspection of our {\it HST}/COS spectrum of HE\,0047$-$1756$A$ reveals the presence of numerous absorption features consistent with the $\mathrm{H_2}$ Lyman and Werner bands at redshift $z\approx0.406$, or approximately $-430$ \mbox{km\ s${^{-1}}$}\ from the systemic redshift of the lens galaxy (see Figure 2).\footnote{While the velocity offset of the $\mathrm{H_2}$ absorption features may seem large for ISM gas, it is partly explained by the uncertainty on the lens redshift ($\approx 200$ \,\mbox{km\ s${^{-1}}$} ; Z16). Furthermore, the projected escape velocity at $r=5$ kpc from the lens galaxy is $\approx400-500$\,\mbox{km\ s${^{-1}}$}\ given the estimated mass of its host dark-matter halo (Z16), so the observed $\mathrm{H_2}$ kinematics is consistent with ISM gas that is bound to the galaxy. Empirically, large kinematic widths of $\approx500$\,\mbox{km\ s${^{-1}}$}\ have been observed in the atomic/molecular ISM of some nearby early-type galaxies (e.g., Oosterloo et al.\ 2007; Davis et al.\ 2013), reflecting the potential wells of these massive systems.} The $\mathrm{H_2}$ absorption features coincide in velocity with the strongest low-ionization absorption component identified in Z16 (component 1 in their Table 6). We are able to identify more than 140 absorption transitions originating from the ground state of $\mathrm{H_2}$ at different rotational levels from $J=0$ to $J=5$. Each of these transitions has a vibrational quantum number of $\nu=0$ for the lower state and $\nu\leq17$ ($\nu\leq4$) for the upper state in the Lyman (Werner) band.
\begin{figure*}
\begin{center}
\vspace{0.in}
\includegraphics[width=6.3in]{HE0047_H2_newest.pdf}
\end{center}
\vspace{-0.2in}
\caption
{Continuum-normalized absorption profiles of select $\mathrm{H}_2$ Lyman- and Werner-band transitions that are used in our absorption analysis, grouped by $J$ level, observed along sightline HE 0047$-$1756$A$ at $d=4.6$ kpc from the massive elliptical lens. The COS spectrum is rebinned by three pixels for display purposes. The 1-$\sigma$ error spectrum is included in cyan. Zero velocity marks the best-fit redshift of the $\mathrm{H}_2$ absorption identified with a Voigt profile analysis, $z_\mathrm{abs} = 0.405985$. Regions excluded from the analysis due to blending and/or contaminating features have been grayed out for clarity. The best-fit $\mathrm{H}_2$ absorption profiles are plotted on top of the data in red curves. The significant detection of $\mathrm{H}_2$ at $J>2$ indicates that a non-thermal excitation mechanism is effective in populating these high rotational levels (see \S 4.1).}
\end{figure*}
To characterize the molecular gas properties, we perform a Voigt profile analysis using custom software that models the observed $\mathrm{H_2}$ transitions in each $J$ level simultaneously. We adopt the $\mathrm{H_2}$ line list from Ubachs et al.\ (2019), which was made available to us by Patrick Petitjean (private communication). Although we detect more than 140 $\mathrm{H_2}$ transitions, a significant fraction of these lines are blended with each other or with other absorption lines. To ensure robust fitting results, we perform our absorption analysis on a subset of available lines (between five and 14 transitions) for each $J$ value, which are selected to contain minimal blending and have an unambiguous local continuum level. While only a fraction of the observed $\mathrm{H_2}$ transitions are used to find the best-fit model, we find that the resulting full $\mathrm{H_2}$ absorption model reproduces the absorption profiles of most of the excluded transitions reasonably well.
\begin{table}
\begin{center}
\caption{$\mathrm{H_2}$ properties at $d=4.6$ kpc from the lens galaxy}
\label{tab:Imaging}
\hspace{-0.28in}
\resizebox{2.6in}{!}{
\begin{tabular}{cclc}
\hline
$z_\mathrm{abs}$ & $J$ & log\,$N/$\mbox{${\rm cm^{-2}}$} & $b$ \\
& & &(\mbox{km\ s${^{-1}}$})\\
\hline
$0.405985$ & $0$ & $17.34^{+0.09}_{-0.26}$ & $2.7^{+0.8}_{-0.5}$ \\
& $1$ & $17.58^{+0.07}_{-0.21}$ & $3.9^{+0.5}_{-0.3}$ \\
& $2$ & $15.71^{+0.44}_{-0.18}$ & $6.3^{+0.8}_{-1.1}$ \\
& $3$ & $15.86^{+0.28}_{-0.23}$ & $6.2^{+1.0}_{-0.8}$ \\
& $4$ & $14.82^{+0.06}_{-0.04}$ & $8.7^{+1.6}_{-1.7}$ \\
& $5$ & $14.61^{+0.06}_{-0.05}$ & $10.5^{+5.0}_{-2.5}$ \\
& $6$ & $<14.4^a$ & $10$ \\
& $\mathbf{Total}$ & $17.8^{+0.1}_{-0.3}$ & \\ \hline
\multicolumn{4}{l}{$^a$ 95\% upper limit (see \S3.1).}\\
\end{tabular}}
\end{center}
\end{table}
For each rotational $J$ level, we first generate a model spectrum for a single-component line profile, motivated by both the narrow linewidths and the lack of kinematic substructures in the observed $\mathrm{H_2}$ absorption profiles (see Figure 2). The Voigt profile is uniquely defined by three free parameters: the line centroid redshift $z_\mathrm{abs}$, the absorption column density $\log\,N$, and the Doppler parameter $b$. To reduce the number of free parameters, all transitions from a given $J$ level are tied to have the same $\log\,N$ and $b$. We further require different $J$ levels to share the same line centroid redshift. Once a theoretical $\mathrm{H_2}$ absorption spectrum has been generated, it is convolved with the relevant COS LSF and subsequently binned to match the pixel resolution of the data. Finally, this model spectrum is compared to the data and the best-fit model parameters for each $J$ level are found by minimizing the $\chi^2$ value over the selected $\mathrm{H_2}$ transitions. We estimate the model uncertainties by constructing a marginalized posterior probability distribution for each model parameter based on a Markov Chain Monte Carlo (MCMC) analysis done with the \textsc{Emcee} package (Foreman-Mackey et al.\ 2013). Each MCMC run consists of 500 steps performed by an ensemble of 250 walkers, which are seeded in a small region of the parameter space around the minimum $\chi^2$ solution to speed up convergence.
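As a toy illustration of the $\chi^2$-minimization step only (the actual analysis fits Voigt profiles convolved with the COS LSF and samples the posterior with emcee; the Gaussian optical-depth model and grid search below are our simplifications):

```python
import math

# Toy fit: a single absorption component with a Gaussian optical-depth
# profile tau(v) = tau0 * exp(-((v - v0)/b)^2), recovered from synthetic
# noiseless "data" by chi-square minimization on a parameter grid.

def model_flux(v, tau0, b, v0=0.0):
    return math.exp(-tau0 * math.exp(-((v - v0) / b) ** 2))

grid_v = [-20 + 0.5 * i for i in range(81)]           # km/s
data = [model_flux(v, 1.5, 4.0) for v in grid_v]      # truth: tau0=1.5, b=4

def chi2(tau0, b):
    return sum((model_flux(v, tau0, b) - d) ** 2 for v, d in zip(grid_v, data))

best = min(((t / 10, bb / 10) for t in range(5, 31) for bb in range(20, 81)),
           key=lambda p: chi2(*p))
# On this noiseless grid the truth (1.5, 4.0) is recovered exactly.
assert best == (1.5, 4.0)
```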
We present the best-fit model absorption profiles and compare them to the data in Figure 2. In addition, we summarize the results of the Voigt profile analysis in Table 1, where we report the model parameters and estimated 68\% confidence intervals for the $J=0$ to $J=5$ levels. For $J=6$, which does not exhibit any detectable absorption, we report in Table 1 the 95\% upper limit on the absorption column density for a $b=10$\,\mbox{km\ s${^{-1}}$}\ line profile (matching the linewidth of the $J=5$ level), estimated using the error array at the strongest available $J=6$ transition in the COS data. The best-fit model yields a total $\mathrm{H_2}$ column density of $\log\,N(\mathrm{H_2})/\mbox{${\rm cm^{-2}}$}=17.8^{+0.1}_{-0.3}$
and a best-fit redshift of $z_\mathrm{abs}=0.405985\pm0.000005$. The centroid of the $\mathrm{H_2}$ line profile is consistent within uncertainties ($< 1\,$\mbox{km\ s${^{-1}}$}) with the strongest low-ionization metal component identified in ground-based optical echelle spectra (Z16), which indicates their association.
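The quoted total follows directly from the per-level columns in Table 1; a minimal check (all numbers from Table 1):

```python
import math

# Best-fit log N(H2, J)/cm^-2 for J = 0..5 (Table 1).
logN_J = {0: 17.34, 1: 17.58, 2: 15.71, 3: 15.86, 4: 14.82, 5: 14.61}

N_total = sum(10**v for v in logN_J.values())
log_total = math.log10(N_total)
assert round(log_total, 1) == 17.8   # total column quoted in the text

# The J = 0 and J = 1 levels dominate the molecular column (~98%).
f_low = (10**logN_J[0] + 10**logN_J[1]) / N_total
assert f_low > 0.97
```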
As shown in Table 1, our analysis also identifies a trend of increasing Doppler parameter with increasing $J$ value, from $b\approx3$ \mbox{km\ s${^{-1}}$}\ at $J=0$ to $b\approx10$ \mbox{km\ s${^{-1}}$}\ at $J=5$. The trend of rising velocity dispersion with rotational level has been reported in a number of $\mathrm{H_2}$-bearing damped Ly$\alpha$ absorbers (DLAs) at low and high redshifts (e.g., Ledoux et al.\ 2003; Albornoz V\'asquez et al.\ 2014; Boettcher et al.\ 2020). The kinetic temperature needed to thermally broaden the $\mathrm{H_2}$ line profiles to a linewidth of $\approx 3\,(10)$ \mbox{km\ s${^{-1}}$}\ is $\approx 10^3\,(10^4)$ K, significantly higher than temperatures at which a significant amount of molecular gas is expected to be present. Therefore, our measurements indicate that non-thermal line broadening is dominant for both low and high $J$ states in the gas, with increasing turbulence toward higher rotational states. We discuss the possible origins of this trend in \S 4.1.
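The quoted kinetic temperatures follow from the purely thermal broadening relation $T = m b^2 / (2 k_B)$; a short sketch (SI constants; function name ours):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
m_H2 = 2 * 1.6735575e-27    # mass of an H2 molecule, kg

def kinetic_temperature(b_kms, mass=m_H2):
    """Temperature required to thermally broaden a line to Doppler b."""
    b = b_kms * 1e3         # km/s -> m/s
    return mass * b**2 / (2 * k_B)

T_low = kinetic_temperature(3.0)    # b ~ 3 km/s (J = 0): ~1e3 K
T_high = kinetic_temperature(10.0)  # b ~ 10 km/s (J = 5): ~1e4 K
```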
With both the neutral and molecular hydrogen contents of the absorber known, we can estimate the molecular gas fraction according to the following expression,
$f_\mathrm{H_2} = 2\,N(\mathrm{H_2})/[2\,N(\mathrm{H_2})+N(\mathrm{H\,I)}]$.
The extremely strong low-ionization absorber observed along HE\,0047$-$1756$A$ is resolved into 15 kinematic components (Figure 3; Z16). While the total $N$(\ion{H}{1}) can be measured robustly from the strong Ly$\alpha$ damping wings (Figure 1), it is not possible to constrain the \ion{H}{1} column densities of these individual components because all available \ion{H}{1} Lyman series lines are heavily saturated and the different components blended. Thus, we first estimate $f_\mathrm{H_2}$ by attributing all the observed $N$(\ion{H}{1}) to the $\mathrm{H}_2$-bearing component. Although this assumption is unrealistic because it would result in highly asymmetric Ly$\alpha$ damping wings owing to the $\mathrm{H_2}$-bearing component occurring at the blue extremum of the profile, it yields a {\it conservative} lower limit on $f_\mathrm{H_2}$ of log\,($f_\mathrm{H_2})_\mathrm{lower}=-1.7^{+0.2}_{-0.3}$. To estimate an upper bound on the molecular gas fraction, we note that the $\mathrm{H}_2$-bearing component contains $\approx40-45$\% of the total column densities of the low-ionization species probed by \ion{Mg}{1}, \ion{Mg}{2}, and \ion{Fe}{2} absorptions (see Table 6 of Z16). If we assume that all 15 components have similar metallicities and dust content, which is justified by the relatively uniform $\mathrm{Fe/Mg}$ elemental abundance ratio observed across all components (Z16), then the inferred $N$(\ion{H}{1}) of the $\mathrm{H}_2$-bearing component is log\,$N($\ion{H}{1})$/\mbox{${\rm cm^{-2}}$} \approx19.4$. Consequently, the implied molecular gas fraction is log\,($f_\mathrm{H_2})_\mathrm{upper}=-1.3^{+0.2}_{-0.3}$.
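The two limits follow from the $f_\mathrm{H_2}$ expression above; a minimal numerical check:

```python
import math

N_H2 = 10**17.8   # total H2 column from Table 1, cm^-2

def log_fH2(logN_HI):
    """log10 of f_H2 = 2 N(H2) / [2 N(H2) + N(HI)]."""
    N_HI = 10**logN_HI
    return math.log10(2 * N_H2 / (2 * N_H2 + N_HI))

# Lower limit: attribute the full log N(HI) = 19.8 to the H2 component.
f_lower = log_fH2(19.8)
# Upper limit: the component carries ~40-45% of the low ions,
# implying log N(HI) ~ 19.4 for the H2-bearing gas.
f_upper = log_fH2(19.4)
```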
The observed $f_\mathrm{H_2}$ at $d=4.6$ kpc from the lens galaxy is comparable to that of nearby ellipticals with CO detections (Welch et al.\ 2010; Young et al.\ 2014) but is
among the highest known for $z<1$ DLAs, where $\approx 90\%$ of absorbers with log\,$N($\ion{H}{1})$/\mbox{${\rm cm^{-2}}$} \gtrsim 19$ have log\,$f_\mathrm{H_2}\lesssim-2$ (e.g., Crighton et al.\ 2013; Muzahid et al.\ 2015a; 2016; but see Boettcher et al.\ 2020). Considering that $\mathrm{H_2}$ forms on the surface of dust grains, the high $f_\mathrm{H_2}$ can be explained by the high gas metallicity, $\mathrm{[Fe/H]}\gtrsim 0$ (Z17), which results in an elevated dust-to-gas ratio relative to the general DLA population.
\subsection{Highly Ionized Gas in the ISM of the Lens Galaxy}
\begin{figure}
\begin{center}
\hspace{-0.05in}
\includegraphics[width=3.1in]{HE0047A_highvslowions_new.pdf}
\end{center}
\vspace{-0.1in}
\caption
{Continuum normalized absorption profiles of different high- and low-ionization
metal transitions along HE\,0047$-$1756$A$ at $d=4.6$ kpc from the massive elliptical lens.
Zero velocity corresponds to the redshift of the $\mathrm{H_2}$ absorption detected in Figure 2, whereas the
systemic redshift of the lens galaxy, $z_\mathrm{gal}=0.408$, is marked by the vertical dashed line.
The 1-$\sigma$ error spectrum is included in cyan.
Contaminating features have been dotted out for clarity.
The magenta tick marks at the top of the first three panels indicate the
location of individual components for the high-ionization species identified
in the Voigt profile analysis (see \S3.2), with the best-fit Voigt
profile models included in red. For comparison, individual
components of the low-ionization species are marked with the blue tick marks in the bottom five panels (Z16). The high ions show a distinct kinematic structure from what is seen in the low ions, indicating that they arise in a different gas phase.}
\label{Figure 7.3}
\end{figure}
\begin{table}
\begin{center}
\caption{High-ionization absorption properties at $d=4.6$ kpc}
\label{tab:Imaging}
\hspace{-0.45in}
\resizebox{3.7in}{!}{
\begin{tabular}{clrrr}\hline
Component & Species &\multicolumn{1}{c}{d${v_c}^a$} &\multicolumn{1}{c}{$b$} & \multicolumn{1}{c}{log\,$N$\,/\mbox{${\rm cm^{-2}}$}} \\
& &\multicolumn{1}{c}{(km\,s$^{-1}$)} & \multicolumn{1}{c}{(km\,s$^{-1}$)} & \\ \hline
1 & \ion{O}{6} & $+32.0\pm3.4$ &$51.4\pm4.9$ & $14.31\pm0.03$ \\
& \ion{N}{5} & & & $13.75\pm0.11$ \\ \hline
2 & \ion{O}{6} & $+220.1\pm5.6$& $47.9\pm8.9$ & $14.02\pm0.06$ \\
& \ion{N}{5} & & & $13.58\pm0.16$ \\ \hline
3 & \ion{O}{6} & $+372.3\pm6.6$&$42.8\pm3.5$ & $14.71\pm0.06$ \\
& \ion{N}{5} & & & $13.76\pm0.15$ \\ \hline
4 & \ion{O}{6} & $+423.4\pm14.7$&$64.2\pm21.4$ & $14.60\pm0.13$ \\
& \ion{N}{5} & & & $14.30\pm0.06$ \\ \hline
5 & \ion{O}{6} & $+507.3\pm12.7$&$43.4\pm8.4$ & $14.13\pm0.18$ \\
& \ion{N}{5} & & & $13.59\pm0.18$ \\ \hline
6 & \ion{O}{6} & $+672.3\pm3.0$&$25.5\pm4.6$ & $14.10\pm0.06$ \\
& \ion{N}{5} & & & $13.33\pm0.22$ \\ \hline
\multicolumn{5}{l}{$^a$ Relative velocity shift from the $\mathrm{H_2}$ absorption redshift, $z_\mathrm{abs}=0.405985$} \\
\end{tabular}}
\end{center}
\end{table}
Z17 previously noted possible absorption features from high-ionization metal lines associated with the lens galaxy along both sightlines of HE\,0047$-$1756.
However, the low spectral resolution of the Z17 data prevented a detailed investigation into this tentative detection of high-ionization gas.
The {\it HST}/COS spectrum of HE\,0047$-$1756$A$ clearly resolves different metal absorption profiles, enabling precise measurements of gas kinematics and column densities of the highly ionized species.
As shown in the top three panels of Figure 3, the new COS spectrum confirms that \ion{O}{6} absorption is indeed detected and resolved into multiple components in the lens galaxy. In addition, \ion{N}{5} absorption is also detected with a kinematic structure that is consistent with \ion{O}{6}. To constrain their absorption properties, we perform a joint Voigt profile analysis of the \ion{O}{6} $\lambda1031$ line and the \ion{N}{5} $\lambda\lambda1238,1242$ doublet following the method of Zahedy et al.\ (2019).\footnote{The second member of the \ion{O}{6} doublet, \ion{O}{6} $\lambda1037$, is excluded from the Voigt profile analysis due to significant blending with neighboring low-ionization transitions \ion{C}{2} $\lambda1036$ and \ion{O}{1} $\lambda1039$. Although the \ion{O}{6} $\lambda1031$ profile is also contaminated by a higher-redshift Ly$\epsilon$ line at $z=0.55003$, in this case the absorption profile of the contaminating Ly$\epsilon$ line is well-constrained by various other Lyman series lines observed in our COS spectrum. To remove this contamination from the \ion{O}{6} $\lambda1031$ absorption, we have divided the observed \ion{O}{6} $\lambda1031$ profile by the best-fit model of the $z=0.55003$ Ly$\epsilon$ line prior to performing the analysis.} To ensure the robustness of the fit, we tie both the component structure and Doppler linewidths of the two ions. We summarize the results from our Voigt profile analysis of these high-ionization species in Table 2. The continuum-normalized absorption profiles and best-fit models for \ion{O}{6} and \ion{N}{5} are presented in the top three panels of Figure 3. To compare the kinematics between low- and high-ionization species, we also show the observed and modeled absorption profiles of \ion{Mg}{1}, \ion{Mg}{2}, and \ion{Fe}{2} in the bottom five panels of Figure 3, from previous absorption analysis reported in Z16.
It is clear from Figure 3 that the high ions exhibit a distinct kinematic structure from that of the low ions, which indicates that the high ions arise in a different gas phase (Zahedy et al.\ 2019). Specifically, our analysis reveals a highly ionized gas phase in the lens ISM that is kinematically complex, comprising six broad kinematic components ($b\approx25-65\, \mbox{km\ s${^{-1}}$}$) that span $\approx 640$ \mbox{km\ s${^{-1}}$}\ in line-of-sight velocity. The observed total column densities of these highly ionized species are log $N$(\ion{O}{6})$/\mbox{${\rm cm^{-2}}$} =15.2\pm0.1$ and log $N$(\ion{N}{5})$/\mbox{${\rm cm^{-2}}$}\ =14.6\pm0.1$. These \ion{O}{6} and \ion{N}{5} absorbers are among the strongest known to be in the vicinity of $z<1$ galaxies (cf., Johnson et al.\ 2015; Muzahid et al.\ 2015b; Werk et al.\ 2016; Rosenwasser et al.\ 2018; Zahedy et al.\ 2019), where high-ionization absorbers with log $N$(\ion{O}{6})$/\mbox{${\rm cm^{-2}}$} > 15$ and log $N$(\ion{N}{5})$/\mbox{${\rm cm^{-2}}$} > 14$ are rare.
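The quoted totals are simple sums of the per-component columns in Table 2; a minimal check:

```python
import math

# Best-fit log N/cm^-2 per high-ionization component (Table 2).
logN_OVI = [14.31, 14.02, 14.71, 14.60, 14.13, 14.10]
logN_NV  = [13.75, 13.58, 13.76, 14.30, 13.59, 13.33]

def logsum(logNs):
    """log10 of the summed column density."""
    return math.log10(sum(10**v for v in logNs))

tot_OVI = logsum(logN_OVI)   # ~15.2, as quoted
tot_NV = logsum(logN_NV)     # ~14.6, as quoted
```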
It is also interesting to note the observed \ion{N}{5} to \ion{O}{6} column density ratios among the six high-ionization components, which have an arithmetic mean and dispersion of log\,$\langle N$(\ion{N}{5})$/N$(\ion{O}{6}) $\rangle=-0.5\pm0.2$. These ionic ratios are considerably higher than typical values seen in the Galactic corona (e.g., Wakker et al.\ 2012), the circumgalactic medium of external galaxies (e.g., Werk et al.\ 2016; Zahedy et al.\ 2019), and the high-redshift intergalactic medium (e.g., Lehner et al.\ 2014), where a large majority of absorbers in these diverse environments exhibit log\,$N$(\ion{N}{5})$/N$(\ion{O}{6}) $\lesssim-0.8$. We argue that a super-solar $\mathrm{[N/O]}$ in the highly ionized gas phase is the most likely explanation, considering that high nitrogen-to-alpha ratios of $\mathrm{[N/\alpha]}\gtrsim 0.3$ have been reported in the evolved stellar populations and cool ISM of nearby ellipticals (e.g., Greene et al.\ 2013; Yan 2018). Similar $\mathrm{[N/O]}$ ratios in both high- and low-ionization gases would also suggest a causal link between different phases of the ISM of the elliptical lens (we discuss this connection in \S 4.3).
\begin{figure*}
\begin{center}
\hspace{-0.2in}
\includegraphics[width=7.2in]{excitation_new.pdf}
\vspace{-0.2in}
\end{center}
\caption
{Excitation diagram for the observed rotational level populations (red circles) of $\mathrm{H_2}$ gas at $d=4.6$ kpc from the massive elliptical lens. {\it Left}: Assuming that different rotational levels follow a Boltzmann distribution, the observed ratio between the $J=0$ and $J=1$ levels indicates an excitation temperature of $T_\mathrm{01}=104^{+39}_{-33}$ K (dashed line). However, this single-temperature model severely underpredicts the observations at $J>2$. {\it Middle}: A model with two excitation temperatures of $T_\mathrm{0J,1}=93^{+20}_{-17}$ K (dotted line) and $T_\mathrm{0J,2}=490^{+48}_{-41}$ K (dash-dotted line) can reproduce the observed level populations. The thin gray curves show 100 random realizations of the two-temperature model using the MCMC method. {\it Right}: The thin blue curves represent the set of \textsc{cloudy} models that best reproduces the trend seen in the data. These models have UV radiation fields that are $15-25$ times more intense than the Milky Way ISM radiation field. If the elevated populations at higher rotational levels are due to radiation pumping, the required radiation field is significantly higher than what is observed in the local Galactic ISM.}
\end{figure*}
\section[]{Discussion}
\subsection{Physical Conditions of the $\mathrm{H}_2$ Gas}
The distribution of $\mathrm{H_2}$ molecules among different rotational levels reflects the excitation state of the gas and offers insight into
the physical mechanisms that are responsible. The relative populations of different $\mathrm{H_2}$ rotational levels can be described by a Boltzmann distribution,
$\frac{N_J}{N_{J=0}} = \frac{g_J}{g_{J=0}}\,\mathrm{exp}[-{B_v\,J(J+1)}/{T_\mathrm{0J}}]$,
where $N_\mathrm{J}$ is the $\mathrm{H_2}$ column density for the rotational level $J$, $T_\mathrm{0J}$ is the excitation temperature from $J=0$ to rotational level $J$, and $B_v = 85.36$ K. The statistical weight $g_J$ is $(2J+1)$ for even-numbered $J$ or $3(2J+ 1)$ for odd-numbered $J$. In Figure 4, we show the $\mathrm{H}_2$ excitation diagram of the absorber for rotational states between $J=0$ and $J=5$. The column density ratio between $J=0$ and 1 states, which contain $\approx 98$\% of the total $N(\mathrm{H_2})$, implies an excitation temperature of $T_\mathrm{01}=104^{+39}_{-33}$ K.
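To make the relation concrete, the sketch below (ours, not part of the original analysis) inverts the Boltzmann formula to recover an excitation temperature from an observed column density ratio, using the $B_v$ value quoted above:

```python
import math

B_V = 85.36  # K, rotational constant of H2 quoted in the text

def g(J):
    """Statistical weight: (2J+1) for even J, 3(2J+1) for odd J."""
    return (2 * J + 1) * (3 if J % 2 else 1)

def level_ratio(J, T):
    """Boltzmann prediction for N_J / N_{J=0} at excitation temperature T (K)."""
    return g(J) / g(0) * math.exp(-B_V * J * (J + 1) / T)

def T_0J(J, ratio):
    """Excitation temperature implied by an observed N_J / N_{J=0} ratio."""
    return -B_V * J * (J + 1) / math.log(ratio * g(0) / g(J))

# Round trip at the reported T_01 = 104 K: the predicted N_1/N_0 is ~1.74,
# and inverting it recovers the input temperature.
assert abs(T_0J(1, level_ratio(1, 104.0)) - 104.0) < 1e-9
```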
The observed $T_\mathrm{01}$ for the bulk of molecular gas along the lensed sightline is comparable to typical values reported in $z<1$ $\mathrm{H}_2$-bearing DLAs (see Muzahid et al.\ 2015a). However, while the observed population for $J=2$ can also be well-reproduced by the same excitation temperature, this single-temperature model fails to explain the observed column densities at $J>2$ (Figure 4, left panel). The predicted column densities for these higher rotational states are orders of magnitude lower than the observations, which indicates a higher excitation temperature for $J>2$.
It is well-known from Galactic $\mathrm{H}_2$ studies that a one-temperature fit typically works only for optically thin $\mathrm{H}_2$ absorbers with log\,$N(\mathrm{H_2})/ \mbox{${\rm cm^{-2}}$} \lesssim 15 $ (e.g., Spitzer et al.\ 1974; Spitzer \& Jenkins 1975; Jenkins \& Peimbert 1997). In contrast, stronger $\mathrm{H_2}$ absorbers in the Galaxy and beyond have been found to exhibit elevated populations at higher rotational levels (e.g., Jenkins \& Peimbert 1997; Reimers et al.\ 2003; Noterdaeme et al.\ 2007; Rawlins et al.\ 2018; Balashev et al.\ 2019; Boettcher et al.\ 2020), which indicate that the $\mathrm{H}_2$ gas is bifurcated into two excitation temperatures. Motivated by these prior observations, we perform a simultaneous fit of a two-temperature model to our data and find that the observed $\mathrm{H}_2$ level populations are well-reproduced by two excitation temperatures of $T_\mathrm{0J,1}=93^{+20}_{-17}$ K and $T_\mathrm{0J,2}=490^{+48}_{-41}$ K (Figure 4, middle panel).
The observed temperature bifurcation and trend of rising velocity dispersion with $J$ level (see \S3.1) can be understood as a consequence of the $\mathrm{H}_2$ absorption originating in a gas cloud with internal density and/or temperature stratification (e.g., Noterdaeme et al.\ 2007). In this scenario, most of the column densities at low-$J$ levels originate from the inner layer of the cloud, where the gas is sufficiently dense and shielded from radiation that collisions are the dominant excitation mechanism. Consequently, the low-level populations are essentially thermalized and the lower excitation temperature is highly coupled to the kinetic temperature of the gas.
In contrast, the elevated column densities and broader line profiles of high-$J$ levels indicate that they arise primarily from warmer and more turbulent outer layers of the cloud (e.g., Lacour et al.\ 2005). At these locations, $\mathrm{H_2}$ molecules can be highly excited through collisions triggered by shocks and turbulent dissipation (e.g. Jenkins \& Peimbert 1997; Gry et al.\ 2002; Gredel et al.\ 2002; Ingalls et al.\ 2011), as well as through radiation pumping by an external UV radiation field (e.g., Jura 1975; Klimenko \& Balashev 2020). A unique prediction of the shock scenario is a systematic shift of up to a few \mbox{km\ s${^{-1}}$}\ in line centroids with increasing rotational state, which is caused by the different $J$ levels originating from distinct locations moving at slightly different speeds relative to the shock front (e.g. Jenkins \& Peimbert 1997; Gredel et al.\ 2002). Although we do not detect any systematic shift in line centroids with $J$ levels to within the precision of our COS wavelength calibration ($\lesssim 3$ \mbox{km\ s${^{-1}}$}), we cannot rule out a more modest shift of $\approx 1$ \mbox{km\ s${^{-1}}$}\ or less, which may be the result of weaker shocks (e.g., Gredel 1997).
As an alternative, we now explore radiation pumping as an excitation mechanism. We perform a series of calculations using the \textsc{Cloudy} v.13.03 code (Ferland et al.\ 2013) to simulate a plane-parallel slab of gas with uniform density $n_\mathrm{H}$ which is irradiated by two UV radiation fields: the updated Haardt \& Madau (2001) extragalactic UV background at $z=0.4$, known as HM05 in \textsc{Cloudy}, and the built-in unextinguished Milky Way ISM radiation field from Black (1987). To constrain the strength of UV radiation that is required to reproduce the observations, we vary the overall intensity of the ISM radiation field by a scale factor of between 0.1 and 100. We incorporate dust grains in the calculations following the observed grain abundance and size distribution in the local ISM. For each input radiation field, we construct a grid of \textsc{Cloudy} models spanning a wide range of gas densities ($0\leq \mathrm{log}\,n\mathrm{_H/cm^{-3}}\leq 4$) at the observed gas metallicity (Z17). For each grid point, \textsc{Cloudy} calculates the expected column density for each $J$ level assuming thermal and ionization equilibrium. To simulate two-sided illumination of the cloud, we use half the observed $N(\mathrm{H_2})$ as the stopping condition for the calculations and subsequently double the output $\mathrm{H_2}$ level populations for comparison with the data.
We summarize the results of our \textsc{Cloudy} calculations in the right panel of Figure 4, where the set of models that best reproduce the observed $\mathrm{H_2}$ excitation diagram is shown as thin blue curves. These models have UV radiation fields which are $15-25$ times stronger than the local ISM radiation field. The gas densities span $n_\mathrm{H}\approx1000-2500\,\mbox{${\rm cm^{-3}}$}$, with mean $\mathrm{H_2}$ kinetic temperatures ($90-130$ K) and \ion{H}{1} column densities (log $N($\ion{H}{1})$/\mbox{${\rm cm^{-2}}$} =19.6-19.9$) which are broadly consistent with the observations. While it is clear that these simple models are only able to roughly reproduce the general trend seen at $J>2$, this exercise demonstrates that if the elevated populations at higher rotational levels are primarily due to radiation pumping, the required UV radiation field is significantly higher than what is observed in the Galactic ISM (see also e.g., Klimenko \& Balashev 2020; Boettcher et al.\ 2020).
\subsection{Spatial Variations in Multiphase Gas Properties}
A benefit of using a multiply lensed QSO system as gas probes is the ability to investigate spatial variations in the gas properties of a foreground galaxy. As described in Z16 (see their Figure 1), the doubly lensed images of HE\,0047$-$1756 probe opposite sides of the massive elliptical lens galaxy, with sightline $A$ at $d=4.6$ kpc (1.8 half-light radii, $r_e$) and sightline $B$ at $d = 3.3$ kpc (1.3 $r_e$). The observed \ion{H}{1} and low-ionization metal column densities differ by less than $0.1-0.2$ dex between the two sightlines (Z16; Z17), despite a separation of $\approx 8$ kpc in projection. These similarities suggest that the cool ($T\sim10^4$ K) ISM phase is spatially extended, with a high gas covering fraction at $d\lesssim5$ kpc.
While we are unable to perform a detailed analysis on the \ion{O}{6} absorption detected in the low-resolution STIS FUV spectrum of HE\,0047$-$1756$B$ (Z17), we can compare the general absorption properties of the \ion{O}{6} absorbers detected along the two sightlines. Specifically, the total \ion{O}{6} rest-frame equivalent width is $W_r (1031)_B = 1.3\pm0.1$ \AA\ along sightline $B$, which is very similar to what is observed in the COS spectrum along sightline $A$, $W_r (1031)_A = 1.14\pm0.04$ \AA. Furthermore, the observed FWHM of the \ion{O}{6} profile along sightline $B$ is $\approx 580$ \mbox{km\ s${^{-1}}$}, which is comparable to the observed kinematic spread of $\approx 640$ \mbox{km\ s${^{-1}}$}\ for the \ion{O}{6} absorption profile along sightline $A$ (\S3.2). The coherent \ion{O}{6} absorption properties between the two sightlines imply that similar to the low-ionization gas phase, the highly ionized ISM is spatially extended and has a high covering fraction on a scale of $\sim5$ kpc in the massive elliptical.
The lack of a high-resolution FUV spectrum of HE\,0047$-$1756$B$ prevents a direct search for $\mathrm{H_2}$ along this sightline. To assess whether we can constrain spatial variations in molecular gas properties using the available low-resolution STIS FUV spectrum of sightline $B$, we perform the following experiment. First, we divide the best-fit model for the full Lyman and Werner bands from the high-resolution COS FUV spectrum of sightline $A$ to remove all $\mathrm{H_2}$ absorption from the spectrum. Then, we convolve the resulting ``$\mathrm{H_2}$-free'' spectrum of the QSO with the STIS LSF and compare the result with our STIS spectrum of sightline $A$. We find that while individual $\mathrm{H_2}$ lines are unresolved in the STIS spectrum, the combined absorptions from the Lyman and Werner bands result in an overall flux decrement that is detectable across the QSO spectrum.
Motivated by the result of the experiment, we generate a series of model Lyman and Werner bands spanning a wide range of $N(\mathrm{H_2})$, apply them to the ``$\mathrm{H_2}$-free'' QSO spectrum, and convolve the results with the STIS LSF. Each of the resulting spectra is then rescaled to the level of sightline $B$ using the mean observed flux ratio of the two lensed images in two absorption-free regions: $1575-1585$ and $1640-1650$ \AA\ in the observed frame. Finally, we compare the products to the STIS spectrum of sightline $B$ in the spectral region between 1415 and 1435 \AA\ in observed wavelength, which has a large concentration of strong $\mathrm{H_2}$ transitions, and infer the allowed $N(\mathrm{H_2})$ using a $\chi^2$ analysis. The observed spectrum of HE\,0047$-$1756$B$ is consistent with the presence of $\mathrm{H_2}$ with $N(\mathrm{H_2})\lesssim10^{16}\,$\mbox{${\rm cm^{-2}}$}\ at the 95\% confidence level.
The inferred molecular gas fraction of $f_\mathrm{H_2}\lesssim0.05\%$ along sightline $B$ is a factor of at least $\approx 40-100$ times lower than that observed along sightline $A$ on the opposite side of the galaxy, $f_\mathrm{H_2}=2-5\%$ (\S 3.1). This exercise suggests that in contrast to the neutral and highly ionized gas phases, the molecular gas distribution in the lens ISM is clumpier. Furthermore, the observed $f_\mathrm{H_2}$ along the two lensed sightlines are consistent with nearby quiescent galaxies found to harbor molecular gas (e.g., Young et al.\ 2014) but low compared to typical values in star-forming disks (Chen 2017b and references therein). If these $f_\mathrm{H_2}$ constraints are representative of the rest of the galaxy, they imply a low mass fraction of dense, cold molecular gas in the multiphase ISM of the lens.
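As a consistency check on the factor of $\approx 40-100$ quoted above, the molecular gas fractions can be evaluated directly. We assume the standard definition f_H2 = 2N(H2) / [N(HI) + 2N(H2)] (the paper does not write it out), with the column densities reported for the two sightlines:

```python
def f_H2(N_H2, N_HI):
    """Molecular gas fraction, f_H2 = 2 N(H2) / [N(HI) + 2 N(H2)]."""
    return 2.0 * N_H2 / (N_HI + 2.0 * N_H2)

N_HI = 10**19.7              # cm^-2, within the range quoted earlier
f_A = f_H2(10**17.8, N_HI)   # sightline A: falls in the quoted 2-5% range
f_B = f_H2(1e16, N_HI)       # sightline B upper limit: <~ 0.05%
ratio = f_A / f_B            # of order several tens, as stated in the text
```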
\subsection{Implications for Feedback in Massive Ellipticals}
How the ISM is partitioned by mass into its different gas phases depends sensitively on the gas cooling rate, the available heating to offset this cooling, and the relevant timescales of these processes. The simultaneous detections of multiple gas phases in the lens ISM enable such an investigation for the first time in a distant elliptical, which can offer valuable insight into late-time feedback in massive elliptical galaxies.
Specifically, now that we have robustly detected highly ionized gas in the lens galaxy and constrained its properties, we can calculate the mass budget in the warm ($T\sim10^5$ K) gas phase and compare it to the previously estimated mass budget in the cool ISM (Z17). For the cool phase, Z17 estimated a total Fe mass of $M_\mathrm{Fe}\sim (5-8)\times10^4 \,(f_\mathrm{c,cool}) \,\mathrm{M_\odot}$ at $d<5$ kpc ($\approx 2\,r_e$, matching the region probed by the doubly lensed QSO), where $f_\mathrm{c,cool}$ is the cool gas covering fraction. The corresponding total mass in the cool phase is
\begin{equation}
M_\mathrm{cool}\sim (4-6)\times10^7 \bigg(\frac{f_\mathrm{c,cool}}{1.0}\bigg) \bigg(\frac{Z_\mathrm{cool}}{Z_\odot}\bigg)^{-1} \, \mathrm{M_\odot},
\end{equation}
where $Z_\mathrm{cool}$ is the cool gas metallicity.
For $f_\mathrm{c,cool}\approx1$ and a solar metallicity gas, which Z17 inferred for the cool phase, the inferred mass in the cool ($T\sim10^4$ K) ISM is $M_\mathrm{cool}\sim (4-6)\times10^7 \, \,\mathrm{M_\odot}$.
Assuming that the observed $N$(\ion{O}{6}) along sightline $A$ is representative at $d<5$ kpc, the estimated mass in the \ion{O}{6}-bearing phase of the ISM is
\begin{equation}
M_\mathrm{warm}\sim 3\times10^7 \bigg(\frac{f_\mathrm{c,warm}}{1.0}\bigg) \bigg(\frac{Z_\mathrm{warm}}{Z_\odot}\bigg)^{-1} \bigg(\frac{f_\mathrm{O^{5+}}}{0.1}\bigg)^{-1} \mathrm{M_\odot},
\end{equation}
where $f_\mathrm{c,warm}$ is the covering fraction of the warm phase, $Z_\mathrm{warm}$ is its metallicity, and $f_\mathrm{O^{5+}}$ is the ionization fraction of $\mathrm{O^{5+}}$ ions. If we further assume a unity covering fraction and solar gas metallicity for the warm ($T\sim10^5$ K) ISM phase, and adopt a reasonable $f_\mathrm{O^{5+}}\approx0.1-0.2$ which is predicted for a wide range of physical conditions at $T\sim10^5$ K (e.g., Oppenheimer \& Schaye 2013), we find a total gas mass of $M_\mathrm{warm}\sim (1.5-3)\times10^7 \, \,\mathrm{M_\odot}$ in the warm ISM phase that is likely traced by \ion{O}{6} absorption. Despite the uncertainties inherent in our simple calculations, the estimated mass budgets in the cool and warm ISM phases are comparable to within a factor of a few if the two phases have similar gas covering fractions and metallicities.
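The two mass scalings above can be evaluated directly; the short sketch below reproduces the fiducial values quoted in the text (the parameter choices are the ones stated there):

```python
def m_cool(f_c=1.0, z_ratio=1.0, norm=5e7):
    """Cool-phase mass in Msun; the normalization spans (4-6)e7."""
    return norm * f_c / z_ratio

def m_warm(f_c=1.0, z_ratio=1.0, f_o5=0.1, norm=3e7):
    """Warm-phase mass in Msun for an O^5+ ionization fraction f_o5."""
    return norm * f_c / z_ratio / (f_o5 / 0.1)

# With f_c = 1, solar metallicity, and f_O5+ = 0.1-0.2 as in the text:
m_lo = m_warm(f_o5=0.2)   # 1.5e7 Msun
m_hi = m_warm(f_o5=0.1)   # 3.0e7 Msun
```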
In the physical picture where the observed \ion{O}{6} absorber traces transitional temperature ($T\sim10^5$ K) gas that is radiatively cooling from a virialized hot phase ($T\sim10^6$\,K), $M_\mathrm{{warm}}$ is proportional to the mass flow rate into the cool ISM following $\dot{M}_\mathrm{cool}=M_\mathrm{warm}/t_\mathrm{cool}$, where $t_\mathrm{cool}$ is the cooling timescale of the \ion{O}{6}-bearing gas. The cooling timescale depends on the gas temperature, metallicity, and density. For $T\approx10^{5.5}$ K and a solar-metallicity gas with a density of $n_\mathrm{H}=10^{-3}\,\mbox{${\rm cm^{-3}}$}$, typical of the hot halo of massive ellipticals at $d\approx 10$ kpc (e.g., Singh et al.\ 2018), the expected cooling time is $t_\mathrm{cool}\approx 20-30$ Myr (Gnat \& Sternberg 2007; Oppenheimer \& Schaye 2013) with a total cooling rate of $\sim(1-3)\times10^{47}\,\mathrm{erg\,yr^{-1}}$. Thus, in a radiative cooling scenario the estimated $M_\mathrm{{warm}}$ translates to a mass flow rate of $\dot{M}_\mathrm{cool}\sim 0.5-1.5\,\mathrm{M_\odot\,yr^{-1}}$ at $d<5$ kpc. If the bulk of this flow cooled to $T\lesssim10^4$ K and remained in this phase, we should expect $M_\mathrm{cool} \gg M_\mathrm{warm}$ over a timescale of $\sim100$ Myr in the absence of significant star-formation activity. Considering Z16 found a minimum stellar population age of $>1$ Gyr and no detectable star formation ($\mathrm{SFR}<0.1\,\mathrm{M_\odot\,yr^{-1}}$) in the lens galaxy, this calculation suggests that most of the cooling gas is reheated to the coronal phase.\footnote{In principle, the cool gas could also be depleted primarily by further cooling into the cold ($T\lesssim10^2$ K) phase probed by $\mathrm{H_2}$ molecules. 
However, we consider this scenario unlikely given the inferred low mass fraction of molecular gas in the lens ISM, which is also consistent with observations of nearby ellipticals (\S 4.2).} To heat the gas back to virial temperature, the required heating rate is $\dot{E}_\mathrm{heat}\sim(1-3)\times10^{48}\,\mathrm{erg\,yr^{-1}}$, assuming $T_\mathrm{vir}\approx3\times10^6$\,K given the estimated mass of the dark-matter host halo of the lens (Z16).
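The flow and reheating rates in this paragraph follow from short arithmetic. In the sketch below, the mean molecular weight (mu = 0.6) and the purely thermal energy budget are our assumptions; the other numbers are taken from the text:

```python
K_B, M_P, M_SUN = 1.380649e-16, 1.6726e-24, 1.989e33  # cgs units
MU = 0.6  # assumed mean molecular weight for an ionized gas

def mdot_cool(m_warm_msun, t_cool_yr):
    """Mass flow rate (Msun/yr) implied by M_warm and the cooling time."""
    return m_warm_msun / t_cool_yr

def reheat_rate(mdot_msun_yr, T_vir=3e6):
    """erg/yr needed to heat the flow back to T_vir (thermal energy only)."""
    e_per_gram = 1.5 * K_B * T_vir / (MU * M_P)
    return mdot_msun_yr * M_SUN * e_per_gram

# M_warm = (1.5-3)e7 Msun with t_cool = 20-30 Myr gives 0.5-1.5 Msun/yr,
# and reheating that flow to ~3e6 K costs of order 1e48 erg/yr.
mdot_lo = mdot_cool(1.5e7, 3e7)
mdot_hi = mdot_cool(3e7, 2e7)
```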
Observations of nearby massive ellipticals show that mechanical feedback (often dubbed ``radio-mode feedback'') from active galactic nuclei (AGNs) can output as much power as $\dot{E}_\mathrm{AGN}\sim10^{49}-10^{50}\,\mathrm{erg\,yr^{-1}}$ (e.g., Werner et al.\ 2019). If the lens galaxy of HE\,0047$-$1756 hosts an active nucleus at present, then in principle it has more power than what is required to reheat the \ion{O}{6}-traced cooling gas. There are two caveats to this statement, however. Because the estimated cooling time of a $T\sim10^5$ K gas is short ($\sim10^7$ yr) owing to the expected high gas densities at $d<10$ kpc, the actual amount of available heating depends sensitively on the radio-mode duty cycle (i.e., the fraction of time that an AGN is in radio mode). The radio-mode duty cycle in ellipticals has been estimated to be no more than $\approx 30\%$ outside of rich cluster environments (O'Sullivan et al.\ 2017). Furthermore, even if an AGN is currently on, its energy output is likely to be distributed over a large volume in the gaseous halo. Indeed, observations of large X-ray cavities/bubbles and extended radio lobes around nearby giant elliptical galaxies indicate that AGNs deposit kinetic energy on scales of $\sim 50$ kpc or larger in the hot halo (e.g., McNamara \& Nulsen 2007; Fabian 2012). In conclusion, for an AGN to serve as a continuous heating source requires not only a high duty cycle but also that its mechanical energy be effectively coupled with the ISM on $\sim1$ kpc scales in the galaxy.
Alternatively, we consider heating sources associated with the old stellar populations themselves. Previous analytic and simulation studies suggest that
feedback from SNe Ia and stellar winds from asymptotic giant branch (AGB) stars may offset radiative cooling from diffuse gas in massive elliptical galaxies with $M_{\rm star}\approx 10^{11}\, \mathrm{M_\odot}$ (e.g., Conroy et al.\ 2015; Li et al.\ 2018). Empirically, the observed high $\mathrm{[Fe/Mg]}$ abundance ratios at $d\lesssim20$ kpc from quiescent galaxies (Z16; Zahedy et al.\ 2017a) also support the idea that their ISM has been subjected to significant influence from recent SNe Ia. Using the mean SN Ia rate in nearby ellipticals (e.g., Mannucci et al.\ 2005), Z17 estimated an integrated SN Ia rate of $\sim0.3$ per century within $d<5$ kpc from the massive elliptical lens galaxy. Multiplying this rate by a mean energy of $10^{51}\,\mathrm{erg}$ per SN Ia, we estimate that the heating rate available from SNe Ia is $\dot{E}_\mathrm{Ia}\sim3\times10^{48}\,\mathrm{erg\,yr^{-1}}$, which is comparable to the required heating.
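The SN Ia heating estimate is a one-line calculation: the integrated rate of ~0.3 events per century times 10^51 erg per event gives ~3x10^48 erg/yr:

```python
def snia_heating(rate_per_century, e_per_sn=1e51):
    """Heating rate in erg/yr from an integrated SN Ia rate."""
    return (rate_per_century / 100.0) * e_per_sn

edot_ia = snia_heating(0.3)  # ~3e48 erg/yr, matching the value in the text
```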
In addition to SNe Ia heating, Conroy et al.\ (2015) also considered how material ejected from AGB stars can interact with and heat the ambient ISM in elliptical galaxies. Using their analytic formula, we estimate an AGB heating rate of $\dot{E}_\mathrm{AGB}\sim5\times10^{47}\,\mathrm{erg\,yr^{-1}}$ in the lens galaxy of HE\,0047$-$1756. This exercise suggests that heating from SNe Ia and AGB stars may suffice to match the cooling rate inferred from the observed \ion{O}{6} absorption and prevent a large accumulation of cold gas in the ISM, even in the absence of strong feedback from an active nucleus.
\section[]{Conclusions}
Our analysis of the medium-resolution FUV spectrum of lensed QSO sightline HE\,0047$-$1756$A$ has revealed a complex, multiphase gas at $d=4.6$ kpc from the lens and yielded the first constraints on multiphase ISM properties in a massive quiescent galaxy ($M_{\rm star}\approx 10^{11}\, \mathrm{M_\odot}$) beyond the local Universe. $\mathrm{H_2}$ gas is detected with column density $\log\,N(\mathrm{H_2})/\mbox{${\rm cm^{-2}}$}=17.8^{+0.1}_{-0.3}$ and a molecular gas fraction of $f_\mathrm{H_2}=2-5\%$. Furthermore, the ISM exhibits \ion{O}{6} and \ion{N}{5} absorptions with a distinct kinematic structure from that of the low ions (e.g. \ion{Mg}{2}; Z16), indicating that these high ions arise in a different gas phase. The highly ionized phase has a total log $N$(\ion{O}{6})$/\mbox{${\rm cm^{-2}}$} =15.2\pm0.1$ and log $N$(\ion{N}{5})$/\mbox{${\rm cm^{-2}}$}\ =14.6\pm0.1$, among the strongest associated with $z<1$ galaxies. The low- and high-ionization gas phases are spatially extended on $\sim5$ kpc scale, which is in contrast to the patchier $\mathrm{H_2}$ spatial distribution on this scale.
We have investigated how the ISM is partitioned by mass into its different phases and examined its implications on late-time feedback in the galaxy. Specifically, the mass in the highly ionized ISM phase is $M_\mathrm{warm}\sim (1.5-3)\times10^7 \, \,\mathrm{M_\odot}$ at $d<5$ kpc, comparable to the estimated mass in the cool ($T\lesssim10^4$ K) ISM. Assuming the high-ionization gas originates in a transient warm ($T\sim10^5$ K) phase undergoing radiative cooling from a hot halo surrounding the galaxy, the inferred mass accretion rate is $\sim 0.5-1.5\,\mathrm{M_\odot\,yr^{-1}}$. The lack of star-formation activity ($\mathrm{SFR}<0.1\,\mathrm{M_\odot\,yr^{-1}}$) in the galaxy suggests that most of this flow is reheated to the hot phase, at a rate of $\dot{E}_\mathrm{heat}\sim(1-3)\times10^{48}\,\mathrm{erg\,yr^{-1}}$. Continuous heating from evolved stellar populations (primarily SNe Ia but also AGB winds) in the massive elliptical galaxy may suffice to prevent a large accumulation of cold gas in the ISM, even in the absence of strong AGN feedback. While this conclusion is based on a single galaxy, our study underscores the important role that evolved stellar populations can play in maintaining the low star-formation rate in massive quiescent galaxies over cosmic time.
\acknowledgements
The authors thank the anonymous referee for thoughtful comments that helped improve the presentation of this paper.
We thank Patrick Petitjean for providing his $\mathrm{H_2}$ line list, and Sean Johnson and Ben Rosenwasser for insightful discussions.
FSZ acknowledges support of a Carnegie Fellowship from the Observatories of the Carnegie Institution for Science. FSZ and HWC acknowledge partial support from HST-GO-15250.004A. FSZ, HWC, and EB acknowledge partial support from HST-GO-15163.001A and NSF AST-1715692 grants.
This work is based on data gathered with the NASA/ESA
{\it Hubble Space Telescope} operated by the Space Telescope Science Institute and the Association of Universities for Research in
Astronomy, Inc., under NASA contract NAS 5-26555. Additional data shown here were gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory in Chile.
Conflict-free coloring was introduced in 2003 \cite{Even2002}, motivated by problems arising in wireless communication. Over the past two
decades, conflict-free coloring has been extensively studied \cite{smorosurvey}.
\begin{definition}[Conflict-free chromatic number of hypergraphs]\label{def:CF_hypergraph}
The \emph{conflict-free chromatic number} of a hypergraph $H=(V,E)$ is the minimum number of colors required to color the points in $V$ such that every $e \in E$ contains a point whose color is distinct from that of every other point in $e$.
\end{definition}
\noindent Conflict-free coloring has also been studied in the context of hypergraphs created out of simple graphs. Two such variants are \emph{conflict-free coloring on closed neighborhoods} and
\emph{conflict-free coloring on open neighborhoods}. In this manuscript, we focus on the former variant.
Given a graph $G$, for any vertex $v \in V(G)$, let $N_G(v) : = \{u \in V(G):\{u,v\} \in E(G)\}$ denote the \emph{open neighborhood} of $v$ in $G$. Let $N_G[v] := N_G(v) \cup \{v\}$ denote the \emph{closed neighborhood} of $v$ in $G$.
\begin{definition}[{Closed neighborhood conflict-free chromatic number}]\label{defn:closed_CF}
Given a graph $G = (V, E)$, a conflict-free coloring on closed neighborhoods (CFCN coloring)
is an assignment of colors
$C: V(G) \rightarrow \{1, 2, \ldots, k\}$ such that for every $v \in V(G)$, there exists an
$i \in \{1, 2, \ldots, k\}$ such that $|N[v] \cap C^{-1} (i)| = 1$. The smallest $k$ required for such a coloring
is called the CFCN chromatic number of $G$, denoted $\chi_{CN}(G)$.
\end{definition}
In other words, given a graph $G$, let $H$ be the hypergraph with $V(H) = V(G)$ and $E(H) = \{N_G[v]:v \in V(G)\}$. Then, $\chi_{CN}(G)$ is equal to the conflict-free chromatic number of the hypergraph $H$ created from $G$.
Pach and Tardos \cite{Pach2009} showed that for a graph $G$ with maximum degree
$\Delta$, the CFCN chromatic number $\chi_{CN}(G) = O(\log^{2 + \varepsilon} \Delta)$ for any $\varepsilon > 0$. We improve this bound and show the following.
\begin{theorem}\label{thm:cfcntight}
Let $G$ be a graph with maximum degree $\Delta$. Then $\chi_{CN}(G) = O(\log^2 \Delta)$.
\end{theorem}
In 2014, Glebov, Szab\'o and Tardos \cite{glebov2014conflict} showed the existence of graphs $G$
on $n$ vertices
such that $\chi_{CN}(G) = \Omega(\log^2 n)$. Since
$\Delta < n$, our bound in Theorem \ref{thm:cfcntight} is tight up to constants.
Before we proceed to the proof, we explain some notations. All logarithms we consider
here are to the base $2$.
Given a graph $G$ and a set $S \subseteq V(G)$, we use $G[S]$ to denote the subgraph of $G$ induced on the vertex set $S$. For any two vertices $u,v \in V(G)$, we use $dist_G(u,v)$ to denote the number of edges in
a shortest path between $u$ and $v$ in $G$. We set $dist_G(u,v) = \infty$
when there is no path between $u$ and $v$ in $G$.
\begin{definition}[Maximal Distance-$3^+$ Set]
For a graph $G$, a \emph{maximal distance-$3^+$ set} is a set $A \subseteq V(G)$ that satisfies the following:
\begin{enumerate}
\item For every two distinct $u, v \in A$, $dist_G(u,v) \geq 3$.
\item For every $v \in V(G)\setminus A$, $\exists u \in A$ such that $dist_G(u,v) < 3$.
\end{enumerate}
\end{definition}
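A maximal distance-$3^+$ set always exists and can be constructed greedily: scan the vertices and pick any vertex that is not within distance 2 of a previously picked one. The following Python sketch (an illustration; the proof below does not depend on it) implements this idea:

```python
def maximal_distance_3_set(adj):
    """Greedily build a maximal distance-3+ set.

    adj maps each vertex to the set of its neighbors (undirected graph).
    Chosen vertices are pairwise at distance >= 3, and every vertex not
    chosen lies within distance 2 of a chosen one.
    """
    chosen = set()
    blocked = set()  # vertices within distance <= 2 of a chosen vertex
    for v in adj:
        if v in blocked:
            continue
        chosen.add(v)
        blocked.add(v)
        for u in adj[v]:              # distance-1 neighbors
            blocked.add(u)
            blocked.update(adj[u])    # distance-2 neighbors
    return chosen

# On the path 0-1-2-3-4-5, the greedy scan picks vertices 0 and 3.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
assert maximal_distance_3_set(path) == {0, 3}
```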
Let $A$ be a maximal distance-$3^+$ set in $G$. Let $B = \{v \in V(G) \setminus A:v\mbox{ has a neighbor in }A\}$ and let $C = V(G) \setminus (A
\cup B)$.
We make the following observations.
\begin{observation}
\label{obv:dist-3-set_exact_1_neighbor}
Every vertex in $B$ has exactly one neighbor in $A$.
\end{observation}
\begin{observation}
\label{obv:dist-3-set_atleast_1_neighbor}
Every vertex in $C$ has at least one neighbor in $B$.
\end{observation}
Our proof uses the following theorem on conflict-free coloring on
hypergraphs due to Pach and Tardos \cite{Pach2009}.
\begin{theorem}[Theorem 1.2 in \cite{Pach2009}]\label{thm_pach_main}
For any positive integers $t$ and $\Gamma$, the conflict-free chromatic number of any hypergraph in which each edge is of size at least $2t-1$ and each edge intersects at most $\Gamma$ others is $O(t\Gamma^{1/t}\log \Gamma)$. There is a randomized polynomial time algorithm to find such a coloring.
\end{theorem}
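As a quick numerical sanity check of how this bound is used later in the proof: substituting $t = 2\log\Delta$ and $\Gamma = \Delta^2$ makes $\Gamma^{1/t} = \Delta^{1/\log\Delta} = 2$, a constant, so the bound collapses to $O(\log^2\Delta)$. The sketch below verifies this for a sample value of $\Delta$:

```python
import math

def pach_tardos_bound(t, gamma):
    """t * Gamma**(1/t) * log2(Gamma), the quantity bounded in the theorem
    above (up to the hidden constant)."""
    return t * gamma ** (1.0 / t) * math.log2(gamma)

delta = 1024.0
t = 2 * math.log2(delta)        # t = 2 log Delta
gamma = delta ** 2              # Gamma = Delta^2
assert abs(gamma ** (1.0 / t) - 2.0) < 1e-9   # Gamma^(1/t) = 2, a constant
bound = pach_tardos_bound(t, gamma)           # = 8 log^2 Delta here
```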
\begin{proof}[Proof of Theorem \ref{thm:cfcntight}]
We perform the following iterative process starting with $G_0 = G$.
\noindent \textbf{Iterative coloring process:}
Let $A_i$ be a maximal distance-$3^+$ set in $G_i$. Let $B_i := \{v \in V(G_i) \setminus A_i: v \mbox{ has a neighbor in }A_i\}$ and $C_i := V(G_i)
\setminus (A_i \cup B_i)$. Assign a color $c_i$ to all the vertices in $A_i$. Observation \ref{obv:dist-3-set_exact_1_neighbor}, combined with the fact that $A_i$ is an independent set in $G_i$, implies that for every vertex $v \in A_i \cup B_i$, $N_G[v]$ contains exactly one vertex with the color $c_i$.
Repeat the above process with $G_{i+1} = G[C_i]$.
The iterative process is repeated till one of the following two conditions is satisfied:
(i) $G_{i}$ is the empty graph, or (ii) $i = k = 4 \log \Delta$.
If the process terminated with $i < 4\log \Delta$, then we have CFCN-colored $G$ with $O(\log \Delta)$ colors. Suppose it terminated
with $i = k= 4\log \Delta$. We know that every vertex in $V(G) \setminus C_k$ has some color appearing exactly once in its closed
neighborhood under the present coloring. In order to complete the proof, we need to extend this `nice' property to the vertices of $C_k$ as well. If $C_k$ is the empty set, then the proof is complete.
Assume $C_k$ is non-empty. Let $H$ be the hypergraph constructed from $G$ as follows. Let $V(H) = B_0 \cup B_1 \cup \cdots \cup B_k$ and $E(H) = \{e_v : v \in C_k\}$, where $e_v = N_G(v) \cap V(H)$.
We make the crucial observation that each vertex in the set $\cup_{i=0}^k B_i$ is uncolored so far. Consider a vertex $v$ in $C_k$. From Observation \ref{obv:dist-3-set_atleast_1_neighbor}, $v$ has at least one neighbor in each of $B_0, B_1, \ldots , B_k$, thus making the size of $e_v$ at least $4\log \Delta + 1$. Further, each hyperedge in this hypergraph intersects at most $\Delta^2$ other hyperedges. Substituting $t = 2\log \Delta$ and $\Gamma = \Delta^2$ in Theorem \ref{thm_pach_main}, we can see that the conflict-free chromatic number of the hypergraph $H$ is $O(\log^2\Delta)$. We ensure that the set of colors used to color the points of the hypergraph $H$ is disjoint from the set $\{c_0, c_1, \ldots , c_k\}$. Now, consider the graph $G$. For each vertex $v \in B_0 \cup B_1 \cup \cdots \cup B_k =
V(H)$, we assign the color it obtained while coloring $H$. This would mean that every vertex in $C_k$ now has some color appearing exactly once in its closed neighborhood in $G$ and thereby satisfying the `nice' property mentioned above. Finally, use a new color (that has not been used so far) to color all the so far uncolored vertices in $G$.
This completes the proof
of the theorem.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
A recent set of papers \citep{kashlinsky08, kashlinsky09} claims to have detected the velocities of galaxy clusters with respect to the cosmic microwave background (CMB) frame by means of the kinetic Sunyaev-Zel'dovich (kSZ) effect \citep{sunyaev72}. The papers suggest the existence of a ``dark flow'': a 700 km s$^{-1}$ bulk flow of all matter out to a redshift of at least $z\simeq 0.1$ ($\rm{r}\simeq400$ Mpc). The magnitude and direction of the flow are claimed to be consistent with the peculiar velocity of the Local Group with respect to the CMB frame as inferred from the CMB dipole \citep{kogut93}. Velocity coherence over such large scales is not predicted by the standard $\Lambda\rm{CDM}$ cosmology and would, if confirmed, constitute a major observational result.
In this paper we revisit the analysis presented in \cite{kashlinsky09}, hereafter referred to as K09. The K09 analysis seeks to measure the kSZ signal of a sample of $\sim$700 X-ray-selected galaxy clusters. The 3-year WMAP temperature maps for 8 differencing assemblies \citep{hinshaw07} are high-pass filtered in an attempt to remove the primary CMB anisotropy. The temperatures of the filtered maps at the galaxy cluster locations are fit to a dipole, which is interpreted as the kSZ signal induced by a bulk flow of the galaxy clusters.
We will argue that the uncertainty of this measurement is dominated by primary CMB anisotropy, not detector noise. As the CMB is observed by all 8 WMAP channels, the errors are highly correlated between these channels, and the inferred detection significance is greatly reduced.
\section{Cluster Sample}
\label{sec:clusters}
We construct a cluster sample as similar to that used in K09 as possible. For all clusters we require $z\le0.3$ and a corrected X-ray flux in the 0.1-2.4 keV band $> 3\times10^{-12}\ \rm{erg}\ \rm{s}^{-1}\ \rm{cm}^{-2}$. We use the REFLEX catalog \citep{boehringer04} and require that $\delta<2.5^{\circ}$, leaving 415 clusters. We use the eBCS catalog \citep{ebeling98, ebeling00} and require that $\delta>0^{\circ}$ and $|b|>20^{\circ}$, leaving 279 clusters. We use the CIZA catalog \citep{ebeling02,kocevski07}, which, after our cuts, contains 122 clusters. Note that this version of the CIZA catalog contains 130 clusters total (73 from \cite{ebeling02} and 57 from \cite{kocevski07}), whereas K09 used an extended version containing 165 clusters total, which is not publicly available at the time of this writing. We note that while K09 removed all clusters whose X-ray emission appeared to be dominated by a point source, we do not. This does not have a strong effect on our best-fit dipole, which, as discussed in Section \ref{sec:results}, is very close to the value presented in K09. We use 816 clusters total, compared to 782 in K09. 720 of our clusters survive the 3-year WMAP KP0 galactic mask used in this analysis.
\section{CMB Maps}
\label{sec:cmb}
Our preparation of the filtered CMB maps duplicates the method used in K09. We use the ``foreground reduced'' temperature maps from the 3-year WMAP data release\footnote{http://lambda.gsfc.nasa.gov} \citep{hinshaw07}. We use one map each from the 8 differencing assembly channels: Q1, Q2, V1, V2, W1, W2, W3, and W4. For each channel we construct the filter $F_{\ell}$ described in K09. The filter is essentially a high-pass filter with a transition multipole of $\ell\sim300$. We explicitly remove the $\ell=0,1,2,3$ components from each map, as described in \cite{kashlinsky08}, by setting $F(\ell=0,1,2,3)=0$. We use the 3-year WMAP KP0 galactic mask for all steps of the analysis which involve spherical harmonic transforms. We use the HEALPix software package \citep{gorski05}.
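The shape of such a filter can be sketched directly in multipole space. The smooth transition profile below is purely illustrative (the actual K09 filter is built from the measured sky power spectrum), while the hard zeroing of $\ell=0$--$3$ follows the text:

```python
import numpy as np

def highpass_filter(lmax, l_transition=300, width=50):
    """Illustrative high-pass filter in multipole space.

    The true K09 filter is derived from the measured sky power
    spectrum; here we only mimic its qualitative behaviour: a smooth
    roll-on around l ~ 300 plus explicit removal of l = 0,1,2,3.
    """
    ell = np.arange(lmax + 1)
    # smooth transition from 0 to 1 around l_transition (illustrative)
    F = 0.5 * (1.0 + np.tanh((ell - l_transition) / width))
    # explicitly remove the monopole, dipole, quadrupole and octupole
    F[:4] = 0.0
    return F

F = highpass_filter(lmax=1024)
```

In practice the filter is applied in harmonic space, i.e. each map is decomposed into $a_{\ell m}$, multiplied by $F_\ell$, and transformed back.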
The next step is to construct a mask which isolates the temperature fluctuations at the cluster locations. This mask has all pixels outside of the cluster areas and outside of the KP0 galactic mask set to zero. The ``cluster areas'' are defined in K09 to be disc-shaped regions surrounding each cluster with $r_{\rm{disk}}={\rm min}[6\theta_{\rm{X-ray}},30^\prime]$, where $\theta_{\rm{X-ray}}$ is related to each cluster's X-ray emission. Because we lack access to the list of $\theta_{\rm{X-ray}}$ used in K09, we use $r_{\rm{disk}}=30^\prime$ for all clusters. As pointed out in K09, because the majority of clusters have $\theta_{\rm{X-ray}}\ge5^\prime$, the K09 analysis uses $r_{\rm{disk}}=30^\prime$ for most clusters. Specifically, the average $r_{\rm{disk}}$ used in K09 is $28^\prime.4$, the standard deviation is $3^\prime.2$, and only 16 clusters have $r_{\rm{disk}} < 20^\prime$. As such, our method is very similar to the method used in K09. As discussed in Section \ref{sec:results}, any difference in methodology has little impact on our best-fit dipole, which is very similar to that presented in K09.
The final step is to calculate the dipole component of the temperature fluctuations at the cluster locations (which have been isolated using the pixel mask described in the previous paragraph). This is accomplished with the HEALPix function \textit{remove\_dipole}, which returns a 3-component dipole vector [$a_x,a_y,a_z$]. The dipole is calculated for each of the 8 WMAP channels individually and results from all channels are combined using inverse-variance weighting. For example, $\hat{a}_x=(\sum_{i}w_{x,i}\hat{a}_{x, i})/ (\sum_{i}w_{x,i})$, where $w_{x,i}=\sigma_{x,i}^{-2}$ and $i$ is the index for the 8 WMAP channels. The variances are calculated from simulations which are described in Section \ref{sec:errors}.
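The per-component inverse-variance combination can be sketched in a few lines of numpy; the channel estimates below are made-up numbers, not WMAP values:

```python
import numpy as np

def inverse_variance_combine(estimates, sigmas):
    """Combine per-channel estimates a_i of one dipole component
    with weights w_i = sigma_i**-2, as described in the text."""
    a = np.asarray(estimates, dtype=float)
    w = np.asarray(sigmas, dtype=float) ** -2.0
    a_hat = np.sum(w * a) / np.sum(w)
    # combined error, valid only if the channel errors are uncorrelated
    sigma_hat = np.sum(w) ** -0.5
    return a_hat, sigma_hat

# three hypothetical channel estimates of a_x (muK) and their errors
a_hat, sigma_hat = inverse_variance_combine([1.0, 2.0, 3.0], [1.0, 2.0, 2.0])
```

Note that the combined-error formula above assumes uncorrelated channels; as Section \ref{sec:correlations} shows, this assumption fails badly when the errors are dominated by the CMB.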
\section{Results}
\label{sec:results}
Our best-fit dipole is [$a_x,a_y,a_z$]=[1.2, -2.4, 0.2] $\mu$K. This may be compared to [$a_x,a_y,a_z$]=[0.6, -2.7, 0.6] $\mu$K, the corresponding all-$z$ result from K09. The magnitude (direction) of our best-fit dipole is within 6\% ($17^{\circ}$) of the best-fit dipole measured in K09. We conclude that our best-fit dipole agrees well with that measured in K09, suggesting that the slight differences in methodology (the cluster catalogs and the choice of $r_{\rm{disk}}$) are unimportant. Furthermore, we have repeated this analysis with $r_{\rm{disk}}$ increased and decreased by 20\%, and the results do not change significantly. The magnitude (direction) of the best-fit dipole changes by less than 4\% ($11^\circ$) compared to the $r_{\rm{disk}}=30^\prime$ case.
Our best-fit dipole may be described as a vector with $2.7 \mu$K magnitude and with a higher temperature in the direction of the galactic coordinates $(\ell, b)=(298,4)$. A naive interpretation of this dipole suggests a bulk flow moving away from $(\ell, b)=(298,4)$, which is the opposite sign of the velocity presented in K09. However, the interpretation of the sign and magnitude of the velocity is complicated by the convolution of the kernel of $F_\ell$ with the gas profiles assumed for the clusters. This issue was not discussed in K09. For these reasons we do not attempt to quantify the sign or magnitude of the inferred velocity. We feel that this is justified by the low detection significance of the dipole, as discussed in Section \ref{sec:detection}.
\section{Error Estimation}
\label{sec:errors}
We estimate the error of the dipole measurement as follows. The basic strategy is to repeat the analysis described in Section \ref{sec:cmb}, but with simulated WMAP data replacing actual WMAP data. We generate 1000 realizations of the CMB using the best-fit 3-year WMAP $\Lambda$CDM $C_{\ell}^{TT}$ power spectrum. We use 3-year results, as opposed to 5-year, to remain consistent with the analysis of K09. We convolve each CMB realization with the beams of the 8 WMAP channels in order to simulate noise-free observations. For each CMB realization we also generate white noise maps for each WMAP channel. The noise maps are generated using the prescription outlined in the WMAP Three-Year Explanatory Supplement \citep{limon07}, with the noise variance in a given pixel inversely proportional to the number of times that pixel was observed. Our simulated noise is uncorrelated between WMAP channels and between map pixels. While the latter assumption is not strictly true for WMAP data, the effects are negligible for any temperature analysis \citep{limon07}. The white noise maps are added to the simulated noise-free CMB maps to produce maps which should have the same statistical properties as the actual ``foreground reduced'' temperature maps described in Section \ref{sec:cmb}. We have confirmed that the (KP0-masked) power spectra of these simulated maps match those of the true maps.
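The per-pixel white-noise prescription, $\sigma_{\rm pix}=\sigma_0/\sqrt{N_{\rm obs}}$, can be sketched as follows; $\sigma_0$ and the hit-count map here are placeholder inputs, not actual WMAP values:

```python
import numpy as np

def simulate_white_noise(sigma0, nobs, seed=None):
    """White-noise map with per-pixel standard deviation
    sigma0 / sqrt(N_obs), i.e. noise variance inversely
    proportional to the number of observations of each pixel."""
    rng = np.random.default_rng(seed)
    sigma_pix = sigma0 / np.sqrt(np.asarray(nobs, dtype=float))
    return sigma_pix * rng.standard_normal(sigma_pix.shape), sigma_pix

# toy hit-count map: three pixels observed 1, 4 and 9 times
noise, sigma_pix = simulate_white_noise(2.0, [1, 4, 9], seed=0)
```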
These simulated maps are then analyzed using the methods described in Section \ref{sec:cmb}. To summarize, the maps are filtered by $F_{\ell}$ and the dipole component of the filtered maps at the locations of the galaxy clusters is calculated. Although the WMAP data is simulated, we use the actual cluster positions and the actual KP0 galactic mask. We note that because the definition of $F_{\ell}$ given in K09 depends on the measured spectrum $C_{\ell}^{sky}$, we calculate $F_{\ell}$ for each simulated map.
This suite of 1000 simulated WMAP ``experiments'' allows us to measure the uncertainty with which each WMAP channel measures each dipole component. As there is only ``noise'' (CMB and detector noise) in these simulations, the distribution of dipole measurements provides a measure of the uncertainty of a single measurement of the dipole. These distributions are well described as Gaussian with zero mean. Inverse-variance weighting is used to combine the different channels' estimates of each dipole component. Finally we are left with 1000 measurements of the 3 dipole components. These distributions are also well described as Gaussian with zero mean and are shown in Figure \ref{fig:err}. The $1\sigma$ measurement error for each dipole component is defined to be the best-fit Gaussian width $\sigma$ of each distribution. The final uncertainties on the dipole components are [$\sigma_{a_x}, \sigma_{a_y}, \sigma_{a_z}$]=[1.7, 1.7, 1.1] $\mu$K. These uncertainties should be accurate to within a few percent. The uncertainty is highest on the dipole components that lie in the galactic plane ($a_x$ and $a_y$) because of the geometry of the galactic mask.
\section{CMB Correlations}
\label{sec:correlations}
These simulations bring to light an important fact that was not discussed in K09: the dipole estimates are highly correlated between the 8 different WMAP channels. The correlation coefficient $\rho$ between two different channels' estimates of a given dipole component is approximately $0.9$. For example, the correlation coefficient $\rho$ between the estimates of $a_x$ provided by the Q1 and W2 channels is 0.90, as shown in Figure \ref{fig:corr}. We attribute these correlations to primary CMB fluctuations, as the detector noise is uncorrelated between channels.
To test the hypothesis that these correlations are caused by CMB fluctuations, we have separately filtered the noise-free CMB maps and the detector-noise-only maps in these 1000 experiments. The filter $F_\ell$ is still constructed using the full maps which include the CMB and detector noise. The resulting uncertainties on the dipole components in the CMB-only maps are [$\sigma_{a_x}, \sigma_{a_y}, \sigma_{a_z}$]=[1.7, 1.7, 1.1] $\mu$K. These uncertainties are consistent with those obtained using the full simulations which contain both CMB and detector noise. The uncertainties on the dipole components in the noise-only maps are much smaller: [$\sigma_{a_x}, \sigma_{a_y}, \sigma_{a_z}$]=[0.2, 0.2, 0.1] $\mu$K. We conclude that the uncertainty of the dipole measurement is dominated by CMB fluctuations, not detector noise. Although the filter $F_\ell$ is constructed with the intent of filtering out primary CMB anisotropy, residual CMB power is still present in the filtered maps and dominates the error on the dipole measurement.
Furthermore, if CMB fluctuations dominate the uncertainty of the dipole measurement, then the 5-year WMAP maps should produce a best-fit dipole that is very similar to that obtained using the 3-year WMAP maps. We have repeated our analysis on 5-year WMAP maps\footnotemark[1] \citep{hinshaw09} to test this hypothesis. We use the 3-year KP0 mask and 3-year $F_{\ell}$ filters on the 5-year maps in order to make the comparison as direct as possible. The best-fit dipole is [$a_x,a_y,a_z$]=[1.3, -2.3, 0.1] $\mu$K, which is very close to the best-fit dipole from the 3-year WMAP data. The magnitude (direction) of this dipole is within 2\% ($4^\circ$) of that obtained using the 3-year WMAP maps. This provides further support to the claim that the uncertainty of the dipole measurement is dominated by CMB fluctuations, not detector noise.
If we simulate 1000 WMAP ``experiments'' and enforce the unphysical condition that each WMAP channel observes a different realization of the CMB, then the dipole estimates are uncorrelated, as expected. In this unrealistic scenario the uncertainty on the dipole measurement is much smaller: [$\sigma_{a_x}, \sigma_{a_y}, \sigma_{a_z}$]=[0.7, 0.7, 0.4] $\mu$K. These errors are closer to those presented in K09: [$\sigma_{a_x}, \sigma_{a_y}, \sigma_{a_z}$]=[0.5, 0.4, 0.4] $\mu$K. These errors are smaller than the errors described above (which account for CMB correlations) by a factor of $\sim\sqrt{8}$, as is expected when combining 8 uncorrelated estimates as opposed to combining 8 highly correlated estimates.
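The factor of $\sim\sqrt{8}$ follows from the variance of a mean of $N$ equally correlated estimates, ${\rm Var}=\sigma^2[1+(N-1)\rho]/N$; a quick numerical check with $N=8$:

```python
import numpy as np

def combined_sigma(sigma, n, rho):
    """Std of the unweighted mean of n estimates, each with error
    sigma and pairwise correlation coefficient rho."""
    return sigma * np.sqrt((1.0 + (n - 1) * rho) / n)

sig_uncorrelated = combined_sigma(1.7, 8, 0.0)  # detector-noise-like errors
sig_correlated = combined_sigma(1.7, 8, 0.9)    # CMB-dominated errors
ratio = sig_correlated / sig_uncorrelated       # approaches sqrt(8) as rho -> 1
```

With $\rho=0.9$ the eight channels behave almost like a single measurement, so the combined error barely improves on that of a single channel.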
\section{Detection Significance}
\label{sec:detection}
Our best-fit dipole is [$a_x,a_y,a_z$]=[$1.2 \pm 1.7$, $-2.4 \pm 1.7$, $0.2 \pm 1.1$] $\mu$K. The $\chi^2$/d.o.f. is 2.52/3. The probability to exceed this $\chi^2$ is 0.47, corresponding to a Gaussian detection significance of 0.7$\sigma$. If we use the best-fit dipole presented in K09, [$a_x,a_y,a_z$]=[0.6, -2.7, 0.6] $\mu$K, which is quite close to ours, the detection significance is 0.8$\sigma$. The errors used in these significance calculations come from the simulations presented in Section \ref{sec:errors}, which take into account correlations between the WMAP channels.
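The quoted $\chi^2$ and significance follow directly from the dipole components and their errors; a sketch assuming scipy is available:

```python
import numpy as np
from scipy import stats

a = np.array([1.2, -2.4, 0.2])      # best-fit dipole components, muK
sigma = np.array([1.7, 1.7, 1.1])   # 1-sigma errors, muK

chisq = np.sum((a / sigma) ** 2)    # chi^2 against the null (zero dipole)
pte = stats.chi2.sf(chisq, df=3)    # probability to exceed, ~0.47
# two-sided Gaussian-equivalent detection significance, ~0.7 sigma
significance = stats.norm.isf(pte / 2.0)
```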
A slightly different statistic may be considered if one is specifically interested in constraining bulk flows: the component of the best-fit dipole projected along the direction of the peculiar velocity of the Local Group with respect to the CMB frame. Measurements of the CMB dipole \citep{kogut93} suggest that this velocity is towards the galactic coordinates $(\ell, b)=(276,30)$. This statistic has 1 degree of freedom and we have calculated the uncertainty on its measurement using the methods described in Section \ref{sec:errors}. The best-fit projected dipole is 2.2 $\pm$1.6 $\mu$K, corresponding to a detection significance of 1.4$\sigma$. The sign is such that the temperature is higher at $(\ell, b)=(276,30)$.
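The projected statistic is just the dot product of the dipole vector with the unit vector towards the Local Group apex; recomputing it from the rounded dipole components gives a value close to the quoted 2.2 $\mu$K:

```python
import numpy as np

def galactic_unit_vector(l_deg, b_deg):
    """Cartesian unit vector for galactic coordinates (l, b) in degrees."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return np.array([np.cos(b) * np.cos(l),
                     np.cos(b) * np.sin(l),
                     np.sin(b)])

dipole = np.array([1.2, -2.4, 0.2])       # best-fit dipole, muK
n_lg = galactic_unit_vector(276.0, 30.0)  # Local Group apex direction
projected = dipole @ n_lg                 # ~2.3 muK from these rounded inputs
```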
We conclude that there is not a significant detection of a bulk flow. The significance of the best-fit bulk flow is $0.7\sigma$ and the significance of the component projected along the Local Group's peculiar velocity is $1.4\sigma$.
\section{Conclusion}
\label{sec:conclusion}
We have revisited the analysis presented in \cite{kashlinsky08, kashlinsky09} which reports a significant detection of a bulk flow of $\sim$700 galaxy clusters out to $z\simeq0.1$ by means of the kSZ effect. We have demonstrated that the estimates for the kSZ signal are highly correlated between the different WMAP channels used in this analysis and that this correlation is caused by primary CMB anisotropy. We have simulated the errors on the kSZ measurement while taking into account these CMB correlations and find that there is not a significant detection of a kSZ signal or bulk flow.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{err.pdf}
\caption{1000 simulated estimates for the 3 dipole components. These simulations take into account the CMB correlations between the different WMAP channels. The uncertainty is highest on the dipole components that lie in the galactic plane ($a_x$ and $a_y$) because of the geometry of the galactic mask.}
\label{fig:err}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{corr.pdf}
\caption{1000 simulated estimates for the $a_x$ dipole component from two randomly chosen WMAP channels, Q1 and W2. The estimates are highly correlated ($\rho=0.9$). This high level of correlation is common to all pairs of channels and is caused by primary CMB fluctuations.}
\label{fig:corr}
\end{center}
\end{figure*}
\acknowledgments
The author would like to thank Bradford Benson, John Carlstrom, Tom Crawford, William Holzapfel, Wayne Hu, Stephan Meyer, and Christian Reichardt for useful discussions.
This work was supported by the NSF Physics Frontier Center award PHY-0551142 and the NSF OPP award ANT-0638937.
\section{Introduction}
\input{introduction.tex}
\section{Related Works}
\label{sec:related-work}
\input{related-work.tex}
\section{Methodology}
\label{sec:methodology}
\input{methodology.tex}
\section{Discussion}
\label{sec:discuss}
The present study has several limitations, which should be addressed in future studies. First, we use a heuristic method to determine the ground truth. This heuristic method can only approximate the ground truth because the data sources (i.e., CheckPhish and DomainTools feeds
in this case) may contain some errors.
Second, we could not avoid the data imbalance problem, meaning that
the resulting detectors or classifiers may be slightly biased towards the majority class even after the oversampling.
Third, we only considered the WHOIS and URL lexical features, but not the website contents or the network layer features. Fourth, we only considered five WHOIS features because most of the other kinds of WHOIS information are largely missing, which means that WHOIS registrars need to collect more detailed information than what is presented at the moment of writing. Fifth, the application of deep learning models or explainable ML is left to future research. Sixth, we observe that the python library {\tt wordninja} can make bad splits at times (e.g., when a domain name is seemingly in English characters but is actually in another language).
\section{Conclusion}
\label{sec:conclusion}
We have presented the first systematic study on {\em data-driven} characterization and detection of COVID-19 themed malicious websites. We presented a methodology and applied it to a specific dataset. Our experiments led to several insights, highlighting that attackers are {\em agile}, {\em crafty}, and {\em economically incentivized} in waging COVID-19 themed malicious website attacks. Our experiments show that Random Forest can serve as an effective detector against these attacks, especially when WHOIS information about the websites in question is available. This highlights the importance of having domain registrars collect more information when registering domains in the future.
\smallskip
\noindent{\bf Acknowledgement}. We thank the reviewers for their useful comments. This work was supported in part by ARO Grant \#W911NF-17-1-0566, ARL Grant \#W911NF-17-2-0127, and the NSA OnRamp II program.
\bibliographystyle{IEEEtran}
\subsection{Characterization Methodology}
\label{sec:rq_characterize}
In order to characterize COVID-19 themed malicious websites, we address 4 Research Questions (RQs):
\begin{itemize}
\item RQ1: Which WHOIS registrars are most abused to launch COVID-19 themed malicious websites?
\item RQ2: Which Top Level Domains (TLDs) are most abused by COVID-19 themed malicious websites?
\item RQ3: What trends are exhibited by COVID-19 themed malicious websites?
\item RQ4: Which theme keywords are most abused by attackers, and how?
\end{itemize}
We consider WHOIS information because it has been shown to be useful in the era prior to the COVID-19 pandemic \cite{XuCodaspy13-maliciousURL,cns14MaliciousWebsiteXu}. Answering the preceding questions will deepen our understanding of COVID-19 themed malicious website attacks.
\subsection{Detection Methodology}
\label{sec:rq_detection}
We propose leveraging machine learning to detect COVID-19 themed malicious websites and answer:
\begin{itemize}
\item RQ5: Which classifier is competent in detecting COVID-19 themed malicious websites?
\item RQ6: What is the impact of WHOIS features on the classifier's effectiveness?
\end{itemize}
In order to answer these questions, we need to train detectors. Figure \ref{fig:detectionFramework} highlights the methodology for detecting COVID-19 themed malicious websites. The methodology can be decomposed into the following modules: data collection, feature definition and extraction, data pre-processing, classifier training, and classifier testing.
Data about websites need to be collected from reliable sources. The collected data may need enrichment to provide more information, as will be illustrated in our case study. Then, features may be defined to describe these websites. In the case of using deep learning (which requires much larger datasets), features may be automatically learned. One may consider a range of classifiers, which are generically called $C_i$'s in Figure \ref{fig:detectionFramework}.
As shown in Figure \ref{fig:detectionFramework},
one can use classifiers individually or an ensemble of them (e.g., via a desired voting scheme, such as weighted vs. unweighted majority voting). In the simple form of unweighted majority voting, a website is classified as malicious if majority of the classifiers predict it as malicious; otherwise, it is classified as benign.
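A minimal sketch of the unweighted majority vote just described (the per-classifier labels are hypothetical):

```python
def majority_vote(predictions):
    """Unweighted majority voting over binary classifier outputs
    (1 = malicious, 0 = benign). A website is flagged malicious only
    when strictly more than half of the classifiers say so;
    otherwise it is classified as benign."""
    return 1 if sum(predictions) > len(predictions) / 2 else 0

# three hypothetical classifiers voting on one website
label = majority_vote([1, 0, 1])
```

A weighted variant would simply replace the raw vote count with a sum of per-classifier weights.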
\begin{figure}[!t]
\centering
\includegraphics[width=.49\textwidth]{figures/COVID-web-Detection.pdf}
\vspace{-2em}
\caption{Methodology for detecting COVID-19 themed malicious websites}
\label{fig:detectionFramework}
\end{figure}
In order to evaluate the effectiveness of the trained classifiers, we propose adopting the standard metrics, including: accuracy (ACC), false-positive rates (FPR), false-negative rates (FNR), and $F1$-score.
Specifically, let $TP$ be the number of true positives, $TN$ be the number of true negatives, $FP$ be the number of false positives, and $FN$ be the number of false negatives. Then, we have
ACC $= \frac{TP + TN}{TP + TN + FP + FN}$,
FPR $= \frac{FP}{FP + TN}$,
FNR $= \frac{FN}{FN + TP}$, and
$F1$-score $= \frac{2TP}{2TP + FP + FN}$.
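These metrics, computed directly from the confusion-matrix counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """ACC, FPR, FNR and F1-score from confusion-matrix counts,
    matching the formulas given in the text."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return acc, fpr, fnr, f1

# toy confusion matrix (illustrative counts only)
acc, fpr, fnr, f1 = classification_metrics(tp=8, tn=5, fp=2, fn=1)
```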
\section{Case Study}
\label{sec:experiments}
Our case study applies the methodology to specific datasets.
\subsection{Data Collection}
Our dataset of COVID-19 themed malicious websites is obtained from what was published between 2/1/2020 and 5/15/2020 by two sources:
(i) CheckPhish \cite{COVID19_data_checkphish}, which contains 131,761 malicious websites waging scamming attacks related to COVID-19; and
(ii) DomainTools \cite{COVID19_data_DT}, which contains 157,579 malicious websites waging malware, phishing, and spamming attacks related to COVID-19.
The union of these two sets leads to a total of 221,921 malicious websites, denoted by $D_{malicious}$, owing to the fact that 67,419 websites belong to both sets.
For obtaining benign websites, we use the top 250,000 websites from Cisco's Umbrella 1 million websites dataset \cite{cisco_umbrella_dataset} on 05/16/2020, denoted by $D_{benign}$, which is a source of reputable websites. We compile a merged dataset denoted by $D_{initial} = D_{malicious} \cup D_{benign}$.
In order to collect WHOIS information of a website, we use the python library {\tt whois 0.9.7} to query the WHOIS database on 8/7/2020. We observe that 42,540 (or 19.17\%) out of the 221,921 malicious websites have no WHOIS information available, and 93,082 (or 37.2\%) out of the 250,000 benign websites have no WHOIS information available. This means that the presence/absence of WHOIS information does not by itself indicate whether a website is malicious.
\subsection{Characterization Case Study}
\subsubsection{Answering RQ1: Identifying the WHOIS registrars that are most abused to launch COVID-19 themed malicious websites}
For this purpose, we use a subset of $D_{malicious}$ set, denoted by $D'_{malicious}$, which contains 171,901
malicious websites with WHOIS {\em registrar\_name} information available.
\begin{figure}[!t]
\centering
\includegraphics[width=.91\linewidth]{figures/Registrar-Distribution-for-COVID.pdf}
\caption{Top 10 abused WHOIS registrars of COVID-19 themed malicious websites (the $y$-axis is in the log-scale).}
\label{fig:registrarDistribution}
\end{figure}
Figure \ref{fig:registrarDistribution} depicts the top 10 abused registrars, which are ranked according to the absolute number of
COVID-19 themed websites in $D'_{malicious}$ that are respectively registered by them.
We observe that {\tt Godaddy} is the most frequently abused registrar, followed by {\tt Google} and {\tt Namecheap}. This finding inspires us to analyze whether there is any financial incentive behind the use of a specific registrar. The first-year cost of registering a {\tt .com} domain is:
{\tt Godaddy} for \$11.99, {\tt Google} for \$9, {\tt Namecheap} for \$8.88, {\tt Dynadot} for \$8.99, {\tt 1\&1} for \$1, {\tt name.com} for \$8.99, {\tt PDR Ltd} for \$35, {\tt OVH} for \$8.28, {\tt Alibaba} for \$7.99, and {\tt Reg-ru} for \$28. This suggests that some attackers might have chosen registrar {\tt 1\&1} because it is the cheapest, while other attackers use reputed registrars.
\begin{insight}
Some attackers may be incentivized to use cheaper registrars, while others are not.
\end{insight}
\subsubsection{Answering RQ2: Which Top Level Domains (TLDs) are most abused by COVID-19 themed malicious websites?}
In order to answer this question, we use the original dataset $D_{malicious}$, which contains 221,921 COVID-19 themed malicious websites with corresponding TLD information.
\begin{figure}[!htbp]
\vspace{-1.5em}
\centering
\includegraphics[width=.89\linewidth]{figures/TLD-Distribution-for-COVID.pdf}
\vspace{-1.5em}
\caption{Top 10 abused TLDs of COVID-19 themed malicious websites (the $y$-axis is in the log-scale).}
\label{fig:tldDistribution}
\end{figure}
Figure \ref{fig:tldDistribution} depicts the
top 10 abused TLDs, which are ranked according to the absolute number of COVID-19 themed malicious websites hosted under each TLD. We make the following observations. First, {\tt .com} hosts the highest number of malicious websites, followed by {\tt .org} and {\tt .net}.
Second, 5 of the top 10 abused TLDs correspond to country-level {\tt ccTLDs}, including {\tt .de}, {\tt .uk}, {\tt .ru}, {\tt .nl} and {\tt .eu}.
\begin{insight}
Attackers often abuse popular TLDs.
\end{insight}
\subsubsection{Answering RQ3: What trends are exhibited by COVID-19 themed malicious websites?}
In order to answer this question, we use the dataset $D_{malicious}$ mentioned above.
Figure \ref{fig:website_trend} depicts the trend of malicious websites, leading to two observations. First, there is a discrepancy between the daily numbers of websites that are reported by the two sources. According to CheckPhish, the number of COVID-19 themed malicious websites reaches its peak on 03/25/2020, with 18,495 malicious websites; according to DomainTools, the number of COVID-19 themed malicious websites reaches its peak on 03/20/2020, with 3,981 malicious websites. This indicates that there are reporting inconsistencies among sources and that many COVID-19 themed malicious websites were created at the early stage of the pandemic, when {\em uncertainty} was greatest. Second, the number of COVID-19 themed malicious websites, by and large, has been decreasing since the last week of March 2020 (i.e., two weeks after the pandemic declaration), leading to about 1,000 websites per day during the first week of May 2020 (i.e., about two months after the pandemic declaration). However, there is still oscillation. One possible cause is that the attackers have been waiting to create new COVID-19 themed malicious websites based on the pandemic's new developments (e.g., vaccine).
\begin{figure}[!htbp]
\vspace{-1.5em}
\centering
\includegraphics[width=0.91\linewidth]{figures/trend_analysis.pdf}
\caption{Trends of COVID-19 themed malicious websites.}
\label{fig:website_trend}
\end{figure}
\begin{insight}
There are inconsistencies in reporting mechanisms, and attackers are agile in creating COVID-19 themed malicious websites.
\end{insight}
\subsubsection{Answering RQ4: Which theme keywords are most abused by attackers, and how?}
In order to answer this question, we analyze the dataset $D_{malicious}$ mentioned above.
We use the python library {\tt wordninja} with English Wikipedia language model \cite{python_wordninja} to split domain name strings and extract COVID-19 themed keywords.
We observe that 4 keywords (i.e., {\em covid}, {\em corona}, {\em covid19}, and {\em coronavirus}) are most widely used as expected; they are followed by {\em mask}, {\em quarantine}, {\em virus}, {\em test}, {\em facemask}, {\em pandemic}, and {\em vaccine}. We extract more than 19,000 keywords. A further analysis of the domain names reveals that attackers create COVID-19 themed malicious websites with names containing geographical attributes. For example, {\tt coronaviruspreventionsanantonio.com}, {\tt coronavirusprecentionhouston.com}, and {\tt coronaviruspreventiondallas.com} use a combination of a city name and a COVID-19 themed keyword. Moreover, we observe the existence of COVID-19 themed ``parking'' websites, which have no content at the present time but might be used for upcoming COVID-19 themes.
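The actual splitting uses {\tt wordninja}; as a simplified illustration of theme-keyword extraction, a longest-first substring scan over domain names (example domains taken from the text):

```python
from collections import Counter

THEME_KEYWORDS = ["coronavirus", "covid19", "corona", "covid",
                  "facemask", "quarantine", "pandemic", "vaccine",
                  "mask", "virus", "test"]

def count_theme_keywords(domains):
    """Naive substring scan for COVID-19 theme keywords in domain
    names. Longest keywords are checked first so that e.g.
    'coronavirus' is not double counted as 'corona' + 'virus'.
    (The paper itself splits names with wordninja instead.)"""
    counts = Counter()
    for d in domains:
        name = d.lower()
        for kw in sorted(THEME_KEYWORDS, key=len, reverse=True):
            if kw in name:
                counts[kw] += 1
                name = name.replace(kw, "")   # avoid double counting
    return counts

counts = count_theme_keywords(["coronaviruspreventionsanantonio.com",
                               "covid19-test.net"])
```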
\begin{insight}
Attackers are crafty in using COVID-19 themed keywords and geographical information in creating COVID-19 themed malicious website domain names.
\end{insight}
\subsection{Detection Case Study}
Given $D_{initial}$, the detection case study proceeds as follows.
\subsubsection{Feature Definition and Extraction}
We define features according to the following aspects of websites: WHOIS (F1-F4), domain name lexical information (F5-F9), statistical information (F10), and Top-Level Domain or TLD (F11).
\begin{itemize}
\item Current WHOIS registration lifetime (F1): This is the number of days that have passed since a website's registration, with respect to the date when this feature's value is extracted (e.g., 08/07/2020 in our case).
\item Remaining WHOIS expiration
lifetime (F2): This is the number of remaining days before a website's WHOIS registration expires, with respect to the date when this feature's value is extracted (e.g., 08/07/2020 in our case).
\item Number of days since last WHOIS update (F3): This is the number of days elapsed since a website's last update with respect to the date when this feature's value is extracted (e.g., 08/07/2020 in our case).
\item WHOIS registrar reputation (F4): We propose measuring a WHOIS registrar's reputation as $\frac{n}{|D_{benign}|}$, where $n$ is the number of benign websites in $D_{benign}$ that are registered by this particular registrar and $|D_{benign}|$ is the size of set $D_{benign}$.
\item Number of dots in domain name (F5): This is the number of dots (character `.') in the domain name. For example, domain {\tt any.com} has 1 dot.
\item Domain hyphen count (F6): This is the number of hyphens (`-') in a domain name.
\item Domain vowel count (F7): This is the number of vowels (i.e., {\em a}, {\em e}, {\em i}, {\em o}, {\em u}) in a domain name.
\item Domain digits percentage (F8): This is the ratio of the number of digits (0-9) in a domain name to the number of characters including digits.
\item Domain unique alphabetic-numeric characters count (F9): This is the total number of unique alphabetic and numeric characters (i.e., a-z, A-Z, 0-9) in a domain name.
\item Domain entropy (F10): This is the Shannon entropy \cite{shannon_entropy} of the domain name (i.e., a kind of statistical information), which is computed based on the frequency of characters in the domain name.
\item TLD Reputation (F11): We propose measuring a TLD's reputation as
$\frac{m}{|D_{benign}|}$,
where $m$ is the number of websites in $D_{benign}$ that contain this particular TLD.
\end{itemize}
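The lexical and statistical features (F5--F10) can be extracted with a few lines of standard-library python; F10 is the Shannon entropy over character frequencies:

```python
import math
from collections import Counter

def lexical_features(domain):
    """Sketch of features F5-F10 for a domain-name string."""
    name = domain.lower()
    freqs = Counter(name)
    n = len(name)
    return {
        "dots": name.count("."),                                   # F5
        "hyphens": name.count("-"),                                # F6
        "vowels": sum(name.count(v) for v in "aeiou"),             # F7
        "digit_pct": sum(c.isdigit() for c in name) / n,           # F8
        "unique_alnum": len({c for c in name if c.isalnum()}),     # F9
        "entropy": -sum((k / n) * math.log2(k / n)                 # F10
                        for k in freqs.values()),
    }

f = lexical_features("covid19-test.net")   # hypothetical example domain
```

F4 and F11 (registrar and TLD reputation) are simple frequency ratios over $D_{benign}$ and are omitted here for brevity.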
\subsubsection{Data Pre-Processing}
Given that some websites may not have information for the features, it is important to consider different scenarios. In our example, we propose considering two datasets that can be derived from $D_{initial}$ because some websites do not have information for the WHOIS features.
\begin{itemize}
\item Dataset $D_1 \subset D_{initial}$ consists of websites for which WHOIS information is available (i.e.,
features F1-F4 are available). $D_1$ contains 21,749 websites in total, including 16,411 COVID-19 themed malicious websites and 5,338 benign websites.
\item Dataset $D_2 \subset D_{initial}$, where $D_1 \cap D_2 = \emptyset$, consists of websites for which WHOIS information is absent (i.e.,
features F1-F4 are entirely missing). $D_2$ contains 135,621 websites, including 42,540 malicious websites and 93,081 benign websites. For each website belonging to $D_2$, only values of the 7 features (i.e., F5-F11) are available.
\end{itemize}
\begin{table} [!htbp]
\caption{Relative importance of features in $D_1$ with respect to the random forest method.}
\label{tab:featureSelection}
\begin{center}
\begin{tabular}{|c|c||c|c|}
\hline
Feature & Importance & Feature & Importance \\
\hline
F1 & 0.429 & F7 & 0.080 \\
\hline
F2 & 0.094 & F8 & 0.009\\
\hline
F3 & 0.131 & F9 & 0.028 \\
\hline
F4 & 0.065 & F10 & 0.029 \\
\hline
F5 & 0.065 & F11 & 0.068 \\
\hline
F6 & 0.003 & &\\
\hline
\end{tabular}
\end{center}
\end{table}
Since only $D_1$ contains all WHOIS information, we use it for the feature selection study. For this purpose, we use the {\em random forest classification feature importance} method \cite{kuhn2013applied} (with the 80-20 splitting of training-test data) to
identify the important features.
Table \ref{tab:featureSelection} depicts the relative importance of the features in $D_1$. We observe that F6 and F8 have a very small relative importance (i.e., $< 0.01$) when compared to the others, suggesting that hyphens and digits are equally used in malicious or benign domain names.
Hence, we eliminate F6 and F8 in the remainder of the study of $D_1$.
In order to see whether or not the feature selection result is impacted by the data imbalance of $D_1$ (with the malicious:benign ratio being 3.1:1), we explore two widely-used methods: (i) {\em oversampling} the minority class to replicate some random examples; and (ii) {\em undersampling} the majority class to remove some random examples. At first, we do the 80-20 splitting of training-test data, and then change the malicious:benign ratio in the training set, while keeping the test set intact.
We wish to identify the ratio that achieves the highest $F1$-score.
In what follows we only report the results of Random Forest because it outperforms the other classifiers for the original dataset $D_1$.
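Random oversampling of the minority class of the training split to a target majority:minority ratio can be sketched in pure numpy ({\tt imblearn} provides production-ready equivalents; the ratio and seed below are illustrative):

```python
import numpy as np

def oversample_minority(X, y, target_ratio, seed=None):
    """Randomly replicate minority-class rows until the
    majority:minority ratio reaches target_ratio (e.g. 2.0 for 2:1).
    Applied to the training split only, as in the text."""
    rng = np.random.default_rng(seed)
    labels, counts = np.unique(y, return_counts=True)
    minority = labels[np.argmin(counts)]
    n_min, n_maj = counts.min(), counts.max()
    n_needed = int(n_maj / target_ratio) - n_min
    if n_needed <= 0:
        return X, y
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=n_needed, replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

# toy data: 6 majority (label 1) vs 1 minority (label 0), rebalanced to 2:1
X = np.arange(14, dtype=float).reshape(7, 2)
y = np.array([1, 1, 1, 1, 1, 1, 0])
X2, y2 = oversample_minority(X, y, target_ratio=2.0, seed=0)
```

Undersampling is the mirror image: rows of the majority class are randomly dropped instead.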
\begin{table}[!htbp]
\caption{Impact of the malicious:benign ratio on the effectiveness of the Random Forest classifier with {\em Oversampling} and {\em Undersampling}, where $D_1$ with ratio 3.1:1 is the original $D_1$.}
\label{tab:RatioParameter}
\vspace{-2em}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c}
\hline
Dataset & Method & {\em Ratio} & ACC & FPR & FNR & $F1$-score \\
\hline
$D_1$ & (none) & 3.1:1 & 0.980 & 0.030 & 0.017 & 0.987\\
\hline
$D_1$ & Oversample & 2:1& 0.980 & 0.030 & 0.018 & 0.986 \\
\hline
$D_1$ & Oversample & 1.67:1 & 0.980 & 0.027 & 0.017 & 0.988 \\
\hline
$D_1$ & Oversample & 1.43:1 & 0.979 & 0.028 & 0.019 & 0.986\\
\hline
$D_1$ & Oversample & 1.25:1 & 0.979 & 0.028 & 0.018 & 0.986\\
\hline
$D_1$ & Oversample & 1.11:1 & 0.979 & 0.027 & 0.019 & 0.986\\
\hline
$D_1$ & Oversample & 1:1 & 0.979 & 0.026 & 0.019 & 0.986\\
\hline \hline
$D_1$ & Undersample & 2:1 & 0.977 & 0.023 & 0.022 & 0.985 \\
\hline
$D_1$ & Undersample & 1.67:1 & 0.976 & 0.023 & 0.025 & 0.984 \\
\hline
$D_1$ & Undersample & 1.43:1 & 0.975 & 0.023 & 0.025 & 0.984 \\
\hline
$D_1$ & Undersample & 1.25:1 & 0.972 & 0.020 & 0.031 & 0.981\\
\hline
\end{tabular}
\end{center}
\end{table}
Table \ref{tab:RatioParameter} shows the impact of the malicious:benign ratio in the training set. We observe that the oversampling-incurred ratio 1.67:1 leads to the highest $F1$-score (and the second best FPR and lowest FNR), while undersampling never performs better than the original data ratio in terms of accuracy and $F1$-score. This can be explained by the fact that undersampling eliminates useful information.
This prompts us to use oversampling to achieve the 1.67:1 ratio when training classifiers, which turns $D_1$ into $D'_1$ (i.e., the training set is augmented).
\begin{figure}[!htbp]
\centering
\includegraphics[width=.33\textwidth]{figures/ConfusionMatrix.pdf}
\caption{Confusion matrix for (a) $D_1$ with 3.1:1 malicious:benign ratio in the training data and (b) $D'_1$ with 1.67:1 ratio in the training data.}
\label{fig:confusionMatrix}
\end{figure}
Figure \ref{fig:confusionMatrix} further highlights the {\em confusion matrices} of the experiments on the same test set but corresponding to $D_1$ and $D'_1$, which show a slight improvement in detection when augmenting the training set with oversampling.
\begin{insight}
The data imbalance issue does not affect the model performance significantly
in this case, perhaps because the degree of imbalance is not severe enough.
\end{insight}
\subsubsection{Training and Test}
Having addressed the issue of feature selection and data imbalance,
we consider the following classifiers: Random Forest (RF), Decision Tree (DT), Logistic Regression (LR), $K$-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Specifically, we use the python {\tt sklearn} module to import the following classifier algorithms: (i) Random Forest or RF with parameters {\tt n\_estimator}=$100$ (i.e., 100 trees in a forest) and {\tt criterion}={\em `entropy'} (i.e., entropy is used to measure information gain); (ii) $K$-Nearest Neighbor or KNN, with parameters {\tt n\_neighbors}=$8$ (i.e., 8 neighbors are considered), {\tt metric}={\em `minkowski'} with $p=2$ (i.e., the Minkowski metric with $p=2$ measures the distance between two feature vectors), and the remaining parameters set to their default values; (iii) Decision Tree or DT with default parameters; (iv) Logistic Regression or LR with default parameters; (v) Support Vector Machine or SVM with a {\tt linear} kernel and other default parameters. For voting the outputs of the five classifiers mentioned above, we use the {\tt VotingClassifier()} function and set {\tt voting}={\em `hard'} (i.e., majority voting).
We always consider the 80-20 splitting of the scaled training-test data.
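The setup described above can be assembled as follows. The classifier parameters are those stated in the text; the data here is synthetic (via {\tt make\_classification}), standing in for $D'_1$.

```python
# Sketch of the classifier setup described in the text, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=9, random_state=0)

# 80-20 splitting of the scaled training-test data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

clfs = [
    ("rf", RandomForestClassifier(n_estimators=100, criterion="entropy", random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=8, metric="minkowski", p=2)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("lr", LogisticRegression()),
    ("svm", SVC(kernel="linear")),
]
ensemble = VotingClassifier(estimators=clfs, voting="hard")  # majority voting
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(round(acc, 3))
```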
\subsubsection{Answering RQ5 and RQ6}
In order to answer RQ5 and RQ6, we conduct the following experiments, where we use the 80-20 train-test splitting of $D_1$ and then augment the training set as mentioned above.
Our experiments are conducted on a virtual machine on \url{https://www.chameleoncloud.org/}, running CentOS 7 on a machine of an x86\_64 processor with 48 cores and CPU frequency 3.1 GHz.
\begin{itemize}
\item Experiment (Exp.) 1: Use the lexical, statistical, and TLD features (i.e., F5, F7, F9-F11) only, while ignoring the WHOIS features. (This experiment is equally applicable to $D_2$, whose results are not reported owing to space limitations.)
\item Experiment (Exp.) 2: Use the WHOIS features (i.e., F1-F4), while ignoring all other features.
\item Experiment (Exp.) 3: Use both lexical and WHOIS features (i.e., F1-F5, F7, F9-F11).
\end{itemize}
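The three feature subsets can be selected by column, as in the following sketch. The column layout (F1--F11 in order) is an assumption for illustration; the subsets themselves are those listed above.

```python
# Hedged sketch: selecting the feature subsets of Exps. 1-3 from a feature
# matrix whose columns are assumed to be F1..F11 in order.
import numpy as np

cols = {f"F{i}": i - 1 for i in range(1, 12)}
lexical = [cols[f] for f in ("F5", "F7", "F9", "F10", "F11")]  # Exp. 1
whois = [cols[f] for f in ("F1", "F2", "F3", "F4")]            # Exp. 2
both = sorted(whois + lexical)                                 # Exp. 3

X = np.arange(33).reshape(3, 11)  # toy matrix, one column per feature
X_exp1, X_exp2, X_exp3 = X[:, lexical], X[:, whois], X[:, both]
print(X_exp1.shape, X_exp2.shape, X_exp3.shape)
```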
\begin{table} [!htbp]
\caption{Experimental results on dataset $D'_1$ with a range of classifiers (with oversampling), and their total CPU times for training and testing: Exp. 1 uses lexical features only; Exp. 2 uses WHOIS features only; Exp. 3 uses both lexical and WHOIS features. \label{tab:MLClassifierPerformance}}
\vspace{-2em}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Exp. & Classifier & ACC & FPR & FNR & $F1$-score & Execution \\
& & & & & & Time(s)\\
\hline
1 & RF & 0.924 & 0.150 & 0.052 & 0.950 & 0.48\\
\hline
2 & RF & 0.977 & 0.025 & 0.023 & 0.985 & 0.59 \\
\hline
3 & RF & 0.980 & 0.027 & 0.017 & 0.988 & 0.64\\
\hline
1 & KNN & 0.887 & 0.199 & 0.086 & 0.925 & 0.40\\
\hline
2 & KNN & 0.949 & 0.034 & 0.056 & 0.966 & 0.25 \\
\hline
3 & KNN & 0.947 & 0.031 & 0.060 & 0.964 & 0.30\\
\hline
1 & DT & 0.917 & 0.151 & 0.061 & 0.945 & 0.07\\
\hline
2 & DT & 0.973 & 0.045 & 0.022 & 0.982 & 0.08 \\
\hline
3 & DT & 0.974 & 0.051 & 0.019 & 0.983 & 0.14\\
\hline
1 & LR& 0.885 & 0.216 & 0.082 &0.924 & 20.30\\
\hline
2 & LR & 0.883 & 0.362 & 0.038 & 0.926 & 23.03\\
\hline
3 & LR & 0.918 & 0.178 & 0.051 & 0.946 & 44.40\\
\hline
1 & SVM & 0.888 & 0.220 & 0.078 & 0.925 & 1.69\\
\hline
2 & SVM & 0.881 & 0.373 & 0.038 & 0.924 & 1.68\\
\hline
3 & SVM & 0.920 & 0.164 & 0.054 & 0.946 & 2.38\\
\hline
1 & Ensemble & 0.916 & 0.171 & 0.056 & 0.945 & 21.40\\
\hline
2 & Ensemble & 0.962 & 0.031 & 0.041 & 0.974 & 24.75\\
\hline
3 & Ensemble & 0.970 & 0.035 & 0.028 & 0.980 & 45.70\\
\hline
\end{tabular}
\end{center}
\end{table}
Table \ref{tab:MLClassifierPerformance} summarizes the experimental results with a range of classifiers and the actual time spent on training a model and classifying the entire test set.
We make several observations.
First, for a specific classifier, using WHOIS features alone (Exp. 2) almost always leads to significantly higher effectiveness than using lexical features alone (Exp. 1), except for Logistic Regression. Second, for a fixed classifier, using both lexical and WHOIS features together (i.e., Exp. 3) always performs better than using lexical or WHOIS features alone.
Third, among the classifiers considered, Random Forest performs the best in every metric
in each experiment.
In particular, Random Forest (i.e., a non-linear classifier) achieves a better performance than the Ensemble method because there are classifiers (e.g., Logistic Regression and SVM) that are substantially less accurate than the other classifiers and therefore ``hurt'' the voting results. Fourth, Decision Tree has the fastest execution time, followed by KNN and Random Forest, while Logistic Regression is the slowest and causes a delay for the voting ensemble.
To understand the generalizability, when conducting Exp. 1 on the augmented $D'_2$ with the benign:malicious ratio at 1.25:1, we observe that Random Forest outperforms the other models by achieving a 0.947 accuracy, a 0.066 FPR, a 0.041 FNR, and a 0.947 $F1$-score.
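For reference, the metrics reported throughout (ACC, FPR, FNR, $F1$-score) can be computed from a confusion matrix as in the following sketch, with ``malicious'' as the positive class; the labels below are toy values, not results from our datasets.

```python
# Sketch: ACC, FPR, FNR, and F1-score from a confusion matrix, with the
# malicious class (1) as the positive class. Toy labels for illustration.
from sklearn.metrics import confusion_matrix, f1_score

y_true = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
acc = (tp + tn) / (tp + tn + fp + fn)
fpr = fp / (fp + tn)   # benign sites flagged as malicious
fnr = fn / (fn + tp)   # malicious sites missed
f1 = f1_score(y_true, y_pred)
print(acc, round(fpr, 3), round(fnr, 3), round(f1, 3))
```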
\begin{insight}
COVID-19 themed malicious website detectors must consider WHOIS features; and Random Forest performs the best among the classifiers that are considered.
\end{insight}
\subsection{Recommendations Regarding Remote Work}
Since there is no vaccine for COVID-19 yet, stay-at-home orders along with social distancing have been the most effective ways to slow down the infection rate and allow health systems to treat COVID-19 patients without overwhelming their resources \cite{Stay_home_covid19}. Thus, while assessing their cybersecurity plans for business continuity, enterprises need to keep the WFH policy in mind. While the WFH policy is active, enterprises should publish clear guidelines for their remote employees. The community can choose from the following recommendations according to their budgets and valuation of assets.
\subsubsection{Secure home-work environments}
\begin{recommendation}
Practice ``digital distancing'' by setting up routers with two separate networks, isolating enterprise networks from unpatched home devices \cite{CBDigitalDistancing}.
\end{recommendation}
\begin{recommendation}
Enterprises should provide tips to secure home networks and deliver security updates outside of corporate networks \cite{CBDigitalDistancing}.
\end{recommendation}
\begin{recommendation}
Provide employees with work laptops at home so that all the essential standard tools are readily available on those machines; this can protect enterprises by reducing the attack surface created by unverified and unchecked third-party software installations at the employee's end.
\end{recommendation}
\begin{recommendation}
\label{rec:dnsSettingsPass}
Provide guidelines for employees and customers to use stronger passwords for PCs and routers, and to regularly monitor DNS settings.
\end{recommendation}
\begin{recommendation}
Build enterprise threat hunting capabilities within employees' home network environments \cite{CBDigitalDistancing}.
\end{recommendation}
\subsubsection{Do not neglect employee information and privacy}
\begin{recommendation}
Record only factual information and the minimum amount of information necessary \cite{Addiscott2020}.
\end{recommendation}
\subsubsection{Secure remote communication}
\begin{recommendation}
Always use passwords for remote meetings and restrict users from joining without admin verification, to avoid any eavesdropping \cite{CBDigitalDistancing}.
\end{recommendation}
\begin{recommendation}
Do not use keywords such as `sensitive', `secret', `highly-classified' as meeting titles \cite{CBDigitalDistancing}.
\end{recommendation}
\begin{recommendation}
Leverage virtual private networks (VPNs) for remote access to the enterprise network while working from home \cite{CBWashingHand20s}.
\end{recommendation}
\begin{recommendation}
\label{rec:mfa}
Use multi-factor authentication (MFA) for all critical communications and transactions \cite{CBWashingHand20s}.
\end{recommendation}
\subsubsection{Build more informed users}
\begin{recommendation}
Enterprises should specify (or whitelist) VPN software and remote access tools for their employees, which may prevent them from falling into VPN lure traps.
\end{recommendation}
\begin{recommendation}
Provide clear direction on remote meeting software and meeting norms while working from home.
\end{recommendation}
\begin{recommendation}
Enterprises should provide specific guidelines for employees and consumers to securely access data and networks remotely.
\end{recommendation}
\subsection{Recommendations Independent of Remote Work}
Though the WFH policy is the reality in many industries in the current situation, COVID-19 themed attacks may persist even after the stay-at-home orders are lifted.
Hence, to be resilient during this crisis and against future similarly themed cyberattacks, there is no alternative to practicing standard cyber hygiene, staying vigilant, and establishing more proactive defense strategies. The recommendations are as follows.
\subsubsection{Secure end-points}
\begin{recommendation}
Timely patching and updating of all software and operating systems \cite{CBWashingHand20s}.
\end{recommendation}
\subsubsection{Secure networks \& systems}
\begin{recommendation}
Use mandatory passwords for changing any application settings; when working with sensitive infrastructures, system-level whitelisting of applications can be practiced \cite{CBDigitalDistancing}.
\end{recommendation}
\begin{recommendation}
Establish proactive defense against COVID-19 related malicious and abused websites, domains, URLs, and IP addresses.
\end{recommendation}
\begin{recommendation}
Establish proactive defense against COVID-19 themed malicious emails by scanning and reviewing emails with COVID-19 related subjects, bodies, links, and attachments before delivering them to employees' inboxes.
\end{recommendation}
\subsubsection{Build more informed users}
\begin{recommendation}
Enterprises should establish a secure communication channel, either through email or a web portal, for reliable COVID-19 related information and updates, and should try to boost employee morale by showing more connection and compassion.
\end{recommendation}
\begin{recommendation}
\label{rec:awareOnlure}
Establish an effective way to regularly communicate about new COVID-19 related lures, misinformation, myths, and risk factors to employees and customers to educate them beforehand \cite{CBDigitalDistancing}.
\end{recommendation}
\begin{recommendation}
Train employees to be skeptical and vigilant about emails with COVID-19 related updates, information, guidelines, or about ongoing sales or deals.
\end{recommendation}
\begin{recommendation}
Encourage and provide guidelines for employees and customers to report any suspicions, scams, or abuses to IT (or security) teams.
\end{recommendation}
\subsubsection{Coordinated cyber security management}
\begin{recommendation}
Security teams should coordinate with decision-makers to set up compact incident response plans, following the standard NIST cybersecurity framework \cite{NIST_framework}, against any successful COVID-19 themed attacks.
\end{recommendation}
\begin{recommendation}
Enterprises should be proactive and rapidly adapt existing models and frameworks to detect new COVID-19 themed attacks.
\end{recommendation}
\begin{recommendation}
Maintain cyber hygiene by checking reviews of software before use and discussing the pros and cons of new software among peers.
\end{recommendation}
\subsection{Recommendations Against Rising Non-COVID-19 Themed Cyberattacks}
Along with the rise in COVID-19 themed cyberattacks discussed in Section \ref{sec:methodology}, we also observe other state-of-the-art non-COVID-19 themed attacks gaining renewed interest because of the added incentives amid the COVID-19 pandemic. We list those attacks here because being informed and vigilant as a community can help us stay safe.
\subsubsection{Ransomware attacks}
Ransomware is a special kind of malware and propagates similarly through websites, emails, and mobile applications. Thus, securing these artifacts may also provide proactive security against ransomware attacks. However, some advanced multi-stage attacks (APTs) may target healthcare and COVID-19 research facilities with ransomware. Compliance with standard cyber hygiene and best security practices should keep enterprises safe from ransomware attacks \cite{CISA_ransomware, hornetsec_ransomware}. Moreover, enterprises should be vigilant about new zero-day CVEs and software vulnerabilities to keep their networks safe.
\subsubsection{Impersonation attacks}
Since the COVID-19 pandemic forces us to go virtual for business continuity, it creates an opportunity for attackers to impersonate legitimate entities (e.g., government, doctor, CEO, WHO) in order to make fraudulent transactions, compromise business emails, and launch other similar attacks. Usually, impersonation attacks are carried out through emails, chats, phone calls, and SMS messages. As per Recommendations \ref{rec:mfa} and \ref{rec:awareOnlure}, enterprises should inform their employees about such lures and enforce MFA for all critical transactions and communications during this pandemic.
\subsubsection{Defense against router DNS hijacking}
Router DNS hijacking is an attack where attackers get remote access to routers and change DNS settings to direct victims to attacker-controlled websites and launch malware or phishing attacks. We have seen reports of these attacks rising during COVID-19, as attackers have far more incentive than in regular times with active WFH policies \cite{oski_malware_attack_covid19}. The most effective defense against router DNS hijacking is to practice Recommendation \ref{rec:dnsSettingsPass} \cite{DNS_hijack_protection}. Additionally, defenders can raise alerts if the browser automatically redirects from a reputable website to a non-reputable one because of a change in the DNS settings.
\subsubsection{Defense against Zoom-bombing attacks}
Zoom-bombing is when an unauthorized person or stranger joins a Zoom meeting/chat session and causes disorder by saying offensive things or even photobombing the meeting by sharing pornographic or hateful images. When using Zoom for online classrooms, meetings, or events, the host is advised to make meetings private and require a password, or to use the waiting room feature to control the admittance of additional people. The links to a teleconference or classroom should be sent directly to the individual participants and never be publicly available in a social media post. The host should turn off the annotation feature and restrict other features as needed in the host controls. Those managing a conference in Zoom should change the screen sharing option to ``Host-Only''. Finally, one should always run the latest version of Zoom.
\subsubsection{Defense against HTML/XSS injection attacks}
Website and mobile app injection attacks are handy for attackers during a pandemic to propagate various malware. These attacks can be far more impactful when injected into a top legitimate website offering COVID-19 related data and updates to a large volume of live users. Hence, such organizations and websites should regularly scan and check for changes in their website HTML contents, added scripts, hidden divisions, and images.
\subsection{Defense Against Malicious URLs and Websites}
Although the problem of COVID-19 themed malicious websites has not been investigated until now, the problem of malicious websites has been studied in the literature prior to the COVID-19 pandemic. The problem of detecting malicious URLs generated by domain generation algorithms has been investigated in \cite{DLSTM_dga_detection}. The problem of detecting phishing websites has been addressed via various approaches, including: the descriptive features-based model \cite{Christou2020PhishingUD}, the lexical and HTML features-based model \cite{DQLearning_PhishingDetection}, the HTML and URL features-based model \cite{Li2019ASM}, and the natural language processing and word vector features-based model \cite{SAHINGOZ2019345}. The problem of detecting malicious websites has been addressed via the following approaches: leveraging application and network layer information \cite{XuCodaspy13-maliciousURL}, leveraging image recognition \cite{MaliciousWebImage_CNN}, leveraging generic URL features \cite{DBLP:journals/tist/MaSSV11,generic_features}, leveraging character-level embedding or keyword-based recurrent neural networks \cite{whats_in_url,farhan_douksieh_abdi_2017_1155304,mal_url_rnn}, and the notion of adversarial malicious website detection \cite{cns14MaliciousWebsiteXu}.
However, these studies do not consider features pertinent to the COVID-19 pandemic, which we leverage. Nevertheless, the present study falls under the umbrella of cybersecurity data analytics \cite{XuAgility2019,XuIEEETIFS2013,XuTIFS2015,XuPLoSOne2015,XuTIFSDataBreach2018}, which in turn belongs to the Cybersecurity Dynamics framework \cite{XuBookChapterCD2019,XuIEEETNSE2018,XuHotSoS2018Firewall,Pendleton16,XuHotSoS2018Diversity}.
\section{Introduction}
\suppressfloats
\input{figures/envs}
Reinforcement learning (RL) has shown impressive results on artificial tasks such as game playing \citep{Mnih2015Human-levelLearning,Silver2016MasteringSearch}, where collecting experience is cheap. However, for robotics tasks, such as locomotion and manipulation \citep{Kober2013,Gu2016}, current algorithms often require manually designed smooth reward functions, limiting their applicability in real-world scenarios. In this paper, we approach learning from sparse rewards using hierarchical reinforcement learning (HRL), where multiple levels of temporally-abstract controllers modulate one another to produce an action. We propose a novel hierarchical agent that is simple to train and learns to push objects and stack blocks end-to-end from sparse rewards, as shown in \cref{fig:envs}. To achieve this, we consider three common challenges of HRL.
\paragraph{Stability.}
Simultaneously updating the levels of a hierarchical agent introduces non-stationarity since the levels affect one another, resulting in unstable learning dynamics. Prior HRL algorithms thus often introduce multiple training phases to stabilize learning \citep{Heess2016LearningControllers,MetaLearningSharedHierarchies}. This requires more effort in implementation and introduces additional hyperparameters, which may in part prevent HRL from becoming standard for RL problems. We alleviate this problem by jointly training the levels as separate PPO agents \citep{PPO}, encouraging smooth changes in all levels of the hierarchy. We hope that the simplicity of this solution helps to make HRL more generally applicable.
\paragraph{Modulation.}
A critical design choice of hierarchical agents is the form of communication between levels. Typically, each level receives a modulation signal from the more abstract level above it \citep{Dayan1993FeudalLearning}. Such a signal could be a categorical variable called an option \citep{sutton1999between,MetaLearningSharedHierarchies,Florensa2017StochasticLearning} or a continuous-valued activation vector \citep{Heess2016LearningControllers,Vezhnevets2017FeUdalLearning,LatentSpacePolicies}. While a categorical signal allows selecting exactly one skill at a time, a continuous signal allows smooth modulation of lower levels. Inspired by this trade-off, we propose communication via bit-vectors, which allows mixing multiple skills. Empirically, this outperforms categorical modulation signals.
\paragraph{Exploration.}
While hierarchical controllers with different time-scales have a built-in prior for temporally extended behavior, this does not necessarily help the exploration of skills \citep{DIAYN}. For this reason, HRL methods often report transfer performance after pre-training the agent or part of it on manually defined subtasks \citep{Heess2016LearningControllers,tessler2017dsn,kulkarni2016hierarchical}. Intrinsic motivation or curiosity \citep{schmidhuber2010formal,pathak2017curiosity} is a common approach to exploration, but is not commonly used for hierarchical agents. We achieve temporally extended exploration by employing intrinsic motivation at each level of the hierarchy, resulting in an agent that learns from sparse rewards without pre-training on simpler tasks.
We summarize the main contributions of this paper as follows:
\begin{itemize}
\item We introduce modulated policy hierarchies (MPH), a hierarchical agent that is trained jointly without requiring pre-training tasks or multiple training phases.
\item We model modulation signals as bit vectors instead of the typical one-hot modulation, allowing the agent to interpolate between its skills.
\item We employ intrinsic motivation based on prediction error of dynamics models on all levels of the hierarchy, resulting in temporally extended exploration.
\item We evaluate our method together with recent HRL algorithms on pushing and sparse block stacking and provide ablation studies of the design choices.
\end{itemize}
\section{Related work}
\label{sec:related_work}
\paragraph{Manipulation.}
Learning algorithms have been applied to a variety of robotics tasks, such as grasping~\citep{Pinto2016,Lampe2013AcquiringLearning}, opening bottles using supervised learning~\citep{Levine2015End-to-EndPolicies}, learning to grasp with reinforcement learning~\citep{Levine2016LearningCollection}, opening doors~\citep{Gu2016}, and stacking Lego blocks~\citep{Popov2017Data-efficientManipulation}. Here, we examine a pushing task, FetchPush-v1, and a stacking task similar to the one described by \citet{Popov2017Data-efficientManipulation}. We focus on pushing and stacking tasks as they conceptually require subskills, such as reaching and grasping. Previously, solving block stacking required to manually design a smooth reward function~\citep{Popov2017Data-efficientManipulation} or imitation learning~\citep{2017arXiv170307326D} with an existing low-level controller that could already grasp and place blocks.
\paragraph{Stability.}
HRL inherits common instability issues of RL~\citep{DeepRLThatMatters}. Moreover, training multiple controllers simultaneously can lead to degenerate solutions~\citep{bacon2017option}. Pre-training the low levels~\citep{Heess2016LearningControllers} or alternating training between levels~\citep{MetaLearningSharedHierarchies} has been proposed to improve learning stability. Other methods add regularization terms based on entropy~\citep{bacon2017option,hausman2018learning,LatentSpacePolicies} or mutual information~\citep{daniel2016hierarchical,Florensa2017StochasticLearning}. We find that the slow change of the action distribution that PPO encourages can be a simple and practical way to mitigate instability. In contrast to \citet{MetaLearningSharedHierarchies}, we find that we can train all levels simultaneously without degeneracies.
\paragraph{Modulation.}
A core design choice of HRL agents is how higher levels communicate with the levels underneath them. The options framework \citep{sutton1999between} uses a categorical signal that switches between low-level policies that can be implemented as separate networks \citep{tessler2017dsn,MetaLearningSharedHierarchies}. This approach requires a large number of parameters and does not allow skills to share information. Another line of work is based on feudal learning~\citep{Dayan1993FeudalLearning}, where a typically continuous-valued signal modulates the lower level. In this context, the modulation signal is often referred to as a goal. It can be specified directly in the observation space~\citep{hiro2018arxiv} or in a learned embedding space~\citep{Vezhnevets2017FeUdalLearning,kulkarni2016hierarchical,Heess2016LearningControllers}. However, such methods usually require prior knowledge in the form of pre-training~\citep{hausman2018learning} or reward shaping~\citep{LatentSpacePolicies}, which our method with binary signals does not need.
\paragraph{Exploration.}
One of the major goals of HRL is to address hard exploration problems with long-horizon and sparse rewards. Structured exploration provides a mechanism to more effectively guide exploration and is usually referred to as intrinsic motivation. The intrinsic motivation methods vary from curiosity-based bonuses~\citep{houthooft2016vime,pathak2017curiosity} to state visitation counters~\citep{BellemareSOSSM16, OstrovskiBOM17}. However, such approaches have only been explored for single-level policies while one can imagine taking advantage of the intrinsic motivation at both layers of the hierarchy.
\section{Modulated Policy Hierarchy}
\label{sec:approach}
\input{figures/models}
\paragraph{Preliminaries.}
We follow the typical formulation of reinforcement learning as Markov decision process. At each discrete time step $t$, the agent receives state $s_t$ from the environment, selects an action $a_t$, receives a scalar reward $r_t$, and transitions into a new state $s_{t+1}$. We aim to maximize the expected return $\mathbb{E}_{\pi} \sum_{k=0}^{\infty} \gamma^k r_{t+k}$ using a policy $\pi: \mathcal{S}\times\mathcal{A}\to [0,1]$. The policy is parameterized with parameters $\theta$ and denoted as $\pi_{\theta}(a_t|s_t)$, thus at each timestep the chosen action is given by $a_t \sim \pi_{\theta}(\cdot | s_t)$.
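As a minimal concrete illustration of this objective, the discounted return of a finite reward sequence can be computed directly (the numbers below are toy values, not from our experiments):

```python
def discounted_return(rewards, gamma=0.985):
    """Compute sum_{k>=0} gamma^k * r_{t+k} for a finite reward sequence."""
    return sum(r * gamma ** k for k, r in enumerate(rewards))

# With sparse rewards, a success arriving late contributes little:
print(discounted_return([0.0, 0.0, 0.0, 1.0], gamma=0.5))  # 0.125
```

The late sparse reward being discounted is precisely what makes long-horizon exploration hard and motivates the hierarchical structure below.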
\paragraph{Hierarchical controller.}
MPH learns a hierarchy of policies $\Pi = \{\pi_1, \pi_2, \ldots, \pi_n\}$, where in our experiments we consider two-level policies ($n=2$). Each MPH policy has its own state and action space, $\pi_k: \mathcal{S}^k\times\mathcal{A}^k\to [0,1]$. Each policy is represented with a single network and modulated by bit vectors from the policies above (\cref{fig:models_mph}). In contrast, the options framework switches between independent skill networks by a categorical modulation signal (\cref{fig:models_options}). The categorical signal may also be used with the skill policies merged into a single network (\cref{fig:models_onehot}).
\paragraph{Modulation signal.}
The highest-level (master) policy $\pi_n$ receives the state from the environment ($\mathcal{S}^n = \mathcal{S}$) and outputs a bit vector of size $m_n$ as its action. Each intermediate-level policy $\pi_k$ receives the environment state concatenated with the modulation signals from the layers above. The policies are implemented as fully-connected neural networks that predict the probabilities of the bits. Given the probabilities, the modulation signal is generated by sampling from $m_k$ independent Bernoulli distributions. Once sampled, the signal is passed to all lower policies. Finally, the lowest-level policy $\pi_1$ (worker) observes all the modulation signals and the state. In the two-level structure, $\pi_1$ receives the environment state and the master modulation, $s^1 \in \mathcal{S}^1 = \mathcal{S} \times \mathcal{A}^2$. The worker policy outputs the final action $a^1 \in \mathcal{A}^1 = \mathcal{A}$, which is applied to the environment.
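The sampling step can be sketched in a few lines; the bit probabilities and the state below are hypothetical placeholders standing in for actual network outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_modulation(bit_probs):
    """Draw an m_k-dimensional bit vector from independent Bernoullis."""
    return (rng.random(len(bit_probs)) < np.asarray(bit_probs)).astype(int)

probs = [0.9, 0.1, 0.5]            # hypothetical master output, m_n = 3
signal = sample_modulation(probs)  # with this seed: array([1, 0, 1])

# The worker consumes the state concatenated with the modulation signal
state = np.zeros(4)                # placeholder environment state
worker_input = np.concatenate([state, signal])
```

Because each bit is sampled independently, several bits can be set at once, which is what lets MPH mix skills rather than select a single one.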
\paragraph{Time scales.}
To encourage each level in the hierarchy to attend to different time-scales, we activate higher-level policies less frequently, i.e., $T_{k+1} > T_k$ where $T_k$ is the time-scale of the policy at level $k$. When a policy is inactive in a given time step, it outputs the same modulation signal as was generated in the previous time step, which promotes consistency in higher-level decisions and facilitates longer-term planning. The policies at each level only receive inputs at time steps for which they are active.
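This gating can be sketched as a two-level rollout loop; `master` and `worker` below are hypothetical stand-ins for the policy networks:

```python
def run_hierarchy(env_states, master, worker, T_master=4):
    """Roll out a two-level hierarchy: the master acts every T_master
    steps and its modulation signal is held constant in between."""
    actions, signal = [], None
    for t, state in enumerate(env_states):
        if t % T_master == 0:          # master active on its time-scale only
            signal = master(state)     # fresh modulation bits
        actions.append(worker(state, signal))  # worker acts at every step
    return actions

# Dummy callables: count master activations over a 10-step episode
master_calls = []
master = lambda s: master_calls.append(s) or [1, 0]
worker = lambda s, sig: (s, tuple(sig))
actions = run_hierarchy(range(10), master, worker, T_master=4)
print(len(master_calls))  # 3 (activated at t = 0, 4, 8)
```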
\paragraph{Optimization.}
We train MPH policies using PPO \citep{PPO}, a state-of-the-art on-policy method. PPO guarantees that after each update the new policy does not deviate too far from the old one in terms of KL divergence. We use PPO for each layer independently, which also means that the MDPs seen by high-level policies change during training due to updates of the low-level policies. However, given the PPO guarantees, we can ensure that after a training step the MDP at each layer of the hierarchy remains close to the old MDP in terms of the change in transition probabilities. As a result, the optimization problem solved by PPO for higher layers changes smoothly during the updates. This makes MPH more stable to train than most HRL approaches. Please refer to \cref{appendix:derivations} for exact bounds and the full derivation.
\paragraph{Hierarchical exploration.}
\label{sec:curiosity}
Since MPH is designed for environments with sparse reward signals, we employ intrinsic motivation to accelerate learning. As suggested by \cite{pathak2017curiosity}, we add intrinsic motivation to our agent in the form of a curiosity-driven exploration bonus. We apply this independently for each level of the hierarchy and on the corresponding time-scale. In practice, this means that higher-level policies, which operate on longer time-scales, are more curious about longer term effects than lower-level policies. The reward for a policy at level $k$ is defined as
\begin{equation}
R^k_t = R^\text{env}_t + \norm{\hat{\phi}^F_k({s}^k_{t+1}) - \phi_k(s^k_{t+1})}_2,
\end{equation}
where $\phi_k$ is a learned embedding and $\hat{\phi}^F_k(s^k_{t+1})$ is a prediction of the next state $s^k_{t+1}$ given $(s^k_t, a^k_t)$. The standard method for learning $\phi_k$ requires an inverse model for the action prediction, but we find that training a \emph{reverse} model instead works better. Specifically, the reverse model predicts the previous state $s^k_{t}$ given $(s^k_{t+1}, a^k_{t})$. To learn the embedding, we jointly train forward and reverse models by minimizing the loss
\begin{equation}
L = \beta \norm{\hat{\phi}^F_k({s}^k_{t+1}) - \phi_k(s^k_{t+1})}_2 + (1-\beta) \norm{\hat{\phi}^R_k(s^k_{t}) - \phi_k(s^k_{t})}_2 - \lambda (\norm{\phi_k(s^k_{t})}_1 + \norm{\phi_k(s^k_{t+1})}_1),
\end{equation}%
where we add a regularization term to prevent trivial embeddings $\phi_k$ to be learned; $\beta \in (0,1)$ is a scalar weighting the reverse model loss against the forward model loss, and $\lambda \in \mathbb{R}$ is a regularization scaling factor.
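Operating on precomputed embedding vectors, the bonus and the loss can be sketched in numpy as below; in practice $\phi_k$ and both models are neural networks trained jointly by gradient descent:

```python
import numpy as np

def intrinsic_bonus(phi_next_pred, phi_next):
    """Curiosity bonus: forward-model prediction error in embedding space."""
    return float(np.linalg.norm(phi_next_pred - phi_next))

def embedding_loss(phi_f_pred, phi_next, phi_r_pred, phi_prev,
                   beta=0.2, lam=1e-3):
    """Joint forward/reverse loss; the subtracted L1 term discourages the
    trivial all-zero embedding (all inputs are hypothetical arrays)."""
    forward = np.linalg.norm(phi_f_pred - phi_next)
    reverse = np.linalg.norm(phi_r_pred - phi_prev)
    reg = np.abs(phi_prev).sum() + np.abs(phi_next).sum()
    return float(beta * forward + (1 - beta) * reverse - lam * reg)
```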
\section{Conclusion}
We introduced Modulated Policy Hierarchies (MPHs) to address environments with sparse rewards that can be decomposed into subtasks. By combining rich modulation signals, temporal abstraction, and intrinsic motivation, MPH benefits from better exploration and increased training stability. Moreover, in contrast to many state-of-the-art approaches, MPH does not require pre-training, multiple training phases, or manual reward shaping. We evaluated MPH on two simulated robot manipulation tasks: pushing and block stacking. In both cases, MPH outperformed the baselines and the recently proposed MLSH algorithm, suggesting that our approach may be a fertile direction for further investigation.
\paragraph{Acknowledgements.} This work was supported in part by ERC advanced grant Allegro.
\section{Experiments}
\label{sec:experiments}
We compare our approach to baselines and state-of-the-art approaches described in \cref{sec:baselines}. We evaluate our approach on two tasks with sparse rewards: block pushing and block stacking (see \cref{fig:envs}). First, we show that MPH outperforms the baselines on the block stacking in \cref{sec:stacking} and analyze the modulation signals produced by the master policy. Second, we compare MPH to baselines on the pushing task in \cref{sec:pushing}. Third, we show the benefits of temporally extended intrinsic motivation in \cref{sec:motivation}.
\subsection{Baselines}
\label{sec:baselines}
We compare to the following baselines:
\begin{itemize}
\item\textbf{PPO}\quad A flat policy trained using PPO.
\item\textbf{options}\quad An options hierarchy with separate skill networks corresponding to \cref{fig:models_options}.
\item\textbf{one-hot}\quad A two-level hierarchy with 1-hot modulation signal as in \cref{fig:models_onehot}.
\item\textbf{MLSH}\quad Meta Learning Shared Hierarchy by \citet{MetaLearningSharedHierarchies}.
\end{itemize}
All the hierarchies employ PPO as the core optimization method and use temporal abstraction for the master policy. We share the common hyperparameters between MPH and the baselines for each task. We discuss the hyperparameters in more detail in \cref{sec:stacking} and \cref{sec:pushing}. The last approach, MLSH~\citep{MetaLearningSharedHierarchies}, is a recent, state-of-the-art approach that learns a set of skill policies switched by a master policy. MLSH is trained stage-wise: a warm-up period where only the master is updated is alternated with a period where skills and master are trained jointly. We implemented the first three approaches and rely on the code of MLSH released by its authors.
\subsection{Stacking}
\label{sec:stacking}
\input{figures/performance}
\paragraph{Task description.}
The block stacking task is a \texttt{pybullet}~\citep{CoumansBulletEngine.} based simulation environment (see \cref{fig:envs} right). We use a model of the 7-DOF Kuka LBR arm with a 1-DOF pinch gripper. The scene contains two blocks and the goal is to stack one on top of the other. All episodes start in a randomly initialized state in terms of robot configuration and object placement. The state perceived by the agent consists of the angles and the angular velocities of the arm and the gripper, the tactile sensor for each finger of the gripper, location and orientation of each object in the scene as well as the relative distances of the two blocks to the pinch position of the gripper. The agent acts at the frequency of 40 Hz and outputs desired joint position change which is then applied using position control. The time horizon is 200 timesteps. The agent receives the following sparse rewards: a) for touching a block with the gripper, b) for successfully lifting a block, c) for holding a block above another one, and d) for placing a block on top of another block. We also reward the agent with a larger reward when the objects are stacked and the gripper pinch position is far enough from the tower.
\paragraph{Hyperparameters.}
We use identical network architectures and common hyperparameters for all the approaches including MPH. For the policies, value functions, and models, we use fully connected neural networks with 2 hidden layers, each consisting of 64 hidden units with tanh activation. We use the implementation of PPO from \citet{hafner2017tfagents} and collect a batch of 50 rollouts using parallel environments. The learning rate is set to 0.0001, 0.01, and 0.005 for the policy, value-function, and model networks, respectively. We use Adam as the optimizer. The maximum KL divergence step size of PPO is set to 0.001 and 0.002 for the master and the worker policies, respectively. We update both the policy and the value networks using 40 training epochs. We set the discount factor, $\gamma$, to 0.985 and the model-loss coefficient, $\beta$, to 0.2. We use a width of 3 for all modulation signals and as the number of skills for the baselines. For the master policy time-scale, we choose the best value among 4, 8, and 16. The options framework uses a time-scale of 8 for the master; the 1-hot baseline and MPH use a time-scale of 4. In the case of MLSH we adapt some of the parameters according to the suggestions of the authors: we use learning rates of 0.01 and 0.0003 for the master and the skill policies respectively, use 10 groups of 12 cores to train MLSH, a warmup time of 20 (the best among 10, 20, 30), a training time of 70 (the best among 30, 50, 70), and a master policy time-scale of 25 (the best among 10, 25, 50).
\paragraph{Performance.}
\Cref{fig:stacking} shows that MPH outperforms the baselines on the stacking problem. We compare the success rates of the approaches averaged over 50 episodes. The stacking is considered successful if at the end of the episode the blocks are in a stacked configuration without any block being in contact with the robot. The flat PPO policy does not solve the task and on average has a success rate of $24\%$. The approach with the 1-hot modulation signal achieves a success rate of $42\%$. The options framework stacks the blocks in $54\%$ of episodes on average but takes more time to train. MLSH learns faster than the two previously discussed methods; however, it plateaus and reaches the same success rate as the options framework. MPH outperforms all the baselines, both in terms of final average score and speed of learning. MPH achieves a success rate of $70\%$ on average (5 seeds) and the best random seed stacks the blocks in $98\%$ of episodes. In contrast to the options framework, MPH uses the whole batch to train all the networks and, in contrast to MLSH, trains jointly in a single phase, always updating all the networks.
\paragraph{Modulation.}
\input{figures/skills}
To obtain a better understanding of the role of the modulation signal, we plot histograms of the master policies' decisions for both the options baseline and MPH. \cref{fig:skills} shows the histogram for a single random seed robustly solving the task. First, we notice that the options master (acting on a time-scale of 8) takes consistent decisions and prefers certain skills over others at each timestep (\cref{fig:skills} top). We highlight the fact that the master policy network is memoryless and does not observe the current timestep value. In the beginning, the master chooses the 3rd skill for about 24 timesteps, then it chooses the 2nd skill for roughly 16 timesteps and finally the first one for the rest of the episode. We thus conclude that the skills correspond to reaching, lifting, and placing primitives, which is confirmed by observing the policy acting. Once trained, the options framework solves the problem in the first third of an episode and spends the rest of the time avoiding contact with any block (which requires no specific skill). We observe a similar pattern for the MPH modulation signal. The master policy of MPH acts on a time-scale of 4 and changes the modulation signal in roughly the same time intervals as the options master. However, MPH typically employs more than a single bit and benefits from a higher-capacity modulation signal than categorical methods like options and MLSH.
\subsection{Pushing}
\label{sec:pushing}
\paragraph{Task description.}
The block pushing task is FetchPush-v1 from OpenAI Gym where following \cite{Andrychowicz2017HindsightER}, we discard initial state-goal pairs in which the goal is already satisfied. In FetchPush-v1, the end-effector is controlled in XYZ space and the goal is to push a randomly placed box to the target. The agent receives the reward of 1 when the block is in an epsilon ball of the episode target and 0 anywhere else. Each episode is randomly initialized in terms of the robot and the block configurations and the target. The length of the episodes is set to 50.
\paragraph{Hyperparameters.}
We use the same set of hyperparameters as described for the stacking task with several exceptions. We adapt the batch size (set to 32 rollouts), the number of training epochs (set to 32), the policies learning rate (set to 0.0001), the value functions learning rate (set to 0.0003), and the discount factor (set to 0.98). For MLSH we change the warmup time to 10, the training time to 50, and the master policy timescale to 10.
\paragraph{Performance.}
We compare MPH with four baselines on FetchPush-v1. We use episode success as the performance metric (averaged over 32 episodes). The pushing is considered to be successful when the block is close to the episode target. As shown in \cref{fig:pushing}, MPH outperforms all the other approaches. The flat PPO policy plateaus after achieving a success rate of $37\%$. The 1-hot hierarchy and the options framework on average solve the task with success rates of $41\%$ and $43\%$, respectively. MLSH performs better than the first three methods and achieves a success rate of $45\%$. The options framework and MLSH take more time to train because each option is trained on only a subset of the batch of data. MPH is on average $26\%$ more successful than the best of the baselines, achieving a success rate of $71\%$ on average; the best random seed is able to successfully push the block in $99\%$ of episodes.
\subsection{Hierarchical intrinsic motivation}
\label{sec:motivation}
\input{figures/motivation}
We evaluate the effect of intrinsic motivation (IM) applied on both levels of the hierarchy. \Cref{fig:motivation} shows the results for both tasks with the four possible settings of intrinsic rewards: intrinsic reward for both policies, intrinsic reward only for the worker policy, intrinsic reward only for the master policy, and no intrinsic reward. We notice that MPH without the intrinsic reward often struggles to find the solution and performs worse. Given the intrinsic bonus for one of the layers, MPH performance improves. Intrinsic motivation on the worker side results in faster initial exploration; however, MPH with an intrinsically rewarded master network reaches a higher final score, potentially due to better long-term planning. The best score is achieved with intrinsic motivation for both policies. Applying intrinsic motivation to both policies results in an improvement of $20\%$ and $14\%$ w.r.t. the version without intrinsic motivation for the stacking and the pushing tasks, respectively.
\section{Markovian formulation of MPH}
\label{appendix:derivations}
Since different policies act on different time-scales, we use the following notation for the action and the state of the policy $\pi_k$ acting on the time-scale $T_k$ in an $n$-level hierarchy:
\begin{equation}
(s_{t+1}^k, a_{t+1}^k) = \begin{cases}
(s_t^k, a_t^k), & \text{if}\ t \mod T_k \neq 0 \\
(\{s^\textit{env}_t, a^{k+1}_t, \ldots, a^n_t\}, \pi_k(\{s^\textit{env}_t, a^{k+1}_t, \ldots, a^n_t\})), & \text{otherwise}
\end{cases}
\end{equation}
We refer to control signals as latent variables of policies on higher levels where the policy on level $k$ is a conditional action distribution $\pi_k(a^k_t | s^k_t)$ assuming that the latent variables $a^{k+1}_t, \ldots, a^n_t$ are the part of the state $s^k_t$. Therefore, the action $a^k_t$ can be sampled once $a^{k+1:n}$ are sampled. The corresponding problem of finding the optimal policy for each hierarchy layer can be solved using the RL machinery once it is reformulated in the MDP formalism as we do below.
For each layer of the hierarchy $k > 1$, we can rewrite the transition probabilities of the MDP on the layer $k$ marginalizing over the actions of the $(k-1)^{th}$ policy and using the transition probabilities of the $(k-1)^{th}$ layer of the hierarchy MDP:
\begin{equation}
p_k(s^k_{t+1} | s^k_t, a^k_t) = \int_{\mathcal{A}_{k-1}} p_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, a^{k-1}_t) \pi_{k-1}(a^{k-1}_t | s^{k-1}_t) d a^{k-1}_t
\label{eq:margin_prob}
\end{equation}
Since the higher level policies act on longer time-scales, we also derive the time-scaled transition probabilities for each MDP with the time-scale $T_k > 1$:
\begin{equation}
p_k(s^k_{t+T_k} | s^k_t, a^k_t) = \overbrace{\int_{\mathcal{A}_k} \ldots \int_{\mathcal{S}_k} \ldots}^{2T_k} p_k(s^k_{t+1}|s^k_t,a^k_t) \prod_{i=1}^{T_k-1} p_k(s^k_{t+i+1} | s^k_{t+i}, a^k_{t+i}) \pi_k(a^k_{t+i} | s^k_{t+i}) d a^k_{t+i} d s^k_{t+i}
\label{eq:timescaled_prob}
\end{equation}
Given \cref{eq:margin_prob} and \cref{eq:timescaled_prob}, one can obtain the transition probabilities for the MDP at any layer using only the environment transition probabilities and the policies. While the former are stationary, the policies are updated in each training epoch, which might introduce instabilities into the training. Trust region optimization methods, such as TRPO (or its approximation, PPO), provide a convenient way to bound the changes of the high-level MDPs. Such a bound can guarantee that the optimization problem solved by TRPO (or PPO) for higher layers changes smoothly during the updates, so that the overall solution converges toward the optimal solution of the original problem. Below we derive the upper bounds on the change in transition probabilities for the discrete case, which can be extended to continuous state and action spaces. We rewrite \cref{eq:margin_prob} for the discrete case ($k$ is shifted by 1):
\begin{equation}
p_{k+1}(s^{k+1}_{t+1}|s^{k+1}_t, a^{k+1}_t) = \sum_{a^k_t} p_k(s^k_{t+1} | s^k_t, a^k_t) \pi_k(a^k_t | s^k_t) = p_k(s^k_{t+1} | s^k_t, \cdot)^T \pi_k(\cdot|s^k_t)
\end{equation}
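A toy numpy check of this marginalization confirms that summing out the lower-level actions again yields a valid transition distribution; the sizes and probabilities below are arbitrary, and the higher-level action is held fixed for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA = 3, 2   # hypothetical discrete state / action space sizes

# Lower-level transitions p_k[s_next, s, a] and policy pi_k[a, s]
p_k = rng.random((nS, nS, nA))
p_k /= p_k.sum(axis=0, keepdims=True)
pi_k = rng.random((nA, nS))
pi_k /= pi_k.sum(axis=0, keepdims=True)

# p_{k+1}(s'|s) = sum_a p_k(s'|s,a) * pi_k(a|s)
p_next = np.einsum('tsa,as->ts', p_k, pi_k)
print(np.allclose(p_next.sum(axis=0), 1.0))  # True: columns still normalize
```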
We start by deriving the equation for the first two levels of the hierarchy. We denote the transition probability after the training epoch as $p'_k$ and the updated policy as $\pi'_k$. Since $p_1 = p_{\rm{env}} = p'_{\rm{env}} = p'_1$ where $p_{\rm{env}}$ is the transition probability of the environment, we get the following inequalities:
\begin{align}
\label{eq:prob_bound11}
\abs{p_2(s^2_{t+1}|s^2_t, a^2_t) - p'_2(s^2_{t+1}|s^2_t, a^2_t)} &\leq \norm{p_1(s^1_{t+1} | s^1_t, \cdot)}_{\infty} \norm{\pi_1(\cdot|s^1_t) - \pi'_1(\cdot|s^1_t)}_1 \\
\label{eq:prob_bound12}
& < \sqrt{\frac{1}{2}D_{KL}(\pi_1(\cdot|s^1_t), \pi'_1(\cdot|s^1_t))} \\
& \leq \sqrt{\frac{1}{2}\delta_1}
\label{eq:prob_bound13}
\end{align}
where we used Hölder's inequality in \cref{eq:prob_bound11}, Pinsker's inequality and the fact that $p_1 = p_{\rm{env}}$ is ergodic in \cref{eq:prob_bound12}, and the TRPO (or PPO) guarantee of $D_{KL}(\pi_1(\cdot|s^1_t), \pi'_1(\cdot|s^1_t)) < \delta_1$ in \cref{eq:prob_bound13}.
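Pinsker's inequality, in its standard L1 form $\norm{p-q}_1 \le \sqrt{2 D_{KL}(p,q)}$, is easy to verify numerically on a toy pair of distributions:

```python
import numpy as np

def kl(p, q):
    """KL divergence between discrete distributions with full support."""
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.6, 0.3, 0.1])
l1 = float(np.abs(p - q).sum())       # 0.2
assert l1 <= np.sqrt(2.0 * kl(p, q))  # Pinsker bound, ~0.232 here
```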
Next we derive the bound for $k>2$ level of the hierarchy:
\begin{align}
\label{eq:prob_boundk1}
&\abs{p_k(s^k_{t+1}|s^k_t, a^k_t) - p'_k(s^k_{t+1}|s^k_t, a^k_t)} \\
&= \left|p_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, \cdot)^T \pi_{k-1}(\cdot|s^{k-1}_t) - p'_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, \cdot)^T \pi_{k-1}(\cdot|s^{k-1}_t)\right. \\
&\left.+ p'_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, \cdot)^T \pi_{k-1}(\cdot|s^{k-1}_t) - p'_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, \cdot)^T \pi'_{k-1}(\cdot|s^{k-1}_t) \right| \\
&\leq \norm{p_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, \cdot) - p'_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, \cdot)}_{\infty} \norm{\pi_{k-1}(\cdot|s^{k-1}_t)}_1 \\
&+ \norm{p'_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, \cdot)}_{\infty} \norm{\pi_{k-1}(\cdot|s^{k-1}_t) - \pi'_{k-1}(\cdot|s^{k-1}_t)}_1 \\
&\leq \norm{p_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, \cdot) - p'_{k-1}(s^{k-1}_{t+1} | s^{k-1}_t, \cdot)}_{\infty} + \norm{\pi_{k-1}(\cdot|s^{k-1}_t) - \pi'_{k-1}(\cdot|s^{k-1}_t)}_1 \\
& \leq \sum_{i=1}^{k-1}\sqrt{\frac{1}{2}\delta_i}
\label{eq:prob_boundk3}
\end{align}
where we use Hölder's and Pinsker's inequalities and the result of \cref{eq:prob_bound13}.
We showed that for any layer of MPH, the change in its MDP's transition probabilities is upper-bounded under the TRPO (or PPO) update. In addition, this bound scales linearly with $k$. Thus, we have direct control over how much the MDPs at higher layers change after each policy update. Given that the change is small, the optimization problem solved by TRPO (or PPO) for higher layers also changes smoothly during the updates. Therefore, we can apply the standard RL machinery to the hierarchical time-scaled MDPs with the given transition probabilities independently for each layer. Moreover, such guarantees also mean more stable training of the hierarchy.
\section*{Appendix}
In this Appendix, we summarize the cross sections for the neutrino
scattering processes which are relevant for the calculation of the
damping rate of high energy neutrinos. The fermion masses (in
particular, lepton masses) are neglected except in Eq.\
\eqref{nunubr2ffbr}.
\begin{itemize}
\item $\nu_\ell + \bar{\nu}_{\ell, {\rm BG}} \rightarrow \nu_\ell +
\bar{\nu}_\ell$:
\begin{align}
\sigma = &
\frac{g_Z^4}{192\pi}
\frac{s}{(s-m_Z^2)^2 + m_Z^2 \Gamma_Z^2}
+ \frac{g_Z^4}{64\pi s}
\left[ x_Z^{-1} + 2 - 2(1+x_Z) \log( 1 + x_Z^{-1} ) \right]
\nonumber \\ & +
\frac{g_Z^4}{64\pi}
\frac{s}{(s-m_Z^2)^2 + m_Z^2 \Gamma_Z^2}
(1-x_Z)
\left[
3 + 2x_Z - 2(1+x_Z)^2 \log (1 + x_Z^{-1})
\right],
\end{align}
where, here and hereafter, $x_V\equiv m_V^2/s$ (with $V=W$ and $Z$).
\item $\nu_\ell + \bar{\nu}_{\ell, {\rm BG}} \rightarrow \ell +
\bar{\ell}$:
\begin{align}
\sigma = &
\frac{g_Z^2 (g_{Z,\ell_L}^2 + g_{Z,\ell_R}^2)}{48\pi}
\frac{s}{(s-m_Z^2)^2 + m_Z^2 \Gamma_Z^2}
+ \frac{g_2^4}{16\pi s}
\left[ x_W^{-1} + 2 - 2(1+x_W) \log( 1 + x_W^{-1} ) \right]
\nonumber \\ & +
\frac{g_Z g_{Z,\ell_L} g_2^2}{16\pi}
\frac{s}{(s-m_Z^2)^2 + m_Z^2 \Gamma_Z^2}
(1-x_Z)
\left[
3 + 2x_W - 2(1+x_W)^2 \log (1 + x_W^{-1})
\right].
\end{align}
\item $\nu_\ell + \bar{\nu}_{\ell, {\rm BG}} \rightarrow f + \bar{f}$
($f\neq\nu_\ell$, $\ell$):
\begin{align}
\sigma = &
\frac{g_Z^2 (g_{Z,f_L}^2 + g_{Z,f_R}^2)}{48\pi}
\frac{s + 2 m_f^2}{(s-m_Z^2)^2 + m_Z^2 \Gamma_Z^2}
\sqrt{1 - \frac{4m_f^2}{s}},
\label{nunubr2ffbr}
\end{align}
where $m_f$ is the mass of $f$.
\item $\nu_\ell + \bar{\nu}_{\ell',{\rm BG}}
\rightarrow \nu_\ell + \bar{\nu}_{\ell'}$ ($\ell\neq\ell'$):
\begin{align}
\sigma =
\frac{g_2^4}{64\pi s}
\left[ x_Z^{-1} + 2 - 2(1+x_Z) \log( 1 + x_Z^{-1} ) \right].
\end{align}
\item $\nu_\ell + \bar{\nu}_{\ell',{\rm BG}}
\rightarrow \ell + \bar{\ell}'$ ($\ell\neq\ell'$):
\begin{align}
\sigma =
\frac{g_2^4}{16\pi s}
\left[ x_W^{-1} + 2 - 2(1+x_W) \log( 1 + x_W^{-1} ) \right].
\end{align}
\item $\nu_\ell + \nu_{\ell, {\rm BG}} \rightarrow \nu_\ell + \nu_{\ell}$:
\begin{align}
\sigma = &
\frac{g_Z^4}{64\pi s} \frac{1}{x_Z (1+x_Z)}
+ \frac{g_Z^4}{32\pi s} \frac{1}{1 + 2x_Z}
\log (1 + x_Z^{-1} ).
\end{align}
\item $\nu_\ell + \nu_{\ell',{\rm BG}}
\rightarrow \nu_\ell + \nu_{\ell'}$ ($\ell\neq\ell'$):
\begin{align}
\sigma = &
\frac{g_Z^4}{64\pi s} \frac{1}{x_Z (1+x_Z)}.
\end{align}
\end{itemize}
In the above expressions, $g_2$ is the gauge coupling constant for
$SU(2)_L$, $g_Z\equiv\sqrt{g_2^2+g_1^2}$ (with $g_1$ being the gauge
coupling constant for $U(1)_Y$), and
\begin{align}
g_{Z,u_L} & = \frac{1}{2} g_Z
- \frac{2}{3} \frac{g_1^2}{g_Z},
\\
g_{Z,u_R} &= - \frac{2}{3} \frac{g_1^2}{g_Z},
\\
g_{Z,d_L} & = - \frac{1}{2} g_Z
+ \frac{1}{3} \frac{g_1^2}{g_Z},
\\
g_{Z,d_R} &= \frac{1}{3} \frac{g_1^2}{g_Z},
\\
g_{Z,\ell_L} & = - \frac{1}{2} g_Z
+ \frac{g_1^2}{g_Z},
\\
g_{Z,\ell_R} &= \frac{g_1^2}{g_Z},
\\
g_{Z,\nu_L} & = \frac{1}{2} g_Z,
\\
g_{Z,\nu_R} &= 0.
\end{align}
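All eight chiral couplings above are instances of the single pattern $g_{Z,f} = T_3^f\, g_Z - Q_f\, g_1^2/g_Z$, where $T_3^f$ is the weak isospin and $Q_f$ the electric charge of the fermion. The following sketch (with purely illustrative numerical values for $g_1$ and $g_2$) reproduces each entry:

```python
import math

def z_coupling(T3, Q, g1, g2):
    """g_{Z,f} = T3 * g_Z - Q * g1^2 / g_Z, with g_Z = sqrt(g1^2 + g2^2)."""
    gZ = math.hypot(g1, g2)
    return T3 * gZ - Q * g1**2 / gZ

g1, g2 = 0.36, 0.65          # illustrative values only
gZ = math.hypot(g1, g2)
# (T3, Q) for each chirality listed in the text
fermions = {
    "u_L": (0.5, 2.0/3.0),   "u_R": (0.0, 2.0/3.0),
    "d_L": (-0.5, -1.0/3.0), "d_R": (0.0, -1.0/3.0),
    "l_L": (-0.5, -1.0),     "l_R": (0.0, -1.0),
    "nu_L": (0.5, 0.0),      "nu_R": (0.0, 0.0),
}
for name, (T3, Q) in fermions.items():
    print(name, z_coupling(T3, Q, g1, g2))
```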
\section{Introduction}
\label{sec:intro}
The construction of next-generation radio telescopes such as LOFAR has in recent years provided impetus for considering which might be the best targets for detecting exoplanetary auroral radio emissions. This attention has traditionally focused on so-called `hot Jupiter'-type exoplanets orbiting close to their local star \citep[e.g.][]{farrell99a, farrell04a, zarka01a,zarka07a,lazio04a, griessmeier04a, griessmeier05a, griessmeier07a, stevens05a, jardine08a, fares10a, reiners10a, vidotto11c}, and in such cases the auroral radio emission is assumed to be generated by a star-planet interaction, mediated either by magnetic reconnection as at the Earth or Alfv\'en waves such as at Io. However, \cite{nichols11a} recently considered the radio emission generated by magnetosphere-ionosphere (M-I) coupling at Jupiter-like exoplanets with internal plasma sources, and concluded that such systems are also able to generate detectable emissions. In this process, shown schematically in Figure~\ref{fig:miccs}, plasma is generated internally to the magnetosphere from sources such as volcanic moons or ionospheric outflow and, in a fast-rotating magnetosphere such as Jupiter's, becomes centrifugally unstable and diffuses radially away from the planet before being lost down the dusk flank of the magnetotail via the pinching off of plasmoids, in a process known as the Vasyli\=unas Cycle \citep[e.g.][]{hill79, vasyliunas83, hill01, pontius97, cowley01, nichols03, nichols04, nichols05, nichols11b}. Conservation of angular momentum requires that, as the plasma diffuses radially outward, its angular velocity drops such that a radial gradient of equatorial plasma angular velocity is set up, which when mapped along the magnetic field to the planet causes a current to flow in the Pedersen layer of the ionosphere. 
This Pedersen current balances, through the $\mathbf{J}\times\mathbf{B}$ force, the drag of the atmospheric neutrals on the sub-rotating ionospheric plasma, and this torque is transmitted along the field lines to the equatorial plane by the sweep-back of the planet's magnetic field out of the meridian planes, such that the Pedersen current is balanced by an associated radial current in the equatorial plane. Current continuity is maintained between these two field-perpendicular currents by field-aligned currents, the upward component of which, associated with downward-precipitating electrons, is, on Jupiter, associated with the main auroral oval \citep{grodent03b, clarke04, nichols09b} and significant components of the planet's radio emissions, i.e.\ the b-KOM, HOM and non-Io-DAM emissions \citep{zarka98a}.
\begin{figure}
\noindent\includegraphics[width=84mm]{mi_coupling_fig.eps}
\caption{
Sketch of a meridian cross section through a Jupiter-like exoplanet's inner and middle magnetosphere, showing the principal physical features involved. The arrowed solid lines indicate magnetic field lines, the arrowed dashed lines the magnetosphere-ionosphere coupling current system, and the dotted region the rotating disc of outflowing plasma. From Nichols (2011a).}
\label{fig:miccs}
\end{figure}
\cite{nichols11a} showed that the best candidates for detection of such internally-generated radio emissions are rapidly rotating Jupiter-like exoplanets orbiting stars with high X-ray-UV (XUV) luminosity at orbital distances beyond \ensuremath{\sim}1~AU. The XUV luminosity is significant since it generates the ionospheric conductivity which allows intense M-I coupling currents to flow. It is also worth noting that the variable nature of very active stars tends to reduce the frequency of detection of exoplanets around such stars using radial velocity and transit methods. An obvious question is then how many internally-generated radio targets might one expect to exist, and we thus address this issue in this Letter. We determine the best candidates for detection of these radio emissions by estimating the maximum spectral flux density expected from planets orbiting F-M dwarfs within 25 pc using data listed in the NASA/IPAC/NExScI Star and Exoplanet Database (NStED, now the NASA Exoplanet Archive). We show that a number of systems may be detectable if they host massive, fast-rotating planets. The results discussed here will be of benefit for the currently-underway planning of observations of exoplanetary candidates using LOFAR, which will have a sensitivity threshold of $\sim$1~mJy at 70~MHz and $\sim$0.14 mJy at 240~MHz for 1 h integration \citep{farrell04a}. For simplicity, here we use 1~mJy as the detection threshold, and note that for higher frequencies this is thus a conservative value.
\section{Analysis}
\label{sec:analysis}
We take the X-ray luminosity $L_X$ as a proxy for the XUV band as a whole, since X-ray and EUV luminosities are broadly correlated \citep{hodgkin:1994aa}, and we employ values of $L_X$ determined from data listed in the NStED catalogue. Values of $L_X$ are either those directly measured during the ROSAT all-sky survey (RASS), including flux detected in both the hard (0.5 -- 2.0 keV) and soft (0.1 -- 0.4 keV) passbands of ROSAT \citep{hunsch:1999aa}, or calculated from the chromospheric emission ratio $R^\prime_\mathrm{HK}$ using the relation of \cite{sterzik:1997aa}, either using the directly-measured value or that computed from the Mount Wilson S-value using the relations given by \cite{noyes:1984aa}. These relations are given in the Supplementary Material (SM). It is worth noting, however, that the stars with the highest XUV luminosity generally have directly-measured values. The proxies used here are typically related to the activity of the star, parameterised by the ratio $L_X/L_\mathrm{bol}$, although here we are simply interested in $L_X$. We do note, however, that all stars for which the \cite{sterzik:1997aa} relation is employed have $\log(L_X/L_\mathrm{bol}) < -3.7$, i.e.\ within the range of validity of the relation and not within the activity-rotation saturation zone \citep{gudel:2004aa}. \\
The value of $L_X$ determined as above for each star within 25~pc (within error) listed in the NStED database is then used to determine the Pedersen conductance of the planet, which comprises a component decreasing with orbital distance as $1/R_{orb}$ and a constant component induced by auroral precipitation \citep{nichols11a}, i.e.\
\begin{equation}
\Sigma_{P}^*=\left(\frac{L_{XUV\:\star}}{L_{XUV\:\sun}}\right)^{1/2}\frac{2.6}{\ensuremath{R_{orb}}}+1.5\;\mathrm{mho}\;\;.
\label{eq:sigmap}
\end{equation}
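As a sketch of how equation~\ref{eq:sigmap} behaves (our own illustration; it assumes, as in the cited model, that $R_{orb}$ is measured in AU), the conductance falls from its XUV-dominated value towards the precipitation-induced floor of 1.5~mho at large orbital distance:

```python
def pedersen_conductance(L_X_star, L_X_sun, R_orb_AU):
    """Sigma_P^* = sqrt(L_XUV_star / L_XUV_sun) * 2.6 / R_orb + 1.5 [mho],
    taking L_X as a proxy for the XUV band, with R_orb in AU (assumed)."""
    return (L_X_star / L_X_sun) ** 0.5 * 2.6 / R_orb_AU + 1.5

L_X_sun = 10.0 ** 20.35      # W, the mean solar value adopted in the text
for R in (1.0, 5.0, 50.0):   # AU
    print(R, pedersen_conductance(100.0 * L_X_sun, L_X_sun, R))
```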
Details of the computation of the M-I coupling currents and radio power are given in previous works \citep[e.g.][]{nichols03,nichols11a}, but briefly, the currents arise from an angular velocity gradient in the magnetosphere owing to the centrifugally-driven outflow of plasma, described by
\begin{equation}
\frac{\rho_e}{2}\frac{\mathrm{d}}{\mathrm{d}\rho_e}\left(\frac{\omega}{\Omega_p}\right)+\left(\frac{\omega}{\Omega_p}\right)=\frac{4\pi \Sigma_P^*F_e|B_{ze}|}{\dot{M}}\left(1-\frac{\omega}{\Omega_p}\right)\;\;,
\label{eq:hp}
\end{equation}
\noindent where $\rho_e$ is the distance from the magnetic axis, $\omega$ is the plasma angular velocity, $\Omega_p$ is the planetary angular velocity, $\dot{M}$ is the plasma mass outflow rate (taken here to be the canonical jovian mass outflow rate of 1000~$\mathrm{kg\;s^{-1}}$), $|B_{ze}|$ is the magnitude of the north-south magnetic field threading the equatorial plane, and $F_e$ is the equatorial value of the poloidal flux function related to the magnetic field via $\mathbf{B}=(1/\rho)\nabla F \times \hat{\varphi}$. The equatorial field strength $|B_{ze}|$ is dependent on the planetary equatorial field strength $B_{eq}$ and the size of the magnetosphere. Here we examine results of two models for $B_{eq}$. The first is that employed by \cite{nichols11a}, who took $B_{eq}$ to vary with the rotation rate of the planet as $\Omega_p^{0.75}$, and we consider planets with $\Omega_p / \Omega_J = 1$ and 3, i.e.\ representative of the angular velocities that might be expected of Jupiter-mass planets, thus corresponding to $B_{eq} / B_J= 1$ and $\sim2.3$, respectively, where $B_J$ is Jupiter's equatorial surface field strength. For comparison, we also consider a more massive planet with mass $10M_J$, for which we employ the model of \cite{reiners10a}, which is independent of rotation rate for fast-rotating bodies, and for $10M_J$ yields $B_{eq} / B_J= 17$. These magnetic field strengths correspond to polar electron cyclotron frequencies, and thus radio emission bandwidth, of \ensuremath{\sim}24, \ensuremath{\sim}54, and \ensuremath{\sim}406~MHz, respectively. The size of the magnetosphere is governed by pressure balance between magnetospheric magnetic field and plasma pressures on the one hand and stellar wind dynamic pressure on the other. Following \cite{nichols11a}, we employ the empirical \cite{huddleston98} relation for sub-solar magnetopause distance versus dynamic pressure $p_{sw}$, a quantity related to the stellar mass loss rate.
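For a dipole planetary field the angular velocity equation above reduces, after scaling $\rho_e$ to the Hill distance (which absorbs $\Sigma_P^*$, $B_{eq}$, $R_p$ and $\dot{M}$; the precise numerical factor depends on convention), to the dimensionless form $(h/2)\,\mathrm{d}w/\mathrm{d}h + w = h^{-4}(1-w)$ with $w = \omega/\Omega_p$. A simple integration sketch (our own; the inner boundary value and step size are choices, not model outputs):

```python
def hill_rhs(h, w):
    """dw/dh from (h/2) w' + w = h^-4 (1 - w): the dimensionless
    angular-velocity equation for a dipole field, with radial
    distance h scaled to the Hill distance."""
    return (2.0 / h) * (h**-4 * (1.0 - w) - w)

def solve_hill(h0=0.3, h1=3.0, dh=5e-4):
    """Classical RK4, starting from near-rigid corotation at small h."""
    w = 1.0 / (1.0 + h0**4)   # quasi-steady value at the inner boundary
    out = {}
    n = int(round((h1 - h0) / dh))
    for i in range(n):
        h = h0 + i * dh
        k1 = hill_rhs(h, w)
        k2 = hill_rhs(h + 0.5 * dh, w + 0.5 * dh * k1)
        k3 = hill_rhs(h + 0.5 * dh, w + 0.5 * dh * k2)
        k4 = hill_rhs(h + dh, w + dh * k3)
        w += dh * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        out[round(h + dh, 4)] = w
    return out

sol = solve_hill()
for h in (0.5, 1.0, 2.0, 3.0):
    print(h, sol[h])
```

The plasma sub-rotates increasingly with distance; this radial gradient of $\omega/\Omega_p$ is precisely what drives the coupling currents.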
\cite{wood05a} have discussed a relation between stellar activity and mass loss rate per unit surface area. However, whether this relation holds for very active stars remains uncertain, and they showed that stars more active than the Sun exhibit values ranging between \ensuremath{\sim}0.01-100 times the solar value. \cite{cohen:2011aa} showed that for the Sun no relationship between mass loss rate and X-ray luminosity is apparent, and attributed this to the fact that the solar mass loss is dominated by the fast solar wind originating from open solar magnetic flux, while solar activity is associated with closed flux. In the light of this uncertainty, and in the absence of in situ measurements of stellar wind densities and velocities, we simply consider two stellar wind dynamic pressures, $p_{sw\sun}$ and $100p_{sw\sun}$. Once the plasma angular velocity profile has been obtained from equation~\ref{eqn:hp}, the density $j_{\|i}$ of the upward-directed auroral field-aligned current responsible for the radio emission is given by
\begin{equation}
j_{\|i} = \frac{4B_{eq}\Omega_p}{\rho_e B_{ze}}\frac{d}{d \rho_e}
\left[\Sigma_P^*F_e\left(1-\frac{\omega}{\Omega_p}\right)\right]\;\;.
\label{eq:jpari}
\end{equation}
\noindent This field-aligned current must in general be carried by a field-aligned voltage, which yields a precipitating electron energy flux given by
\begin{equation}
\ensuremath{E_f}=
\frac{\ensuremath{E_{f\circ}}}{2}\left(\frac{\ensuremath{j_{\|i}}}{\ensuremath{j_{\|i\circ}}}\right)^2\;\;,
\label{eq:ef}
\end{equation}
\noindent where $E_{f\circ}$ and $j_{\|i\circ}$ are the energy flux and field-aligned current that can be carried by unaccelerated precipitating electrons. The radio power $P_r$ is then found by integrating this precipitating energy flux over the hemisphere and converting to radio power assuming a 1 per cent generation efficiency for the electron cyclotron maser instability. The maximum power was determined over the range $1<(R_{orb}/\mathrm{AU})<500$. The outer limit is somewhat arbitrarily large, but we note that exoplanets with semi-major axes of up to several thousand AU have been detected by direct imaging \citep[e.g.][]{burgasser:2010aa}. Finally, the spectral flux density $\mathcal{F}$ is determined assuming the emission is beamed into 1.6 sr \citep{zarka04a} with bandwidth equal to the polar electron cyclotron frequency. \\
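The last step, converting radio power to a spectral flux density via $\mathcal{F} = P_r / (\Omega\, s^2\, \Delta\nu)$ with beaming solid angle $\Omega = 1.6$~sr, distance $s$ and bandwidth $\Delta\nu$, is easily sketched (the input power and distance below are illustrative numbers, not model outputs):

```python
def spectral_flux_density_mJy(P_r_W, d_pc, bandwidth_Hz, solid_angle_sr=1.6):
    """F = P_r / (Omega * s^2 * delta_nu), returned in mJy
    (1 Jy = 1e-26 W m^-2 Hz^-1, so 1 mJy = 1e-29 W m^-2 Hz^-1)."""
    d_m = d_pc * 3.0857e16          # parsec -> metre
    F = P_r_W / (solid_angle_sr * d_m**2 * bandwidth_Hz)
    return F / 1.0e-29

# Illustrative: a 10^12 W source at 10 pc emitting over a 24 MHz band.
print(spectral_flux_density_mJy(1.0e12, 10.0, 24.0e6))
```

Note the trade-off visible in the tables: a stronger planetary field raises $P_r$ but also widens the bandwidth $\Delta\nu$, so the flux density grows more slowly than the power.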
Computed powers versus orbital distance for the different cases discussed above are shown in the SM, but in Figure~\ref{fig:maxpowers} we show the maximum radio powers computed by the model versus $L_X$ for the different cases. For each case, the results employing both terms in equation~\ref{eq:sigmap} are shown, along with those for only the stellar XUV term for comparison. First, comparing panels it is apparent that higher magnetic field strengths and faster rotation lead to higher radio powers. It is also clear that, as discussed e.g. by \cite{cowley07}, higher dynamic pressure results in lower auroral current intensities, and thus lower radio powers, owing to the smaller magnetospheric size. This effect also results in increased power with orbital distance, an effect which is, however, offset by the simultaneous decrease of the conductance induced by stellar XUV flux. Considering then solely the effect due to XUV flux, the power reaches a maximum at an orbital distance that is larger for increased $L_X$, resulting in increased maximum radio power as shown by the dashed and dotted lines in Figure~\ref{fig:maxpowers}. Including the conductance contribution from auroral precipitation results in the power asymptoting to a finite value at $R_{orb}=\infty$ instead of decreasing to zero, resulting in the flattening of the solid and dot-dashed lines in Figure~\ref{fig:maxpowers} for low XUV luminosity. For planets with larger magnetic fields, the switch between regimes dominated by XUV- and precipitation-induced conductance occurs at increased $L_X$, such that the power from planets with large field strengths is essentially independent of $L_X$. In these cases the spectral flux density is instead strongly dependent on distance from the Earth to the star. \\
Two categories of target were considered in the analysis. The first is all F-M dwarfs within 25~pc (within error) which are known to host planets and have directly- or indirectly-measured values of $L_X$, and the second is those with $L_X$ greater than 100 times the solar X-ray luminosity $L_{X\sun}$, irrespective of whether the star is known to host planets or not. We take $L_{X\sun}$ to be the mean of the 0.1-2.4 keV full solar cycle range measured by \cite{judge:2003aa}, i.e.\ $10^{20.35}$~W. The threshold of $100 L_{X\sun{}}$ was used since in the model of \cite{nichols11a} stars with such high X-ray luminosities are required to host planets detectable from beyond 1 pc with the jovian rotation rate.
\section{Results}
\begin{figure}
\noindent\includegraphics[width=75mm]{maxpowers.eps}
\caption{
Computed maximum radio powers $P_r$ versus $L_X$. The different cases discussed in the text are indicated by the labels above each panel and in panel (a). In each panel, values of $L_X$ for stars known to host planets are shown by the pluses, and values for stars with $L_X > 100L_{X\sun}$ are shown by the crosses.}
\label{fig:maxpowers}
\end{figure}
\begin{table*}
\caption{Table showing the top ten potential targets for observing internally-generated exoplanetary radio emissions amongst stars known already to host planets. The major columns are the SIMBAD default name, the spectral type, the distance $s$ in parsecs, declination in degrees, the stellar X-ray luminosity $L_X$, and estimates of the maximum radio spectral flux density $\mathcal{F}$. The $L_X$ column indicates whether the value is determined from direct ROSAT measurements (R) or indirectly from the Mount Wilson S-value (S) or chromospheric emission ratio $R^\prime_\mathrm{HK}$ (H). For each target, eight estimates of the maximum radio spectral flux density $\mathcal{F}$ are shown. Four of these employ the planetary magnetic field strengths used by Nichols (2011), and four employ the Reiners et al. (2010) algorithm for a planet with mass $M_p=10M_J$. Each of these groups is divided into results employing solar and 100~$\times$~solar dynamic pressure values, and these are further divided into results employing two values of the planetary angular velocity normalised to Jupiter's rotation rate $\ensuremath{\Omega_p}/\ensuremath{\Omega_J}$.}
\begin{center}
\begin{tabular} {l| c|c|c| c |c |c |c| c |c |c| c| c }
& & & & & \multicolumn{8}{|c}{$\mathcal{F}$ / mJy} \\
& & & & & \multicolumn{4}{|c}{$B_{eq}$ Nichols (2011)} & \multicolumn{4}{|c}{$B_{eq}$ Reiners et al. (2010), $10M_J$}\\
Alias & Sp. Type & \emph{s}/pc & dec & Log($L_X$/W) & \multicolumn{2}{|c}{$p_{sw\odot}$} & \multicolumn{2}{|c}{$100p_{sw\odot}$} & \multicolumn{2}{|c}{$p_{sw\odot}$} & \multicolumn{2}{|c}{$100p_{sw\odot}$}\\
& & & & & \multicolumn{2}{|c}{$\Omega_p / \Omega_J$ =} & \multicolumn{2}{|c}{$\Omega_p / \Omega_J$ =} &\multicolumn{2}{|c}{$\Omega_p / \Omega_J$ =} & \multicolumn{2}{|c}{$\Omega_p / \Omega_J$ =}\\
& & & & & 1 & 3 & 1 & 3 & 1 & 3 & 1 & 3 \\
\hline
V* eps Eri & K2V & 3.2 & -9.5 &21.32 (R) & 0.61 & 18.21 & 0.49 & 11.63 & 31.43 &282.87 & 5.10 & 45.93\\
HIP 85523 & M2+V & 4.5 &-46.9 &20.53 (R) & 0.30 & 9.04 & 0.24 & 5.78 & 15.62 &140.56 & 2.55 & 22.95\\
V* IL Aqr & M3.5V & 4.7 &-14.3 &19.49 (R) & 0.28 & 8.38 & 0.23 & 5.37 & 14.49 &130.42 & 2.37 & 21.35\\
LHS 3685 & M2/3V & 4.9 &-49.0 &19.77 (R) & 0.25 & 7.59 & 0.21 & 4.86 & 13.13 &118.19 & 2.15 & 19.33\\
LHS 349 & G5V & 8.5 &-18.3 &19.87 (R) & 0.08 & 2.55 & 0.07 & 1.64 & 4.42 & 39.75 & 0.72 & 6.50\\
HR 7722 & K3V & 8.8 &-27.0 &20.23 (R) & 0.08 & 2.39 & 0.06 & 1.53 & 4.13 & 37.15 & 0.67 & 6.07\\
LHS 311 & G3/5V & 9.2 &-40.5 &19.78 (R) & 0.07 & 2.17 & 0.06 & 1.39 & 3.75 & 33.78 & 0.61 & 5.53\\
LHS 310 & M3V &10.2 & 26.7 &19.84 (R) & 0.06 & 1.77 & 0.05 & 1.13 & 3.06 & 27.56 & 0.50 & 4.51\\
LHS 3257 & M1V &10.3 & 25.7 &19.25 (S) & 0.06 & 1.74 & 0.05 & 1.11 & 3.00 & 27.03 & 0.49 & 4.43\\
HD 13445 & K1V &10.9 &-50.8 &20.61 (R) & 0.05 & 1.57 & 0.04 & 1.00 & 2.71 & 24.36 & 0.44 & 3.98\\
\hline
\end{tabular}
\end{center}
\label{tab:planets}
\end{table*}
A total of 40 F-M dwarfs known to host planets have directly- or indirectly-measured values of $L_X$, while 51 have $L_X > 100L_{X\sun}$. A complete listing of the results of this study is available in the SM, but in Tables~\ref{tab:planets} and \ref{tab:100lx} we show the top 10 results for stars known to host planets, and stars with $L_X > 100L_{X\sun}$, respectively. It is first apparent from columns 6 and 8 of Table~\ref{tab:planets} that no planets of Jupiter's mass which rotate at the jovian angular velocity and orbit stars already known to host planets would be expected to be viable targets. The situation is improved somewhat for the faster-rotating planets considered in columns 7 and 9, with 15 stars being potential targets for detection of a planet with $(\Omega_p/\Omega_J) = 3$, reducing to 10 for $100p_{sw\sun}$, although we note that most of these stars are in the southern hemisphere. For the $10M_J$ planets in columns 10--13, 21 would be detectable with a solar wind dynamic pressure and jovian rotation rate, reducing to 4 for $100p_{sw\sun}$, while for $(\Omega_p/\Omega_J) = 3$ all 40 would be detectable with solar wind dynamic pressure, reducing to 34 for $100p_{sw\sun}$. The brightest target in the northern hemisphere is LHS 310. Considering now the targets listed in Table~\ref{tab:100lx}, we first note that the nearest star is at $\sim$8.7~pc, i.e.\ further than the top 5 targets in Table~\ref{tab:planets}. However, for the regimes in which the XUV-induced conductance is not negligible (i.e.\ for columns 6--9, corresponding to panels (a) and (b) in Figure~\ref{fig:maxpowers}) the higher XUV luminosity results in higher radio powers, and thus higher spectral flux density for a given distance from Earth than in Table~\ref{tab:planets}, but otherwise, the pattern is similar.
Thus, again, from columns 6 and 8, it is apparent that no planets with jovian mass and rotation rate would be detectable, but columns 7 and 9 indicate that 12 fast-rotating planets would be detectable with solar wind dynamic pressure, reducing to 4 for $100p_{sw\sun}$. For the $10M_J$ planets in columns 10--13, 20 would be detectable with a solar wind dynamic pressure and jovian rotation rate, reducing to none for $100p_{sw\sun}$, while for $(\Omega_p/\Omega_J) = 3$ all 51 would be detectable with solar wind dynamic pressure, reducing to 39 for $100p_{sw\sun}$.
{\footnotesize
\begin{table*}
\caption{As for Table~\ref{tab:planets}, but for any star with $L_X>100L_{X\sun}$.}
\begin{center}
\begin{tabular} {l| c|c|c| c |c |c |c| c |c |c| c| c }
& & & & & \multicolumn{8}{|c}{$\mathcal{F}$ / mJy} \\
& & & & & \multicolumn{4}{|c}{$B_{eq}$ Nichols (2011)} & \multicolumn{4}{|c}{$B_{eq}$ Reiners et al. (2010), $10M_J$}\\
Alias & Sp. Type & \emph{s}/pc & dec & Log($L_X$/W) & \multicolumn{2}{|c}{$p_{sw\odot}$} & \multicolumn{2}{|c}{$100p_{sw\odot}$} & \multicolumn{2}{|c}{$p_{sw\odot}$} & \multicolumn{2}{|c}{$100p_{sw\odot}$}\\
& & & & & \multicolumn{2}{|c}{$\Omega_p / \Omega_J$ =} & \multicolumn{2}{|c}{$\Omega_p / \Omega_J$ =} &\multicolumn{2}{|c}{$\Omega_p / \Omega_J$ =} & \multicolumn{2}{|c}{$\Omega_p / \Omega_J$ =}\\
& & & & & 1 & 3 & 1 & 3 & 1 & 3 & 1 & 3 \\
\hline
CPD-28 332 & F9-V & 8.7 &-28.2 &22.66 (R) & 0.19 & 3.31 & 0.07 & 1.71 & 4.64 & 41.77 & 0.73 & 6.54\\
V* FF And & M0VEP & 8.6 &-20.6 &22.38 (R) & 0.15 & 2.84 & 0.07 & 1.69 & 4.59 & 41.34 & 0.73 & 6.56\\
V* AT Mic & M1VE & 9.9 &-31.3 &22.74 (R) & 0.15 & 2.55 & 0.06 & 1.32 & 3.57 & 32.14 & 0.56 & 5.03\\
V* AK Pic & M4.0V &10.2 &-32.4 &22.55 (R) & 0.13 & 2.25 & 0.05 & 1.23 & 3.34 & 30.03 & 0.52 & 4.72\\
V* BY Dra & M0VP &11.5 &-43.8 &22.60 (R) & 0.10 & 1.78 & 0.04 & 0.97 & 2.64 & 23.72 & 0.41 & 3.73\\
V* V834 Tau & G0V &12.8 & 47.7 &22.70 (R) & 0.09 & 1.55 & 0.03 & 0.80 & 2.17 & 19.51 & 0.34 & 3.05\\
HIP 61941B &F1V+F0 &11.8 & -1.4 &22.42 (R) & 0.08 & 1.51 & 0.04 & 0.90 & 2.45 & 22.05 & 0.39 & 3.50\\
V* OU Gem & G0V &21.7 & 33.9 &23.66 (R) & 0.10 & 1.36 & 0.01 & 0.33 & 0.92 & 8.24 & 0.13 & 1.17\\
V* V775 Her & K4Vke &13.5 & 20.9 &22.57 (R) & 0.07 & 1.29 & 0.03 & 0.71 & 1.92 & 17.27 & 0.30 & 2.72\\
* tet Boo & F8V &14.6 & 51.9 &22.63 (R) & 0.06 & 1.11 & 0.03 & 0.61 & 1.64 & 14.80 & 0.26 & 2.33\\
\hline
\end{tabular}
\end{center}
\label{tab:100lx}
\end{table*}}
\section{Summary and discussion}
In this paper we have considered which might be the best targets for the discovery of internally-generated exoplanetary radio emissions. We have employed the stellar X-ray luminosity data available in the NStED database (now the NASA Exoplanet Archive) to determine the maximum radio power available from M-I coupling due to outflowing internally-generated plasma for two values of each of the planetary rotation rate, mass, and stellar wind dynamic pressure. We have considered two categories of potential targets: all F-M dwarfs within 25~pc (within error) which are known to host planets and have directly- or indirectly-measured values of $L_X$, and all those with $L_X > 100L_{X\sun}$, irrespective of whether they are known to host planets or not. We have shown that up to 40 and 51 potential targets exist, respectively, for each of these categories, with the actual number depending on the system parameters. In general, stronger planetary field strength, combined with faster rotation rate, higher stellar XUV luminosity, and lower stellar wind dynamic pressure, results in higher radio power. The top two targets for each category are $\epsilon$ Eri and HIP 85523, and CPD-28 332 and FF And. All these are in the southern hemisphere. The top two northern hemisphere targets in each category are LHS 310 and LHS 3257, and V834 Tau and OU Gem. \\
It is worth mentioning that this model requires a number of elements in place to produce a detectable system, such that, while we have highlighted eight targets above, a wider survey of reasonable targets would increase the chance of discovering a system with all elements in place. It is also worth re-iterating that \cite{nichols11a} highlighted a number of areas in which the model could be developed. For example, the model does not take into account the stretching of the magnetic field due to the centrifugal force and hot plasma pressure, shown recently by \cite{nichols11b} to have a significant effect on the magnitude of the currents, and we have not considered at all the stellar wind interaction, which could be associated with significant emissions in Jupiter's polar regions \citep{waite01, grodent03a, nichols09a,nichols09b}. Therefore, it should be noted that the list of the best targets for observation may be reasonably expected to evolve with future developments of the model. Finally, while we have not discriminated by multiplicity in this study, we note that recent observations by \cite{doyle:2011aa} have shown that exoplanets can exist around binary star systems.
\section*{Acknowledgments}
JDN was supported by an STFC Advanced Fellowship, and wishes to thank I.~R. Stevens and M.~R. Burleigh for constructive discussions during this study. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
\section{Introduction}
The study of scalar reaction-diffusion equations $\partial_t p - \Delta p = f(p)$ with a given nonlinearity $f$ has a long history. For suitable choices of $f$, this equation can be used to model some phenomena in biology such as population dynamics (see e.g. \cite{FIF}, \cite{MUR}, \cite{SMO}). To investigate the structure of the steady-state solutions, the semilinear elliptic equation $\Delta p + f(p) = 0$ has been studied extensively.
Many results about the multiplicity of positive solutions for the parametrized version $\Delta p + \lambda f(p) = 0$ in a bounded domain are known. Here, $\lambda$ is a positive parameter. Various works investigated the number of solutions and the global bifurcation diagrams of this equation according to different classes of the nonlinearity $f$ and boundary conditions. For Dirichlet problems, in \cite{LIO}, Lions used many ``bifurcation diagrams'' to describe the solution set of this equation with several kinds of nonlinearities $f$, and gave nearly optimal multiplicity results in each case. The exact number of solutions and the precise bifurcation diagrams with cubic-like nonlinearities $f$ were given in the works of Korman {\it et al.} \cite{KOR96}, \cite{KOR97}, Ouyang and Shi \cite{OUY98} and references therein. In these works, the authors developed a global bifurcation approach to obtain the exact multiplicity of positive solutions. In the case of one-dimensional space with two-point boundary conditions, Korman gave a survey of this approach in \cite{KOR06}. Another approach was given by Smoller and Wasserman in \cite{SMO2} using phase-plane analysis and the time mapping method. This method was further developed and applied in the works of Wang \cite{WAN1}, \cite{WAN2}. While the bifurcation approach is convenient to solve the problem with more general cubic nonlinearities $f$, the phase-plane method is more intuitive and easier to compute.
Although many results were obtained concerning the number of solutions for Dirichlet problems, relatively little seems to be known concerning the results for other kinds of boundary conditions. For the Neumann problem, the works of Smoller and Wasserman \cite{SMO2}, Schaaf \cite{SCHA}, and Korman \cite{KOR02} dealt with cubic-like nonlinearities $f$ in one dimension. Recently, more works have been done for Robin boundary conditions (see e.g. \cite{DAN}, \cite{SHI}, \cite{ZHA}), or even nonlinear boundary conditions (see e.g. \cite{GOD}, \cite{GOR} and references therein). However, those works only focused on other types of nonlinearities such as positive and monotone $f$. To the best of our knowledge, the study of Robin problems with cubic-like nonlinearities remains quite open.
In this paper, we study the steady-state solutions with values in $[0,1]$ of a reaction-diffusion equation in one dimension with inhomogeneous Robin boundary conditions
\begin{equation}
\begin{cases}
\partial_t p^0 - \partial_{xx} p^0 = f(p^0) & \text{ in } (0,\infty) \times \Omega , \\
\frac{\partial p^0}{\partial \nu} = -D(p^0 - p^\text{ext}) & \text{ on } (0,\infty) \times \partial\Omega, \\
p^0(0,\cdot) = p^\text{init} & \quad\text{ in } \Omega,
\end{cases}
\label{eqn:pb1}
\end{equation}
where $\Omega = (-L,L)$ is a bounded domain in $\mathbb{R}$. The steady-state solutions satisfy the following elliptic boundary-value problem,
\begin{equation}
\begin{cases}
-p'' & = f(p) \qquad \qquad \qquad \text{ in } (-L,L), \\
p'(L) & = -D(p(L) - p^\text{ext}), \\
-p'(-L) & = - D(p(-L) - p^\text{ext}).
\end{cases}
\label{eqn:pb2}
\end{equation}
where $L > 0$, $D > 0, p^\text{ext} \in (0,1)$ are constants. The reaction term $f: [0,1] \rightarrow \mathbb{R}$ is of class $\mathcal{C}^1$, with three roots $\{0, \theta, 1\}$ where $0 < \theta < 1$ (see \cref{fig:f(q)}). The dynamics of \cref{eqn:pb1} can be determined by the structure of steady-state solutions which satisfy \cref{eqn:pb2}. Note that, by changing variable from $x$ to $y = x/L$, then \cref{eqn:pb2} becomes $p''(y) + L^2 f(p(y)) = 0$ on $(-1,1)$ with parameter $L^2$. Thus, we study problem \cref{eqn:pb2} with three parameters $L> 0, D >0$, and $p^\text{ext} \in (0,1)$.
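A steady state of \cref{eqn:pb2} can be computed numerically by shooting: choose $a = p(-L)$, impose the left Robin condition $p'(-L) = D(a - p^\text{ext})$, integrate $p'' = -f(p)$ across the interval, and adjust $a$ until the right Robin condition holds. A sketch (with the illustrative choices $L = 1$, $D = 1$, $p^\text{ext} = 0.3$ and the bistable cubic $f(p) = p(1-p)(p-\theta)$, $\theta = 1/2$):

```python
def f(p, theta=0.5):
    """Bistable nonlinearity with roots 0, theta and 1."""
    return p * (1.0 - p) * (p - theta)

def integrate(a, L=1.0, D=1.0, p_ext=0.3, n=2000):
    """RK4 for p'' = -f(p) on [-L, L] with p(-L) = a and the left
    Robin condition p'(-L) = D (a - p_ext); returns p(L), p'(L), trajectory."""
    h = 2.0 * L / n
    p, q = a, D * (a - p_ext)
    traj = [p]
    for _ in range(n):
        k1p, k1q = q, -f(p)
        k2p, k2q = q + 0.5 * h * k1q, -f(p + 0.5 * h * k1p)
        k3p, k3q = q + 0.5 * h * k2q, -f(p + 0.5 * h * k2p)
        k4p, k4q = q + h * k3q, -f(p + h * k3p)
        p += h * (k1p + 2.0 * k2p + 2.0 * k3p + k4p) / 6.0
        q += h * (k1q + 2.0 * k2q + 2.0 * k3q + k4q) / 6.0
        traj.append(p)
    return p, q, traj

def residual(a, D=1.0, p_ext=0.3):
    """Mismatch in the right Robin condition p'(L) + D (p(L) - p_ext) = 0."""
    pL, qL, _ = integrate(a)
    return qL + D * (pL - p_ext)

# The residual changes sign on [0.2, 0.3]; bisect to a steady state.
lo, hi = 0.2, 0.3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
a_star = 0.5 * (lo + hi)
print("p(-L) =", a_star, " residual =", residual(a_star))
```

Scanning $a$ over $[0,1]$ for further sign changes of the residual gives the other steady states and a numerical picture of the multiplicity question.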
The Robin boundary condition considered in \cref{eqn:pb1} and \cref{eqn:pb2} means that the flow across the boundary points is proportional to the difference between the surrounding density and the density just inside the interval. Here we assume that $p^\text{ext}$ does not depend on space variable $x$ nor time variable $t$.
The existence of classical solutions for such problems was studied widely in the theory of elliptic and parabolic differential equations (see, for example, \cite{PAO}). In our problem, due to difficulties caused by the inhomogeneous Robin boundary condition and the variety of parameters, we cannot obtain the exact multiplicity of solutions. However, our main results in \cref{thm:exist} and \ref{thm:nonSM} show how the existence of solutions and their ``shapes'' depend on the parameters $D, p^\text{ext}$ and $L$. The ideas of phase-plane analysis and the time-map method from \cite{SMO2} are extended to prove these results.
Since the solutions of \cref{eqn:pb2} are equilibria of \cref{eqn:pb1}, their stability and instability are the next problems that we want to investigate. The stability analysis of the non-constant steady-state solutions is a delicate problem especially when the system under consideration has multiple steady-state solutions. In \cref{thm:stability}, we use the principle of linearized stability to give some sufficient conditions for stability. Finally, as a consequence of these theorems, we obtain \cref{result} which provides a comprehensive result about existence and stability of the steady-state solutions when the size $L$ is small.
The main biological application of our results is the control of dengue vectors. {\it Aedes} mosquitoes are vectors of many vector-borne diseases, including dengue. Recently, a biological control method using an endosymbiotic bacterium called {\it Wolbachia} has gathered a lot of attention. {\it Wolbachia} helps reduce the vectorial capacity of mosquitoes and can be passed to the next generation. Massive release of mosquitoes carrying this bacterium in the field is thus considered as a possible method to replace wild mosquitoes and prevent dengue epidemics. Reaction-diffusion equations have been used in previous works to model this replacement strategy (see \cite{BAR, CHA, STR16}). In this work, we introduce the Robin boundary condition to describe the migration of mosquitoes through the boundary. Since inflows of wild mosquitoes and outflows of mosquitoes carrying {\it Wolbachia} may affect the efficiency of the method, the study of existence and stability of steady-state solutions depending on parameters $D, p^\text{ext}$ and $L$ as in \cref{eqn:pb2}, \cref{eqn:pb1} will provide necessary information to maintain the success of the control method using {\it Wolbachia} under the effects of migration.
Problem (\ref{eqn:pb1}) arises often in the study of population dynamics. $p^0$ is usually considered as the relative proportion of one population when there are two populations in competition. This is why we focus only on solutions with values that belong to the interval $[0,1]$. \cref{eqn:pb1} is derived from the idea in \cite{STR16}, where the authors reduce a reaction-diffusion system modelling the competition between two populations $n_1$ and $n_2$ to a scalar equation on the proportion $p = \dfrac{n_1}{n_1 + n_2}$. More precisely, they consider two populations with a very high fecundity rate scaled by a parameter $\epsilon > 0$ and propose the following system depending on $\epsilon$ for $t > 0, x \in \mathbb{R}^d$,
\begin{equation}
\begin{cases}
\partial_t n_1^\epsilon - \Delta n_1^\epsilon = n_1^\epsilon f_1(n_1^\epsilon,n_2^\epsilon), \\
\partial_t n_2^\epsilon - \Delta n_2^\epsilon = n_2^\epsilon f_2(n_1^\epsilon,n_2^\epsilon).
\label{eqn:redu1}
\end{cases}
\end{equation}
The authors proved that, under appropriate conditions, the proportion $p^\epsilon = \dfrac{n_1^\epsilon}{n_1^\epsilon + n_2^\epsilon}$ converges strongly in $L^2(0,T;L^2(\mathbb{R}^d))$, and weakly in $L^2(0,T;H^1(\mathbb{R}^d))$, as $\epsilon \rightarrow 0$, to the solution $p^0$ of the scalar reaction-diffusion equation $\partial_t p^0 - \Delta p^0 = f(p^0)$, where $f$ can be given explicitly in terms of $f_1, f_2$.
Now, in order to describe and study the migration phenomenon, we consider system \cref{eqn:redu1} in a bounded domain $\Omega$ and introduce boundary conditions characterizing the inflow and outflow of individuals as follows
\begin{equation}
\begin{cases}
\frac{\partial n_1^\epsilon}{\partial \nu} = -D(n_1^\epsilon - n_1^{\text{ext},\epsilon}) & \text{ on } (0,T) \times \partial \Omega ,\\
\frac{\partial n_2^\epsilon}{\partial \nu} = -D(n_2^\epsilon - n_2^{\text{ext},\epsilon}) & \text{ on } (0,T) \times \partial \Omega,
\label{eqn:redu2}
\end{cases}
\end{equation}
where $n_1^{\text{ext},\epsilon}, n_2^{\text{ext},\epsilon}$ depend on $\epsilon$ but not on time $t$ or position $x$. \cref{eqn:redu2} models the tendency of the population to cross the boundary, with rates proportional to the difference between the surrounding density and the density just inside $\Omega$. Reusing the idea in \cite{STR16}, we prove in \cref{sec:converge} that the proportion $p^\epsilon = \dfrac{n_1^\epsilon}{n_1^\epsilon + n_2^\epsilon}$ converges on any bounded time interval to the solution of \cref{eqn:pb1} as $\epsilon$ goes to zero. Hence, we can reduce the system \cref{eqn:redu1}, \cref{eqn:redu2} to the simpler setting \cref{eqn:pb1}. The proof is based on a relative compactness argument that was also used in previous works on singular limits (e.g. \cite{STR16, HIL08, HIL13}), but here the trace theorem is needed to pass to the limit on the boundary.
The outline of this work is the following. In the next section, we present the setting of the problem and the main results. In \cref{sec:proof}, we provide detailed proofs of these results. Section \ref{sec:bio} is devoted to an application to the biological control of mosquitoes, where we also present numerical simulations illustrating the theoretical results. \cref{sec:converge} is devoted to proving the asymptotic limit of a 2-by-2 reaction-diffusion system as the reaction rate goes to infinity. Finally, we end this article with a conclusion and perspectives section.
\section{Results on the steady-state solutions}
\label{sec:result}
\subsection{Setting of the problem}
In one-dimensional space, consider the system \cref{eqn:pb1} in a bounded domain $\Omega = (-L,L) \subset \mathbb{R}$. Let $D > 0$ and $p^\text{ext} \in (0,1)$ be constants, and let $p^\text{init}(x) \in [0,1]$ for all $x \in (-L,L)$.
The reaction term $f$ satisfies the following assumptions
\begin{assumption}[bistability]
\label{reaction}
Function $f: [0,1] \rightarrow \mathbb{R}$ is of class $\mathcal{C}^1([0,1])$ and $f(0) = f(\theta) = f(1) = 0$ with $\theta \in (0,1)$, $f(q) < 0$ for all $q \in (0,\theta)$, and $f(q) > 0$ for all $q \in (\theta,1)$. Moreover, $\displaystyle \int_{0}^{1} f(s)ds > 0$.
\end{assumption}
\begin{assumption}[convexity]
\label{convexity}
There exist $\alpha_1 \in (0,\theta)$ and $\alpha_2 \in (\theta,1)$ such that $f'(\alpha_1) = f'(\alpha_2) = 0$, $f'(q) < 0$ for any $q \in [0,\alpha_1) \cup (\alpha_2,1]$, and $f'(q) > 0$ for $q \in (\alpha_1,\alpha_2)$. Moreover, $f$ is convex on $(0,\alpha_1)$ and concave on $(\alpha_2,1)$.
\end{assumption}
A function $f$ satisfying \cref{reaction} and \cref{convexity} is illustrated in \cref{fig:f(q)}.
\begin{remark}
\label{extreme}
\begin{enumerate}[label=(\alph*),leftmargin=1\parindent]
\item[]
\item Due to \cref{reaction} and the fact that $p^\text{ext} \in (0,1)$ and $p^\text{init}(x) \in [0,1]$ for any $x$, the constants $0$ and $1$ are respectively sub- and super-solutions of problem \cref{eqn:pb1}. Since $f$ is Lipschitz continuous on $[0,1]$, Theorem 4.1, Section 2.4 in \cite{PAO} implies that problem \cref{eqn:pb1} has a unique solution $p^0 \in \mathcal{C}^{1,2}((0,T]\times \Omega)$ with $0 \leq p^0(t,x) \leq 1$ for all $x \in (-L,L), t > 0$.
\item Again by \cref{reaction}, $0$ and $1$ are respectively sub- and super-solutions of \cref{eqn:pb2}. For fixed values of $D, p^\text{ext}$ and $L$, the same method as in \cite{PAO} yields a $\mathcal{C}^2$ solution of \cref{eqn:pb2} with values in $[0,1]$. However, \cref{reaction} and \cref{convexity} on $f$ are not enough to guarantee uniqueness. In the following section, we prove that the stationary problem \cref{eqn:pb2} may have multiple solutions and that their existence depends on the values of the parameters.
\item
For any $p^\text{ext} \in (0,1)$ and $p^\text{ext} \neq \theta$, system \cref{eqn:pb2} cannot have a monotone solution on the whole interval $(-L,L)$. Indeed, assume that \cref{eqn:pb2} admits an increasing solution $p$ on $(-L,L)$ (the case when $p$ is decreasing on $(-L,L)$ is analogous). Thus, we have $p'(x) \geq 0$ for all $x \in [-L,L]$ and $p(L) > p(-L)$. So thanks to the boundary condition of \cref{eqn:pb2}, one has
\begin{center}
$ D p^\text{ext} = p'(L) + Dp(L) \geq Dp(L) > Dp(-L) \geq -p'(-L) + Dp(-L) = D p^\text{ext},$
\end{center}
which is impossible. Therefore, the solutions of system \cref{eqn:pb2} always admit at least one local extremum in the open interval $(-L,L)$.
\end{enumerate}
\end{remark}
\noindent To study system \cref{eqn:pb2}, we define the function $F$ (see \cref{fig:F(q)}) by
\begin{equation}
F(q) = \displaystyle \int_{0}^{q} f(s)ds,
\label{eqn:F}
\end{equation}
so that $F'(q) = f(q)$ and $F(0) = 0$. By \cref{reaction}, $F$ attains its minimum at $q = \theta$ and local maxima at $q = 0$ and $q = 1$. Since $\displaystyle \int_{0}^{1} f(s) ds >0$, we have $F(1) > F(0)$, hence $F(1) = \displaystyle \max_{[0,1]} F$ and $F(\theta) = \displaystyle \min_{[0,1]}F$.
Moreover, $F(\theta) < F(0)$ and $F$ is increasing on $(\theta,1)$ (since $F'(q) = f(q) > 0$ for any $q \in (\theta,1)$). Thus, there exists a unique value $\beta \in (\theta,1)$ such that
\begin{equation}
F(\beta) = F(0) = 0.
\label{eqn:beta}
\end{equation}
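To make the threshold $\beta$ concrete, the sketch below computes it numerically for one illustrative choice of nonlinearity (an assumption for the sketch, not the paper's $f$): the classical bistable cubic $f(q) = q(1-q)(q-\theta)$ with $\theta = 0.3$, which satisfies \cref{reaction} since $\int_0^1 f(s)\,ds = (1-2\theta)/12 > 0$.

```python
# Illustration (assumed model, not from the paper): bistable cubic
# f(q) = q(1-q)(q-theta) with theta = 0.3. Its primitive is
# F(q) = -q^4/4 + (1+theta) q^3/3 - theta q^2/2, and beta in (theta, 1)
# is the unique root of F(beta) = F(0) = 0 on that interval.

theta = 0.3

def F(q):
    return -q**4 / 4 + (1 + theta) * q**3 / 3 - theta * q**2 / 2

def bisect(g, a, b, tol=1e-12):
    """Simple bisection: g(a) and g(b) must have opposite signs."""
    ga = g(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        gm = g(m)
        if ga * gm <= 0:
            b = m
        else:
            a, ga = m, gm
        if b - a < tol:
            break
    return 0.5 * (a + b)

# F(theta) < 0 = F(0) and F is increasing on (theta, 1) with F(1) > 0,
# so the sign change brackets beta.
beta = bisect(F, theta, 1.0)
print(beta)
```

For $\theta = 0.3$ this gives $\beta \approx 0.478$, consistent with $\theta < \beta < 1$.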
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[scale=3]{r.png}
\caption{$f(q)$}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\label{fig:f(q)}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[scale=3]{F.png}
\caption{$F(q)$}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\label{fig:F(q)}
\end{subfigure}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Graph of functions $f$ and $F$}
\label{fig:func}
\end{figure}
The main results of the present work concern the existence and stability of steady-state solutions of \cref{eqn:pb1}, i.e. solutions of \cref{eqn:pb2}.
\subsection{Existence of steady-state solutions}
\label{sec:exist}
We first focus on two types of steady-state solutions, defined as follows
\begin{definition} Consider a steady-state solution $p(x)$,
$p$ is called a symmetric-decreasing (SD) solution when $p$ is symmetric on $(-L,L)$ with values in $[0,1]$, decreasing on $(0,L)$ and $p'(0) = 0$ (see \cref{fig:p1}).
Similarly, $p$ is called a symmetric-increasing (SI) solution when $p$ is symmetric on $(-L,L)$ with values in $[0,1]$, increasing on $(0,L)$ and $p'(0) = 0$ (see \cref{fig:p2}).
Any solution which is either (SD) or (SI) is called a symmetric-monotone (SM) solution.
\end{definition}
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{p1.png}
\caption{(SD): $p$ decreasing on $(0,L)$}
\label{fig:p1}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{p2.png}
\caption{(SI): $p$ increasing on $(0,L)$}
\label{fig:p2}
\end{subfigure}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Symmetric steady-state solutions $p$}
\label{fig:functionp}
\end{figure}
The following theorems present the main existence results for (SM) solutions depending on the parameters. For each value of $p^\text{ext} \in (0,1)$ and $D > 0$, we find the critical values of $L$ for which \cref{eqn:pb2} admits solutions.
\begin{theorem}
\label{thm:exist}
In a bounded domain $\Omega = (-L,L) \subset \mathbb{R}$, consider the stationary problem \cref{eqn:pb2}. Assume that the reaction term $f$ satisfies \cref{reaction} and \cref{convexity}. Then, there exist two functions
\begin{equation}
\begin{array}{c r c l}
M_{d}, M_i: & (0,1) \times (0,+\infty) & \longrightarrow & [0,+\infty], \\
& (p^\text{ext},D) & \longmapsto & M_d(p^\text{ext},D), M_i(p^\text{ext},D),
\end{array}
\end{equation}
such that for any $p^\text{ext} \in (0,1), D > 0$, problem \cref{eqn:pb2} admits at least one (SD) solution (resp., (SI) solution) if and only if $L \geq M_{d}(p^\text{ext},D)$ (resp., $L \geq M_{i}(p^\text{ext},D)$) and the values of these solutions are in $[p^\text{ext}, 1]$ (resp., $[0,p^\text{ext}]$). More precisely,
\begin{enumerate}[label=(\alph*),leftmargin=1\parindent]
\item If $0 < p^\text{ext} < \theta$, then for any $D > 0$, $M_{i}(p^\text{ext},D) = 0$ and $M_{d}(p^\text{ext},D) \in (0,+\infty)$.
\noindent Moreover, if $p^\text{ext} \leq \alpha_1$, the (SI) solution is unique.
\item If $\theta < p^\text{ext} < 1$, then for any $D > 0$, $M_d(p^\text{ext},D) = 0$. If $\alpha_2 \leq p^\text{ext}$, the (SD) solution is unique. Moreover, consider $\beta$ as in \cref{eqn:beta},
$\bullet$ if $p^\text{ext} \leq \beta$, then $M_i(p^\text{ext},D) \in (0,+\infty)$ for any $D > 0$;
$\bullet$ if $p^\text{ext} > \beta$, then there exists a constant $D_* > 0$ such that $M_i(p^\text{ext},D) \in (0,+\infty)$ for any $D < D_*$, and $M_i(p^\text{ext}, D) = +\infty$ for $D \geq D_*$.
\item If $p^\text{ext} = \theta $, then $M_d(\theta, D) = M_i(\theta, D)= 0$. Moreover, there exists a constant solution $p \equiv \theta$.
\end{enumerate}
\end{theorem}
In the statement of the above result, $M_i = 0$ means that for any $L >0$, \cref{eqn:pb2} always admits a (SI) solution, while $M_i = +\infty$ means that there is no (SI) solution even when $L$ is large. The same interpretation applies to $M_d$.
Besides, problem \cref{eqn:pb2} can also admit solutions which are neither (SD) nor (SI). The following theorem provides an existence result for those solutions.
\begin{theorem}
\label{thm:nonSM}
In a bounded domain $\Omega = (-L,L) \subset \mathbb{R}$, consider the stationary problem \cref{eqn:pb2}. Assume that the reaction term $f$ satisfies \cref{reaction} and \cref{convexity}. Then, there exists a function
\begin{equation}
\begin{array}{c r c l}
M_*: & (0,1) \times (0,+\infty) & \longrightarrow & [0,+\infty], \\
& (p^\text{ext},D) & \longmapsto & M_*(p^\text{ext},D),
\end{array}
\end{equation}
such that for any $p^\text{ext} \in (0,1), D > 0$, problem \cref{eqn:pb2} admits at least one solution which is not (SM) if and only if $L \geq M_{*}(p^\text{ext},D)$. Moreover,
\noindent $\bullet$ If $p^\text{ext} \leq \beta $, then for any $D > 0$, one has
\begin{equation}
0 < M_i(p^\text{ext},D) + M_d(p^\text{ext},D) < M_*(p^\text{ext},D) < +\infty.
\end{equation}
$\bullet$ If $p^\text{ext} > \beta $, then for any $D < D_*$, one has $0 < M_i(p^\text{ext},D) < M_*(p^\text{ext},D) < +\infty$. Otherwise, for $D \geq D_*$, $M_*(p^\text{ext},D) = +\infty$. Here, $D_*$ was defined in \cref{thm:exist}.
\end{theorem}
\noindent The construction of $M_i, M_d, M_*$ will be carried out in the proofs in \cref{sec:proof}. The idea of the proof is based on a careful study of the phase portrait of \cref{eqn:pb2}.
In the next section, we present a result about stability and instability of steady-state solutions of problem \cref{eqn:pb2}.
\subsection{Stability of steady-state solutions}
The definition of stability and instability used in the present work comes from Lyapunov stability
\begin{definition}
A steady-state solution $p(x)$ of \cref{eqn:pb1} is called stable if for any constant $\epsilon > 0$, there exists a constant $\delta > 0$ such that when $||p^\text{init} - p||_{ \infty} < \delta$, one has
\begin{equation}
||p^0(t,\cdot) - p||_{ \infty} < \epsilon, \quad \text{ for all } t > 0,
\end{equation}
where $p^0(t,x)$ is the unique solution of \cref{eqn:pb1}. If, in addition,
\begin{equation}
\displaystyle \lim_{t \rightarrow \infty}||p^0(t,\cdot) - p||_{ \infty} = 0,
\end{equation}
then $p$ is called asymptotically stable. The steady-state solution $p$ is called unstable if it is not stable.
\end{definition}
The following theorem provides sufficient conditions for the stability of steady-state solutions given in \cref{sec:exist}.
\begin{theorem}
\label{thm:stability}
In the bounded domain $\Omega = (-L,L) \subset \mathbb{R}$, consider the problem \cref{eqn:pb1} with the reaction term satisfying \cref{reaction} and \cref{convexity}. There exists a constant $\lambda_1 \in \left(0,\dfrac{\pi^2}{4L^2} \right)$ such that for any steady-state solution $p$ of \cref{eqn:pb1},
$\bullet$ If $f'(p(x)) > \lambda_1$ for any $x \in (-L,L)$, then $p$ is unstable.
$\bullet$ If $f'(p(x)) < \lambda_1$ for any $x \in (-L,L)$, then $p$ is asymptotically stable.
\end{theorem}
The principle of linearized stability is used to prove this theorem (see \cref{sec:proof}). Here, $\lambda_1$ is the principal eigenvalue of the linearized problem, namely the smallest positive solution of the equation $\sqrt{\lambda}\tan{\left(L\sqrt{\lambda}\right)} = D$.
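As a quick numerical illustration of this characterization of $\lambda_1$ (with hypothetical values $L = 1$, $D = 1$, chosen only for the sketch), one can locate the smallest positive root of $\sqrt{\lambda}\tan(L\sqrt{\lambda}) = D$ by bisection on $\left(0, \pi^2/(4L^2)\right)$, where the left-hand side increases from $0$ to $+\infty$:

```python
# Bisection for the principal eigenvalue (illustrative values L = 1, D = 1;
# these are assumptions for the sketch, not values from the paper).
import math

L, D = 1.0, 1.0

def g(lam):
    # lambda_1 solves sqrt(lam) * tan(L * sqrt(lam)) = D
    return math.sqrt(lam) * math.tan(L * math.sqrt(lam)) - D

# On (0, (pi/(2L))^2) the left-hand side increases from 0 to +infinity,
# so g changes sign exactly once there.
a, b = 1e-9, (math.pi / (2 * L)) ** 2 - 1e-9
for _ in range(200):
    m = 0.5 * (a + b)
    if g(a) * g(m) <= 0:
        b = m
    else:
        a = m
lam1 = 0.5 * (a + b)
print(lam1)   # approx 0.7402, below pi^2/4 approx 2.4674
```

In particular, the computed $\lambda_1 \approx 0.740$ indeed lies in $\left(0, \pi^2/(4L^2)\right)$, as stated in \cref{thm:stability}.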
\begin{remark}
By \cref{convexity}, $f'(q) \leq 0 < \lambda_1$ for any $q \in [0,\alpha_1] \cup [\alpha_2,1] $, so we can deduce that the steady-state solutions with values smaller than $\alpha_1$ or larger than $\alpha_2$ are asymptotically stable.
\end{remark}
As a consequence of Theorems \ref{thm:exist}, \ref{thm:nonSM}, and \ref{thm:stability}, the following important result provides complete information about existence and stability of steady-state solutions in some special cases.
\begin{corollary}
\label{result}
In the bounded domain $\Omega = (-L,L) \subset \mathbb{R}$, consider the problem \cref{eqn:pb1} with the reaction term satisfying \cref{reaction} and \cref{convexity}. Then for any $D > 0$, we have
$\bullet$ If $p^\text{ext} \leq \alpha_1$, for any $L > 0$, there exists exactly one (SI) steady-state solution $p$ and it is asymptotically stable. Moreover, if $L < M_d(p^\text{ext},D)$, then $p$ is the unique steady-state solution of \cref{eqn:pb1}.
$\bullet$ If $p^\text{ext} \geq \alpha_2$, for any $L > 0$, there exists exactly one (SD) steady-state solution $p$ and it is asymptotically stable. Moreover, if $L < M_i(p^\text{ext},D)$, then $p$ is the unique steady-state solution of \cref{eqn:pb1}.
\end{corollary}
\begin{remark}
This corollary gives us a comprehensive view of the long-time behavior of solutions of \cref{eqn:pb1} when the size $L$ of the domain is small. In this case, the unique steady-state solution $p$ is symmetric, monotone on each half of $\Omega$, and asymptotically stable. Its values are close to $0$ if $p^\text{ext}$ is small and close to $1$ if $p^\text{ext}$ is large. We discuss an essential application of this result in \cref{sec:bio}.
\end{remark}
\section{Proof of the theorems}
\label{sec:proof}
\subsection{Proof of existence}
In this section, we use phase-plane analysis to prove the existence of both (SM) and non-(SM) steady-state solutions depending on the parameters. The studies of (SD) and (SI) solutions are presented in \cref{sec:SD} and \cref{sec:SI}, respectively. Then, using these results, we prove \cref{thm:exist}. The proof of \cref{thm:nonSM} is presented afterwards using the same technique.
First, we introduce the following function
\begin{equation}
E(p,p') = \dfrac{(p')^2}{2} + F(p).
\label{eqn:energy}
\end{equation}
Since $\dfrac{d}{dx} E(p,p') = p'(p'' + f(p)) = 0$, the quantity $E(p,p')$ is constant along the orbits of \cref{eqn:pb2}. From \cref{extreme}(c), we can deduce that there exists an $x_0 \in (-L,L)$ such that $p'(x_0) = 0$, thus one has
\begin{equation}
E(p(x_0),0) = E(p(x),p'(x)),
\end{equation}
for any $x \in (-L,L)$. Therefore, the relation between $p'$ and $p$ is as below
\begin{equation}
p' = \pm \sqrt{2F(p(x_0)) - 2F(p)}.
\label{eqn:phase}
\end{equation}
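Relation \cref{eqn:phase} can be checked numerically: along any solution of $p'' + f(p) = 0$, the energy $E(p,p')$ of \cref{eqn:energy} stays constant. The sketch below (using the illustrative cubic $f(q) = q(1-q)(q-\theta)$ with $\theta = 0.3$, an assumption not taken from the paper) integrates the equation with a Runge-Kutta scheme from a point on the axis $p' = 0$ and verifies that the drift of $E$ is negligible.

```python
# Numerical sanity check of the first integral (assumed cubic
# f(q) = q(1-q)(q-theta), theta = 0.3; none of these values come from
# the paper). We integrate p'' = -f(p) by classical RK4 starting on the
# axis p' = 0 and monitor E(p, p') = (p')^2/2 + F(p).
theta = 0.3

def f(q):
    return q * (1 - q) * (q - theta)

def F(q):
    return -q**4 / 4 + (1 + theta) * q**3 / 3 - theta * q**2 / 2

def rhs(y):
    p, v = y
    return (v, -f(p))          # first-order system: p' = v, v' = -f(p)

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs((y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
    k3 = rhs((y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
    k4 = rhs((y[0] + h * k3[0], y[1] + h * k3[1]))
    return (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def E(y):
    return 0.5 * y[1] ** 2 + F(y[0])

y = (0.45, 0.0)                # a point on the axis p' = 0, as at x = x0
E0 = E(y)
for _ in range(1000):          # integrate over x in [0, 10]
    y = rk4_step(y, 1e-2)
print(abs(E(y) - E0))          # the energy drift is tiny
```

The initial point $(0.45, 0)$ lies on a closed orbit wrapping around $(\theta,0)$ (its energy is below $F(0)$), so the trajectory stays bounded while $E$ is conserved up to the integration error.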
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{phase1.png}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{$p^\text{ext} < \theta < \beta, D > 0$}
\label{fig:phase}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{phase2.png}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{$p^\text{ext} > \beta, D \leq D_*$}
\label{fig:phase2}
\end{subfigure}
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Phase portrait of \cref{eqn:pb2}}
\label{fig:phaseportrait}
\end{figure}
According to this relation, one has a phase plane as in \cref{fig:phase}, in which the curves illustrate the relation between $p'(x)$ and $p(x)$ in \cref{eqn:phase} for different values of $p(x_0)$. We can see that some curves do not end on the axis $p=0$ but wrap around the point $(\theta,0)$. This is due to the fact that for any $p_1 \in [\theta,\beta]$, there exists a value $p_2 \in [0,\theta]$ such that $F(p_1) = F(p_2)$. Thus, if a curve passes through the point $(p_1,0)$, it also passes through the point $(p_2,0)$ on the axis $p'=0$. Moreover, those curves only exist if their intersection with the axis $p'=0$ has $p$-coordinate less than or equal to $\beta$. Besides, the two straight lines show the relation between $p'$ and $p$ at the boundary points. Solutions of \cref{eqn:pb2} correspond to the orbits that connect the intersection of the curves with the line $p' = D(p-p^\text{ext})$ to the intersection of the curves with the line $p' = -D(p-p^\text{ext})$.
In the phase plane in \cref{fig:phase}, orbit $T_1$ describes a (SD) solution, while orbit $T_2$ corresponds to a (SI) solution. On the other hand, the solid curve $T_3$ shows the orbit of a steady-state solution which is not symmetric-monotone.
\begin{remark}{\it (Graphical interpretation of $D_*$)} The (SI) solutions (see \cref{fig:p2}) have orbits like $T_2$ in \cref{fig:phase}. This type of orbit only exists when the lines $p' = \pm D(p - p^\text{ext})$ intersect the curves wrapping around the point $(\theta,0)$. In the case when $p^\text{ext} > \beta$, the constant $D_* > 0$ in \cref{thm:exist} is the slope of the tangent line to the curve passing through $(\beta,0)$ as in \cref{fig:phase2}. Hence, if $D > D_*$, there exists no (SI) solution. We construct the value of $D_*$ explicitly in \cref{Type2} below.
\end{remark}
Next, we establish some relations between the solution $p$ and the parameters based on the phase portrait above. For any $x > x_0$, if $p$ is monotone on $(x_0,x)$, we can invert $x \mapsto p(x)$ into a function $p \mapsto X(p)$, which satisfies $X'(p) = \dfrac{\pm 1}{\sqrt{2F(p(x_0)) - 2F(p)}} $. Integrating this equation, we obtain
\begin{equation}
x - x_0 = \displaystyle \int_{p(x_0)}^{p(x)} \dfrac{ (-1)^k ds}{\sqrt{2F(p(x_0)) - 2F(s)}},
\label{eqn:time}
\end{equation}
where $k = 1$ if $p$ is decreasing and $k = 2$ if $p$ is increasing on $(x_0,x)$. We can obtain the analogous formula for $x < x_0$.
We first focus on symmetric-monotone (SM) solutions, for which $p'(0) = 0$, and analyze the integral in \cref{eqn:time} with $x = L, x_0 = 0$. For any $p^\text{ext} \in (0,1)$, using \cref{eqn:phase}, we have
\begin{equation}
F(p(0)) = F(p(L)) + \dfrac{1}{2} D^2 \left( p(L) - p^\text{ext}\right)^2 = G(p(L)),
\label{eqn:eq1}
\end{equation}
for $F$ defined in \cref{eqn:F} and
\begin{equation}
G(q) := F(q) + \dfrac{1}{2} D^2 (q - p^\text{ext} )^2,
\label{eqn:G}
\end{equation}
and from \cref{eqn:time} with $x=L, x_0 = 0$, we have
\begin{equation}
L = \displaystyle \int_{p(0)}^{p(L)} \dfrac{ (-1)^k ds}{\sqrt{2F(p(0)) - 2F(s)}},
\label{eqn:eq2}
\end{equation}
where $k = 1$ if $p$ is decreasing on $(0,L)$, $k = 2$ if $p$ is increasing on $(0,L)$.
Thus, a (SM) solution of \cref{eqn:pb2} exists if there exist values $p(L)$ and $p(0)$ satisfying \cref{eqn:eq1} and \cref{eqn:eq2}. When such values exist, we can recover the value of $p(x)$ for any $x \in (-L,L)$ using \cref{eqn:time}.
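The procedure just described can be carried out numerically. In the sketch below, all parameter values and the cubic $f(q) = q(1-q)(q-\theta)$ are assumptions for illustration: we pick a candidate boundary value $q = p(L) > p^\text{ext}$ in the regime $p^\text{ext} > \theta$, recover $p(0) = F_1^{-1}(G(q))$ from \cref{eqn:eq1} by bisection, and evaluate the corresponding half-length $L$ from \cref{eqn:eq2}. The substitution $s = p(0) - (p(0)-q)u^2$ removes the integrable singularity of the integrand at $s = p(0)$.

```python
# Illustrative shooting computation (all values assumed, not the paper's):
# cubic f(q) = q(1-q)(q-theta), theta = 0.3, D = 1, p_ext = 0.6 > theta,
# so candidate boundary values q = p(L) lie in (p_ext, 1). We recover
# p(0) = F_1^{-1}(G(q)) and the half-length L via the time-map integral.
import math

theta, D, p_ext = 0.3, 1.0, 0.6

def F(q):
    return -q**4 / 4 + (1 + theta) * q**3 / 3 - theta * q**2 / 2

def G(q):
    return F(q) + 0.5 * D**2 * (q - p_ext)**2

def F1_inv(y, tol=1e-13):
    """Inverse of F restricted to (theta, 1), where F is increasing."""
    a, b = theta, 1.0
    while b - a > tol:
        m = 0.5 * (a + b)
        if F(m) < y:
            a = m
        else:
            b = m
    return a                   # left endpoint, so that F(p0) <= y

def time_map(q, n=20000):
    """L = int_q^{p0} ds / sqrt(2 G(q) - 2 F(s)), p0 = F_1^{-1}(G(q)).

    The substitution s = p0 - (p0 - q) u^2 removes the inverse-square-root
    singularity at s = p0, so the midpoint rule converges quickly.
    """
    g = G(q)
    p0 = F1_inv(g)
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        s = p0 - (p0 - q) * u**2
        total += 2 * (p0 - q) * u / math.sqrt(2 * g - 2 * F(s)) / n
    return total, p0

q = 0.7
L_half, p0 = time_map(q)
print(p0, L_half)
```

For these assumed values one finds $p(0) \approx 0.759$ and $L \approx 1.2$; varying $q$ traces out the map $q \mapsto \mathcal{F}_1(q)$ whose level sets solve \cref{eqn:sys1}.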
Before proving the existence of such values of $p(0)$ and $p(L)$, we establish some useful properties of the function $G$ defined in \cref{eqn:G}. It is continuous on $[0,1]$ and $G(q)\geq F(q) $ for all $q \in [0,1]$. Moreover, the following lemma shows that $G$ has a unique minimum point.
\begin{lemma}
\label{G}
For any $p^\text{ext} \in (0,1)$, there exists a unique value $\overline{q} \in (0,1)$ such that $G'(\overline{q}) = 0$, $G'(q) < 0$ for all $q \in [0,\overline{q})$ and $G'(q) > 0$ for all $q \in (\overline{q},1]$. Particularly, $G(\overline{q}) = \displaystyle \min_{[0,1]} G$.
\end{lemma}
\begin{proof}
We have $
G'(q) = f(q) + D^2(q-p^\text{ext})
$.
We consider the following cases.
{\bf Case 1}: When $p^\text{ext} = \theta$, we have $G'(p^\text{ext}) = G'(\theta) = f(\theta) = 0, G'(q) < 0 $ for all $q \in (0,\theta)$ and $G'(q) > 0$ for all $q \in (\theta,1)$. Thus $\overline{q} = \theta = p^\text{ext}$.
{\bf Case 2}: When $p^\text{ext} < \theta$, we have $G'(q) < 0$ for all $q \in [0,p^\text{ext}]$ and $G'(q) > 0$ for all $q \in [\theta,1]$. So there exists at least one value $\overline{q} \in (p^\text{ext},\theta)$ such that $G'(\overline{q}) = 0$.
For any $\overline{q} \in (p^\text{ext},\theta)$ such that $G'(\overline{q}) = 0$, we have $f(\overline{q}) + D^2 (\overline{q} - p^\text{ext}) = 0$ so that $D^2 = -\dfrac{f(\overline{q})}{\overline{q}- p^\text{ext}}$. We can prove that $G''(\overline{q})$ is strictly positive. Indeed, from \cref{convexity} we have that $\alpha_1$ is the unique value in $(0,\theta)$ such that $f'(\alpha_1) = 0$, thus $f(\alpha_1) = \displaystyle \min_{[0,\theta]} f < 0$.
If $\alpha_1 \leq \overline{q} < \theta $ then $f'(\overline{q}) \geq 0$. One has $ G''(\overline{q}) = f'(\overline{q}) + D^2 > 0$.
If $p^\text{ext} < \overline{q} < \alpha_1$, due to the fact that $f$ is convex in $(0,\alpha_1)$ one has $f'(\overline{q}) \geq \dfrac{f(\overline{q}) - f(p^\text{ext})}{\overline{q} - p^\text{ext}}$. Since $f(p^\text{ext}) < 0$, one has $G''(\overline{q}) = f'(\overline{q}) + D^2 = f'(\overline{q}) - \dfrac{f(\overline{q})}{\overline{q} - p^\text{ext}} > f'(\overline{q}) + \dfrac{f(p^\text{ext}) -f(\overline{q})}{\overline{q} - p^\text{ext}} \geq 0.$
One can deduce that $\overline{q}$ is the unique value in $(0,1)$ such that $G'(\overline{q}) = 0$ and $G(\overline{q}) = \displaystyle \min_{[0,1]} G$, so it satisfies \cref{G}.
{\bf Case 3}: When $p^\text{ext} > \theta$, the proof is analogous to case 2 but using the concavity of $f$ in $(\alpha_2,1)$. We obtain that there exists a unique value $\overline{q}$ in $(\theta,p^\text{ext})$ which satisfies \cref{G}.
\end{proof}
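The case distinction in this proof can be illustrated numerically. With the hypothetical cubic $f(q) = q(1-q)(q-\theta)$, $\theta = 0.3$, and assumed values $D = 1$, $p^\text{ext} = 0.1 < \theta$ (Case 2 above), the critical point $\overline{q}$ of $G$ is the root of $G'(q) = f(q) + D^2(q - p^\text{ext})$ and indeed lies in $(p^\text{ext},\theta)$:

```python
# Hypothetical check of the lemma: illustrative cubic f and assumed
# values D = 1, p_ext = 0.1 < theta = 0.3 (Case 2 of the proof). The
# unique critical point q_bar of G solves G'(q) = f(q) + D^2 (q - p_ext) = 0.
theta, D, p_ext = 0.3, 1.0, 0.1

def f(q):
    return q * (1 - q) * (q - theta)

def G_prime(q):
    return f(q) + D**2 * (q - p_ext)

# G'(p_ext) = f(p_ext) < 0 and G'(theta) = D^2 (theta - p_ext) > 0,
# so bisection on (p_ext, theta) finds the root.
a, b = p_ext, theta
for _ in range(200):
    m = 0.5 * (a + b)
    if G_prime(a) * G_prime(m) <= 0:
        b = m
    else:
        a = m
q_bar = 0.5 * (a + b)
print(q_bar)
```

Here $\overline{q} \approx 0.119 \in (p^\text{ext},\theta)$, as the proof predicts; taking $p^\text{ext} > \theta$ instead would place $\overline{q}$ in $(\theta, p^\text{ext})$ as in Case 3.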
When $p^\text{ext} = \theta$, it is easy to check that $p\equiv \theta$ is a solution of \cref{eqn:pb2}. We now analyze two types of (SM) solutions (see \cref{fig:functionp}) in the following parts.
\subsubsection{Existence of (SD) solutions}
\label{sec:SD}
In this part, the solution $p$ under study is symmetric on $(-L,L)$ and decreasing on $(0,L)$ (see \cref{fig:p1}), so $ p(L) < p(x) < p(0)$ for any $x \in (0,L)$. From \cref{eqn:phase}, we have $F(p(x)) \leq F(p(0))$, so $F'(p(0)) \geq 0$. This implies that $p(0) \in [\theta,1]$. Next, we study the existence of (SD) solutions in two steps:
\noindent {\bf Step 1: Rewriting as a non-linear equation on $p(L)$}
For any $q \in (\theta,1)$, we have $F'(q) = f(q) > 0$ so $F|_{(\theta,1)}: (\theta, 1) \longrightarrow \left(F(\theta), F(1)\right)$ is invertible. Define $F_1^{-1} := (F|_{(\theta,1)})^{-1}: \left( F(\theta), F(1)\right) \longrightarrow (\theta,1)$, and $F_1^{-1}(F(\theta)) = \theta, F^{-1}_1(F(1)) = 1$. Then, $F^{-1}_1$ is continuous in $[F(\theta),F(1)]$. For any $y \in \left( F(\theta),F(1)\right)$, one has $\left( F^{-1}_1\right)'(y) = \dfrac{1}{F'\left( F^{-1}_1(y)\right)} = \dfrac{1}{f\left( F^{-1}_1(y)\right)} > 0 $, so $F^{-1}_1$ is an increasing function in $\left( F(\theta), F(1)\right)$. From \cref{eqn:eq1} and \cref{eqn:eq2}, since $p$ is decreasing in $(0,L)$, we have $L = \displaystyle \int_{p(L)}^{p(0)} \dfrac{ ds}{\sqrt{2G(p(L)) - 2F(s)}}$. Denote
\begin{equation}
\mathcal{F}_1(q) := \displaystyle \int_{q}^{F_1^{-1}(G(q))}\dfrac{ds}{\sqrt{2G(q) - 2F(s)}}.
\label{eqn:F1}
\end{equation}
Hence, a (SD) solution $p$ of system \cref{eqn:pb2} has $p(0) = F^{-1}_1(G(p(L)))$, and $p(L)$ satisfies
\begin{equation}
L = \mathcal{F}_1(p(L)).
\label{eqn:sys1}
\end{equation}
Moreover, one has $p'(x) \leq 0$ for all $x \in (0,L)$ thus $-D(p(L) - p^\text{ext}) = p'(L) \leq 0$. One can deduce that
\begin{equation}
p(L) \geq p^\text{ext}.
\end{equation}
\noindent {\bf Step 2: Solving \cref{eqn:sys1} in $[p^\text{ext},1]$}
The existence of the value $p(L)$ for (SD) solutions is established as follows
\begin{proposition}
\label{Type1}
For any $D > 0, p^\text{ext} \in (0,1)$, we have
\begin{enumerate}[leftmargin=1\parindent]
\item [1. ] If $0 < p^\text{ext} < \theta$, then there exists a constant $M_1 > 0$ such that equation \cref{eqn:sys1} has at least one solution $p(L) \geq p^\text{ext}$ if and only if $L \geq M_1$.
\item [2. ] If $ \theta \leq p^\text{ext} < 1$, then equation \cref{eqn:sys1} admits at least one solution $p(L) \geq p^\text{ext}$ for all $L > 0$. If $p^\text{ext} \geq \alpha_2$, then this solution is unique.
\end{enumerate}
\end{proposition}
\begin{proof}
Since $F_1^{-1}$ is only defined in $[F(\theta),F(1)]$, we need to find $p(L) \in [p^\text{ext},1]$ such that $G(p(L)) \in [F(\theta),F(1)]$.
For all $q \in (0,1)$, we have $G(q) \geq F(q) \geq F(\theta)$ and from \cref{G}, there exists a value $\overline{q} \in (0,1)$ such that $\displaystyle \min_{[0,1]} G = G(\overline{q}) \leq G(p^\text{ext}) = F(p^\text{ext}) < \max_{[0,1]} F = F(1)$. Moreover, one has $G(1) > F(1)$, thus there exists a value $p^* \in (p^\text{ext},1)$ such that $G(p^*) = F(1)$. Then, for all $q \in [p^\text{ext},p^*]$, $G(q) \in [F(\theta),F(1)]$, and we look for $p(L)$ in $[p^\text{ext},p^*]$. Since $F^{-1}_1$ is increasing on $(F(\theta),F(1))$, one has $p(0) = F^{-1}_1(G(p(L))) \geq F^{-1}_1(F(p(L))) \geq p(L). $
The function $\mathcal{F}_1$ in \cref{eqn:F1} is well-defined, continuous, and non-negative on $[p^\text{ext},p^*)$. Moreover, since $F'(1) = 0$, one has $\displaystyle \lim_{p \rightarrow p^*}\mathcal{F}_1(p) = \displaystyle \int_{p^*}^{1} \dfrac{ds}{\sqrt{2F(1) - 2F(s)}} = +\infty$.
{\bf Case 1}: If $0 < p^\text{ext} < \theta$, we will prove that $\mathcal{F}_1$ is strictly positive in $[p^\text{ext},p^*)$. Indeed, for any $y \in [0,1]$, if $y < \theta$, by the definition of $F^{-1}_1$, we have $F^{-1}_1(G(y)) \in [\theta,1]$ so $F^{-1}_1(G(y)) > y$. If $y \geq \theta > p^\text{ext}$ then $G(y) = F(y) + \dfrac{1}{2} D^2(y - p^\text{ext})^2 > F(y)$ so again $F^{-1}_1(G(y)) > y$. Hence $\mathcal{F}_1(y) > 0$ for all $y \in [p^\text{ext},p^*)$. We have $\mathcal{F}_1(p) \rightarrow +\infty$ when $p \rightarrow p^*$, so there exists $p \in [p^\text{ext},p^*)$ such that $M_1:= \mathcal{F}_1(p) = \displaystyle \min_{[p^\text{ext},p^*]} \mathcal{F}_1 > 0$, and system \cref{eqn:sys1} admits at least one solution if and only if $L \geq M_1$.
{\bf Case 2}: If $\theta \leq p^\text{ext} < 1$, one has $G(p^\text{ext}) = F(p^\text{ext})$, hence $F^{-1}_1(G(p^\text{ext})) = p^\text{ext}$ and $\mathcal{F}_1 (p^\text{ext}) = 0$. On the other hand, $\mathcal{F}_1(p) \rightarrow +\infty$ when $p \rightarrow p^*$. Thus, for any $L > 0$, there always exists at least one value $p(L) \in (p^\text{ext},p^*)$ such that $\mathcal{F}_1(p(L)) = L$.
Moreover, when $p^\text{ext} \geq \alpha_2$, we can prove that $\mathcal{F}_1' > 0$ on $(p^\text{ext},p^*)$. Indeed, denoting $\gamma(q) = F_1^{-1}(G(q))$, and changing the variable from $s$ to $t$ such that $s = t\gamma(q) + (1-t)q$, one has
\begin{displaymath}
\mathcal{F}_1(q) = \displaystyle \int_{0}^{1} \dfrac{[\gamma(q) - q]dt}{\sqrt{2F(\gamma(q)) - 2F(t\gamma(q) + (1-t)q)}}.
\end{displaymath}
To simplify, denote $s(q)= t\gamma(q) + (1-t)q$. For any $t \in (0,1)$, one has $q < s(q) < \gamma(q)$. Let us define $\Delta F = F(\gamma(q)) - F(s(q))$, then one has
\noindent $ \sqrt{2}\mathcal{F}_1'(q) = \displaystyle \int_0^1 (\gamma'(q)-1) (\Delta F)^{-1/2} dt - \dfrac{1}{2} \int_0^1 (\Delta F)^{-3/2}(\gamma(q)-q)\dfrac{d\Delta F}{dq} dt $
$ = \displaystyle \int_0^1 (\Delta F)^{-3/2} \left[ (\gamma'(q)-1) \Delta F - \dfrac{1}{2} (\gamma(q)-q) (f(\gamma(q)) \gamma'(q) - f(s(q))s'(q)) \right] dt $.
\noindent Let $P$ denote the expression in the brackets; then
\noindent $ \begin{array}{r l}
P & = (\gamma'-1) \Delta F - \dfrac{1}{2} (\gamma-q) \left[f(\gamma) \gamma' - f(s)(t\gamma' + 1 - t)\right]\\
& = (\gamma'-1) \left[\Delta F -\dfrac{1}{2}(\gamma-q)f(\gamma) + \dfrac{1}{2}(s-q)f(s)\right] - \dfrac{1}{2}(\gamma - q)(f(\gamma) - f(s)),
\end{array}
$
Define $\psi(y) := F(y) - \dfrac{1}{2}f(y)(y - q)$ for any $y \in [q,\gamma(q)]$, then one has $\psi'(y) = \dfrac{1}{2}[f(y) - f'(y)(y-q)] \geq \dfrac{f(q)}{2}> 0$ since $y \geq q > p^\text{ext} \geq \alpha_2$ and $f$ is concave in $(\alpha_2,1)$, $f(q) > 0$. Moreover, $f$ is decreasing on $(\alpha_2,1)$ so $0 < f(\gamma(q)) < f(s(q)) < f(q)$, and $\gamma'(q) = \dfrac{G'(q)}{f(F_1^{-1}(G(q)))} = \dfrac{f(q) + D^2 (q - p^\text{ext})}{f(\gamma(q))} > 1$. Hence, we can deduce that $P = (\gamma'-1)(\psi(\gamma) - \psi(s)) - \dfrac{1}{2}(\gamma - q)(f(\gamma) - f(s)) > 0$ for any $t \in (0,1)$. This proves that function $\mathcal{F}_1$ is increasing on $(p^\text{ext},p^*)$, so the solution of equation \cref{eqn:sys1} is unique.
\end{proof}
\subsubsection{Existence of (SI) solutions}
\label{sec:SI}
In this case, the technique used to prove the existence of (SI) solutions is analogous to that for (SD) solutions, except in the case $p^\text{ext} > \beta$ (case 3 below). Since the proof is not straightforward, it is worth re-establishing this technique for (SI) solutions in the two following steps:
\noindent {\bf Step 1: Rewriting as a non-linear equation on $p(L)$}
Since now $p$ is symmetric on $(-L,L)$ and increasing on $(0,L)$ (see \cref{fig:p2}), we have $ p(0) < p(x) < p(L)$ for any $x \in (0,L)$. From \cref{eqn:phase}, we have $F(p(x)) \leq F(p(0))$, so $F'(p(0)) \leq 0$. This implies that $p(0) \in [0,\theta]$.
For any $q \in (0,\theta)$, we have $F'(q) = f(q) < 0$ so $F|_{(0,\theta)}: (0,\theta) \longrightarrow \left(F(\theta), F(0)\right)$ is invertible. Define $F_2^{-1} := (F|_{(0,\theta)})^{-1}: \left( F(\theta), F(0)\right) \longrightarrow (0,\theta)$, $F_2^{-1}(F(\theta)) = \theta, F^{-1}_2(F(0)) = 0$, and $F^{-1}_2$ is continuous in $[F(\theta),F(0)]$. For any $y \in \left( F(\theta), F(0)\right)$, $\left( F^{-1}_2\right)'(y) = \dfrac{1}{F'\left( F^{-1}_2(y)\right)} = \dfrac{1}{f\left( F^{-1}_2(y)\right)} < 0$, so $F^{-1}_2$ is a decreasing function in $\left( F(\theta), F(0)\right)$. From \cref{eqn:eq1} and \cref{eqn:eq2}, we have $L = \displaystyle \int_{p(0)}^{p(L)} \dfrac{ds}{\sqrt{2G(p(L)) - 2F(s)}}$. Denote
\begin{equation}
\mathcal{F}_2(q) := \displaystyle \int_{F^{-1}_2(G(q))}^{q} \dfrac{ds}{\sqrt{2G(q) - 2F(s)}}.
\label{eqn:F2}
\end{equation}
Hence, a (SI) solution of system \cref{eqn:pb2} has $p(0) = F^{-1}_2(G(p(L)))$, and $p(L)$ satisfies
\begin{equation}
L = \mathcal{F}_2(p(L)),
\label{eqn:sys2}
\end{equation}
and in this case, one needs to find $p(L)$ in $[0,p^\text{ext}]$.
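For concreteness, Step 1 can be carried out numerically. The sketch below uses a hypothetical cubic bistable nonlinearity $f(p) = p(1-p)(p-\theta)$ with a closed-form antiderivative (not the application term of the final section), together with assumed values of $\theta$, $D$, $p^\text{ext}$ and $q = p(L)$; it inverts $F$ on $(0,\theta)$ by bisection and evaluates the time map $\mathcal{F}_2$ after the substitution $s = p(0) + u^2$, which removes the inverse-square-root singularity of the integrand at $s = p(0)$:

```python
import math

# Minimal numerical sketch of Step 1, for a HYPOTHETICAL cubic bistable
# nonlinearity f(p) = p(1-p)(p-theta) (not the paper's reaction term) and
# assumed values of theta, D, p_ext.
theta, D, p_ext = 0.3, 0.05, 0.25

def f(p):
    return p * (1.0 - p) * (p - theta)

def F(p):
    # closed-form antiderivative of f with F(0) = 0
    return -p**4 / 4 + (1 + theta) * p**3 / 3 - theta * p**2 / 2

def G(q):
    return F(q) + 0.5 * D**2 * (q - p_ext)**2

def F2_inv(y):
    # invert F on (0, theta), where F is strictly decreasing (bisection);
    # returning `hi` keeps F(result) <= y, so the integrand below stays real
    lo, hi = 0.0, theta
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if F(mid) < y:
            hi = mid
        else:
            lo = mid
    return hi

def F2(q, n=4000):
    # time map F_2(q); the substitution s = p0 + u^2 removes the
    # inverse-square-root singularity of the integrand at s = p0
    p0 = F2_inv(G(q))
    h = math.sqrt(q - p0) / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h  # midpoint rule in the variable u
        total += 2.0 * u * h / math.sqrt(2.0 * G(q) - 2.0 * F(p0 + u * u))
    return total

p0_example = F2_inv(G(0.2))  # p(0) matched to p(L) = 0.2
L_example = F2(0.2)          # half-length L for which p(L) = 0.2
```

With these assumed values, `L_example` is the half-length $L$ for which the (SI) solution attains $p(L) = 0.2$, and `p0_example` is the corresponding minimum value $p(0)$.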
\noindent {\bf Step 2: Solving \cref{eqn:sys2} in $[0,p^\text{ext}]$}
\begin{proposition}
\label{Type2}
For any $p^\text{ext} \in (0,1)$, considering the value $\beta$ as in \cref{eqn:beta}, we have:
\begin{enumerate}[leftmargin=1\parindent]
\item [1. ] If $ 0 < p^\text{ext} \leq \theta$, then equation \cref{eqn:sys2} admits at least one solution $p$ with $p(L) \leq p^\text{ext}$ for all $L > 0, D > 0$. If $p^\text{ext} \leq \alpha_1$, this solution is unique.
\item [2. ] If $\theta < p^\text{ext} \leq \beta$, then for all $D > 0$, there exists a constant $M_2 > 0$ such that equation \cref{eqn:sys2} has at least one solution $p$ with $p(L) \leq p^\text{ext}$ if and only if $L \geq M_2$.
\item [3. ] If $\beta < p^\text{ext} < 1$, then there exists a constant $D_* > 0$ such that when $D \geq D_*$, equation \cref{eqn:sys2} has no solution. Otherwise, there exists a constant $M_3 > 0$ such that equation \cref{eqn:sys2} has at least one solution $p$ with $p(L) \leq p^\text{ext}$ if and only if $L \geq M_3 $.
\end{enumerate}
\end{proposition}
\begin{proof}
Since we assume that $F(0) < F(1)$ and $F(\theta) < F(0)$, the continuity of $F$ implies that there exists a value $\beta \in (\theta,1)$ such that $F(\beta) = F(0) = 0$.
Since $F_2^{-1}$ is only defined in $[F(\theta),F(0)]$, we need to find $p(L) \in [0,p^\text{ext}]$ such that $G(p(L)) \in [F(\theta),F(0)]$. For all $q \in (0,1)$, we have $G(q) \geq F(q) \geq F(\theta)$, thus equation \cref{eqn:sys2} has solutions if and only if $\displaystyle \min_{[0,1]} G < F(0)$. Even when $\displaystyle \min_{[0,1]} G = G(\overline{q}) = F(0)$, $\mathcal{F}_2$ is still not defined in $[0,1]$ since $\mathcal{F}_2 (\overline{q}) = +\infty$.
One has the following cases:
{\bf Case 1:} $0 < p^\text{ext} \leq \theta$:
We have $\displaystyle \min_{[0,1]} G = G(\overline{q}) \leq G(p^\text{ext}) = F(p^\text{ext}) < \max_{[0,\theta]}F = F(0)$, and $G(0) > F(0)$, so there is a value $p_* \in (0,p^\text{ext})$ such that $G(p_*) = F(0)$. Moreover, $F'(0) = 0$, so $\displaystyle \lim_{p \rightarrow p_*}\mathcal{F}_2(p) = +\infty$. Thus, the function $\mathcal{F}_2$ is only well-defined and continuous in $(p_*,p^\text{ext}]$.
When $0 < p^\text{ext} \leq \theta$, $F^{-1}_2(G(p^\text{ext})) = F^{-1}_2(F(p^\text{ext})) = p^\text{ext}$, so $\mathcal{F}_2 (p^\text{ext}) = 0$. By the intermediate value theorem, for any $L > 0$ there always exists at least one value $p(L) \in (p_*,p^\text{ext})$ such that $\mathcal{F}_2(p(L)) = L$.
When $p^\text{ext} \leq \alpha_1$, arguing analogously to the second case of \cref{Type1}, one has $\mathcal{F}_2' < 0$ on $(p_*,p^\text{ext})$, thus the solution is unique.
{\bf Case 2:} $\theta < p^\text{ext} \leq \beta$:
Since $F$ increases on $(\theta,1)$, we have $\displaystyle \min_{[0,1]} G = G(\overline{q}) < G(p^\text{ext}) = F(p^\text{ext}) \leq F(\beta) = F(0)$. Analogously to the previous case, $\mathcal{F}_2$ is well-defined and continuous in $(p_*,p^\text{ext}] $, $\displaystyle \lim_{p \rightarrow p_*}\mathcal{F}_2(p) = +\infty$, and $\mathcal{F}_2$ is strictly positive in $(p_*,p^\text{ext}]$. Therefore, there exists $p \in (p_*,p^\text{ext}]$ such that
\begin{equation}
M_2:= \mathcal{F}_2(p) = \min_{(p_*,p^\text{ext}]} \mathcal{F}_2 > 0,
\end{equation}
and system \cref{eqn:sys2} admits at least one solution if and only if $L \geq M_2$.
{\bf Case 3:} $\beta < p^\text{ext} < 1$:
Consider the function $H(q) = F(q) + \dfrac{1}{2} f(q)(p^\text{ext} - q)$ defined on the interval $[\theta,p^\text{ext}]$. For any $\theta < q < p^\text{ext}$, one can prove that $H'(q) > 0$.
Indeed, if $q \leq \alpha_2$, then $f'(q) \geq 0$, and $f(q) >0$. One has $H'(q) = \dfrac{1}{2} f(q) + \dfrac{1}{2}f'(q)(p^\text{ext} - q) > 0$. If $q > \alpha_2$, from \cref{convexity}, the function $f$ is concave in $(\alpha_2,1)$, and hence $f'(q)(p^\text{ext} - q) \geq f(p^\text{ext}) - f(q)$. Thus,
\begin{center}
$ H'(q) = \dfrac{1}{2} (p^\text{ext} - q) \left(f'(q) + \dfrac{f(q)}{p^\text{ext} - q}\right) > \dfrac{1}{2} (p^\text{ext} - q) \left(f'(q) + \dfrac{f(q) - f(p^\text{ext})}{p^\text{ext} - q}\right) \geq 0.$
\end{center}
Therefore, function $H$ increases in $(\theta,p^\text{ext})$. Moreover $H(\theta) = F(\theta) < F(0)$ and $H(p^\text{ext}) = F(p^\text{ext}) > F(\beta) = F(0)$, and so there exists a unique value $\overline{p}_* \in (\theta,p^\text{ext})$ such that $H(\overline{p}_*) = F(0)$. Take $D_* > 0$ such that $D_*^2 = \dfrac{f(\overline{p}_*)}{p^\text{ext} - \overline{p}_*}$. Then, for any $D > 0$, from \cref{G}, there is a unique value $\overline{q} \in (\theta,p^\text{ext})$ such that $G'(\overline{q}) = 0$, $G(\overline{q}) = \displaystyle \min_{[0,1]}G$, and $D^2 =
\dfrac{f(\overline{q})}{p^\text{ext} - \overline{q}}$. If $D < D_*$, then $\dfrac{f(\overline{q})}{p^\text{ext} - \overline{q}} < \dfrac{f(\overline{p}_*)}{p^\text{ext} - \overline{p}_*}$.
Let $h(q) = \dfrac{f(q)}{p^\text{ext} - q}$, then $h'(q) = \dfrac{1}{p^\text{ext} - q} \left( f'(q) + \dfrac{f(q)}{p^\text{ext} - q} \right) > 0$ for $q \in (\theta, p^\text{ext})$. So function $h$ is increasing in $(\theta, p^\text{ext})$, and we can deduce that $\overline{q} < \overline{p}_*$. Hence, $\displaystyle \min_{[0,1]}G = G(\overline{q}) = F(\overline{q}) + \dfrac{1}{2}D^2(p^\text{ext} - \overline{q})^2 = F(\overline{q}) + \dfrac{1}{2}f(\overline{q})(p^\text{ext}- \overline{q}) = H(\overline{q}) < H(\overline{p}_*) = F(0)$.
Moreover, $G(p^\text{ext}) = F(p^\text{ext}) > F(\beta ) = F(0)$, $G(0) > F(0)$. Thus, there exists a maximal interval $(q_*,q^*) \subset [0,p^\text{ext}]$ such that $G(q) \in (F(\theta),F(0))$ for all $q \in (q_*,q^*)$. We have $0 < q_* < \overline{q} < q^* < p^\text{ext}$ and $G(q_*) = G(q^*) = F(0)$. Therefore, $\mathcal{F}_2$ is well-defined and continuous in $(q_*,q^*)$, and $ \displaystyle \lim_{p \rightarrow q^*}\mathcal{F}_2(p) = \lim_{p \rightarrow q_*} \mathcal{F}_2(p) = +\infty$. Reasoning like in the previous case, \cref{eqn:sys2} admits solution if and only if $L \geq M_3$, where
\begin{equation}
M_3 := \displaystyle \min_{[q_*,q^*]} \mathcal{F}_2 > 0,
\end{equation}
On the other hand, if $D \geq D_*$, $\displaystyle \min_{[0,1]} G \geq F(0)$, and equation \cref{eqn:sys2} has no solution.
\end{proof}
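The threshold mechanism of Case 3 can be checked numerically. The following sketch again uses a hypothetical cubic bistable nonlinearity and an assumed $p^\text{ext} > \beta$ (not the paper's application term): it locates $\overline{p}_*$ by bisection on the increasing function $H$, computes $D_*$ from $D_*^2 = f(\overline{p}_*)/(p^\text{ext} - \overline{p}_*)$, and compares $\displaystyle\min_{[0,1]} G$ with $F(0) = 0$ on either side of $D_*$:

```python
# Numerical check of the Case 3 threshold D_*, for a HYPOTHETICAL cubic
# bistable nonlinearity f(p) = p(1-p)(p-theta) and an assumed p_ext > beta
# (this is not the application's reaction term).
theta, p_ext = 0.3, 0.6

def f(p):
    return p * (1 - p) * (p - theta)

def F(p):
    # closed-form antiderivative of f with F(0) = 0
    return -p**4 / 4 + (1 + theta) * p**3 / 3 - theta * p**2 / 2

def H(q):
    # H(q) = F(q) + f(q)(p_ext - q)/2 is increasing on (theta, p_ext)
    return F(q) + 0.5 * f(q) * (p_ext - q)

# p_bar solves H(p_bar) = F(0) = 0; bisection is valid since H increases
lo, hi = theta, p_ext
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if H(mid) < 0.0:
        lo = mid
    else:
        hi = mid
p_bar = 0.5 * (lo + hi)
D_star = (f(p_bar) / (p_ext - p_bar)) ** 0.5

def min_G(D, n=20000):
    # grid minimum of G(q) = F(q) + D^2 (q - p_ext)^2 / 2 over [0, 1]
    return min(F(i / n) + 0.5 * D**2 * (i / n - p_ext) ** 2
               for i in range(n + 1))
```

One checks that `min_G(0.9 * D_star) < 0 < min_G(1.1 * D_star)`, in agreement with the criterion that $\mathcal{F}_2$ is defined if and only if $\displaystyle\min_{[0,1]} G < F(0)$.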
\begin{proof}[Proof of \cref{thm:exist}]
As we showed in \cref{sec:SD}, the (SD) steady-state solution $p$ of \cref{eqn:pb2} has $p(L)$ satisfying equation \cref{eqn:sys1}. From \cref{Type1}, we can deduce that for fixed $p^\text{ext} \in (0,1), D > 0$, $M_d(p^\text{ext}, D) = \displaystyle \min_q \mathcal{F}_1(q)$. Thus, we obtain the results for (SD) steady-state solutions of \cref{eqn:pb2} in \cref{thm:exist}.
Similarly, \cref{Type2} provides that for fixed $p^\text{ext} \in (0,1), D > 0$, we have $M_i(p^\text{ext}, D) = \displaystyle \min_q \mathcal{F}_2(q)$ when $p^\text{ext} \leq \beta$ or $D < D_*$. Otherwise, $M_i(p^\text{ext}, D) = +\infty$.
\end{proof}
\subsubsection{Existence of non-(SM) solutions} As we can see in the phase portrait in \cref{fig:phaseportrait}, there exist some solutions of \cref{eqn:pb2} which are neither (SD) nor (SI). These solutions can be non-symmetric or can have more than one (local) extremum. By studying these cases, we prove \cref{thm:nonSM} as follows.
\begin{proof}[Proof of \cref{thm:nonSM}]
We can see from \cref{fig:phase} that for fixed $p^\text{ext} \leq \beta, D > 0$, the non-(SM) solutions $p$ of \cref{eqn:pb2} have more than one (local) extreme value because their orbits have at least two intersections with the axis $p' = 0$ (see e.g. $T_3$). Those solutions have the same local minimum values, denoted $p_\text{min}$, and the same local maximum values, denoted $p_\text{max}$. Moreover, we have $p_\text{min} < \theta < p_\text{max}$, and $F(p_\text{min}) = F(p_\text{max})$.
Since the orbits are traversed over a distance $2L$, the more extreme values a solution has, the larger $L$ is. Hence, to find the minimal value $M_*$, we study the case when $p$ has one local minimum and one local maximum, with orbit $T_3$ in \cref{fig:phase}. Then we have
\begin{center}
$G(p(-L)) = G(p(L)) = F(p_\text{min}) = F(p_\text{max})$,
\end{center}
and by using \cref{eqn:time}, we obtain
\begin{center}
$2L = \mathcal{F}_1(p(-L)) + \displaystyle \int_{p_\text{min}}^{p_\text{max}} \dfrac{ds}{\sqrt{2F(p_\text{min})- 2F(s)}} + \mathcal{F}_2(p(L))$.
\end{center}
Using the same idea as above, we can show that $L$ depends continuously on $p(-L)$.
Moreover, $\displaystyle \int_{p_\text{min}}^{p_\text{max}} \dfrac{ds}{\sqrt{2F(p_\text{min})- 2F(s)}} > \mathcal{F}_1(p(-L)) + \mathcal{F}_2(p(L))$, and $M_d = \min \mathcal{F}_1,$ $M_i = \min \mathcal{F}_2$, therefore there exists a constant $M_* > M_d + M_i$ such that \cref{eqn:pb2} admits at least one non-(SM) solution $p$ if and only if $L \geq M_*$.
On the other hand, for fixed $p^\text{ext} > \beta, D < D_*$, it is possible that \cref{eqn:pb2} admits a non-symmetric solution with only one minimum. The orbit of this solution is $T_4$ in \cref{fig:phase2}. In this case, we have $G(p(L)) = G(p(-L)) = F(p_\text{min})$ with $p(-L) < p(L)$ and
\begin{center}
$2L = \mathcal{F}_2(p(-L)) + \mathcal{F}_2(p(L)) > 2M_i$.
\end{center}
Hence, in this case we only need $M_* > M_i$.
\end{proof}
\subsection{Stability analysis}
We first study the principal eigenvalue and eigenfunction for the linear problem. Then by using these eigenelements, we construct the super- and sub-solution of \cref{eqn:pb1} and prove the stability and instability corresponding to each case in \cref{thm:stability}.
\begin{proof}[Proof of \cref{thm:stability}]
Consider the corresponding linear eigenvalue problem:
\begin{equation}
\begin{cases}
\begin{array}{r l}
-\phi'' & = \lambda \phi \qquad \text{ in } (-L,L), \\
\phi'(L) & = -D\phi(L) ,\\
\phi'(-L)& = D\phi(-L),
\end{array}
\end{cases}
\label{eqn:EVP}
\end{equation}
where $\lambda$ is an eigenvalue with associated eigenfunction $\phi$. We can see that $\phi = \cos{\left(\sqrt{\lambda}x\right)}$ is an eigenfunction if and only if $\sqrt{\lambda}\tan{\left(L\sqrt{\lambda}\right)} = D$. Denote by $\lambda_1$ the smallest positive value of $\lambda$ satisfying this equality; then $L\sqrt{\lambda_1} \in \left( 0, \dfrac{\pi}{2}\right)$, and hence $\lambda_1 \in \left(0, \dfrac{\pi^2}{4L^2} \right)$. Moreover, for any $x \in (-L,L)$, the corresponding eigenfunction $\phi_1(x) = \cos{\left( \sqrt{\lambda_1}x \right)}$ takes values in $(0,1]$.
{\bf Proof of stability:}
Now let $p$ be a steady-state solution of \cref{eqn:pb1} governed by \cref{eqn:pb2}. First, we prove that if $f'(p(x)) < \lambda_1$ for any $x \in (-L,L)$ then $p$ is asymptotically stable. Indeed, since $f'(p(x)) < \lambda_1$, there exist positive constants $\delta, \gamma$ with $\gamma < \lambda_1$ such that for any $\eta \in [0,\delta]$,
\begin{equation}
f(p+\eta) - f(p) \leq (\lambda_1 - \gamma) \eta, \qquad f(p) - f(p-\eta) \leq (\lambda_1 - \gamma) \eta,
\label{eqn:lambda1}
\end{equation}
on $(-L,L)$. Now consider
\begin{displaymath}
\overline{p}(t,x) = p(x) + \delta e^{-\gamma t} \phi_1(x), \qquad \underline{p}(t,x) = p(x) - \delta e^{-\gamma t} \phi_1(x).
\end{displaymath}
Assume that $p^\text{init}(x) \leq p(x) + \delta \phi_1(x)$. Then by \cref{eqn:lambda1}, we have that $\overline{p}$ is a super-solution of \cref{eqn:pb1} because
\begin{displaymath}
\partial_t\overline{p} - \partial_{xx}\overline{p} = (\lambda_1 - \gamma) \delta e^{-\gamma t} \phi_1(x) + f(p) \geq f(p + \delta e^{-\gamma t} \phi_1(x)) = f(\overline{p}),
\end{displaymath}
due to the fact that $ 0 < \delta e^{-\gamma t} \phi_1(x) < \delta $ for any $t > 0$, $x \in (-L,L)$. Moreover, at the boundary points one has $\frac{\partial \overline{p}}{\partial \nu} + D(\overline{p} - p^\text{ext}) = \frac{\partial p}{\partial \nu} + D(p - p^\text{ext}) = 0.$
Similarly, if $p^\text{init}(x) \geq p(x) - \delta \phi_1(x)$, then $\underline{p}$ is a sub-solution of \cref{eqn:pb1}. Then, by the method of super- and sub-solutions (see e.g. \cite{PAO}), the solution $p^0$ of \cref{eqn:pb1} satisfies $\underline{p} \leq p^0 \leq \overline{p}$. Hence, $ |p^0(t,x) - p(x)| \leq \delta e^{-\gamma t}\phi_1(x)$. Therefore, we can conclude that, whenever $|p^\text{init}(x) - p(x)| \leq \delta \phi_1(x)$ for any $x \in (-L,L)$, the solution $p^0$ of \cref{eqn:pb1} converges to the steady state $p$ as $t \rightarrow +\infty$. This shows the stability of $p$.
{\bf Proof of instability: } In the case when $f'(p(x)) > \lambda_1$, there exist positive constants $\delta, \gamma$, with $\gamma < \lambda_1$, such that for any $\eta \in [0,\delta]$,
\begin{equation}
f(p+\eta) - f(p) \geq (\lambda_1 + \gamma) \eta,
\label{eqn:lambda2}
\end{equation}
on $(-L,L)$.
For any $p^\text{init} > p$, there exists a positive constant $\sigma < 1$ such that $p^\text{init} \geq p + \delta (1- \sigma)$. Then $\widetilde{p}(t,x) = p(x) + \delta (1 - \sigma e^{-\gamma' t}) \phi_1(x)$, with $\gamma' < \gamma$ small enough, is a sub-solution of \cref{eqn:pb1}. Indeed, by applying \cref{eqn:lambda2} with $\eta = \delta (1 - \sigma e^{-\gamma' t}) \phi_1(x) \in [0,\delta]$ for any $x \in (-L,L)$, we have
\begin{displaymath}
\partial_t \widetilde{p} - \partial_{xx} \widetilde{p} = \gamma' \delta \sigma e^{-\gamma' t} \phi_1(x) + \lambda_1 \delta (1 - \sigma e^{-\gamma' t}) \phi_1(x) + f(p) \leq f(p + \delta (1 - \sigma e^{-\gamma' t}) \phi_1(x))
\end{displaymath}
if $\gamma \geq \dfrac{\gamma'\sigma e^{-\gamma' t}}{1 - \sigma e^{-\gamma' t}} = \dfrac{\gamma' \sigma}{e^{\gamma' t} - \sigma} $ for any $t \geq 0$. This inequality holds when we choose $\gamma' \leq \dfrac{\gamma (1-\sigma)}{\sigma}$. Now, we have that $\widetilde{p}$ is a sub-solution of \cref{eqn:pb1}, thus for any $t \geq 0, x \in (-L,L)$, the corresponding solution $p^0$ satisfies
\begin{displaymath}
p^0(t,x) - p(x) \geq \tilde p(t,x)-p(x) \geq \delta (1 - \sigma e^{-\gamma' t}) \phi_1(x).
\end{displaymath}
Hence, for a given positive $\epsilon < \delta \displaystyle \min_{x} \phi_1(x)$, when $t \rightarrow +\infty$, solution $p^0$ cannot remain in the $\epsilon$-neighborhood of $p$ even if $p^\text{init} - p$ is small. This implies the instability of $p$.
\end{proof}
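The principal eigenvalue $\lambda_1$ used in the proof above can be computed numerically. A minimal sketch: setting $\mu = L\sqrt{\lambda}$, the characteristic equation $\sqrt{\lambda}\tan(L\sqrt{\lambda}) = D$ becomes $\mu\tan\mu = DL$, whose left-hand side increases from $0$ to $+\infty$ on $(0,\pi/2)$, so bisection finds the unique root; the values $L = 8.96$, $D = 0.05$ are the illustrative ones used in the simulation section below:

```python
import math

# Minimal sketch: lambda_1 is the smallest positive root of
# sqrt(l) * tan(L * sqrt(l)) = D. With mu = L * sqrt(l), the function
# mu * tan(mu) increases from 0 to +infinity on (0, pi/2), so bisection
# on mu finds the unique root in that interval.
def principal_eigenvalue(L, D):
    lo, hi = 0.0, math.pi / 2 - 1e-13
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        if mu * math.tan(mu) < D * L:
            lo = mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    return (mu / L) ** 2

# illustrative values, matching those used in the simulation section
lam1 = principal_eigenvalue(L=8.96, D=0.05)
```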
Now, we present the proof of \cref{result}.
\begin{proof}[Proof of \cref{result}]
For $p^\text{ext} \leq \alpha_1 < \theta, D > 0$, from \cref{thm:exist}, the (SI) steady-state solution $p$ exists for any $L > 0$ and is unique, with $p(x) \leq p^\text{ext} \leq \alpha_1$ for all $x \in (-L,L)$. Moreover, from \cref{convexity}, the reaction term $f$ satisfies $f'(q) < 0$ for any $q \in (0,\alpha_1)$. Then, for any $x \in (-L,L)$, $f'(p(x)) \leq 0 < \lambda_1$. Hence, $p$ is asymptotically stable.
Besides, from Theorems \ref{thm:exist} and \ref{thm:nonSM}, for any $L > 0$ such that $L < M_d(p^\text{ext}, D) < M_*(p^\text{ext}, D)$, \cref{eqn:pb1} has neither (SD) nor non-(SM) steady-state solutions. So the (SI) steady-state solution is the unique steady-state solution.
Using a similar argument for the case $p^\text{ext} \geq \alpha_2$, we obtain the result in \cref{result}.
\end{proof}
\section{Application to the control of dengue vectors by introduction of the bacterium {\it Wolbachia}}
\label{sec:bio}
\subsection{Model}
In this section, we show an application of our model to the control of mosquitoes using {\it Wolbachia}. Mosquitoes of genus {\it Aedes} are the vector of many dangerous arboviruses, such as dengue, Zika and chikungunya. There exists neither effective treatment nor vaccine for these vector-borne diseases, and in such conditions, the main way to control them is to control the vector population. A biological control method using a bacterium called {\it Wolbachia} (see \cite{HOF}) was discovered and developed with this purpose. Besides reducing the ability of mosquitoes to transmit viruses, {\it Wolbachia} also causes an important phenomenon called {\it cytoplasmic incompatibility} (CI) in mosquitoes. More precisely, if a wild female mosquito is fertilized by a male carrying {\it Wolbachia}, almost none of its eggs hatch. For more details about CI, we refer to \cite{WER}. In the case of {\it Aedes} mosquitoes, {\it Wolbachia} reduces lifespan, changes fecundity, and blocks the development of the virus. However, it does not influence the way mosquitoes move.
In \cite{STR16}, model \cref{eqn:redu1}, \cref{eqn:redu2} was considered with $n_1 = n_i$ the density of the mosquitoes which are infected by {\it Wolbachia} and $n_2 = n_u$ the density of wild uninfected mosquitoes. Consider the following positive parameters:
$\bullet$ $d_u, \delta d_u$: death rates of uninfected and infected mosquitoes, respectively, with $\delta > 1$ since {\it Wolbachia} reduces the lifespan of the mosquitoes;
$\bullet$ $b_u, (1-s_f)b_u$: birth rates of uninfected and infected mosquitoes, respectively, where $s_f \in [0,1)$ characterizes the fecundity decrease;
$\bullet$ $s_h \in (0,1] $: the fraction of uninfected females' eggs fertilized by infected males that do not hatch, due to the cytoplasmic incompatibility (CI);
$\bullet$ $K$: carrying capacity, $A$: diffusion coefficient.
Parameters $\delta, s_f, s_h$ have been estimated in several cases and can be found in the literature (see \cite{BAR} and references therein). We always assume that $s_f < s_h$ (in practice, $s_f$ is close to 0 while $s_h$ is close to 1).
Several models have been proposed using these parameters. In the present study, a system of Lotka-Volterra type is considered, where the parameter $\epsilon > 0$ characterizes the high fertility, as follows.
\begin{equation}
\begin{cases}
\partial_t n_i^\epsilon - A\partial_{xx}n_i^\epsilon = (1-s_f)\dfrac{b_u}{\epsilon}n_i^\epsilon \left(1 - \dfrac{n_i^\epsilon + n_u^\epsilon}{K}\right) - \delta d_u n_i^\epsilon, \\
\partial_t n_u^\epsilon - A\partial_{xx} n_u^\epsilon = \dfrac{b_u}{\epsilon}n_u^\epsilon \left( 1- s_h \dfrac{n_i^\epsilon}{n_i^\epsilon + n_u^\epsilon} \right) \left(1 - \dfrac{n_i^\epsilon + n_u^\epsilon}{K}\right) - d_u n_u^\epsilon,
\end{cases}
\label{eqn:model}
\end{equation}
where the reaction term describes birth and death. The factor $\left( 1- s_h \dfrac{n_i^\epsilon}{n_i^\epsilon + n_u^\epsilon} \right)$ characterizes the cytoplasmic incompatibility. Indeed, when $s_h = 1$, no egg of an uninfected female fertilized by an infected male can hatch, that is, the cytoplasmic incompatibility is complete. The factor then becomes $\dfrac{n_u^\epsilon}{n_i^\epsilon + n_u^\epsilon}$, which means that the birth rate of uninfected mosquitoes depends on the proportion of uninfected parents, because only an uninfected couple can lay uninfected eggs. By contrast, $s_h = 0$ means that all the eggs of uninfected females hatch. In this case, the factor $\left( 1- s_h \dfrac{n_i^\epsilon}{n_i^\epsilon + n_u^\epsilon} \right)$ equals $1$, so the growth rate of the uninfected population is not altered by the pressure of the infected one.
In \cite{STR16}, the same model was studied on the whole space $\mathbb{R}$. In that case, system \cref{eqn:model} has exactly two stable equilibria, namely the {\it Wolbachia} invasion steady state and the {\it Wolbachia} extinction steady state. In that paper, the authors show that when $\epsilon \rightarrow 0$ and the reaction terms satisfy some appropriate conditions, the proportion $p^\epsilon = \dfrac{n_i^\epsilon}{n_i^\epsilon + n_u^\epsilon}$ converges to the solution $p^0$ of the scalar equation $\partial_t p^0 - A\partial_{xx} p^0 = f(p^0)$, with the reaction term
\begin{equation}
f(p) = \delta d_u s_h \dfrac{p(1-p)(p-\theta)}{s_h p^2 - (s_f + s_h)p + 1},
\label{eqn:reac}
\end{equation}
with $\theta = \dfrac{s_f + \delta -1}{\delta s_h}$. We will always assume that $s_f + \delta (1 - s_h) < 1$, so $\theta \in (0,1)$, and $f$ is a bistable function on $(0,1)$. The two stable steady states $1$ and $0$ of \cref{eqn:pb1} correspond to the success or failure of the biological control using {\it Wolbachia}.
\subsection{Mosquito population in presence of migration}
In this study, the migration of mosquitoes is taken into account. Typically, the inflow of wild uninfected mosquitoes and the outflow of the infected ones may influence the efficiency of the method using {\it Wolbachia}. Here, to model this effect, system \cref{eqn:model} is considered in a bounded domain with appropriate boundary conditions to characterize the migration of mosquitoes. In one-dimensional space, we consider $\Omega = (-L,L)$ and Robin boundary conditions as in \cref{eqn:redu2}
\begin{equation}
\begin{cases}
\frac{\partial n_i^\epsilon}{\partial \nu} = -D(n_i^\epsilon - n_i^{\text{ext},\epsilon}) \qquad \text{ at } x = \pm L,\\
\frac{\partial n_u^\epsilon}{\partial \nu} = -D(n_u^\epsilon - n_u^{\text{ext},\epsilon}) \qquad \text{ at } x = \pm L,
\label{eqn:bord}
\end{cases}
\end{equation}
where $n_i^{\text{ext},\epsilon}, n_u^{\text{ext},\epsilon}$ do not depend on $t$ and $x$ but depend on the parameter $\epsilon > 0$. Denote $p^\epsilon = \dfrac{n_i^\epsilon}{n_i^\epsilon + n_u^\epsilon}, n^\epsilon = \dfrac{1}{\epsilon} \left( 1 - \dfrac{n_i^\epsilon + n_u^\epsilon}{K} \right)$. In \cref{sec:converge}, we prove that when $\epsilon \rightarrow 0$, up to extraction of sub-sequences, $n^\epsilon$ converges weakly to $n^0 = h(p^0) $ for some explicit function $h$, and $p^\epsilon $ converges strongly to the solution $p^0$ of \cref{eqn:pb1}, where $p^\text{ext}$ is the limit of $\dfrac{n_i^{\text{ext},\epsilon}}{n_i^{\text{ext},\epsilon} + n_u^{\text{ext},\epsilon}}$ when $\epsilon \rightarrow 0$, and the reaction term $f$ is as in \cref{eqn:reac}. The function $f$ satisfies \cref{reaction} and \cref{convexity}, so the results in \cref{thm:exist} and \ref{thm:stability} can be applied to this problem. By changing the spatial scale, we can normalize the diffusion coefficient to $A = 1$.
In this application, the parameters $L, D, p^\text{ext}$ correspond to the size of $\Omega$, the migration rate of mosquitoes, and the proportion of infected mosquitoes outside the domain, respectively. The main results in the present paper give information about existence and stability of equilibria depending upon different conditions on these parameters. In particular, from \cref{result}, we obtain that when the size $L$ of the domain is small, there exists a unique equilibrium for this problem and its values depend on the proportion of mosquitoes carrying \textit{Wolbachia} outside the domain ($p^\text{ext}$). More precisely, when $p^\text{ext}$ is small (i.e., $p^\text{ext} \leq \alpha_1$), the solution of \cref{eqn:pb1} converges to the steady-state solution close to $0$, which corresponds to the extinction of mosquitoes carrying {\it Wolbachia}. Therefore, in this situation, the replacement strategy fails because of the migration through the boundary. Otherwise, when the proportion outside the domain is high (i.e., $p^\text{ext} \geq \alpha_2$), the long-time behavior of solutions of \cref{eqn:pb1} has values close to $1$, which means that the mosquitoes carrying \textit{Wolbachia} can invade the whole population.
\subsection{Numerical illustration}
\label{sec:simulation}
In this section, we present the numerical illustration for the above results. Parameters are fixed according to biologically relevant data (adapted from \cite{FOC}). Time unit is the day, and parameters per day are in \cref{tab:parameter}.
\begin{table}
\caption{Parameters for the numerical illustration}
\label{tab:parameter}
\begin{center}
\begin{tabular}{| c | c c c c c c |}
\hline
Parameters & $b_u$ & $d_u$ & $\delta$ & $\sigma$ & $s_f$ & $s_h$ \\ [0.5ex]
\hline
Values & 1.12 & 0.27 & $\frac{10}{9}$ & 1 & 0.1 & 0.8 \\
\hline
\end{tabular}
\end{center}
\end{table}
Then, the reaction term $f$ in \cref{eqn:reac} has $\theta = 0.2375$, $\beta \approx 0.3633, \alpha_1 \approx 0.12, \alpha_2 \approx 0.7$. As proposed in section 3 of the modeling article \cite{OTR}, we may pick the value $830\ \text{m}^2$ per day for the diffusivity of {\it Aedes} mosquitoes. Choosing $A = 1$, the $x$-axis unit in the simulation corresponds to $\sqrt{830/1} \approx 29$ m.
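The constants above can be reproduced directly from \cref{eqn:reac} and \cref{tab:parameter}. A sketch (composite trapezoid rule for $F(q) = \int_0^q f$, then bisection for $\beta$, which is valid because $F$ is increasing on $(\theta,1)$):

```python
# Sketch reproducing the constants above from (eqn:reac) and Table 1:
# composite trapezoid rule for F(q) = int_0^q f, then bisection for beta,
# which is valid because F is increasing on (theta, 1).
delta, d_u, s_f, s_h = 10 / 9, 0.27, 0.1, 0.8
theta = (s_f + delta - 1) / (delta * s_h)  # = 19/80 = 0.2375

def f(p):
    return (delta * d_u * s_h * p * (1 - p) * (p - theta)
            / (s_h * p**2 - (s_f + s_h) * p + 1))

def F(q, n=4000):
    h = q / n
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(q))

# beta in (theta, 1) satisfies F(beta) = F(0) = 0
lo, hi = theta, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if F(mid) < 0.0:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
```

This yields $\theta = 19/80 = 0.2375$ exactly and $\beta \approx 0.363$, consistent with the values quoted above.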
In the following parts, we check the convergence of $p^\epsilon$ when $\epsilon \rightarrow 0$ in \cref{convergence_num}. In \cref{steady_state_num}, corresponding to different parameters, we compute numerically the solutions of \cref{eqn:pb1} and \cref{eqn:pb2} to check their existence and stability.
\subsubsection{Convergence to the scalar equation}
\label{convergence_num}
Consider a mosquito population with large fecundity rate, that is, $ \epsilon \ll 1$. Model \cref{eqn:model} with boundary condition in \cref{eqn:bord} takes into account the migration of mosquitoes.
\begin{figure}
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{e01.png}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{$\epsilon = 0.1$}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{e005.png}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{$\epsilon = 0.05$}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{e001.png}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{$\epsilon = 0.01$}
\end{subfigure}
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{0pt}
\caption{$p^0$ and $p^\epsilon$ at time $t = 50$ (days)}
\label{fig:epsilon}
\end{figure}
Fix $D = 0.05, p^\text{ext} = 0.1$ and $ L = 2$. The system \cref{eqn:model}, \cref{eqn:bord} is solved numerically thanks to a semi-implicit finite difference scheme for three different values of the parameter $\epsilon$. The initial data are chosen such that $n_i^\epsilon(t = 0) = n_u^\epsilon(t = 0)$, that is, $p^\text{init} = 0.5$. In \cref{fig:epsilon}, at time $t = 50$ days, the numerical solutions of \cref{eqn:pb1} are plotted with blue solid lines, and the proportions $p^\epsilon = \dfrac{n_i^\epsilon}{n_i^\epsilon+n_u^\epsilon}$ are plotted with dashed lines. We observe that when $\epsilon$ goes to 0, the proportion $p^\epsilon$ converges to the solution $p^0$ of system \cref{eqn:pb1}.
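For reference, here is a minimal sketch of such a semi-implicit scheme, written for the limiting scalar equation \cref{eqn:pb1} rather than for the full system \cref{eqn:model}: backward Euler for the diffusion, explicit treatment of the reaction, a first-order one-sided discretization of the Robin condition, and a tridiagonal (Thomas) solve. The grid size, time step and helper names are illustrative choices, not the authors' implementation:

```python
# Minimal sketch of a semi-implicit scheme, written for the LIMITING scalar
# equation (eqn:pb1) rather than the full system (eqn:model): backward Euler
# for diffusion, explicit reaction, first-order one-sided Robin conditions,
# and a Thomas (tridiagonal) solve. Grid and step sizes are illustrative.
delta, d_u, s_f, s_h = 10 / 9, 0.27, 0.1, 0.8
theta = (s_f + delta - 1) / (delta * s_h)

def f(p):
    return (delta * d_u * s_h * p * (1 - p) * (p - theta)
            / (s_h * p**2 - (s_f + s_h) * p + 1))

def solve(L=2.0, D=0.05, p_ext=0.1, p_init=0.5, T=50.0, J=100, dt=0.05):
    dx = 2 * L / J
    r = dt / dx**2
    p = [p_init] * (J + 1)
    for _ in range(int(T / dt)):
        # tridiagonal system: a = sub-, b = main, c = super-diagonal
        a = [0.0] + [-r] * (J - 1) + [-1.0]
        b = [1 + D * dx] + [1 + 2 * r] * (J - 1) + [1 + D * dx]
        c = [-1.0] + [-r] * (J - 1) + [0.0]
        d = ([D * dx * p_ext]
             + [p[j] + dt * f(p[j]) for j in range(1, J)]
             + [D * dx * p_ext])
        # Thomas algorithm: forward elimination, then back substitution
        for j in range(1, J + 1):
            w = a[j] / b[j - 1]
            b[j] -= w * c[j - 1]
            d[j] -= w * d[j - 1]
        p[J] = d[J] / b[J]
        for j in range(J - 1, -1, -1):
            p[j] = (d[j] - c[j] * p[j + 1]) / b[j]
    return p

p_final = solve()  # profile at t = 50 with the parameters of this subsection
```

The boundary rows encode $p_x(\mp L) = \pm D(p - p^\text{ext})$ to first order, so the computed profile remains symmetric and stays between $0$ and $1$, as the maximum principle predicts.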
\subsubsection{Steady-state solutions}
\label{steady_state_num}
For the different values of $p^\text{ext}$, the values of the integrals $\mathcal{F}_{1}$ and $\mathcal{F}_2$ as functions of $p(L)$ in \cref{eqn:F1} and \cref{eqn:F2} are plotted in \cref{fig:plotF}. For fixed values of $D$ and $p^\text{ext}$, \cref{fig:plotF} can play the role of bifurcation diagrams that show the relation between the value $p(L)$ of symmetric solutions $p$ and parameter $L$. Then, we can obtain the critical values of parameter $L$. Next, we compute numerically the (SM) steady-state solutions of \cref{eqn:pb1} with different values of $L > 0, D > 0, p^\text{ext} \in (0,1)$.
\begin{figure}
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width = \textwidth]{pext01.png}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{$p^\text{ext} = 0.1, D = 0.05$}
\label{fig:pext01}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width = \textwidth]{pext08.png}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{$p^\text{ext} = 0.8, D = 0.05$}
\label{fig:pext08}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width = \textwidth]{pext082.png}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{$p^\text{ext} = 0.8, D = 0.5$}
\label{fig:pext082}
\end{subfigure}
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Graphs of $\mathcal{F}_1$ and $\mathcal{F}_2$ with respect to $p(L)$.}
\label{fig:plotF}
\end{figure}
\noindent \textbf{Numerical method: } We use Newton's method to solve the equations $L = \mathcal{F}_{1,2}(p(L))$ and obtain the values of $p(L)$; we can then deduce the value of $p(0)$ by \cref{eqn:eq1}. Again by Newton's method, we obtain $p(x)$ for any $x$ by solving $x = \displaystyle \int_{p(0)}^{p(x)} \dfrac{(-1)^k ds}{\sqrt{2F(p(0)) - 2 F(s)}}$. We also construct numerically a non-(SM) steady-state solution by the same technique, but it is more sophisticated and the details of the construction are omitted in this article for the sake of readability.
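A dependency-free sketch of the profile reconstruction $x \mapsto p(x)$, for a hypothetical cubic bistable nonlinearity and an assumed minimum value $p(0) = 0.1$; for simplicity we use bisection instead of the Newton iteration described above (only continuity and monotonicity of the map are needed), and the substitution $s = p(0) + u^2$ handles the singular integrand:

```python
import math

# Sketch of the profile reconstruction x -> p(x), for a HYPOTHETICAL cubic
# bistable f(p) = p(1-p)(p-theta) (closed-form F) and an assumed minimum
# value p0 = p(0) = 0.1; bisection replaces the Newton iteration above.
theta, p0 = 0.3, 0.1

def F(p):
    return -p**4 / 4 + (1 + theta) * p**3 / 3 - theta * p**2 / 2

def g(p, n=2000):
    # g(p) = int_{p0}^{p} ds / sqrt(2F(p0) - 2F(s)); the substitution
    # s = p0 + u^2 removes the inverse-square-root singularity at s = p0
    h = math.sqrt(p - p0) / n
    return sum(
        2 * ((i + 0.5) * h) * h
        / math.sqrt(2 * F(p0) - 2 * F(p0 + ((i + 0.5) * h) ** 2))
        for i in range(n))

def profile(x, hi=0.449):
    # solve g(p) = x by bisection; g is increasing in p, and 0.449 is just
    # below the conjugate point where F returns to the value F(p0)
    lo = p0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Calling `profile(x)` for a grid of $x$ values then reconstructs the increasing half of the symmetric profile.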
We also plot the time dynamics of solution $p^0(t,x)$ of \cref{eqn:pb1} at $t =10, 20, 40,60, 100$ to verify the asymptotic stability of steady-state solutions. Next, we consider different values of $p^\text{ext}$ and present our observation in each case.
\begin{figure}
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width = \textwidth]{fig04.png} \\
\includegraphics[width = \textwidth]{fig4.png}
\caption{$L = 0.5 < M_1$}
\label{fig:04}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width = \textwidth]{fig03.png} \\
\includegraphics[width = \textwidth]{fig3.png}
\caption{$L = 8.96 > M_* > M_1$}
\label{fig:03}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width = \textwidth]{fignonsym.png} \\
\includegraphics[width = \textwidth]{fignonsym0.png}
\caption{$L = 8.96 > M_* > M_1$}
\label{fig:nonsym}
\end{subfigure}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-5pt}
\caption{Steady-state and time-dependent solutions when $p^\text{ext} = 0.1, D = 0.05$}
\label{fig:equi1}
\end{figure}
\noindent $\bullet$ { \bf Case 1: $p^\text{ext} = 0.1 < \alpha_1 $}.
For $D = 0.05$ fixed, we observe in \cref{fig:pext01} that for any $L > 0$, the equation $\mathcal{F}_2(p(L)) = L$ always admits exactly one solution. Thus, there always exists one (SI) steady-state solution with small values. We obtain the approximations
\begin{displaymath}
M_d(0.1,0.05) = M_1 \approx 0.8819, \quad M_*(0.1, 0.05) \approx 8.625.
\end{displaymath}
Also from \cref{fig:pext01}, we observe that when $L = M_1$, a bifurcation occurs and \cref{eqn:pb1} admits a (SD) steady-state solution, and when $L > M_1$ one can obtain two (SD) solutions. Moreover, when $L \geq M_*$, there exist non-symmetric steady-state solutions. We do numerical simulations for two values of $L$ as follows.
For $L = 0.5 < M_1$, the unique equilibrium $\overline{p}_{21}$ is (SI) and has values close to $0$ (see \cref{fig:04}). Solution $p^0$ of \cref{eqn:pb1} with any initial data converges to $\overline{p}_{21}$. This simulation is coherent with the asymptotic stability that we proved in \cref{result}.
For $L = 8.96 > M_* > M_1$, together with $\overline{p}_{21}$, there exist two more (SD) steady-state solutions, namely $\overline{p}_{11}$, $\overline{p}_{12}$ (see \cref{fig:03}). These plots show that the steady-state solutions are ordered, and the time-dependent solutions converge to either the largest one $\overline{p}_{11}$ or the smallest one $\overline{p}_{21}$, while $\overline{p}_{12}$, with intermediate values, is not an attractor. In \cref{fig:nonsym}, we find numerically a non-symmetric solution $\overline{p}$ of \cref{eqn:pb2} corresponding to orbit $T_3$ in \cref{fig:phase}. Taking the initial value $p^\text{init} \equiv \overline{p}$, we observe from \cref{fig:nonsym} that $p^0$ still converges to the symmetric equilibrium $\overline{p}_{21}$.
\noindent Moreover, the value $\lambda_1$ of \cref{thm:stability} in this case is approximately equal to $0.0063$. We also obtain that for any $x \in (-L,L)$,
\begin{displaymath}
f'(\overline{p}_{11}(x)) < 0,\quad f'(\overline{p}_{21}(x)) < 0,\quad f'(\overline{p}_{12}(x)) > 0.0462,\quad f'(\overline{p}(x)) > 0.022.
\end{displaymath}
Therefore, by applying \cref{thm:stability}, we deduce that the steady-state solutions $\overline{p}_{11}, \overline{p}_{21}$ are asymptotically stable, $\overline{p}_{12}$ and the non-symmetric equilibrium $\overline{p}$ are unstable. Thus, the numerical simulations in \cref{fig:equi1} are coherent to the theoretical results that we proved.
\noindent $\bullet$ { \bf Case 2: $p^\text{ext} = 0.8 > \alpha_2 > \beta$}.
In this case, we obtain $D_* \approx 0.16$. We present numerical illustrations for two cases: $D = 0.05 < D_*$ and $D = 0.5 > D_*$.
$\circ$ For $D = 0.05 < D_*$, we have $M_i(0.8, 0.05) = M_2 \approx 10.3646$ (see \cref{fig:pext08}).
For $L = 2 < M_2$, the unique equilibrium $\overline{p}_{11}$ is (SD) and has values close to $1$ (see \cref{fig:01}). The time-dependent solution $p^0$ of \cref{eqn:pb1} with any initial data converges to $\overline{p}_{11}$. This simulation is consistent with the asymptotic stability we obtained in \cref{result}.
For $L = 12 > M_2$, together with $\overline{p}_{11}$, there exist two more (SI) steady-state solutions, namely $\overline{p}_{21}$, $\overline{p}_{22}$, and they are ordered (see \cref{fig:02}). In this case, we obtain $\lambda_1 \approx 0.0063$ and, for any $x \in (-L,L)$, one has
\begin{displaymath}
f'(\overline{p}_{11}(x)) < 0, \quad f'(\overline{p}_{21}(x)) \in (-0.0398, 0.0368), \quad f'(\overline{p}_{22}(x)) \in (-0.0195, 0.0673).
\end{displaymath}
By the sufficient conditions in \cref{thm:stability}, we obtain that $\overline{p}_{11}$ is asymptotically stable, but we cannot conclude the stability of $\overline{p}_{21}$ and $\overline{p}_{22}$. The time dynamics of $p^0$ in \cref{fig:02} suggests that the smallest steady-state solution $\overline{p}_{21}$ is asymptotically stable, while $\overline{p}_{22}$ seems to be unstable.
$\circ$ For $D = 0.5 > D_*$, the function $\mathcal{F}_2$ is not defined (see \cref{fig:pext082}), so problem \cref{eqn:pb2} admits only one (SD) steady-state solution, which is unique and asymptotically stable (see \cref{fig:00}).
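Time-dependent runs of this kind can be reproduced with a very small explicit scheme. The sketch below is hedged: the cubic $f$ of the paper is not reproduced in this excerpt, so a generic bistable cubic $f(p)=p(1-p)(p-\theta)$ stands in for it, the boundary exchange is taken in the dissipative form $\frac{\partial p}{\partial \nu} + D(p-p^\text{ext})=0$, and the values of $\theta$, $L$, $D$, $p^\text{ext}$ are illustrative only.

```python
import numpy as np

def f(p, theta=0.3):
    # stand-in bistable cubic; the paper's exact f is not given here
    return p * (1.0 - p) * (p - theta)

def relax(L=2.0, D=0.05, p_ext=0.8, theta=0.3, n=80, t_max=60.0):
    """Explicit Euler / central differences for p_t = p_xx + f(p) on (-L, L)
    with Robin exchange conditions, run long enough to approach steady state."""
    x = np.linspace(-L, L, n + 1)
    h = x[1] - x[0]
    dt = 0.4 * h * h                      # explicit stability constraint
    p = np.full(n + 1, p_ext)             # start from the exterior value
    for _ in range(int(t_max / dt)):
        # ghost values enforcing  -p'(-L) + D(p - p_ext) = 0  and
        #                          p'(+L) + D(p - p_ext) = 0
        gl = p[1] - 2.0 * h * D * (p[0] - p_ext)
        gr = p[-2] - 2.0 * h * D * (p[-1] - p_ext)
        pe = np.concatenate(([gl], p, [gr]))
        lap = (pe[:-2] - 2.0 * pe[1:-1] + pe[2:]) / (h * h)
        p = p + dt * (lap + f(p, theta))
    return x, p

x, p = relax()
```

With these values the solution relaxes to an equilibrium close to $1$, mimicking the (SD) branch; for this stand-in $f$, lowering $p^\text{ext}$ below the unstable zero $\theta$ drives it instead toward the branch close to $0$.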
\begin{figure}
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width = \textwidth]{fig1.png} \\
\includegraphics[width = \textwidth]{fig01.png}
\caption{$L = 2, D = 0.05 < D_*$}
\label{fig:01}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width = \textwidth]{fig2.png} \\
\includegraphics[width = \textwidth]{fig02.png}
\caption{$L = 12, D = 0.05 < D_*$}
\label{fig:02}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width = \textwidth]{fig0.png} \\
\includegraphics[width = \textwidth]{fig00.png}
\caption{$L = 12, D = 0.5 > D_*$}
\label{fig:00}
\end{subfigure}
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Steady-state and time-dependent solutions when $p^\text{ext} = 0.8$}
\label{fig:equi2}
\end{figure}
\section{Conclusion and perspectives}
We have studied the existence and stability of steady-state solutions with values in $[0,1]$ of a reaction-diffusion equation
\begin{displaymath}
\partial_t p - \partial_{xx}p = f(p)
\end{displaymath}
on an interval $(-L,L)$ with cubic nonlinearity $f$ and inhomogeneous Robin boundary conditions
\begin{displaymath}
\dfrac{\partial p}{\partial \nu} = D(p-p^\text{ext}),
\end{displaymath}
where the constant $p^\text{ext} \in (0,1)$ is an analogue of $p$ outside the domain, and $D > 0$ is a constant. We have shown how the analysis of this problem depends on the parameters $p^\text{ext}$, $D$, and $L$. More precisely, the main results state that there always exists a symmetric steady-state solution that is monotone on each half of the domain. For $p^\text{ext}$ large, the values of this steady-state solution are close to $1$; otherwise, they are close to $0$. Moreover, the larger the value of $L$, the more steady-state solutions the problem admits. We have found critical values of $L$ such that, when the parameters surpass these critical values, the number of steady-state solutions increases. We have also provided sufficient conditions for the stability and instability of the steady-state solutions.
We presented an application of our results to the control of the dengue vector using the {\it Wolbachia} bacterium, which can be transmitted maternally. Since {\it Wolbachia} can help reduce the vectorial capacity of mosquitoes, the main goal of this method is to replace wild mosquitoes by mosquitoes carrying {\it Wolbachia}. In this application, we considered $p$ as the proportion of mosquitoes carrying {\it Wolbachia} and used the equation above to model the dynamics of the mosquito population. The boundary condition describes the migration through the border of the domain. This replacement method only works when $p$ can reach an equilibrium close to $1$. Therefore, the study of the existence and stability of the steady-state solution close to $1$ is meaningful and depends strongly on the parameters $p^\text{ext}$, $D$, and $L$. In realistic situations, the proportion $p^\text{ext}$ of mosquitoes carrying {\it Wolbachia} outside the domain is usually low. Using the theoretical results proved in this article, one sees that, to maximize the chances of success, one should try to treat large regions ($L$ large), well isolated ($D$ small), and possibly apply a population replacement method in a zone outside $\Omega$ (to increase $p^\text{ext}$ by reducing its denominator).
As a natural continuation of the present work, higher-dimensional problems and more general boundary conditions can be studied. In more realistic settings, $p^\text{ext}$ can be taken to depend on space, and periodic solutions are a natural next object of study. Besides, when an equilibrium close to $1$ exists and is stable, one may consider strategies using multiple releases of mosquitoes carrying {\it Wolbachia}. Optimizing the number of mosquitoes released to guarantee the success of this method, under the difficulties highlighted in this paper, is an interesting problem for future work.
\section{Introduction}
The deep disagreement between general relativity and quantum theory
is well known \cite{Penrose}. In this work I would like to present a
model of quantum geometry intended to reach the desirable
``peaceful coexistence'' between these theories. The proposed scheme
is inherently based on the notion of relative quantum
amplitudes. Formally, I deal with a ``classical'' non-linear field
theory developed over the complex projective Hilbert space
$CP(N-1)$. It is worthwhile to emphasize that my approach
\cite{Le1,Le2,Le3,Le4,Le5} differs considerably from a number of works using
$CP(N-1)$; see for example \cite{CMP,Hughston1,Hughston2}.
The sketch of the proposed scheme is as follows:
a). I use the realization of the group $G=SU(N)$ acting on the states $|S>
\in {\cal{H}}=C^N$ in terms of local dynamical variables (LDV's)
represented by the tangent vectors to $CP(N-1)$ (the operators of
differentiation).

b). Quantum measurement is realized as a perturbation of the
generalized coherent quantum state (GCS).

c). The self-identification of a quantum system is realized by the
affine parallel transport of its local dynamical variables, which agrees
with the Fubini-Study metric.

d). A variational principle applied to the local Hamiltonian vector
field leads to quasi-linear PDE field equations for the ``field
shell'' of the GCS. This ``field shell'' represents a ``quantum
potential'' of the extended model ``particle'' corresponding to the GCS.

e). The ``yes/no'' measuring process is formulated as the detection of
this extended particle, serving for the establishment of the local
state-dependent space-time structure.
This approach leads to some conclusions concerning the so-called
measurement problem. It is convenient to refer to the encyclopedic
book of R. Penrose \cite{Penrose}.
1. Projective postulate and null-measurement.
The so-called null-measurement (see paragraph 22.7, \cite{Penrose})
is in fact not a relevant construction, since the conclusion that
``we know that the photon is in state $|\rho>$ even though it has not
interacted with the detector (in the transmission channel
$|\tau>$ -- P.L.) at all'' is based on the explicit belief that the
photon has already passed the splitter. But the photon might simply have been
absorbed even before this splitter. Therefore, strictly speaking, we
do not have reliable information without a detection in the
corresponding channel. This example shows that if one leaves a
gap between two successive quantum states, the application of the
projective postulate (if no $|\tau>$, then $|\rho>$) is meaningless.
In the framework of my model the projection acts continuously and
locally along the $CP(N-1)$ trajectory of the GCS onto the corresponding
tangent spaces, since it is the covariant differentiation of vector
fields representing local dynamical variables (LDV's) on $CP(N-1)$.
2. Deformation of the GCS during the interaction used for measurement.
Let me discuss the dynamics of Schr\"odinger's lump during measurement
(see paragraph 30.10, \cite{Penrose}). This construction is a humane
version of Schr\"odinger's cat. In contrast with such a
complicated system as a poisoned cat, and an indefinitely displaced lump of
matter, we would like to discuss the deformation of the GCS, which is
theoretically analyzable.
First of all I should note that the assumption that ``the energy in
each case is the same'' may be correct only approximately, say, in
the case of adiabatic ``kicking'' of the lump. A finite transition time
unavoidably leads to the acceleration of the lump of
matter, to the deformation of its quantum state \cite{Le6,Le7}, and
to a shift of mass-energy. Hence, the superposition state is not
stationary (it beats) and is therefore useless for a
decision about the real interaction process of the photon and the splitter
(as well as in the original ``comic example'' of Schr\"odinger's cat
demonstrating the incompleteness of the wave-function description of a
nuclear decay \cite{Schr1}).
In the framework of my model, the GCS of the lump is ``kicked'' in
the first approximation by the coset transformations of $SU(N)$
group. The coefficient functions of the $SU(N)$ generators obey some
quasi-linear relativistic field equations in the local dynamical
space-time \cite{Le1,Le2,Le3}.
3. The difference of the masses of the original and the displaced
lumps leads to different time-like Killing vectors (if any) in the
vicinities of the two lumps. This is an obstacle to writing a Schr\"odinger
equation for the superposed wave function. But who needs it? This
is rather a privilege than a defect, since one has a natural
decoherence mechanism.
In the framework of my model one has state-dependent space-times
arising as specific cross-sections of the tangent fibre bundle over
$CP(N-1)$. Linear superposition makes sense only in the dynamical
space-time of the quantum system (setup, lump) under the condition
of {\it physical integrity}. In general, the formulation of
physical integrity is a difficult problem; in my model the GCS
expresses this property. This leads to dynamics in the tangent
fibre bundle over $CP(N-1)$. All conceivable compounds of free
independent systems are trivial since they live in the tensor
product of the state spaces.
Below, some fundamental notions of my construction will be
introduced.
\section{Action states with an integer number N of $\hbar$}
The masses of the known ``elementary'' particles $m_J$ are in the
fundamental de Broglie relation to the corresponding internal
frequencies $\omega_J$:
\begin{equation}
\frac{\omega_J}{m_J}=\frac{c^2}{\hbar}.
\end{equation}
If one treats $U=c^2$ as a cosmic potential, then the natural
question arises about a micro-selective mechanism capable of producing
such a specific spectrum of frequencies. In the ordinary quantization
scheme it is assumed that the oscillator is really some fundamental
entity. But the spectrum of the oscillator is equidistant and unbounded,
whereas the mass spectrum of ``elementary'' particles is not.
Furthermore, a classical soliton-like solution cannot be
decomposed into harmonic waves; hence quantum solitons are not
compounds of quantum oscillators. I try to find a dispersion law
$\Omega(P)$ (initially in the form $\Omega(X)$) as a solution of the
non-linear field equations.
There are some additional reasons for the modification of the
``second quantization'' procedure.
{\it First.} In the second quantization method one has formally
given particles whose properties are defined by some commutation
relations between creation-annihilation operators. Note that the
commutation relations are only the simplest consequence of the
curvature of the dynamical group manifold in the vicinity of the
group's unit (in the algebra). Dynamical processes require, however,
finite group transformations and, hence, the global group structure.
The main technical idea is to use vector fields over the group manifold
instead of indefinite Dirac's q-numbers. This scheme therefore
seeks the dynamical nature of the creation and annihilation
processes of quantum particles.
{\it Second.} Quantum particles (energy bundles) should
gravitate. Hence, strictly speaking, their behavior cannot be
described as a linear superposition. Therefore the ordinary second
quantization method (creation-annihilation of free particles) is
merely a good approximate scheme due to the weakness of gravity.
Thereby the creation and annihilation of particles are time-consuming
dynamical non-linear processes. So linear operators of
creation and annihilation (in Dirac's sense) exist only as approximate
quantities.
{\it Third.} Nobody knows how a quantum of energy (a quantum
particle) arises. Definitely, there is an energy quantization, but the
dynamical nature of this process is unknown. Avoiding the vacuum
stability problem, its self-energy, etc., we primarily quantize,
however, the action, not the energy. The relative (local) vacuum of some
problem is not the state with minimal energy; it is a state with an
extremal of some action functional.
POSTULATE 1.
\noindent {\it We assume that there are elementary quantum states
(EQS) $|\hbar a>, a=0,1,...$ of an abstract Planck oscillator whose
states correspond to quantum motions with a given number of
Planck's action quanta}.
Thereby only the action is subject to primary quantization; the
quantization of dynamical variables such as energy, spin, etc., is
postponed to the dynamical stage. Presumably there are some non-linear
field equations describing the energy (frequency) distribution, whose
soliton-like solutions provide the quantization of the dynamical
variables, while their field carriers, the ``field shells'', are smeared in
dynamical space-time. Therefore quantum ``particles'', and, hence,
their numbers, should arise as countable solutions of non-linear
wave equations. In order to establish an acceptable field equation
capable of intrinsically describing all possible degrees of freedom
unfrozen under intensive interaction, we should build some {\it
universal ambient Hilbert state space} $\cal{H}$. We will use {\it
the universality of the action}, whose variation is capable of generating any
dynamical variable. Vectors of the {\it action state space} $\cal{H}$ we
will call {\it action amplitudes} (AA's). Some of them will be EQS's of
motion corresponding to integer numbers of Planck's quanta $| \hbar
a>$. Since the action in itself does not create gravity, it is
legitimate to create linear superpositions of $|\hbar a>=(a!)^{-1/2}
({\hat \eta^+})^a|0>$ constituting the $SU(\infty)$ multiplet of the
Planck's action quanta operator $\hat{S}=\hbar {\hat \eta^+} {\hat
\eta}$ with the spectrum $S_a=\hbar a$ in the separable Hilbert
space $\cal{H}$. The standard basis $\{|\hbar a>\}_0^{\infty}$ will
be used, with the `principal' quantum number $a=0,1,2,...$ assigned by
Planck's quanta counting. Generally, an AA is a coherent
superposition
\begin{eqnarray}
|G>=\sum_{a=0}^{\infty} g^a| \hbar a>,
\end{eqnarray}
which may represent the ground state, the ``vacuum'', of some quantum
system. In order to avoid a misleading reminiscence of the
Schr\"odinger state vector, I use $|G>,|S>$ instead of $|\Psi>$.
In fact only a finite number, say $N$, of EQS's may be
involved. Then one may restrict the $CP(\infty)$ quantum phase space to the finite
dimensional $CP(N-1)$. Hereafter I will use the indices as follows:
$0\leq a \leq N$, and $1\leq i,k,m,n,s \leq N-1$.
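The truncated action operator and the states $|\hbar a>$ are easy to realize numerically. The sketch below is an illustration only (the truncation level and the units $\hbar=1$ are our choices): it builds $\hat\eta^+$, $\hat\eta$ and $\hat{S}=\hbar\hat\eta^+\hat\eta$ as finite matrices and recovers $|\hbar a>=(a!)^{-1/2}(\hat\eta^+)^a|0>$.

```python
import math
import numpy as np

N = 5                                     # keep EQS |hbar a>, a = 0..N-1
hbar = 1.0                                # illustrative units

# eta_plus |a> = sqrt(a+1) |a+1>   (truncated creation operator)
eta_plus = np.diag(np.sqrt(np.arange(1.0, N)), -1)
eta = eta_plus.T                          # annihilation operator

S = hbar * eta_plus @ eta                 # action operator, diag(0, hbar, ...)

vac = np.zeros(N); vac[0] = 1.0           # the state |0>
a = 3
state = np.linalg.matrix_power(eta_plus, a) @ vac / math.sqrt(math.factorial(a))
```

Here `state` reproduces the basis vector $|\hbar a>$ for $a=3$, and the diagonal of `S` is the spectrum $S_a=\hbar a$.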
\section{Quantum analog of force and SU(N) factorization}
Since any ray of AA's has the isotropy group $H=U(1)\times U(N-1)$, only coset
transformations $G/H=SU(N)/S[U(1) \times U(N-1)]=CP(N-1)$
effectively act in $\cal{H}$. Therefore the ray representation of
$SU(N)$ in $C^N$ and, in particular, the embedding of $H$ and $G/H$
in $G$, require a state-dependent parametrization. Hence, there is a
diffeomorphism between the space of the rays marked by the local
coordinates in the map
$U_j:\{|G>,|g^j| \neq 0 \}, j>0$
\begin{eqnarray}\label{coor}
\pi^i_{(j)}=\left\{
\matrix{\frac{g^i}{g^j} &if& 1 \leq i < j \cr
\frac{g^{i+1}}{g^j} &if& j \leq i < N-1}
\right\}
\end{eqnarray}
and the group manifold of the coset transformations
$G/H=SU(N)/S[U(1) \times U(N-1)]=CP(N-1)$. This diffeomorphism is
provided by the coefficient functions $\Phi^i_{\alpha}$ of the local
generators (see below and \cite{Le6,Le7}). The choice of the map
$U_j$ means that the comparison of quantum amplitudes refers to the
amplitude with the action $\hbar j$. The breakdown of the $SU(N)$
symmetry on each AA to the isotropy group $H=U(1)\times U(N-1)$
contracts the full dynamics down to $CP(N-1)$. The physical
interpretation of these transformations is given by
POSTULATE 2.
\noindent {\it Super-equivalence principle: the unitary
transformations of the AA may be identified with physical
unitary fields. The coset transformation $G/H=SU(N)/S[U(1)\times
U(N-1)]=CP(N-1)$ is the quantum analog of a classical force: its
action is equivalent to some physically distinguishable variation of
the GCS in $CP(N-1)$}.
The $CP(N-1)$ manifold takes the place of the ``classical phase space''
\cite{ZF}, since its points, corresponding to GCS's, are the closest
to classical states of motion. Two interpretations may be
given for the points of $CP(N-1)$. One of them is the
``Schr\"odinger lump'' \cite{Penrose} and the second one is the
analog of the Stern-Gerlach ``filter orientations'' discussed by
Fivel \cite{Fivel}. The root content of their physical
interpretations is that one has {\it a macroscopic (i.e. space-time)
discriminator} of two quantum states. As such, they may be used as
``yes/no'' states of some two-level detector. We will use the
``Schr\"odinger lump'' interpretation. Let us assume that the GCS
described by the local coordinates $(\pi^1,...,\pi^{N-1})$ corresponds to
the original lump, and the coordinates $(\pi^1+\delta
\pi^1,...,\pi^{N-1}+\delta \pi^{N-1})$ correspond to the displaced
lump. Such hidden coordinates of the lump give a firm geometric
tool for the description of quantum dynamics during the interaction used
for the measuring process.
Then the question that I now want to raise is the following: {\it what
``classical field'', i.e. field in space-time, corresponds to the
transition from the original to the displaced lump?} In other words,
we would like to find the ``field shell'' of the lump, its space-time
shape and its dynamics. The lump's perturbations will be represented
by the ``geometric bosons'' \cite{Le8} whose frequencies are not a
priori given, but are defined by some field equations which should be
established through a new variational problem. Before its formulation,
we have to use in fact a sophisticated differential geometric
construction in order to avoid the clash between quantum mechanics
and general relativity \cite{Penrose}.
I will assume that all ``vacua'' solutions belong to a single
separable {\it projective Hilbert space} $CP(N-1)$. The vacuum
represented by a GCS is merely a stationary point of some action
functional, not the solution with the minimal energy. Energy will be
associated with a tangent vector field to $CP(N-1)$ giving the velocity of
the action variation with respect to the notion of
Newton-Stueckelberg-Horwitz-Piron (NSHP) time \cite{H1}. Dynamical
(state-dependent) space-time will be built at any GCS and,
particularly, at the vacuum of some ``classical'' problem (see
below). Therefore Minkowskian space-time is functionally local
(state-dependent) in $CP(N-1)$, and the space-time motion is dictated by
the field equations connecting two infinitesimally close GCS's.
The connection between these local space-times may be physically
established by a measurement given in terms of the geometry of the
base manifold $CP(N-1)$. This may seem like Everett's idea
about ``parallel worlds'', but it has of course a different physical
sense. Nowadays we witness the Multiverse (omnium) concept
\cite{Penrose,W1}. I think there is only one Universe, but there
exists a continuum of dynamical space-times, each of them related to
one point of the quantum phase space $CP(N-1)$. The standard
approach, identifying the Universe with space-time, is too strong an
assumption from this point of view.
\section{LDV's and tangent fibre bundles}
The state space ${\cal H}$ of the field configurations with a finite number of
action quanta is a stationary construction. We introduce dynamics
{\it by the velocities of the GCS variation} representing some
``elementary excitations'' (quantum particles). Their dynamics is
specified by the Hamiltonian, giving the time-variation velocities of
the action quantum numbers in different directions of the tangent
Hilbert space $T_{(\pi^1,...,\pi^{N-1})} CP(N-1)$, where the ordinary
linear quantum scheme takes place. The rate of the action variation
gives the energy of the ``particles''.
The local dynamical variables corresponding to the internal symmetries of
the GCS and their breakdown should now be expressed in terms of the
local coordinates $\pi^k$. The Fubini-Study metric
\begin{equation}
G_{ik^*} = [(1+ \sum |\pi^s|^2) \delta_{ik}- \pi^{i^*} \pi^k](1+
\sum |\pi^s|^2)^{-2} \label{FS}
\end{equation}
and the affine connection
\begin{eqnarray}
\Gamma^i_{mn} = \frac{1}{2}G^{ip^*} (\frac{\partial
G_{mp^*}}{\partial \pi^n} + \frac{\partial G_{p^*n}}{\partial
\pi^m}) = - \frac{\delta^i_m \pi^{n^*} + \delta^i_n \pi^{m^*}}{1+
\sum |\pi^s|^2} \label{Gamma}
\end{eqnarray}
in these coordinates will be used. Hence the internal dynamical
variables and their norms should be state-dependent, i.e. local in
the state space \cite{Le6,Le7}. These local dynamical variables
realize a non-linear representation of the unitary global $SU(N)$
group in the Hilbert state space $C^N$. Namely, the $N^2-1$ generators
of $G = SU(N)$ may be divided in accordance with the Cartan
decomposition: $[H,H] \subseteq H, [B,H] \subseteq B, [B,B] \subseteq H$. The
$(N-1)^2$ generators
\begin{eqnarray} \Phi_h^i \frac{\partial}{\partial \pi^i}+c.c. \in
H,\quad 1 \le h \le (N-1)^2
\end{eqnarray}
of the isotropy group $H = U(1)\times
U(N-1)$ of the ray (Cartan sub-algebra) and $2(N-1)$ generators
\begin{eqnarray}
\Phi_b^i \frac{\partial}{\partial \pi^i} + c.c. \in B, \quad 1 \le b
\le 2(N-1)
\end{eqnarray}
are the coset $G/H = SU(N)/S[U(1) \times U(N-1)]$ generators
realizing the breakdown of the $G = SU(N)$ symmetry of the GCS.
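For $N=2$ the expressions (\ref{FS}) and (\ref{Gamma}) reduce to a single coordinate $\pi$ with $G_{11^*}=(1+|\pi|^2)^{-2}$ and $\Gamma^1_{11}=-2\pi^*/(1+|\pi|^2)$, which is easy to verify symbolically. The following sketch is merely an illustrative check of these closed forms, treating $\pi$ and $\pi^*$ as independent variables, as usual in K\"ahler geometry.

```python
import sympy as sp

# CP(1): a single local coordinate; pic plays the role of pi^*
pi_, pic = sp.symbols('pi_ pic')

# G_{1 1*} from Eq. (FS) specialized to one coordinate
G = ((1 + pi_*pic) - pic*pi_) / (1 + pi_*pic)**2

# Gamma^1_{11} = (1/2) G^{-1} (dG/dpi + dG/dpi) = G^{-1} dG/dpi
Gamma = sp.simplify(sp.diff(G, pi_) / G)

# check against the closed forms predicted by Eqs. (FS) and (Gamma)
assert sp.simplify(G - 1/(1 + pi_*pic)**2) == 0
assert sp.simplify(Gamma - (-2*pic/(1 + pi_*pic))) == 0
```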
Furthermore, the $(N-1)^2$ generators of the Cartan sub-algebra may be
divided into two sets of operators: $1 \le c \le N-1$ (where
$N-1$ is the rank of $Alg\, SU(N)$) Abelian operators, and $1 \le q
\le (N-1)(N-2)$ non-Abelian operators corresponding to the
non-commutative part of the Cartan sub-algebra of the isotropy
(gauge) group. Here $\Phi^i_{\sigma}, \quad 1 \le \sigma \le N^2-1$,
are the coefficient functions of the generators of the non-linear
$SU(N)$ realization. They give the infinitesimal shift of the
$i$-th component of the coherent state driven by the $\sigma$-th component
of the unitary multipole field rotating the generators of $Alg\,
SU(N)$, and they are defined as follows:
\begin{equation}
\Phi_{\sigma}^i = \lim_{\epsilon \to 0} \epsilon^{-1}
\biggl\{\frac{[\exp(i\epsilon \lambda_{\sigma})]_m^i g^m}{[\exp(i
\epsilon \lambda_{\sigma})]_m^j g^m }-\frac{g^i}{g^j} \biggr\}=
\lim_{\epsilon \to 0} \epsilon^{-1} \{ \pi^i(\epsilon
\lambda_{\sigma}) -\pi^i \},
\end{equation}
\cite{Le1}.
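This defining limit is straightforward to check numerically. As an illustration (the sample amplitudes below are arbitrary), take $SU(2)$, where the $\lambda_\sigma$ are the Pauli matrices and the map $U_1$ gives the single coordinate $\pi = g^2/g^1$; for the diagonal generator $\lambda_3$ the limit can be evaluated by hand, $\Phi_3 = -2i\pi$, and a finite-$\epsilon$ quotient reproduces it.

```python
import numpy as np
from scipy.linalg import expm

lam3 = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli sigma_3
g = np.array([0.8 + 0.1j, 0.3 - 0.5j])              # arbitrary amplitudes

def pi_of(g):
    # local coordinate in the map U_1: compare with the amplitude g^1
    return g[1] / g[0]

def Phi(lam, g, eps=1e-6):
    # finite-epsilon version of the defining limit for Phi_sigma
    g_eps = expm(1j * eps * lam) @ g
    return (pi_of(g_eps) - pi_of(g)) / eps

# hand-computed value for sigma = 3: Phi_3 = -2 i pi, agreement to O(eps)
assert abs(Phi(lam3, g) - (-2j) * pi_of(g)) < 1e-4
```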
Then the sum of the $N^2-1$ energies of the `elementary
systems' (particle plus fields) is equal to the excitation energy of
the GCS, and the local Hamiltonian $\vec{H}$ is linear in the
partial derivatives $\frac{\partial }{\partial \pi^i} = \frac{1}{2}
(\frac{\partial }{\partial \Re{\pi^i}} - i \frac{\partial }{\partial
\Im{\pi^i}})$ and $\frac{\partial }{\partial \pi^{*i}} = \frac{1}{2}
(\frac{\partial }{\partial \Re{\pi^i}} + i \frac{\partial }{\partial
\Im{\pi^i}})$, i.e. it is a tangent vector to $CP(N-1)$
\begin{eqnarray}
\vec{H}=\vec{T_c}+\vec{T_q} +\vec{V_b} = \hbar \Omega^c \Phi_c^i
\frac{\partial }{\partial \pi^i} + \hbar \Omega^q \Phi_q^i
\frac{\partial }{\partial \pi^i} + \hbar \Omega^b \Phi_b^i
\frac{\partial }{\partial \pi^i} + c.c.. \label{field}
\end{eqnarray}
The characteristic equations for the PDE $\vec{H}|E>=E|E>$ give the
parametric representations of its solutions in $CP(N-1)$. The
parameter $\tau$ in these equations I will identify with the ``universal
time of evolution'' of Newton-Stueckelberg-Horwitz-Piron (NSHP)
\cite{H1}. This time is the measure of the GCS variation, i.e. it is
a state-dependent measure of the distance in $CP(N-1)$ (an evolution
trajectory length in the Fubini-Study metric) expressed in time
units. The energy quantization will be discussed elsewhere.
\section{Lorentz transformations and dynamical space-time }
Einstein's analysis of Galileo-Newtonian kinematics in an
inertial reference frame, based on classical Maxwell
electromagnetic field theory, led us to a new relativistic
kinematics \cite{Einstein1}. Unfortunately, a similar analysis based
on quantum theory is in a very preliminary state
\cite{Penrose,Ashtekar}. The continuation of such work is
necessary.
It is clear that the {\it coincidence} of the ``arrow'' with some
number on the ``limb'' is in fact the coincidence of two
space-time points. But in the quantum area one has literally neither an
``arrow'' nor a ``limb''; one has some ``clouds'' or ``field shells'' instead.
Thereby the uncertainty principle puts a limit on the
exact coincidence of two events. Therefore, in comparison with us,
Einstein had two privileges: he had intuitively clear classical
measuring devices (clocks, scales, rods, etc.) and the intuitively
clear spatial coincidence of two ``points'', say, the end of a rod
and the end of the scale. Without these ingredients it is difficult
to imagine the measurement process and even the space-time notion
itself. Generally, space-time coordinates lose direct physical sense
even in the framework of general relativity \cite{Einstein2}.
Quantum theory poses a new problem concerning the operational sense of
the microscopic invariance of the space-time scale. Indeed, all
abstract (notional) tools of a macroscopic laboratory (clocks, scales,
rods, etc.) should be exchanged for microscopic ones. Note that Bohr's
proposal about a ``classical apparatus'' is unacceptable since it is
inconsistent. We should now construct the space-time notion in
internal quantum terms.
The notion of physical space is based on the abstraction of the
separation between two material points, assuming that they may be as
far apart as we need. This separation may be measured by some hard scale.
The ``hard scale'' in fact means ``the same'' or an identical
scale, i.e. a scale with invariant length relative to some
transformation group. But in quantum theory it is inconsistent (or
at least questionable) to use a priori space-time symmetries.
A similar argument applies to time separation because of the
specific problem of the ``time-of-arrival'' \cite{A1}. Generally
speaking, {\it space-time separation is state-dependent}. In such a
situation, one should decide which criterion of identity is
physically acceptable in our case. If, say, electrons are used for the
measurement of the separation between a source $S_1$ and a detector
$D_1$, then it is most reasonable to use the criterion of the ``same
electron'' (emitted from $S_1$ and detected in $D_1$). All quantum
electrons are of course identical, but there are momentum and spin
which distinguish one electron from another. But in general this
criterion is not as good as we need, since we cannot be sure that the
electron detected in $D_2$ is the same as in $D_1$, or even that it
has some causal connection with the previous stage of the
measurement. There is at least one reason for this verdict: the
detection of some accidental electron, e.g. due to quantum
fluctuations, etc. Nevertheless, in a bubble chamber one may be
sure that the whole visible trace belongs to the ``same electron''.
Therefore, if the interaction is not too drastic, or if one takes into
account all possible decay channels of a unitary multiplet, we
can formulate the criterion of identity. Let me formulate this
criterion now, with the promise to decode all components of the
statement: {\it the local Hamiltonian should be parallel transported
during a ``smooth'' evolution}. I introduce the concept of
``dynamical space-time'' as a new construction capable of detecting the
coincidences of the qubit components in the formal two-level
``detector'' which is a part of the full quantum configuration
(the setup modeled by the GCS). The ``extraction'' of this ``detector''
is of course more or less a free choice of an observer. It is
important only that the chosen LDV should be invariantly connected
with the qubit coherent state with respect to one of the points of
the LDV spectrum. I will assume that the spectrum of the LDV is
known, even if it is really problematic, as, for example, in the PDE
eigen-problem $\vec{H}|E>=E|E>$ mentioned above.
\subsection{Embedding ``Hilbert (quantum)
dynamics'' in space-time} If we would like to have some embedding of
``Hilbert (quantum) dynamics'' in space-time, we should formalize
the quantum observation (or measurement of some internal
dynamical variable).
The diffeomorphism mentioned above between the rays of $CP(N-1)$ and the $SU(N)$
generators will be realized in terms of the local $SL(2,C)$ action
on the qubit state space $C^2$ as follows.
The basis of these spaces is formed by two vectors: the normal vector $|N>$
to the ``vacuum landscape'' $CP(N-1)$, corresponding to the eigenvalue
$\lambda_D$ of the measured dynamical variable $\hat{D}$, and the
tangent vector $|T>$, generated by the coset generators of $G/H$.
The latter describe the interaction used for the measurement
process. It is important to understand that the measurement, i.e.
the comparison of the expected qubit spinor $(\alpha_0,\beta_0)$ and the
measured qubit spinor $(\alpha_1,\beta_1)$, paves the way to embedding
the Hilbert space dynamics into the local dynamical space-time. This is
the replacement of the notorious ``arrow'' of the measuring device,
namely: one has a two-level system (logical spin $1/2$ \cite{Le6})
created by the quantum question, a unitary projector onto one of the
two states $|N>,|T>$. Their coherent states are given by the qubit
spinors $(\alpha,\beta)$; being connected with infinitesimal
$SL(2,C)$ transformations, they give rise to variations of the
space-time coordinates generated by local infinitesimal Lorentz
transformations. Why can we draw this conclusion?
Causal classical events lie (to a good approximation) on a light cone,
which is invariant relative to the Lorentz group. On the other hand, the
formal ``Lorentz spin transformation matrix'' transforms the spinor
of the quantum question applied to the measurement of some LDV,
helping us to detect some event. The classical detection of an event
is based on the coincidence of two spinors, one of which
corresponds to the expected value and the second to the detected
value of the LDV. This is possible only under the tuning of orientation
by rotation and the tuning of velocity by acceleration. Therefore we
should identify the ``Lorentz spin transformation matrix'' of the
qubit spinors with the Lorentz transformation of a classical inertial
frame.
The specific components of LDV's (see below) take the place of these
entities. But now an LDV is a vector field defined over $CP(N-1)$, and
the comparison of LDV's at different setups (initial, and perturbed due
to the interaction used for measurement) requires some procedure of
{\it self-identification}. It is impossible to compare the expected and
measured LDV's ``directly'' (decoherence due to the $CP(N-1)$ geometry
\cite{Le4}). The affine parallel transport is quite acceptable for
this aim. The parallel transport forms the condition for the
coefficient functions of the LDV, leading to the nonlinear field
equations in the local dynamical space-time.
\subsection{Differential geometry of the measuring procedure}
The measurement, i.e. attributing a number to some dynamical
variable or observable, has in physics a subjective as well as an
objective sense. Namely, the numeric value of some observable
depends as a rule on the setup (the character of motion of the
laboratory, the type of measuring device, the field strength, etc.).
However, the relationships between the numeric values of dynamical
variables and the numeric characteristics of the laboratory motion, field
strength, etc., should be formulated invariantly, since they
reflect the objective character of the physical interaction used in
the measurement process. The numbers obtained from the
measurements carry information which does not exist a priori, i.e.
before the measurement process. But this information, comprising a
subjective as well as an objective invariant part, reflects the physics
of the interaction. The latter is one of the main topics of QFT. Since
each measurement is reducible (even if unconsciously) to the answer
to a ``yes'' or ``no'' question, it is possible to introduce
formally a quantum dynamical variable, the ``logical spin 1/2" \cite{Le6},
whose coherent states represent the quantum bit of information, the
``qubit".
POSTULATE 3
{\it We assume that the invariant, i.e. physically essential, part of the
information represented by the coherent states of the ``logical spin
1/2" is related to the space-time structure.}
Such an assumption is based on the observation that, on the one hand,
space-time is the manifold of points modeling different physical
systems (stars, atoms, electrons, etc.) artificially depleted of all
physical characteristics (material points without reference to
masses). In principle, arbitrary local coordinates may be attributed
to these points. But as we know from general relativity, the metric
structure depends on the matter distribution, and the zeroth
approximation of the metric tensor $g_{\mu \nu}=\eta_{\mu \nu}+...$
gives the Lorentz invariant interval \cite{Einstein2}. On the other
hand, the spinor structure of the Lorentz transformations represents
the transformations of the coherent states of the ``logical spin
1/2" or ``qubit". Thereby we can assume that the measurement of the
quantum dynamical variables expressed by the ``qubit" spinor
``creates'' the local space-time coordinates. We will formulate
non-linear field equations in this local space-time by means of a
variational principle referring to the generator of the quantum
state deformation.
The internal hidden dynamics of the quantum configuration given by a
GCS should somehow be reflected in physical space-time. Therefore we
should solve the ``inverse representation problem'': to find a locally
unitary representation of the dynamical group $SU(N)$ in the dynamical
space-time where the induced realization of the coherence group
$SU(2)$ of the qubit spinor acts \cite{Le1,Le2}. Its components are
subjected to the ``Lorentz spin matrix transformations'' \cite{G}. We should
build the local spinor basis invariantly related to the ground
state manifold $CP(N-1)$. First of all, we need the local
reference frame (LRF) as some analog of the ``representation'' of
$SU(N)$. Each LRF, and hence each $SU(N)$ ``representation'', may be
marked by the local coordinates (\ref{coor}) of the ``vacuum
landscape''. Now we should almost literally repeat the differential
geometry of a smooth manifold embedded in the flat ambient Hilbert
space ${\cal{H}}=C^N$. The geometry of this smooth manifold is that of the
projective Hilbert space equipped with the Fubini-Study metric
(\ref{FS}) and the affine connection (\ref{Gamma}).
In order to express the measurement of the ``particle's field'' in
geometrically intrinsic terms, I assume that the GCS is expressed in
the local coordinates
\begin{eqnarray}
|G(\pi^1,...,\pi^{N-1})>=(g^0(\pi^1,...,\pi^{N-1}),g^1(\pi^1,...,\pi^{N-1}),
...,g^{N-1}(\pi^1,...,\pi^{N-1}))^T,
\end{eqnarray}
where $\sum_{a=0}^{N-1} |g^a|^2= R^2$, and, hence,
\begin{eqnarray}
g^0(\pi^1,...,\pi^{N-1})=\frac{R^2}{\sqrt{R^2+\sum_{s=1}^{N-1}|\pi^s|^2}},
\end{eqnarray} and for $1\leq i\leq N-1$ one has
\begin{eqnarray}
g^i(\pi^1,...,\pi^{N-1})=\frac{R
\pi^i}{\sqrt{R^2+\sum_{s=1}^{N-1}|\pi^s|^2}},
\end{eqnarray}
i.e. $CP(N-1)$ will be embedded in the Hilbert space of Planck's
quanta ${\cal{H}}=C^N$.
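As a quick numerical sanity check (my own illustration, not part of the original text), one can verify that the embedding coordinates above satisfy $\sum_a |g^a|^2 = R^2$ for arbitrary local coordinates $\pi^i$:

```python
import numpy as np

# Check of the embedding normalization sum_a |g^a|^2 = R^2.
# N, R and the coordinates pi^i are arbitrary test inputs.
rng = np.random.default_rng(0)
N, R = 5, 2.0
pi = rng.normal(size=N - 1) + 1j * rng.normal(size=N - 1)
denom = np.sqrt(R**2 + np.sum(np.abs(pi) ** 2))
g0 = R**2 / denom                  # the g^0 component
gi = R * pi / denom                # the g^i components, 1 <= i <= N-1
total = abs(g0) ** 2 + np.sum(np.abs(gi) ** 2)
assert abs(total - R**2) < 1e-12
```

The identity holds exactly, so the image of the map does lie on the sphere of radius $R$ in ${\cal{H}}=C^N$, as the embedding requires.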
Then the velocity of the ground state evolution relative to the NSHP time is
given by the formula
\begin{eqnarray}\label{41}
|H> = \frac{d|G>}{d\tau}=\frac{\partial g^a}{\partial
\pi^i}\frac{d\pi^i}{d\tau}|a\hbar>=|T_i>\frac{d\pi^i}{d\tau}=H^i|T_i>,
\end{eqnarray}
which is the tangent vector to the evolution curve
$\pi^i=\pi^i(\tau)$, where
\begin{eqnarray}\label{42}
|T_i> = \frac{\partial g^a}{\partial \pi^i}|a\hbar>=T^a_i|a\hbar>.
\end{eqnarray}
Then the ``acceleration'' is as follows
\begin{eqnarray}\label{43}
|A> =
\frac{d^2|G>}{d\tau^2}=|g_{ik}>\frac{d\pi^i}{d\tau}\frac{d\pi^k}{d\tau}
+|T_i>\frac{d^2\pi^i}{d\tau^2}=|N_{ik}>\frac{d\pi^i}{d\tau}\frac{d\pi^k}{d\tau}\cr
+(\frac{d^2\pi^s}{d\tau^2}+\Gamma_{ik}^s
\frac{d\pi^i}{d\tau}\frac{d\pi^k}{d\tau})|T_s>,
\end{eqnarray}
where
\begin{eqnarray}\label{44}
|g_{ik}>=\frac{\partial^2 g^a}{\partial \pi^i \partial \pi^k}
|a\hbar>=|N_{ik}>+\Gamma_{ik}^s|T_s>
\end{eqnarray}
and the state
\begin{eqnarray}\label{45}
|N> = N^a|a\hbar>=(\frac{\partial^2 g^a}{\partial \pi^i \partial
\pi^k}-\Gamma_{ik}^s \frac{\partial g^a}{\partial \pi^s})
\frac{d\pi^i}{d\tau}\frac{d\pi^k}{d\tau}|a\hbar>
\end{eqnarray}
is the normal to the ``hypersurface'' of the ground states. The
minimization of this ``acceleration'' under the transition from the
point $\tau$ to $\tau+d\tau$ may be achieved by the annihilation of
the tangential component
\begin{equation}
(\frac{d^2\pi^s}{d\tau^2}+\Gamma_{ik}^s
\frac{d\pi^i}{d\tau}\frac{d\pi^k}{d\tau})|T_s>=0
\end{equation}
i.e. under the condition of the affine parallel transport of the
Hamiltonian vector field
\begin{equation}\label{par_tr}
dH^s +\Gamma^s_{ik}H^id\pi^k =0.
\end{equation}
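For $N=2$ this transport law is easy to test numerically. The sketch below is my own illustration under explicit assumptions: $R=1$ and the standard Fubini-Study connection $\Gamma^1_{11}=-2\bar{\pi}^1/(1+|\pi^1|^2)$ on $CP(1)$. It integrates $dH^1+\Gamma^1_{11}H^1 d\pi^1=0$ along a curve and verifies that the Fubini-Study norm of $H$ is conserved, as metric compatibility of the connection requires:

```python
# Parallel transport dH + Gamma H dpi = 0 on CP(1) with R = 1.
# Assumed connection: Gamma^1_11 = -2*conj(pi)/(1 + |pi|^2) (Fubini-Study).
def fs_norm2(pi, H):
    """Fubini-Study squared norm G_{1 bar1} |H|^2 on CP(1)."""
    return abs(H) ** 2 / (1.0 + abs(pi) ** 2) ** 2

steps = 100_000
dtau = 1.0 / steps
tau, H = 0.0, 1.0 + 0.0j
n0 = fs_norm2(tau, H)
for _ in range(steps):             # transport along the real curve pi(tau) = tau
    pi = complex(tau)
    gamma = -2.0 * pi.conjugate() / (1.0 + abs(pi) ** 2)
    H -= gamma * H * dtau          # dH = -Gamma H dpi, with dpi = dtau here
    tau += dtau
# norm conserved; H(1) matches the exact solution H(tau) = 1 + tau^2
assert abs(fs_norm2(tau, H) - n0) < 1e-4
assert abs(H - 2.0) < 1e-2
```

The conservation of the Fubini-Study norm is what makes the parallel transport suitable for the self-identification of the ``field shell'' discussed above.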
The Gauss-Codazzi equations
\begin{eqnarray}\label{46}
\frac{\partial N^a}{\partial \pi^i}=B^s_i T^a_s \cr \frac{\partial
T_k^a}{\partial \pi^i}-\Gamma^s_{ik}T^a_s=B_{ik}N^a
\end{eqnarray}
are used here instead of the anthropic principle \cite{Penrose,W1}.
They give us the dynamics of the vacuum (normal) vector and of the
tangent vectors, i.e. one has the LRF dynamics modeling the ``moving
representation'', or moving quantum setup,
\begin{eqnarray}\label{47}
\frac{d N^a}{d \tau}=\frac{\partial N^a}{\partial \pi^i} \frac{d
\pi^i}{d\tau}+c.c.= B^s_i T^a_s \frac{d \pi^i}{d\tau} +c.c. = B^s_i
T^a_s H^i +c.c.; \cr \frac{d T_k^a}{d \tau}=\frac{\partial
T_k^a}{\partial \pi^i}\frac{d \pi^i}{d\tau} +c.c. =
(B_{ik}N^a+\Gamma^s_{ik}T^a_s)\frac{d \pi^i}{d\tau}+c.c. \cr
= (B_{ik}N^a+\Gamma^s_{ik}T^a_s) H^i+c.c.
\end{eqnarray}
Please remember that $0 \leq a \leq N-1$, while $1\leq i,k,m,n,s \leq
N-1$. The tensor $B_{ik}$ of the second quadratic form of the ground
state ``hypersurface'' is as follows:
\begin{eqnarray}\label{48}
B_{ik} =<N|\frac{\partial^2 |G>}{\partial \pi^i \partial \pi^k}.
\end{eqnarray}
Now one should build the qubit spinor in the local basis $(|N>,|D>)$
for the quantum question with respect to the measurement of some
local dynamical variable $\vec{D}$ at some GCS, which may be marked
by the normal $|N>$. We will assume that there is a {\it natural state
$\widetilde{|D>}$ of the quantum system in the LRF representation},
equal to the renormalized lift of the LDV $\vec{D} \in T_{\pi}CP(N-1)$
into the environmental Hilbert space $\cal{H}$, and that there is an {\it
expectation state} $\hat{D}|D_{expect}>=\lambda_D|D_{expect}>$,
associated with the tuning of the ``measuring device''. This notional
measuring device is associated with the local unitary projector along
the normal $|N>$ onto the natural state $\widetilde{|D>}$. In fact
it defines the covariant derivative in $CP(N-1)$. The lift-vectors
$|N>,|D>$ are given by the solutions of (\ref{47}) arising under the
interaction used for the measurement of the LDV $\vec{D}$. In
general $|D>$ is not a tangent vector to $CP(N-1)$. But the
renormalized vector, defined as the covariant derivative
$|\widetilde{D}>=|D>-<Norm|D>|Norm>$ with
$|Norm>=\frac{|N>}{\sqrt{<N|N>}}$, is a tangent vector to
$CP(N-1)$. The operation of the $|\widetilde{D}>$ renormalization is an
orthogonal (unitary) projector. Indeed,
\begin{eqnarray}
\widetilde{|\widetilde{D}>}= \widetilde{(|D>-<Norm|D>|Norm>)}\cr =
|D>-<Norm|D>|Norm> \cr - <Norm|(|D>-<Norm|D>|Norm>)|Norm> \cr
=|D>-<Norm|D>|Norm> = |\widetilde{D}>.
\end{eqnarray}
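This idempotency, together with the orthogonality of $|\widetilde{D}>$ to the normal, can be confirmed numerically; the sketch below uses random vectors in $C^4$ purely as an illustration:

```python
import numpy as np

# D -> D - <Norm|D>|Norm> is an idempotent projector orthogonal to |N>.
# The vectors below are random test data, not physical states.
rng = np.random.default_rng(1)
N_vec = rng.normal(size=4) + 1j * rng.normal(size=4)
D = rng.normal(size=4) + 1j * rng.normal(size=4)
norm_vec = N_vec / np.sqrt(np.vdot(N_vec, N_vec).real)   # |Norm>

def project(v):
    # np.vdot conjugates its first argument, giving <Norm|v>
    return v - np.vdot(norm_vec, v) * norm_vec

D_t = project(D)
assert np.allclose(project(D_t), D_t)      # idempotency
assert abs(np.vdot(N_vec, D_t)) < 1e-12    # <N|D_tilde> = 0
```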
Then at the point $(\pi^1,...,\pi^{N-1})$ one has two components of
the qubit spinor
\begin{eqnarray}\label{513}
\alpha_{(\pi^1,...,\pi^{N-1})}=\frac{<N|D_{expect}>}{<N|N>} \cr
\beta_{(\pi^1,...,\pi^{N-1})}=\frac{<\widetilde{D}|D_{expect}>}
{<\widetilde{D}|\widetilde{D}>}
\end{eqnarray}
and at the infinitesimally close point
$(\pi^1+\delta^1,...,\pi^{N-1}+\delta^{N-1})$ one has the new qubit
spinor
\begin{eqnarray}\label{514}
\alpha_{(\pi^1+\delta^1,...,\pi^{N-1}+\delta^{N-1})}=\frac{<N'|D_{expect}>}
{<N'|N'>} \cr \beta_{(\pi^1+\delta^1,...,\pi^{N-1}+\delta^{N-1})}=
\frac{<\widetilde{D}'|D_{expect}>}{<\widetilde{D}'|\widetilde{D}'>}
\end{eqnarray}
where the basis $(|N'>,|\widetilde{D}'>)$ is the lift of the
parallel transported $(|N>,|\widetilde{D}>)$ from the
infinitesimally close $(\pi^1+\delta^1,...,\pi^{N-1}+\delta^{N-1})$
back to $(\pi^1,...,\pi^{N-1})$.
These two infinitesimally close qubit spinors, expressed as
functions of $\theta,\phi,\psi,R$ and
$\theta+\epsilon_1,\phi+\epsilon_2,\psi+\epsilon_3,R+\epsilon_4$,
are represented as follows
\begin{eqnarray}\label{s1}
\eta = R \left( \begin {array}{c} \cos \frac{\theta}{2}(\cos
\frac{\phi- \psi}{2} - i\sin \frac{\phi - \psi}{2}) \cr \sin
\frac{\theta}{2} (\cos \frac{\phi+\psi}{2} +i \sin
\frac{\phi+\psi}{2}) \end {array}
\right)
= R\left( \begin {array}{c} C(c-is) \cr S( c_1+is_1)
\end
{array} \right)
\end{eqnarray}
and
\begin{eqnarray}
\eta+\delta \eta = R\left( \begin {array}{c} C(c-is) \cr S(
c_1+is_1) \end {array} \right) \cr + R\left( \begin {array}{c}
S(is-c)\epsilon_1-C(s+i c)\epsilon_2+
C(s+ic)\epsilon_3+C(c-is)\frac{\epsilon_4}{R} \cr
C(c_1+is_1)\epsilon_1+S(ic_1-s_1)\epsilon_2-S(s_1-ic_1)\epsilon_3
+S(c_1+is_1)\frac{\epsilon_4}{R}
\end
{array}
\right)
\end{eqnarray}
may be connected by an infinitesimal ``Lorentz spin transformations
matrix'' \cite{G}
\begin{eqnarray}
L=\left( \begin {array}{cc} 1-\frac{i}{2}\tau ( \omega_3+ia_3 )
&-\frac{i}{2}\tau ( \omega_1+ia_1 -i ( \omega_2+ia_2)) \cr
-\frac{i}{2}\tau
( \omega_1+ia_1+i ( \omega_2+ia_2))
&1-\frac{i}{2}\tau( -\omega_3-ia_3)
\end {array} \right).
\end{eqnarray}
Then the accelerations $a_1,a_2,a_3$ and the angular velocities $\omega_1,
\omega_2, \omega_3$ may be found in the linear approximation from
the equation
\begin{eqnarray}\label{equ}
\eta+\delta \eta = L \eta
\end{eqnarray}
as functions of the qubit spinor components depending on the local
coordinates $(\pi^1,...,\pi^{N-1})$.
Hence the infinitesimal Lorentz transformations define small
``space-time'' coordinate variations. It is convenient to take the
Lorentz transformations in the following form: $ct'=ct+(\vec{x}
\vec{a}) d\tau, \quad \vec{x'}=\vec{x}+ct\vec{a} d\tau
+(\vec{\omega} \times \vec{x}) d\tau$, where I put
$\vec{a}=(a_1/c,a_2/c,a_3/c), \quad
\vec{\omega}=(\omega_1,\omega_2,\omega_3)$ \cite{G} in order that
$\tau$ have the physical dimension of time. The coordinates $x^\mu$
of points in this space-time serve in fact merely for the
parametrization of deformations of the ``field shell'' arising under
its motion according to the non-linear field equations \cite{Le1,Le2}.
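As a consistency illustration (my own check, with arbitrary numerical values for $\vec{x}$, $\vec{a}$, $\vec{\omega}$), one can verify that these infinitesimal transformations preserve the interval $s^2=(ct)^2-|\vec{x}|^2$ to first order in $d\tau$: the boost contribution cancels between $ct'$ and $\vec{x}'$, and the rotation contribution drops out because $\vec{x}\cdot(\vec{\omega}\times\vec{x})=0$.

```python
import numpy as np

# First-order invariance of s^2 = (ct)^2 - |x|^2 under
#   ct' = ct + (x.a) dtau,   x' = x + ct a dtau + (w x x) dtau.
# All numerical values are arbitrary test inputs.
rng = np.random.default_rng(2)
ct = 1.7
x, a, w = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
dtau = 1e-6
ct2 = ct + x.dot(a) * dtau
x2 = x + ct * a * dtau + np.cross(w, x) * dtau
s2_before = ct**2 - x.dot(x)
s2_after = ct2**2 - x2.dot(x2)
assert abs(s2_after - s2_before) < 1e-9    # residual is O(dtau^2)
```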
\section{Field shell equations (FSE)}
In order to find the ``field shell'' of the perturbed GCS one should
establish some wave equations in the dynamical space-time. All these
notions require more precise definitions. Namely, in the
simplest case of $CP(1)$, the ``field shells'', represented in
spherical coordinates, are the classical vector fields
$\Omega^{\alpha}=\frac{x^{\alpha}}{r}(\omega +i \gamma), \quad 1\leq
\alpha \leq 3$, giving the rates of the GCS variations. The tensor
fields with $1\leq \alpha \leq 8,15,...,N^2-1$ will be discussed
elsewhere. Note that the maximal number of EQM's $a=0,1,...N,...$ is
now strongly connected with the tensor character of the GCS driving
field $\Omega^{\alpha}$. These fields are ``classical'' since they
are not subjected to quantization directly, i.e. by the attribution
of fermionic or bosonic commutation relations. They obey
nonlinear field equations. Their internal dynamical variables, like
spin, charge, etc., will be a consequence of their dynamical
structure.
A ``particle'' is now associated with the ``field shell'' in the
dynamical space-time (see below), given locally by the Hamiltonian
vector field $\vec{H}$. At each point $(\pi^1,...,\pi^{N-1})$ of
$CP(N-1)$ one has an ``expectation value'' of $\vec{H}$ defined
by a measuring device. But the displaced GCS may be reached along one
of a continuum of paths. Therefore the comparison of two vector fields
and of their ``expectation values'' at neighboring points requires
some natural rule. ``Natural'' in our case means that the
comparison makes sense only for the same ``particle'', i.e. for its ``field
shell''. For this reason one should have a ``self-identification''
procedure. The affine parallel transport in $CP(N-1)$ of vector
fields is a natural and the simplest rule for the comparison of the
corresponding ``field shells''. Physically, the self-identification
of a ``particle'' literally means that its Hamiltonian vector field is
Fubini-Study covariantly constant.
But there remain questions: what should coincide, and what are
``the expected'' and ``the detected'' particles, given that we have
no particles at all? Since we have only the unitary fields
$\Omega^{\alpha}$ as parameters of the GCS transformations, we assume,
in accordance with the super-equivalence principle, that under an
infinitesimal shift of the unitary field $\delta \Omega^{\alpha}$ in
the dynamical space-time, the shifted Hamiltonian field should
coincide with the infinitesimal shift of the tangent Hamiltonian field
generated by the parallel transport in $CP(N-1)$ during the NSHP time
$\delta \tau$ \cite{H1}. Thus one has
\begin{equation}
\hbar (\Omega^{\alpha} + \delta \Omega^{\alpha} ) \Phi^k_{\alpha}=
\hbar \Omega^{\alpha}( \Phi^k_{\alpha} - \Gamma^k_{mn}
\Phi^m_{\alpha} V^n \delta \tau)
\end{equation}
and, hence,
\begin{equation}
\frac{ \delta \Omega^{\alpha}}{\delta \tau} = -
\Omega^{\alpha}\Gamma^m_{mn} V^n.
\end{equation}
We introduce the dynamical space-time coordinates $x^{\mu}$ as
state-dependent quantities, transforming in accordance with the
local Lorentz transformations $x^{\mu} + \delta x^{\mu} =
(\delta^{\mu}_{\nu} + \Lambda^{\mu}_{\nu} \delta \tau )x^{\nu}$. The
parameters of $\Lambda^{\mu}_{\nu} (\pi^1,...,\pi^{N-1})$ depend on
the local transformations of LRF in $CP(N-1)$ described in the
previous paragraph. Assuming a spherically symmetric solution, we
will use the coordinates $(x^0=ct,x^1=r\sin \Theta \cos \Phi,
x^2=r\sin \Theta \sin \Phi, x^3=r\cos \Theta)$. In the case of
spherical symmetry, $\Omega^1=(\omega+i \gamma) \sin \Theta \cos
\Phi, \Omega^2=(\omega+i \gamma) \sin \Theta \sin \Phi,
\Omega^3=(\omega+i \gamma) \cos \Theta$, and in the general case of
separability of the angular and radial parts, one has
$\Omega^{\alpha}=\sum C_{l,m}^{\alpha} Y_{l,m}(\Theta,\Phi)(\omega+i
\gamma)=\sum C_{l,m}^{\alpha} Y_{l,m}(\Theta,\Phi)\Omega$. Then
taking into account the expressions for the ``4-velocity"
$v^{\mu}=\frac{\delta x^{\mu}}{\delta \tau} = \Lambda^{\mu}_{\nu}
(\pi^1,...,\pi^{N-1})
x^{\nu} $ one has the field equation
\begin{equation}\label{FSE}
v^{\mu} \frac{\partial \Omega}{\partial x^{\mu} } = -
\Omega\Gamma^m_{mn} V^n,
\end{equation}
where
\begin{equation}
\matrix{ v^0&=&(\vec{x} \vec{a}) \cr
\vec{v}&=&ct\vec{a} +(\vec{\omega} \times
\vec{x}) \cr }.
\end{equation}
If one wishes to find the field corresponding to a given trajectory,
say, a geodesic in $CP(N-1)$, then, taking into account that any
geodesic as a whole belongs to some $CP(1)$, one may put $ \pi^1=
e^{i\phi} \tan(\sigma \tau)$. Then $V^1=\frac{d \pi^1}{d
\tau}=\sigma \sec^2(\sigma \tau) e^{i\phi}$, and one has a linear
wave equation for the gauge unitary field $\Omega^{\alpha}$ in the
dynamical space-time, with complicated coefficient functions of the
local coordinates $(\pi^1,...,\pi^{N-1})$. Under the assumption
$\tau = w t$ this equation has the following solution
\begin{eqnarray}
\omega+i \gamma=(F_1(r^2-c^2t^2)+i F_2(r^2-c^2t^2)) \exp{(-2w c
\int_0^t dp \frac{ \tan(w p)}{A \sqrt{c^2(p^2-t^2)+r^2}})},
\end{eqnarray}
where $F_1,F_2$ are arbitrary functions of the interval
$s^2=r^2-c^2t^2$, $(\vec{a},\vec{x})=Ar\cos(\chi)$,
$A=\sqrt{a_x^2+a_y^2+a_z^2}$ and $r=\sqrt{x^2+y^2+z^2}$. The angle
$\chi$ is in fact defined by a solution of equation (5.20). I
used $\chi=\pi$, since only the ``radial boost
turned toward the center of the field shell" is of interest here.
The overall factor demonstrates the diffusion of the light cone
(mass shell) due to the boosts. Thus our results are consistent with the
so-called ``off-shell'' idea of Horwitz-Piron-Stueckelberg
\cite{H2}.
\section{Quasi-Hamiltonian equations}
The theory of quasi-linear PDE field equations such as
(\ref{FSE}) is well known \cite{Kamke,Arnold}. I wish to apply the general
approach, viewing the quasi-linear PDE as a particular case of a
nonlinear PDE. Let me write such an equation in the form
\begin{equation}\label{FSE1}
G(x^{\mu}, P_{\mu},\Omega)=v^{\mu} \frac{\partial \Omega}{\partial
x^{\mu} } + \Omega\Gamma^m_{mn} V^n=0,
\end{equation}
where I put $P_{\mu}=\frac{\partial \Omega}{\partial x^{\mu} }$.
Such PDE's demonstrate a natural ``wave-corpuscular duality"
in the following sense. Equation (\ref{FSE1}) itself is a field
equation for $\Omega$. On the other hand, the ``4-velocity" $v^{\mu} = \frac{d
x^{\mu}}{d \tau}$ with respect to the evolution parameter $\tau$ may be
treated as the velocity of a ``corpuscle" moving along a trajectory in
the dynamical space-time. We can see this from the following
calculation. Equation (\ref{FSE1}) defines a hyper-surface $E$
in the 9-dimensional space of 1-jets \cite{Arnold}. The phase curves
lie as a whole on $E$ and obey the following quasi-Hamiltonian system.
\begin{equation}\label{FSE2}
\frac{d G(x^{\mu}, P_{\mu},\Omega)}{d \tau}=
G_{x^{\mu}}v^{\mu}+G_{P_{\mu}}\frac{d P_{\mu}}{d
\tau}+G_{\Omega}\frac{d\Omega}{d\tau}=0.
\end{equation}
Using the explicit form of $G$, one can rewrite this equation as
follows:
\begin{equation}\label{FSE3}
G_{x^{\mu}}v^{\mu}+v^{\mu} \frac{d P_{\mu}}{d
\tau}+G_{\Omega}\frac{\partial \Omega}{\partial x^{\mu}} v^{\mu}=0.
\end{equation}
Then the full characteristic system now reads
\begin{eqnarray}\label{FSE4}
\frac{d x^{\mu}}{d \tau} = \frac{\partial G}{\partial P_{\mu}} =
v^{\mu}, \cr \frac{d P_{\mu}}{d \tau}=-\frac{\partial G}{\partial
x^{\mu}} - \frac{\partial G}{\partial \Omega}P_{\mu}, \cr
\frac{d\Omega}{d\tau}=\frac{\partial \Omega}{\partial
x^{\mu}}v^{\mu}=\frac{\partial \Omega}{\partial
x^{\mu}}\frac{\partial G}{\partial P_{\mu}}=P_{\mu} \frac{\partial
G}{\partial P_{\mu}}.
\end{eqnarray}
Here one has generalized Hamiltonian equations describing a
``corpuscular", point-wise motion in the 4D dynamical space-time. The
analysis of the stability of their solutions and the physical sense of
their Schr\"odinger quantization require further investigation.
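The structure of this characteristic system can be checked on a toy quasi-linear equation (my own example, not the field-shell equation itself): for $\Omega_t + v\,\Omega_x + a\Omega = 0$ the characteristics are $dx/d\tau=v$ and $d\Omega/d\tau=-a\Omega$, and integrating them reproduces the exact solution $\Omega(t,x)=f(x-vt)\,e^{-at}$:

```python
import math

# Method of characteristics for the toy PDE dOmega/dt + v dOmega/dx + a Omega = 0.
# Parameter values and the initial profile f are arbitrary illustrations.
v, a, T = 1.5, 0.3, 2.0
f = lambda x: math.sin(x)

n = 20_000
dt = T / n
x, om = 0.7, f(0.7)
for _ in range(n):
    x += v * dt            # dx/dtau = dG/dP_mu, the "corpuscular" velocity
    om += -a * om * dt     # dOmega/dtau along the characteristic
exact = f(0.7) * math.exp(-a * T)   # Omega at (t = T, x = 0.7 + v T)
assert abs(x - (0.7 + v * T)) < 1e-9
assert abs(om - exact) < 1e-3
```

The point-wise motion along characteristics and the field values carried on them are two faces of the same equation, which is the ``wave-corpuscular duality" referred to above.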
\section{Conclusion}
The main new points of my approach are the following:
A. I use the notion of ``elementary quantum motions" (EQM) $|\hbar
a>$, with well defined quantized Planck action $S_a=\hbar a$,
instead of the notion of ``elementary particles". Their GCS's serve
as an abstract formalization of the ``quasi-classical'' description
of a quantum setup, or ``Schr\"odinger's lump" \cite{Penrose}.
B. The quantum phase space $CP(N-1)$ serves as the base of the
tangent fibre bundle of the local dynamical variables. A special
cross-section of this bundle and the affine gauge field are the
geometric tools for quantum measurement in the state-dependent
dynamical space-time.
C. Integration over all paths (alternatives) realizes the objective
approach in quantum theory. The dominant contribution will be given
by the geodesic of $CP(N-1)$ spanning the two GCS's \cite{Le4}.
The technical details are as follows:
1. The projective representation of pure $N$-dimensional quantum
states (one could think of arbitrarily large $N$) provides a natural
non-linear realization of the $G=SU(N)$ group manifold and the coset
sub-manifold $G/H=SU(N)/S[U(1)\times U(N-1)]=CP(N-1)$. I consider
the generators of this group as the LDV's \cite{Le4} of the model.
2. These quantum dynamical variables are represented by tangent
vector fields to $CP(N-1)$. The embedding of $CP(N-1)$ into
$\mathcal{H}=C^N$ provides the measurement procedure for the
dynamical variables.
3. Quantum measurement ``creates" a local dynamical space-time capable
of detecting the coincidence of the expected and measured values of
these quantum dynamical variables.
4. The affine parallel transport associated with the Fubini-Study
metric, accompanied by the ``Lorentz spin transformation matrix"
\cite{G}, establishes this coincidence through the identification of
the parallel transported LDV's at different GCS's.
5. The parametrization of the measurement results with the help of
{\it attributed} local space-time coordinates is in fact the
embedding of the quantum dynamics in Hilbert space into the 4D world. This
procedure is well defined due to the existence of the infinitesimal
$SL(2,C)$ transformations of the qubit spinor, treated as Lorentz
transformations of the local space-time coordinates.
6. The quasi-linear PDE for the non-Abelian gauge field in the dynamical
space-time is naturally related to the ODE's of its characteristics.
The latter are similar to the Hamilton canonical equations. Their
quantization leads to Schr\"odinger-like equations whose properties
will be discussed elsewhere.
\vskip 0.2cm ACKNOWLEDGEMENTS
I am sincerely grateful to Larry Horwitz for interesting discussions
and critical notes.
\vskip 0.2cm
\section{Introduction}
Low-energy supersymmetry \cite{susy}
provides the most compelling framework for electroweak physics,
in which the electroweak symmetry breaking is generated via the
dynamics of an elementary scalar Higgs sector. The scalar boson
masses are kept light (of order the electroweak symmetry breaking
scale) due to an approximate supersymmetry in nature.
Supersymmetry is broken at the TeV scale or below, and this
information is transmitted to the scalar sector, thereby
generating electroweak symmetry breaking dynamics at the proper scale.
The simplest model of low-energy supersymmetry is the minimal
supersymmetric extension of the Standard Model (MSSM). In this
model, the Higgs sector consists of eight degrees of freedom
made up from two complex weak scalar doublets of hypercharge $\pm
1$ respectively \cite{hhg}. Supersymmetry requires that the hypercharge
$-$1 [+1] Higgs doublets couple exclusively to down-type [up-type]
fermions,
respectively. After minimizing the Higgs potential, the neutral
components of the Higgs doublets acquire vacuum expectation values
(vevs) with $\langle H_i\rangle=v_i/\sqrt{2}$.
The model possesses five physical Higgs bosons: two CP-even scalars,
$h^0$ and $H^0$ (with $m_{\hl}<m_{\hh}$), a CP-odd Higgs scalar $A^0$
and a charged Higgs pair $H^\pm$. As usual, I define
$\tan\beta\equiv v_2/v_1$ and normalize $v^2\equiv
v_1^2+v_2^2\equiv 4m_W^2/g^2=(246~{\rm GeV})^2$. Due to the form
of the Higgs-fermion interaction, the third
generation quark masses are given by $m_b= h_b v_1/\sqrt{2}$ and
$m_t= h_t v_2/\sqrt{2}$, where $h_q$ ($q=t,b$) are the
corresponding Yukawa couplings.
The tree-level physical Higgs spectrum is easily computed
\cite{hhg}. Its
most noteworthy feature is the upper bound on the light CP-even
Higgs scalar: $m_{\hl}\leqm_Z|\cos 2\beta|\leqm_Z$. The maximum
tree-level upper bound of $m_Z$ is saturated when one of the vevs
vanishes (and $m_{\ha}>m_Z$).
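This tree-level bound is easy to verify numerically. The light CP-even eigenvalue of the tree-level mass matrix is the standard expression $m_{\hl}^2=\ifmath{{\textstyle{1 \over 2}}}\left[m_{\ha}^2+m_Z^2-\sqrt{(m_{\ha}^2+m_Z^2)^2-4m_{\ha}^2m_Z^2\cos^2 2\beta}\right]$; the scan below (an illustration with arbitrary parameter ranges) checks that it never exceeds $m_Z^2\cos^2 2\beta$:

```python
import math, random

# Scan of the tree-level MSSM bound m_h^2 <= m_Z^2 cos^2(2 beta), using the
# standard light CP-even eigenvalue. Parameter ranges are arbitrary.
random.seed(3)
mZ = 91.19   # GeV, for illustration
violations = 0
for _ in range(1000):
    mA = random.uniform(50.0, 2000.0)
    beta = random.uniform(0.0, math.pi / 2)
    c2b2 = math.cos(2 * beta) ** 2
    s = mA**2 + mZ**2
    mh2 = 0.5 * (s - math.sqrt(s**2 - 4 * mA**2 * mZ**2 * c2b2))
    if mh2 > mZ**2 * c2b2 + 1e-6:
        violations += 1
assert violations == 0
```

The bound is saturated as $m_{\ha}\to\infty$, consistent with the decoupling of the heavy Higgs states.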
It is convenient to consider a limiting case of the
MSSM Higgs sector where $v_1=0$. For finite $h_b$, this
limit corresponds to $m_b=0$, which is a reasonable
approximation.\footnote{In practice, it is sufficient to take
$v_1\ll v_2$, and then fix the value of $h_b$ to be consistent
with the observed $b$-quark mass.} In the $v_1=0$ model, the Higgs
sector degenerates to a one-doublet model with:
\begin{equation}
\label{vonedoublet} V_{\rm
Higgs}=m^2\Phi^\dagger\Phi+\ifmath{{\textstyle{1 \over 2}}}\lambda(\Phi^\dagger\Phi)^2\,,
\qquad\lambda\equiv\ifmath{{\textstyle{1 \over 4}}}(g^2+g^{\prime 2})\,.
\end{equation}
The supersymmetric constraint on the value of $\lambda$ is a
consequence of the fact that the MSSM Higgs quartic couplings
originate from the $D$-term contributions to the scalar potential.
The squared-mass of the light CP-even Higgs boson of the $v_1=0$ model is
given by $m_{\hl}^2=\lambda v^2=m_Z^2$.
\section{Upper bound of {\boldmath $m_{\hl}$} in the MSSM}
The upper bound of $m_{\hl}\leqm_Z$ will be modified by radiative
corrections \cite{hhprl,early-veff}. In order to obtain the
radiatively corrected
upper bound of $m_{\hl}$, it suffices to compute radiative corrections in
the $v_1=0$ model. Let us focus on the real part of the neutral
scalar component: $\Phi^0=(v+h)/\sqrt{2}$.
The bare Higgs potential takes the following form: $V_{\rm
Higgs}=t_0 h+\ifmath{{\textstyle{1 \over 2}}} (m_{\hl}^2)_0 h^2 + {\cal O}(h^3)$, where
\begin{equation}\label{bareparms}
t_0 = v_0\left[\ifmath{{\textstyle{1 \over 2}}}\lambda_0
v_0^2+m_0^2\right]\,,\qquad (m_{\hl}^2)_0 =
m_0^2+{\textstyle{3 \over 2}}\lambda_0 v_0^2\,,
\end{equation}
and the subscript $0$
indicates bare parameters. We also introduce
\begin{equation}
(m_Z^2)_0=\ifmath{{\textstyle{1 \over 4}}} (g_0^2+g_0^{\prime 2})v_0^2\,.
\end{equation}
The
on-shell renormalization scheme is defined such that $m_Z$ and
$m_{\hl}$ are physical masses corresponding to zeros of the
corresponding inverse propagators. Let the sum of all one-loop
(and higher) Feynman graphs contributing to the $Z$ and $h^0$
two-point functions be denoted by
$iA_{ZZ}(q^2)g^{\mu\nu}+iB_{ZZ}(q^2)q^\mu q^\nu$ and
$-iA_{hh}(q^2)$, respectively, where $q$ is the four-momentum of
one of the external legs. The physical masses are given by:
\begin{Eqnarray}
m_Z^2 &=& (m_Z^2)_0+{\rm Re}~A_{ZZ}(m_Z^2)\,, \label{oneloopz}\\
m_{\hl}^2 &=& (m_{\hl}^2)_0+{\rm Re}~A_{hh}(m_{\hl}^2)\,.\label{onelooph}
\end{Eqnarray}
Since $v$ is the vev
of the scalar field at the true minimum of the potential, we
require that the sum of all tadpoles must vanish. That is,
\begin{equation}\label{tadpole}
t_0+A_h(0)=0\,,
\end{equation}
where $-iA_h(0)$ is the sum of
all one-loop (and higher) Feynman graphs contributing to the $h^0$
one-point function. Combining
eqs.~(\ref{bareparms})--(\ref{tadpole}), we obtain
\begin{equation}\label{oneloopmass}
m_{\hl}^2=m_Z^2+{\rm
Re}~\left[A_{hh}(m_Z^2)-A_{ZZ}(m_Z^2)\right] -{A_h(0)\over v}
+\left[\lambda_0-\ifmath{{\textstyle{1 \over 4}}}(g_0^2+g_0^{\prime 2})\right]v_0^2\,.
\end{equation}
This result is accurate at one-loop order, since we have put
$m_{\hl}=m_Z$ and $v_0=v$ on the right hand side where possible.
Naively, one might argue that \eq{oneloopmass} can be
simplified by using the supersymmetric condition
$\lambda_0=\ifmath{{\textstyle{1 \over 4}}}(g_0^2+g_0^{\prime 2})$. However,
this is correct only if a regularization scheme that preserves
supersymmetry is employed. Of course, the physical quantity
$m_{\hl}^2$ must be independent of scheme.
Consider two different regularization schemes:
dimensional regularization (DREG) and
dimensional reduction \cite{dred} (DRED). Renormalized couplings
defined via (modified) minimal subtraction in these two schemes are
called $\overline{\rm MS}$ and $\overline{\rm DR}$ couplings,
respectively. DREG does not preserve supersymmetry because the number
of gauge and gaugino degrees of freedom does not match in $n\neq 4$
dimensions. In contrast, DRED preserves supersymmetry
(at least at one and two-loop order).
In DRED [DREG], bare quantities will be denoted
with the subscript $D$ [$G$]. Then,
the supersymmetric condition holds in DRED:
\begin{equation} \label{dred}
\lambda_D-\ifmath{{\textstyle{1\over 4}}}(g_{D}^2+g_{D}^{\prime 2})=0 \,.
\end{equation}
We now demonstrate that the above relation is
violated in DREG. First, the gauge couplings of the two schemes are
related as follows \cite{antoniadis}
\begin{equation} \label{gdg}
g_{D}^2 = g_{G}^2+{g^4\over 24\pi^2}\,,\qquad\qquad
g_{D}^{\prime 2} = g_{G}^{\prime 2}\,.
\end{equation}
For the Higgs self-coupling $\lambda$,
the relation between the two schemes is derived by considering
the one-loop effective potential (in the Landau gauge),
$V\equiv V^{(0)}+V^{(1)}$, where $V^{(0)}$ is the
tree-level scalar potential and $V^{(1)}$ is given by:
\begin{equation} \label{vone}
V^{(1)} =-{1\over 64 \pi^2}\, {\rm Str}\,{\cal{M}}^4(\Phi)\left[\Delta +K - \ln
{{\cal{M}}^2(\Phi)\over \mu^2}\right]\,.
\end{equation}
In \eq{vone}, $K$ is a scheme-dependent constant (see below),
${\cal{M}}^2(\Phi)$ denotes the squared-mass
matrix as a function of the scalar Higgs fields ({\it i.e.},
the corresponding tree-level squared-mass matrices are obtained when
$\Phi$ is replaced by its vev),
and
the divergences that arise in the computation of the one-loop integrals
in $4-2\epsilon$ dimensions appear in the factor
$\Delta\equiv 1/\epsilon
-\gamma_E+\ln(4\pi)$ [where $\gamma_{\rm E}$ is the Euler constant].
We have also employed the notation
\begin{equation} \label{strace}
{\rm Str}\,\{\cdot\cdot\cdot\}\equiv\sum_i\,C_i(2J_i+1)(-1)^{2J_i}\,
\{\cdot\cdot\cdot\}_i\,,
\end{equation}
where the sum is taken over the corresponding mass matrix eigenvalues,
including a factor $C_i$ which counts internal degrees of freedom
({\it e.g.}, charge and color) for all particles of spin $J_i$ that
couple to the Higgs bosons.
In DRED, $K=3/2$, independent of particle $i$ in the sum [\eq{strace}].
The fact that particles of different spin yield the same constant $K$ is an
indication that DRED preserves supersymmetry at one-loop.
In DREG, $K=3/2$ for spin 0 and spin-1/2 particles, while
$K=5/6$ for spin-1 particles. However, the effective
potential (expressed in terms of bare parameters) must be independent of
scheme. Comparing the DREG and DRED computations, it follows that
\begin{equation}
\ifmath{{\textstyle{1 \over 8}}}\lambda_D v^4 -{1\over 64\pi^2}
\left({\textstyle{3 \over 2}}\right)(6m_W^4+3m_Z^4)
=\ifmath{{\textstyle{1 \over 8}}}\lambda_G v^4-{1\over 64\pi^2}
\left(\ifmath{{\textstyle{5\over 6}}}\right)(6m_W^4+3m_Z^4)\,,
\end{equation}
which yields
\begin{equation} \label{lambdadg}
\lambda_D=\lambda_G+{g^4(m_Z^4+2m_W^4)\over 64\pi^2m_W^4}\,.
\end{equation}
Combining the results of \eqs{gdg}{lambdadg} gives the DREG result
\begin{equation} \label{lambdarel}
\lambda_G-\ifmath{{\textstyle{1 \over 4}}}(g_{G}^2+g_{G}^{\prime 2})=-{g^4\over 64\pi^2m_W^4}
\left(m_Z^4 +{\textstyle{4 \over 3}} m_W^4\right)\,.
\end{equation}
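As a float-level cross-check (illustrative coupling values only, with the one-loop identification of the $g^4$ terms), one can verify that \eqs{gdg}{lambdadg}, together with the DRED condition \eq{dred}, reproduce \eq{lambdarel}:

```python
import math

# Check that g_D^2 = g_G^2 + g^4/(24 pi^2) and
# lambda_D = lambda_G + g^4 (m_Z^4 + 2 m_W^4)/(64 pi^2 m_W^4),
# combined with lambda_D = (g_D^2 + g'^2)/4, imply the quoted DREG relation.
g, gp, vev = 0.65, 0.35, 246.0            # arbitrary illustrative values
mW2 = g**2 * vev**2 / 4.0
mZ2 = (g**2 + gp**2) * vev**2 / 4.0
lam_D = (g**2 + gp**2) / 4.0              # SUSY condition in DRED
gG2 = g**2 - g**4 / (24.0 * math.pi**2)
lam_G = lam_D - g**4 * (mZ2**2 + 2.0 * mW2**2) / (64.0 * math.pi**2 * mW2**2)
lhs = lam_G - 0.25 * (gG2 + gp**2)
rhs = -g**4 * (mZ2**2 + (4.0 / 3.0) * mW2**2) / (64.0 * math.pi**2 * mW2**2)
assert abs(lhs - rhs) < 1e-12
```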
Thus, in computing the one-loop corrected Higgs mass [\eq{oneloopmass}]
in DREG [DRED], one must use the relation between $\lambda_0$ and
$\ifmath{{\textstyle{1 \over 4}}}(g_0^2+g_0^{\prime 2})$ given by \eq{lambdarel} [\eq{dred}].
One can check that this difference is precisely compensated by the
difference in DREG and DRED that arises in
the computation of the vector boson loop contributions to
the one-point and two-point functions.
Henceforth, we shall always use the DRED scheme, in which case
\cite{hhprl}
\begin{equation} \label{dredoneloopmass}
m_{\hl}^2=m_Z^2+{\rm Re}~\left[A_{hh}(m_Z^2)-A_{ZZ}(m_Z^2)\right]
-{A_h(0)\over v}\,.
\end{equation}
Although the loop functions above are individually divergent,
all divergences precisely cancel in the sum and yield a well-defined
one-loop result for $m_{\hl}$.
The method described above [resulting in \eq{dredoneloopmass}] is
sometimes called the diagrammatic method \cite{hhprl,completeoneloop}
since one explicitly evaluates
the one-point and two-point functions by standard Feynman diagram
techniques. A second method for computing $m_{\hl}$, called the effective
potential technique,
is often employed in the literature \cite{early-veff,veff,carena}. This
is not an alternate ``scheme'', but simply another way of organizing the
calculation. Consider the DRED one-loop effective potential introduced
above (with $K=3/2$). In the sum $V=V^{(0)}+V^{(1)}$, the
$\overline{\rm DR}$ scheme consists of absorbing the factor of $\Delta$ into
the bare parameters ($m_0$, $\lambda_0$ and $\Phi_0$),
which converts them into $\overline{\rm DR}$ parameters.
Renormalized quantities (such as the effective potential or the
$n$-point Green functions) will be denoted with tildes in the following.
These are computed in the Landau gauge; the divergent piece $\Delta$
is removed by $\overline{\rm DR}$ subtraction and the bare parameters
are replaced by renormalized $\overline{\rm DR}$ parameters. Finally,
the $\overline{\rm DR}$ parameters are related to physical
parameters.
We proceed as follows.
First, we minimize the renormalized
effective potential by setting the first derivative equal to zero. This
condition yields:
\begin{equation} \label{mincon}
\left[\ifmath{{\textstyle{1 \over 2}}}\lambda v^2+m^2\right]v+\widetilde A_h(0)=0\,.
\end{equation}
In \eq{mincon}, the first term on the left hand side arises at
tree-level, while the second term is a consequence of
the fact that the $n$th derivative of
$V^{(1)}$, evaluated at the potential minimum, is equal to the scalar
$n$-point function evaluated at zero external momentum \cite{pokorski}.
The second
derivative of the effective potential, denoted by $(m_{\hl}^2)_{\rm eff}$,
is similarly given by:
\begin{equation} \label{meff}
(m_{\hl}^2)_{\rm eff}=m^2+{\textstyle{3 \over 2}}\lambda v^2+\widetilde A_{hh}(0)\,.
\end{equation}
We may use the DRED relation [\eq{dred}], which is also satisfied by the
renormalized $\overline{\rm DR}$ parameters, to eliminate
$\lambda$. The $\overline{\rm DR}$ $Z$-mass parameter is given by
$(m_Z^2)_{\overline{\rm DR}}=\ifmath{{\textstyle{1 \over 4}}} (g^2+g^{\prime 2})v^2$. Combining
the above results yields:
\begin{equation} \label{mhone}
(m_{\hl}^2)_{\rm eff}= (m_Z^2)_{\overline{\rm DR}}+\widetilde A_{hh}(0)-
{\widetilde A_h(0)\over v}\,.
\end{equation}
In the literature, $(m_{\hl}^2)_{\rm eff}$ is sometimes used as the
approximation to the one-loop-improved Higgs squared-mass.
However, this is {\it not} a physical parameter, since it
depends on an arbitrary scale that is introduced in
the $\overline{\rm DR}$ subtraction scheme. To obtain an expression
for the physical mass, which corresponds to the zero of the inverse propagator,
we note that $(m_{\hl}^2)_{\rm eff}$ has been computed using the two-point
function evaluated at {\it zero} external momentum.
Thus, the physical Higgs squared-mass is given by:
\begin{equation} \label{mhtwo}
m_{\hl}^2=(m_{\hl}^2)_{\rm eff}+{\rm Re}~\widetilde A_{hh}(m_{\hl}^2)-\widetilde
A_{hh}(0)\,.
\end{equation}
Likewise, we must convert from $(m_Z^2)_{\overline{\rm DR}}$ to the
physical $Z$ squared-mass. This is accomplished using a result
analogous to that of \eq{oneloopz}, which guarantees that $m_Z$ corresponds
to the zero of the inverse $Z$ propagator:
\begin{equation} \label{mhthree}
m_Z^2=(m_Z^2)_{\overline{\rm DR}}+{\rm Re}~\widetilde A_{ZZ}(m_Z^2)\,.
\end{equation}
Combining eqs.~(\ref{mhone})--(\ref{mhthree}), we end up with
\begin{equation} \label{veffoneloopmass}
m_{\hl}^2=m_Z^2+{\rm Re}~\left[\widetilde A_{hh}(m_Z^2)-\widetilde A_{ZZ}(m_Z^2)\right]
-{\widetilde A_h(0)\over v}\,.
\end{equation}
Not surprisingly, we have reproduced the diagrammatic result
[\eq{dredoneloopmass}].
\section{Leading Logarithms and Renormalization Group
Improvement}
When the loop functions in \eq{dredoneloopmass} are computed, one finds
that the most significant contributions grow logarithmically with the
top squark masses. (Terms that are logarithmically sensitive to other
supersymmetric particle masses also exist.) Over a large range
of supersymmetric parameter space, the
radiatively corrected Higgs mass can be
well approximated by just a few terms. On the other hand, if the
logarithms become too large, then the
validity of the perturbation theory becomes suspect. However, in this
case the leading
logarithms can be resummed using renormalization group (RG)
techniques \cite{rgesum,llog}.
We begin with a one-loop analysis. Consider an effective field theory
approach \cite{llog}, and assume for simplicity that supersymmetry
breaking
is characterized by one mass scale, $M_{\rm SUSY}$, which is assumed to be
large compared with $m_Z$. At scales $\mu\leq M_{\rm SUSY}$,
the Higgs potential takes the form:
\begin{equation}
V=\ifmath{{\textstyle{1 \over 2}}} m^2(\mu)[h(\mu)]^2+\ifmath{{\textstyle{1 \over 8}}}\lambda(\mu)[h(\mu)]^4\,.
\end{equation}
Letting $h\to h+v$ with $m^2(\mu)<0$, the Higgs mass is given by
\begin{equation} \label{mhlmu}
m_{\hl}^2(\mu)=\lambda(\mu) v^2(\mu)\,.
\end{equation}
Since the effective theory is supersymmetric only for $\mu\geq M_{\rm SUSY}$,
we impose the supersymmetric boundary condition [see \eq{dred}]:
\begin{equation} \label{boundary}
\lambda(M_{\rm SUSY})=\ifmath{{\textstyle{1 \over 4}}}\left[g^2(M_{\rm SUSY})+g^{\prime 2}(M_{\rm SUSY})\right]\,.
\end{equation}
Scale dependent parameters satisfy renormalization group equations
(RGEs). For $\mu<M_{\rm SUSY}$, the Standard Model RGEs are relevant:
\begin{Eqnarray} \label{rges}
\beta_\lambda &\equiv& {d\lambda\over d\ln \mu^2} = {1\over
16\pi^2}\Bigl[6\lambda^2+{\textstyle{3 \over 8}}[2g^4+(g^2+g^{\prime 2})^2]-2\sum_f
N_{cf} h_f^4\Bigr] -2\lambda\gamma_v\,, \nonumber \\
\gamma_v &\equiv& {d\ln v^2\over d\ln \mu^2}
= {1 \over 16\pi^2}\Bigl[{\textstyle{9 \over 4}} g^2 + {\textstyle{3 \over 4}}
g^{\prime 2}-\sum_f N_{cf} h_f^2\Bigr]\,, \nonumber \\
\beta_{g^2+g^{\prime 2}} & \equiv & {d(g^2+g^{\prime 2}) \over d\ln \mu^2}
= {1\over 96\pi^2}\Bigl[(8N_g-43)g^4
+({\textstyle{40 \over 3}} N_g+1)g^{\prime 4}\Bigr]\,,
\end{Eqnarray}
where $h_f=\sqrt{2} m_f/v$, $N_g=3$ is the number of fermion
generations, and $N_{cf}=3$ $[1]$ when $f$ runs over quark [lepton]
indices.
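As a sketch, the beta functions of \eq{rges} are straightforward to transcribe; the function below keeps only the top quark in the fermion sums (a simplifying assumption, since $h_t$ dominates):

```python
import math

def sm_rges(lam, g2, gp2, ht2, Ng=3):
    """One-loop SM beta functions of eq. (rges), with respect to ln(mu^2),
    keeping only the top quark (N_c = 3) in the fermion sums."""
    k = 1.0 / (16.0 * math.pi**2)
    gamma_v = k * (2.25*g2 + 0.75*gp2 - 3.0*ht2)
    beta_lam = (k * (6.0*lam**2 + 0.375*(2.0*g2**2 + (g2 + gp2)**2)
                     - 6.0*ht2**2) - 2.0*lam*gamma_v)
    beta_gsum = ((8*Ng - 43)*g2**2 + (40.0*Ng/3.0 + 1.0)*gp2**2) / (96.0*math.pi**2)
    return beta_lam, gamma_v, beta_gsum
```

For electroweak-sized couplings the $-6h_t^4$ term dominates $\beta_\lambda$, which is the origin of the large positive shift of $\lambda(m_Z)$.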
It is instructive to solve the RGEs iteratively to one-loop, by ignoring the
$\mu$ dependence on the right hand sides in \eq{rges}.
Incorporating the boundary condition [\eq{boundary}],
the solution for $\lambda(m_Z)$ is given by~\footnote{One subtlety
consists of the proper way to run down from $m_t$ to $m_Z$, since
below $\mu=m_t$, the electroweak symmetry is broken. I will ignore
this subtlety here although it can be addressed; see \Ref{llog}.}
\begin{Eqnarray}
\lambda(m_Z) & = & \ifmath{{\textstyle{1 \over 4}}}(g^2+g^{\prime
2})(M_{\rm SUSY})-\beta_\lambda\ln\left({M_{\rm SUSY}^2\over m_Z^2}\right) \nonumber \\
& = & \ifmath{{\textstyle{1 \over 4}}}(g^2+g^{\prime
2})(m_Z)+(\ifmath{{\textstyle{1 \over 4}}}\beta_{g^2+g^{\prime 2}}
-\beta_\lambda)\ln\left({M_{\rm SUSY}^2\over m_Z^2}\right)\,.
\end{Eqnarray}
Finally, using \eq{mhlmu}, we identify the physical Higgs mass by
evaluating $m_{\hl}^2(\mu)$ at $\mu=m_Z$ and taking $v(m_Z)= 246$~GeV.
We know from the previous section that this is not strictly correct.
However, at the one-loop leading logarithmic level, this procedure is
accurate, and we end up with:
\begin{equation} \label{mh1LL}
(m_{\hl}^2)_{\rm 1LL}= m_Z^2+ (\ifmath{{\textstyle{1 \over 4}}}\beta_{g^2+g^{\prime 2}}
-\beta_\lambda)v^2\ln\left({M_{\rm SUSY}^2\over m_Z^2}\right)\,,
\end{equation}
where the subscript 1LL indicates that the result is only accurate to
one-loop leading logarithmic order. To obtain the full one-loop leading
logarithmic expression, simply insert the results of \eq{rges} into
\eq{mh1LL} [in $\beta_\lambda$ one can consistently
set $\lambda=\ifmath{{\textstyle{1 \over 4}}}(g^2+g^{\prime 2})$].
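A quick numeric sketch of \eq{mh1LL} (illustrative 1998-era inputs, top quark only in the fermion sums, and $\lambda=\frac14(g^2+g^{\prime 2})$ in $\beta_\lambda$ as suggested above):

```python
import math

m_Z, m_W, v, m_t, Msusy = 91.19, 80.4, 246.0, 175.0, 1000.0   # GeV, assumed
g2  = 4.0 * m_W**2 / v**2
gp2 = 4.0 * (m_Z**2 - m_W**2) / v**2      # from m_Z^2 = (g^2+g'^2) v^2 / 4
ht2 = 2.0 * m_t**2 / v**2                 # h_t = sqrt(2) m_t / v
lam = 0.25 * (g2 + gp2)                   # SUSY boundary value of lambda

k = 1.0 / (16.0 * math.pi**2)
gamma_v  = k * (2.25*g2 + 0.75*gp2 - 3.0*ht2)
beta_lam = (k * (6.0*lam**2 + 0.375*(2.0*g2**2 + (g2+gp2)**2) - 6.0*ht2**2)
            - 2.0*lam*gamma_v)
beta_gsum = ((8*3 - 43)*g2**2 + (40.0 + 1.0)*gp2**2) / (96.0*math.pi**2)

mh2_1LL = m_Z**2 + (0.25*beta_gsum - beta_lam) * v**2 * math.log(Msusy**2/m_Z**2)
print(f"(m_h)_1LL ~ {math.sqrt(mh2_1LL):.0f} GeV")
```

As the text warns, this un-resummed estimate overshoots the RG-improved value.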
We have checked \cite{hhprl,llog}
that the above result matches precisely with
the diagrammatic computation [\eq{dredoneloopmass}] in the limit of
$M_{\rm SUSY}\gg m_Z$, where $M_{\rm SUSY}$ characterizes the scale of
supersymmetric particle masses (taken to be roughly degenerate).
The dominant term at one-loop is proportional to $m_t^4$ and
arises from the piece of $\beta_\lambda$ proportional to $h_t^4$.
Inserting $\beta_\lambda=-3h_t^4/8\pi^2$ with
$h_t=\sqrt{2}m_t/v$ into \eq{mh1LL}, one obtains~\footnote{The lower
scale of the logarithm in this case is $m_t^2$ (and not $m_Z^2$) since
this term arises from the incomplete cancelation of the top quark and
top squark loops.}
\begin{equation} \label{mhlapprox}
(m_{\hl}^2)_{\rm 1LT}=m_Z^2+{3g^2m_t^4\over
8\pi^2m_W^2}\ln\left(M_{\rm SUSY}^2\over m_t^2\right)\,.
\end{equation}
The subscript 1LT indicates that this is the leading $m_t^4$ piece of
$(m_{\hl}^2)_{\rm 1LL}$. However, the additional terms in $(m_{\hl}^2)_{\rm
1LL}$ are numerically significant as we shall show at the end of this
section.
Thus, we see that given the RG functions, no additional diagrammatic
computations are needed to extract the full one-loop leading logarithmic
contribution to the Higgs mass; the RG approach provides a useful
short cut for obtaining the leading one-loop corrections.
Of course, if the leading logarithms are
large, then they should be resummed to all orders.
This is accomplished by computing the
RG-improvement of the exact one-loop result as follows. Let
$(m_{\hl}^2)_{\rm 1RG}\equiv\lambda(m_Z)v^2(m_Z)$,
where $\lambda(m_Z)$ is obtained by {\it numerically}
solving the one-loop RGEs. Write the exact
one-loop result as: $m_{\hl}^2=(m_{\hl}^2)_{\rm 1LL}+(m_{\hl}^2)_{\rm 1NL}$,
where $(m_{\hl}^2)_{\rm 1NL}$ is the result obtained by subtracting the
one-loop leading logarithmic contribution from the exact one-loop
result. Clearly, this piece contains no term that grows logarithmically with
$M_{\rm SUSY}$. Then the complete one-loop RG-improved result is given by
$m_{\hl}^2=(m_{\hl}^2)_{\rm 1RG}+(m_{\hl}^2)_{\rm 1NL}$.
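A minimal illustration of the resummation (a sketch, not the full coupled system: here only $\lambda$ runs, with the other couplings frozen and only the top quark kept in the fermion sums):

```python
import math

m_Z, m_W, v, m_t, Msusy = 91.19, 80.4, 246.0, 175.0, 1000.0   # GeV, assumed
g2  = 4.0 * m_W**2 / v**2
gp2 = 4.0 * (m_Z**2 - m_W**2) / v**2
ht2 = 2.0 * m_t**2 / v**2

def beta_lam(lam):
    # one-loop beta function of lambda from eq. (rges), top quark only
    k = 1.0 / (16.0 * math.pi**2)
    gamma_v = k * (2.25*g2 + 0.75*gp2 - 3.0*ht2)
    return (k * (6.0*lam**2 + 0.375*(2.0*g2**2 + (g2+gp2)**2) - 6.0*ht2**2)
            - 2.0*lam*gamma_v)

# integrate d(lambda)/d(ln mu^2) from M_SUSY down to m_Z with RK4 steps
lam, n = 0.25 * (g2 + gp2), 400
h = (math.log(m_Z**2) - math.log(Msusy**2)) / n   # negative step: running down
for _ in range(n):
    k1 = beta_lam(lam)
    k2 = beta_lam(lam + 0.5*h*k1)
    k3 = beta_lam(lam + 0.5*h*k2)
    k4 = beta_lam(lam + h*k3)
    lam += h * (k1 + 2*k2 + 2*k3 + k4) / 6.0

mh_1RG = math.sqrt(lam) * v
print(f"(m_h)_1RG ~ {mh_1RG:.0f} GeV")
```

In the full analysis $h_t$, $g$ and $g'$ run as well ($h_t$ decreasing toward higher scales), which is what pushes the resummed result below the one-loop leading-log estimate.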
The RG technique can be extended to two loops as follows \cite{hhh}.
For simplicity, we focus on the leading corrections, which depend on
$\alpha_t\equiv h_t^2/4\pi$ and $\alpha_s\equiv g_s^2/4\pi$, {\it
i.e.}, we work in the approximation of
$h_b=g=g'=0$ and $\lambda\ll h_t$. (All two-loop results quoted in this
section are based on this approximation.)
The dependence on the strong coupling constant is a new
feature of the two-loop analysis.
We now solve the one-loop RGEs by iterating twice to two loops.
In the second iteration, we need the RGE for $h_t^2$, which in the
above approximation is given by
\begin{equation} \label{rgeht}
\beta_{h_t^2}\equiv{dh_t^2\over d\ln \mu^2}
= {1\over 16\pi^2}\left[{\textstyle{9 \over 2}} h_t^2 -8 g_s^2\right]\,h_t^2\,.
\end{equation}
This iteration produces the two-loop leading double logarithm
\cite{carena}, and yields
\begin{equation} \label{lambdatwice}
\lambda(m_t)=\ifmath{{\textstyle{1 \over 4}}}(g^2+g^{\prime 2})-{3h_t^4(m_t)\over 8\pi^2}\ln\left(
{M_{\rm SUSY}^2\over m_t^2}\right)\left[1+\left(\gamma_v+
{\beta_{h_t^2}\over h_t^2}\right)\ln\left({M_{\rm SUSY}^2\over m_t^2}\right)
\right]\,.
\end{equation}
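The size of the doubly-logarithmic term in the square bracket can be estimated numerically (illustrative inputs; $\gamma_v=-3h_t^2/16\pi^2$ in the $h_b=g=g'=0$ limit):

```python
import math

v, mt, Msusy, alpha_s = 246.0, 166.5, 1000.0, 0.108   # assumed inputs
ht2 = 2.0 * mt**2 / v**2
gs2 = 4.0 * math.pi * alpha_s

gamma_v = -3.0 * ht2 / (16.0 * math.pi**2)            # only h_t survives
beta_ht2_over_ht2 = (4.5*ht2 - 8.0*gs2) / (16.0 * math.pi**2)
L = math.log(Msusy**2 / mt**2)

corr = (gamma_v + beta_ht2_over_ht2) * L
print(f"bracket of eq. (lambdatwice): 1 + ({corr:.3f})")
```

The correction is negative and sizable (tens of percent for TeV-scale $M_{\rm SUSY}$), which is why absorbing it by a suitable scale choice for the running top-quark mass, as discussed later in the text, pays off.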
Next, we must incorporate the sub-dominant two-loop effects.
Only three modifications of our one-loop analysis
are required (in the limit of $h_b=g=g'=0$ and $\lambda\ll h_t$).
First, we need only the $h_t$- and $g_s$-dependent parts of the
two-loop contribution to
$\beta_\lambda$. That is, $\beta_\lambda$ is
modified as follows \cite{2looprges}
\begin{equation}
\beta_{\lambda} \longrightarrow \beta_{\lambda} +
\frac{1}{(16\pi^2)^2}\,\left[\,30 h_t^6-32 h_t^4\,g_s^2\,\right]\,.
\label{betalambdatwoloop}
\end{equation}
Including this into the iterative solution of the RGEs adds a two-loop
singly logarithmic term to the result of \eq{lambdatwice}.
Second, we must
distinguish between the Higgs pole mass (denoted by $m_{\hl}$ with
no argument) and the running Higgs mass evaluated at $m_t$.
Using the results of Sirlin and Zucchini \cite{sirlin},
\begin{equation}
m_{\hl}^2 = {4\,m_W^2\,\lambda(m_t)\over g^2}\left[1+\frac{1}{8}
\left(\frac{\alpha_t}{\pi}\right)\right]\,.
\label{poletorun}
\end{equation}
Third, we make use of the relation
between $v^2(m_t)$ and $v^2\equiv 4m_W^2/g^2$,
\begin{equation} \label{vevcorrection}
v^2(m_t)={4m_W^2\over g^2}\left[1-{3\over 8}\left({\alpha_t\over\pi}\right)
\right]\,.
\end{equation}
Using the above results, we end up with
\begin{Eqnarray} \label{twolooprun}
m_{\hl}^2 & =& m_Z^2+
\frac{3g^2}{8\pi^2m_W^2}\,
m_t^4(m_t)\,
\ln\left(\frac{M_{\rm SUSY}^2}{m_t^2}\right)\,
\left[1+\left(\gamma_v+
\frac{\beta_{h_t^2}}{h_t^2}\right)\,
\ln\left(\frac{M_{\rm SUSY}^2}{m_t^2}\right)\right. \nonumber \\
&&\qquad\qquad\qquad\qquad\qquad +
\left.\frac{4}{3}\left(\frac{\alpha_s}{\pi}\right)-
\frac{3}{8}\left(\frac{\alpha_t}{\pi}\right)\,\right] \,,
\end{Eqnarray}
where $h_t\equiv h_t(m_t)$ and $m_t(m_t)\equiv h_t(m_t)\,v(m_t)/\sqrt{2}$.
Numerically, the two-loop singly logarithmic piece of \eq{twolooprun}
contributes about $3\%$ relative to the one-loop leading logarithmic
contribution.
Let us compare this result with the
two-loop diagrammatic computation of \Ref{hempfhoang}. In order to
make this comparison, we must express \eq{twolooprun} in terms of the
top quark pole mass, $m_t$. The relation between $m_t$ and the running
top-quark mass is given by
\cite{tpoleref1,tpoleref2}
\begin{equation} \label{tpole}
m_t = m_t(m_t)\,\left[1+
\frac{4}{3}\left(\frac{\alpha_s}{\pi}\right)-
\frac{1}{2}\left(\frac{\alpha_t}{\pi}\right)\right]\,,
\end{equation}
where $m_t(m_t)$ is the $\overline{\rm MS}$ running top-quark mass
evaluated at $m_t$.\footnote{We
caution the reader that \Ref{tpoleref2} defines
$m_t(m_t)=h_t(m_t)v/\sqrt{2}$, which differs slightly from
the definition of $m_t(m_t)$ used here.}
Inserting the above result into \eq{twolooprun} yields:
\begin{Eqnarray} \label{leading2loop}
m_{\hl}^2 & =& m_Z^2+
\frac{3g^2 m_t^4}{8\pi^2m_W^2}\,
\ln\left(\frac{M_{\rm SUSY}^2}{m_t^2}\right)\,
\left[1+\left(\gamma_v+
\frac{\beta_{h_t^2}}{h_t^2}\right)\,
\ln\left(\frac{M_{\rm SUSY}^2}{m_t^2}\right) \right. \nonumber \\
&& \qquad\qquad\qquad\qquad\qquad -\left.
\left(\frac{4\alpha_s}{\pi}\right)+
\frac{13}{8}\left(\frac{\alpha_t}{\pi}\right)\,\right] \,.
\end{Eqnarray}
This result matches precisely the one obtained in \Ref{hempfhoang} in the
limit of $M_{\rm SUSY}\gg m_Z$. Note that the numerical
contribution of the two-loop singly-logarithmic contribution in
\eq{leading2loop} is about $10\%$ of the corresponding one-loop
contribution. Clearly, the
use of the running top quark mass [as in \eq{twolooprun}]
results in a slightly
better behaved perturbation expansion.
Finally, we can employ a very useful trick to make our results above
even more compact. The two-loop doubly-logarithmic contribution can be
absorbed into the one-loop leading-logarithmic contribution by an
appropriate choice of scale for the running top-quark mass.
Specifically, using the iterative one-loop leading-logarithmic
solution to the RGEs for $h_t$ and $v$ yields
\begin{equation}
m_t(\mu) = \ifmath{{\textstyle{1 \over \sqrt{2}}}} h_t(\mu) v(\mu) =
m_t(m_t)\left[1-\left({\alpha_s\over\pi}-{3\alpha_t\over
16\pi}\right)\ln\left({\mu^2\over m_t^2}\right)\right]\,.
\end{equation}
If we choose the scale $\mu_t\equiv\sqrt{m_tM_{\rm SUSY}}$ to evaluate the
running top-quark mass in \eq{twolooprun}, we end up with:
\begin{equation}
m_{\hl}^2 = m_Z^2+
\frac{3g^2}{8\pi^2m_W^2}\,
m_t^4(\mu_t)\, \ln\left(\frac{M_{\rm SUSY}^2}{m_t^2(\mu_t)}\right)\,
\left[1+
\frac{1}{3}\left(\frac{\alpha_s}{\pi}\right)-
\frac{3}{16}\left(\frac{\alpha_t}{\pi}\right)\,\right]\,.
\label{higgsmass3}
\end{equation}
One can check that the sum of the terms in the brackets
deviates from one by less than $1\%$. Thus, in practice, the
two-loop singly-logarithmic contribution can now be neglected since it
is numerically insignificant. That is, one can incorporate the
leading two-loop contributions by
simply inserting the running top-quark mass evaluated
at $\mu_t\equiv\sqrt{m_tM_{\rm SUSY}}$ into
the one-loop leading-logarithmic expression for $m_{\hl}^2$.
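The recipe can be sketched numerically; the inputs below ($\alpha_s$ value, masses) are illustrative assumptions, and the sub-percent bracket of \eq{higgsmass3} is dropped:

```python
import math

m_Z, m_W, v, Msusy = 91.19, 80.4, 246.0, 1000.0       # GeV, assumed
g2 = 4.0 * m_W**2 / v**2
mt_mt = 166.5                                         # running m_t(m_t)
alpha_s = 0.108                                       # ~alpha_s(mu_t), assumed
alpha_t = mt_mt**2 / (2.0 * math.pi * v**2)           # h_t^2 / 4 pi

# running top mass at mu_t = sqrt(m_t M_SUSY), from the one-loop formula above
mu_t = math.sqrt(mt_mt * Msusy)
mt_run = mt_mt * (1.0 - (alpha_s/math.pi - 3.0*alpha_t/(16.0*math.pi))
                  * math.log(mu_t**2 / mt_mt**2))

# leading m_t^4 one-loop logarithm, evaluated with m_t(mu_t)
mh2 = m_Z**2 + (3.0*g2*mt_run**4 / (8.0*math.pi**2*m_W**2)
                * math.log(Msusy**2 / mt_run**2))
print(f"m_t(mu_t) ~ {mt_run:.0f} GeV,  m_h ~ {math.sqrt(mh2):.0f} GeV")
```

Using the smaller $m_t(\mu_t)$ in the quartic power is precisely what mimics the two-loop doubly-logarithmic suppression.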
\begin{figure}[htb]
\centerline{\psfig{file=hhhbar1.ps,width=8.5cm,height=6.8cm,angle=90}}
\caption{The upper bound to the mass of the light CP-even Higgs boson
of the MSSM is plotted as a function of the common supersymmetric mass
$M_{\rm SUSY}$ (in the absence of squark mixing).
The one-loop leading logarithmic result [dashed line]
is compared with the RG-improved result, which was obtained
by a numerical computation [solid line] and by the simple
recipe described in the text [dot-dashed line].
Also shown are the leading $m_t^4$ result of
eq.~(\protect\ref{mhlapprox}) [higher dotted line], and its
RG-improvement [lower dotted line]. The running top quark mass used in
our numerical computations is $m_t(m_t)= 166.5$~GeV.}
\label{hhhfig1}
\end{figure}
Fig.~\ref{hhhfig1} illustrates the results of this section. We
display the results for $m_{\hl}$ based on
five different expressions for the light CP-even Higgs mass.
Case~(i) corresponds to the one-loop leading $m_t^4$ result,
$(m_{\hl}^2)_{\rm 1LT}$ [\eq{mhlapprox}]. In case~(ii)
we exhibit the full one-loop leading logarithmic expression,
$(m_{\hl}^2)_{\rm 1LL}$ [\eq{mh1LL}]. In case~(iii), we consider
$(m_{\hl}^2)_{\rm 1RG}$ obtained by solving the one-loop RGEs numerically.
Finally, case~(iv) corresponds to the simple recipe
proposed above, in which we evaluate $(m_{\hl}^2)_{\rm 1LL}$
by setting $m_t$ to the running top quark mass at the scale
$\mu_t$. For completeness, we also include
case~(v), where we apply the same recipe to
$(m_{\hl}^2)_{\rm 1LT}$.
The following general features are
noteworthy. First, we observe that over the region of $M_{\rm SUSY}$ shown,
$(m_{\hl})_{\rm 1RG}\simeq (m_{\hl})_{\rm 1LL}(m_t(\mu_t))$.
Second, the difference between $(m_{\hl})_{\rm 1LL}$
and $(m_{\hl})_{\rm 1RG}$ is non-negligible for even moderate values of
$M_{\rm SUSY}$;
neglecting RG-improvement can lead to an overestimate of $m_{\hl}$ which
can be as large as 10 GeV
(for $M_{\rm SUSY}>2$~TeV, the deviation grows even larger). Finally,
note that although the simplest approximation, $(m_{\hl})_{\rm 1LT}$,
reflects the dominant source of radiative corrections, it yields
the largest overestimate of the light Higgs boson mass.
\section{Additional Complications: Supersymmetric Thresholds}
In the analysis of the previous section, we assumed that all
supersymmetric particle masses were roughly equal and
substantially larger than $m_Z$.
To account for a non-degenerate supersymmetric
spectrum, we must recompute the RGEs in steps starting from
$\mu=M_{\rm SUSY}$ and ending at $m_Z$. Every
time the threshold of a supersymmetric particle is passed, we integrate
it out of the theory, and determine a new set of RGEs for the new
effective theory. Eventually, when
we pass below the lightest supersymmetric threshold, we regain the RGEs
of the Standard Model given in \eq{rges}. We can solve
iteratively for $\lambda(m_Z)$ as we did in the previous section, but
now using the more complicated set of RGEs.
Explicit formulae can be found in \refs{llog}{hhh}.
However, the above procedure fails to incorporate the effects of squark
mixing. Since the most important contribution to the Higgs mass
radiative corrections arises from the incomplete cancelation of the top
quark and top squark loops, it is important to examine
this sector more closely. First, we define our notation. The
physical top squark squared-masses (in the $v_1=0$ model)
are the eigenvalues of the following $2\times 2$ matrix:
\begin{equation}
\left(\begin{array}{cc}
M_{Q}^2+m_t^2-m_Z^2(\ifmath{{\textstyle{1 \over 2}}}-e_t \sin^2\theta_W) & m_t A_t \\
m_t A_t & M_{U}^2+m_t^2-m_Z^2 e_t \sin^2\theta_W
\end{array}\right)
\label{stopmatrix}
\end{equation}
where $e_t=2/3$ and $M_{Q}$, $M_{U}$,
$A_t$ are soft-supersymmetry-breaking parameters.
We shall
treat the squark mixing perturbatively, assuming that the off-diagonal
squark squared-masses are small compared to the diagonal
terms.~\footnote{Formally, we assume that $(M_1^2-M_2^2)/(M_1^2+M_2^2)\ll
1$, where $M_1^2$, $M_2^2$ are the top squark squared-masses.
Thus, we demand that $m_t A_t/M_{\rm SUSY}^2\ll 1$.}
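For orientation, the eigenvalues of the stop mass matrix are simple to evaluate; the parameter values below are illustrative assumptions:

```python
import math

m_Z, m_t, sw2, e_t = 91.19, 175.0, 0.231, 2.0/3.0     # assumed inputs
MQ, MU, At = 500.0, 500.0, 1000.0                     # soft parameters, GeV

a = MQ**2 + m_t**2 - m_Z**2 * (0.5 - e_t*sw2)         # left-left entry
b = MU**2 + m_t**2 - m_Z**2 * e_t * sw2               # right-right entry
off = m_t * At                                        # left-right mixing

disc = math.sqrt(0.25*(a - b)**2 + off**2)
M1sq, M2sq = 0.5*(a + b) + disc, 0.5*(a + b) - disc
print(f"stop masses ~ {math.sqrt(M1sq):.0f} and {math.sqrt(M2sq):.0f} GeV")
```

This particular parameter point has a sizable splitting, so the perturbative treatment of the off-diagonal entry is marginal here; the text's expansion assumes $m_tA_t/M_{\rm SUSY}^2\ll 1$.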
The perturbative effect of squark mixing is to modify the
supersymmetric relation between the Higgs quartic coupling and the gauge
couplings [\eq{boundary}]. Such modifications arise from one-loop squark
corrections to the Higgs quartic self-coupling via: (i) corrections to
the scalar two-point function on the external legs; (ii) triangle graphs
involving two trilinear Higgs-squark-squark interactions and one quartic
Higgs-Higgs-squark-squark interaction; and (iii) box graphs involving
four trilinear Higgs-squark-squark interactions \cite{infn}. Then,
\eq{boundary} is modified to:
\begin{equation} \label{newboundary}
\lambda(M_{\rm SUSY})=\ifmath{{\textstyle{1 \over 4}}}(g^2+g^{\prime
2})+\delta\lambda_2+\delta\lambda_3+\delta\lambda_4\,,
\end{equation}
where the $\delta\lambda_i$ arise from the three sources quoted above.
Explicitly,
\begin{Eqnarray}
\delta\lambda_2 &=& {-3(g^2+g^{\prime 2})\over 32\pi^2}A_t^2 h_t^2
B(M_Q^2,M_U^2)\,,\nonumber \\
\delta\lambda_3 &=& {3\over 32\pi^2}\Bigl[4h_t^4 A_t^2 h(M_Q^2,M_U^2)
+g^2 h_t^2 A_t^2 p_t(M_Q^2,M_U^2)\Bigr]\,, \nonumber \\
\delta\lambda_4 &=& {3\over 16\pi^2}h_t^4 A_t^4 g(M_Q^2,M_U^2)\,,
\end{Eqnarray}
where
\begin{Eqnarray} \label{functiondefs}
B(a,b)&\equiv &{1\over (a-b)^2}\left[\ifmath{{\textstyle{1 \over 2}}}\left(a+b\right)-{ab\over
a-b}\ln\left({a\over b}\right)\right]\,,\nonumber \\
h(a,b)&\equiv &{1\over a-b}\ln\left({a\over b}\right)\,,\nonumber \\
f(a,b)&\equiv & {-1\over (a-b)}\left[1-{b\over a-b}\ln\left({a\over b}
\right)\right]\,, \nonumber \\
g(a,b)&\equiv & {1\over (a-b)^2}\left[2-{a+b\over a-b}\ln\left({a\over b}
\right)\right]\,, \nonumber \\
p_t(a,b)&\equiv &f(a,b)+2e_t \sin^2\theta_W (a-b)g(a,b)\,.
\end{Eqnarray}
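These loop functions are easy to transcribe, and the degenerate limits quoted in the text can be checked numerically (a small self-test, approaching $b\to a$):

```python
import math

def B(a, b): return (0.5*(a + b) - a*b*math.log(a/b)/(a - b)) / (a - b)**2
def h(a, b): return math.log(a/b) / (a - b)
def f(a, b): return -(1.0 - b*math.log(a/b)/(a - b)) / (a - b)
def g(a, b): return (2.0 - (a + b)*math.log(a/b)/(a - b)) / (a - b)**2

# degenerate limits quoted in the text: B(a,a)=1/(6a), h(a,a)=1/a,
# f(a,a)=-1/(2a), g(a,a)=-1/(6a^2)
a, b = 4.0, 4.0 * (1.0 + 1e-4)
assert abs(B(a, b) - 1.0/(6.0*a))   < 1e-4
assert abs(h(a, b) - 1.0/a)         < 1e-4
assert abs(f(a, b) + 1.0/(2.0*a))   < 1e-4
assert abs(g(a, b) + 1.0/(6.0*a*a)) < 1e-4
```

($p_t$ is a linear combination of $f$ and $g$ and needs no separate check.)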
For simplicity, consider the case of $M_Q=M_U\equiv M_{\rm SUSY}$.
Using $B(a,a)= 1/6a$, $h(a,a)=1/a$, $f(a,a)=-1/2a$ and $g(a,a)=-1/6a^2$,
\eq{newboundary} becomes:
\begin{equation} \label{mixboundary}
\lambda(M_{\rm SUSY})=\ifmath{{\textstyle{1 \over 4}}}(g^2+g^{\prime
2})+{3h_t^4 A_t^2\over 8\pi^2 M_{\rm SUSY}^2}\left[1-{A_t^2\over
12M_{\rm SUSY}^2}\right]\,.
\end{equation}
Note that the correction term due to squark mixing has a maximum when
$A_t=\sqrt{6}M_{\rm SUSY}$. This relation is often called the maximal mixing
condition, since it corresponds to the point at which the one-loop
radiative corrections to $m_{\hl}^2$ are maximal.
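The maximal-mixing condition is a one-line check: the mixing correction in \eq{mixboundary} is proportional to $x(1-x/12)$ with $x=A_t^2/M_{\rm SUSY}^2$:

```python
# mixing correction factor from eq. (mixboundary), x = A_t^2 / M_SUSY^2
def mix_factor(x):
    return x * (1.0 - x / 12.0)

# scan x and locate the maximum: d/dx [x - x^2/12] = 0  =>  x = 6
xs = [0.01 * i for i in range(1, 1201)]
best = max(xs, key=mix_factor)
assert abs(best - 6.0) < 0.02          # i.e. A_t = sqrt(6) M_SUSY
```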
Using the new boundary condition, we may repeat the analysis of the
previous section and recompute $m_{\hl}^2$.
At one loop, the effect of the squark mixing is simply additive. That
is, the modification of $m_{\hl}^2$ due to squark mixing
at one loop is given by:
$(\Delta m_h^2)_{\rm 1mix}=
(\delta\lambda_2+\delta\lambda_3+\delta\lambda_4)v^2$.
At two-loops, we solve for $\lambda(m_Z)$ by iterating the RGE for
$\lambda(\mu)$ twice as in the previous section.
However, the boundary condition for $\lambda(M_{\rm SUSY})$ has been
altered, and this modifies the computation. The end result is
\begin{equation} \label{mhmixlog}
(\Delta m_h^2)_{\rm mix}={3g^2m_t^4 A_t^2\over
8\pi^2m_W^2M_{\rm SUSY}^2}\left(1-{A_t^2\over 12M_{\rm SUSY}^2}\right)
\left[1+2\left(\gamma_v+
\frac{\beta_{h_t^2}}{h_t^2}\right)\,
\ln\left(\frac{M_{\rm SUSY}^2}{m_t^2}\right) \right]
\end{equation}
{\it i.e.}, $(\Delta m_h^2)_{\rm mix}$ acquires a logarithmically-enhanced
piece at two loops.
In this approximation, the maximum in $(\Delta m_h^2)_{\rm mix}$
at $A_t=\sqrt{6}M_{\rm SUSY}$ is not shifted.
However, this method does {\it not} pick up any
non-logarithmically-enhanced
two-loop terms proportional to $A_t$. To obtain such terms,
one would have to perform a two-loop computation in order to find
the necessary two-loop terms that modify the boundary condition
[\eq{newboundary}].
It is again possible to absorb the two-loop singly-logarithmic term into
the one-loop contribution, $(\Delta m_h^2)_{\rm 1mix}$, by an
appropriate choice of scale for the top-quark mass. The end result is
quite simple:
\begin{equation} \label{deltamix2}
(\Delta m_h^2)_{\rm mix}={3g^2m_t^4(M_{\rm SUSY}) A_t^2\over
8\pi^2m_W^2M_{\rm SUSY}^2}\left(1-{A_t^2\over 12M_{\rm SUSY}^2}\right)\,.
\end{equation}
That is, $(\Delta m_h^2)_{\rm mix}=(\Delta m_h^2)_{\rm
1mix}(m_t(\mu_{\widetilde t}))$, where
the appropriate choice of scale in this case is $\mu_{\widetilde
t}\equiv M_{\rm SUSY}$. The difference from the previous case
[where $\mu_t=\sqrt{m_tM_{\rm SUSY}}$] arises due to the extra factor of 2
multiplying the two-loop singly-logarithmic term in \eq{mhmixlog} [compare
this with \eq{lambdatwice}]. Physically, $\mu_{\widetilde t}=M_{\rm SUSY}$
corresponds to the scale at which the squarks decouple and
the boundary condition [\eq{mixboundary}] is modified due to squark
mixing.
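Putting the pieces together (a rough sketch: only the leading $m_t^4$ terms are kept and the inputs are illustrative, so it need not reproduce the full curves of Fig.~2), the mixing-induced shift of the mass estimate can be gauged as follows:

```python
import math

m_Z, m_W, v, Msusy = 91.19, 80.4, 246.0, 1000.0       # GeV, assumed
g2, mt_mt, alpha_s = 4.0*m_W**2/v**2, 166.5, 0.108
alpha_t = mt_mt**2 / (2.0 * math.pi * v**2)

def mt_run(mu):
    # one-loop leading-log running top mass, as in the text
    return mt_mt * (1.0 - (alpha_s/math.pi - 3.0*alpha_t/(16.0*math.pi))
                    * math.log(mu**2 / mt_mt**2))

mt_t = mt_run(math.sqrt(mt_mt * Msusy))               # scale mu_t
mt_s = mt_run(Msusy)                                  # scale M_SUSY
base = m_Z**2 + (3.0*g2*mt_t**4 / (8.0*math.pi**2*m_W**2)
                 * math.log(Msusy**2 / mt_t**2))

def dmix(At):                                         # eq. (deltamix2)
    return (3.0*g2*mt_s**4*At**2 / (8.0*math.pi**2*m_W**2*Msusy**2)
            * (1.0 - At**2 / (12.0*Msusy**2)))

shift = math.sqrt(base + dmix(math.sqrt(6.0)*Msusy)) - math.sqrt(base)
print(f"maximal mixing raises this estimate by ~{shift:.0f} GeV")
```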
To illustrate the above results, we compare in Fig.~\ref{hhhfig4}
the value of $m_{\hl}$ as a function of $A_t$
based on the five cases exhibited in Fig.~\ref{hhhfig1}.
Specifically, the effects of $(\Delta m_h^2)_{\rm mix}$ are included at
the one-loop level in cases (i) and (ii), while cases (iv) and (v) make
use of the improved result given by \eq{deltamix2}.
In the full RG-improved result [case (iii)], the RGE for $\lambda(\mu)$
is computed numerically using the modified boundary condition
[\eq{newboundary}]. We see that
$(m_{\hl}^2)_{\rm 1RG}\simeq (m_{\hl}^2)_{\rm
1LL}(m_t(\mu_t))+(\Delta m_{\hl}^2)_{\rm 1mix}(m_t(\mu_{\widetilde t}))$.
Thus, once again a simple recipe provides an
excellent approximation to the numerically-integrated RG-improved result
over the entire region of the graph.
Note that the maximal value of $m_{\hl}$ occurs for $|A_t|\simeq
2.4M_{\rm SUSY}$. The solid or dash-dotted line provides our best mass estimate,
and we conclude that $m_{\hl}\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 125$~GeV for $M_{\rm SUSY}\leq 1$~TeV.
Similar results were also obtained by Carena {\it et al.} \cite{carena}.
\begin{figure}[htb]
\centerline{\psfig{file=hhhbar2.ps,width=8.5cm,height=6.8cm,angle=90}}
\caption{The upper bound to the mass of the light CP-even Higgs boson of
the MSSM plotted as a function of $A_t/M_{\rm SUSY}$. Squark-mixing effects
are incorporated as described in the text. See the caption to
Fig.~\ref{hhhfig1}.}
\label{hhhfig4}
\end{figure}
During the past year, two groups have computed the $A_t$ dependence of
$m_{\hl}$ at the two-loop level. \Rref{hollik} has performed a
diagrammatic two-loop computation which includes all terms of ${\cal
O}(\alpha_s)$, as a function of $\tan\beta$. \Rref{zhang}
uses an effective potential approach to extend the computation of
\Ref{hempfhoang} and compute directly the two-loop squark mixing
contributions in the $v_1=0$ model. These results show that the $A_t$
dependence of $m_{\hl}$ is modified slightly at two loops: the maximal
squark mixing point occurs at $A_t\simeq 2M_{\rm SUSY}$, a value somewhat
below the result noted above. Moreover, the value of $m_{\hl}$ at maximal
squark mixing is slightly higher than the one shown in Fig.~\ref{hhhfig4};
for $M_{\rm SUSY}=1$~TeV, the maximal value of $m_{\hl}$ is found to be close to
$m_{\hl}\simeq 130$~GeV.
Presumably, these results are due to genuine
two-loop non-logarithmically enhanced terms proportional to a power
of $A_t^2/M_{\rm SUSY}^2$.
An important check of the calculations presented in \refs{hollik}{zhang}
would be to explicitly verify the two-loop logarithmically-enhanced
contribution exhibited in \eq{mhmixlog}.
\section{Conclusions}
I have described in detail the theoretical basis for the computation of
the upper bound of the mass of the light CP-even Higgs boson of the
MSSM. It suffices to consider the limiting case of $v_1=0$, which
considerably simplifies the analysis.
I explained how one can use renormalization
group methods to provide a short-cut for obtaining the leading one-loop
and two-loop contributions to $m_{\hl}$. These methods can also be
generalized to the full MSSM Higgs sector at arbitrary $\tan\beta$.
Further details and references can be found in \Ref{hhh}.
As a result of the work by many groups during this past decade, we
believe that the value of $m_{\hl}$ as a function of the MSSM
parameters is accurately predicted within an uncertainty of a few GeV.
Simple analytic formulae provide an excellent representation of the
known results over a large range of the MSSM parameter
space \cite{carena,hhh}.
The partial two-loop information presently available is essential to this
conclusion and provides confidence that there are no surprises lurking
in some corner of the supersymmetric parameter space. Some clarification is
still needed to understand more completely the dependence on the squark
mixing parameters.
\section*{Acknowledgments}
This paper is based on a collaboration with Ralf Hempfling and Andre
Hoang. I have learned much from their wisdom. I would also like to
thank Joan Sola for his kind hospitality during my visit to Barcelona and
RADCOR-98. This work is partially supported by a grant from the
U.S. Department of Energy.
\begin{sloppy}
\begin{raggedright}
\def\app#1#2#3{{\sl Act. Phys. Pol. }{\bf B#1} (#2) #3}
\def\apa#1#2#3{{\sl Act. Phys. Austr.}{\bf #1} (#2) #3}
\def\ppnp#1#2#3{{\sl Prog. Part. Nucl. Phys. }{\bf #1} (#2) #3}
\def\npb#1#2#3{{\sl Nucl. Phys. }{\bf B#1} (#2) #3}
\def\jpa#1#2#3{{\sl J. Phys. }{\bf A#1} (#2) #3}
\def\plb#1#2#3{{\sl Phys. Lett. }{\bf B#1} (#2) #3}
\def\prd#1#2#3{{\sl Phys. Rev. }{\bf D#1} (#2) #3}
\def\pR#1#2#3{{\sl Phys. Rev. }{\bf #1} (#2) #3}
\def\prl#1#2#3{{\sl Phys. Rev. Lett. }{\bf #1} (#2) #3}
\def\prc#1#2#3{{\sl Phys. Reports }{\bf #1} (#2) #3}
\def\cpc#1#2#3{{\sl Comp. Phys. Commun. }{\bf #1} (#2) #3}
\def\nim#1#2#3{{\sl Nucl. Inst. Meth. }{\bf #1} (#2) #3}
\def\pr#1#2#3{{\sl Phys. Reports }{\bf #1} (#2) #3}
\def\sovnp#1#2#3{{\sl Sov. J. Nucl. Phys. }{\bf #1} (#2) #3}
\def\jl#1#2#3{{\sl JETP Lett. }{\bf #1} (#2) #3}
\def\jet#1#2#3{{\sl JETP Lett. }{\bf #1} (#2) #3}
\def\zpc#1#2#3{{\sl Z. Phys. }{\bf C#1} (#2) #3}
\def\ptp#1#2#3{{\sl Prog.~Theor.~Phys.~}{\bf #1} (#2) #3}
\def\nca#1#2#3{{\sl Nouvo~Cim.~}{\bf#1A} (#2) #3}
\def\hpa#1#2#3{{\sl Helv.~Phys.~Acta~}{\bf #1} (#2) #3}
\def\aop#1#2#3{{\sl Ann.~of~Phys.~}{\bf #1} (#2) #3}
\def\fP#1#2#3{{\sl Fortschr.~Phys.~}{\bf #1} (#2) #3}
\section*{References}
\subsection{References}
\newcommand{\noopsort}[1]{} \newcommand{\printfirst}[2]{#1}
\newcommand{\singleletter}[1]{#1} \newcommand{\switchargs}[2]{#2#1}
\section{Introduction}\label{sec:one}
Measurement of proton spin asymmetry in polarized deep inelastic scattering (DIS) experiment performed by European Muon Collaboration in 1987 \cite{Ashman:1987hv} showed that the total contribution from quark spin to the proton's spin is less than half. Since that time, along with the studies of unpolarized parton distribution functions (PDFs), many experimental, theoretical and phenomenological investigations have been performed to understand the constituents of the proton's spin and also determine their distributions. In this regard, the determination of polarized PDFs (PPDFs), which describe the structure of a nucleon in a helicity eigenstate, by performing a global analysis of available experimental data has also been of great interest~\cite{deFlorian:2009vb,Blumlein:2010rn,Leader:2010rb,deFlorian:2014yva,Nocera:2014gqa,
Jimenez-Delgado:2014xza,Sato:2016tuz,Shahri:2016uzl,Khanpour:2017cha,Ethier:2017zbq,Khanpour:2017fey,Salajegheh:2018hfs}.
On the other hand, the structure of nucleon both in unpolarized and polarized cases can be investigated in more detail using generalized parton distributions (GPDs)~\cite{Goeke:2001tz,Diehl:2003ny,Polyakov:2002yz,Freund:2002qf,Scopetta:2003et,Belitsky:2005qn,Boffi:2007yc,Guzey:2005ec,Guzey:2006xi,Hagler:2007xi,Alexandrou:2013joa,Kumericki:2007sa,Guidal:2013rya,Kumericki:2016ehc,Khanpour:2017slc} which comprise important concepts of Quantum Chromodynamics (QCD). Actually, GPDs provide quantitative information on the longitudinal and transverse distribution of partons inside the nucleon, and also their intrinsic and orbital angular momenta. Therefore, studying GPDs can shed light on various aspects of hadron structure.
An accurate knowledge of GPDs is essential for understanding and describing various hard exclusive processes~\cite{Goeke:2001tz,Diehl:2003ny,Belitsky:2005qn}, such as deeply virtual Compton scattering (DVCS), timelike Compton scattering (TCS), exclusive meson production by longitudinally polarized photons and photoproduction of heavy vector mesons~\cite{Ivanov:2004vd}. One of the main important properties of GPDs is their mutual relations with PDFs and elastic form factors (FFs). On one hand, GPDs are the off-forward kinematic generalizations of the ordinary PDFs which play a crucial role in inclusive DIS. In other words, PDFs can be recovered from GPDs (in the so-called forward limit) by setting to zero the extra variables in GPDs, such as the transverse momentum between the initial and final protons and skewness parameter. On the other hand, FFs (including the electric and magnetic form factors, or even the form factors associated with the energy-momentum tensor) which are other important quantities giving us valuable information on the structure of nucleon can be obtained from GPDs as well~\cite{Diehl:2004cx,Diehl:2007uc,Diehl:2013xca}. In fact, GPDs give us not only all the information contained in FFs, but also other useful information, for example, about the transverse displacement of partons~\cite{Burkardt:2002ks}. Consequently, from this point of view, GPDs are generalization of both PDFs and FFs. It is worth noting that the axial form factors (AFFs) which are fundamental quantities that describe spin content of the nucleon are also intimately related to polarized GPDs (see~\Cref{sec:two}).
Just like PDFs and PPDFs, GPDs are essentially non-perturbative objects, so that they cannot be determined directly from perturbative QCD, apart from their first Mellin moments in special cases in lattice QCD~\cite{Hagler:2007xi,Alexandrou:2013joa}. Although early studies of GPDs using various dynamical models of the nucleon structure (see Ref.~\cite{Khanpour:2017slc} and references therein) have played an important role in the better understanding of GPDs and exclusive processes, at the moment more attention is being paid to determining GPDs from fits to the available experimental data (see Ref.~\cite{Kumericki:2016ehc} and references therein). Actually, the extraction of GPDs from exclusive processes, for which all particles are detected in the final state, is theoretically well developed. There are valuable experimental data from DVCS on the proton at the HERA collider which cover a wide kinematical region (see Table 4 of~\cite{Kumericki:2016ehc}). For the case of DVCS experiments with a fixed proton target, there are also some data on various observables from the HERMES, CLAS and Hall A Collaborations (see Table 5 of~\cite{Kumericki:2016ehc}). DVCS data-taking will be continued at CERN by the COMPASS Collaboration~\cite{dHose:2004usi}. The future measurements at Jefferson Lab (JLab) by the CLAS Collaboration~\cite{Armstrong:2017wfw}, with experiments starting at 12 GeV (CLAS12), will also provide new information on the valence region. Moreover, one of the main goals of the future Electron-Ion Collider (EIC) is the measurement of DVCS observables and FFs~\cite{Accardi:2012qut}, which makes the extraction of both the $H$ and $E$ GPDs possible.
As mentioned, FFs can be written in terms of GPDs, and hence their measurements can give us useful important information on GPDs. In the case of nucleon spin studies, AFFs which are related to polarized GPDs can be extracted using various approaches~\cite{Tsushima:1988xv,JuliaDiaz:2004qr,Mamedov:2016ype,Liu:2016kpb,Anikin:2016teg,Adamuscin:2007fk,Aznauryan:2012ba,Ramalho:2017tga}.
One can find a review of experimental data in Refs.~\cite{Bernard:2001rs,Schindler:2006jq}. Many lattice QCD calculations of FFs have also been performed since the 1980s and have led to considerable results. In recent years, lattice QCD simulations of AFFs have been presented for pion masses in the range $m_\pi=0.2-0.6$ GeV~\cite{Bhattacharya:2013ehc,Liang:2016fgy,Green:2017keo,Yao:2017fym,Abdel-Rehim:2015owa,Bali:2014nma,Bhattacharya:2016zcn}. Very recently, the PACS Collaboration has reported the result of a lattice QCD calculation of the nucleon AFF in 2+1 flavor QCD near the physical pion mass~\cite{Ishikawa:2018rew}. In addition, neural networks can be applied to extract the nucleon AFF from experimental data. In particular, the authors of Ref.~\cite{Alvarez-Ruso:2018rdx} have used this tool to analyze the neutrino-deuteron scattering data measured by the Argonne National Laboratory (ANL) bubble chamber experiment.
In this work, we study the nucleon axial form factor and polarized GPDs, given the fact that they are connected via sum rules. Although, there are various models~\cite{Pasquini:2005dk,Pasquini:2006dv,Dahiya:2007mt,Frederico:2009fk,Mukherjee:2013yf,Maji:2015vsa} and parameterizations~\cite{Goldstein:2010gu,Goldstein:2013gra,Sharma:2016cnf} for GPDs, we use a practical ansatz suggested by Diehl~\cite{Diehl:2004cx,Diehl:2007uc,Diehl:2013xca} which relates the predetermined (polarized) PDFs as input to (polarized) GPDs. An important advantage of this ansatz is that it has a few free parameters to be fixed by analyzing experimental data. Considering different scenarios, we determine parameters of the model using standard $\chi^2$ analysis of experimental data for nucleon AFF.
This paper is organized as follows. In \Cref{sec:two}, the theoretical framework of our study is presented and we briefly describe the physics related to GPDs and AFFs. Our method to obtain optimum values for the polarized GPDs of quarks using the available experimental data for nucleon AFF is also introduced in this section. \Cref{sec:three} is devoted to introduction of the experimental data which are used in our $ \chi^2 $ analyses. In \Cref{sec:four}, we study in detail the nucleon AFF with emphasis on its dependence on PPDFs according to Diehl model~\cite{Diehl:2004cx,Diehl:2007uc,Diehl:2013xca}, and also the value of scale $ \mu^2 $ associated with the PPDFs. Moreover, we investigate the model uncertainties that are imposed on the nucleon AFF from various sources. In \Cref{sec:five}, we determine the best values of parameters of the model by performing some $ \chi^2 $ analyses of nucleon AFF data in various scenarios and discuss the results obtained and possible outlooks.
Finally, we summarize our results and conclusions in \Cref{sec:six}.\\
\section{Theoretical framework}\label{sec:two}
In this section, we briefly review physical concepts on GPDs and nucleon AFF, and present the theoretical framework we use to obtain optimum values and bounds for polarized GPDs using the available experimental data for nucleon AFF. As mentioned in the Introduction, GPDs (PDFs) are non-perturbative objects needed for describing hard exclusive (inclusive) electroproduction processes, which are defined as matrix elements of quark and gluon operators at a light-like separation between two proton states with different (same) momenta. GPDs are also universal objects just like PDFs, because they can be defined in the framework of QCD collinear factorization for hard exclusive processes~\cite{Collins:1998be,Collins:1996fb} such as DVCS and exclusive meson production by longitudinally polarized photons. The importance of GPDs is due to the fact that they contain valuable information on the hadron structure in QCD. Actually, the distributions of quarks and gluons in hadrons in terms of both momentum fractions and position in the transverse plane can be well described through GPDs.
In the present work, we use the convention of Ji~\cite{Ji:1996ek} for GPDs, in which $H$, $E$, $\widetilde{H}$ and $\widetilde{E}$ are defined as~\cite{Belitsky:2005qn,Diehl:2003ny}:
\begin{widetext}
\begin{flalign}
\label{Eq:1}
& \frac{1}{2}\int \frac{d z^-}{2\pi}\, e^{ix P^+ z^-}
\langle p'|\, \bar{q}(-\frac{1}{2} z)\, \gamma^+ q(\frac{1}{2} z)
\,|p \rangle \Big|_{\substack{z^+=0,\\\bm{z_\perp=0}}}
= \frac{1}{2P^+} \left[
H^q(x,\xi,t)\, \bar{u}(p') \gamma^+ u(p) +
E^q(x,\xi,t)\, \bar{u}(p')
\frac{i \sigma^{+\alpha} \Delta_\alpha}{2m} u(p)
\, \right] ,
\nonumber \\
&\frac{1}{2} \int \frac{d z^-}{2\pi}\, e^{ix P^+ z^-}
\langle p'|\,
\bar{q}(-\frac{1}{2} z)\, \gamma^+ \gamma_5\, q(\frac{1}{2} z)
\,|p \rangle \Big|_{\substack{z^+=0,\\\bm{z_\perp=0}}}
= \frac{1}{2P^+} \left[
\widetilde{H}^q(x,\xi,t)\, \bar{u}(p') \gamma^+ \gamma_5 u(p) +
\widetilde{E}^q(x,\xi,t)\, \bar{u}(p') \frac{\gamma_5 \Delta^+}{2m} u(p)
\, \right],
\end{flalign}
\end{widetext}
where $z=\left(z^+,\bm{z_\perp},z^-\right)$. As one can readily see from Eq.~(\ref{Eq:1}), GPDs have three degrees of freedom and are thus expressed as functions of three parameters, $ x $, $ \xi $ and $ t $. The first argument is the well-known Bjorken scaling variable (the average momentum fraction) $x=\frac{Q^2}{2 p\cdot q}$, with photon virtuality $ Q^2 $. Another longitudinal variable that plays a crucial role in GPDs is $\xi=\frac{p^+-p'^+}{p^++p'^+}$, which is called the ``skewness". The last argument is $t=(p'-p)^2=\Delta^2= -Q^2$, i.e. the square of the momentum transferred to the target.
As mentioned, GPDs cannot be calculated from perturbative QCD, but there are some lattice QCD calculations~\cite{Hagler:2007xi,Alexandrou:2013joa}. A suitable method to extract GPDs
is to perform a $ \chi^2 $ analysis of experimental data using the factorization theorem. Hard exclusive processes such as DVCS~\cite{Kumericki:2016ehc} and meson production~\cite{Diehl:2013xca} are the processes most commonly used for the extraction of GPDs. As data on hard exclusive processes are much scarcer than those on inclusive processes, the extraction of GPDs from experimental data is not yet feasible with a precision comparable to that of PDFs. One of the best ways to overcome this problem is to use a model for GPDs, but with as few parameters as possible. In this work, we implement Diehl's model \cite{Diehl:2004cx} for calculating polarized GPDs, which can be expressed in terms of PPDFs and has only a few free parameters to be fixed by fitting to nucleon AFF data. We describe the model below.
It is well established now that various nucleon FFs can be related to GPDs through sum-rules~\cite{Diehl:2013xca}. For example, the Dirac and Pauli form factors, $ F_1 $ and $ F_2 $, for proton and neutron can be expressed in the following form
\begin{align}
F_i^p= e_u F_i^u + e_d F_i^d + e_s F_i^s, \nonumber \\
F_i^n= e_u F_i^d + e_d F_i^u + e_s F_i^s,
\label{Eq:2}
\end{align}
where $ i=1, 2 $ and $ F_i^q $ is the contribution from quark flavor $ q $ to the nucleon form factor $ F_i^A $, with $ A=p,n $. As usual, $ e_q $ is the electric charge of the quark in units of the positron charge. Now, the flavor form factors $ F_i^q $ can be written in terms of the proton ``valence GPDs" $ H_v $ and $ E_v $ for unpolarized quarks of flavor $ q $ as
\begin{align}
F_1^q(t)= \int_0^1 dx~H_v^q(x,t), \nonumber \\
F_2^q(t)= \int_0^1 dx~E_v^q(x,t),
\label{Eq:3}
\end{align}
where valence GPDs $ {\cal G}_v= H_v, E_v $ for flavor $ q $ are expressed in terms of ``quark GPDs" $ {\cal G} $ as
\begin{equation}
{\cal G}_v^q(x,t)= {\cal G}^q (x,\xi=0,t) + {\cal G}^q (-x,\xi=0,t),
\label{Eq:4}
\end{equation}
with $ {\cal G}^q (-x,\xi=0,t)= - {\cal G}^{\bar q} (x,\xi=0,t) $. Note that the result, as a consequence of Lorentz invariance, is independent of skewness $\xi$, so one can choose zero skewness GPDs and omit this variable.
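As a quick numerical cross-check of the decomposition in Eq.~(\ref{Eq:2}), the following sketch (all values illustrative; at $t=0$ the Dirac flavor form factors simply count the valence quarks of the proton) verifies that the proton and neutron charges are reproduced:

```python
# Flavor decomposition of nucleon form factors, Eq. (2).
E_U, E_D, E_S = 2.0/3.0, -1.0/3.0, -1.0/3.0  # quark electric charges

def nucleon_ff(Fu, Fd, Fs, target):
    """Combine the flavor form factors F_i^q into F_i^p or F_i^n.
    The neutron expression follows from the proton one by u <-> d."""
    if target == 'p':
        return E_U * Fu + E_D * Fd + E_S * Fs
    return E_U * Fd + E_D * Fu + E_S * Fs

# At t = 0 the Dirac flavor FFs count valence quarks in the proton:
# F_1^u(0) = 2, F_1^d(0) = 1, F_1^s(0) = 0, giving the nucleon charges.
assert abs(nucleon_ff(2.0, 1.0, 0.0, 'p') - 1.0) < 1e-12  # proton charge
assert abs(nucleon_ff(2.0, 1.0, 0.0, 'n') - 0.0) < 1e-12  # neutron charge
```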
As we pointed out, some models for GPDs use ordinary PDFs as input. Considering this fact, PDFs can be defined as
\begin{flalign}
{\label{Eq:5}}
q(x)=\int \frac{dz^-}{2\pi}e^{-ixP^+z^-} \langle p|\, \bar{q}(z)\, \gamma^+ q(0)
\,|p \rangle \Big|_{\substack{z^+=0,\\\bm{z_\perp}=0}},
\end{flalign}
and then recovered from GPDs in the forward limit ($ t=0 $). For example, for positive $ x $, the GPD $ H $ reduces to the usual quark and antiquark densities as $ H^q(x,0,0)=q(x) $ and $ H^q(-x,0,0)=-\bar{q}(x) $. According to Diehl's ansatz~\cite{Diehl:2004cx}, which gives the $x$ and $t$ dependence of GPDs at zero skewness, the valence GPDs $ H_v^q $, for example, can be related to the ordinary valence PDFs as
\begin{equation}
H_v^q(x,t)= q_v(x)\exp [tf_q(x)],
\label{Eq:6}
\end{equation}
in which the profile function $ f_q(x) $ specifies the $ x $-dependent width. Actually, this ansatz assumes an exponential $ t $ dependence with an $ x $-dependent slope for $ H_v^q $. The profile function $ f_q(x) $ can have the simple form shown below, which we shall henceforth call the simple ansatz,
\begin{equation}
f_q(x)=\alpha^{\prime}(1-x)\log \frac{1}{x}.
\label{Eq:7}
\end{equation}
This ansatz, along with a more complex one also given in Ref.~\cite{Diehl:2004cx}, was used, for example, in Ref.~\cite{Diehl:2007uc} for the strange Dirac form factor $ F_1^{s} $. The value of $ \alpha^{\prime} $ can be extracted by analyzing soft hadronic scattering processes like kaon-nucleon scattering or photoproduction of mesons; various analyses have indicated that its value should be close to $1$ GeV$ ^{-2} $~\cite{Diehl:2004cx,Diehl:2007uc,Diehl:2013xca}.
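To make the ansatz of Eqs.~(\ref{Eq:6}) and~(\ref{Eq:7}) concrete, a minimal numerical sketch, with a toy valence distribution standing in for a fitted PDF, might look as follows:

```python
import numpy as np

def profile_simple(x, alpha_prime=0.95):
    """Simple profile function f_q(x) = alpha' (1 - x) log(1/x), Eq. (7);
    alpha' is in GeV^-2, so that t*f_q(x) is dimensionless."""
    return alpha_prime * (1.0 - x) * np.log(1.0 / x)

def H_valence(x, t, qv, alpha_prime=0.95):
    """Valence GPD ansatz H_v^q(x,t) = q_v(x) exp[t f_q(x)], Eq. (6);
    `qv` is any callable returning the forward valence PDF q_v(x)."""
    return qv(x) * np.exp(t * profile_simple(x, alpha_prime))

# Toy valence distribution (an illustrative stand-in, not a fitted PDF):
toy_uv = lambda x: 5.0 * x**0.5 * (1.0 - x)**3

x = 0.3
assert abs(H_valence(x, 0.0, toy_uv) - toy_uv(x)) < 1e-12  # forward limit
assert H_valence(x, -1.0, toy_uv) < toy_uv(x)  # spacelike t suppresses
```

At $t=0$ the exponential factor drops out and the forward PDF is recovered, as required by the sum rules above.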
In analogy with the Dirac and Pauli FFs, the nucleon axial form factor can be expressed in terms of polarized GPDs as~\cite{Diehl:2013xca}
\begin{flalign}
G_A(t)=&\int_0^1 dx \left[\widetilde{H}^u_v(x,t)-\widetilde{H}^d_v(x,t)\right]+\nonumber\\
2&\int_0^1 dx \left[\widetilde{H}^{\bar{u}}(x,t)-\widetilde{H}^{\bar{d}}(x,t)\right].
\label{Eq:8}
\end{flalign}
Note that, for valence polarized GPDs $\widetilde{H}^q_v$, we have
\begin{equation}
\label{Eq:9}
\widetilde{H}^q_v(x,t)\equiv \widetilde{H}^q(x,\xi=0,t)-\widetilde{H}^q(-x,\xi=0,t),
\end{equation}
with
$ \widetilde{H}^q(-x,\xi=0,t)= \widetilde{H}^{\bar{q}}(x,\xi=0,t) $. In fact, one can write the quark contribution to AFF generally as an integral of polarized GPDs over Bjorken $x$,
\begin{equation}
G_A^q(t)=\int_{0}^{1} dx~\widetilde{H}^q(x,t),
\label{Eq:10}
\end{equation}
where $ q $ here covers both the valence and sea contributions of $ up $ and $ down $ quarks. To be more precise, Eq.~(\ref{Eq:8}) clearly shows that, in contrast to the Pauli and Dirac FFs, the axial form factor also contains contributions from the sea quark sector. Although these contributions are not significant compared with those coming from the valence sector, they cannot be neglected. It is worth noting that Eq.~(\ref{Eq:10}) also gives the intrinsic spin contribution of quark $q$ to the spin of the nucleon.
According to Diehl's model, an ansatz similar to that shown in Eq.~(\ref{Eq:6}) can also be considered for the valence polarized GPDs $\widetilde{H}^q_v$, so that they can be related to the valence polarized PDFs, $\Delta q_v(x)\equiv q^+(x)-q^-(x)$, as follows
\begin{equation}
\widetilde{H}^q_v(x,t)=\Delta q_v(x) \exp [t \widetilde{f}_q(x)],
\label{Eq:11}
\end{equation}
where $\widetilde{f}_q(x)$ is the corresponding profile function, which can again have a simple form like Eq.~(\ref{Eq:7}) or a more complex form with more adjustable parameters. For simplicity, we use the ansatz of Eq.~(\ref{Eq:11}) both for $ \widetilde{H}_v^q(x,t) $ and $ \widetilde{H}^{\bar q}(x,t) $ in Eq.~(\ref{Eq:8}). In fact, this is an ad hoc ansatz for $ \widetilde{H}^{\bar q}(x,t) $, whose physical motivation is not as strong as that of the dominant $ \widetilde{H}_v^q(x,t) $.
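As an illustration of how Eqs.~(\ref{Eq:8}) and~(\ref{Eq:11}) can be evaluated numerically, the following sketch integrates the ansatz with toy polarized densities in place of a fitted PPDF set (all shapes and normalizations are purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

ALPHA_PRIME = 0.95  # GeV^-2, simple-ansatz slope

def f(x, a=ALPHA_PRIME):
    """Simple profile function of Eq. (7)."""
    return a * (1.0 - x) * np.log(1.0 / x)

# Toy polarized densities standing in for a fitted PPDF set
# (illustrative shapes only, not NNPDFpol1.1 or DSSV08):
d_uv   = lambda x: 0.9 * x**0.4 * (1 - x)**3      # Delta u_v(x)
d_dv   = lambda x: -0.3 * x**0.4 * (1 - x)**4     # Delta d_v(x)
d_ubar = lambda x: 0.05 * x**-0.2 * (1 - x)**7    # Delta ubar(x)
d_dbar = lambda x: -0.05 * x**-0.2 * (1 - x)**7   # Delta dbar(x)

def G_A(t):
    """Axial form factor from Eq. (8), with the exponential ansatz of
    Eq. (11) applied to both valence and sea polarized GPDs."""
    val = quad(lambda x: (d_uv(x) - d_dv(x)) * np.exp(t * f(x)), 0, 1)[0]
    sea = quad(lambda x: (d_ubar(x) - d_dbar(x)) * np.exp(t * f(x)), 0, 1)[0]
    return val + 2.0 * sea

assert G_A(0.0) > 0          # forward value: the toy "axial charge"
assert G_A(-1.0) < G_A(0.0)  # spacelike t suppresses the form factor
```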
\section{Experimental data}\label{sec:three}
One of the best ways to investigate the electromagnetic and weak structure of hadrons is to use electroweak probes and measure the various structure form factors. Actually, the extraction of the electromagnetic nucleon FFs has a long history and remains a popular field of experimental research. An overview and discussion of FF data can be found in Ref.~\cite{Diehl:2013xca}. Although the vector electroweak FFs, which give us valuable information on the spatial distribution of charge and magnetism, have been explored experimentally to a large extent, our information about the axial form factors is very limited. At present, there are only two classes of experiments that can be used to determine the AFF: first, (anti)neutrino scattering off protons or nuclei, and second, charged pion electroproduction.
In this section, we introduce the nucleon AFF data that are used in our study. For a clear and thorough review and discussion of AFF data, one can refer to Refs.~\cite{Bernard:2001rs,Schindler:2006jq}.
Reference~\cite{Bernard:2001rs} also includes clear explanations of the relevant methods to determine the AFF of the nucleon. For the case of (anti)neutrino scattering experiments, we use the data obtained by analyzing the measurements of (quasi)elastic (anti)neutrino scattering off Ca, O and Fe nuclei from the MiniBooNE experiments~\cite{Butkevich:2013vva}. These data cover a wide range of $ Q^2 $ in the interval $ 0.25 < Q^2 < 0.9 $ GeV$ ^2 $. As mentioned, the other information on the AFF is obtained from the analysis of charged pion electroproduction off protons, slightly above the pion production threshold. Although this type of analysis is more complicated, there are more experimental data of this class. In the present work, we use a wide range of charged pion electroproduction data~\cite{Bernard:2001rs,Amaldi:1970tg,Amaldi:1972vf,Bloom:1973fn,Brauel:1973cw,DelGuerra:1975uiy,DelGuerra:1976uj,Joos:1976ng,Esaulov:1978ed,Choi:1993vt,Choi:1993}.
In such analyses, the Nambu, Lurié and Shrauner low-energy (NLS) theorem~\cite{Nambu:1997wa,Nambu:1997wb} is firstly used for the electric dipole amplitude $E^{(-)}_ {0+}$ at production threshold. Note that the NLS theorem is valid for soft pions, namely pions that have vanishing four-momentum. Then, the so-called hard pion corrections (model-dependent corrections), labeled as SP, FPV, DR and BNR, are used to connect the low-energy theorem to the data; in other words, to the realistic case with a finite pion mass~\cite{Bernard:2001rs}.
In most cases, the AFF data are presented in terms of a simple parametrization~\cite{Bernard:2001rs}. The parametrization commonly used for the $Q^2$ dependence of the AFF is the so-called dipole ansatz:
\begin{equation}
\label{Eq:12}
G^{\textnormal{dipole}}_A(Q^2)=\frac{g_A}{\left(1+\frac{Q^2}{M_A^2}\right)^2},
\end{equation}
where the value of the axial mass $M_A$ varies between $1.03$ and $1.07 $ GeV depending on the method used for analyzing the experimental data~\cite{Bernard:2001rs,Schindler:2006jq}. The value of $G_A$ at $t=0$, the axial charge $ g_A $, is precisely determined from $\beta$-decay experiments. As can be seen, Eq.~(\ref{Eq:12}) has only a single free parameter, $M_A$, which should be fixed by fitting the experimental data. It should also be noted that, in the Breit frame and for small momenta, such a $Q^2$ dependence of the AFF leads to an exponential decrease of the axial charge distribution~\cite{Alvarez-Ruso:2018rdx}.
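As a minimal illustration of how Eq.~(\ref{Eq:12}) is used in practice, the following sketch fits the axial mass $M_A$ to synthetic pseudo-data generated from the dipole itself (all numbers are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

G_A0 = 1.2723  # axial charge g_A

def dipole(Q2, M_A):
    """Dipole parametrization G_A(Q^2) = g_A / (1 + Q^2/M_A^2)^2, Eq. (12)."""
    return G_A0 / (1.0 + Q2 / M_A**2) ** 2

# Synthetic pseudo-data generated from the dipole itself with M_A = 1.05 GeV
# and 2% Gaussian scatter (illustrative numbers only):
rng = np.random.default_rng(0)
Q2 = np.linspace(0.1, 2.0, 20)
data = dipole(Q2, 1.05) * (1.0 + 0.02 * rng.standard_normal(Q2.size))

popt, pcov = curve_fit(dipole, Q2, data, p0=[1.0])
assert abs(popt[0] - 1.05) < 0.05  # the fit recovers the input axial mass
```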
However, from a theoretical point of view, it has been indicated that this ansatz is not a good choice, see e.g.~\cite{Bhattacharya:2011ah,Bhattacharya:2015mpa}. For example, a recent analysis of $G_A$~\cite{Meyer:2016oeg}, using conformal mapping or $z$-expansion, shows that the dipole ansatz systematically underestimates the uncertainty of the AFF. Therefore, in this work, we do not implement the dipole ansatz and use the experimental data points directly.
Another important point which should be noted is that the experimental data of Refs.~\cite{Butkevich:2013vva,Amaldi:1970tg,Amaldi:1972vf,Bloom:1973fn,Brauel:1973cw,DelGuerra:1975uiy,DelGuerra:1976uj,Joos:1976ng,Esaulov:1978ed,Choi:1993vt,Choi:1993} for AFF have been presented as ratio to $G_A$ at $t=0$, i.e. $ G_A(Q^2)/G_A(0) $. Hence one can use two approaches for analyzing these data: 1) using the original data as ratios, and 2) using data as $ G_A(Q^2) $. In the next section, we first use both of them and compare their results, and then continue our investigations with just $ G_A(Q^2) $ data.
Note that for extracting $ G_A(Q^2) $ data from original $ G_A(Q^2)/G_A(0) $ data we need the value of $ G_A(0) $ (axial charge $ g_A $). Although more accurate results for $ g_A $ can be extracted from recent measurements of the nucleon lifetime~\cite{Gonzalez-Alonso:2018omy}, we use the latest value from PDG~\cite{Tanabashi:2018oca}, i.e. $g_A=1.2723\pm 0.0023$.
As a last point, note that the total number of data points from Refs.~\cite{Butkevich:2013vva,Amaldi:1970tg,Amaldi:1972vf,Bloom:1973fn,Brauel:1973cw,DelGuerra:1975uiy,DelGuerra:1976uj,Joos:1976ng,Esaulov:1978ed,Choi:1993vt,Choi:1993} that we can use in our study is 84. However, comparing data points from the various experiments, one finds that some points have the same $ Q^2 $. Therefore, another way to analyze these data is to remove those with the same value of $ Q^2 $ and retain the most accurate ones. If we do this, 40 data points remain, which we refer to as ``reduced data" in the following.
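A minimal sketch of the conversion of ratio data to $G_A(Q^2)$, and of the reduction procedure described above, might look as follows (the selection rule shown is a simplified stand-in for the actual data treatment):

```python
import numpy as np

G_A0, dG_A0 = 1.2723, 0.0023  # PDG axial charge g_A and its uncertainty

def to_GA(ratio, dratio):
    """Convert a measured ratio G_A(Q^2)/G_A(0) to G_A(Q^2),
    propagating the g_A uncertainty in quadrature."""
    val = ratio * G_A0
    err = val * np.hypot(dratio / ratio, dG_A0 / G_A0)
    return val, err

def reduce_data(points):
    """Keep, for each value of Q^2, only the most precise measurement.
    `points` is a list of (Q2, ratio, dratio) tuples; the selection rule
    is a simplified stand-in for the actual data treatment."""
    best = {}
    for q2, r, dr in points:
        if q2 not in best or dr < best[q2][1]:
            best[q2] = (r, dr)
    return sorted((q2,) + rd for q2, rd in best.items())

pts = [(0.25, 0.80, 0.05), (0.25, 0.81, 0.02), (0.50, 0.60, 0.04)]
reduced = reduce_data(pts)
assert len(reduced) == 2           # the duplicate Q^2 = 0.25 collapses
assert reduced[0][2] == 0.02       # the more precise point survives
```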
\section{Study of nucleon axial form factor}\label{sec:four}
In the previous sections, we presented the theoretical framework and experimental information related to the nucleon axial form factor $ G_A $. In this section we study $ G_A $ in detail with emphasis on its dependence on PPDFs according to ansatz Eq.~(\ref{Eq:11}), and also the value of scale $ \mu$ at which PPDFs are chosen. Moreover, we investigate the model uncertainties that are imposed upon the nucleon AFF due to the PPDFs uncertainties and also variation of $ \alpha^{\prime} $ in profile functions $ f(x) $.
\subsection{Dependence of $G_A$ on the PPDFs}
As can be seen from Eq.~(\ref{Eq:8}), the nucleon axial form factor can be related to PPDFs through its dependence on polarized GPDs and the relationship between polarized GPDs and PPDFs. It is natural to expect that using different sets of PPDFs to perform the calculations should not change the behaviour and magnitude of the resulting $ G_A $; otherwise, the model should be considered as not flexible enough or inconsistent. Consequently, in this section, we choose the simple ansatz given by Eq.~(\ref{Eq:7}), calculate $ G_A $ using different sets of PPDFs and compare the results to see whether such a dependence is present. We will show below that it is not, and hence the model can be used to describe the data.
Since the evaluation of the uncertainty in $ G_A $ due to the PPDF uncertainties is also of interest, we choose two NLO PPDF sets \texttt{DSSV08}~\cite{deFlorian:2009vb} and \texttt{NNPDFpol1.1}~\cite{Nocera:2014gqa} which provide error PPDF sets in addition to their central fit results \footnote{The \texttt{NNPDFpol1.1} PPDFs are available through the \texttt{LHAPDF} package~\cite{Buckley:2014ana}.}. An important advantage of these sets is that one can calculate the uncertainties in any quantity related to PPDFs more easily~\cite{Pumplin:2001ct,Nadolsky:2008zw}.
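For an eigenvector (Hessian) set such as \texttt{DSSV08}, the uncertainty on any observable follows from the standard master formula; for a Monte Carlo set such as \texttt{NNPDFpol1.1} one would instead take the standard deviation over replicas. A minimal sketch of the Hessian case, with a toy observable and toy error sets, reads:

```python
import numpy as np

def hessian_uncertainty(observable, error_sets):
    """Symmetric Hessian ("master formula") uncertainty on an observable
    computed from paired (+/-) eigenvector PDF error sets:
        dO = (1/2) sqrt( sum_k (O[S_k^+] - O[S_k^-])^2 ).
    `observable` maps one PDF set to a number."""
    diffs = [observable(p) - observable(m) for p, m in error_sets]
    return 0.5 * np.sqrt(np.sum(np.square(diffs)))

# Toy observable: a weighted sum of two "fit parameters" (illustrative).
obs = lambda s: 2.0 * s[0] + s[1]
error_sets = [((1.1, 0.5), (0.9, 0.5)),   # eigenvector 1: s[0] +/- 0.1
              ((1.0, 0.6), (1.0, 0.4))]   # eigenvector 2: s[1] +/- 0.1
delta = hessian_uncertainty(obs, error_sets)
assert abs(delta - 0.5 * np.sqrt(0.4**2 + 0.2**2)) < 1e-12
```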
Figure~(\ref{fig:fig1}) shows the results obtained for nucleon axial form factor $ G_A $ as a function of $ Q^2 $ in which the hachured and filled bands correspond to the \texttt{NNPDFpol1.1} and \texttt{DSSV08} PPDFs, respectively. In order to investigate in more detail the differences between the predictions in various regions of $ Q^2 $, their ratios to the \texttt{NNPDFpol1.1} prediction have also been plotted in the bottom panel of Fig.~(\ref{fig:fig1}). Moreover, the experimental data from various experiments, which as explained in the previous section are referred to as ``reduced", have been shown for comparison. Both \texttt{DSSV08} and \texttt{NNPDFpol1.1} PPDFs have been taken at $ \mu=2 $ GeV as suggested in Refs.~\cite{Diehl:2004cx,Diehl:2007uc,Diehl:2013xca}.
Note also that the value of $ \alpha^{\prime} $ in Eqs.~(\ref{Eq:7}) and~(\ref{Eq:11}) has been set to $ \alpha^{\prime}=0.95 $ GeV$ ^{-2} $, in conformity with the value used in the study of the strange Dirac form factor $ F_1^{s} $ performed in Ref.~\cite{Diehl:2007uc}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.53\textwidth]{fig1}
\caption{The theoretical results obtained for the nucleon AFF, $ G_A $, as a function of $ Q^2 $ using the simple ansatz Eq.~(\ref{Eq:7}) with the NLO PPDF sets \texttt{DSSV08}~\cite{deFlorian:2009vb} (filled band) and \texttt{NNPDFpol1.1}~\cite{Nocera:2014gqa} (hachured band) taken at $ \mu=2 $ GeV; the value of $\alpha'$ is set to $0.95~\mathrm{GeV}^{-2}$. The data points labeled as ``reduced" are related to various experiments selected through the procedure explained in Sec.~\ref{sec:three}.}
\label{fig:fig1}
\end{figure}
According to the results obtained, one can conclude that if the ansatz Eq.~(\ref{Eq:11}) is used for calculating $ G_A $, the final results are not remarkably sensitive to the choice of the PPDF set. To be more precise, according to the bottom panel of Fig.~(\ref{fig:fig1}), the difference between the results obtained for $ G_A $ using the \texttt{DSSV08} and \texttt{NNPDFpol1.1} PPDFs is less than about 2\% over the full range of $ Q^2 $, though the sizes of their uncertainties are somewhat different. However, Fig.~(\ref{fig:fig1}) clearly shows that the model fails to represent the data. As we shall show below, we can obtain an acceptable fit with a readjustment of the parameters $\alpha'$ and $\mu$.
\subsection{Dependence on the scale \boldmath $ \mu $ of PPDFs}
Although the results presented in the previous subsection for the nucleon AFF $ G_A $ using the simple ansatz Eq.~(\ref{Eq:7}) roughly follow the experimental data, the question to be answered is to what extent the results change if we take the PPDFs at scales other than $ \mu=2 $ GeV. In Ref.~\cite{Diehl:2004cx}, the authors explained that the choice of scale should be a compromise between being large enough for the PPDFs, $\Delta q_v(x)$, to be rather directly fixed by data, and small enough to make contact with soft physics like conventional Regge phenomenology. However, since the recent analysis of PPDFs performed by the NNPDF Collaboration, namely \texttt{NNPDFpol1.1}~\cite{Nocera:2014gqa}, included a wide range of the available experimental data covering values of $ \mu $ down to $ \mu=1 $ GeV, it is also of interest to study the impact of taking the PPDFs at a scale different from $ \mu=2 $ GeV on the theoretical predictions of $ G_A $ using the simple ansatz Eq.~(\ref{Eq:7}).
To evaluate the dependence of $ G_A $ on the value of the scale $ \mu $ at which the PPDFs are chosen, we repeat here the calculations performed in the previous subsection using the \texttt{NNPDFpol1.1} PPDFs, but at different values of the scale $ \mu $. The results obtained are shown in Fig.~(\ref{fig:fig2}), where the dashed, solid, dotted-dashed and dotted curves correspond to \texttt{NNPDFpol1.1} PPDFs taken at $ \mu=1, 2, 3 $ and $ 4 $ GeV, respectively. In order to make a better comparison, in the bottom panel we have also plotted the ratios of the predictions to the corresponding result obtained using \texttt{NNPDFpol1.1} PPDFs taken at $ \mu=2 $ GeV as a reference. As can be clearly seen, by decreasing the value of $ \mu $ at which the PPDFs are chosen, $ G_A $ increases, especially for larger values of $ Q^2 $, so that the difference between the results for $ \mu=1 $ and $ \mu=2 $ GeV even reaches 30\% at $ Q^2=2 $ GeV$ ^2 $. On the other hand, as the value of $ \mu $ increases, $ G_A $ decreases, but at a smaller rate than before, such that the difference between the results for $ \mu=2 $ and $ \mu=4 $ GeV reaches only 20\% at $ Q^2=2 $ GeV$ ^2 $. Comparing the results of Figs.~(\ref{fig:fig1}) and~(\ref{fig:fig2}), one can conclude that taking the PPDFs at a lower scale $ \mu $
can lead to a better description of the experimental data and lessen the relatively large discrepancy observed in Fig.~(\ref{fig:fig1}) between the predictions of the model and experimental data.
\begin{figure}[t!]
\centering
\includegraphics[width=0.53\textwidth]{fig2}
\caption{The dependence of $ G_A $ on the value of the scale $ \mu $ at which the PPDFs are chosen. The model calculations have been performed using the simple ansatz Eq.~(\ref{Eq:11}) ($\alpha'=0.95~\mathrm{GeV}^{-2}$) with the NLO PPDFs of \texttt{NNPDFpol1.1}~\cite{Nocera:2014gqa} taken at $ \mu=1 $ (dashed), 2 (solid), 3 (dotted-dashed), and 4 (dotted) GeV. The ratios of the predictions to the corresponding result at $ \mu=2 $ GeV are shown in the bottom panel.}
\label{fig:fig2}
\end{figure}
\subsection{Model uncertainties}
After studying the dependence of the nucleon axial form factor $ G_A $ on the PPDFs and also on the value of the scale $ \mu $ at which they are chosen, we now investigate the uncertainties imposed on the predictions of the model for $ G_A $ by various sources and compare them with each other. According to the sum rule given in Eq.~(\ref{Eq:8}), the model uncertainties in $ G_A $ can arise from the PPDF uncertainties, the uncertainty in the scale $ \mu $ at which the PPDFs are chosen, and the uncertainty in $ \alpha^{\prime} $ in the profile function $ f(x) $. We have studied the first two in Figs.~(\ref{fig:fig1}) and~(\ref{fig:fig2}), respectively, and here investigate the uncertainties which arise from the variation of $ \alpha^{\prime} $. For this purpose, we repeat the calculations performed in Fig.~(\ref{fig:fig1}) using the \texttt{NNPDFpol1.1} PPDFs~\cite{Nocera:2014gqa} taken at $ \mu=2 $ GeV, but this time vary $ \alpha^{\prime} $ in the range $ 0.85~\mathrm{GeV}^{-2} <\alpha^{\prime}< 1.15~\mathrm{GeV}^{-2} $.
Figure~(\ref{fig:fig3}) shows a comparison between the model uncertainties in $ G_A $ due to the PPDF uncertainties (filled band) and the $ \alpha^{\prime} $ variations (hachured band) in the aforementioned range. The bottom panel shows the relative uncertainties obtained by dividing the upper and lower bands of each prediction by its central value. As can be seen, the uncertainty arising from the $ \alpha^{\prime} $ variations is remarkably dominant compared to the PPDF uncertainty, except for very small values of $ Q^2 $, where the PPDF uncertainty becomes dominant. Note also that the uncertainty due to the $ \alpha^{\prime} $ variations is asymmetric, while the PPDF uncertainty is symmetric.
\begin{figure}[t!]
\centering
\includegraphics[width=0.53\textwidth]{fig3}
\caption{A comparison between the model uncertainties in $ G_A $ due to the PPDF uncertainties (filled band) and $ \alpha^{\prime} $ variations (hachured band) in the range $ 0.85~\mathrm{GeV}^{-2} <\alpha^{\prime}< 1.15~\mathrm{GeV}^{-2} $. The theoretical calculations have been performed using the simple ansatz Eq.~(\ref{Eq:7}) with the NLO PPDFs of \texttt{NNPDFpol1.1}~\cite{Nocera:2014gqa} taken at $ \mu=2 $ GeV. The bottom panel shows the relative uncertainties.}
\label{fig:fig3}
\end{figure}
\section{\boldmath $ \chi^2 $ analysis of the experimental data}\label{sec:five}
In the previous section, we found that the nucleon axial form factor $ G_A $ is not sensitive to the set of PPDFs chosen if an ansatz like Eq.~(\ref{Eq:11}) is used for connecting $ G_A $ to the PPDFs via GPDs. However, according to the results obtained, any change in the scale $ \mu $ at which the PPDFs are taken, and also in the value of $ \alpha^{\prime} $ in the profile function $ f(x) $, can lead to different results for $ G_A $. For this reason, in this section, we compute the best values for the parameters of the model, i.e. $\mu$ and $\alpha'$, by performing $ \chi^2 $ analyses of the available experimental data. The optimization is done by
the CERN program \texttt{MINUIT}~\cite{James:1975dr}.
\subsection{Simple ansatz}
In order to determine the best values of $ \mu $ and $ \alpha^{\prime} $ consistent with the experimental data on the nucleon axial form factor, various scenarios can be considered. As a first step, we perform a $ \chi^2 $ analysis of all data for the ratio $ G_A(Q^2)/G_A(0) $ from the various experiments introduced in Sec.~\ref{sec:three}. For the theoretical calculations, we consider the simple ansatz Eq.~(\ref{Eq:7}) with \texttt{NNPDFpol1.1}~\cite{Nocera:2014gqa} as the input PPDFs. Note that the theoretical calculation of the quantity $ G_A(Q^2)/G_A(0) $ is not sensitive to the value of the scale $ \mu $ at which the PPDFs are chosen, since it is performed according to Eq.~(\ref{Eq:8}) and hence both the numerator and denominator involve the PPDFs in the same way. We have examined various values of $ \mu $ and found that the results for the value of $ \alpha^{\prime} $ determined from the fit do not change up to four decimal places.
With 84 data points and 1 free parameter, the value of $ \chi^2 $ divided by the number of degrees of freedom is equal to $ \chi^2 /d.o.f=4.237 $. The value of $ \alpha^{\prime} $ extracted from the fit is
\begin{equation}
\label{Eq:13}
\alpha^{\prime}= 2.754 \pm 0.0058 ~\textrm{GeV}^2,
\end{equation}
which is larger than the result obtained in Ref.~\cite{Diehl:2007uc} (about 1 GeV$ ^2 $). Using the reduced data set for $ G_A(Q^2)/G_A(0) $, which includes only the most precise point among data points with the same $ Q^2 $ (40 data points), the value of $ \alpha^{\prime} $ changes to $ \alpha^{\prime}= 2.476 \pm 0.0064 ~\textrm{GeV}^2 $. However, the value of $ \chi^2 /d.o.f $ increases to 5.129, since more than 40 data points with larger uncertainties have been removed from the analysis.
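The figure of merit used throughout this section is the standard $\chi^2$ divided by the number of degrees of freedom. The following sketch illustrates the one-parameter fit described above with made-up data, uncertainties, and a toy model standing in for the $G_A(Q^2)/G_A(0)$ points and the ansatz prediction of Eq.~(\ref{Eq:8}); it is not the analysis itself.

```python
import numpy as np

# Synthetic stand-in: 84 "measurements" y_i with uncertainties s_i and a
# one-parameter model m_i(alpha); the real model is Eq. (8) of the paper.
rng = np.random.default_rng(1)
Q2 = np.linspace(0.05, 3.0, 84)
model = lambda alpha: np.exp(-alpha * Q2)       # toy dipole-like falloff
y = model(0.9) * rng.normal(1.0, 0.03, Q2.size)  # 3% relative errors
s = 0.03 * y

def chi2(alpha):
    r = (y - model(alpha)) / s
    return np.sum(r * r)

# Brute-force scan over the single free parameter.
alphas = np.linspace(0.5, 1.5, 1001)
best = alphas[np.argmin([chi2(a) for a in alphas])]
ndof = Q2.size - 1                               # 84 points, 1 parameter
print(best, chi2(best) / ndof)
```

With consistent data and uncertainties, $\chi^2/d.o.f.$ comes out near unity; values such as 4.237 quoted above signal tension between the model and parts of the data set.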
Since, as mentioned before, the quantity $ G_A(Q^2)/G_A(0) $ cannot put any constraint on the value of scale $ \mu $ at which PPDFs are chosen, the above values obtained for $ \alpha^{\prime} $ are not very reliable. Actually, according to the results obtained in the previous section, we know that the change in $ \mu $ can change the result of $ G_A(Q^2) $ and subsequently the best value of $ \alpha^{\prime} $. Consequently, it is more reliable to extract the value of $\mu$ by performing a $ \chi^2 $ analysis of $ G_A(Q^2) $ data. For this purpose, we consider the reduced data for $ G_A(Q^2) $ that have been obtained from the original measurements of $ G_A(Q^2)/G_A(0) $, using $ G_A(0)= 1.2723\pm 0.0023 $~\cite{Tanabashi:2018oca}.
Since the quantity $ G_A(Q^2) $ is sensitive to both the value of $ \mu $ and $ \alpha^{\prime} $, we can determine their optimal values simultaneously. For this purpose, we can follow two procedures: 1) performing several $ \chi^2 $ analyses, choosing different fixed values for $ \mu $, minimizing $\chi^2$ with respect to $\alpha'$ in each, and then plotting $ \chi^2 $ as a function of $ \mu $ to find the point at which $ \chi^2 $ is an absolute minimum together with its corresponding $ \alpha^{\prime} $ — we call this procedure ``minimum tracing''; 2) taking both $ \mu $ and $\alpha'$ as free parameters and minimizing $\chi^2$ with respect to both simultaneously. By following these two procedures, we can also find out whether there is a correlation between $ \mu $ and $ \alpha^{\prime} $.
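The two procedures can be sketched as follows. The $\chi^2$ surface here is a toy quadratic bowl whose minimum is placed near the paper's best-fit point $(\mu, \alpha') \approx (0.962, 0.59)$; the actual surface comes from Eq.~(\ref{Eq:8}) evaluated against the data and is minimized with MINUIT, for which \texttt{scipy.optimize} is used below as a stand-in.

```python
import numpy as np
from scipy.optimize import minimize

# Toy chi^2(mu, alpha') with a minimum near the paper's best-fit values.
def chi2(mu, alpha):
    return 150.0 + 50.0 * (mu - 0.962) ** 2 + 300.0 * (alpha - 0.59) ** 2

# Procedure 1: "minimum tracing" -- fix mu on a grid, minimize over alpha',
# then locate the grid point with the lowest traced chi^2.
mus = np.linspace(0.5, 2.0, 151)
traced = []
for mu in mus:
    res = minimize(lambda a: chi2(mu, a[0]), x0=[1.0])
    traced.append((mu, res.x[0], res.fun))
mu_best, alpha_best, chi2_best = min(traced, key=lambda t: t[2])

# Procedure 2: simultaneous minimization over (mu, alpha').
joint = minimize(lambda p: chi2(p[0], p[1]), x0=[1.5, 1.0])

print(mu_best, alpha_best)   # grid point nearest mu = 0.962
print(joint.x)               # close to [0.962, 0.59]
```

For a well-behaved surface both procedures land on the same minimum, as found for the real analysis in the text; comparing the traced curve with the joint fit also exposes any $\mu$--$\alpha'$ correlation.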
Figure~(\ref{fig:fig4}) shows the results obtained for the minimum tracing of the $ \chi^2/d.o.f.$ values of the reduced $ G_A(Q^2) $ data as a function of the scale $ \mu $ at which the PPDFs are chosen in the calculation of the nucleon AFF, Eq.~(\ref{Eq:8}). As can be seen, for very small values of $ \mu $, the $ \chi^2 $ rises rapidly, while for values greater than 1 GeV it increases slowly. Note that the minimum occurs at about $ \mu= 0.96 $ GeV, which is smaller than the value considered by the authors of Ref.~\cite{Diehl:2007uc} ($ \mu= 2 $ GeV). Moreover, the value of $ \alpha^{\prime} $ corresponding to this minimum is now as follows
\begin{equation}
\label{Eq:14}
\alpha^{\prime}= 0.59 \pm 0.0014 ~\textrm{GeV}^2,
\end{equation}
which is also smaller than the result obtained in Ref.~\cite{Diehl:2007uc}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.53\textwidth]{fig4}
\caption{The minimum tracing of $ \chi^2/d.o.f. $, showing the minimum of $ \chi^2/d.o.f. $ as a function of the scale $ \mu $ at which the PPDFs are chosen in the calculation of the nucleon AFF, Eq.~(\ref{Eq:8}), for the analysis of ``reduced" $ G_A(Q^2) $ data utilizing the simple ansatz for the profile function.}
\label{fig:fig4}
\end{figure}
As mentioned earlier, another method is to find the best values of $ \mu $ and $ \alpha^{\prime} $ from the $ \chi^2 $ analysis of reduced $ G_A(Q^2) $ data simultaneously. By performing such an analysis using \texttt{MINUIT}~\cite{James:1975dr}, the following results are obtained
\begin{align}
\label{Eq:15}
\alpha^{\prime}= 0.59 \pm 0.0022 ~\textrm{GeV}^2, \nonumber\\
\mu= 0.962 \pm 0.0098 ~\textrm{GeV},
\end{align}
which, as expected, are the same as those of the minimum tracing method. If we use all of the $ G_A(Q^2) $ data rather than the reduced set, these values change to
\begin{align}
\label{Eq:16}
\alpha^{\prime}= 0.65 \pm 0.0014 ~\textrm{GeV}^2, \nonumber\\
\mu= 0.987 \pm 0.11 ~\textrm{GeV}.
\end{align}
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{fig5.pdf}
\caption{(Color online) Contour plot of $\chi^2/d.o.f.$ as a function of free parameters $ \mu $ and $ \alpha^{\prime} $ for the simple ansatz. The value of $\chi^2/d.o.f.$ is shown by colors (see bar legend on the right). The best value of $\chi^2/d.o.f.$ is shown as a red dot, the dashed line shows the path taken in Fig.~(\ref{fig:fig4}) for the minimum tracing procedure (see the text for more details).}
\label{fig:fig5}
\end{figure}
The sensitivity of $ \chi^2 $ to the values of $ \mu $ and $ \alpha^{\prime} $ can also be studied in detail with a contour plot covering appropriate ranges of the two parameters. Figure~(\ref{fig:fig5}) shows the contour plot of $\chi^2/d.o.f.$ as a function of the free parameters $ \mu $ and $ \alpha^{\prime} $. As can be seen from this figure, a steep rise in $\chi^2/d.o.f.$ occurs if we increase or decrease $\alpha'$ by more than 0.1 GeV$^2$ away from the best value. This means that the value of $\chi^2/d.o.f.$ is very sensitive to $\alpha'$, and that the reduced $ G_A(Q^2) $ data can put a good constraint on $\alpha'$. This is consistent with the small uncertainties for $\alpha'$ given in Eqs.~(\ref{Eq:15}) and~(\ref{Eq:16}). The situation is somewhat different for the parameter $\mu$. Actually, a steep rise in $\chi^2/d.o.f.$ occurs for $\mu > 2$ or $\mu < 0.9$, but for $0.9\lesssim\mu\lesssim 2$ one can see that $\chi^2/d.o.f.$ does not change much. The minimum value of $\chi^2/d.o.f.$ is shown as a red dot in the figure and occurs at the values of $ \mu $ and $ \alpha^{\prime} $ given in Eq.~(\ref{Eq:15}). The dashed line shows the path in the $\alpha'$--$\mu$ plane taken in Fig.~(\ref{fig:fig4}) for finding the minimum value of $\chi^2/d.o.f.$ using our minimum tracing procedure. In other words, the dashed line in Fig.~(\ref{fig:fig5}) shows the correspondence between the two procedures for finding the minimum of $\chi^2/d.o.f.$ explained earlier in this section; each point on this curve gives the pair $(\alpha', \mu)$ for which Fig.~(\ref{fig:fig4}) has a corresponding $(\mu, \chi^2/d.o.f.)$ pair.
Figure~(\ref{fig:fig6}) shows a comparison between the theoretical predictions for the nucleon axial form factor $ G_A $ using the values obtained for $ \alpha^{\prime} $ and $ \mu $ (Eq.~(\ref{Eq:15})) from the analysis of the reduced $ G_A(Q^2) $ data (filled band) and the results obtained using the default values $ \alpha^{\prime}=0.95 $ GeV$ ^2 $ and $ \mu= 2 $ GeV (hachured band). The data points are those of the reduced set. As can be seen, the theoretical prediction is now more consistent with the experimental data.
\begin{figure}[t!]
\centering
\includegraphics[width=0.53\textwidth]{fig6}
\caption{A comparison between the reduced experimental data and theoretical predictions for $ G_A $ using the values obtained for $ \alpha^{\prime} $ and $ \mu $ (Eq.~(\ref{Eq:15})) from the fit (filled band) and the results obtained using the default values of Ref.~\cite{Diehl:2007uc}, $ \alpha^{\prime}=0.95 $ GeV$ ^2 $ and $ \mu= 2 $ GeV (hachured band).}
\label{fig:fig6}
\end{figure}
\subsection{Complex ansatz}
Although Fig.~(\ref{fig:fig6}) clearly shows that using the simple ansatz Eq.~(\ref{Eq:7}) can lead to an acceptable fit of the nucleon axial form factor $ G_A $ data, it is also of interest to investigate the effect of considering a more flexible profile function. In Ref.~\cite{Diehl:2004cx}, the authors showed that the low- and high-$ x $ behavior of the profile function $ f(x) $, as well as the intermediate-$ x $ region, can be well characterized by the forms
\begin{equation}
\label{Eq:17}
f_q(x)=\alpha^{\prime}(1-x)^2\log\frac{1}{x}+B_q(1-x)^2 + A_qx(1-x),
\end{equation}
and
\begin{equation}
\label{Eq:18}
f_q(x)=\alpha^{\prime}(1-x)^3\log\frac{1}{x}+B_q(1-x)^3 + A_qx(1-x)^2.
\end{equation}
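The two profile-function forms of Eqs.~(\ref{Eq:17}) and~(\ref{Eq:18}) can be written down directly; the parameter values used below are illustrative only, not fitted values.

```python
import math

# Profile functions of Eqs. (17) and (18): alpha' (alpha_p) in GeV^2,
# A_q and B_q dimensionless; A = B = 0 reproduces the simple ansatz term.
def f_q_quadratic(x, alpha_p, A=0.0, B=0.0):   # Eq. (17)
    return (alpha_p * (1 - x) ** 2 * math.log(1 / x)
            + B * (1 - x) ** 2 + A * x * (1 - x))

def f_q_cubic(x, alpha_p, A=0.0, B=0.0):       # Eq. (18)
    return (alpha_p * (1 - x) ** 3 * math.log(1 / x)
            + B * (1 - x) ** 3 + A * x * (1 - x) ** 2)

# Both forms vanish at x = 1 and are dominated by the Regge-like
# log(1/x) term at small x; the cubic form is more suppressed at large x.
print(f_q_quadratic(0.5, 1.0))   # 0.25 * log 2
print(f_q_cubic(0.5, 1.0))       # 0.125 * log 2
```
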
In this section, we examine these profile functions to see whether any improvement in the theoretical predictions and the fit can be achieved. Note that for the calculation of $ G_A $ according to Eq.~(\ref{Eq:8}), one in principle needs to consider the profile function Eq.~(\ref{Eq:17}) (or Eq.~(\ref{Eq:18})) for each flavor $ u_v $, $ d_v $, $ \bar u $ and $ \bar d$. In this way, there are 8 more free parameters that should be determined from the analysis of the $ G_A $ data. The best procedure for selecting the most appropriate parameters and then finding the optimal parametrization form is performing a parametrization scan as described in Ref.~\cite{Aaron:2009aa} for the case of PDF determination through a QCD analysis of HERA DIS data.
First, we consider the profile function Eq.~(\ref{Eq:17}) and again use the reduced set of $ G_A $ data. The scale $ \mu $ at which the PPDFs are chosen is set to the value obtained in Eq.~(\ref{Eq:15}) using the simple ansatz Eq.~(\ref{Eq:7}). By performing a parametrization scan, it is found that none of the free parameters can lead to a decrease in the value of $ \chi^2 $ of more than one unit as compared to the corresponding value for the analysis using the simple ansatz (see the previous subsection). Consequently, adding free parameters in the form of Eq.~(\ref{Eq:17}), even for the valence quark profile functions, does not have any effect on the fit quality. However, if we use the profile function Eq.~(\ref{Eq:18}) instead, some improvements can be achieved in the fit quality. We find that the only parameters that can lead to a significant decrease in the value of $ \chi^2 $ are the valence quark parameters. Moreover, by considering $ A_{u_v} $, $ B_{u_v} $, $ A_{d_v} $ and $ B_{d_v} $ as free parameters and setting the other parameters for $ \bar u $ and $ \bar d $ equal to zero, the value of $ \chi^2 $ decreases from $ 184.1 $ (which is the $ \chi^2 $ for the analysis of the reduced $ G_A $ data using the simple ansatz Eq.~(\ref{Eq:7})) to $ 173.6 $. Next we investigate the possibility of taking the $ u_v $ and $ d_v $ parameters to be equal, to reduce the number of free parameters as much as possible without damaging the quality of the fit. For this purpose, we consider $ A_{u_v}=A_{d_v}=A_v $ and $ B_{u_v}=B_{d_v}=B_v $, so that only two extra parameters contribute to the fit. The value of $ \mu $ is again set to the value obtained in Eq.~(\ref{Eq:15}). As a result, we find that the value of $ \chi^2 $ changes by less than two units, namely from $ 173.6 $ to $ 175.0 $. This means that it is acceptable to take the $ u_v $ and $ d_v $ parameters to be equal and reduce the number of free parameters. The optimal values obtained for the parameters of the fit are then as follows
\begin{equation}
\label{Eq:19}
\begin{split}
& \alpha^{\prime}= 1.029 \pm 0.22 ~\textrm{GeV}^2, \\
& A_v= 12.74 \pm 2.20,~~~~~~ B_v= -3.5 \pm 0.64.
\end{split}
\end{equation}
As can be seen, the value of $ \alpha^{\prime} $ has now increased to about $1.0$ GeV$^2$, which is consistent with the result obtained in Ref.~\cite{Diehl:2007uc}.
For the analyses performed so far in this subsection, we have set the scale $ \mu $ at which the PPDFs are chosen equal to $ \mu= 0.962 $ GeV according to Eq.~(\ref{Eq:15}). However, we should find the best value of $ \mu $ by performing a minimum tracing or considering it as a free parameter of the fit, just as in the previous section. Figure~(\ref{fig:fig7}) shows the results obtained by minimum tracing of the $ \chi^2/d.o.f.$ values of the reduced $ G_A(Q^2) $ data as a function of $ \mu $, using the profile function Eq.~(\ref{Eq:18}) with the same $ A_v $ and $ B_v $ parameters for the valence quarks and setting the corresponding sea quark parameters equal to zero. According to this figure, the minimum occurs at about $ \mu= 1.0 $ GeV, which is somewhat larger than before, but still smaller than the $ \mu= 2.0 $ GeV considered in Ref.~\cite{Diehl:2007uc}. In this situation, the optimal values obtained for the parameters of the fit are as follows
\begin{equation}
\label{Eq:20}
\begin{split}
& \alpha^{\prime}= 1.054 \pm 0.22 ~\textrm{GeV}^2, \\
& A_v= 13.28 \pm 2.00,~~~~~~ B_v= -3.64 \pm 0.64.
\end{split}
\end{equation}
\begin{figure}[t!]
\centering
\includegraphics[width=0.53\textwidth]{fig7}
\caption{Same as Fig.~(\ref{fig:fig4}), but using Eq.~(\ref{Eq:18}) for the profile function.}
\label{fig:fig7}
\end{figure}
Next we examine the effect of considering $ \mu $ as a free parameter whose value is to be determined by simultaneous optimization along with the other three parameters. We find that the results do not change significantly, just like before. The results are,
\begin{equation}
\label{Eq:21}
\begin{split}
& \alpha^{\prime}= 1.054 \pm 0.22 ~\textrm{GeV}^2,~~\mu=0.997 \pm 0.363 ~\textrm{GeV}\\
& A_v= 13.28 \pm 2.00,~~~~~~~~~ B_v= -3.64 \pm 0.64.
\end{split}
\end{equation}
All things considered, we can conclude that using the reduced set of $ G_A $ data to determine the best value of the scale $ \mu $, at which the PPDFs are chosen, leads to a smaller value (about $ \mu= 1.0 $ GeV) than that assumed in Ref.~\cite{Diehl:2007uc}, whether a simple ansatz is used or a more flexible ansatz like Eq.~(\ref{Eq:18}). However, for the case of $ \alpha^{\prime} $, the situation is somewhat different. Actually, using the simple ansatz Eq.~(\ref{Eq:7}) leads to $ \alpha^{\prime}=0.59 $ GeV$^2$, which is smaller than the one obtained in the study of the strange Dirac form factor $ F_1^{s} $~\cite{Diehl:2007uc}, while using a complex ansatz like Eq.~(\ref{Eq:18}) leads to a value of about $ \alpha^{\prime}= 1.054 ~\textrm{GeV}^2 $, which is consistent with the result of Ref.~\cite{Diehl:2007uc}.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{fig8a}
\includegraphics[width=0.5\textwidth]{fig8b}
\caption{The polarized GPDs $ \widetilde{H}_{u_v} $ (top) and $ \widetilde{H}_{d_v} $ (bottom) as a function of $ x $ at $ \mu=1 $ GeV for three different values of $ t=-Q^2= 0 $, $ -0.5 $ and $ -1 $ GeV$^2$. The theoretical calculations have been performed using ansatz Eq.~(\ref{Eq:11}) with profile function Eq.~(\ref{Eq:18}) and the values of Eq.~(\ref{Eq:21}).}
\label{fig:fig8}
\end{figure}
It is also of interest now to plot the polarized GPDs according to ansatz Eq.~(\ref{Eq:11}), using the profile function Eq.~(\ref{Eq:18}) and the values shown in Eq.~(\ref{Eq:21}) obtained from the final analysis. Figure~(\ref{fig:fig8}) shows the polarized GPDs $ \widetilde{H}_{u_v} $ (top) and $ \widetilde{H}_{d_v} $ (bottom) as a function of $ x $ for three different values of $ t=-Q^2= 0 $, $ -0.5 $ and $ -1 $ GeV$ ^2 $. Note that for $ t=0 $, we recover the original polarized PDFs of \texttt{NNPDFpol1.1}~\cite{Nocera:2014gqa}. As can be seen, as the absolute value of $ t $ increases, the distributions for the valence quarks decrease in magnitude and shift somewhat to smaller values of $ x $, as expected. Note that the uncertainty of $ \widetilde{H}_{u_v} $ is smaller than that of $ \widetilde{H}_{d_v} $, since the $ \Delta u_v$ PPDF of \texttt{NNPDFpol1.1} has a smaller uncertainty. The corresponding plots for $ \widetilde{H}_{\bar u} $ and $ \widetilde{H}_{\bar d} $ are shown in Fig.~(\ref{fig:fig9}). Note that in this case, the parameters $ A $ and $ B $ in the profile function Eq.~(\ref{Eq:18}) are equal to zero. This figure shows that as the absolute value of $t$ increases, the distributions for the sea quarks slightly decrease in magnitude and shift to larger $x$. However, their uncertainty bands are larger than those of the valence quarks.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{fig9a}
\includegraphics[width=0.5\textwidth]{fig9b}
\caption{Same as Fig.~(\ref{fig:fig8}), but for $ \widetilde{H}_{\bar u} $ and $ \widetilde{H}_{\bar d} $.}
\label{fig:fig9}
\end{figure}
We can now compare the result of our final analysis to the experimental data. Figure~(\ref{fig:fig10}) shows a comparison between the theoretical predictions for the nucleon AFF $ G_A $ obtained using the profile function Eq.~(\ref{Eq:18}) with the values of Eq.~(\ref{Eq:20}) (filled band), and the reduced $ G_A(Q^2) $ data. Note that the \texttt{NNPDFpol1.1} PPDFs have been chosen at $ \mu=0.997 $ GeV, as shown in Eq.~(\ref{Eq:21}). As can be seen, the theoretical prediction is in good agreement with the experimental data.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{fig10}
\caption{A comparison between the reduced experimental data and theoretical predictions for $ G_A $ obtained using profile function Eq.~(\ref{Eq:18}) with the values of parameters given in Eq.~(\ref{Eq:21}) (filled band). \texttt{NNPDFpol1.1} PPDFs have been chosen at $ \mu=0.997 $ GeV.}
\label{fig:fig10}
\end{figure}
\section{Summary and Conclusions}\label{sec:six}
An accurate knowledge of generalized parton distributions is necessary for describing hard exclusive electroproduction processes. GPDs are non-perturbative objects which can be determined from analyses of experimental data on exclusive processes such as DVCS and meson production. One of the important properties of GPDs is their relation to PDFs and form factors. To be more precise, (polarized) GPDs reduce to (polarized) PDFs in the limit of zero momentum transfer. On the other hand, the integration of GPDs over Bjorken $ x $ yields the Dirac and Pauli form factors. For polarized GPDs, this procedure yields the AFF and, at zero momentum transfer, the intrinsic quark contribution to the nucleon spin. In the present work, considering Diehl's model~\cite{Diehl:2004cx,Diehl:2007uc,Diehl:2013xca} to relate GPDs and PDFs, we have calculated polarized GPDs ($\widetilde{H}_q$) using predetermined polarized PDFs and have studied in detail the axial form factor of the nucleon, $ G_A $. As a result, we have shown that our model for calculating $ G_A $ is not sensitive to the choice of PPDF set, such that the difference between the results obtained using the \texttt{DSSV08} and \texttt{NNPDFpol1.1} PPDFs is less than about 2\% over the full range of $ Q^2 $. By studying the dependence of $ G_A $ on the scale $ \mu $ at which the PPDFs are chosen, we also found that as $ \mu $ decreases, $ G_A $ increases, especially for larger values of $ Q^2 $, so that the difference between the results for $ \mu=1 $ and $ \mu=2 $ GeV reaches 30\% at $ Q^2=2 $ GeV$ ^2 $. Overall, we have concluded that taking the PPDFs at a lower scale $ \mu $
can lead to a better description of the experimental data. Moreover, we have investigated the model uncertainties in $ G_A $ due to the PPDF uncertainties and also the variation of $ \alpha^{\prime} $ in the profile functions $ f(x) $. We have shown that the uncertainty arising from the $ \alpha^{\prime} $ variations dominates over the PPDF uncertainty, except at very small values of $ Q^2 $, where the PPDF uncertainty becomes dominant. Moreover, by considering different scenarios, we have determined the optimal values of the parameters of the model using standard $\chi^2$ analyses of the available experimental data on the nucleon axial form factor. We used both a simple and a complex profile function to find the best conditions for obtaining better consistency between the theoretical predictions and the experimental data. We have shown that using $ G_A $ data to determine the best value of the scale $ \mu $, at which the PPDFs are chosen, leads to a smaller value (about $ \mu= 1.0 $ GeV) than that assumed in Ref.~\cite{Diehl:2007uc}, whether one uses a simple ansatz or a more flexible one. In addition, using a simple ansatz leads to a smaller value for $ \alpha^{\prime} $ than the one obtained in the study of the strange Dirac form factor $ F_1^{s} $~\cite{Diehl:2007uc}, while using a complex ansatz leads to $ \alpha^{\prime}= 1.054 ~\textrm{GeV}^2 $, which is consistent with the result of Ref.~\cite{Diehl:2007uc}.
More precise measurements of neutrino cross section on hydrogen and deuterium are needed to unravel the axial structure of the nucleon.
\section*{ACKNOWLEDGEMENT}
We thank H. Abedi and M. J. Kazemi for reading the manuscript and for stimulating discussions. We also wish to express our gratitude toward U. G. Meissner and A. Butkevich for providing us with the data. HH and SSG would like to thank the research council of the Shahid Beheshti University for financial support. MG thanks the School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM) for financial support provided for this research.
\section{Introduction}
Ultracold atom-ion systems have emerged in the last decade as a fast-growing field and have gained large interest due to their potential contribution to quantum chemistry \cite{Ratschbacher2012,Puri2017,Chang2013,Sik2018}, quantum computing \cite{Doerk2010,Gerritsma2012} and quantum simulation \cite{Joger2014,Secker2016} fields.
Collisions between atoms and ions are characterized by an attractive
long-range polarization potential which scales as $-r^{-4}$ and leads to a semi-classical behavior over a wide range of collision energies \cite{Saito2017}. At very low energies, quantum phenomena, such as Feshbach \cite{Idziaszek2009,Idziaszek2011,Tomza2015} and shape resonances \cite{Silva2015,Raab2009,Tacconi2011,Belyaev2012}, are predicted, similar to those observed in atom-atom \cite{Inouye1998} and atom-molecule \cite{Klein2017} collisions. Therefore, there is a considerable experimental effort for cooling atom-ion mixtures into the few-partial-wave regime and measuring the energy dependence of the cross section for different collisions and reactions with high resolution.
Reaching the few partial-wave regime in atom-ion systems has been a significant challenge for the atom-ion community in the last couple of decades. The reason being that at steady-state, the collision energy between atoms and ions is neither fundamentally limited by the temperature to which both species are cooled, nor by the ion’s trap residual Excess-Micromotion (EMM) energy. Instead, this fundamental limit is set by the force that the atom exerts on the ion during a collision. This force is then amplified by the ion-trap oscillating fields \cite{Cetina2012,Meir2016,Pinkas2020,Feldker2020}.
This effect sets the lower bound of atom-ion steady-state interaction energy in these systems.
Up until recently, this lower bound has been at least two orders of magnitude higher than the s-wave energy limit \cite{Schmid2018}.
Nevertheless, this fundamental energy limit is species dependent \cite{Cetina2012}, and favorable for mixtures combined by light-atoms and heavy-ions such as $^6$Li-Yb$^+$. Only recently researchers have reached the s-wave regime for that system, with collisional energies of about 10 $\mu$K$\cdot$k$_B$ \cite{Feldker2020}. \\
In recent years, several experiments have studied the rates and cross sections of inelastic atom-ion collisions as a function of collision energy. Several experiments reached the energy regime where quantum resonances should appear \cite{Hall2013, Hall2012, Haze2013,Saito2017, Dorfler2019}, but these have yet to be observed. In all previous studies, scanning the energy was accompanied by an increase of the energy spread, thereby compromising the energy resolution. In one method, the collision energies were varied by increasing the micromotion energy of the ions, which is associated with their motion in the oscillating rf electric field \cite{Zipkes2010,Hall2013,Schmid2010,Hall2012,Schmid2012,Haze2013}. However, increasing the excess micromotion broadened the ion energy spread into a power-law distribution in which the distribution spread was larger than the distribution peak \cite{Hall2012,Silva2015,Bell2009,Grier2009,Puri2019}. In a different experiment, a magneto-optical trap of atoms was shuttled across a crystal of atomic \cite{Eberle2016} or molecular ions \cite{Dorfler2019}, using radiation-pressure forces, reaching an energy resolution in the mK regime. Another approach is shuttling the ion by modulating the voltage on the trap electrodes \cite{Puri2018}. By these methods the collision energy can be scanned between $\sim$10 mK and $\sim$1 K with a relative resolution of $\sim$10. In the method presented here, the inferred energy resolution of $\sim$200 $\mu$K$\cdot$k$_B$ is at least one order of magnitude narrower. \\
Here we present a method for high energy-resolution control of atom-ion collisions, in which a cloud of ultracold atoms, trapped in a one-dimensional optical lattice, is shuttled across a single trapped ion while maintaining a narrow collision energy spread. The collision energy is scanned with high resolution by changing the frequency difference between the optical lattice beams. We avoid the limitations imposed by the steady-state atom-ion energy distribution by limiting the average number of Langevin collisions in each pass to less than one. Thus, the spread of the collision energy is determined by the ion's and atoms' energy distributions prior to the collision, both of which are in the 10's $\mu$K$\cdot$k$_B$ regime. This method therefore has sufficient energy resolution to potentially allow for the observation of quantum signatures such as shape resonances. \\
We demonstrated our method by measuring the energy dependence of the inelastic collision cross sections of the Electronic-Excitation Exchange (EEE) and Spin-Orbit Change (SOC) channels that occur when a $^{88}$Sr$^+$ ion, optically excited to the 4d$^2$D$_{5/2}$ meta-stable state, collides with ground-state $^{87}$Rb atoms. These processes were shown \cite{Ruti2019} to occur through a non-adiabatic Landau-Zener crossing, and their energy dependence had until now only been discussed theoretically \cite{Belyaev2012,Hall2011} for the same range of collision energies. We measured the energy dependence of the inelastic collision cross section of the EEE and SOC channels separately. We found that for collision energies ranging between 0.2$-$12 mK$\cdot$k$_B$, the cross sections for both channels follow the semi-classical Langevin $E^{-1/2}$ scaling with good statistical significance.
Finally, we discuss in this manuscript the effect of multiple collisions on the energy resolution of our method and also analyze possible deviations from the semi-classical Langevin scaling, in search of quantum resonances, by performing a maximum-likelihood estimation test.
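A toy version of such a power-law test can be sketched as follows. The measured probabilities here are synthetic, and a least-squares fit in log space is used as a simpler stand-in for the maximum-likelihood estimation mentioned above; a quantum resonance would appear as a localized deviation from the fitted power law.

```python
import numpy as np

# Synthetic inelastic-collision probabilities p_i at energies E_i,
# generated with the semi-classical Langevin exponent q = -1/2 plus
# 5% multiplicative noise; all numbers are illustrative.
rng = np.random.default_rng(0)
E = np.geomspace(0.2, 12.0, 10)                  # collision energy, mK*k_B
p_true = 0.3 * E ** -0.5
p_meas = p_true * rng.normal(1.0, 0.05, E.size)

# Fit p(E) = c * E^q by linear regression in log-log space.
q, log_c = np.polyfit(np.log(E), np.log(p_meas), 1)
print(q)   # close to the Langevin prediction -0.5
```
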
\section{EXPERIMENTAL SET-UP}
An illustration of our experimental setup is shown in Fig.~\ref{ESC}. Our hybrid atom-ion system is described in detail in a review article \cite{Meir2018}. The setup consists of two separate vacuum chambers. In the top chamber, $\sim$5$\cdot$10$^7$ cold Rb atoms are trapped in a magneto-optical trap and then loaded into a CO$_2$ dipole trap, in which they are evaporatively cooled to the $\sim$5 $\mu$K$\cdot$k$_B$ temperature range. At the end of the evaporation, $\sim$50,000 atoms remain in the CO$_2$ trap and are adiabatically loaded into a 1D optical lattice. The lattice consists of two counter-propagating YAG laser beams ($
\lambda=1064$ nm, $P=1.5$ W for each beam), which are collimated, vertically oriented, and have a Gaussian profile \cite{Schmid2006}. The beams are characterized by a waist of $\sim$220 $\mu$m and a Rayleigh range of $z_R = \pi w^{2}_{0}/\lambda=143$ mm, comparable to the transport distance to the bottom chamber of 248 mm. The strong confinement of the atoms in the optical lattice sites in the transport direction prevents the loss of atoms due to gravity.
We shuttle the atoms to the bottom chamber by changing the relative frequency between the two lattice beams (further details in the next section). During the transport, a $^{88}$Sr$^+$ ion is held in a linear segmented rf Paul trap, optically pumped to a specific Zeeman state of the electronic ground state, 5s$^2$S$_{1/2}(m=-1/2)$, followed by ground-state cooling of all three motional modes to $\bar{n}<0.1$. We recompensate the EMM of the ion roughly every half hour during the experiment to avoid EMM drifts.
A thorough analysis of the EMM in our system yields that the sum of all EMM contributions is $\sim$30 $ \mu$K$\cdot$k$_B$ \cite{Meir2018}. This number can be used to estimate the lower bound for the energy resolution of this method in our system. Here, however, due to drifts in the micromotion compensation during the experiment, we set an upper limit for the EMM in our system to be $\sim$200 $ \mu$K$\cdot$k$_B$ which sets the limit for the resolution in the experiments presented here.
\section{COLLISION VELOCITY CONTROL}
We set the velocity of the atoms relative to the stationary ion by controlling the relative frequency of the lattice beams. The atoms' velocity is directly proportional to the instantaneous frequency difference between the beams, $\Delta f(t)$, and equal to $v(t)=\frac{\lambda\Delta f(t)}{2}$ in the lab frame, where $\lambda=$1064 nm is the laser wavelength.
The linear velocity of the atoms in the lattice is much higher than the thermal velocity of the atoms or the ion. Therefore, the atom-ion collision energy is set by the velocity of the lattice.
In order to transport the cloud of atoms across the trapped ion with a well-defined lab-frame collision energy, $E_{coll}=\frac{1}{2}m_{Rb}v_{lattice}^2$, the frequency difference between the laser beams should satisfy,
\begin{equation}
\label{coll_freq}
\Delta f(t)=2\sqrt{2E_{coll}/m_{Rb}}/\lambda.
\end{equation}
Here, $m_{Rb}$ is the mass of the Rb atom.
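Equation~(\ref{coll_freq}) can be evaluated directly. The sketch below assumes the $^{87}$Rb mass and the 1064 nm lattice wavelength quoted in the text; for a 1 mK$\cdot$k$_B$ collision energy the required beam frequency difference comes out in the sub-MHz range, well within the tuning range of an AOM.

```python
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
m_Rb = 86.909 * 1.66054e-27   # 87Rb mass, kg
lam = 1064e-9                 # lattice wavelength, m

def delta_f(E_coll_mK):
    """Beam frequency difference (Hz) giving a lab-frame collision
    energy E_coll = (1/2) m_Rb v^2, per Eq. (1) of the text."""
    v = math.sqrt(2 * E_coll_mK * 1e-3 * k_B / m_Rb)
    return 2 * v / lam

print(delta_f(1.0))   # ~0.82 MHz for a 1 mK*k_B collision energy
```
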
\begin{figure*}[t!]
\begin{center}
\centering{\includegraphics[width=\textwidth]{figure1_short1.pdf}}
\caption{An illustration of the atom-ion system and the velocity profiles of the transport. a) The experimental setup. Rb atoms are held in a CO$_2$ dipole trap and loaded into the 1D optical lattice made of two counter-propagating beams. The atoms occupy $\sim$40 lattice sites. The distance between the loading position of the atoms in the upper chamber and the ion in the lower chamber is 248 mm, as indicated by the dotted black curve. The figure is not to scale. b) The atoms' position as a function of their velocity, for five different collision energies: 0, 0.1, 1, 5, 10 $k_B\cdot$mK (from black to blue). The background represents the different stages of transport: acceleration (pale blue), movement at a constant velocity (yellow) and deceleration to the desired collision velocity (pink). The atoms continue to be transported by the lattice at a constant velocity, even after colliding with the ion. c) The atoms' velocity profile. The right y-axis, $\Delta f$, is the corresponding frequency difference. Each line is the difference between the same-color lines (solid and dotted-dashed) in Fig.~\ref{freq}. The diamond marks indicate the times at which the atoms collide with the ion for each velocity profile.}
\label{ESC}
\end{center}
\end{figure*}
The two lattice beams pass through separate acousto-optic modulators (AOMs) in a double-pass configuration, to control their frequency and intensity. After the AOMs, the beams are coupled to fibers, one entering from above the atoms' chamber and the other from below the ion's chamber, as illustrated in Fig.~\ref{ESC}a. When varying the frequency applied to the AOMs, different diffraction angles cause a change in the intensity of the beams. Therefore, we actively stabilize the intensity level of each beam, maintaining constant intensity throughout the entire experiment. Each lattice beam is connected to a separate frequency channel of a function generator capable of generating a trapezoidal sweep of the frequency independently in each channel. The two trapezoidal sweeps combine to generate the relative frequency profile, $\Delta f(t)$, shown in Fig.~\ref{ESC}c (see Appendix).
To bring the atoms to the desired velocity when colliding with the ion, we control the frequencies of both lattice beams.
We design the frequency profile such that the atoms always accelerate to the same maximal velocity, and then decelerate to the desirable collision velocity, as can be seen in Fig.~\ref{ESC}b and c.
The atoms reach a maximal velocity of $v$=160 cm/sec after 0.1 seconds of acceleration. Then, the atoms are held at a constant velocity for 0.01 seconds, after which the atoms are decelerated to the desired velocity.
In this velocity regime, the transport itself involves negligible atom loss, whereas higher velocities introduce losses.
The distance travelled by the atoms as a function of their instantaneous velocity is shown in Fig.~\ref{ESC}b for different transport profiles. After the first 0.11 seconds, the atoms have been transported 9.6 cm. From this point, the atoms start to decelerate until they arrive at the desired velocity.
For each collision energy, the atoms cease to decelerate at a different position relative to the ion. The atoms continue to move at a constant velocity until they pass the position of the ion.
\section{Measuring inelastic collisional cross sections}
The rate at which a given inelastic collision occurs is given by
\begin{equation}
\Gamma_{inelastic}=n_{atoms}\sigma(E_{coll})v_{coll}
\end{equation}
where $n_{atoms}$ is the atomic density, $\sigma(E_{coll})$ is the energy-dependent inelastic collisional cross section, and $v_{coll}$ is the relative atom-ion velocity in the center-of-mass frame.
In the proposed scheme, the atomic density at the position of the ion is time-dependent due to the relative motion of the atoms in the lattice with respect to the stationary ion. The mean number of collisions per pass is given by
\begin{equation}
N=\sigma(E_{coll})v_{coll}\int_{-\infty}^{+\infty}n_{atoms}(t)dt.
\end{equation}
Since the atoms move at a constant velocity, $v_{lattice}$, the integration can be taken over the spatial dimension along the direction of motion of the lattice,
\begin{equation}
N=\sigma(E_{coll})\frac{v_{coll}}{v_{lattice}}\int_{-\infty}^{+\infty}n_{atoms}(x)dx.
\end{equation}
Here, the thermal velocities of the ion and the atoms are negligible relative to the velocity of the atoms in the lattice, and since the ion is stationary, the collision velocity equals the lattice velocity, $v_{coll}=v_{lattice}$, in the lab frame.
Then, the number of collisions per pass is
\begin{equation}
N=\sigma(E_{coll})\int_{-\infty}^{+\infty}n_{atoms}(x)dx.
\label{N_coll}
\end{equation}
Therefore, the number of events we measure is directly proportional to the collisional cross section, through the density of the atoms in the lattice, integrated along the vertical direction of motion.
Assuming the length of the atomic cloud is finite, and denoting it by $L_{Rb}$, we can rewrite the number of events as:
\begin{equation}
N=n_{eff}L_{Rb}\sigma(E_{coll}),
\label{crosssection}
\end{equation}
where we define an effective density as:
\begin{equation}
n_{eff}\equiv\frac{1}{L_{Rb}}\int_{-\infty}^{+\infty}n_{atoms}(x)dx.
\end{equation}
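As a numerical sanity check of these relations, the column density, effective density, and mean number of collisions per pass can be evaluated for an assumed Gaussian cloud profile. All numbers below (peak density, width, cross section) are illustrative assumptions, not measured values from this work:

```python
import numpy as np

# Sketch of N = sigma * integral n(x) dx (Eq. \ref{N_coll}) and of the
# effective-density definition, for an assumed Gaussian atomic cloud.
n0 = 4.4e17            # peak atomic density, m^-3 (assumed)
s = 5e-6               # rms cloud width along the transport axis, m (assumed)
L_Rb = 20e-6           # cloud length used to define n_eff, m

x = np.linspace(-10 * s, 10 * s, 20001)
dx = x[1] - x[0]
n = n0 * np.exp(-x**2 / (2 * s**2))

column_density = np.sum(n) * dx      # integral of n(x) dx, in m^-2
n_eff = column_density / L_Rb        # effective density, m^-3
sigma = 1e-14                        # example inelastic cross section, m^2 (assumed)
N = sigma * column_density           # mean number of collisions per pass
```

For a Gaussian profile the column density reduces to $n_0 s \sqrt{2\pi}$, so the numerical integral can be checked against the closed form.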
In the semi-classical regime, the total cross section for hard-sphere collisions between an ion and an atom is given by the Langevin cross section \cite{Langevin1905}:
\begin{equation}
\sigma_L=\pi \sqrt{\frac{2C_4}{E_{coll}}},
\end{equation}
where $C_4=\alpha e^2/(4\pi\epsilon_0)^2$, with $\alpha$, $e$ and $\epsilon_0$ the atomic polarizability, the electron charge and the vacuum permittivity, respectively.
Thus, the mean number of Langevin collisions per pass is
\begin{equation}
N_L=\pi n_{eff} L_{Rb} \sqrt{\frac{2C_4}{E_{coll}}}.
\end{equation}
In the semi-classical regime, inelastic processes are proportional to the Langevin cross section and therefore scale as $\sim E_{coll}^{-1/2}$.
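A minimal numerical sketch of this Langevin scaling follows; the Rb polarizability value used below is an assumed illustrative number, not one quoted in this work:

```python
import numpy as np
from scipy import constants as const

# Sketch of sigma_L = pi*sqrt(2*C4/E) with C4 = alpha*e^2/(4*pi*eps0)^2,
# mirroring the formulas above.  The polarizability is illustrative.
alpha_au = 319.0                       # Rb polarizability, atomic units (assumed)
alpha_SI = alpha_au * 1.64878e-41      # converted to C m^2 / V

C4 = alpha_SI * const.e**2 / (4 * np.pi * const.epsilon_0) ** 2  # J m^4

def sigma_langevin(E_coll):
    """Langevin cross section (m^2) at collision energy E_coll (J)."""
    return np.pi * np.sqrt(2 * C4 / E_coll)

# Scan of the experimental range, 0.2 to 12 mK * k_B:
E = np.array([0.2e-3, 1.0e-3, 12.0e-3]) * const.k
sigmas = sigma_langevin(E)
```

Quadrupling the collision energy halves the cross section, which is the $E_{coll}^{-1/2}$ scaling referred to in the text.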
\section{ENERGY DEPENDENCE OF NON-ADIABATIC QUENCH OF META-STABLE EXCITED STATES }
To demonstrate our method, we measured the energy dependence of a non-adiabatic quench of a meta-stable electronically excited level of the ion during a collision with a ground-state atom. In previous work \cite{Ruti2019}, we found that the excited long-lived 4d$^2D_{5/2}$ and 4d$^2D_{3/2}$ states of the $^{88}$Sr$^+$ ion quench after roughly three Langevin collisions with ground-state $^{87}$Rb atoms, and that the excitation energy is transformed into kinetic energy of the colliding particles.
In Ref.~\cite{Ruti2019} we identified two types of collisional quenching. One is the EEE, in which the ion relaxes to the ground S state and the atom is excited to the P state, followed by an energy release of $\sim$3000 K$\cdot$k$_B$. The second is the SOC, in which the ion relaxes from the higher fine-structure D$_{5/2}$ level to the lower D$_{3/2}$ level, releasing $\sim$400 K$\cdot$k$_B$ into kinetic energy. These processes were theoretically understood to occur through Landau-Zener avoided crossings between the different molecular potential curves. \\
Here we measured the dependence of these inelastic cross sections on the collision energy. As described above, a single Sr$^+$ ion, cooled to its ground state in all three motional modes and with a residual EMM bounded by $\sim$200 $\mu$K$\cdot$k$_{B}$, was prepared in the lower Zeeman state, 4d$^2$D$_{5/2}(m=-5/2)$. We report here a higher bound on the EMM value than in Ref.~\cite{Meir2018}, since the EMM was compensated less often due to long interrogation times. Meanwhile, a cloud of un-polarized atoms was loaded into the optical lattice and shuttled to the lower chamber while scanning 119 energy points, from 0.2 to 12 mK in the lab frame, with energy steps of 100 $\mu$K$\cdot$k$_{B}$. The average number of Langevin collisions per sweep was tuned to be 0.09 at the lowest energy point.
\begin{figure}
\begin{center}
\centering{\includegraphics[width=\columnwidth]{allscan_4.pdf}}
\caption{Quench cross section as a function of the collision energy. (a) The total quench probability (green) and the (b) EEE (blue) and (c) SOC (red) channels, separately. Error-bars are 1$\sigma$ standard deviations of a binomial distribution and also include systematic noise, as explained in the body of the text. Smaller error-bars indicate areas over which we performed more repetitions. The black curves are power-law fits to A$\cdot$E$^{\alpha}$, where the exponents given by the fits are $\alpha$=-0.51(3), -0.53(4) and -0.48(6), respectively. The orange crossed lines are fits to the Langevin cross section multiplied by a pre-factor $\eta$ as a free parameter, given by Eq.~\ref{langsigma}.}
\label{EnergyQuench}
\end{center}
\end{figure}
After the atoms passed through the ion, we performed single-shot Doppler thermometry \cite{singleshot} on the ion to distinguish quenched (hotter than a few tens of K$\cdot$k$_B$) events from non-quenched events. Due to the large separation between the SOC (400 K$\cdot$k$_B$) and EEE (3000 K$\cdot$k$_B$) energy releases, these events are easily separated in the single-shot thermometry \cite{Ruti2019}. As a control experiment, we tested whether quench events are detected in the absence of atoms in the optical lattice. Since no hot events were observed without atoms, we concluded that our measurement had no false-positive detections of quench events. To avoid accumulating systematic noise, we scanned the collision energy in a randomized manner, performing a single experiment for each energy value and only then repeating the experiment to accumulate the signal.
The quench data presented in Fig.~\ref{EnergyQuench}a were derived from 300,000 repetitions in which 3100 quench events were identified. With a repetition time ranging between 1 and 10 sec, depending on the quench channel and the Doppler-recooling time, these data were integrated over weeks.
\\
In Fig.~\ref{EnergyQuench}, we present our measured results. We plot the quench cross section as a function of the relative collisional energy through the relation of Eq.~\ref{crosssection}:
\begin{equation}
\sigma_{Quench}(E_{coll})=\frac{N_{Quench}}{n_{eff} L_{Rb}}.
\end{equation}
In this experiment the typical effective atomic density is $n_{eff}$=4.4$\cdot10^{17}$ m$^{-3}$. The collision velocity ranges from $v_{coll}$=19.4 cm/sec up to 150 cm/sec, corresponding to collision energies of 0.2 to 12 mK$\cdot$k$_{B}$, respectively. The size of the cloud in the transport direction is $L_{Rb}=$20 $\mu$m, which occupies $\sim$40 lattice sites.
Different sets of data were taken with different cloud densities, adding systematic noise that biases the overall data in the vertical direction of the graph.
This systematic noise is estimated to be $\Delta (n_{eff}\cdot L_{Rb})\sim15\%$. The statistical noise, for comparison, varies between 5$\%$ and 30$\%$, depending on the number of repetitions.
The data presented in green in Fig.~\ref{EnergyQuench}a contain all quench events, summed over both channels. In Fig.~\ref{EnergyQuench}b and Fig.~\ref{EnergyQuench}c we show the energy-dependent collisional cross section for the EEE (blue) and SOC (red) channels, respectively. The black curves are fits to a power-law, A$\cdot$E$^{\alpha}$. The fitted exponents, $\alpha$, agree well with the Langevin scaling of E$^{-1/2}$ (see Fig.~\ref{EnergyQuench} caption). Quenching from the metastable D-state happens when the atom and ion reach very short inter-nuclear distances after overcoming the centrifugal barrier. These collisions are therefore Langevin collisions, but occur with lower probability. We compare the cross sections through:
\begin{equation}
\sigma_{Quench}(E_{coll})=\eta\cdot\sigma_{L}(E_{coll})
\label{langsigma}
\end{equation}
By fitting the data to Eq.~\ref{langsigma} with $\eta$ as a free parameter, we find that
$\sigma_{Quench}(E_{coll})$ is proportional to, but smaller than, the Langevin cross section, with $\eta$=0.52(6), 0.35(5), 0.16(3) for the green, blue and red data, respectively. While this total cross section is slightly higher than the one reported in previous studies \cite{Ruti2019} (0.52(6) compared to 0.38(5)), the ratio between the two channels, $\sigma^{SOC}_{Quench}$/$\sigma^{EEE}_{Quench}$, agrees within the statistical error (0.48(11) compared to 0.39(5) in the previous measurements).
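The power-law fit itself can be sketched as follows on synthetic data generated with Langevin $E^{-1/2}$ scaling; the amplitude, noise level, and fitted values here are illustrative and do not reproduce the measured cross sections:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the A*E^alpha fit used for the curves in Fig. 2, applied
# to synthetic Langevin-scaling data (all numbers illustrative).
rng = np.random.default_rng(0)

E = np.linspace(0.2, 12.0, 119)                   # energy grid, mK*k_B
sigma_true = 3.0 * E ** -0.5                      # Langevin-like scaling
sigma_meas = sigma_true * (1 + 0.05 * rng.standard_normal(E.size))

def power_law(E, A, alpha):
    return A * E ** alpha

(A_fit, alpha_fit), _ = curve_fit(power_law, E, sigma_meas, p0=(1.0, -1.0))
```

With 5\% relative noise over this energy range, the recovered exponent lands close to the generating value of $-1/2$.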
\section{The effect of multiple collisions}
In this experiment the atoms are much colder than the ion, and therefore the energy resolution of our measurement is mainly limited by the energy uncertainty of the latter. Since the ion is cooled to the ground state of all its secular motional modes, the initial residual energy, prior to the collision, is mainly due to the residual EMM.
However, after a collision the energy of the ion can be changed due to coupling of the EMM to the ion's external degrees of freedom \cite{Zipkes2011} or due to exchange of kinetic energy between the atom and the ion \cite{DeVoe2009,Chen2014}. Both these effects depend on the position and phase of the ion in the rf trap and lead to a power-law energy distribution \cite{Rouse2017}.
Thus, in determining the energy spread of the ion before a reaction occurs, we have to take into account the possibility of ion heating due to previous, elastic, Langevin collisions.
In order to find the ion's energy distribution after a certain average number of collisions, we performed a molecular dynamics simulation which takes into account the residual EMM of the ion and the lattice velocity, as described in Ref. \cite{Zipkes2011,Meir2018}. In Fig.~\ref{ion_energy_dist}, the energy distribution of the ion after a single collision is shown for different velocities of the lattice. As can be seen, following a single collision, the ion is heated up to the energy of the atoms in the lattice, with a wide energy distribution. As a result, if the measured inelastic process (for example, a quench) does not occur in the first collision, the energy of that collision is no longer defined by the velocity of the lattice and has a wide distribution.
The probability of multiple Langevin collision events can be reduced by lowering the density of atoms loaded into the lattice dipole trap. However, this leads to longer integration times. As an example, in the data of Fig.~\ref{EnergyQuench} the probability for at least one Langevin collision per pass was approximately 0.09 at low collision energies. At such a low mean number of collisions, the probability of observing a quench event that occurred after the first collision is $\sim9\cdot10^{-4}$, and hence the signal is not affected by heating due to multiple elastic collisions. However, this measurement lasted for several weeks.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ion_energ_dist.pdf}
\caption{The distribution of the ion's energy, $E_{ion}$, after a single collision with an atom in the moving optical lattice, for different lattice velocities, corresponding to collision energies $E_{col}$. The distributions are calculated by a molecular dynamics simulation that takes into account the Paul trap potential and the EMM of the ion.}
\label{ion_energy_dist}
\end{figure}
\section{A Search for quantum resonances using Maximum Likelihood Estimation}
A hallmark of quantum scattering in the low energy regime is the appearance of scattering resonances. Such resonances occur, for example, when the collision energy in the center-of-mass frame resonates with the energy of a quasi-bound molecular state supported by the centrifugal barrier of one of the partial waves involved. These shape-resonances are anticipated to occur in atom-ion collisions even in the mK energy range \cite{Silva2015,Belyaev2012,Tacconi2011}. In order to search for such resonances we performed a likelihood-ratio test, differentiating between resonance and no-resonance hypotheses, and calculated their statistical significance. \\
At each collision energy, $E_i$, the number of observed quench events is a random variable which follows a binomial distribution. The log-likelihood function for observing $k_i$ quench events out of $N_i$ repetitions is, up to a constant factor,
\begin{equation}
\log\mathcal{L}(p_i|k_i,N_i)=k_i\log p_i+(N_i-k_i)\log(1-p_i),
\end{equation}
where $p_i$ is the probability for observing a quench event in a single experiment.
The total log-likelihood for observing $\textbf{k}=\{k_i\}$ quench events in all energy points is the sum over the log-likelihood function in each point,
\begin{equation}
\log \mathcal{L}(\textbf{p}|\textbf{k},\textbf{N})=\sum_i\log \mathcal{L}(p_i|k_i,N_i).
\end{equation}
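A short sketch of this likelihood evaluation follows; the event counts and repetition numbers below are illustrative:

```python
import numpy as np

# Sketch of the binomial log-likelihood summed over energy points,
# up to the constant binomial coefficient (illustrative data).
def total_log_likelihood(p, k, N):
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the log at p = 0 or 1
    return float(np.sum(k * np.log(p) + (N - k) * np.log(1 - p)))

k = np.array([3, 5, 2])                # observed quench events per energy point
N = np.array([100, 100, 100])          # repetitions per energy point
p_hat = k / N                          # per-point maximum-likelihood estimate
```

By construction, the per-point empirical frequencies $\hat{p}_i = k_i/N_i$ maximize the total log-likelihood.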
We want to estimate the probability that the data we measured are the result of a local peak at some energy point. The null hypothesis, $H_0$, assumes that the measured data follow a power-law behavior, $p_i(E_i)=CE_i^{-\alpha}$, whereas the alternative hypothesis adds a Gaussian resonance at energy $E_0$ with width $\sigma_{g}$ and magnitude $A$:
\begin{equation}
p_i=CE_i^{-\alpha}+Ae^{-\frac{(E_i-E_0)^2}{\sigma_{g}^2}}.
\end{equation}
We estimated the free parameters ($C$, $\alpha$, $A$, $\sigma_{g}$ and $E_0$) as those that maximize the log-likelihood function for our measured data. Using these parameters, we calculated the observed likelihood-ratio between the alternative hypothesis and the null hypothesis
\begin{equation}
\label{eq:likelihood-ratio}
\log \lambda_{obs}=\max \log \mathcal{L}_1- \max \log \mathcal{L}_0.
\end{equation}
In order to make the maximization process of the alternative hypothesis more robust, we found the maximum likelihood for a resonance separately for each energy point, and then identified the energy point that yielded the maximal likelihood as a suspect for resonance.
We used the likelihood-ratio of the measurement to estimate the statistical significance of the alternative hypothesis over the null hypothesis. To this end, we calculated the p-value: the probability of observing a likelihood-ratio that is higher than the one we measured under the null hypothesis. A small p-value indicates that it is less likely that our measured data was generated by the null hypothesis and the resonance hypothesis is favorable. The p-value can be related to the number of standard deviations, $N_\sigma$, of the observed data from the null hypothesis \cite{Demortier2007}
\begin{equation}
p=1-\mbox{erf}(\frac{N_\sigma}{\sqrt{2}}),
\end{equation}
where $\mbox{erf}(x)$ is the standard error function.
In order to find the p-value of our measurement, we simulated $1000$ experiments ($3000$ for the SOC experiment), each with the same number of repetitions we had in the real experiment, under the null hypothesis. For each one of these simulations, we repeated the analysis above in order to find the likelihood-ratio. From the simulated likelihood-ratio distribution we found the fraction of experiments that yielded a higher value than our observed likelihood-ratio, which gives the p-value of the measurement.
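The conversion from a bootstrap p-value to a number of standard deviations, together with the fraction-based p-value estimate, can be sketched as follows; the chi-square stand-in used for the simulated likelihood-ratio distribution is purely illustrative:

```python
import numpy as np
from scipy.special import erfinv

# Sketch: p-value as the fraction of null-hypothesis simulations whose
# likelihood ratio exceeds the observed one, converted to sigmas by
# inverting p = 1 - erf(N_sigma / sqrt(2)).
def p_to_nsigma(p):
    return np.sqrt(2.0) * erfinv(1.0 - p)

rng = np.random.default_rng(1)
lam_null = rng.chisquare(df=3, size=3000)   # stand-in simulated LR distribution
lam_obs = 7.9                               # observed ratio (SOC channel)

p_value = np.mean(lam_null > lam_obs)
n_sigma = p_to_nsigma(p_value)
```

As a consistency check, the reported p-values of 0.0088 and 0.091 invert to roughly 2.6$\sigma$ and 1.7$\sigma$, matching the quoted significances.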
Analysing the EEE events, we observed a weak resonance at 10.3 mK with a likelihood-ratio of 4.6 and a p-value of 0.091, equivalent to 1.7$\sigma$; see Fig.~\ref{fig:res_MLE}a. The analysis of the SOC events (Fig.~\ref{fig:res_MLE}b) indicated a peak around 3 mK with a likelihood-ratio of 7.9. The p-value in this case is 0.0088, equivalent to 2.6$\sigma$, which is marginally significant.
Longer integration and improved statistics around the suspected energies would help determine whether there is resonance behavior or not. However, longer integration can suffer from systematic drifts that wash out the effect of a resonance. A further investigation of this effect with higher statistics is needed, with an improved repetition rate of the experiment to avoid drifts.
\begin{figure}
\centering
\begin{minipage}{0.4\textwidth}
\includegraphics[width=1\linewidth]{./Likelihood_ratio_res_position_EEE.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\includegraphics[width=1\linewidth]{./Likelihood_ratio_res_position_SOC.pdf}
\end{minipage}
\caption{Maximum likelihood estimation for a semi-classical cross section for the un-normalized EEE channel (top) and SOC channel (bottom), with and without a Gaussian resonance (green and black, respectively). P-values are 0.091 and 0.0088, corresponding to 1.7$\sigma$ and 2.6$\sigma$ for the EEE and SOC channels, respectively.}
\label{fig:res_MLE}
\end{figure}
\section{Conclusions}
In this work we presented a method for controlling the atom-ion collision energy in the ultracold regime, with an order-of-magnitude improvement in energy resolution compared to previous methods, achieved by optically shuttling the atoms across a single trapped ion. The energy resolution is kept high by limiting the mean number of atom-ion collisions in each repetition of the experiment to below one; it was thus limited only by EMM compensation, to below 200 $\mu$K$\cdot$k$_B$ in this experiment, with the potential to reach the level of tens of $\mu$K$\cdot$k$_B$ with better control over the EMM over long periods of time.
As a demonstration of our method, we used it to measure the energy dependence of the collisional quench processes of the ion from an optically excited meta-stable state. We found that the cross section for these processes follow the semi-classical Langevin prediction. Finally, we identified suspect energies for the possible location of a quantum resonance. Further experimental investigation is necessary to determine whether a resonance is actually present.
Our method is generic and can be used for different species and for the study of different atom-ion reactions. With sufficient control of experimental parameters it can be used to measure atom-ion quantum scattering effects in the low partial-wave regime.
\section{Acknowledgments}
This work was supported by the Israeli Science Foundation and the Israeli ministry of Science Technology and Space.
\section{Introduction}
\IEEEPARstart{O}{PTIMIZATION} technologies have been widely used in many decision-making processes in the operation, control, and planning of power systems, such as optimal power flow (OPF). However, the increasing uncertainty introduced by distributed energy resources (DER) makes it extremely hard for operators to make accurate optimal decisions ahead of real time. There mainly exist three types of frameworks for modeling power system optimization problems that involve uncertainty: 1) stochastic framework, 2) robust framework, and 3) chance-constrained framework \cite{aien2016comprehensive}. Unfortunately, these frameworks are rather computationally expensive for large-scale, highly-nonconvex problems. As a result, a large portion of existing works investigate proper assumptions to simplify these frameworks for power system applications. In contrast, based on regression analysis \cite{murphy2012machine}, this paper develops a novel uncertainty-aware optimization (UaO) framework using a new measurement of uncertainty that considers the prediction errors of stochastic variables (see Section III for more details).
Convex optimization \cite{boyd2004convex} has applications in a broad range of disciplines including power system engineering, mainly because: 1) many classes of convex optimization problems are computationally tractable as they admit polynomial-time algorithms; and, 2) it plays a fundamental role in the theories of both distributed optimization and bi-level optimization. The general idea of convex relaxation is to replace the nonconvex constraints with convex ones. However, the solutions of the resulting convex problem may be infeasible for the original nonconvex problem due to the nature of relaxations, which is now one of the bottlenecks of this technology. To mitigate the infeasibility issue, this paper proposes a data-driven approach to construct convex relaxations with stronger tightness and lower complexity (see Subsection II-B for more details). The resulting convex relaxation is applied to convexify the developed UaO framework. The paper demonstrates the UaO framework on a three-phase optimal power flow (3$\phi$OPF) problem with uncertainty introduced by distributed energy resources and uncontrollable loads. The 3$\phi$OPF is balanced for transmission networks and unbalanced for distribution networks. It is worth noting that, theoretically, the proposed methods can be
applied to general optimization problems under uncertainty.
\section{Theory of Data-Driven Convexification}
\subsection{Three-Phase Power Flow Equations}
In an OPF problem, the objective function is generally convex or linear. Thus, we focus on the main nonconvex constraints, i.e. the power flow (PF) equations, which are also considered as the mathematical model of power networks. Let $\mathcal{N}$ and $\Phi$ denote the sets of buses and phases respectively. For each $i$, $j \in \mathcal{N}$ and $\phi$, $\phi^\prime\in \Phi$, the compact formulation of three-phase PF equations \cite{hu2019ensemble} is given as
\begin{subequations} \label{PF}
\begin{align}
&e_i^\phi\sum_{j\in \mathcal{N}}\sum_{\phi^\prime\in \Phi}(G_{ij}^{\phi \phi^\prime} e_j^{\phi^\prime}-B_{ij}^{\phi \phi^\prime} f_j^{\phi^\prime}) \nonumber \\
&+f_i^\phi\sum_{j\in \mathcal{N}}\sum_{\phi^\prime\in \Phi}(B_{ij}^{\phi \phi^\prime} e_j^{\phi^\prime}
+G_{ij}^{\phi \phi^\prime} f_j^{\phi^\prime})=p_i^\phi +p_{S,i}^\phi\\
&f_i^\phi\sum_{j\in \mathcal{N}}\sum_{\phi^\prime\in \Phi}(G_{ij}^{\phi \phi^\prime} e_j^{\phi^\prime}-B_{ij}^{\phi \phi^\prime} f_j^{\phi^\prime}) \nonumber \\
&-e_i^\phi\sum_{j\in \mathcal{N}}\sum_{\phi^\prime\in \Phi}(B_{ij}^{\phi \phi^\prime} e_j^{\phi^\prime}+G_{ij}^{\phi \phi^\prime} f_j^{\phi^\prime})=q_i^\phi
+q_{S,i}^\phi\\
&e_i^\phi\sum_{\phi^\prime\in \Phi}[G_{ij}^{\phi \phi^\prime} (e_j^{\phi^\prime}-e_i^{\phi^\prime})-B_{ij}^{\phi \phi^\prime} (f_j^{\phi^\prime}-f_i^{\phi^\prime})]\nonumber \\
&+f_i^\phi\sum_{\phi^\prime\in \Phi}[B_{ij}^{\phi \phi^\prime} (e_j^{\phi^\prime}-e_i^{\phi^\prime})
+G_{ij}^{\phi \phi^\prime} (f_j^{\phi^\prime}-f_i^{\phi^\prime})]=p_{ij}^\phi \\
&f_i^\phi\sum_{\phi^\prime\in \Phi}[G_{ij}^{\phi \phi^\prime} (e_j^{\phi^\prime}-e_i^{\phi^\prime})-B_{ij}^{\phi \phi^\prime} (f_j^{\phi^\prime}-f_i^{\phi^\prime})] \nonumber \\
&-e_i^\phi\sum_{\phi^\prime\in \Phi}[B_{ij}^{\phi \phi^\prime} (e_j^{\phi^\prime}-e_i^{\phi^\prime})+G_{ij}^{\phi \phi^\prime} (f_j^{\phi^\prime}-f_i^{\phi^\prime})]=q_{ij}^\phi \\
&(e_i^\phi)^2+(f_i^\phi)^2=v_i
\end{align}
\end{subequations}
where $p_{S,i}^\phi$ and $q_{S,i}^\phi$ denote the stochastic components of active and reactive power injections at each bus. Each of the quadratic equations in (\ref{PF}) can be compactly formulated as
\begin{equation} \label{CompactPF}
g(x)=x^{\rm{T}}A x=y=z+u
\end{equation}
where $z$ and $u$ denote the deterministic and stochastic components of the power injections respectively. Further define a set $\Omega = \{(x,\,y) |\,\underline{p}_i^\phi \le p_i^\phi +p_{S,i}^\phi \le \overline{p}_i^\phi,\,\underline{q}_i^\phi \le q_i^\phi +q_{S,i}^\phi \le \overline{q}_i^\phi, \, (p_{ij}^\phi)^2+(q_{ij}^\phi)^2 \le \overline{S}_{ij}, \, \text{and}\, \underline{v}_i \le v_i \le \overline{v}_i, \, \forall i \in \mathcal{N}\, \text{and}\, \phi \in \Phi \}$, then the feasible set of 3$\phi$PF (\ref{PF}) is $\Psi = \{(x,\,y)\in \Omega |\,(2),\,\forall i \in \mathcal{N}\, \text{and}\, \phi \in \Phi \}$. Note that $\Omega$ is convex while $\Psi$ is not. Moreover, the three-phase \textit{DistFlow} model of radial networks is also a nonconvex quadratic system that can be represented in the form of (\ref{CompactPF}). That means the proposed methods can be directly applied to DistFlow-based 3$\phi$OPF.
\subsection{Data-Driven Convex Relaxation}
In this subsection, a methodology of data-driven convex relaxation is established and applied to construct a tight convex quadratic relaxation of the 3$\phi$PF equations (\ref{PF}). For the sake of simplicity, we start from a deterministic case, namely $u=0$. Let $\mathcal{D}$ denote a historical data set whose $k$th data point is $D^{(k)}=( x^{(k)}, y^{(k)})$; we have $\mathcal{D} \subset \Psi$. The following regression algorithm is proposed to train $\mathcal{D}$ to obtain a positive semi-definite (PSD) matrix $P$ and a complementary vector $B$ and scalar $c$ for each quadratic equation in (\ref{CompactPF}) (i.e. (\ref{PF})):
\begin{subequations} \label{Regression}
\begin{align}
\min_{P,B,c}\;& \frac{1}{|\mathcal{D}|} \sum_kt^{(k)} \label{Reg1} \\
\mathrm{s.t.}\; & (x^{(k)})^{\rm{T}}Px^{(k)}+B^{\rm{T}}x^{(k)}+c - y^{(k)} = m^{(k)} \le 0 \label{Reg2} \\
&\left[
\begin{array}{cc} 1 & m^{(k)} \\
m^{(k)} & t^{(k)} \\
\end{array}
\right],\,P \succeq 0 \label{Reg3}
\end{align}
\end{subequations}
where $k=1,\,\ldots,\, |\mathcal{D}|$.
The dimensions of $P$'s, $B$'s, and $c$'s are consistent with the dimensions of the corresponding quadratic equations in (\ref{PF}). Note that the $A$ matrix in (1e) is already PSD. Therefore, we don't need to train a $P$ for (1e). The optimization model (\ref{Regression}) is a standard semidefinite programming problem which can be effectively and globally solved by mature solvers like MOSEK, GUROBI, and CPLEX.
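Since the LMI in (\ref{Reg3}) forces $t^{(k)} \ge (m^{(k)})^2$, regression (\ref{Regression}) effectively minimizes the mean squared one-sided residual. The following sketch illustrates this structure on synthetic data; it replaces the exact SDP solver used here with a simple Cholesky-parameterized local solver, so it is an illustration of the regression, not the proposed algorithm:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch, not the paper's SDP solver: minimize the mean squared
# one-sided residual m = x'Px + B'x + c - y with m <= 0 and P PSD.
# P >= 0 is enforced via the parameterization P = L L', and m <= 0
# via a quadratic penalty (the paper solves the exact SDP instead).
rng = np.random.default_rng(0)
n, K = 2, 200
X = rng.uniform(-1.0, 1.0, size=(K, n))
y = np.einsum('ki,ki->k', X, X) + 0.1          # synthetic data: y = |x|^2 + 0.1

ntri = n * (n + 1) // 2                        # lower-triangular entries of L

def unpack(theta):
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = theta[:ntri]
    return L @ L.T, theta[ntri:ntri + n], theta[-1]

def objective(theta, mu=100.0):
    P, B, c = unpack(theta)
    m = np.einsum('ki,ij,kj->k', X, P, X) + X @ B + c - y
    return np.mean(m**2) + mu * np.mean(np.maximum(m, 0.0) ** 2)

theta0 = 0.1 * rng.standard_normal(ntri + n + 1)
res = minimize(objective, theta0, method='L-BFGS-B')
P_fit, B_fit, c_fit = unpack(res.x)
```

Because the synthetic data are exactly quadratic, the fitted residual is driven close to zero, and $P_{fit}=LL^{\rm T}$ is PSD by construction.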
Define a quadratic convex set:
\begin{equation}
\Theta =\{(x,\,y) \in \Omega\,|\, x^{\rm{T}}Px+B^{\rm{T}}x +c \le y, \forall i \in \mathcal{N}\, \text{and}\, \phi \in \Phi \}, \nonumber
\end{equation}
we have the following theorem.
\textbf{Theorem of Data-driven Convex Relaxation}. \textit{The set $\Theta$ is a convex relaxation of the feasible set $\Psi$ of the original three-phase AC power flow} (\ref{CompactPF}) \textit{if:}
\textit{a) the PSD matrices $P$'s, vectors $B$'s, and scalars $c$'s are obtained by training $\mathcal{D}$ using the regression algorithm (\ref{Regression}),}
\textit{b) $\mathcal{D}$ contains all extreme points\footnote{An extreme point of a convex set is a point in this set that does not lie in any open line segment joining two points of this set \cite{bazaraa2013nonlinear}. We use this definition to define an extreme point of nonconvex sets.} of $\Psi$.}
\noindent
\textit{Proof}: Constraint (\ref{Reg2}) guarantees that each $D^{(k)} \in \mathcal{D}$ satisfies
\begin{equation}
(x^{(k)})^{\rm{T}}Px^{(k)}+B^{\rm{T}}x^{(k)} +c \le y^{(k)}, \nonumber
\end{equation}
which implies $\mathcal{D} \subset \Theta$. Therefore, $\Theta$ is a convex quadratic relaxation of $\mathcal{D}$ since $P$'s are PSD.
All extreme points of a feasible set are linearly independent according to the definition \cite{bazaraa2013nonlinear}. Suppose $\psi$ is an arbitrary point in $\Psi$, there must exist a vector of extreme points $X=[\theta_1,\theta_2,...,\theta_l]^{\rm{T}}$ of $\Psi$ and a vector of multipliers $\alpha=[\alpha_1, \alpha_2,...,\alpha_l]^{\rm{T}}$ that satisfy
\begin{equation}
\psi=\alpha^{\rm{T}}X , \nonumber
\end{equation}
where $0 \le \alpha_i \le 1$ ($i=1,\,\ldots,\, l$), $\sum_i^l \alpha_i =1$, and $l$ equals the dimension of the ($x, y$)-space. Since all $\theta_i \in \mathcal{D} \subset \Theta$ according to condition b), then $\psi \in \Theta$ due to the convexity of $\Theta$. Therefore, $\Psi \subset \Theta$ as $\psi$ is an arbitrary point in $\Psi$, which means $\Theta$ is a convex relaxation of $\Psi$. \hfill$\square$
\noindent
\textbf{Remark}. Condition b) in the theorem of data-driven convex relaxation is not easy to strictly satisfy. However, under the concept of \textit{Big Data}, it is reasonable to assume that the data set $\mathcal{D}$ is big enough to represent the original feasible set $\Psi$, which implies $\Theta$ is highly close to a strictly convex relaxation of $\Psi$. Moreover, regression (\ref{Regression}) is a convex optimization problem that can be globally solved, which implies that $\Theta$ is the \textit{tightest} quadratic convex relaxation of $\Psi$.
\section{Uncertainty-aware Optimization Framework}
\subsection{Novel Measurement of Uncertainty}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Uncertainty_2.pdf}
\caption{Illustration of the impact of uncertainty and the proposed method.}
\label{fig:uncertainty}
\end{figure}
In the stochastic and chance-constrained frameworks, uncertainty is measured by probability distributions, while it is captured by deterministic sets under the robust framework \cite{aien2016comprehensive}. The impact of uncertainty on power system operation is illustrated in Fig. \ref{fig:uncertainty}. Suppose $u$ is the actual generation of stochastic resources at moment $t$ and its system response is $x$, as in Fig. \ref{fig:uncertainty}(a). The operation decision is generally made in advance (e.g. 5 minutes, 1 hour, or even a day before $t$) based on the forecast $\tilde{u}$ of $u$, as in Fig. \ref{fig:uncertainty}(b). One can consider that the uncertainty originates from the prediction error $|\tilde{u}-u|$, since the resulting model output error $|\tilde{x}-x|$ may lead to failures in ahead-of-real-time decision-making of power systems. In fact, a ``good prediction'' and a ``bad prediction'' may have totally different impacts on the operation, control, and planning of power systems. However, none of the existing optimization frameworks considers the information of the forecast in the measurement of uncertainty, which increases conservativeness. In this research, we use the prediction error $|\tilde{u}-u|$ as the new measurement of uncertainty, which will be incorporated into the following machine learning process.
\subsection{Uncertainty-aware Modeling}
Researchers in the field of renewable generation and load forecasting attempt to reduce the prediction errors $|\tilde{u}-u|$, while this research focuses on reducing the model output errors $|\hat{x}-x|$ in the process of system modeling given the information of $\tilde{u}$, for which the key is to properly learn a PSD matrix $P$ and a complementary vector $B$ and scalar $c$ for each quadratic equation in (\ref{PF}). To do so, we first design a historical data set $\bar{\mathcal{D}}$ that contains information on forecast errors, whose $k$th data point is $\bar{D}^{(k)}=(\tilde{u}^{(k)}, u^{(k)},z^{(k)}, x^{(k)})$, where the historical operating point $(x^{(k)}, u^{(k)}+z^{(k)})$ is a solution of (\ref{CompactPF}). Then, the regression model (\ref{Regression}) is modified as
\begin{subequations} \label{U_regression}
\begin{align}
&\min_{P,b}\; \text{(\ref{Reg1})} \quad \mathrm{s.t.}\; \text{(\ref{Reg3})} \; \text{and} \\
& (x^{(k)})^{\rm{T}}Px^{(k)}+B^{\rm{T}}x^{(k)} +c - u^{(k)}-z^{(k)} \le 0 \label{U_reg2} \\
& (x^{(k)})^{\rm{T}}Px^{(k)}+B^{\rm{T}}x^{(k)}+c - \tilde{u}^{(k)}-z^{(k)} = m^{(k)}, \label{U_reg3}
\end{align}
\end{subequations}
to train the new data set $\bar{\mathcal{D}}$. With the $P$, $B$ and $c$ inferred by regression (\ref{U_regression}), the following equation (\ref{UAM}) is defined as an \textit{uncertainty-aware model} (UaM) of 3$\phi$PF (\ref{CompactPF}), which relies on the predictions rather than the actual values (i.e. perfect predictions) of the stochastic parameters:
\begin{equation} \label{UAM}
h(x)=x^{\rm{T}}P x+B^{\rm{T}}x+c=\tilde{y}=z+\tilde{u}
\end{equation}
Regression process (\ref{Regression}) aims at fitting a convex mapping between the actual system response $x$ and the actual input $u$ of the stochastic variables. In contrast, regression (\ref{U_regression}) infers the convex mapping between $x$ and $\tilde{u}$ (the forecast of $u$ made in advance). For a future case ($x^{(|\mathcal{D}|+1)},y^{(|\mathcal{D}|+1)}$), a forecast $\tilde{u}^{(|\mathcal{D}|+1)}$ is used in ahead-of-real-time decision-making since the actual value $u^{(|\mathcal{D}|+1)}$ of the stochastic parameters is not available at that moment. With $\tilde{u}^{(|\mathcal{D}|+1)}$ as input, the UaM (\ref{UAM}) provides a close prediction $\hat{x}^{(|\mathcal{D}|+1)}$ of the actual system response $x^{(|\mathcal{D}|+1)}$, as illustrated in Fig.~\ref{fig:uncertainty}(c). Constraint (\ref{U_reg2}) guarantees that $\Theta$, with parameters inferred by (\ref{U_regression}), is a convex quadratic relaxation of the projection of $\bar{\mathcal{D}}$ onto the ($x$, $y$)-space.
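As a rough illustration of the constrained fit (\ref{U_regression}), the sketch below approximates it with a penalty formulation on synthetic data: the equality residuals $m^{(k)}$ against the forecasts are minimized in a least-squares sense, the relaxation constraint (\ref{U_reg2}) is enforced by a quadratic penalty, and positive semidefiniteness of $P$ holds by construction via the parametrization $P=LL^{\rm T}$. All dimensions, data, and the solver choice are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, K = 2, 200                                   # state dimension, #samples (illustrative)
X = rng.uniform(-1.0, 1.0, size=(K, n))         # historical states x^(k) (synthetic)
u = np.sum(X**2, axis=1) + 0.5                  # actual injections u^(k) (synthetic)
u_tilde = u * (1.0 + rng.uniform(-0.3, 0.3, size=K))  # forecasts with <=30% error

def unpack(theta):
    L = theta[:n * n].reshape(n, n)
    P = L @ L.T                                 # P = L L^T is PSD by construction
    B = theta[n * n:n * n + n]
    c = theta[-1]
    return P, B, c

def h(theta, X):
    # surrogate h(x) = x^T P x + B^T x + c, evaluated row-wise
    P, B, c = unpack(theta)
    return np.einsum('ki,ij,kj->k', X, P, X) + X @ B + c

def objective(theta):
    fit = np.mean((h(theta, X) - u_tilde) ** 2)            # track the forecasts (U_reg3)
    viol = np.mean(np.maximum(h(theta, X) - u, 0.0) ** 2)  # penalize h(x) > u (U_reg2)
    return fit + 100.0 * viol

theta0 = np.concatenate([np.eye(n).ravel(), np.zeros(n), [0.0]])
res = minimize(objective, theta0, method='L-BFGS-B')
P_hat, B_hat, c_hat = unpack(res.x)
```

A dedicated SDP solver would handle the PSD and inequality constraints exactly; the penalty version is only meant to convey the structure of the learning problem.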
\subsection{Uncertainty-aware 3$\phi$OPF}
With the typical objective that minimizes generation costs, the uncertainty-aware 3$\phi$OPF can be compactly formulated as
\begin{equation} \label{OPF}
\min_{z}\,\left \{ \sum_{i \in \mathcal{G}} \sum_{\phi \in \Phi}C(z^{\phi}_{i}):\,(x,z+\tilde{u}) \in \Theta \right\},
\end{equation}
where $\mathcal{G}$ denotes the generator set, and the $P$'s, $B$'s, and $c$'s in $\Theta$ are inferred by training on $\bar{\mathcal{D}}$ using regression (\ref{U_regression}). A typical objective of OPF in distribution systems is to minimize the active power drawn from transmission grids. It is worth noting that (\ref{OPF}) is a deterministic optimization problem, which is much less complex than the existing robust, stochastic, and chance-constrained optimization frameworks.
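To make the deterministic character of (\ref{OPF}) concrete, the toy sketch below solves a one-variable instance: minimize the generation $z$ subject to the learned surrogate balance $h(x)=z+\tilde{u}$. The surrogate parameters, the forecast value, and the solver are all hand-picked illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Pretend these were learned by the uncertainty-aware regression:
P, B, c = np.array([[1.0]]), np.array([0.0]), 0.0
u_tilde = 0.5                                   # forecast injection (illustrative)

def h(x):
    # learned surrogate h(x) = x^T P x + B^T x + c
    return float(x @ P @ x + B @ x + c)

# decision vector v = [x, z]; cost = z, constraint h(x) = z + u_tilde, z >= 0
res = minimize(lambda v: v[1],
               x0=np.array([1.0, 1.0]),
               constraints=[{'type': 'eq',
                             'fun': lambda v: h(v[:1]) - v[1] - u_tilde}],
               bounds=[(None, None), (0.0, None)])
x_opt, z_opt = res.x
```

At the optimum the generation hits its lower bound ($z=0$) and the state adjusts so that $x^2 = z + \tilde{u}$, i.e., a single nonlinear program with no scenario enumeration.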
\section{Numerical Experiment}
Due to the page limit, we only present preliminary studies on three scenarios of two small balanced networks, i.e., a 5-bus and a 57-bus system. More comprehensive numerical studies based on real-world data will be presented in future publications to compare the proposed methods with existing convex relaxations and uncertainty-involved optimization frameworks. The three scenarios compared in this experiment are: 1) the original ACOPF with perfect predictions $u$ of stochastic power injections, which is used as the reference; 2) the original ACOPF with inaccurate predictions $\tilde{u}$, which simulates actual practice; and 3) the proposed UaO framework with inaccurate predictions $\tilde{u}$.
In the first step of generating the training data sets $\bar{\mathcal{D}}$, a set of 5000 load profiles $y=z+u$ for each test system is randomly produced. Then, the voltage profile $x$ for each load profile of each system is obtained by solving PF (\ref{PF}). Finally, the prediction $\tilde{u}$ is randomly generated based on the corresponding $u$, assuming that the maximum forecast error is $\pm 30\%$, namely $|\tilde{u}-u| \le 0.3|u|$. The UaO models (\ref{OPF}) are obtained by training on the data sets $\bar{\mathcal{D}}$ of the two systems, respectively, using regression (\ref{U_regression}). The three scenarios mentioned above are compared on 50 load cases for each test system. The average errors of objective values defined in Table \ref{Results} are used to quantify the performance, where $C_k^{(i)}$ denotes the optimal cost of the $k$th scenario in the $i$th load case. It can be observed that, with inaccurate forecasts, the original ACOPF (scenario 2) produces inaccurate solutions. Nevertheless, the UaO (scenario 3) provides better solutions than the original ACOPF model does, since the forecast errors are taken into account in the data-driven modeling process. Although the UaO framework is still not able to produce a strictly accurate solution, it can be improved by training on a larger, better data set due to its learning-based nature.
\begin{table}[h]
\centering
\caption{Average Errors of Optimal Costs}
\label{Results}
\begin{tabular}{lcc}
\hline \hline
& $E_1=\frac{1}{50}\sum_{i=1}^{50}\frac{|C_1^{(i)}-C_2^{(i)}|}{C_1^{(i)}}$ & $E_2=\frac{1}{50}\sum_{i=1}^{50}\frac{|C_1^{(i)}-C_3^{(i)}|}{C_1^{(i)}}$ \\ \hline
\textbf{5-bus} & 20.21\% & 3.72\% \\
\textbf{57-bus} & 21.43\% & 1.66\% \\ \hline \hline
\end{tabular}
\end{table}
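The error measure in Table~\ref{Results} is a simple average of relative cost deviations; a minimal sketch (with illustrative cost lists) is:

```python
def avg_cost_error(costs_ref, costs_test):
    """Average relative deviation of optimal costs from the reference scenario,
    i.e. E = (1/N) * sum_i |C_ref_i - C_test_i| / C_ref_i."""
    errs = [abs(r - t) / r for r, t in zip(costs_ref, costs_test)]
    return sum(errs) / len(errs)
```

For example, `avg_cost_error([100.0, 200.0], [110.0, 180.0])` yields 0.1, i.e., a 10\% average error.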
\section{Conclusion and Future Work}
This paper presents a preliminary study on a novel UaO framework, which shows that the UaO framework can effectively mitigate the impacts of uncertainty in solving 3$\phi$OPF. Our future work mainly consists of two aspects: first, exploring advanced machine learning technologies, such as ensemble learning \cite{murphy2012machine}, to improve the efficiency of the UaO framework; second, applying the UaO framework to model power system optimization problems other than OPF.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{}
\section{Case Analysis}
\label{sec:caseanalysis}
In this section, we present some cases from the projects we studied, which may help to explain our findings.
\subsection{\emph{Java}{}}
One of the reasons that \emph{Java}{} requires verbose code modification is its explicit declarations of types, return types, and so on. For example, the GitHub repository \emph{ReactiveX/RxJava} is a Java library for composing asynchronous and event-based programs using observable sequences for the Java VM. In RxJava, commit 0094304, which fixes issue \#4430, changes the exception-processing class from Exceptions to ExceptionHelper. To do this, two types of modification are made:
1) Change the import statements (88 lines changed in total):
\verbatimfont{\footnotesize}%
\begin{verbatim}
- import io.reactivex.exceptions.Exceptions;
+ import io.reactivex.internal.util.ExceptionHelper;
\end{verbatim}
2) Change the method calls (122 lines changed in total):
\verbatimfont{\footnotesize}%
\begin{verbatim}
- throw Exceptions.propagate(ex);
+ throw ExceptionHelper.wrapOrThrow(ex);
\end{verbatim}
Altogether, 210 lines in 60 source code files are changed, which is a large amount of code modification.
\subsection{\emph{Ruby}{}}
In contrast, \emph{Ruby}{} is much more flexible. One flexible feature of \emph{Ruby}{} is ``monkey patching'', which refers to the practice of extending or modifying existing code by changing classes at run-time. It is a powerful technique that has become popular in the Ruby community: any class can be re-opened at any time and amended in any way. However, monkey patching in general leaves the code wide open to major, potentially undiagnosable clashes. Suppose there is a class A, a monkey-patching module MB that patches A to include method1, method2, and method3, and another monkey-patching module MC that also patches A to include method2, method3, and method4. When instance_of_A.method2 is called, whose method gets invoked? Such undiagnosable clashes make bugs very hard to track.
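This kind of clash can be reproduced in a few lines. The sketch below uses Python, which permits the same style of run-time patching; the module names MB and MC mirror the hypothetical example above:

```python
class A:
    pass

def _mb_method2(self):          # patch contributed by "module MB"
    return "MB"
A.method2 = _mb_method2

def _mc_method2(self):          # later patch from "module MC", same method name
    return "MC"
A.method2 = _mc_method2         # silently overrides MB's version

result = A().method2()          # the last patch wins; MB's behavior is lost
```

No warning is emitted at the point of the second patch, which is exactly why such clashes are hard to diagnose.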
\section{Implications and Discussion}
\label{implication}
In this section, we discuss several implications of our results.
As already explained in the introduction, although the findings of our correlation analysis may not be fully generalized to imply the underlying causality nor thoroughly interpretable, they do nevertheless provide suggestions and guidance to developers and researchers.
\subsection{Implications for Developers and Managers}
Our results provide more support for developers when choosing languages, particularly when bug-handling effort is a concern. Of course, the choice of programming language for a project is a complex process, involving a variety of factors that may or may not be technical. We do not claim that the result in this paper is in any way sufficient to solve this problem, but the findings clearly indicate that the choice of programming language has a noticeable impact on bug-handling effort, and could be used by programmers as part of the consideration.
Managers may use our results too. Estimating and scheduling tasks is a major part of the software engineering process, and the task of bug handling is no exception. Our results show that the bugs of different languages have different handling costs, which shall be factored into the estimating process. This is particularly true for multi-language projects, where one may need to consider the language attribute of each bug when assigning them.
Moreover, some languages (e.g., \emph{Ruby}{}, \emph{Objective-C}{}, and \emph{JavaScript}{}) deserve more attention in testing, as they typically require more bug-handling time, and therefore are more costly to maintain.
\subsection{Implications for Researchers}
Our results could provide the following guidelines for researchers.
First, languages could be considered in the research of automatic bug-handling-effort prediction, a problem that has long been recognized as difficult, but with broad practical benefits~\cite{kaur2014software}. Many researchers~\cite{kaur2014software,riaz2009systematic,wohl1982maintainability,hayes2005maintainability} have made dedicated efforts to improve the precision of such predictions. However, none of the existing work has considered the impact of programming languages, which we think is a missed opportunity. In Section~\ref{sec:prediction}, we conducted an experiment with a very simple model, and demonstrated that predictive accuracy can indeed be improved using the data collected for different programming languages. The predictive model we used is obviously too simple to be useful for serious prediction, but nevertheless the positive result reaffirms our findings and suggests a possible more accurate approach for automatic prediction.
Second, different languages may need different sizes of patches in the research related to automatic bug fixing. Judged by the amount of line and file modification required, our results suggest that larger patches may be considered for automatically fixing for \emph{C\#}{}, \emph{Java}{}, and \emph{Go}{}. These languages also need larger search space (across more lines and files) for finding proper code patches, and thus may be more challenging to handle than others.
Moreover, the languages requiring more bug-handling time, such as \emph{Ruby}{} and \emph{Objective-C}{}, are more costly to maintain, and therefore shall be the focus of automatic debugging and fixing research, as there is more to be gained.
\section{Threats to Validity and Efforts to Reduce Them}
\label{threats}
The threat to internal validity lies in the implementation of the study. To reduce this threat, the first three authors independently reviewed the experimental scripts of the empirical study to ensure correctness.
The threats to external validity mainly lie with the subjects. We decided to pick the most popular projects of each language, which by definition are not representative. However, we believe that it is more interesting to study the best efforts of the communities, as the alternative of randomly selecting projects is likely to pollute the data with non-serious projects, which this study aims to avoid. The large number of projects
used in our experiment also helps in reducing this threat.
The threats to construct validity lie in how we accurately reflect the impact of languages. To reduce this threat, we have made a range of efforts.
\emph{Large dataset and Multiple measurement metrics.} Our experiment is of a large scale, and we employ a variety of metrics to measure the impact of
languages. The thinking is that while each of the metrics alone may not be a sufficient proxy of bug-handling effort, they may work together collectively and complement each other. Particularly as we have seen, the use of two categories of metrics (modification and time) resulted in a comprehensive set of findings, which more accurately reflect the complex nature of bug-handling.
\emph{Data validation.} We pay special attention to the validity of our dataset. We took a random sample of the data we collected, involving 585 commits from all selected projects, and manually checked them. We found that 90\% of it is clean (i.e., involving only the fixing of a single bug, and all the code modification is related to the bug-fixing), showing a high-degree of data validity.
Moreover, we use the interval between the opening time and the time of the last comment as bug-handling time, which is shown to be a more accurate measurement of bug-handling time, than the seemingly more obvious choice of the interval between the opening and closing time~\cite{Zheng:2015:MIC:2786805.2786866}.
\emph{Multiple analysis approaches.} To reduce the risk of bias caused by a single analysis approach, we adopt three different ones: direct observation, median-value analysis, and multiple-regression analysis. The consistency in the results of our different analyses affirm the reliability of each. Moreover, we use variable controlling, by considering absolute measurements as well as relative ones, and treating four well-known influential factors as control variables in multiple regression.
\section{Related Work}
\label{sec:relatedwork}
Apart from the studies on bug-handling effort discussed in Section~\ref{sec:motivation}, there is other work that compares programming languages in other aspects, particularly software quality (i.e., the number of bugs generated rather than the effort of handling them).
Phipps~\cite{phipps1999comparing} conducted an experiment to compare programmer productivity and defect rate for \emph{Java}{} and \emph{C++}{}, and concluded that \emph{Java}{} is superior.
Daly et al.~\cite{daly2009work} empirically compared programmer behaviors under the standard \emph{Ruby}{} interpreter and DRuby, which adds static type checking to \emph{Ruby}{}. They found ``DRuby's warnings rarely provided information about potential errors''.
Hanenberg et al.~\cite{hanenberg2010experiment} conducted an empirical study on the impact of a static type system for the development of a parser. The results show that ``the static type system has neither a positive nor a negative impact on an application's development time''.
Harrison et al. \cite{harrison1996comparing} conducted a quantitative evaluation on functional and object-oriented paradigms to investigate the code-quality difference between them. They found no significant difference in direct measures of the development metrics, such as the number of known errors, but found significant differences in indirect measures, such as the number of known errors per thousand non-comment source lines. Kochhar et al.~\cite{kochhar2016large} studied the effects of using multi-languages setting on code quality, and found that projects with multiple languages are error-prone. Ray et al.~\cite{ray2014large} investigated the effects of different programming languages on code quality. The results indicate that strong languages have better code quality than weak languages.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we present a large-scale study to investigate whether some or some categories of programming languages would require more bug-handling effort than others. The experimental results indicate various interesting findings that can provide guidelines for developers, managers, and researchers: \emph{Java}{} tends to require more line/file modification than other languages but less bug-handling time, while \emph{Ruby}{} tends to require more bug-handling time as well as more line/file modification; weak languages (e.g., \emph{Ruby}{}, \emph{JavaScript}{}, and \emph{Objective-C}{}) tend to require more time than strong languages (e.g., \emph{Go}{}, \emph{Java}{}, and \emph{Python}{}); static languages (e.g., \emph{Java}{}, \emph{C\#}{}, \emph{Objective-C}{}) require more line/file modification than dynamic ones; considering programming languages is able to improve the effectiveness when predicting bug-handling effort.
\section{Experimental Setup}
\label{ExperimentalSetup}
This study is designed to answer the following research questions.
\begin{itemize}
\item [RQ1:] What is the bug-handling effort of different languages?
\item [RQ2:] What is the bug-handling effort of different language categories?
\item [RQ3:] Do application domains impact the comparison results of different languages?
\item [RQ4:] Does considering programming languages improve the accuracy of bug-handling-effort prediction?
\end{itemize}
Note that RQ1, RQ2, and RQ3 focus on comparing bug-fixing effort across programming languages. RQ4 explores the feasibility of using our results in a specific context. The presented model in RQ4 is a proof-of-concept, not intended to be practical.
\subsection{Target Programming Languages}
\label{sec:categories}
We consult five rankings of the most popular languages~\cite{mostpopone,mostpoptwo,mostpopthree,mostpopfour,mostpopfive}, and choose the following 10 (in alphabetical order) as our targets: \emph{C}{}, \emph{C\#}{}, \emph{C++}{}, \emph{Go}{}, \emph{Java}{}, \emph{JavaScript}{}, \emph{Objective-C}{}, \emph{PHP}{}, \emph{Python}{}, and \emph{Ruby}{}.
As in previous work~\cite{ray2014large}, we categorize the languages according to two well-known classifications, namely compilation and typing, as shown in Table~\ref{tab:langcate}. The \emph{compilation} classification divides a target language into static or dynamic categories depending on whether types are checked statically during compilation or dynamically during run time. The \emph{type} classification divides a target language into strong-typing and weak-typing categories depending on how strictly types are distinguished~\cite{ray2014large}. We call statically and dynamically checked languages ``static languages'' and ``dynamic languages'', and call strong- and weak-typing languages ``strong languages'' and ``weak languages''. Note that as the main aim of this work is to study popular languages, we do not seek comprehensive coverage of the classification. For example, the language list does not include any predominately functional language.
\begin{table*}[t]\small
\centering
\caption{Categories of the target programming languages}
\label{tab:langcate}
\vspace{0mm}
\begin{tabular}{p{3cm}|p{3cm}|c|c|c|c|c|c|c|c|c|c}
\toprule
Language Class&Categories&C&C\#&C++&Go&Java&JavaScript&Objective-C&PHP&Python&Ruby\\
\hline
\multirow{2}{*}{Compilation}&static&*&*&*&*&*&&*&&&\\
\cline{2-12}
&dynamic&&&&&&*&&*&*&*\\
\hline
\multirow{2}{*}{Type}&strong&&*&&*&*&&&&*&*\\
\cline{2-12}
&weak&*&&*&&&*&*&*&&\\
\bottomrule
\end{tabular}
\end{table*}
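The two classifications can be encoded as a small lookup, reproduced directly from Table~\ref{tab:langcate}:

```python
# Category membership transcribed from the table (compilation and type classes)
STATIC = {"C", "C#", "C++", "Go", "Java", "Objective-C"}
STRONG = {"C#", "Go", "Java", "Python", "Ruby"}

def categories(lang):
    """Return the (compilation, type) categories of a target language."""
    return ("static" if lang in STATIC else "dynamic",
            "strong" if lang in STRONG else "weak")
```

For instance, `categories("Ruby")` gives `("dynamic", "strong")` and `categories("C")` gives `("static", "weak")`.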
\subsection{Subjects and Control Variables}
\label{sec:subjects}
All our subjects are open-source projects from GitHub~\cite{githubhomepage}. For each target language, we retrieve the project repositories that are primarily written in that language, and select the 60 most popular projects based on their number of stars, as in prior work~\cite{starpopular, ray2014large,bhattacharya2011assessing}.
Figure~\ref{fig:violinsub} presents the density distribution of the basic information of all the projects. Four types of information are presented: 1) \emph{SLOC}: the physical executable lines of code, calculated by the tool \emph{CLOC}~\cite{clochomepage}. For multi-language projects, we only adopt the SLOC of the primary language reported by \emph{CLOC}. 2) \emph{\#Commit}: the total number of commits downloaded from the GitHub API~\cite{githubapihomepage}. 3) \emph{Age}: the age of each project, obtained by subtracting the creation time (stored in the GitHub API) of each project from the time \emph{T12:00:00Z, May 4, 2017}. 4) \emph{\#Contributor}: the number of contributors, also collected through the API.
From the figure, the projects of different languages tend to have different sizes, ages, and so on, which may influence the bug-handling effort. As in previous work~\cite{ray2014large}, we consider them as control variables in regression analysis (see more in Section~\ref{sec:analysisapproach}).
\begin{figure*}[t]
\center
\begin{tabular}{cccc}
\includegraphics[width = 0.24\linewidth,totalheight = 0.17\textheight]{plots/sloc_violin}
\includegraphics[width = 0.24\linewidth,totalheight = 0.17\textheight]{plots/commit_violin}
\includegraphics[width = 0.24\linewidth,totalheight = 0.17\textheight]{plots/age_violin}
\includegraphics[width = 0.24\linewidth,totalheight = 0.17\textheight]{plots/contributors_violin}
\end{tabular}
\vspace{-5mm}
\caption{Density distribution of subjects. Each language has a violin plot, which shows the probability density of the data at different values using the plot width. For example, in the last figure showing the number of contributors, most \emph{Java}{} and \emph{Objective-C}{} projects have a small number of contributors.}
\label{fig:violinsub}
\end{figure*}
\subsection{Measurements for Bug-Handling Effort}
\label{sec:criteria}
To measure bug-handling effort, prior work used amount of line modification~\cite{bhattacharya2011assessing} or bug-handling time~\cite{kleinschmager2012static} as criteria. But since bug-handling effort is complex to measure, using only a single metric is likely to introduce bias.
To alleviate this problem, in this paper we measure the bug-handling effort of a language in terms of three aspects: the amount of line modification, the bug-handling time, and the amount of file modification. Since the size of a project may impact the above measurements, we consider both the absolute and the relative number for each criterion. Thus, in all, we use six measurement criteria as follows, where $SLOC$ and $\#totalfiles$ refer to the total number of lines of code and the total number of files (of the primary language) of a project. As mentioned above, we do not expect any single one of the measurements alone to be sufficient in reflecting bug-handling effort accurately. Instead, we aim to achieve a higher level of confidence by having multiple of them complement each other.
\begin{enumerate}
\item $pLoc_{abs}${}: the absolute number of modified lines of code.
\item $pLoc_{rel}${}: the relative number of modified lines of code, i.e., $pLoc_{rel}${} $=$ $pLoc_{abs}${}$/$$SLOC$.
\item $pTime_{abs}${}: the absolute time for handling a bug.
\item $pTime_{rel}${}: the relative time for handling a bug, i.e., $pTime_{rel}${} $=$ $pTime_{abs}${}$/$$SLOC$.
\item $pFile_{abs}${}: the absolute number of modified files.
\item $pFile_{rel}${}: the relative number of modified files, i.e., $pFile_{rel}${} $=$ $pFile_{abs}${}$/$$\#totalfiles$.
\end{enumerate}
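The six criteria above reduce to simple ratios once the raw counts are available; a minimal sketch (with illustrative argument names) is:

```python
def effort_metrics(ploc_abs, ptime_abs, pfile_abs, sloc, total_files):
    """Compute the six bug-handling-effort measurements for one project.
    ploc_abs / ptime_abs / pfile_abs are the absolute line, time, and file
    measurements; sloc and total_files are project-size normalizers."""
    return {"pLoc_abs": ploc_abs,   "pLoc_rel": ploc_abs / sloc,
            "pTime_abs": ptime_abs, "pTime_rel": ptime_abs / sloc,
            "pFile_abs": pfile_abs, "pFile_rel": pfile_abs / total_files}
```

For example, a project with 100 modified lines out of 1000 SLOC and 4 modified files out of 40 has $pLoc_{rel}=0.1$ and $pFile_{rel}=0.1$.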
For each project, we collect the amount of line/file modification during bug handling by analyzing its commits, as in prior work~\cite{ray2014large,kamei2013large}. In particular, we
search the commits with messages containing both ``fix'' and ``bug'' (case insensitive), and treat them as bug-handling commits\footnote{We do not use other error-related keywords like ``issue'', ``mistake'', or ``fault'', because from our observation, these keywords are also widely used to describe problems unrelated to source code (e.g., problems in documents) and may pollute the screening results.}. Then we count the number of modified program files as well as the number of modified lines belonging to the project's primary language, so as to calculate $pLoc_{abs}${}, $pLoc_{rel}${}, $pFile_{abs}${}, and $pFile_{rel}${}.
When a bug gets fixed, a range of files may be modified or updated. In our measurement, we exclude non-code modifications such as documentation, but count all code changes to both source and test programs. This choice is deliberate, as we believe testing is an integral part of development, and the effort involved in updating test code is naturally part of bug handling and is language dependent. One obvious threat is that some bug-handling commits may contain code modification unrelated to the bug, particularly refactorings, which are likely to affect a disproportionally large amount of code. To check the severity of this bias, we manually analyzed 585 randomly chosen bug-handling commits from all our projects, and found that only 10.6\% of commits involve dealing with more than a single bug or other forms of code modification, showing a high level of data integrity. To further reduce this bias, for each project, we use the median value over all the bug-handling commits to represent the project's general level of line/file modification, which is ``less affected by outliers and skewed data''~\cite{bissyande2013got}.
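The commit-screening and median-summarization steps can be sketched as follows; the commit record fields are illustrative assumptions, not the actual GitHub JSON schema:

```python
import statistics

def is_bug_fix(message):
    """Keep commits whose message contains both "fix" and "bug" (case-insensitive)."""
    m = message.lower()
    return "fix" in m and "bug" in m

def median_modification(commits):
    """Summarize a project by the median modification size of its bug-fixing commits.
    Each commit is assumed to be a dict with 'message' and 'lines_changed' keys."""
    sizes = [c["lines_changed"] for c in commits if is_bug_fix(c["message"])]
    return statistics.median(sizes) if sizes else None
```

Using the median rather than the mean keeps a single large refactoring commit from dominating a project's summary.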
For each project, we acquire the time spent on bug handling by analyzing issue reports, as prior work did~\cite{bissyande2013got}. Note that we do not use commit information here as it only gives us the end time, not the corresponding start time of bug-handling.
Instead we search the issue tracking system for closed issues with labels containing
``bug'' (case insensitive), and extract
information from them.
Inspired by the work of Zheng et al.~\cite{Zheng:2015:MIC:2786805.2786866}, we define the handling time of each bug as the interval between the issue creation time and the time of the last comment, which is proven to be more accurate (than the interval between creation and closing time, which most previous work adopted~\cite{Zheng:2015:MIC:2786805.2786866}). Again, we use the median of all the time as a representation of the typical level of a project's bug-handling time so as to remove the impact of extreme values.
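The interval-based time measure can be sketched directly from issue timestamps; the timestamp format below is the ISO-8601 form GitHub uses, and the function name is illustrative:

```python
from datetime import datetime

def handling_hours(created_at, comment_times, fmt="%Y-%m-%dT%H:%M:%SZ"):
    """Bug-handling time in hours: interval between issue creation and the
    time of the last comment (rather than the issue closing time)."""
    start = datetime.strptime(created_at, fmt)
    last = max(datetime.strptime(t, fmt) for t in comment_times)
    return (last - start).total_seconds() / 3600.0
```

For example, an issue created at midnight whose last comment arrives a day later yields 24 hours, regardless of when the issue was formally closed.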
\subsection{Statistical Analysis Used in the Study}
\label{sec:analysisapproach}
We statistically analyze the experimental results from different aspects to improve the analysis reliability. If the conclusions are consistent, it is highly likely that the results are reliable.
First, since we collect a sufficient number of projects for each programming language, we directly present the density distribution of their bug-handling effort for each language~\cite{kampstra2008beanplot} and make comparison. For example, if most projects of language $L$ have lower $pTime_{abs}${}, then $L$ is likely to need less bug-handling time.
Second, we use the median value to represent a language's central tendency of bug-handling effort~\cite{srinivasan2007new,bissyande2013got,wonnacott1972introductory,meanandmedian}, and rank the languages with it, as it is known that median values are better than average values in avoiding the bias of outliers (i.e., extreme values that differ greatly from other values)~\cite{bissyande2013got,wonnacott1972introductory,meanandmedian}.
Third, we use multiple linear regression to indicate the contribution of different languages to bug-handling effort~\cite{ray2014large}. The comparison of bug-handling effort among different languages can be regarded as an importance-determination problem for categorical variables, and thus we can use multiple regression to identify which languages contribute more to the effort values. Through multiple regression, each language has a coefficient, with higher coefficients indicating more bug-handling effort. Besides coefficients, we also present the results of: 1) p-value: a low p-value ($<$ 0.05) indicates the rejection of the null hypothesis~\cite{westfall1993resampling}; 2) t-value: a statistic that measures the ratio between the coefficient and its standard error~\cite{winer1971statistical}; 3)~standard error: the average distance by which the observed values fall from the regression line; 4) R-squared value: how well the data fit the regression model.
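The regression setup above amounts to one-hot encoding the language as categorical columns alongside the control variables and fitting by ordinary least squares. A minimal sketch (dropping the first category as the baseline, and omitting the p-value/t-value machinery a statistics package would provide) is:

```python
import numpy as np

def ols_coefficients(languages, controls, effort):
    """Fit effort ~ intercept + language dummies + controls by least squares.
    Returns the sorted category list and the coefficient vector beta."""
    cats = sorted(set(languages))
    onehot = np.array([[1.0 if lang == c else 0.0 for c in cats]
                       for lang in languages])
    # drop the first category as the baseline to avoid collinearity
    X = np.hstack([np.ones((len(effort), 1)), onehot[:, 1:],
                   np.asarray(controls, float)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(effort, float), rcond=None)
    return cats, beta
```

A language's dummy coefficient is then its estimated effort contribution relative to the baseline category, after adjusting for the controls.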
\subsection{Experimental Procedure}
The experimental procedure of this study can be divided into data collection and data analysis.
\subsubsection{Data Collection}
Firstly, we collect the bug-handling-effort data of projects in various programming languages, which are used for further analysis.
\emph{Step 1. Information retrieval from GitHub API.} GitHub API provides comprehensive information on commits, issues, and project history. For commits, we download all the \emph{JSON} files of commits, which contain commit messages, the number of line additions and deletions, file changes, and so on. To compute bug-handling time, we download the \emph{JSON} files of issues, which contain issue title, labels, state, creation time, close time, and the time of every comment. Due to the restriction on GitHub API access (5,000 requests per hour), we skip the projects\footnote{16 projects are skipped.} with very large commit history (which cannot be downloaded within 24 hours).
\emph{Step 2. Extraction of related information}. As described in Section~\ref{sec:criteria}, we identify bug-handling commits and bug-handling issues through keyword searching. Some projects contain multiple languages, for which we only extract changed code belonging to their primary languages. Specifically, we use file extensions (e.g., ``.java'' for Java language) to identify relevant changes.
\emph{Step 3. Sanity check}. We observed that the ``most-popular'' criterion implies good general metrics such as \#issues, \#developers, and \#commits (1 project has fewer than 10 issues; 6 have fewer than 20 commits). Therefore, we focused on sanity metrics specific to our measurements: when checking bug-fixing line/file modification, we removed projects with no bug-fixing commits (65 removed), and chose 50 per language from the remaining; when checking bug-fixing time, we removed projects with no bug-fixing issues (137 removed), and chose 35 projects per language.
\subsubsection{Data Analysis}
After collecting the data, to answer RQ1, we use violin plots~\cite{hintze1998violin} to present the distribution of bug-handling effort across projects, then rank the languages based on the median values of all the projects of a language. Also, we calculate the multiple regression results as discussed in Section~\ref{sec:analysisapproach}. Finally, we combine the median-value and multiple-regression analysis results by adding up the rankings from the different analysis approaches for each language. For example, a language ranking the 3rd in median-value analysis and the 5th in multiple-regression analysis will have a total rank of 8 after the combination.
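The rank-combination step can be sketched as follows, matching the example in the text (3rd in median-value analysis plus 5th in multiple-regression analysis gives a total rank of 8):

```python
def combined_ranking(median_rank, regression_rank):
    """Combine per-language ranks from the two analyses by summation and
    return the languages ordered by their total (lower = less effort)."""
    total = {lang: median_rank[lang] + regression_rank[lang]
             for lang in median_rank}
    return sorted(total, key=total.get)
```

For instance, with Ruby ranked 1st and 2nd versus Java 3rd and 5th, Ruby's total of 3 places it ahead of Java's total of 8.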
To answer RQ2, we conduct regression analysis using language categories instead of languages and compare their coefficients.
To answer RQ3, we follow previous work~\cite{ray2014large} in manually classifying projects\footnote{To reduce the bias of manual classification, two authors classify all the projects separately, and then a third author re-classifies the projects with conflicting classifications.} into seven domains (as shown in Table~\ref{tab:domain}). For each domain, we delete the languages with no more than five projects and re-perform multiple regression with the remaining projects. We then compare the rankings within each domain with the ranking across all domains, and check whether some languages have better/worse performance in specific domains.
Finally, to answer RQ4, we build a toy classification model to predict whether a project has high, medium, or low bug-handling time. We compare the effectiveness of this predictive model with and without using programming languages as a feature, and check if the prediction accuracy is impacted.
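One plausible way to obtain the high/medium/low labels for such a classifier is tertile bucketing of the observed handling times; the sketch below is a hypothetical illustration, not the exact model used in RQ4:

```python
import statistics

def tertile_labels(times):
    """Label each project's bug-handling time as low/medium/high by tertiles."""
    q1, q2 = statistics.quantiles(times, n=3)  # two tertile cut points
    return ["low" if t <= q1 else "medium" if t <= q2 else "high"
            for t in times]
```

The language of each project can then be appended as an extra categorical feature, and predictive accuracy compared with and without it.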
More details of the data analysis procedure can be found in Section~\ref{sec:results}.
\begin{table}[t]\small
\centering
\caption{Classification of domains}
\label{tab:domain}
\vspace{0mm}
\begin{tabular}{l|p{2.2cm}|l|r}
\toprule
Domain&Description&Example&\#Projects\\
\hline
Application&end user programs& bitcoin, macvim&112\\
\hline
Database&SQL and NoSQL databases& mongodb, influxdb&25\\
\hline
CodeAnalyzer&compiler, parser, interpreter& ruby, php-src&44\\
\hline
Middleware&operating systems, virtual machines& mitmproxy, codis&32\\
\hline
Library&APIs, libraries& opencv, tensorflow&196\\
\hline
Framework&SDKs, plugins& django, redux&130\\
\hline
Other&--& Ghost, leetcode&45\\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{table}
\section{Introduction}
\label{sec:introduction}
It is accepted wisdom that maintenance dominates software development costs~\cite{LewisModernizingLegacy2003}, with bug-handling
being a major contributor~\cite{Sutherland:1995:BOC:210376.210394,Jorgensen:2007:SRS:1248721.1248736}.
The effort required for handling bugs (including locating and fixing the faulty code, and updating the test suite as a result) is likely to be impacted by the programming languages the software is built with~\cite{pajankarpython}. However, which or which category of languages performs better with respect to bug-handling has long been debated in industry and academia alike. For example, believers of static typing argue that static languages tend to result in better software quality and lower bug-handling cost, because type checking is an effective way of tracking bugs~\cite{dynbadone,dynbadtwo}. On the other hand, advocates of dynamic typing hold the belief that the ease of reading, writing, and understanding of dynamic languages would make bugs easier to find and fix~\cite{nierstrasz2005revival}.
There have been a number of previous attempts to resolve this confusion. For example, Bhattacharya et al.~\cite{bhattacharya2011assessing} analyzed four open-source projects developed in \emph{C}{} and \emph{C++}{}; Kleinschmager and Hanenberg et al.~\cite{kleinschmager2012static,hanenberg2014empirical} compared bug-handling time for \emph{Java}{} and \emph{Groovy}. However, these studies only examined a small number of subjects, and mostly focused on pair-wise language comparisons, which threatens the reliability of their results.
This paper presents a systematic large-scale comparison of bug-handling effort among different programming languages. Our work differs from previous work in the following aspects. First, we perform a comprehensive study of popular languages using a large number of projects: we choose 10 popular languages according to various rankings as our target languages and 600 projects (summing up to 70,816,938 SLOC and 3,096,009 commits). Second, we adopt a variety of measurement metrics (instead of the single one used in previous work): the (absolute and relative) amount of line modification, bug-handling time, and file modification. Third, we take special care to remove or reduce threats to result validity: 1)~we adopt a range of statistical analysis approaches and treat influential factors as control variables; 2)~we use the median values over a large number of projects and commits, which are considered ``less affected by outliers and skewed data'', to remove the bias caused by extreme circumstances~\cite{bissyande2013got,wonnacott1972introductory,meanandmedian}; 3)~we manually check the data we analyze to make sure that our experimental setup is reasonable.
It is worth pointing out that the relationship between programming languages and bug-handling effort can be extremely complicated, potentially affected by many factors. For this reason, we perform correlation analysis \textbf{rather than causal analysis} in this work. When we say a language, we refer to \textbf{the whole ecosystem} of the language, including tool support, developer experience, homogeneity of the code base and programming styles, adherence to or violation of best practices, the maturity of the community, and so forth. When we say bug-handling effort, we refer to measurable criteria including the (absolute and relative) amount of line modification, bug-handling time, and file modification.
The study indicates that bug-handling effort differs among programming languages. In particular, \emph{Java}{} and \emph{C\#}{} require more (absolute and relative) line modification but less (absolute) time during bug handling; \emph{Python}{} and \emph{PHP}{} tend to require less time and less line/file modification; \emph{Ruby}{} and \emph{JavaScript}{} tend to require less absolute line/file modification but more absolute and relative time. Static languages generally require more line/file modification than dynamic ones, while weakly and dynamically typed languages have more bug-handling time (in this paper, weakly and strongly typed languages are abbreviated as ``weak'' and ``strong'' languages). To further evaluate our results, we built a simple predictive model, which indicates that considering programming languages as a factor increases the effectiveness of bug-handling-effort prediction by 18.8\%.
The results may impact current software engineering practices in multiple ways.
For example, developers who care about bug-handling effort now have a more objective reference for choosing languages; the same goes for managers who plan and schedule projects. On a more technical note, automatic program repair has been an area of growing popularity~\cite{arcuri2008novel,hansson2015automatic,arcuri2008automation}. Our results may provide hints on whether some languages typically require larger patches or more file modification, and thus a larger search space for finding proper patches. Moreover, languages requiring high bug-handling effort may benefit more from automatic debugging, and thus could be better targets for such research.
These conclusions should not be generalized to imply underlying causality. Indeed, the same limitation exists in previous studies that analyze the relationships between programming languages and the characteristics of the software built with them~\cite{bhattacharya2011assessing, kleinschmager2012static, hanenberg2014empirical, steinberg2011impact, nierstrasz2005revival, tratt2009dynamically, sanner1999python, oliphant2007python}. The derived guidelines may not be thoroughly interpretable or directly actionable, but can still provide suggestions to developers and researchers.
Specifically, the main contributions of this paper are threefold.
\noindent \textbf{(1) A systematic and extensive study} of bug-handling effort among different programming languages. We perform a comprehensive study of 10 popular languages, and adopt a variety of metrics to measure bug-handling effort. We analyze the threats to result validity and take actions to remove or reduce them.
\noindent \textbf{(2) Empirical evidence} that \emph{Java}{} requires more line/file modification and less bug-handling time, while \emph{Ruby}{} requires less line/file modification and more bug-handling time. Static and strong languages tend to require less bug-handling time.
\noindent \textbf{(3) Practical guidelines} for both industry and academia. Developers now have more references when choosing languages. Managers may use our results for more accurate scheduling (especially for multi-language projects). For researchers, dynamic languages may need more automatic-tool support for bug handling; when automatically generating patches for weak languages, more lines of code and files may need to be searched, and larger patches may be preferred. When predicting bug-handling effort for a project, it is beneficial to consider programming languages. Some languages, such as \emph{Ruby}{}, \emph{Objective-C}{}, and \emph{JavaScript}{}, deserve more attention during software testing activities because their bug-handling process tends to be demanding.
The remaining parts of this paper are organized as follows. Section~\ref{sec:motivation} motivates our work by introducing
the current status of online debates over the bug-handling effort among different programming languages. Section~\ref{ExperimentalSetup} presents the details of our experiment design; Section~\ref{sec:results} introduces the findings as well as the corresponding analysis; Section~\ref{implication} discusses the implications of our findings on developers, managers, and researchers. Section~\ref{threats} discusses the threats to validity and our efforts in reducing them. Section~\ref{sec:relatedwork} introduces the related work. Section~\ref{sec:conclusion} concludes the paper.
\section{Motivation}
\label{sec:motivation}
In this section, we highlight the current status of confusion by presenting the contrasting views of practitioners as well as researchers to further motivate our work. The aim is to highlight the existence of the debate (which motivated us), rather than presenting a comprehensive survey. Therefore, we select the most representative online discussions and most related academic studies.
We surveyed the online discussions\footnote{``Online'' refers to the views of practitioners published through blogs, forums, homepages, QA websites, and so on.} on the topic by googling the following query: ``programming languages'' + ``maintainability$\mid$bug-handling effort$\mid$bug-fixing effort''. We collected all the views on the first three pages returned (on \emph{05/01/2017}). Due to space limits, we only give a snapshot of these results, shown in Table~\ref{tab:online}. The full results are on our homepage (link omitted to preserve anonymity). Column ``Link'' refers to different online sources, and Column ``Effort'' indicates the views expressed (whether the bug-handling effort is high or low for the (category of) language). For example, from the first two rows, three websites~\cite{webbackendlang,csharpgood,dyngoodthree} contain the view that dynamic languages have lower bug-handling effort, while others~\cite{dynbadtwo,dynbadone,dynbadthree} contain the opposite view. Similarly, from the following two rows, three websites~\cite{pajankarpython,pythongood,dyngoodthree} contain the view that \emph{Python}{} has lower bug-handling effort, whereas others~\cite{pythonbadtwo,manylang} contain the opposite view.
\begin{table}[t]\small
\centering
\caption{Online debate}
\label{tab:online}
\vspace{-1mm}
\begin{tabular}{p{2cm}|l|l|p{1cm}}
\toprule
&Target&Link&Effort\\
\hline
\multirow{2}{*}{ Category}&\multirow{2}{*}{ dynamic languages}
&~\cite{webbackendlang,csharpgood,dyngoodthree}&low\\
&&~\cite{dynbadtwo,dynbadone,dynbadthree}&high\\
\hline
\multirow{4}{*}{Language}&\multirow{2}{*}{\emph{Python}{}}&~\cite{pajankarpython,pythongood,dyngoodthree}&low\\
&&~\cite{pythonbadtwo,manylang}&high\\
\cline{2-4}
&\multirow{2}{*}{\emph{C\#}{}}&~\cite{manylang,csharpgood}&low\\
&&~\cite{oobad}&high\\
\cline{2-4}
&\multirow{2}{*}{\emph{C++}{}}&~\cite{manylang}&low\\
&&~\cite{dynbadthree}&high\\
\bottomrule
\end{tabular}
\end{table}
This inconsistency of opinions also exists in academia.
Bhattacharya et al.~\cite{bhattacharya2011assessing} statistically analyzed four open-source projects developed in \emph{C}{} and \emph{C++}{}. They measured maintainability by the number of lines modified during bug handling and found that the move from \emph{C}{} to \emph{C++}{} results in improved software quality and reduced maintenance effort. Kleinschmager and Hanenberg et al.~\cite{kleinschmager2012static,hanenberg2014empirical} compared the bug-handling time for \emph{Java}{} and \emph{Groovy}. Their results indicate that \emph{Groovy}, a dynamic language, requires more bug-handling time, and they concluded that static types are indeed beneficial in reducing bug-handling time. Steinberg~\cite{steinberg2011impact} found that static typing has a positive impact on debugging time if only non-type errors are considered.
On the other hand, some researchers are against the use of static languages.
Nierstrasz et al.~\cite{nierstrasz2005revival} described static languages as ``the enemy of change'', claiming that dynamic languages are easier to maintain. Tratt et al.~\cite{tratt2009dynamically} also mentioned that compared to dynamic languages, static languages have higher development cost and require more complex changes.
Sanner et al.~\cite{sanner1999python} described \emph{Python}{} as a ``smaller, simpler, easy to maintain, and platform independent'' language due to its dynamic typing features. Oliphant et al.~\cite{oliphant2007python} gave a similar verdict.
A common characteristic of these existing academic studies is that they are all small-scale, mostly focusing on pair-wise language comparisons that aim at isolating the effect of certain language features. These studies use only a small number of subjects, and mostly a single measurement criterion of bug-handling effort. In contrast, in this paper we aim to look at the big picture by considering a range of programming languages, bug-handling-effort measurement criteria, and analysis approaches. We also use a large number of projects to reduce bias and derive more reliable results.
\section{Results and Analysis}
\label{sec:results}
For each research question, we first present the direct observations through three types of analysis approaches (i.e., density distribution, median-value ranking, and multiple regression), and then summarize the conclusions, followed by reasoning and analysis.
\subsection{RQ1: Bug-Handling Effort among Programming Languages}
\subsubsection{Direct observations}
We first present the density distribution of each criterion for each language in Figure~\ref{fig:pldistribution}. From the figure, we can observe clear differences between languages. For example, looking at line modification, most \emph{PHP}{}, \emph{Python}{}, \emph{Ruby}{}, and \emph{C}{} projects have very low $pLoc_{abs}${} values, indicating that these languages tend to need less line modification during bug handling. On the other hand, the $pLoc_{abs}${} values of many \emph{C\#}{} and \emph{Java}{} projects are high. Looking at $pLoc_{rel}${}, most \emph{C++}{} and \emph{C}{} projects have very low values. Looking at bug-handling time, most \emph{Go}{}, \emph{Java}{}, \emph{Python}{}, and \emph{C\#}{} projects have low $pTime_{abs}${}, whereas most \emph{C}{} and \emph{C++}{} projects have low $pTime_{rel}${}. Looking at file modification, \emph{C}{}, \emph{C++}{}, and \emph{Python}{} tend to modify fewer files, while \emph{C\#}{}, \emph{Java}{}, \emph{Ruby}{}, and \emph{JavaScript}{} tend to modify more. Additionally, most \emph{C++}{}, \emph{C\#}{}, and \emph{Java}{} projects tend to modify a smaller proportion of files.
\begin{figure}[t]
\center
\begin{tabular}{cc}
\includegraphics[width = 0.47\linewidth,totalheight = 0.115\textheight]{plots/ploc_abs_violin}
\includegraphics[width = 0.47\linewidth,totalheight = 0.115\textheight]{plots/ploc_pro_violin}\\
\includegraphics[width = 0.47\linewidth,totalheight = 0.115\textheight]{plots/ptime_abs_violin}
\includegraphics[width = 0.47\linewidth,totalheight = 0.115\textheight]{plots/ptime_pro_violin}\\
\includegraphics[width = 0.47\linewidth,totalheight = 0.115\textheight]{plots/pfile_violin}
\includegraphics[width = 0.47\linewidth,totalheight = 0.115\textheight]{plots/pfilerop_violin}
\end{tabular}
\vspace{-3mm}
\caption{The distribution of bug-handling effort. The width of each violin plot shows the probability density of the data at different values. }
\label{fig:pldistribution}
\end{figure}
Next, to enable a linear ranking of languages, we compute the median criterion values for each language, with the results shown in Figure~\ref{fig:median}. From the figure, we have observations similar to those from Figure~\ref{fig:pldistribution}. In particular, \emph{PHP}{}, \emph{Python}{}, \emph{Ruby}{}, and \emph{C}{} have lower median $pLoc_{abs}${} of around 10 lines of code, while \emph{Java}{} and \emph{C\#}{} have higher median $pLoc_{abs}${} of around 20 lines. \emph{C++}{}, \emph{Go}{}, and \emph{C}{} have lower $pLoc_{rel}${} while \emph{Ruby}{} and \emph{Objective-C}{} have higher $pLoc_{rel}${}. \emph{Go}{}, \emph{Java}{}, \emph{PHP}{}, and \emph{Python}{} have lower $pTime_{abs}${} than \emph{Ruby}{}, \emph{JavaScript}{}, and \emph{Objective-C}{}. \emph{Go}{}, \emph{C\#}{}, and \emph{Java}{} have lower $pTime_{rel}${} than \emph{Objective-C}{} and \emph{Ruby}{}. For $pFile_{abs}${}, \emph{C\#}{}, \emph{Java}{}, and \emph{Go}{} have median values of two file modifications, \emph{JavaScript}{} has 1.5, while the remaining languages have 1.
\begin{figure}[t]
\center
\begin{tabular}{cc}
\includegraphics[width = 0.48\linewidth,totalheight = 0.115\textheight]{plots/ploc_abs}
\includegraphics[width = 0.48\linewidth,totalheight = 0.115\textheight]{plots/ploc_pro}\\
\includegraphics[width = 0.49\linewidth,totalheight = 0.115\textheight]{plots/ptime_abs}
\includegraphics[width = 0.48\linewidth,totalheight = 0.115\textheight]{plots/ptime_pro}\\
\includegraphics[width = 0.48\linewidth,totalheight = 0.115\textheight]{plots/pfile}
\includegraphics[width = 0.48\linewidth,totalheight = 0.115\textheight]{plots/pfile_pro}
\end{tabular}
\vspace{-5mm}
\caption{Ranking of bug-handling effort. Languages are ranked by their median values in increasing order. From this figure, we get conclusions similar to Figure~\ref{fig:pldistribution}.}
\label{fig:median}
\end{figure}
Next, we perform multiple regression analysis by treating the variables\footnote{We perform log transformation to stabilize the variance and improve the model fit~\cite{ray2014large}.} introduced in Figure~\ref{fig:violinsub} as control variables, which serve as additional inputs to the regression model. When regressing the relative values (such as $pLoc_{rel}$ and $pTime_{rel}$), we remove $SLOC$ from the control variables, because the relative values are computed by dividing by $SLOC$, which is itself a form of variable control.
The results are shown in Table~\ref{tab:regression-language}. The coefficients of the control variables ($SLOC$, $\#commit$, $age$, and $\#Contributor$) are not the focus of this paper and are thus omitted\footnote{These values can be found on our homepage (link omitted to preserve anonymity).}.
The remaining coefficients reflect each language's regression result under control.
For each measurement, the languages are ranked by their coefficient values (smallest first). From Table~\ref{tab:regression-language}, most of the regression results are significant (p-values smaller than 0.05), indicating the effectiveness of the regressions. Additionally, the language rankings are similar to those from Figure~\ref{fig:pldistribution}. In particular, \emph{Java}{} has more line and file modification but less bug-handling time; \emph{Ruby}{}, \emph{JavaScript}{}, and \emph{Objective-C}{} have higher absolute as well as relative bug-handling time.
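A bare-bones numerical sketch of this regression setup follows (toy values; the variable names such as `is_java` are ours for illustration, and the paper's actual model includes all ten language indicators and all control variables). It fits ordinary least squares on log-transformed variables with a language dummy:

```python
import numpy as np

# Toy data for six projects (hypothetical values, illustration only).
# Continuous controls: SLOC and #commit; dummy: is_Java (Ruby = reference).
sloc    = np.array([1e5, 2e5, 5e4, 8e4, 3e5, 6e4])
commits = np.array([500, 800, 300, 400, 900, 350])
is_java = np.array([1, 1, 0, 0, 1, 0])
ploc    = np.array([20.0, 25.0, 8.0, 10.0, 30.0, 9.0])  # e.g. pLoc_abs

# Log-transform the outcome and the continuous controls to stabilize variance.
X = np.column_stack([np.ones(6), np.log(sloc), np.log(commits), is_java])
y = np.log(ploc)

# Ordinary least squares; coef[3] is the Java coefficient under control.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)
```

Note that one language serves as the reference category here; including a dummy for every language alongside an intercept would make the design matrix collinear.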
\begin{table}[t]\small \renewcommand{\arraystretch}{0.8}
\centering
\caption{Multiple regression results of different languages. }
\label{tab:regression-language}
\vspace{0mm}
\begin{tabular}{l|lrrrr}
\toprule
\textbf{}&\textbf{Language}&\textbf{Coeff.}&\textbf{Std.Err.}&\textbf{t-value}&\textbf{Sig.}\\
\midrule
\multirow{10}{*}{ $pLoc_{abs}$}&
\emph{Python}{}&0.8303&0.6187&1.3420&\\
&\emph{C}{}&0.8552&0.6277&1.3630&\\
&\emph{Ruby}{}&0.9334&0.6353&1.4690&\\
&\emph{PHP}{}&0.9425&0.6287&1.4990&\\
&\emph{C++}{}&0.9908&0.6196&1.5990&\\
&\emph{JavaScript}{}&1.0079&0.6215&1.6220&\\
&\emph{Objective-C}{}&1.0500&0.6323&1.6610&.\\
&\emph{Go}{}&1.0905&0.6169&1.7680&.\\
&\emph{C\#}{}&1.1641&0.6254&1.8610&.\\
&\emph{Java}{}&1.2015&0.6254&1.9210&.\\
\midrule
\multirow{10}{*}{ $pLoc_{rel}$}&
\emph{C}{}&-2.3037&0.9085&-2.5360&*\\
&\emph{Go}{}&-2.2567&0.8895&-2.5370&*\\
&\emph{C\#}{}&-2.2545&0.9015&-2.5010&*\\
&\emph{C++}{}&-2.2429&0.8952&-2.5060&*\\
&\emph{Python}{}&-2.1008&0.8979&-2.3400&*\\
&\emph{Java}{}&-2.0292&0.9041&-2.2440&*\\
&\emph{PHP}{}&-2.0056&0.9128&-2.1970&*\\
&\emph{JavaScript}{}&-1.9721&0.9015&-2.1880&*\\
&\emph{Objective-C}{}&-1.9629&0.9174&-2.1400&*\\
&\emph{Ruby}{}&-1.8545&0.9247&-2.0060&*\\
\midrule
\multirow{10}{*}{ $pTime_{abs}$}&
\emph{Java}{}&2.1315&0.2858&7.4590&***\\
&\emph{C}{}&2.2908&0.3061&7.4850&***\\
&\emph{PHP}{}&2.3266&0.2975&7.8210&***\\
&\emph{Python}{}&2.3294&0.2939&7.9270&***\\
&\emph{Go}{}&2.3909&0.3078&7.7680&***\\
&\emph{C\#}{}&2.5586&0.3065&8.3470&***\\
&\emph{Objective-C}{}&2.5929&0.2597&9.9840&***\\
&\emph{JavaScript}{}&2.6070&0.2983&8.7400&***\\
&\emph{C++}{}&2.6186&0.3066&8.5410&***\\
&\emph{Ruby}{}&2.6932&0.2835&9.4980&***\\
\midrule
\multirow{10}{*}{ $pTime_{rel}$}&
\emph{Java}{}&0.5725&0.3042&1.8820&.\\
&\emph{Go}{}&0.6713&0.3255&2.0620&*\\
&\emph{C}{}&0.8069&0.3345&2.4120&*\\
&\emph{C\#}{}&0.8297&0.3233&2.5660&*\\
&\emph{C++}{}&1.0428&0.3311&3.1500&**\\
&\emph{Python}{}&1.0717&0.3280&3.2670&**\\
&\emph{PHP}{}&1.0718&0.3327&3.2210&**\\
&\emph{JavaScript}{}&1.2926&0.3315&3.8990&***\\
&\emph{Objective-C}{}&1.3388&0.2841&4.7130&***\\
&\emph{Ruby}{}&1.6224&0.3215&5.0460&***\\
\midrule
\multirow{10}{*}{ $pFile_{abs}$}&
\emph{C}{}&1.1852&0.5506&2.1530&*\\
&\emph{Python}{}&1.5221&0.5319&2.8620&**\\
&\emph{Go}{}&1.5819&0.5474&2.8900&**\\
&\emph{Objective-C}{}&1.6209&0.4670&3.4710&***\\
&\emph{C++}{}&1.6463&0.5487&3.0000&**\\
&\emph{PHP}{}&1.6770&0.5311&3.1580&**\\
&\emph{JavaScript}{}&1.9123&0.5381&3.5540&***\\
&\emph{Ruby}{}&1.9616&0.5215&3.7620&***\\
&\emph{Java}{}&2.1764&0.5121&4.2500&***\\
&\emph{C\#}{}&2.6081&0.5551&4.6980&***\\
\midrule
\multirow{10}{*}{ $pFile_{rel}$}&
\emph{Java}{}&0.2447&0.0287&8.5170&***\\
&\emph{C\#}{}&0.2562&0.0307&8.3430&***\\
&\emph{Go}{}&0.2577&0.0306&8.4280&***\\
&\emph{C}{}&0.2701&0.0318&8.4910&***\\
&\emph{Objective-C}{}&0.2772&0.0268&10.3310&***\\
&\emph{Ruby}{}&0.2833&0.0314&9.0170&***\\
&\emph{C++}{}&0.2880&0.0313&9.2000&***\\
&\emph{PHP}{}&0.2908&0.0314&9.2500&***\\
&\emph{JavaScript}{}&0.3028&0.0318&9.5260&***\\
&\emph{Python}{}&0.3373&0.0316&10.6840&***\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] Significance codes: p-value $<$ 0.001: `***'; p-value $<$ 0.01: `**'; p-value $<$ 0.05: `*'; p-value $<$ 0.1: `.'
\item[2] The R-squared values of the line-modification and time regressions are above 0.90; those of the file-modification regressions are above 0.50.
\end{tablenotes}
\vspace{-3mm}
\end{table}
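For reference, the star notation used in the regression tables can be reproduced from p-values with a small helper (a sketch assuming the standard R significance-code convention):

```python
# Map a p-value to an R-style significance code (standard convention assumed).
def significance_code(p):
    for threshold, code in [(0.001, "***"), (0.01, "**"), (0.05, "*"), (0.1, ".")]:
        if p < threshold:
            return code
    return ""  # not significant at the 0.1 level

print(significance_code(0.0004), significance_code(0.03), significance_code(0.07))
```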
To combine the results of the median-value (Figure~\ref{fig:median}) and multiple-regression (Table~\ref{tab:regression-language}) analyses, for each language we pick out its rankings from both analysis approaches and present them in Figure~\ref{fig:combination}. The blue and white bars show rankings from the median-value and multiple-regression analyses, respectively. For example, in the first sub-figure, when analyzing $pLoc_{abs}$, \emph{C}{} ranks No.~4 in the median-value analysis and No.~2 in the multiple-regression analysis, so its combined ranking number is 6.
From Figure~\ref{fig:combination}, we have the following observations. First, the blue and white parts inside each bar mostly have similar lengths, indicating that the median-value and multiple-regression results are highly coherent. This coherence mutually corroborates the reliability of each analysis approach. Second, we can now conclude with confidence that \emph{Java}{} requires a high level of line and file modification, but less so with regard to time. \emph{Ruby}{} has high absolute and relative bug-handling time, whereas \emph{PHP}{}, \emph{Python}{}, and \emph{C}{} have low bug-handling time as well as less code modification.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=0.33]{plots/compare-plocabs}&
\hspace{-4mm}\includegraphics[scale=0.33]{plots/compare-plocrel}\\
\includegraphics[scale=0.33]{plots/compare-ptimeabs}&
\hspace{-4mm}\includegraphics[scale=0.33]{plots/compare-ptimerel}\\
\includegraphics[scale=0.33]{plots/compare-pfileabs}&
\hspace{-4mm}\includegraphics[scale=0.33]{plots/compare-pfilerel}\\
\end{tabular}
\caption{\label{fig:combination} Combination of median-value and multiple-regression analysis results. Blue and white bars represent each language's rankings in median-value and multiple-regression analysis respectively. }
\end{figure}
\subsubsection{Conclusions and Analysis}
Combining these observations, we have the following findings.
\vspace{2mm}
\begin{mdframed}
Finding 1: Different programming languages require different bug-handling effort. For example, \emph{Java}{} tends to require more (absolute and relative) line modification but less handling time than other languages, and \emph{Python}{} requires less bug-handling effort in terms of both line modification and time.
\end{mdframed}
\vspace{2mm}
{\bf Findings on \emph{Java}{}.}
\emph{Java}{} \emph{tends to require more line/file modification, but less bug-handling time.} This finding matches the widely recognized understanding that \emph{Java}{} is a verbose language~\cite{broussard2006method}; our results show that this verbosity carries over to bug handling. Another language known for its verbosity is \emph{C\#}{}, which also has a high level of line modification. However, \emph{C\#}{} projects tend to be very large (see Figure~\ref{fig:violinsub}), which moderates its $pLoc_{rel}${} value. Despite requiring large line modifications, \emph{Java}{} is one of the languages with short bug-handling time, which is particularly observable through its $pTime_{abs}${} value. This suggests that bug handling in \emph{Java}{} takes a relatively small and uniform amount of time, irrespective of the overall project size. One reason may be the large number of declarations required in \emph{Java}{}, including type declarations, method parameter types, return types, access levels of classes, and exception handling~\cite{broussard2006method}, which make the language verbose but at the same time provide additional documentation for readers, making the code easier to understand and debug~\cite{arnold2000java}. Additionally, \emph{Java}{} has a history of over 20 years; its long commercial life and wide adoption have created a robust ecosystem of documentation, libraries, and frameworks, which may also contribute to \emph{Java}{}'s bug-handling efficiency.
{\bf Findings on \emph{Go}{}.} Similar to \emph{Java}{}, \emph{Go}{} \emph{tends to require more line/file modification and less bug-handling time in absolute terms, but its relative values are small across the board.} This reinforces our understanding that the elaborate requirements of declaring variable types, method parameter types, return types, and so on, which \emph{Go}{} shares with \emph{Java}{}, may cause a large number of line modifications while making debugging relatively quick. The difference between the two languages is that \emph{Go}{} projects are much larger than \emph{Java}{} ones (see Figure~\ref{fig:violinsub}), resulting in lower relative values.
{\bf Findings on \emph{Python}{} and \emph{PHP}{}.} \emph{Python}{} and \emph{PHP}{} \emph{need less absolute line/file modification as well as time.}
\emph{Python}{} is widely recognized to have a large set of scientific libraries and a very active community, which make it easier for developers to find support during bug handling. It is also reported that there has been a trend in the \emph{Python}{} community to improve code quality by dictating ``one right way''~\cite{startup}. This maturity of community and the effort of adhering to best practices is likely to facilitate bug handling.
\emph{PHP}{} is also a mature language with a vast ecosystem of developers, frameworks, and libraries. The quality of \emph{PHP}{} projects has the reputation of being polarized, ranging ``from horrible to awesome''~\cite{startup}. In our study, \emph{PHP}{} performs very well. This might be because we select the most popular projects as analysis targets, which are likely to fall in the ``awesome'' bucket.
Further discussion of this potential bias can be found in Section~\ref{threats}.
{\bf Findings on \emph{Ruby}{}.} In contrast to \emph{Go}{},
\emph{Ruby}{} \emph{tends to require less absolute line/file modification but more bug-handling time, and its relative measurements are large across the board.} As a dynamic language, \emph{Ruby}{} is designed to make programming a ``pleasant'' and ``productive'' experience~\cite{pythongood}; it imposes few hard rules on writing code and is very close to spoken language~\cite{flanagan2008ruby}. Such features make \emph{Ruby}{} code short and expressive, but they also make debugging more difficult. One example of \emph{Ruby}{}'s flexible features is ``monkey patching'', the extension or modification of existing code by changing classes at run-time. It is a powerful technique that has become popular in the \emph{Ruby}{} community: any class can be re-opened at any time and amended in any way. However, such flexible monkey patching may lead to hard-to-diagnose clashes~\cite{monkeypatching}.
\emph{Ruby}{}'s compiler does not expose many bugs, allowing some problematic programs to compile and execute. This results in a certain form of technical debt\footnote{Technical debt is a ``concept in programming that reflects the extra development work that arises when code that is easy to implement in the short run is used instead of applying the best overall solution''~\cite{kruchten2012technical}.} and in complex bugs that are hard to diagnose. Moreover, as \emph{Ruby}{} programs are usually not large, as shown in Figure~\ref{fig:violinsub}, its relative measurements are usually high. Additionally, \emph{Ruby}{} is community driven, which means quality documentation and support can be more difficult to find.
{\bf Comparison between similar languages.} Comparing \emph{C++}{} and \emph{Objective-C}{}, we observe that the former requires less bug-handling effort (both line modification and time) than the latter. We suspect this is because \emph{Objective-C}{} mixes static and dynamic typing, whereas plain \emph{C++}{} objects are always statically typed, which simplifies understanding. Regarding \emph{Python}{} and \emph{Ruby}{}, the former requires less bug-handling effort than the latter. As discussed above, we suspect this is partly due to \emph{Ruby}{}'s relentless pursuit of flexibility, which may result in hard-to-track bugs~\cite{rubyvspython}. \emph{Python}{}, on the other hand, takes a more direct approach to programming, with light and uncluttered syntax. This sacrifices some of the ``coolness'' \emph{Ruby}{} has, but gives \emph{Python}{} a big advantage when it comes to debugging. Regarding \emph{Java}{} and \emph{C\#}{}, the former requires slightly less bug-handling effort than the latter. One reason may be that \emph{C\#}{} is more flexible than \emph{Java}{}: it creates, returns, and stores anonymous objects at runtime.
Another pattern we can observe is that \emph{Go}{}, \emph{Java}{}, and \emph{C\#}{} all require more line/file modification but much less bug-handling time, indicating an inconsistency between the two criteria and leading to the following finding.
\vspace{2mm}
\begin{mdframed}
Finding 2: Languages requiring more line/file modification do not necessarily need more bug-handling time.
\end{mdframed}
\vspace{2mm}
We think this finding may partially explain the contradictory views on the impact of programming languages on bug-handling effort seen in online discussions (Section~\ref{sec:motivation}) and previous work (Section~\ref{sec:relatedwork}). That is, programmers or researchers may have used different measurement criteria, e.g., the amount of line modification or the amount of time spent in bug handling, and consequently drawn very different conclusions. For example, Kleinschmager et al.~\cite{kleinschmager2012static} and Hanenberg et al.~\cite{hanenberg2014empirical} showed that static languages have lower bug-handling effort because their empirical studies used bug-handling time as the only measurement criterion, while Tratt et al.~\cite{tratt2009dynamically} called static languages ``the enemy of change'' because static languages require more complex code modification.
\subsection{RQ2: Bug-Handling Effort among Language Categories}
To answer the second research question, we check the multiple regression results on different language categories, as shown in Table~\ref{tab:categorycombination}. From the table, dynamic languages require less absolute code modification, whereas static languages, as well as strong languages, tend to have less bug-handling time.
\begin{table}[t]\small
\centering
\caption{Multiple regression results of language categories. }
\label{tab:categorycombination}
\vspace{0mm}
\begin{tabular}{p{1.2cm}|lrrrr}
\toprule
\textbf{Measure}&\textbf{Category}&\textbf{Coeff.}&\textbf{Std.Err.}&\textbf{t-value}&\textbf{Sig.}\\
\hline
\multirow{4}{*}{ $pLoc_{abs}$}&
dynamic&0.9366&0.6104&1.5340&\\
&static&1.0617&0.6094&1.7420&.\\
\cline{2-6}
&weak&0.9540&0.6138&1.5540&\\
&strong&1.0277&0.6123&1.6780&.\\
\hline
\multirow{4}{*}{ $pLoc_{rel}$}&
static&-2.9418&0.8541&-3.4440&***\\
&dynamic&-2.7590&0.8610&-3.2040&**\\
\cline{2-6}
&strong&-3.1698&0.8603&-3.6840&***\\
&weak&-3.1671&0.8641&-3.6650&***\\
\hline
\multirow{4}{*}{ $pTime_{abs}$}&
static&2.5059&0.2581&9.7090&***\\
&dynamic&2.5271&0.2616&9.6600&***\\
\cline{2-6}
&strong&2.4785&0.2587&9.5820&***\\
&weak&2.5462&0.2581&9.8660&***\\
\hline
\multirow{4}{*}{ $pTime_{rel}$}&
static&1.1030&0.2795&3.9460&***\\
&dynamic&1.4456&0.2970&4.8680&***\\
\cline{2-6}
&strong & 1.0477 & 0.2850 & 3.6760& ***\\
&weak & 1.2103 & 0.2883 & 4.1980&***\\
\hline
\multirow{4}{*}{ $pFile_{abs}$}&
dynamic&1.7732&0.4724&3.7540&***\\
&static&1.7817&0.4634&3.8450&***\\
\cline{2-6}
&weak&1.6210&0.4610&3.5160&***\\
&strong&1.9770&0.4637&4.2640&***\\
\hline
\multirow{4}{*}{ $pFile_{rel}$}&
static&0.2634&0.0257&10.2650&***\\
&dynamic&0.3019&0.0278&10.8770&***\\
\cline{2-6}
&strong&0.2576&0.0263&9.7990&***\\
&weak&0.2688&0.0265&10.1360&***\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] Significance codes: p-value $< 2.2e-16$: `***'; p-value $<$ 0.001: `**'; p-value $<$ 0.01: `*'; p-value $<$ 0.05: `.'
\item[2] The R-squared values for line-modification and time prediction are above 0.90. The R-squared values for file-modification and time prediction are above 0.50.
\end{tablenotes}
\end{table}
These observations can be summarized into the following finding.
\vspace{2mm}
\begin{mdframed}
Finding 3: Static languages tend to require more absolute line and file modification. Weak/dynamic languages tend to require more bug-handling time.
\end{mdframed}
\vspace{2mm}
The reason for the former observation is that dynamic languages are typically less verbose than static ones, avoiding type declarations on variables, parameters, and return values. The latter may be because the compilers of static and strong languages detect bugs earlier, eliminating some tough bugs and reducing technical debt.
Note that although language categories do impact bug-handling effort, our results indicate that no absolute conclusion can be drawn.
In other words, it is unreliable to judge the bug-handling-effort level based solely on a language's category; for example, \emph{Ruby}{} has strong typing, but also high bug-handling time.
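The coefficient estimates in tables like Table~\ref{tab:categorycombination} come from multiple (ordinary-least-squares) regression. As a hedged illustration of the estimation step only (this is not our actual analysis script, and the data below are invented), a normal-equation solver fits in a few lines of Python:

```python
def ols(X, y):
    """Ordinary least squares: solve the normal equations (X'X) beta = X'y
    by Gaussian elimination with partial pivoting."""
    k, n = len(X[0]), len(X)
    # augmented matrix [X'X | X'y]
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            fct = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= fct * A[col][c]
    beta = [0.0] * k                          # back substitution
    for i in reversed(range(k)):
        beta[i] = (A[i][k] - sum(A[i][j] * beta[j]
                                 for j in range(i + 1, k))) / A[i][i]
    return beta

# invented toy data: bug-handling time modeled from SLOC plus a
# dummy-coded "static language" indicator
rows = [(10, 1), (20, 0), (30, 1), (40, 0)]
yvals = [1.0 + 0.5 * sloc + 2.0 * is_static for sloc, is_static in rows]
X = [[1.0, sloc, is_static] for sloc, is_static in rows]
beta = ols(X, yvals)  # recovers intercept 1.0, slope 0.5, category effect 2.0
```

Dummy-coding a language category as a 0/1 indicator, as in the toy data, is how categorical predictors enter such a regression; the significance tests reported in the table require additional standard-error computation not shown here.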
\subsection{RQ3: Impact of Domains}
To investigate the impact of domains on bug-handling effort, similar to previous work~\cite{ray2014large} we divide the target projects into different domains (i.e., application, database, code-analyzer, middleware, library, framework, and others; see Table~\ref{tab:domain}). For each domain, we only consider languages that have more than 5 projects, and use multiple regression as the analysis technique.
Based on the new coefficients derived in this setting, we rank the languages, and compare the new ranking with the previous one (without considering domains) in Table~\ref{tab:regression-language}. The comparison results (in Table~\ref{tab:domains}) demonstrate the difference in bug-handling effort for a programming language between its overall usage (i.e., including all domains) and its usage in a specific domain. Due to the space limit, we only present the results for bug-handling time. The full results are on our homepage (omitted for double-blind review).
Only three domains
have enough projects for an interesting number of languages.
The first column shows the languages; the remaining columns show the coefficients of each language in the new multiple regression within each domain, where ``-'' represents omitted languages that do not have more than 5 projects in the domain. The values inside the brackets are the changes in ranking\footnote{When there are fewer than 10 languages in a domain, the original ranking is updated by removing the absent languages.}. For example, for the ``Application'' domain, \emph{C++}{} has the smallest coefficient, and thus ranks first, while in Table~\ref{tab:regression-language}, \emph{C++}{} ranks fifth among the seven languages. Thus, \emph{C++}{} projects belonging to the ``Application'' domain tend to have less bug-handling time than those belonging to other domains.
From the table, we have the following finding.
\vspace{2mm}
\begin{mdframed}
Finding 4: The impacts of programming languages on bug-handling effort are different among different domains.
\end{mdframed}
\vspace{2mm}
In the future, we will use more projects in each domain to further investigate the impact of domains on bug-handling effort.
\begin{table}[t]\small
\centering
\caption{Multiple regression results of bug-handling time for languages in different domains.}
\label{tab:domains}
\vspace{0mm}
\begin{tabular}{p{3.2cm}rrr}
\toprule
\textbf{Language}&\textbf{App.}&\textbf{Frame.}&\textbf{Lib.}\\
\midrule
\emph{C}{}&3.051~(1) &--& 2.994~(-1)\\
\emph{C\#}{}&3.151~(0) &1.548~(-2)& 3.428~(-5)\\
\emph{C++}{}& 2.889~(4) &1.678~(-2)& 3.623~(-5)\\
\emph{Go}{}& 3.375~(-3)&1.331~(-2) &--\\
\emph{Java}{} & 3.438~(-5) &--& 2.551~(0)\\
\emph{Python}{}& 3.121~(3)& 1.3108~(3)& 3.065~(1)\\
\emph{Ruby}{}& --&1.720~(2)& 3.344~(2)\\
\emph{JavaScript}{} & 3.723~(0)&1.809~(-1)& 3.172~(1)\\
\emph{Objective-C}{}&--& 1.931~(-1)& 3.146~(3) \\
\emph{PHP}{} &-- & 1.330~(3)& 2.980~(4)\\
\bottomrule
\end{tabular}
\end{table}
\subsection{RQ4: Contribution to Bug-Handling-Effort Prediction}
\label{sec:prediction}
In the preceding analyses, we have concluded that programming languages may affect bug-handling effort, including line modification and bug-handling time. We now investigate whether this newly gained knowledge can help with the bug-handling-effort prediction problem, which is well-recognized as an important but difficult problem~\cite{weiss2007long}.
One category of prediction is to estimate the handling-time of a specific bug in a project~\cite{weiss2007long,zhang2013predicting}. For multi-language projects, bugs belonging to different languages may have different bug-handling time, but no previous work has considered the impact of programming languages. The other category is to predict the general level of bug-handling effort of a project, rather than a specific bug~\cite{hayes2005maintainability,wohl1982maintainability}. As far as we are aware, no work has considered the impact of programming languages either.
In this section, we empirically investigate whether considering programming languages can contribute to bug-handling-effort prediction. In particular, we build a toy classification model to predict the general level of bug-handling time of a project, i.e., whether a project has high, medium, or low bug-handling effort\footnote{In this study we use 60 and 180 hours as the thresholds to distinguish high/medium/low bug-handling effort, so as to put almost the same number of subjects into each group (110 high, 100 medium, and 140 low). We do not use the complex existing prediction approaches because this paper is not about delivering a realistic prediction model.} based on \emph{SLOC}, \emph{\#commit}, \emph{age}, and \emph{contributor} (the number of developers). We compare the effectiveness of this predictive model with and without using programming language as a feature. Moreover, we use the \emph{Naive Bayes} algorithm for the classification, and 10-fold cross validation for the evaluation.
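As a rough illustration of the kind of classifier involved (the code and data here are invented for exposition; the study's actual model also uses the project features listed above), a categorical Naive Bayes classifier with Laplace smoothing fits in a few lines of pure Python:

```python
import math
from collections import Counter

def train_nb(rows, labels):
    """Train a categorical Naive Bayes classifier with Laplace smoothing.
    rows: list of feature tuples; labels: list of class labels."""
    n, nfeat = len(rows), len(rows[0])
    classes = Counter(labels)
    cond = {c: [Counter() for _ in range(nfeat)] for c in classes}
    values = [set() for _ in range(nfeat)]
    for row, y in zip(rows, labels):
        for k, v in enumerate(row):
            cond[y][k][v] += 1
            values[k].add(v)

    def predict(row):
        best, best_lp = None, -math.inf
        for c, nc in classes.items():
            lp = math.log(nc / n)                # log prior
            for k, v in enumerate(row):          # smoothed log likelihood
                lp += math.log((cond[c][k][v] + 1) / (nc + len(values[k])))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
    return predict

# invented toy data with the project language as the single feature
rows = [("Go",), ("Go",), ("Ruby",), ("Ruby",)]
labels = ["low", "low", "high", "high"]
predict = train_nb(rows, labels)
print(predict(("Go",)), predict(("Ruby",)))  # low high
```

In the real setting the feature tuples would additionally carry the numeric project attributes (bucketed or handled with a Gaussian likelihood), and the 10-fold cross validation wraps this training step.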
The results are shown in Table~\ref{tab:predictive}. From the table, when programming languages are considered, the prediction effectiveness improves notably. For example, the prediction precision improves by 18.8\% (i.e., $(0.462-0.389)/0.389$), and the AUC improves by 5.5\% (i.e., $(0.637-0.604)/0.604$).
\begin{table}[t]\small
\centering
\caption{Bug-handling-time prediction with/without considering programming languages}
\label{tab:predictive}
\vspace{0mm}
\begin{tabular}{p{2.6cm}llll}
\toprule
\textbf{feature}&\textbf{precision}&\textbf{recall}&\textbf{f-measure}&\textbf{AUC.}\\
\midrule
with languages& 0.462$\uparrow$ & 0.463$\uparrow$ & 0.453$\uparrow$ & 0.637$\uparrow$\\
without languages&0.389 & 0.429 & 0.388 & 0.604\\
\bottomrule
\end{tabular}
\end{table}
\vspace{2mm}
\begin{mdframed}
Finding 5: The inclusion of programming languages as a factor can improve the effectiveness of bug-handling-effort prediction.
\end{mdframed}
\vspace{2mm}
\section{Introduction}
Lattice models are objects from statistical mechanics which have also seen a number of surprising applications in mathematics. These models are built from a graph (often a rectangular grid) whose edges are labeled; when the labels satisfy a certain local property at every vertex, we call the state \emph{admissible}. Each vertex of the graph is assigned a weight depending on the labels of the edges around it. The goal is to draw conclusions about the global behavior of the model. One way of studying this is by determining the \emph{partition function} of the lattice model, which is calculated by taking a product of the vertex weights, and summing over all possible states:
\[Z(\mathfrak{G}) = \sum_{\text{states}}\prod_{v\in V} \text{wt}(v). \]
Baxter \cite{Baxter} studied various models in this way. An important method that he used was repeated application of the \emph{Yang-Baxter equation}, from which he deduced certain symmetry properties of partition functions. These symmetries are the source of several connections to representation theory. For instance, if we assign the vertices a particular set of weights, then the partition function we obtain is a Schur function \cite{BBF}. This allows one to prove symmetric function identities using lattice models, such as the dual Cauchy identity \cite{DualCauchy} and the Weyl character formula \cite{WeylCharacter}.
We will primarily focus our attention on the six-vertex and the eight-vertex square lattice models. The six-vertex model was famously used by Kuperberg \cite{Kuperberg} to prove the alternating sign matrix conjecture, originally proven by Zeilberger \cite{Zeilberger}. Among the first steps of the proof was a correspondence between admissible lattice states of the six vertex model and alternating sign matrices. In this paper, we will find combinatorial interpretations of lattice states of a similar flavor.
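To make the definition of the partition function concrete, it can be evaluated by brute force on a small grid. The sketch below does this for the six-vertex model with free boundary; the weight names $a, b, c$ and the (left, top, right, bottom) keying of the local configurations are our own illustrative conventions, not notation used elsewhere in this paper:

```python
from itertools import product

def six_vertex_Z(m, n, wt):
    """Partition function of the six-vertex model on an m-by-n grid of
    interior vertices with free boundary: sum over all 0/1 edge labelings
    of the product of vertex weights; inadmissible states contribute 0."""
    nf, ng = m * (n + 1), (m + 1) * n          # vertical / horizontal edges
    Z = 0
    for bits in product((0, 1), repeat=nf + ng):
        f = [bits[i * (n + 1):(i + 1) * (n + 1)] for i in range(m)]
        gf = bits[nf:]
        g = [gf[i * n:(i + 1) * n] for i in range(m + 1)]
        term = 1
        for i in range(m):
            for j in range(n):
                # local configuration at vertex (i+1, j+1): (left, top, right, bottom)
                key = (g[i][j], f[i][j + 1], g[i + 1][j], f[i][j])
                term *= wt.get(key, 0)
            if term == 0:
                break
        Z += term
    return Z

a, b, c = 1, 1, 1        # with all weights 1, Z simply counts admissible states
wt = {(0, 0, 0, 0): a, (1, 1, 1, 1): a,
      (0, 1, 0, 1): b, (1, 0, 1, 0): b,
      (1, 0, 0, 1): c, (0, 1, 1, 0): c}
print(six_vertex_Z(1, 1, wt))   # 6: a single vertex admits six configurations
```

With generic weights $a, b, c$, the same loop produces the weighted partition function; the representation-theoretic applications mentioned above require particular weight choices.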
The primary technique that we will use to study the states of lattice models will be the use of discrete differential forms on a rectangular grid. Discrete differential calculus has been studied recently over arbitrary graphs; for a more general description of it, see \cite{DifferentialForms}. Our methods rely on the observation that for certain edge values $f_{i,j}$ and $g_{i,j}$, we have
\[g_{i+1,j} - g_{i,j} = f_{i,j+1} - f_{i,j}.\]
We rewrite this as
\[D_xg = D_yf,\]
where $D_x$ and $D_y$ are \emph{discrete partial derivatives}. Drawing an analogy to the continuous case, we say that the 1-form $fdx + gdy$ is \emph{closed}. Since rectangular grids are discrete analogues of open balls in $\mathbb{R}^2$, we see that closed 1-forms on the rectangular grid are exact. This gives us a new proof that there are three times as many 3-colorings of a rectangular grid as there are admissible lattice states in the six-vertex model, a fact proven by Lenard \cite{3color}.
Our viewpoint becomes more fruitful when we look at more complicated lattices. Specifically, we study the six-vertex model with toroidal boundary conditions. This is a discrete analogue of a torus, so we expect it to have nontrivial cohomology. By computing this cohomology, we are able to find a new combinatorial interpretation of the admissible states with toroidal boundary conditions. In particular, we see that the number of admissible states are not simply in correspondence with 3-colorings of a toroidal grid graph.
After studying the six-vertex model, we turn our attention to admissible states of the eight-vertex model. Although we could use our previous differential forms strategy, we find that the structure of the admissible states is even simpler. In particular, there is a natural description of the admissible states as a vector space over $\mathbb{F}_2$. This lets us explicitly determine the number of admissible states in the eight-vertex model. Moreover, we can determine this number for any given set of boundary conditions, and it is essentially independent of the boundary.
Finally, we turn our attention back to the Yang-Baxter equation in the eight-vertex model. Partition functions for the eight-vertex model have been studied by Fan and Wu \cite{FanWu} and Galleas and Martins \cite{GalleasMartins}. In their analysis, they make the assumption that for certain parameters $c_1, c_{-1}, d_1, d_{-1}$, we have $c_1 = c_{-1}$ and $d_1 = d_{-1}$. We prove our results while relaxing these assumptions, so we are able to generalize their previous work. We determine conditions on the Boltzmann weights of the lattice that are required in order to find a solution to the Yang-Baxter equation.
\section{Admissible States of the Six-Vertex Model}
Here we will quickly define the six-vertex lattice model. Let us consider a rectangular grid in which the edges are labeled as either 0 or 1. We will refer to such a labeling of the edges as a \emph{state}.
\begin{figure}[h]
\[
\scalebox{.95}{\begin{tikzpicture}
\draw [line width=0.45mm] (1,0)--(1,6);
\draw [line width=0.45mm] (3,0)--(3,6);
\draw [line width=0.45mm] (5,0)--(5,6);
\draw [line width=0.45mm] (7,0)--(7,6);
\draw [line width=0.45mm] (9,0)--(9,6);
\draw [line width=0.45mm] (0,1)--(10,1);
\draw [line width=0.45mm] (0,3)--(10,3);
\draw [line width=0.45mm] (0,5)--(10,5);
\draw[line width=0.45mm, fill=white] (1,6) circle (.35);
\draw[line width=0.45mm, fill=white] (3,6) circle (.35);
\draw[line width=0.45mm, fill=white] (5,6) circle (.35);
\draw[line width=0.45mm, fill=white] (7,6) circle (.35);
\draw[line width=0.45mm, fill=white] (9,6) circle (.35);
\draw[line width=0.45mm, fill=white] (1,4) circle (.35);
\draw[line width=0.45mm, fill=white] (3,4) circle (.35);
\draw[line width=0.45mm, fill=white] (5,4) circle (.35);
\draw[line width=0.45mm, fill=white] (7,4) circle (.35);
\draw[line width=0.45mm, fill=white] (9,4) circle (.35);
\draw[line width=0.45mm, fill=white] (1,2) circle (.35);
\draw[line width=0.45mm, fill=white] (3,2) circle (.35);
\draw[line width=0.45mm, fill=white] (5,2) circle (.35);
\draw[line width=0.45mm, fill=white] (7,2) circle (.35);
\draw[line width=0.45mm, fill=white] (9,2) circle (.35);
\draw[line width=0.45mm, fill=white] (1,0) circle (.35);
\draw[line width=0.45mm, fill=white] (3,0) circle (.35);
\draw[line width=0.45mm, fill=white] (5,0) circle (.35);
\draw[line width=0.45mm, fill=white] (7,0) circle (.35);
\draw[line width=0.45mm, fill=white] (9,0) circle (.35);
\draw[line width=0.45mm, fill=white] (0,5) circle (.35);
\draw[line width=0.45mm, fill=white] (2,5) circle (.35);
\draw[line width=0.45mm, fill=white] (4,5) circle (.35);
\draw[line width=0.45mm, fill=white] (6,5) circle (.35);
\draw[line width=0.45mm, fill=white] (8,5) circle (.35);
\draw[line width=0.45mm, fill=white] (10,5) circle (.35);
\draw[line width=0.45mm, fill=white] (0,3) circle (.35);
\draw[line width=0.45mm, fill=white] (2,3) circle (.35);
\draw[line width=0.45mm, fill=white] (4,3) circle (.35);
\draw[line width=0.45mm, fill=white] (6,3) circle (.35);
\draw[line width=0.45mm, fill=white] (8,3) circle (.35);
\draw[line width=0.45mm, fill=white] (10,3) circle (.35);
\draw[line width=0.45mm, fill=white] (0,1) circle (.35);
\draw[line width=0.45mm, fill=white] (2,1) circle (.35);
\draw[line width=0.45mm, fill=white] (4,1) circle (.35);
\draw[line width=0.45mm, fill=white] (6,1) circle (.35);
\draw[line width=0.45mm, fill=white] (8,1) circle (.35);
\draw[line width=0.45mm, fill=white] (10,1) circle (.35);
\path[fill=white] (1,1) circle (.2);
\node at (1,1) {$\bullet$};
\path[fill=white] (3,1) circle (.2);
\node at (3,1) {$\bullet$};
\path[fill=white] (5,1) circle (.2);
\node at (5,1) {$\bullet$};
\path[fill=white] (7,1) circle (.2);
\node at (7,1) {$\bullet$};
\path[fill=white] (9,1) circle (.2);
\node at (9,1) {$\bullet$};
\path[fill=white] (1,3) circle (.2);
\node at (1,3) {$\bullet$};
\path[fill=white] (3,3) circle (.2);
\node at (3,3) {$\bullet$};
\path[fill=white] (5,3) circle (.2);
\node at (5,3) {$\bullet$};
\path[fill=white] (7,3) circle (.2);
\node at (7,3) {$\bullet$};
\path[fill=white] (9,3) circle (.2);
\node at (9,3) {$\bullet$};
\path[fill=white] (1,5) circle (.2);
\node at (1,5) {$\bullet$};
\path[fill=white] (3,5) circle (.2);
\node at (3,5) {$\bullet$};
\path[fill=white] (5,5) circle (.2);
\node at (5,5) {$\bullet$};
\path[fill=white] (7,5) circle (.2);
\node at (7,5) {$\bullet$};
\path[fill=white] (9,5) circle (.2);
\node at (9,5) {$\bullet$};
\node at (1,6) {$1$};
\node at (3,6) {$0$};
\node at (5,6) {$1$};
\node at (7,6) {$0$};
\node at (9,6) {$1$};
\node at (1,4) {$0$};
\node at (3,4) {$0$};
\node at (5,4) {$1$};
\node at (7,4) {$1$};
\node at (9,4) {$0$};
\node at (1,2) {$0$};
\node at (3,2) {$0$};
\node at (5,2) {$0$};
\node at (7,2) {$1$};
\node at (9,2) {$0$};
\node at (1,0) {$0$};
\node at (3,0) {$0$};
\node at (5,0) {$0$};
\node at (7,0) {$0$};
\node at (9,0) {$0$};
\node at (0,5) {$0$};
\node at (2,5) {$1$};
\node at (4,5) {$1$};
\node at (6,5) {$1$};
\node at (8,5) {$0$};
\node at (10,5) {$1$};
\node at (0,3) {$0$};
\node at (2,3) {$0$};
\node at (4,3) {$0$};
\node at (6,3) {$1$};
\node at (8,3) {$1$};
\node at (10,3) {$1$};
\node at (0,1) {$0$};
\node at (2,1) {$0$};
\node at (4,1) {$0$};
\node at (6,1) {$0$};
\node at (8,1) {$1$};
\node at (10,1) {$1$};
\node at (1.00,6.8) {$ 1$};
\node at (3.00,6.8) {$ 2$};
\node at (5.00,6.8) {$ 3$};
\node at (7.00,6.8) {$ 4$};
\node at (9.00,6.8) {$ 5$};
\node at (-.75,1) {$ 1$};
\node at (-.75,3) {$ 2$};
\node at (-.75,5) {$ 3$};
\end{tikzpicture}}
\]
\caption{A state of a rectangular lattice.}
\end{figure}
We are particularly interested in the states in which the edges around a vertex are labeled in the following way.
\begin{figure}[H]
\[
\begin{array}{|c|c|c|c|c|c|}
\hline
\gammaice{0}{0}{0}{0} &
\gammaice{1}{1}{1}{1} &
\gammaice{0}{1}{0}{1} &
\gammaice{1}{0}{1}{0} &
\gammaice{1}{0}{0}{1} &
\gammaice{0}{1}{1}{0}\\
\hline\end{array}\]
\caption{Admissible labelings in the six-vertex model.}
\label{uncoloredbw}
\end{figure}
If all of the vertices of a state are in one of the six configurations of Figure 2, then we call the state \emph{admissible}. For instance, the state in Figure 1 is admissible.
For the rest of this section, as well as sections 3 and 4, fix $m, n \geq 2$. For a rectangular lattice with $m$ columns and $n$ rows of interior vertices, let $v_{i,j}$ denote the vertex that is in the $i$-th column from the left and the $j$-th row from the bottom. Let $g_{i,j}$ be the label of the horizontal edge which is $i$-th from the left and $j$-th from the bottom. Similarly, let $f_{i,j}$ be the label of the vertical edge which is $i$-th from the left and $j$-th from the bottom. For $1 \leq i \leq m$ and $1 \leq j \leq n$, we call the entries $g_{1,j}, g_{m+1,j}, f_{i,1}, f_{i,n+1}$ the \emph{boundary values} of the lattice. At any vertex $v_{i,j}$, we have the following labeling.
\begin{figure}[H]
\[\begin{tikzpicture}[scale=2]
\coordinate (a) at (-.75, 0);
\coordinate (b) at (0, .75);
\coordinate (c) at (.75, 0);
\coordinate (d) at (0, -.75);
\coordinate (aa) at (-.75,.5);
\coordinate (cc) at (.75,.5);
\draw (a)--(c);
\draw (b)--(d);
\draw[fill=white] (a) circle (.25);
\draw[fill=white] (b) circle (.25);
\draw[fill=white] (c) circle (.25);
\draw[fill=white] (d) circle (.25);
\node at (0,1) { };
\node at (a) {$g_{i,j}$};
\node at (b) {$f_{i,j+1}$};
\node at (c) {$g_{i+1,j}$};
\node at (d) {$f_{i,j}$};
\path[fill=white] (0,0) circle (.2);
\node at (0,0) {\Huge $\bullet$};
\end{tikzpicture}\]
\end{figure}
It is easy to verify that this labeling is admissible if and only if
\begin{align}
g_{i+1,j} - g_{i,j} \equiv f_{i,j+1} - f_{i,j} \pmod 3. \label{admissible-condition}
\end{align}
In light of this, we will view $f$ and $g$ as functions from $[m] \times [n + 1]$ to $\mathbb{F}_3$ and $[m + 1] \times [n]$ to $\mathbb{F}_3$ respectively, where $[n]$ denotes the set $\{1,2,\cdots,n\}$.
Let $h: [m+1] \times [n+1] \to \mathbb{F}_3$ be a function. We define its \emph{discrete partial derivatives} $D_xh: [m] \times [n+1] \to \mathbb{F}_3$ and $D_yh: [m+1] \times [n] \to \mathbb{F}_3$ by
\[(D_x h)_{i,j} = h_{i+1,j} - h_{i,j} \]
and
\[(D_yh)_{i,j} = h_{i,j+1} - h_{i,j}. \]
We define a (discrete) \emph{1-form} to be a formal expression
\[fdx + gdy,\]
where $f: [m] \times [n+1] \to \mathbb{F}_3$ and $g: [m+1] \times [n] \to \mathbb{F}_3$ are functions. If $h: [m+1] \times [n+1] \to \mathbb{F}_3$ is a function, we define its \emph{exterior derivative} $dh$ by
\[dh = (D_xh)dx + (D_yh)dy. \]
We say that a 1-form $fdx + gdy$ is \emph{closed} if
\[D_y f = D_x g, \]
and \emph{exact} if $fdx + gdy = dh$ for some function $h$. Every exact 1-form is closed, since the discrete partial derivatives commute: $(D_yD_xh)_{i,j} = h_{i+1,j+1} - h_{i,j+1} - h_{i+1,j} + h_{i,j} = (D_xD_yh)_{i,j}$. In fact, by the following discrete version of the Poincar\'{e} lemma, the converse is also true.
\begin{lemma}\label{poincare}
Let $\alpha = fdx + gdy$ be a closed 1-form. Then $\alpha$ is exact.
\end{lemma}
\begin{proof}
Define $h: [m+1] \times [n+1] \to \mathbb{F}_3$ by
\[h_{i,j} = \sum_{a = 1}^{i-1} f_{a,1} + \sum_{b = 1}^{j-1} g_{i,b}.\]
Then,
\begin{align*}
(D_yh)_{i,j} &= \left(\sum_{a = 1}^{i-1} f_{a,1} + \sum_{b = 1}^{j} g_{i,b}\right) - \left(\sum_{a = 1}^{i-1} f_{a,1} + \sum_{b = 1}^{j-1} g_{i,b}\right)\\
&= g_{i,j}
\end{align*}
and
\begin{align*}
(D_xh)_{i,j} &= \left(\sum_{a = 1}^{i} f_{a,1} + \sum_{b = 1}^{j-1} g_{i+1,b}\right) - \left(\sum_{a = 1}^{i-1} f_{a,1} + \sum_{b = 1}^{j-1} g_{i,b}\right) \\
&= f_{i,1} + \sum_{b = 1}^{j - 1} (D_xg)_{i,b}\\
&= f_{i,1} + \sum_{b = 1}^{j - 1} (D_y f)_{i,b}\\
&= f_{i,j}.
\end{align*}
So $dh = \alpha$, and $\alpha$ is exact.
\end{proof}
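The potential constructed in the proof is completely explicit, so the lemma can be sanity-checked numerically. The following sketch (an illustration only) generates a random closed 1-form over $\mathbb{F}_3$, builds $h$ from the formula in the proof, and verifies that $dh$ recovers the form:

```python
import random

def poincare_check(m, n, seed=0):
    """Build a random closed 1-form f dx + g dy over F_3 on the grid,
    assemble the potential h as in the lemma, and confirm dh equals it."""
    rng = random.Random(seed)
    # f on [1..m] x [1..n+1]; arrays are padded so indices match the text
    f = [[0] * (n + 2) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 2):
            f[i][j] = rng.randrange(3)
    # closedness D_x g = D_y f determines g from its first column
    g = [[0] * (n + 1) for _ in range(m + 2)]
    for j in range(1, n + 1):
        g[1][j] = rng.randrange(3)
        for i in range(1, m + 1):
            g[i + 1][j] = (g[i][j] + f[i][j + 1] - f[i][j]) % 3
    # the potential from the proof: h_{i,j} = sum_{a<i} f_{a,1} + sum_{b<j} g_{i,b}
    h = [[0] * (n + 2) for _ in range(m + 2)]
    for i in range(1, m + 2):
        for j in range(1, n + 2):
            h[i][j] = (sum(f[a][1] for a in range(1, i))
                       + sum(g[i][b] for b in range(1, j))) % 3
    dx_ok = all((h[i + 1][j] - h[i][j]) % 3 == f[i][j]
                for i in range(1, m + 1) for j in range(1, n + 2))
    dy_ok = all((h[i][j + 1] - h[i][j]) % 3 == g[i][j]
                for i in range(1, m + 2) for j in range(1, n + 1))
    return dx_ok and dy_ok

print(poincare_check(4, 3))  # True
```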
Call a 1-form $fdx + gdy$ \emph{admissible} if for all $i$ and $j$, $f_{i,j}$ and $g_{i,j}$ are not equal to 2. Using this language, we can describe admissible states in terms of differential forms.
\begin{lemma}
There is a one-to-one correspondence between admissible states of the six-vertex model and admissible closed 1-forms.
\end{lemma}
\begin{proof}
Consider an admissible state $A$ of a lattice with $m$ columns and $n$ rows. Let $g_{i,j}$ and $f_{i,j}$ be the entries of the horizontal and vertical edges as above. Then for all $i$ and $j$, $g_{i,j}$ and $f_{i,j}$ are not equal to 2. Equation (\ref{admissible-condition}) holds exactly when
\[D_y f = D_xg,\]
so we associate $A$ with the admissible closed 1-form $fdx + gdy$.
\end{proof}
As an application of the above correspondence, we will describe admissible states in terms of colorings of a rectangular grid. For the original proof of this result, see \cite{3color}. Recall that a $k$-coloring of a graph is an assignment of values from the set $\{0,1,\cdots,k-1\}$ to each vertex of the graph, such that two adjacent vertices are not assigned the same value.
\begin{theorem}
There are three times as many 3-colorings of a rectangular grid with $m+1$ columns and $n+1$ rows as there are admissible states of the six-vertex model.
\end{theorem}
\begin{proof}
Let $\mathcal{S}_1$ denote the set of admissible closed 1-forms, and let $\mathcal{S}_2$ denote the set of functions $h: [m+1] \times [n+1] \to \mathbb{F}_3$ such that for all $i$ and $j$, $h_{i+1,j} \neq h_{i,j}$ and $h_{i,j+1} \neq h_{i,j}$. In other words, $\mathcal{S}_2$ is the set of functions $h$ such that $D_xh$ and $D_yh$ are nonzero everywhere. It is easy to see that $\mathcal{S}_2$ is in bijection with the set of 3-colorings of the rectangular grid. To prove the theorem, then, it suffices to find a bijection between $\mathcal{S}_1 \times \mathbb{F}_3$ and $\mathcal{S}_2$. By Lemma \ref{poincare}, every element of $\mathcal{S}_1$ is exact, so we can write every element of $\mathcal{S}_1$ in the form $dh$, where $h:[m+1]\times[n+1]\to\mathbb{F}_3$ is a function. Now define $F: \mathcal{S}_1 \times \mathbb{F}_3\to \mathcal{S}_2$ by
\[F(dh, t)_{i,j} = h_{i,j} - h_{1,1} + t + i + j - 2. \]
Note that $F$ is well-defined, since if $dh = dh'$, then $h - h'$ is constant, and $h_{i,j} - h_{1,1} = h'_{i,j} - h'_{1,1}$ for all $i,j$. To see that $F$ does indeed map $\mathcal{S}_1 \times \mathbb{F}_3$ into $\mathcal{S}_2$, observe that
\begin{align*}
(D_x F(dh,t))dx + (D_y F(dh,t))dy &= dF(dh,t)\\
&= dh + dx + dy.
\end{align*}
If we write $dh= fdx + gdy$, then $f$ and $g$ are nowhere equal to 2, since $dh$ is admissible. So, comparing the first and last expressions in the equation above, we see that $D_xF(dh,t)$ and $D_yF(dh,t)$ are nowhere 0. Thus, $F$ is a well-defined map into $\mathcal{S}_2$. Next, we define $G: \mathcal{S}_2 \to \mathcal{S}_1 \times \mathbb{F}_3$ by
\[G(h) = (dh - dx - dy, h_{1,1}). \]
To see that $G$ maps $\mathcal{S}_2$ into $\mathcal{S}_1 \times \mathbb{F}_3$, observe that if we write $dh = fdx + gdy$, then $f$ and $g$ are nowhere zero, so the components of $dh - dx - dy$ are nowhere equal to 2. Now let $x,y:[m+1]\times[n+1]\to\mathbb{F}_3$ be functions defined by
\begin{align*}
x_{i,j} &= i\\
y_{i,j} &= j.
\end{align*}
Then we see that
\begin{align*}
(F \circ G)(h)_{i,j} &= F(d(h - x - y),h_{1,1})_{i,j}\\
&= (h_{i,j} - x_{i,j} - y_{i,j}) - (h_{1,1} - x_{1,1} - y_{1,1}) + h_{1,1} + i + j - 2 \\
&= h_{i,j}
\end{align*}
and
\begin{align*}
(G \circ F)(dh,t) &= (dF(dh,t) - dx - dy, F(dh,t)_{1,1})
\\&= ((dh + dx + dy) - dx - dy, t)\\
&= (dh, t).
\end{align*}
Thus, $F$ and $G$ are inverses of each other, and $\mathcal{S}_1 \times \mathbb{F}_3$ and $\mathcal{S}_2$ are in bijection.
\end{proof}
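For small grids, the count in the theorem can be confirmed by brute force. The following sketch (illustrative only, with exponential runtime) enumerates admissible states and proper 3-colorings directly from the definitions:

```python
from itertools import product

def admissible_states(m, n):
    """Count admissible six-vertex states on an m-by-n grid of interior
    vertices with free boundary, by brute force over 0/1 edge labels."""
    nf, ng = m * (n + 1), (m + 1) * n
    cnt = 0
    for bits in product((0, 1), repeat=nf + ng):
        f = [bits[i * (n + 1):(i + 1) * (n + 1)] for i in range(m)]
        gf = bits[nf:]
        g = [gf[i * n:(i + 1) * n] for i in range(m + 1)]
        if all(g[i + 1][j] - g[i][j] == f[i][j + 1] - f[i][j]
               for i in range(m) for j in range(n)):
            cnt += 1
    return cnt

def colorings(p, q):
    """Count proper 3-colorings of the p-by-q grid graph by brute force."""
    cnt = 0
    for cols in product(range(3), repeat=p * q):
        h = [cols[i * q:(i + 1) * q] for i in range(p)]
        if all(h[i][j] != h[i + 1][j] for i in range(p - 1) for j in range(q)) \
           and all(h[i][j] != h[i][j + 1] for i in range(p) for j in range(q - 1)):
            cnt += 1
    return cnt

m, n = 2, 2
print(colorings(m + 1, n + 1) == 3 * admissible_states(m, n))  # True
```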
\section{Toroidal Boundary Conditions}
We will now apply the methods used in the previous section to lattices with toroidal boundary conditions. In this section, we will make the additional assumption that $m$ and $n$ are not divisible by 3. Let $A$ be an admissible state of the six-vertex model, with vertical entries $f_{i,j}$ and horizontal entries $g_{i,j}$ as previously. We say that $A$ has \emph{toroidal boundary conditions} if its boundary values satisfy
\begin{align*}
g_{1,j} &= g_{m+1,j}\\
f_{i,1} &= f_{i,n+1}
\end{align*}
for all $1 \leq i \leq m$ and $1 \leq j \leq n$.
\begin{figure}[h]
\[
\scalebox{.95}{\begin{tikzpicture}
\draw [line width=0.45mm] (1,0)--(1,6);
\draw [line width=0.45mm] (3,0)--(3,6);
\draw [line width=0.45mm] (5,0)--(5,6);
\draw [line width=0.45mm] (7,0)--(7,6);
\draw [line width=0.45mm] (9,0)--(9,6);
\draw [line width=0.45mm] (0,1)--(10,1);
\draw [line width=0.45mm] (0,3)--(10,3);
\draw [line width=0.45mm] (0,5)--(10,5);
\draw[line width=0.45mm, fill=white] (1,6) circle (.35);
\draw[line width=0.45mm, fill=white] (3,6) circle (.35);
\draw[line width=0.45mm, fill=white] (5,6) circle (.35);
\draw[line width=0.45mm, fill=white] (7,6) circle (.35);
\draw[line width=0.45mm, fill=white] (9,6) circle (.35);
\draw[line width=0.45mm, fill=white] (1,4) circle (.35);
\draw[line width=0.45mm, fill=white] (3,4) circle (.35);
\draw[line width=0.45mm, fill=white] (5,4) circle (.35);
\draw[line width=0.45mm, fill=white] (7,4) circle (.35);
\draw[line width=0.45mm, fill=white] (9,4) circle (.35);
\draw[line width=0.45mm, fill=white] (1,2) circle (.35);
\draw[line width=0.45mm, fill=white] (3,2) circle (.35);
\draw[line width=0.45mm, fill=white] (5,2) circle (.35);
\draw[line width=0.45mm, fill=white] (7,2) circle (.35);
\draw[line width=0.45mm, fill=white] (9,2) circle (.35);
\draw[line width=0.45mm, fill=white] (1,0) circle (.35);
\draw[line width=0.45mm, fill=white] (3,0) circle (.35);
\draw[line width=0.45mm, fill=white] (5,0) circle (.35);
\draw[line width=0.45mm, fill=white] (7,0) circle (.35);
\draw[line width=0.45mm, fill=white] (9,0) circle (.35);
\draw[line width=0.45mm, fill=white] (0,5) circle (.35);
\draw[line width=0.45mm, fill=white] (2,5) circle (.35);
\draw[line width=0.45mm, fill=white] (4,5) circle (.35);
\draw[line width=0.45mm, fill=white] (6,5) circle (.35);
\draw[line width=0.45mm, fill=white] (8,5) circle (.35);
\draw[line width=0.45mm, fill=white] (10,5) circle (.35);
\draw[line width=0.45mm, fill=white] (0,3) circle (.35);
\draw[line width=0.45mm, fill=white] (2,3) circle (.35);
\draw[line width=0.45mm, fill=white] (4,3) circle (.35);
\draw[line width=0.45mm, fill=white] (6,3) circle (.35);
\draw[line width=0.45mm, fill=white] (8,3) circle (.35);
\draw[line width=0.45mm, fill=white] (10,3) circle (.35);
\draw[line width=0.45mm, fill=white] (0,1) circle (.35);
\draw[line width=0.45mm, fill=white] (2,1) circle (.35);
\draw[line width=0.45mm, fill=white] (4,1) circle (.35);
\draw[line width=0.45mm, fill=white] (6,1) circle (.35);
\draw[line width=0.45mm, fill=white] (8,1) circle (.35);
\draw[line width=0.45mm, fill=white] (10,1) circle (.35);
\path[fill=white] (1,1) circle (.2);
\node at (1,1) {$\bullet$};
\path[fill=white] (3,1) circle (.2);
\node at (3,1) {$\bullet$};
\path[fill=white] (5,1) circle (.2);
\node at (5,1) {$\bullet$};
\path[fill=white] (7,1) circle (.2);
\node at (7,1) {$\bullet$};
\path[fill=white] (9,1) circle (.2);
\node at (9,1) {$\bullet$};
\path[fill=white] (1,3) circle (.2);
\node at (1,3) {$\bullet$};
\path[fill=white] (3,3) circle (.2);
\node at (3,3) {$\bullet$};
\path[fill=white] (5,3) circle (.2);
\node at (5,3) {$\bullet$};
\path[fill=white] (7,3) circle (.2);
\node at (7,3) {$\bullet$};
\path[fill=white] (9,3) circle (.2);
\node at (9,3) {$\bullet$};
\path[fill=white] (1,5) circle (.2);
\node at (1,5) {$\bullet$};
\path[fill=white] (3,5) circle (.2);
\node at (3,5) {$\bullet$};
\path[fill=white] (5,5) circle (.2);
\node at (5,5) {$\bullet$};
\path[fill=white] (7,5) circle (.2);
\node at (7,5) {$\bullet$};
\path[fill=white] (9,5) circle (.2);
\node at (9,5) {$\bullet$};
\node at (1,6) {$0$};
\node at (3,6) {$1$};
\node at (5,6) {$1$};
\node at (7,6) {$0$};
\node at (9,6) {$0$};
\node at (1,4) {$1$};
\node at (3,4) {$1$};
\node at (5,4) {$0$};
\node at (7,4) {$0$};
\node at (9,4) {$0$};
\node at (1,2) {$0$};
\node at (3,2) {$1$};
\node at (5,2) {$1$};
\node at (7,2) {$0$};
\node at (9,2) {$0$};
\node at (1,0) {$0$};
\node at (3,0) {$1$};
\node at (5,0) {$1$};
\node at (7,0) {$0$};
\node at (9,0) {$0$};
\node at (0,5) {$1$};
\node at (2,5) {$0$};
\node at (4,5) {$0$};
\node at (6,5) {$1$};
\node at (8,5) {$1$};
\node at (10,5) {$1$};
\node at (0,3) {$0$};
\node at (2,3) {$1$};
\node at (4,3) {$1$};
\node at (6,3) {$0$};
\node at (8,3) {$0$};
\node at (10,3) {$0$};
\node at (0,1) {$0$};
\node at (2,1) {$0$};
\node at (4,1) {$0$};
\node at (6,1) {$0$};
\node at (8,1) {$0$};
\node at (10,1) {$0$};
\node at (1.00,6.8) {$ 1$};
\node at (3.00,6.8) {$ 2$};
\node at (5.00,6.8) {$ 3$};
\node at (7.00,6.8) {$ 4$};
\node at (9.00,6.8) {$ 5$};
\node at (-.75,1) {$ 1$};
\node at (-.75,3) {$ 2$};
\node at (-.75,5) {$ 3$};
\end{tikzpicture}}
\]
\caption{An admissible state with toroidal boundary conditions.}
\end{figure}
If $h: \mathbb{Z} \times \mathbb{Z} \to \mathbb{F}_3$ is a function, we say that $h$ is \emph{doubly periodic} if
\begin{align*}
h_{i+m,j} = h_{i,j + n} = h_{i,j}
\end{align*}
for all $i,j \in \mathbb{Z}$. Just as before, for such a function $h$, we define its \emph{partial derivatives} $D_xh, D_yh: \mathbb{Z} \times \mathbb{Z} \to \mathbb{F}_3$ by
\[(D_xh)_{i,j} = h_{i+1,j} - h_{i,j} \]
and
\[(D_yh)_{i,j} = h_{i,j+1}- h_{i,j}. \]
Note that $D_xh$ and $D_yh$ are doubly periodic. We define a \emph{toroidal 1-form} to be a formal expression $fdx + gdy$, where $f,g: \mathbb{Z} \times \mathbb{Z} \to \mathbb{F}_3$ are doubly periodic functions. For a doubly periodic function $h$, we define its \emph{exterior derivative} $dh$ as the toroidal 1-form given by
\[dh = (D_xh)dx + (D_yh)dy. \]
We say that a toroidal 1-form $fdx + gdy$ is \emph{closed} if
\[D_yf = D_xg\]
and \emph{exact} if $fdx + gdy = dh$ for some doubly periodic $h: \mathbb{Z} \times \mathbb{Z} \to \mathbb{F}_3$. A toroidal 1-form $fdx + gdy$ is \emph{admissible} if for all $i$ and $j$, $f_{i,j}$ and $g_{i,j}$ are not equal to 2. In this section, we will simply refer to a toroidal 1-form as a 1-form. The following lemma is an analogue of Lemma \ref{poincare}; it computes the 1-dimensional cohomology of the discrete torus.
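These definitions are straightforward to experiment with. The following Python sketch (an illustration added here, not part of the paper; the grid size and sample function are arbitrary choices) stores a doubly periodic function by its values on one $m \times n$ fundamental domain with wraparound indexing, and confirms that every exterior derivative $dh$ is closed.

```python
# Discrete calculus on the m x n torus with values in F_3 = {0, 1, 2}.
# A doubly periodic function is stored by its values on one m x n
# fundamental domain (0-indexed), with wraparound indexing.
m, n = 4, 5

def Dx(h):
    # (D_x h)_{i,j} = h_{i+1,j} - h_{i,j}
    return [[(h[(i + 1) % m][j] - h[i][j]) % 3 for j in range(n)] for i in range(m)]

def Dy(h):
    # (D_y h)_{i,j} = h_{i,j+1} - h_{i,j}
    return [[(h[i][(j + 1) % n] - h[i][j]) % 3 for j in range(n)] for i in range(m)]

def is_closed(f, g):
    # The 1-form f dx + g dy is closed when D_y f = D_x g.
    return Dy(f) == Dx(g)

# An arbitrary example function; dh = (D_x h) dx + (D_y h) dy.
h = [[(2 * i + j * j) % 3 for j in range(n)] for i in range(m)]
f, g = Dx(h), Dy(h)
print(is_closed(f, g))   # every exact form is closed -> True
```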
\begin{lemma}\label{cohomology}
Every closed 1-form can be written uniquely in the form
\[rdx + sdy + \omega, \]
where $r,s \in \mathbb{F}_3$ and $\omega$ is exact.
\end{lemma}
\begin{proof}
Let $fdx + gdy$ be a closed 1-form, where $f,g:\mathbb{Z}\times\mathbb{Z}\to\mathbb{F}_3$ are doubly periodic. Let
\begin{align*}r &= \frac{1}{m}\sum_{i = 1}^{m}f_{i,1},\\ s &= \frac{1}{n}\sum_{j=1}^{n} g_{1,j}.
\end{align*}
We claim that $\omega = fdx + gdy - rdx - sdy$ is exact. To see this, define a function $h: \mathbb{Z} \times \mathbb{Z} \to \mathbb{F}_3$ by
\[h_{i,j} = \sum_{a = 1}^{\tilde{i}-1}(f_{a,1} - r) + \sum_{b = 1}^{\tilde{j} - 1}(g_{i,b} - s),\]
where $\tilde{i}$ is the unique integer such that $1 \leq \tilde{i} \leq m$ with $\tilde{i} \equiv i \pmod{m}$, and $\tilde{j}$ is the integer such that $1 \leq \tilde{j} \leq n$ with $\tilde{j} \equiv j \pmod{n}$. It is clear that $h$ is doubly periodic. Observe that if $i \not\equiv 0 \pmod{m}$, we have $\widetilde{i + 1} = \tilde{i} + 1$, so
\[ \sum_{a = 1}^{\widetilde{i + 1}-1}(f_{a,1} - r) = \sum_{a = 1}^{\tilde{i}}(f_{a,1} - r).\]
If $i \equiv 0 \pmod{m}$, then we also have
\begin{align*}
\sum_{a = 1}^{\widetilde{i + 1}-1}(f_{a,1} - r) &= \sum_{a = 1}^{0}(f_{a,1} - r) \\
&= 0 \\
&= \sum_{a = 1}^{m} (f_{a,1} - r)\\
&= \sum_{a = 1}^{\tilde{i}}(f_{a,1} - r).
\end{align*}
A similar argument shows that for all $j$,
\[\sum_{b = 1}^{\widetilde{j + 1} - 1}(g_{i,b} - s) = \sum_{b = 1}^{\tilde{j}}(g_{i,b} - s).\]
We now compute
\begin{align*}
(D_x h)_{i,j} &= \left(\sum_{a = 1}^{\widetilde{i+1}-1}(f_{a,1} - r) + \sum_{b = 1}^{\tilde{j} - 1}(g_{i+1,b} - s)\right) - \left(\sum_{a = 1}^{\tilde{i}-1}(f_{a,1} - r) + \sum_{b = 1}^{\tilde{j} - 1}(g_{i,b} - s)\right) \\
&= \left(\sum_{a = 1}^{\tilde{i}}(f_{a,1} - r) + \sum_{b = 1}^{\tilde{j} - 1}(g_{i+1,b} - s)\right) - \left(\sum_{a = 1}^{\tilde{i}-1}(f_{a,1} - r) + \sum_{b = 1}^{\tilde{j} - 1}(g_{i,b} - s)\right)\\
&= (f_{\tilde{i},1} - r) + \sum_{b = 1}^{\tilde{j}-1}(g_{i+1,b} - g_{i,b}) \\
&= (f_{i,1} - r) + \sum_{b = 1}^{\tilde{j}-1}(D_xg)_{i,b}\\
&= (f_{i,1} - r) + \sum_{b = 1}^{\tilde{j}-1}(D_yf)_{i,b}\\
&= f_{i,\tilde{j}} - r \\
&= f_{i,j} - r
\end{align*}
and
\begin{align*}
(D_yh)_{i,j} &= \left(\sum_{a = 1}^{\tilde{i}-1}(f_{a,1} - r) + \sum_{b = 1}^{\widetilde{j+1} - 1}(g_{i,b} - s)\right) - \left(\sum_{a = 1}^{\tilde{i}-1}(f_{a,1} - r) + \sum_{b = 1}^{\tilde{j} - 1}(g_{i,b} - s) \right) \\
&= \left(\sum_{a = 1}^{\tilde{i}-1}(f_{a,1} - r) + \sum_{b = 1}^{\tilde{j}}(g_{i,b} - s)\right) - \left(\sum_{a = 1}^{\tilde{i}-1}(f_{a,1} - r) + \sum_{b = 1}^{\tilde{j} - 1}(g_{i,b} - s) \right)\\
&= g_{i,\tilde{j}} - s \\
&= g_{i,j} - s.
\end{align*}
Thus, we have shown that $dh = \omega$, so $\omega$ is exact.
To check uniqueness, it suffices to show that if $rdx + sdy$ is exact, then $r = s = 0$. Suppose that $rdx + sdy = df$, where $f$ is doubly periodic. Then
\begin{align*}
0 &= \frac{1}{m} \sum_{i = 1}^{m}(f_{i+1,1} - f_{i,1})\\
&= \frac{1}{m}\sum_{i = 1}^{m}(D_xf)_{i,1}\\
&= \frac{1}{m}\sum_{i = 1}^{m}r \\
&= r.
\end{align*}
A similar argument shows that $s = 0$.
\end{proof}
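For a small torus the lemma can be verified exhaustively. The sketch below (our own check, not part of the argument) takes $m = n = 2$, where $1/m = 1/n = 2$ in $\mathbb{F}_3$, enumerates all closed 1-forms, and confirms that subtracting the averages $r$ and $s$ from the proof always leaves an exact form; the count $243 = 9 \cdot 27$ then forces the decomposition to be unique.

```python
from itertools import product

# Exhaustive check of the decomposition on the 2 x 2 torus over F_3
# (here 1/m = 1/n = 2, since 2 * 2 = 4 = 1 mod 3).
m = n = 2

def Dx(h):
    return tuple(tuple((h[(i + 1) % m][j] - h[i][j]) % 3 for j in range(n)) for i in range(m))

def Dy(h):
    return tuple(tuple((h[i][(j + 1) % n] - h[i][j]) % 3 for j in range(n)) for i in range(m))

grids = [tuple(tuple(v[n * i + j] for j in range(n)) for i in range(m))
         for v in product(range(3), repeat=m * n)]

exact = {(Dx(h), Dy(h)) for h in grids}                 # all exact 1-forms dh
closed = [(f, g) for f in grids for g in grids if Dy(f) == Dx(g)]

for f, g in closed:
    r = (2 * sum(f[i][0] for i in range(m))) % 3        # r = (1/m) sum_i f_{i,1}
    s = (2 * sum(g[0][j] for j in range(n))) % 3        # s = (1/n) sum_j g_{1,j}
    resid = (tuple(tuple((f[i][j] - r) % 3 for j in range(n)) for i in range(m)),
             tuple(tuple((g[i][j] - s) % 3 for j in range(n)) for i in range(m)))
    assert resid in exact                               # the remainder is exact

# 9 choices of (r, s) times 27 exact forms account for every closed form.
print(len(closed), len(exact))   # 243 27
```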
\begin{lemma}
There is a one-to-one correspondence between admissible closed 1-forms and admissible states of the six-vertex model with toroidal boundary conditions.
\end{lemma}
\begin{proof}
Let $f_{i,j}$ and $g_{i,j}$ be the vertical and horizontal entries of an admissible state $A$ with toroidal boundary conditions. Then we may uniquely extend $f_{i,j}$ and $g_{i,j}$ to doubly periodic functions $\tilde{f},\tilde{g}:\mathbb{Z}\times\mathbb{Z}\to\mathbb{F}_3$. We associate $A$ with the closed admissible 1-form $\tilde{f}dx + \tilde{g}dy$. It is not difficult to see that this is a one-to-one correspondence.
\end{proof}
Call a doubly periodic function $h: \mathbb{Z} \times \mathbb{Z} \to \mathbb{F}_3$ \emph{sparse} if neither $D_xh$ nor $D_yh$ is surjective, and $h_{1,1} = 0$. We can characterize admissible states with toroidal boundary conditions in terms of sparse functions, in the same way that we classified admissible states in the last section in terms of 3-colorings.
\begin{theorem}
There is a one-to-one correspondence between sparse functions and admissible states of the six-vertex model with toroidal boundary conditions.
\end{theorem}
\begin{proof}
Let $\mathcal{S}_1$ denote the set of closed admissible 1-forms, and let $\mathcal{S}_2$ be the set of sparse functions. It suffices to find a bijection between $\mathcal{S}_1$ and $\mathcal{S}_2$. Define a map $F: \mathcal{S}_2 \to \mathcal{S}_1$ as follows. If $h$ is a sparse function, define
\[F(h) = r(h)dx + s(h)dy + (D_xh)dx + (D_yh)dy = r(h)dx + s(h)dy + dh, \]
where $2-r(h) \notin \text{Im}(D_xh)$ and $2-s(h) \notin \text{Im}(D_yh)$. We will show that $F$ is a bijection. To see that $F$ is injective, suppose that $F(h) = F(h')$. Then, by the uniqueness result in Lemma \ref{cohomology}, $dh = dh'$, which implies that $h - h'$ is constant. But $h_{1,1} = (h')_{1,1} = 0$, so $h = h'$ and $F$ is injective. To see that $F$ is surjective, let
\[\omega = rdx + sdy + (D_xh)dx + (D_yh)dy \]
be an element of $\mathcal{S}_1$ (by Lemma \ref{cohomology}). Without loss of generality, we may assume that $h_{1,1} = 0$. Then, by definition, $r + D_xh$ and $s + D_yh$ are not surjective, implying that $D_xh$ and $D_yh$ are not surjective. So $\omega = F(h)$. Thus, we have a bijection.
\end{proof}
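The map $F$ from the proof is easy to trace on an example. In the sketch below (our own illustration; when the image of a partial derivative misses more than one value we simply take the smallest admissible shift) a sparse function on the $3 \times 3$ torus is turned into a closed admissible 1-form.

```python
m = n = 3

def Dx(h):
    return [[(h[(i + 1) % m][j] - h[i][j]) % 3 for j in range(n)] for i in range(m)]

def Dy(h):
    return [[(h[i][(j + 1) % n] - h[i][j]) % 3 for j in range(n)] for i in range(m)]

# A sparse function: h_{1,1} = 0, D_x h = 1 and D_y h = 0 everywhere,
# so neither partial derivative is surjective.
h = [[i % 3 for j in range(n)] for i in range(m)]

def choose(d):
    # any r with 2 - r not in the image of d (the smallest, if several work)
    image = {v for row in d for v in row}
    return min(r for r in range(3) if (2 - r) % 3 not in image)

r, s = choose(Dx(h)), choose(Dy(h))
f = [[(r + v) % 3 for v in row] for row in Dx(h)]   # F(h) = (r + D_x h) dx
g = [[(s + v) % 3 for v in row] for row in Dy(h)]   #        + (s + D_y h) dy
print(all(v != 2 for row in f + g for v in row))    # admissible -> True
print(Dy(f) == Dx(g))                               # closed -> True
```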
\section{Admissible States of the Eight-Vertex Model}
In this section, we will consider a different set of admissible labelings.
\begin{figure}[H]
\[
\begin{array}{|c|c|c|c|c|c|}
\hline
\gammaice{0}{0}{0}{0} &
\gammaice{1}{1}{1}{1} &
\gammaice{0}{1}{0}{1} &
\gammaice{1}{0}{1}{0}\\
\hline
\gammaice{1}{0}{0}{1} &
\gammaice{0}{1}{1}{0} &
\gammaice{1}{1}{0}{0} &
\gammaice{0}{0}{1}{1}
\\
\hline\end{array}\]
\caption{Admissible labelings in the eight-vertex model.}
\label{uncoloredbw}
\end{figure}
In the eight-vertex model, we call a state \emph{admissible} if all of its vertices are in one of the eight configurations of Figure 4. Let $f_{i,j}$ and $g_{i,j}$ be the labelings of the vertical edges and horizontal edges as before. At a vertex $v_{i,j}$, we have the labeling
\begin{figure}[H]
\[\begin{tikzpicture}[scale=2]
\coordinate (a) at (-.75, 0);
\coordinate (b) at (0, .75);
\coordinate (c) at (.75, 0);
\coordinate (d) at (0, -.75);
\coordinate (aa) at (-.75,.5);
\coordinate (cc) at (.75,.5);
\draw (a)--(c);
\draw (b)--(d);
\draw[fill=white] (a) circle (.25);
\draw[fill=white] (b) circle (.25);
\draw[fill=white] (c) circle (.25);
\draw[fill=white] (d) circle (.25);
\node at (0,1) { };
\node at (a) {$g_{i,j}$};
\node at (b) {$f_{i,j+1}$};
\node at (c) {$g_{i+1,j}$};
\node at (d) {$f_{i,j}$};
\path[fill=white] (0,0) circle (.2);
\node at (0,0) {\Huge $\bullet$};
\end{tikzpicture}\]
\end{figure}
as before, so a state is admissible exactly when
\begin{align}
f_{i,j} + g_{i,j} + f_{i,j+1} + g_{i+1,j} \equiv 0 \pmod 2 \label{admissible8}
\end{align}
for all $1 \leq i \leq m$ and $1 \leq j \leq n$. So in this section, we will view $f$ and $g$ as functions from $[m] \times [n+1]$ to $\mathbb{F}_2$ and $[m+1] \times [n]$ to $\mathbb{F}_2$ respectively. Since the condition for a state to be admissible is a linear condition in this case, it is significantly easier to find the number of admissible states.
\begin{theorem}\label{8admissible}
The number of admissible states of the eight-vertex model is $2^{m + n + mn}$.
\end{theorem}
\begin{proof}
Let $V$ be the $\mathbb{F}_2$-vector space of pairs of functions $(f,g)$ with $f: [m] \times [n+1] \to \mathbb{F}_2$ and $g: [m+1] \times [n] \to \mathbb{F}_2$. Let $W$ be the $mn$-dimensional vector space with basis vectors $e_{i,j}$ for $1\leq i \leq m$ and $1 \leq j \leq n$. Define a linear map $\varphi: V \to W$ by
\[\varphi(f,g) = \sum_{i = 1}^m \sum_{j = 1}^n (f_{i,j} + g_{i,j} + f_{i,j+1} + g_{i+1,j})e_{i,j}. \]
By equation (\ref{admissible8}), we may view $V' = \ker \varphi$ as the set of admissible states of the eight-vertex model. Fix some $a \in [m]$ and $b \in [n]$. Let $g = 0$ and define $f$ by $f_{i,j} = 0$ if $i \neq a$ or $j \leq b$, and $f_{i,j} = 1$ otherwise. Then,
\begin{align*}
\varphi(f,g) &= \sum_{i = 1}^m \sum_{j = 1}^n (f_{i,j} + f_{i,j + 1})e_{i,j} \\
&= \sum_{j = b}^n (f_{a,j} + f_{a,j + 1})e_{a,j}\\
&= e_{a,b}.
\end{align*}
Since $a$ and $b$ were arbitrary, this shows that $\varphi$ is surjective. Thus,
\begin{align*}
\dim V' = \dim V - \dim W = m(n + 1) + (m + 1)n - mn = mn + m + n.
\end{align*}
So $V'$ contains $2^{m + n + mn}$ elements, and the result follows.
\end{proof}
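Theorem \ref{8admissible} can be confirmed by brute force for small grids. The following sketch (a verification added here, not part of the paper) enumerates all edge labelings for $m = n = 2$ and counts those satisfying (\ref{admissible8}).

```python
from itertools import product

m, n = 2, 2
fkeys = [(i, j) for i in range(1, m + 1) for j in range(1, n + 2)]   # vertical edges
gkeys = [(i, j) for i in range(1, m + 2) for j in range(1, n + 1)]   # horizontal edges

count = 0
for bits in product(range(2), repeat=len(fkeys) + len(gkeys)):
    f = dict(zip(fkeys, bits))
    g = dict(zip(gkeys, bits[len(fkeys):]))
    # the parity condition at every vertex of the grid
    if all((f[i, j] + g[i, j] + f[i, j + 1] + g[i + 1, j]) % 2 == 0
           for i in range(1, m + 1) for j in range(1, n + 1)):
        count += 1

print(count, 2 ** (m + n + m * n))   # 256 256
```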
Now we focus our attention on the number of admissible states with a given set of boundary conditions.
\begin{theorem}\label{sumto0}
There exists an admissible state $(f,g)$ with boundary conditions
\[f_{i,1}, f_{i,n+1},g_{1,j},g_{m+1,j} \]
if and only if
\begin{align}\label{goodBC}
\sum_{i = 1}^m(f_{i,1} + f_{i,n+1}) + \sum_{j = 1}^n(g_{1,j} + g_{m + 1,j}) = 0.
\end{align}
\end{theorem}
\begin{proof}
Consider an admissible state $(f,g)$ with vertical edges $f_{i,j}$ and horizontal edges $g_{i,j}$. By equation (\ref{admissible8}), we have
\begin{align*}
0 &= \sum_{i = 1}^m \sum_{j = 1}^n (f_{i,j} + g_{i,j} + f_{i,j+1} + g_{i+1,j}) \\
&= \sum_{i = 1}^m\sum_{j = 1}^n (f_{i,j} + g_{i,j}) + \sum_{i = 1}^m\sum_{j = 2}^{n + 1} f_{i,j} + \sum_{i = 2}^{m+1}\sum_{j = 1}^n g_{i,j}\\
&= \sum_{i = 1}^m \left(f_{i,1} + \sum_{j = 2}^nf_{i,j} + \sum_{j=1}^ng_{i,j} + f_{i,n+1} + \sum_{j = 2}^n f_{i,j}\right) + \sum_{i = 2}^{m+1}\sum_{j = 1}^n g_{i,j} \\
&= \sum_{i = 1}^m(f_{i,1} + f_{i,n+1}) + \sum_{i = 1}^m\sum_{j = 1}^n g_{i,j} + \sum_{i = 2}^{m+1}\sum_{j=1}^ng_{i,j}\\
&= \sum_{i = 1}^m(f_{i,1} + f_{i,n+1}) + \sum_{j = 1}^n(g_{1,j}+g_{m+1,j}).
\end{align*}
To prove the converse, let \[f_{i,1}, f_{i,n+1},g_{1,j},g_{m+1,j}\]
be prescribed boundary conditions such that equation (\ref{goodBC}) holds. We will find an admissible state $(f,g)$ with these boundary conditions. Let us define
\[g_{i,j} = 0\]
for $2 \leq i \leq m$ and $2 \leq j \leq n$, and
\[g_{i,1} = \sum_{b = 1}^n g_{1,b} + \sum_{a = 1}^{i-1}(f_{a,1}+f_{a,n+1})\]
for $2 \leq i \leq m$. We also define
\[f_{i,j} = f_{i,n+1} \]
for $2 \leq i \leq m - 1$ and $2 \leq j \leq n$,
\[f_{1,j} = f_{1,n+1} + \sum_{b = 1}^{n - j + 1}g_{1,n - b + 1}\]
for $2 \leq j \leq n$, and
\[f_{m,j} = f_{m,n+1} + \sum_{b = 1}^{n - j + 1}g_{m+1,n - b + 1} \]
for $2 \leq j \leq n$. We claim that this defines an admissible state. To see this, we will look at several different cases. If $i = 1$ and $j = 1$, then
\begin{align*}
f_{i,j} + f_{i,j+1} + g_{i,j} + g_{i + 1,j} &= f_{1,1} + \left(f_{1,n+1} + \sum_{b = 1}^{n-1} g_{1,n-b+1}\right)\\&+ g_{1,1} + \left(\sum_{b = 1}^n g_{1,b} + f_{1,1} + f_{1,n+1} \right) \\
&= 0.
\end{align*}
If $i = 1$ and $2 \leq j \leq n - 1$, then
\begin{align*}
f_{i,j} + f_{i,j+1} + g_{i,j} + g_{i + 1,j} &= \left(f_{1,n+1} + \sum_{b = 1}^{n - j + 1}g_{1,n-b+1}\right) + \left(f_{1,n+1} + \sum_{b = 1}^{n - j} g_{1,n - b + 1} \right)+ g_{1,j} + 0\\
&= 0.
\end{align*}
If $i = 1$ and $j = n$, then
\begin{align*}
f_{i,j} + f_{i,j+1} + g_{i,j} + g_{i + 1,j} &= \left(f_{1,n+1} + g_{1,n} \right) + f_{1,n+1} + g_{1,n} + 0\\
&= 0.
\end{align*}
If $2 \leq i \leq m - 1$ and $j = 1$, then
\begin{align*}
f_{i,j} + f_{i,j+1} + g_{i,j} + g_{i + 1,j} &= f_{i,1} + f_{i,n+1} + \left(\sum_{b = 1}^n g_{1,b} + \sum_{a = 1}^{i - 1}(f_{a,1} + f_{a,n+1}) \right) \\&+ \left(\sum_{b = 1}^n g_{1,b}
+ \sum_{a = 1}^i (f_{a,1} + f_{a,n+1})\right)\\
&= 0.
\end{align*}
If $2 \leq i \leq m - 1$ and $2 \leq j \leq n - 1$, then
\begin{align*}
f_{i,j} + f_{i,j+1} + g_{i,j} + g_{i + 1,j} &= f_{i,n+1} + f_{i,n+1} + 0 + 0\\
&= 0.
\end{align*}
If $2 \leq i \leq m - 1$ and $j = n$, then
\begin{align*}
f_{i,j} + f_{i,j+1} + g_{i,j} + g_{i + 1,j} &= f_{i,n+1} + f_{i,n+1} + 0 + 0\\
&= 0.
\end{align*}
If $i = m$ and $j = 1$, then by equation (\ref{goodBC}),
\begin{align*}
f_{i,j} + f_{i,j+1} + g_{i,j} + g_{i + 1,j} &= f_{m,1} + \left(f_{m,n+1} + \sum_{b = 1}^{n - 1}g_{m+1,n-b+1} \right)\\
&+ \left(\sum_{b = 1}^n g_{1,b} + \sum_{a = 1}^{m - 1}(f_{a,1} + f_{a,n+1}) \right) + g_{m+1,1} \\
&= \sum_{b = 1}^{n}(g_{1,b} + g_{m + 1,b}) + \sum_{a = 1}^m (f_{a,1} + f_{a,n+1}) \\
&= 0.
\end{align*}
If $i = m$ and $2 \leq j \leq n - 1$, then
\begin{align*}
f_{i,j} + f_{i,j+1} + g_{i,j} + g_{i + 1,j} &= \left(f_{m,n+1} + \sum_{b = 1}^{n - j + 1}g_{m + 1,n-b+1} \right) + \left(f_{m,n+1} + \sum_{b = 1}^{n - j}g_{m+1,n-b+1} \right)\\
&+0 + g_{m+1,j}\\
&= 0.
\end{align*}
If $i = m$ and $j = n$, then
\begin{align*}
f_{i,j} + f_{i,j+1} + g_{i,j} + g_{i + 1,j} &= \left(f_{m,n+1} + g_{m + 1,n} \right) + f_{m,n+1} + 0 + g_{m + 1,n}\\
&= 0.
\end{align*}
Thus, the state we have defined is admissible.
\end{proof}
\begin{theorem}
Let $f_{i,1}, f_{i,n+1},g_{1,j},g_{m+1,j}\in \mathbb{F}_2$ be values such that
\[\sum_{i = 1}^m(f_{i,1} + f_{i,n+1}) + \sum_{j = 1}^n(g_{1,j} + g_{m + 1,j}) = 0.\] Then the number of admissible states with boundary conditions $f_{i,1}, f_{i,n+1}, g_{1,j},g_{m+1,j}$ is $2^{(m-1)(n-1)}$.
\end{theorem}
\begin{proof}
Let $\mathcal{S}$ denote the set of admissible states with the boundary conditions above, and let $\mathcal{S}_0$ denote the set of admissible states with boundary values all 0. We claim that $\mathcal{S}$ and $\mathcal{S}_0$ have the same number of elements. To see this, note first that $\mathcal{S}$ is non-empty by Theorem \ref{sumto0}, so let $(f^1, g^1)\in\mathcal{S}$. Then we define a map $\psi: \mathcal{S}_0 \to \mathcal{S}$ by
\[\psi(f^0, g^0) = (f^0 + f^1, g^0 + g^1). \]
It is easy to see that $\psi$ is injective. To see that $\psi$ is surjective, suppose that $(f^2, g^2) \in \mathcal{S}$. Then $(f^2 - f^1, g^2 - g^1) \in \mathcal{S}_0$, so $\psi(f^2 - f^1, g^2 - g^1) = (f^2, g^2)$. Thus, $\psi$ is a bijection between $\mathcal{S}_0$ and $\mathcal{S}$. This shows that all sets of boundary conditions having at least one admissible state have the same number of admissible states. It is not difficult to see that the number of sets of boundary conditions satisfying (\ref{goodBC}) is $2^{2m + 2n - 1}$. But by Theorem \ref{8admissible}, there are $2^{m + n + mn}$ admissible states across all boundary conditions. So for a given set of boundary conditions, there are
\[\frac{2^{m + n + mn}}{2^{2m + 2n - 1}} = 2^{(m - 1)(n - 1)} \]
admissible states.
\end{proof}
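Both of the preceding counting statements can be checked exhaustively on a small grid. The sketch below (our own verification, not part of the paper) takes $m = n = 2$, groups the admissible states by their boundary data, and confirms that a boundary is realized exactly when the sum in (\ref{goodBC}) vanishes, in which case it carries exactly $2^{(m-1)(n-1)} = 2$ states.

```python
from itertools import product
from collections import Counter

m, n = 2, 2
fkeys = [(i, j) for i in range(1, m + 1) for j in range(1, n + 2)]
gkeys = [(i, j) for i in range(1, m + 2) for j in range(1, n + 1)]

# Count admissible states per boundary (f_{i,1}, f_{i,n+1}, g_{1,j}, g_{m+1,j}).
per_boundary = Counter()
for bits in product(range(2), repeat=len(fkeys) + len(gkeys)):
    f = dict(zip(fkeys, bits))
    g = dict(zip(gkeys, bits[len(fkeys):]))
    if all((f[i, j] + g[i, j] + f[i, j + 1] + g[i + 1, j]) % 2 == 0
           for i in range(1, m + 1) for j in range(1, n + 1)):
        key = (tuple(f[i, 1] for i in range(1, m + 1)),
               tuple(f[i, n + 1] for i in range(1, m + 1)),
               tuple(g[1, j] for j in range(1, n + 1)),
               tuple(g[m + 1, j] for j in range(1, n + 1)))
        per_boundary[key] += 1

# A boundary is realized iff the parity sum of its labels vanishes.
for bc in product(range(2), repeat=2 * m + 2 * n):
    fl, fr = bc[:m], bc[m:2 * m]
    gt, gb = bc[2 * m:2 * m + n], bc[2 * m + n:]
    solvable = (fl, fr, gt, gb) in per_boundary
    assert solvable == ((sum(fl) + sum(fr) + sum(gt) + sum(gb)) % 2 == 0)

print(len(per_boundary), set(per_boundary.values()))   # 128 {2}
```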
\section{Yang-Baxter Equation for the Eight-Vertex Model}
We continue to study the eight-vertex model. First, we review some basic properties of the Yang-Baxter equation. To each admissible labeling, we assign a value in some field $\mathbb{F}$, which we call its \emph{Boltzmann weight}.
\begin{figure}[H]
\[
\begin{array}{|c|c|c|c|c|c|}
\hline
\gammaice{0}{0}{0}{0} &
\gammaice{1}{1}{1}{1} &
\gammaice{0}{1}{0}{1} &
\gammaice{1}{0}{1}{0}\\
\hline
a_1 & a_{-1} & b_1 & b_{-1}\\
\hline
\gammaice{1}{0}{0}{1} &
\gammaice{0}{1}{1}{0} &
\gammaice{1}{1}{0}{0} &
\gammaice{0}{0}{1}{1}
\\
\hline
c_1 & c_{-1} & d_1 & d_{-1}\\
\hline\end{array}\]
\caption{Admissible labelings, along with their Boltzmann weights.}
\label{weightedbw8}
\end{figure}
We will also assign Boltzmann weights to diagonally oriented vertices.
\begin{figure}[H]
\[
\begin{array}{|c|c|c|c|c|c|}
\hline
\gammagamma{0}{0}{0}{0} &
\gammagamma{1}{1}{1}{1} &
\gammagamma{0}{1}{0}{1} &
\gammagamma{1}{0}{1}{0}\\
\hline
a_1 & a_{-1} & b_1 & b_{-1}\\
\hline
\gammagamma{1}{0}{0}{1} &
\gammagamma{0}{1}{1}{0} &
\gammagamma{1}{1}{0}{0} &
\gammagamma{0}{0}{1}{1}
\\
\hline
c_1 & c_{-1} & d_1 & d_{-1}\\
\hline\end{array}\]
\caption{Admissible labelings, along with their Boltzmann weights, for diagonally oriented vertices.}
\label{weightedbw8diag}
\end{figure}
To each such assignment of Boltzmann weights, we associate a matrix
\[R = \begin{pmatrix} a_1 & & & d_1\\
& b_1 & c_1 & \\
& c_{-1} & b_{-1} &\\
d_{-1} & & & a_{-1}
\end{pmatrix} = \begin{pmatrix} a_1(R) & & & d_1(R)\\
& b_1(R) & c_1(R) & \\
& c_{-1}(R) & b_{-1}(R) &\\
d_{-1}(R) & & & a_{-1}(R)
\end{pmatrix}.\]
Let $V$ be a vector space with basis $v_0$ and $v_1$. Then we may view $R$ as an endomorphism of $V \otimes V$ with respect to the basis $v_0 \otimes v_0$, $v_0 \otimes v_1$, $v_1 \otimes v_0$, and $v_1 \otimes v_1$. We write
\[R(v_\nu \otimes v_\beta) = \sum_{\theta,\gamma}R_{\nu\beta}^{\theta\gamma}v_\theta \otimes v_\gamma.\]
For example, we have $R_{01}^{10} = c_{-1}(R)$. Observe that $R_{\nu\beta}^{\theta\gamma}$ is the Boltzmann weight of the following labelings.
\begin{figure}[H]
\[
\gammagamma{\nu}{\beta}{\theta}{\gamma} \qquad \gammaice{\nu}{\beta}{\theta}{\gamma}\]
\label{bwcoeff}
\end{figure}
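In coordinates, this extraction is just a matter of reading matrix entries. A small sketch (ours; the sample weights are arbitrary placeholders):

```python
# R in the ordered basis v0⊗v0, v0⊗v1, v1⊗v0, v1⊗v1, with arbitrary
# sample weights (a1, a-1, b1, b-1, c1, c-1, d1, d-1) = (1, ..., 8).
a1, am1, b1, bm1, c1, cm1, d1, dm1 = 1, 2, 3, 4, 5, 6, 7, 8
R = [[a1, 0, 0, d1],
     [0, b1, c1, 0],
     [0, cm1, bm1, 0],
     [dm1, 0, 0, am1]]

def coeff(M, nu, beta, theta, gamma):
    # R_{nu beta}^{theta gamma}: the column encodes the input pair (nu, beta),
    # the row encodes the output pair (theta, gamma).
    return M[2 * theta + gamma][2 * nu + beta]

print(coeff(R, 0, 1, 1, 0))   # R_{01}^{10} = c_{-1} -> 6
```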
Now if $\phi$ is an endomorphism of $V \otimes V$, we define endomorphisms $\phi_{12}, \phi_{23}, \phi_{13}$ of $V\otimes V \otimes V$ as follows. If $\phi = \phi' \otimes \phi''$ for $\phi', \phi'' \in \text{End}(V)$, then we define
\begin{align*}
\phi_{12} &= \phi' \otimes \phi'' \otimes 1\\
\phi_{13} &= \phi' \otimes 1 \otimes \phi''\\
\phi_{23} &= 1 \otimes \phi' \otimes \phi''.
\end{align*}
We extend these definitions to all $\phi$ by linearity. For $\phi, \psi, \chi \in \text{End}(V\otimes V)$, we define their \emph{Yang-Baxter commutator} $[[\phi,\psi,\chi]]$ by
\[[[\phi,\psi,\chi]] = \phi_{12}\psi_{13}\chi_{23} - \chi_{23}\psi_{13}\phi_{12}.\]
For $R, S, T \in \text{End}(V \otimes V)$, we say that the \emph{star-triangle relation} holds if
\begin{equation}
\label{eqn:ybe}
\hfill
\sum_{\gamma,\mu,\nu}\quad
\begin{tikzpicture}[baseline=(current bounding box.center)]
\draw (0,1) to [out = 0, in = 180] (2,3) to (4,3);
\draw (0,3) to [out = 0, in = 180] (2,1) to (4,1);
\draw (3,0) to (3,4);
\draw[fill=white] (0,1) circle (.3);
\draw[fill=white] (0,3) circle (.3);
\draw[fill=white] (3,4) circle (.3);
\draw[fill=white] (4,3) circle (.3);
\draw[fill=white] (4,1) circle (.3);
\draw[fill=white] (3,0) circle (.3);
\draw[fill=white] (2,3) circle (.3);
\draw[fill=white] (2,1) circle (.3);
\draw[fill=white] (3,2) circle (.3);
\node at (0,1) {$\sigma$};
\node at (0,3) {$\tau$};
\node at (3,4) {$\beta$};
\node at (4,3) {$\theta$};
\node at (4,1) {$\rho$};
\node at (3,0) {$\alpha$};
\node at (2,3) {$\nu$};
\node at (3,2) {$\gamma$};
\node at (2,1) {$\mu$};
\path[fill=white] (3,3) circle (.3);
\node at (3,3) {$S$};
\path[fill=white] (3,1) circle (.3);
\node at (3,1) {$T$};
\path[fill=white] (1,2) circle (.3);
\node at (1,2) {$R$};
\end{tikzpicture}\quad
= \sum_{\delta,\phi,\psi}\quad
\begin{tikzpicture}[baseline=(current bounding box.center)]
\draw (0,1) to (2,1) to [out = 0, in = 180] (4,3);
\draw (0,3) to (2,3) to [out = 0, in = 180] (4,1);
\draw (1,0) to (1,4);
\draw[fill=white] (0,1) circle (.3);
\draw[fill=white] (0,3) circle (.3);
\draw[fill=white] (1,4) circle (.3);
\draw[fill=white] (4,3) circle (.3);
\draw[fill=white] (4,1) circle (.3);
\draw[fill=white] (1,0) circle (.3);
\draw[fill=white] (2,3) circle (.3);
\draw[fill=white] (1,2) circle (.3);
\draw[fill=white] (2,1) circle (.3);
\node at (0,1) {$\sigma$};
\node at (0,3) {$\tau$};
\node at (1,4) {$\beta$};
\node at (4,3) {$\theta$};
\node at (4,1) {$\rho$};
\node at (1,0) {$\alpha$};
\node at (2,3) {$\psi$};
\node at (1,2) {$\delta$};
\node at (2,1) {$\phi$};
\path[fill=white] (1,3) circle (.3);
\node at (1,3) {$T$};
\path[fill=white] (1,1) circle (.3);
\node at (1,1) {$S$};
\path[fill=white] (3,2) circle (.3);
\node at (3,2) {$R$};
\end{tikzpicture}\quad.
\end{equation}
Here we associate a lattice state with the product of the Boltzmann weights of its vertices. In other words, the star-triangle relation holds when
\[\sum_{\gamma,\mu,\nu} R_{\sigma\tau}^{\nu\mu}S_{\nu\beta}^{\theta\gamma}T_{\mu\gamma}^{\rho\alpha} = \sum_{\delta,\phi,\psi}T_{\tau\beta}^{\psi\delta}S_{\sigma\delta}^{\phi\alpha}R_{\phi\psi}^{\theta\rho}.\]
The main fact we will use in order to do computations is that the star-triangle relation is equivalent to the vanishing of the Yang-Baxter commutator. For a proof of this statement, see \cite{BBF}.
\begin{lemma}
Let $R, S, T \in \operatorname{End}(V\otimes V)$. Then the star-triangle relation holds for $R,S,T$ if and only if $[[R,S,T]] = 0$.
\end{lemma}
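The index bookkeeping behind this equivalence can be tested numerically. In the sketch below (an independent check with random integer weights, which of course need not satisfy the star-triangle relation themselves) we realize $R_{12}$, $S_{13}$, $T_{23}$ as $8 \times 8$ matrices and verify that the entries of $T_{23}S_{13}R_{12}$ and $R_{12}S_{13}T_{23}$ reproduce the two sums in the star-triangle relation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Generic 4x4 integer matrices standing for R, S, T in End(V ⊗ V);
# the column index encodes the input pair (nu, beta) as 2*nu + beta.
R, S, T = (rng.integers(0, 5, size=(4, 4)) for _ in range(3))

I2 = np.eye(2, dtype=int)

def emb12(M):   # M acting on tensor factors 1 and 2 of V ⊗ V ⊗ V
    return np.kron(M, I2)

def emb23(M):   # M acting on factors 2 and 3
    return np.kron(I2, M)

def emb13(M):   # M acting on factors 1 and 3
    out = np.zeros((8, 8), dtype=int)
    for a, b, c, ap, cp in np.ndindex(2, 2, 2, 2, 2):
        out[4 * ap + 2 * b + cp, 4 * a + 2 * b + c] = M[2 * ap + cp, 2 * a + c]
    return out

lhs = emb23(T) @ emb13(S) @ emb12(R)   # R_12 applied first
rhs = emb12(R) @ emb13(S) @ emb23(T)   # T_23 applied first

# Compare one entry of lhs with the explicit star-triangle sum.
sigma, tau, beta, theta, rho, alpha = 0, 1, 1, 1, 0, 1
total = sum(R[2 * nu + mu, 2 * sigma + tau] * S[2 * theta + gamma, 2 * nu + beta]
            * T[2 * rho + alpha, 2 * mu + gamma]
            for nu in range(2) for mu in range(2) for gamma in range(2))
print(total == lhs[4 * theta + 2 * rho + alpha, 4 * sigma + 2 * tau + beta])   # True
```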
Given matrices $S$ and $T$, we will determine necessary conditions for there to exist $R$ such that $[[R,S,T]] = 0$. We will follow the approach of Galleas and Martins \cite{GalleasMartins}. Our analysis will be more general, since we do not assume that the Boltzmann weights $c_1, c_{-1}, d_1, d_{-1}$ satisfy $c_1 = c_{-1}$ and $d_1 = d_{-1}$. Suppose that $R, S, T \in \text{End}(V \otimes V)$ are Boltzmann weights such that $[[R,S,T]] = 0$. Moreover, assume that $a_1(S)$, $b_1(S)$, $c_1(S)$, $d_1(S)$, $a_{-1}(S)$, $b_{-1}(S)$, $c_{-1}(S)$, $d_{-1}(S)$, $a_1(T)$, $b_1(T)$, $c_1(T)$, $d_1(T)$, $a_{-1}(T)$, $b_{-1}(T)$, $c_{-1}(T)$, $d_{-1}(T)$ are nonzero. A computation shows that the condition $[[R,S,T]] = 0$ can be expressed as the system of 28 equations
\begin{align}
a_j(T)a_j(S)d_i(R) + d_i(T)c_i(S)a_{-j}(R) &= c_i(T)d_i(S)a_j(R) + b_{-j}(T)b_{-j}(S)d_i(R)\label{eqn:1}\\
d_i(T)b_j(S)c_i(R) + a_j(T)d_i(S)b_{-j}(R) &= b_j(T)d_i(S)a_j(R) + c_{-i}(T)b_{-j}(S)d_i(R)\label{eqn:2}\\
d_{i}(T)b_j(S)b_j(R) + a_j(T)d_{i}(S)c_{-i}(R) &= d_{i}(T)a_j(S)a_j(R) + a_{-j}(T)c_{-i}(S)d_{i}(R)\label{eqn:3}\\
c_i(T)a_j(S)c_i(R) + b_j(T)c_i(S)b_{-j}(R) &= a_j(T)c_i(S)a_j(R) + d_{-i}(T)a_{-j}(S)d_{i}(R)\label{eqn:4}\\
c_i(T)a_j(S)b_j(R) + b_j(T)c_i(S)c_{-i}(R) &= c_i(T)b_j(S)a_j(R) + b_{-j}(T)d_{-i}(S)d_i(R)\label{eqn:5}\\
b_{-j}(T)a_j(S)c_i(R) + c_{-i}(T)c_i(S)b_{-j}(R) &= d_{-i}(T)d_i(S)b_j(R) + a_j(T)b_{-j}(S)c_i(R)\label{eqn:6}\\
c_1(T)c_{-1}(S)c_1(R) &= c_{-1}(T)c_1(S)c_{-1}(R)\label{eqn:7}\\
d_1(T)c_1(S)d_{-1}(R) &= d_{-1}(T)c_{-1}(S)d_1(R)\label{eqn:8}\\
c_1(T)d_1(S)d_{-1}(R) &= c_{-1}(T)d_{-1}(S)d_1(R)\label{eqn:9}\\
d_1(T)d_{-1}(S)c_1(R) &= d_{-1}(T)d_1(S)c_{-1}(R)\label{eqn:10}
\end{align}
for $i,j \in \{-1, 1\}$. By equation $(k, r, s)$, we mean equation $k$ with $i=r$ and $j=s$ substituted in. Solving for $a_{-j}(R)$ in (\ref{eqn:4},$i,-j$) gives us
\begin{align}
a_{-j}(R) = \frac{c_{i}(T)a_{-j}(S)c_{i}(R) + b_{-j}(T)c_{i}(S)b_{j}(R) - d_{-i}(T)a_{j}(S)d_{i}(R)}{a_{-j}(T)c_{i}(S)} \label{eqn:11}
\end{align}
and solving for $a_{j}(R)$ in (\ref{eqn:5},$i,j$) gives us
\begin{align}
a_j(R) = \frac{c_i(T)a_j(S)b_j(R)+b_j(T)c_i(S)c_{-i}(R)-b_{-j}(T)d_{-i}(S)d_i(R)}{c_i(T)b_j(S)}. \label{eqn:12}
\end{align}
Substituting (\ref{eqn:11},$i,j$) and (\ref{eqn:12},$i,j$) into (\ref{eqn:1},$i,j$) yields
\begin{align*}
&a_j(T)a_j(S)d_i(R)\\&+ d_i(T)c_i(S)\left(\frac{c_{i}(T)a_{-j}(S)c_{i}(R) + b_{-j}(T)c_{i}(S)b_{j}(R) - d_{-i}(T)a_{j}(S)d_{i}(R)}{a_{-j}(T)c_{i}(S)}\right)\\
&= c_i(T)d_i(S)\left(\frac{c_i(T)a_j(S)b_j(R)+b_j(T)c_i(S)c_{-i}(R)-b_{-j}(T)d_{-i}(S)d_i(R)}{c_i(T)b_j(S)}\right)\\&+b_{-j}(T)b_{-j}(S)d_i(R).
\end{align*}
Rearranging this equation gives us
\begin{align}
&b_j(R)[d_i(T)b_{-j}(T)c_i(S)b_j(S) - d_i(S)c_i(T)a_j(S)a_{-j}(T)] = -c_{i}(R)[d_i(T)b_j(S)c_i(T)a_{-j}(S)]\nonumber\\
&+c_{-i}(R)[d_i(S)b_j(T)c_i(S)a_{-j}(T)] + d_i(R)[a_{-j}(T)b_{-j}(T)[b_j(S)b_{-j}(S) - d_i(S)d_{-i}(S)]\nonumber\\
&+a_j(S)b_j(S)[d_i(T)d_{-i}(T) - a_j(T)a_{-j}(T)]].\label{eqn:13}
\end{align}
Equation (\ref{eqn:7},$i,j$) implies that
\begin{align}
c_{-i}(R) &= c_i(R)\frac{c_i(T)c_{-i}(S)}{c_{-i}(T)c_i(S)}.\label{eqn:14}
\end{align}
Substituting this expression for $c_{-i}(R)$ into (\ref{eqn:13},$i,j$) gives us
\begin{align}
&b_j(R)[d_i(T)b_{-j}(T)c_i(S)b_j(S) - d_i(S)c_i(T)a_j(S)a_{-j}(T)]\nonumber\\
&=c_{i}(R)\left[\frac{-c_{-i}(T)d_i(T)b_j(S)c_i(T)a_{-j}(S)+c_i(T)c_{-i}(S)d_i(S)b_j(T)a_{-j}(T)}{c_{-i}(T)}\right]\nonumber\\&+ d_i(R)[a_{-j}(T)b_{-j}(T)[b_j(S)b_{-j}(S) - d_i(S)d_{-i}(S)]\nonumber\\
&+a_j(S)b_j(S)[d_i(T)d_{-i}(T) - a_j(T)a_{-j}(T)]]\label{eqn:15}
\end{align}
We repeat the above process for equations \ref{eqn:2} and \ref{eqn:3}. Solving for $a_{-j}(R)$ in (\ref{eqn:2},$i,-j$) gives us
\begin{align}
a_{-j}(R) &= \frac{d_i(T)b_{-j}(S)c_i(R) + a_{-j}(T)d_i(S)b_{j}(R) - c_{-i}(T)b_{j}(S)d_i(R)}{b_{-j}(T)d_i(S)}\label{eqn:16}
\end{align}
and solving for $a_{j}(R)$ in (\ref{eqn:3},$i,j$) gives us
\begin{align}
a_{j}(R) &= \frac{d_i(T)b_{j}(S)b_{j}(R) + a_{j}(T)d_i(S)c_{-i}(R) - a_{-j}(T)c_{-i}(S)d_i(R)}{d_i(T)a_{j}(S)}.\label{eqn:17}
\end{align}
Substituting (\ref{eqn:16},$i,j$) and (\ref{eqn:17},$i,j$) into (\ref{eqn:1},$i,-j$) yields
\begin{align*}
&a_{-j}(T)a_{-j}(S)d_i(R) \\&+d_i(T)c_i(S)\left(\frac{d_i(T)b_{j}(S)b_{j}(R) + a_{j}(T)d_i(S)c_{-i}(R) - a_{-j}(T)c_{-i}(S)d_i(R)}{d_i(T)a_{j}(S)}\right)\nonumber\\
&=c_i(T)d_i(S)\left(\frac{d_i(T)b_{-j}(S)c_i(R) + a_{-j}(T)d_i(S)b_{j}(R) - c_{-i}(T)b_{j}(S)d_i(R)}{b_{-j}(T)d_i(S)}\right)\nonumber\\
&+b_{j}(T)b_{j}(S)d_i(R).
\end{align*}
Rearranging this equation gives us
\begin{align}
&b_j(R)[c_i(S)d_i(T)b_j(S)b_{-j}(T) - c_i(T)a_{-j}(T)d_i(S)a_j(S)] = c_i(R)[c_i(T)d_i(T)b_{-j}(S)a_j(S)]\nonumber\\
&-c_{-i}(R)[c_i(S)a_j(T)d_i(S)b_{-j}(T)]+d_i(R)[a_{-j}(T)b_{-j}(T)[c_i(S)c_{-i}(S) - a_j(S)a_{-j}(S)]\nonumber\\&+ a_j(S)b_j(S)[b_j(T)b_{-j}(T) - c_i(T)c_{-i}(T)]].\label{eqn:18}
\end{align}
Substituting (\ref{eqn:14},$i,j$) into (\ref{eqn:18},$i,j$) yields
\begin{align}
&b_j(R)[c_i(S)d_i(T)b_j(S)b_{-j}(T) - c_i(T)a_{-j}(T)d_i(S)a_j(S)]\nonumber\\&=c_i(R)\left[\frac{c_{-i}(T)c_i(T)d_i(T)b_{-j}(S)a_j(S)-c_i(T)c_{-i}(S)a_j(T)d_i(S)b_{-j}(T)}{c_{-i}(T)}\right]\nonumber\\&+d_i(R)[a_{-j}(T)b_{-j}(T)[c_i(S)c_{-i}(S) - a_j(S)a_{-j}(S)]\nonumber\\&+ a_j(S)b_j(S)[b_j(T)b_{-j}(T) - c_i(T)c_{-i}(T)]].\label{eqn:19}
\end{align}
Since the left hand sides of (\ref{eqn:15},$i,j$) and (\ref{eqn:19},$i,j$) are identical, setting their right hand sides equal to each other gives us
\begin{align}
&\frac{c_i(R)c_i(T)}{c_{-i}(T)}[c_{-i}(T)d_i(T)[b_{-j}(S)a_j(S)+b_j(S)a_{-j}(S)]\nonumber\\
&- c_{-i}(S)d_i(S)[a_j(T)b_{-j}(T)+b_j(T)a_{-j}(T)]]\nonumber\\
&= d_i(R)[a_{-j}(T)b_{-j}(T)F(S)-a_j(S)b_j(S)F(T)] \label{eqn:20}
\end{align}
where
\[F(\psi) = a_1(\psi)a_{-1}(\psi) + b_1(\psi)b_{-1}(\psi) - c_1(\psi)c_{-1}(\psi) - d_1(\psi)d_{-1}(\psi). \]
Now we repeat everything above, except with equation (\ref{eqn:6}) in place of equation (\ref{eqn:1}), and the variables $b$ instead of $a$. We can rewrite equation (\ref{eqn:2},$i,j$) as
\begin{align}
d_i(S)b_{-j}(R) &= \frac{b_{j}(T)d_i(S)a_{j}(R) + c_{-i}(T)b_{-j}(S)d_i(R) - d_i(T)b_{j}(S)c_i(R)}{a_{j}(T)}\label{eqn:21}
\end{align}
and equation (\ref{eqn:5},$-i,j$) as
\begin{align}
c_{-i}(T)b_{j}(R) = \frac{c_{-i}(T)b_{j}(S)a_{j}(R) + b_{-j}(T)d_i(S)d_{-i}(R) - b_{j}(T)c_{-i}(S)c_{i}(R)}{a_{j}(S)}.\label{eqn:22}
\end{align}
Substituting equations (\ref{eqn:21},$i,j$) and (\ref{eqn:22},$i,j$) into (\ref{eqn:6},$i,-j$) yields
\begin{align}
&a_{j}(R)[c_i(S)c_{-i}(T)b_{j}(S)a_{j}(T) - d_{-i}(T)b_{j}(T)d_i(S)a_{j}(S)]\nonumber\\
&=c_i(R)[a_{j}(T)b_{j}(T)[c_i(S)c_{-i}(S) - a_{-j}(S)a_{j}(S)] +a_{j}(S)b_{j}(S)[a_{-j}(T)a_{j}(T) - d_i(T)d_{-i}(T)]]\nonumber\\
&+ d_i(R)[d_{-i}(T)c_{-i}(T)b_{-j}(S)a_{j}(S)] - d_{-i}(R)[c_i(S)b_{-j}(T)d_i(S)a_{j}(T)].\label{eqn:23}
\end{align}
Equation (\ref{eqn:8},$i$) implies that
\begin{align}
d_{-i}(R) = d_i(R)\frac{d_{-i}(T)c_{-i}(S)}{d_i(T)c_i(S)}.\label{eqn:24}
\end{align}
Substituting this expression into (\ref{eqn:23},$i,j$) gives us
\begin{align}
&a_{j}(R)[c_i(S)c_{-i}(T)b_{j}(S)a_{j}(T) - d_{-i}(T)b_{j}(T)d_i(S)a_{j}(S)]\nonumber\\
&=c_i(R)[a_{j}(T)b_{j}(T)[c_i(S)c_{-i}(S) - a_{-j}(S)a_{j}(S)] +a_{j}(S)b_{j}(S)[a_{-j}(T)a_{j}(T) - d_i(T)d_{-i}(T)]]\nonumber\\
&+ \frac{d_i(R)}{d_i(T)}[d_i(T)d_{-i}(T)c_{-i}(T)b_{-j}(S)a_j(S)-d_{-i}(T)c_{-i}(S)b_{-j}(T)d_i(S)a_j(T)].\label{eqn:25}
\end{align}
We can rewrite equation (\ref{eqn:3},$-i,j$) as
\begin{align}
d_{-i}(T)b_j(R) = \frac{d_{-i}(T)a_j(S)a_j(R) + a_{-j}(T)c_{i}(S)d_{-i}(R) - a_j(T)d_{-i}(S)c_{i}(R)}{b_j(S)}\label{eqn:26}
\end{align}
and equation (\ref{eqn:4},$i,j$) as
\begin{align}
c_i(S)b_{-j}(R) = \frac{a_j(T)c_i(S)a_j(R) + d_{-i}(T)a_{-j}(S)d_i(R) - c_i(T)a_j(S)c_i(R)}{b_j(T)}\label{eqn:27}.
\end{align}
Substituting equations (\ref{eqn:26},$i,j$) and (\ref{eqn:27},$i,j$) into (\ref{eqn:6},$i,j$) gives us
\begin{align}
&a_j(R)[c_{-i}(T)a_j(T)c_i(S)b_j(S) - d_i(S)d_{-i}(T)a_j(S)b_j(T)]\nonumber\\
&= c_i(R)[a_j(S)b_j(S)[c_i(T)c_{-i}(T) - b_j(T)b_{-j}(T)] + a_j(T)b_j(T)[b_j(S)b_{-j}(S)-d_i(S)d_{-i}(S)]]\nonumber\\
&+ d_{-i}(R)[d_i(S)a_{-j}(T)c_i(S)b_j(T)] - d_i(R)[c_{-i}(T)d_{-i}(T)a_{-j}(S)b_j(S)].\label{eqn:28}
\end{align}
Substituting equation (\ref{eqn:24},$i,j$) into (\ref{eqn:28},$i,j$) gives us
\begin{align}
&a_j(R)[c_{-i}(T)a_j(T)c_i(S)b_j(S) - d_i(S)d_{-i}(T)a_j(S)b_j(T)]\nonumber\\
&= c_i(R)[a_j(S)b_j(S)[c_i(T)c_{-i}(T) - b_j(T)b_{-j}(T)] + a_j(T)b_j(T)[b_j(S)b_{-j}(S)-d_i(S)d_{-i}(S)]]\nonumber\\
&+ \frac{d_i(R)}{d_i(T)}[d_{-i}(T)c_{-i}(S)d_i(S)a_{-j}(T)b_j(T) - d_i(T)c_{-i}(T)d_{-i}(T)a_{-j}(S)b_j(S)].\label{eqn:29}
\end{align}
Since equations (\ref{eqn:25},$i,j$) and (\ref{eqn:29},$i,j$) have the same left hand side, we have
\begin{align}
&c_i(R)[a_j(T)b_j(T)F(S) - a_j(S)b_j(S)F(T)]\nonumber\\
&= \frac{d_i(R)d_{-i}(T)}{d_i(T)}[c_{-i}(T)d_i(T)[b_{-j}(S)a_j(S) + a_{-j}(S)b_j(S)] - c_{-i}(S)d_i(S)[b_{-j}(T)a_j(T) + a_{-j}(T)b_j(T)]].\label{eqn:30}
\end{align}
Let
\begin{align*}
&G_i(S,T) = [ c_{-i}(T)d_i(T)[b_{-1}(S)a_1(S)+ a_{-1}(S)b_1(S)]- c_{-i}(S)d_i(S)[b_{-1}(T)a_1(T)+a_{-1}(T)b_1(T)]].
\end{align*}
Our calculations above allow us to prove the following condition.
\begin{theorem}
Let $S, T \in \operatorname{End}(V\otimes V)$. Suppose that there exists $R \in \operatorname{End}(V\otimes V)$ such that $c_{-1}(R),c_1(R),d_{-1}(R),d_1(R)$ are nonzero and $[[R,S,T]] = 0$. Then, for $i \in \{-1,1\}$, we have \begin{align}
a_1(T)b_1(T)F(S) &= a_{-1}(T)b_{-1}(T)F(S)\label{condition1}\\
a_1(S)b_1(S)F(T) &= a_{-1}(S)b_{-1}(S)F(T)\label{condition2}\\
\frac{c_i(T)d_{-i}(T)}{c_{-i}(T)d_i(T)}G_i(S,T)^2 &= [a_1(T)b_1(T)F(S) - a_1(S)b_1(S)F(T)]^2\label{condition3}\\
\frac{c_1(T)c_{-1}(S)}{c_{-1}(T)c_1(S)} &= \frac{d_1(T)d_{-1}(S)}{d_{-1}(T)d_1(S)}.\label{condition4}
\end{align}
\end{theorem}
\begin{proof}
We write the system of equations given by $(\ref{eqn:20})$ and $(\ref{eqn:30})$ as the matrix equation
\[\begin{pmatrix}
\frac{c_i(T)}{c_{-i}(T)}G_i(S,T) && - \alpha_1(S,T)\\
\frac{c_i(T)}{c_{-i}(T)}G_i(S,T) && -\alpha_{-1}(S,T)\\
\beta_1(S,T) && -\frac{d_{-i}(T)}{d_i(T)}G_i(S,T)\\
\beta_{-1}(S,T) && -\frac{d_{-i}(T)}{d_i(T)}G_i(S,T)
\end{pmatrix}\begin{pmatrix}
c_i(R)\\d_i(R)
\end{pmatrix} = 0,
\]
where
\[\alpha_j(S,T) = a_j(T)b_j(T)F(S) - a_{-j}(S)b_{-j}(S)F(T) \]
and
\[\beta_j(S,T) = a_j(T)b_j(T)F(S) - a_j(S)b_j(S)F(T).\]
First we observe that if $G_i(S,T) = 0$, then we must have $\alpha_1(S,T) = \alpha_{-1}(S,T) = \beta_1(S,T) = \beta_{-1}(S,T) = 0$ in order for both $c_i(R)$ and $d_i(R)$ to be nonzero. It is easy to see that this implies equations (\ref{condition1}), (\ref{condition2}), and (\ref{condition3}). Now suppose that $G_i(S,T) \neq 0$. A necessary condition for one of $c_{-1}(R)$, $c_1(R)$, $d_{-1}(R)$, and $d_1(R)$ to be nonzero is for all six of the $2 \times 2$ minors of the above matrix to vanish for $i = 1,-1$. The vanishing of the minors of the rows $(1,2)$, $(3,4)$, $(1,3)$ can be rewritten as
\begin{align}
\alpha_1(S,T) &= \alpha_{-1}(S,T)\label{eqn:35}\\
\beta_1(S,T) &= \beta_{-1}(S,T)\label{eqn:36}\\
\frac{c_i(T)d_{-i}(T)}{c_{-i}(T)d_i(T)}G_i(S,T)^2 &= \alpha_1(S,T)\beta_1(S,T).\label{eqn:37}
\end{align}
By combining equations (\ref{eqn:35}) and (\ref{eqn:36}), we obtain conditions (\ref{condition1}) and (\ref{condition2}). Then, using equations (\ref{eqn:37}) and (\ref{condition2}), we get
\begin{align*}
\frac{c_i(T)d_{-i}(T)}{c_{-i}(T)d_i(T)}G_i(S,T)^2 &= [a_1(T)b_1(T)F(S) - a_{-1}(S)b_{-1}(S)F(T)][a_1(T)b_1(T)F(S) - a_1(S)b_1(S)F(T)]\\
&= [a_1(T)b_1(T)F(S) - a_1(S)b_1(S)F(T)][a_1(T)b_1(T)F(S) - a_1(S)b_1(S)F(T)],
\end{align*}
which shows that (\ref{condition3}) holds. So in all cases, conditions (\ref{condition1}), (\ref{condition2}), and (\ref{condition3}) hold.
To get the last equation, we combine equations (\ref{eqn:7},$i$) and (\ref{eqn:10},$i$).
\end{proof}
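As an aside, the linear-algebra step underpinning the proof — a $4\times 2$ coefficient matrix admits a nontrivial null vector $(c_i(R), d_i(R))$ exactly when all six of its $2\times 2$ minors vanish — can be sanity-checked numerically. The sketch below is illustrative only; the matrix entries and the null vector are made up for the example.

```python
from fractions import Fraction
from itertools import combinations

def minors_2x2(M):
    """All six 2x2 minors of a 4x2 matrix (one per pair of rows)."""
    return [M[i][0] * M[j][1] - M[i][1] * M[j][0]
            for i, j in combinations(range(4), 2)]

# Concrete rank-1 instance: every row proportional to (1, 2), standing
# in for the coefficient matrix acting on (c_i(R), d_i(R)).
M = [[Fraction(1), Fraction(2)],
     [Fraction(3), Fraction(6)],
     [Fraction(-2), Fraction(-4)],
     [Fraction(5), Fraction(10)]]

print(minors_2x2(M))   # all six minors vanish: rank <= 1

# A nontrivial null vector then exists; from the first row,
# (c, d) = (-2, 1) solves every equation simultaneously.
c, d = Fraction(-2), Fraction(1)
print(all(row[0] * c + row[1] * d == 0 for row in M))   # True
```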
\section{Future Directions}
A main theme of this paper was interpreting properties of lattice models in terms of differential forms. We have only scratched the surface of the problems this technique can be applied to. In particular, we restricted our attention to lattice models without weights assigned at each vertex. A first step toward understanding weighted partition functions in this viewpoint would be to find an interpretation of the Yang-Baxter equation using differential forms.
Another potential area of further investigation is to describe interesting families of weights for which the eight-vertex Yang-Baxter equation holds. This has been explored in the case $c_1 = c_{-1}$ and $d_1 = d_{-1}$ by Cuerno et al.\ \cite{cuerno}. It is natural to ask if the weights can be generalized to families where $c_1 \neq c_{-1}$ or $d_1 \neq d_{-1}$.
\section{Acknowledgements}
This research was conducted at the 2020 University of Minnesota Twin Cities REU with the support of the NSF RTG grant DMS-1745638. I would like to thank Ben Brubaker and Claire Frechette for their mentorship and support.
\bibliographystyle{amsplain}
\section{Introduction}
\label{S:intro}
The ability to non-destructively measure local lattice distortion effects in crystalline materials with X-ray microscopy enables connections to be made between nano- and micro-scale structure with macroscopic properties under working conditions, thus playing an important role in advancing materials science and engineering, chemistry, and solid state physics.
For example, recent successful studies include characterizing electro-catalytic processes \emph{in situ} \cite{Ulvestad2015,Ulvestad2015a}, informing the design of structural, functional, and quantum materials \cite{Hruszkewycz2012,Highland2017,Hruszkewycz2018}, and testing and validating theoretical models of polycrystals under various real-world stimuli\cite{Suter2008,Schuren2015}.
Radiation of different energies in the hard X-ray regime is employed for a variety of such diffraction-based measurements at synchrotron light sources.
At the lower end of this energy range (7-15 keV) where coherent illumination is readily achieved, scattering methods such as Bragg coherent diffractive imaging (BCDI) and Bragg ptychography allow the measurement of the local lattice strain in single crystals with nanoscale spatial resolution \cite{Robinson2001,Miao2015,Robinson2009,Hruszkewycz2012,Hruszkewycz2017a}, and such approaches have recently been demonstrated for imaging individual grains in a polycrystal \cite{Yau2017}.
However, because the penetration length of these X-ray photons is typically a few tens of micrometers in dense solids, coherent diffraction imaging methods cannot interrogate grains deep within polycrystalline bulks and are limited to surface-facing grains~\cite{Vaxelaire2014,Yau2017,Cherukara2018}.
At the other end of the hard X-ray energy range (higher than 50 keV), high-energy diffraction microscopy (HEDM) techniques that do not rely on beam coherence permit the imaging and volume-averaged strain characterization of thousands of grains in a polycrystalline bulk of up to a millimeter in size\cite{Suter2006,Bernier2011}.
Querying such large samples in this manner becomes possible owing to
the higher penetration at high X-ray energies, and the current state of the art in HEDM allows the average strain state, lattice orientation, unit cell structure, and morphology of individual grains that make up the polycrystal to be determined at a spatial resolution of about $1.5~\mu$m~\cite{Bernier2011}.
Presently HEDM and BCDI have their respective places in materials research,
and combining the two techniques into a single multi-component microscopy approach presents a tremendous new opportunity.
A composite measurement scheme that draws from the strengths of both methods could potentially open the door to a new range of possibilities in materials characterization by enabling strain imaging of bulk materials with highly intricate crystallographies and domain morphologies, resolved from millimeters to nanometers.
Measurements of this kind will enhance our understanding of grain and interface dynamics in solids by enabling access to length scales that elucidate a variety of lattice distortion features (such as macroscopic stress concentration sites down to individual dislocations), and would present immensely valuable insights.
One important example is that of the mechanical behavior of bulk polycrystalline structural materials, for which fully descriptive models of deformation and failure have not been fully realized despite decades of study~\cite{Kapoor2018}.
Further progress towards this and other important materials questions depends on \emph{in situ} experimental methods that can measure structural processes over many length scales, as could be realized by combining the HEDM and BCDI modalities.
Several critical steps towards the unification of BCDI and HEDM are demonstrated in this article, anticipating the capabilities of fourth-generation light sources coming online worldwide now and in the near future~\cite{Barber2014}.
These new synchrotron light sources will provide a several-hundred-fold increase in coherent X-ray flux as compared to today's facilities, making high-energy BCDI practical and routine.
In this context, a high-throughput materials characterization capability that makes use of high X-ray energies and that consists of combined and fully integrated HEDM and BCDI modalities will be realizable.
We present proofs-of-concept of several elements of such a measurement using X-rays with an energy $52$ keV at the Advanced Photon Source, a third generation 7 GeV synchrotron equipped with a superconducting undulator for enhanced photon flux~\cite{Ivanyushenkov2017}.
Several key aspects are established:
\begin{enumerate}
\item With the appropriate adaptation of HEDM X-ray optics, it is possible to implement high-energy BCDI (HE-BCDI) on an isolated sub-micron-scale crystal at 52 keV, provided partial coherence effects of the X-ray beam are accounted for.
\item It is possible to integrate far-field HEDM (ff-HEDM) and HE-BCDI and to make these measurements in succession on individual grains in a polycrystalline material.
\item In a polycrystal, the orientation information of each grain within a broad illumination footprint that is obtained from ff-HEDM can be used to efficiently perform HE-BCDI on multiple Bragg reflections of a single grain within a wide window of reciprocal space, a capability that is extremely difficult to realize in a standalone BCDI measurement.
\end{enumerate}
These demonstrations, detailed below, lay the foundation for a flexible new multi-modal means of studying polycrystalline materials in situations where the long penetration depths of high energy X-rays is critical.
\section{Demonstration of HE-BCDI}
\label{SS.hebcdidemo}
The ability to perform HE-BCDI measurements with a partially coherent high-energy beam is a key requirement for the eventual integration of the HEDM and BCDI measurement modalities.
To this end, one of the goals of our study was to develop an experimental configuration suitable for HE-BCDI at a third generation synchrotron, by imaging an isolated nanocrystal obtained by de-wetting a gold film on a silicon substrate.
A simplified schematic of the X-ray optical configuration used for this purpose is shown in Fig.~\ref{fig:expSchematic}(a), which yielded a coherence volume and detector geometry suitable for HE-BCDI imaging at the 1-ID-E end station of the Advanced Photon Source (see Supplementary Material, Section 1).
The critical components needed to realize this were white beam slits opened to $0.5$ mm $\times~0.5$ mm, a collimating lens, a high-resolution monochromator~\cite{Shastri2004}, a vertically-focusing sawtooth lens~\cite{Shastri2007}, a sample-detector distance of $6.12$ m, and a finely pixelated photon-counting area detector with good quantum efficiency at 52 keV (Lynx 1800 detector manufactured by Amsterdam Scientific Instruments, with a Medipix3 chip and a 55 $\mu$m pixel pitch).
With this arrangement, a $1.5 \times 50$ $\mu$m beam was delivered to the sample.
The detector was mounted along the far wall of the experimental enclosure with the two degrees of freedom (radial distance $R$ and azimuthal angle $\eta$) needed to reach Bragg peaks with a scattering angle of $\sim 15^\circ$.
With this arrangement, the three-dimensional diffraction pattern in the vicinity of a $\left\langle 111 \right\rangle$ Bragg peak from the gold nanoparticle was measured using the high-precision sample manipulation system at the 1-ID-E end station~\cite{Benda2016}.
The sample was oriented such that the diffracted exit beam passed through the silicon substrate en route to the detector.
The rotation angle $\omega$ of the sample was incremented about the vertical axis in fine steps of $\Delta \omega = 0.01^\circ$ spanning a total of $0.6^\circ$ about the angle of the Bragg peak maximum. At each sample angle, area detector images of the diffraction patterns were collected with an exposure time of 600 seconds per angle.
The three-dimensional intensity data acquired in this manner encoded the morphology and the lattice distortions of the diffracting crystal.
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{figure-1}
\caption{
\textbf{(a)} Simplified schematic of Beamline 1-ID-E at the Advanced Photon Source, showing the essential components. Only the top sawtooth lens was used to provide focus.
See Supplementary Material, Section 1 for details on why this configuration was chosen.
\textbf{(b)} Central slice of the partially coherent 3D diffraction signal from the gold nanoparticle.
\textbf{(c)-(e)} Mutually perpendicular amplitude cross sections of the gold nanoparticle after phase retrieval.
\textbf{(f)} Lattice strain profile on the surface of the 3D nanoparticle.
The isosurface region with a seemingly sudden drop in strain is actually due to the 3D lighting effects that better highlight the contours of the object.
The arrow indicates the direction of horizontal transverse coherence of the X-ray beam (laboratory $X$).
\textbf{(g)} Profile of the Gaussian coherence function along the direction of the arrow shown in (f).
The half-width at half maximum is an estimate of the beam coherence length along the direction of interest.
}
\label{fig:expSchematic}
\end{figure*}
A 2D slice of the diffraction pattern around the Bragg peak is shown in Figure~\ref{fig:expSchematic}(b); the fringes are oversampled (as typically required in BCDI) but exhibit less than 100\% contrast, indicating partially coherent illumination.
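As a rough consistency check, the oversampling condition can be estimated from the geometry quoted in this paper. The sketch below is a back-of-the-envelope estimate (the "two pixels per fringe" criterion and the derived numbers are illustrative assumptions, not reported results): the fringe period at the detector is $\lambda R/D$ for a crystal of size $D$.

```python
import math

# Geometry quoted in the text (assumed values for this estimate).
E_keV = 52.0                   # photon energy
lam = 12.398 / E_keV * 1e-10   # wavelength in m (hc ~ 12.398 keV*Angstrom)
R = 6.12                       # sample-detector distance, m
pixel = 55e-6                  # detector pixel pitch, m

def fringe_period(D):
    """Fringe period at the detector for a crystal of size D (m)."""
    return lam * R / D

# Largest crystal still oversampled at >= 2 pixels per fringe:
D_max = lam * R / (2 * pixel)
print(f"max oversampled crystal size ~ {D_max * 1e6:.2f} um")

# The ~400 nm particle in the text is comfortably oversampled:
print(f"pixels per fringe at D = 400 nm: {fringe_period(400e-9) / pixel:.1f}")
```

With these assumed parameters the estimate comes out slightly above a micrometer, consistent with the sub-micron crystals imaged here.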
The 3D image of the scatterer was reconstructed using a BCDI phase retrieval approach that accounts for partial coherence, based on recently published methods ~\cite{Tran2005,Clark2011,Clark2012}.
Within the phase retrieval algorithms used, the partial coherence correction was achieved by modeling the measured diffraction as a convolution of the fully coherent diffraction intensity pattern with a blurring kernel~\cite{Tran2005,Clark2011,Clark2012}, which was chosen to be a multivariate Gaussian.
The phase retrieval recipe consisted of alternating cycles of the Gerchberg-Saxton error-reduction and hybrid input-output~\cite{Fienup1982} along with intermittent updating of the real-space support \emph{via} a shrinkwrap algorithm~\cite{Marchesini2003} and optimization of the six parameters of the unknown Gaussian blurring kernel (see Supplementary Material, Section 2 for the exact phase retrieval algorithm and kernel parameterization).
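The partial-coherence forward model used in this correction — coherent intensity convolved with a Gaussian blurring kernel — can be sketched schematically. This is a toy illustration of the convolution model only (object, kernel width, and array sizes are arbitrary), not the reconstruction code used in this work; here the circular convolution is applied via the convolution theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D "crystal": a compact complex-valued support with weak random phase.
obj = np.zeros((128, 128), dtype=complex)
obj[56:72, 52:76] = np.exp(1j * 0.3 * rng.standard_normal((16, 24)))

# Fully coherent far-field intensity: |FT(object)|^2.
I_coh = np.abs(np.fft.fft2(obj))**2

# Gaussian blurring kernel (here isotropic; the paper fits an anisotropic
# multivariate Gaussian). Its Fourier transform exp(-2 pi^2 sigma^2 f^2)
# implements a normalized circular convolution.
sigma = 1.5
f = np.fft.fftfreq(128)
fy, fx = np.meshgrid(f, f, indexing='ij')
K = np.exp(-2 * (np.pi * sigma)**2 * (fx**2 + fy**2))
I_pc = np.real(np.fft.ifft2(np.fft.fft2(I_coh) * K))

# Blurring conserves total intensity but washes out fringe contrast.
print(np.isclose(I_coh.sum(), I_pc.sum()))
print(I_pc.max() < I_coh.max())
```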
This approach resulted in a complex-valued real-space image of the diffracting crystal $\boldsymbol{\rho}$ that encodes information about lattice displacement distribution within the particle.
Specifically, the phase of $\boldsymbol{\rho}$ is related to the projection of the spatial distribution of atomic lattice displacement perturbations $\mathbf{u}$ along a reciprocal lattice vector $\mathbf{Q}$~\cite{Robinson2009}, such that: $\boldsymbol{\rho} = \left|\boldsymbol{\rho}\right| \exp\left(2\pi i \mathbf{u} \cdot \mathbf{Q}\right)$.
Via the phase of $\boldsymbol{\rho}$ (\emph{i.e.} $\angle{\boldsymbol{\rho}}$), one can obtain a component of the displacement field: $u_{111} = \angle{\boldsymbol{\rho}} / | 2\pi \mathbf{Q}_{111} |$.
Subsequently, the component of the strain tensor along $\mathbf{Q}_{111}$ can be determined at a location $x_{111}$, by computing the partial derivative $\partial u_{111}/\partial x_{111}$.
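The phase-to-strain chain just described amounts to a simple per-voxel post-processing step. A minimal numerical sketch follows (array names, the voxel size, and the use of gold's $d_{111}\approx 2.355$~\AA\ are illustrative assumptions; real reconstructions also require phase unwrapping, which this small-phase toy avoids):

```python
import numpy as np

# Phase convention from the text: rho = |rho| exp(2*pi*i u.Q).
dx = 10e-9                     # voxel size along x_111, m (assumed)
d111 = 2.355e-10               # gold (111) lattice spacing, m
Q111 = 1.0 / d111              # |Q_111| in the 2*pi-free convention

# Toy object: uniform amplitude with a linear phase ramp along axis 0,
# i.e. a constant lattice strain of 1e-4.
n = 64
x = np.arange(n) * dx
u_true = 1e-4 * x              # displacement field u_111(x), m
rho = np.exp(2j * np.pi * u_true * Q111)[:, None] * np.ones((n, n))

# Displacement from the phase, then strain as its spatial derivative.
u111 = np.angle(rho) / (2 * np.pi * Q111)
strain = np.gradient(u111, dx, axis=0)
print(strain.mean())           # recovers the input strain, 1e-4
```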
Figure~\ref{fig:expSchematic}(c)-(e) denote mutually perpendicular amplitude cross-sections of the reconstructed nanoparticle (\emph{i.e.} $\left|\boldsymbol{\rho}\right|$), obtained from phase retrieval.
Figure~\ref{fig:expSchematic}(f) shows a 3D image of the particle with the color scale representing the surface variations of the strain component $\partial u_{111}/\partial x_{111}$.
The particle itself has a maximum diameter of about 400 nm, displays distinct facets, and has relatively low levels of strain -- all characteristics typical of gold particles obtained by dewetting~\cite{Cha2016}.
The arrow in Figure~\ref{fig:expSchematic}(f) indicates the horizontal ($X$-) direction in the laboratory frame, and the profile of the real-space representation of the blurring kernel along this direction is shown in Figure~\ref{fig:expSchematic}(g).
We find that in this direction the half-width at half-maximum of the Gaussian kernel is about 350 nm. This length is a rough estimate of the 50\% coherence threshold in that dimension.
This result establishes a baseline for the HE-BCDI methodology and suggests that with the X-ray optical setup employed in this work, crystallites with diameters corresponding roughly to the 50\%-coherence threshold of the beam can be imaged with HE-BCDI. Crucially, this HE-BCDI reconstruction of a symmetrically faceted, relatively strain-free nanoparticle provided a sufficiently well-constrained measurement of the blurring kernel that could be used to model the partial coherence of the X-ray beam for subsequent HE-BCDI measurements of individual grains that in general displayed more disordered fringe patterns owing to potentially greater extents of lattice strain and less regular faceting.
\section{Integrated ff-HEDM and HE-BCDI measurements}
\label{SS.hedmbcdigrain}
To demonstrate the integration of the HE-BCDI measurement modality described above with standard HEDM measurements, we utilized a polycrystalline gold film~\cite{Yau2017} which was deposited by electron beam evaporation on an amorphous carbon substrate.
The film thickness was around 300 nm, with a characteristic in-plane grain size of 400 nm.
A schematic showing the different detectors used in the combined ff-HEDM and HE-BCDI experiment is shown in Figure~\ref{fig:HEDMBCDI}.
The sample was oriented with the substrate surface initially normal to the incident beam, and first measured with ff-HEDM.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure-2}
\caption{
Schematic of the multi-scale measurement technique with the HEDM and BCDI components.
The ff-HEDM measurement is made at a scattering distance of $1$ m while the HE-BCDI measurement is made at a scattering distance of $6$ m.
The detector images respectively denote the relatively coarse resolution of the Bragg peaks from the grains in the sample (ff-HEDM), and the finer details of the fringe pattern in the vicinity of one such peak.
We note that the beam dimensions shown here only correspond to the ff-HEDM measurement.
The beam size was actually set to $1.5 \times 50~\mu$m for all HE-BCDI measurements.
}
\label{fig:HEDMBCDI}
\end{figure}
The line-focused X-ray beam obtained with the vertical saw-tooth focusing lens had a footprint on the sample of $1.5 \times 100$ $\mu$m, with an X-ray energy of 52 keV, as before.
The ff-HEDM measurement was performed by rotating the sample about the $Y$ axis through $360^\circ$ using a high-precision Aerotech rotation stage (designated $\omega$ in Figure~\ref{fig:HEDMBCDI}) that scanned the sample in angular increments of $0.01^\circ$, resulting in 36,000 acquired diffraction images.
Bragg peaks were measured during this scan with a bank of four GE-41RT detectors positioned $\sim 1$ m from the sample with $200~\mu$m pixels that subtended complete Debye-Scherrer rings out to scattering angles of up to $\sim 15^\circ$.
Over the course of the entire scan, tens of thousands of diffraction peaks were collected, corresponding to the $\left\langle 111 \right\rangle$, $\left\langle 200 \right\rangle$, $\left\langle 220 \right\rangle$, and $\left\langle 311 \right\rangle$ Bragg reflections of the face-centered cubic gold lattice.
A visualization of a subset of these measured Bragg peaks is shown in Figure~\ref{fig:zoomins}, in which the distinct Debye spheres of each order of Bragg diffraction from the illuminated grains are denoted by differently colored markers.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure-3}
\caption{
Scatter plot of the reciprocal lattice points corresponding to the Bragg peaks from the illuminated grains, acquired during the ff-HEDM measurement.
Shown here is a subset of the observed Bragg reflections, up to the $\left\langle 311 \right\rangle$ reflection for fcc crystals.
The bold markers denote a Friedel pair of $\left\langle 111 \right\rangle$ reflections from a single grain.
Also shown are the ``zoomed-in'' central slices of the measured partially coherent diffraction signal in the vicinity of these reflections (reoriented to emphasize the centrosymmetry about the line denoted by the two arrows, corresponding to reciprocal lattice vector directions $\pm \mathbf{Q}_{111}$ respectively).
}
\label{fig:zoomins}
\end{figure}
The HEDM software suite MIDAS~\cite{Sharma2012,Sharma2012a} was used to automatically identify and index all of the peaks in the measured detector images and to map each peak back to one of the 6768 grains within the illuminated beam volume. This indexing process, detailed in Ref.~\cite{Bernier2011}, is standard procedure for ff-HEDM experiments.
The quantities $\omega$ and $\eta$ were determined up to an uncertainty of $0.01^\circ$.
This provided angular information of sufficient precision in order to orient the sample and the ASI detector at 6.12 m from the sample for a HE-BCDI measurement from a chosen grain in the polycrystal (described in the next section).
We note that both ff-HEDM and HE-BCDI measurements were made sequentially with the same arrangement of X-ray optics and goniometer hardware, differing only in the sample-detector distance and the detector module used.
This demonstration points towards the potential of smooth physical integration of ff-HEDM and HE-BCDI, and also indicates the possibility of integration with other high energy microscopy imaging modes, including near field HEDM (nf-HEDM)~\cite{Suter2006} and diffraction contrast tomography (DCT)~\cite{Ludwig2008}.
Physically realizing this unification requires significant development that is currently ongoing, both in terms of improving the spatial resolution of near-field HEDM detectors and line-focusing x-ray optics to reach sub-micrometer length scales (thereby overlapping with the demonstrated length regime of HE-BCDI), and in terms of improving synchrotron storage ring technology that will enable HE-BCDI of ten-micron-scale grains (thus intersecting with current nf-HEDM capabilities).
\section{HE-BCDI of a Bragg peak Friedel pair from a single grain}
\label{SS:grainhebcdi}
The grain orientation and Bragg peak indexing information from the ff-HEDM measurement was used to measure and image two different $\left\langle 111\right\rangle$ Bragg peaks from the same grain with HE-BCDI. Specifically, we sought to demonstrate unambiguously that the selected peaks indeed originated from the same grain by reconstructing HE-BCDI data from a Friedel pair of Bragg peaks ($\left[111\right]$ and $\left[\bar{1} \bar{1} \bar{1}\right]$) which are centrosymmetric about the origin of reciprocal space.
Since Friedel pairs encode equivalent structural information due to their centrosymmetry (see Supplementary Material, section 3), the subsequent HE-BCDI reconstructions were expected to show equivalent strain profiles.
The absolute reciprocal space positions of the chosen Friedel pair reflections (obtained via ff-HEDM) are shown in Figure~\ref{fig:zoomins}, along with 2D images of the fringe detail about each of these peaks, obtained with the HE-BCDI detector.
It is clear that the Friedel pair of peaks are located in centrosymmetric positions about the reciprocal space origin $\mathbf{Q} = 0$, and that the ``zoomed-in'' view of these peaks provided by the HE-BCDI detector images also shows consistent centrosymmetric fringe patterns.
This pair of peaks was chosen in order to ensure successful HE-BCDI reconstructions based on the fact that the diffracted signal was relatively strong from the originating grain, and the volume of reciprocal space in the vicinity of the peak did not overlap with Bragg peaks from other grains.
The HE-BCDI measurements were similar to the case of the isolated gold nanoparticle ($60$ angular steps of size $\Delta \omega = 0.01^\circ$, each with an exposure of $600$ seconds).
Image reconstruction was done using the same phase retrieval approach as above, accounting for partial coherence by utilizing the Gaussian blurring kernel determined from the isolated crystal as a starting guess and allowing further minor refinement of the Gaussian parameters during phase retrieval.
The 3D image reconstructions from both centrosymmetric Bragg peaks are shown in Figure~\ref{fig:strain_cs} with surface-strain coloration. Also shown are cuts through the interior of the grain that show the spatial distribution of the strain component $\partial u_{111}/\partial x_{111}$.
We see that the center of the grain is relatively strain free, while the regions of relatively significant strain are seen to be along the interfaces with neighboring grains (\emph{i.e.} the grain boundaries).
In comparing the 3D image reconstructions, we immediately recognize that the
two images are of the same grain,
indicating that the two Bragg peaks that were measured from among tens of thousands emanating from the illuminated sample were indeed a Friedel pair.
The morphology, orientation, and strain state of the two images are very similar, as expected, and small differences can be ascribed to factors such as the low signal-to-noise ratios in these measurements and to the inherently different sampling of 3D reciprocal space in the two measurements.
Importantly, the HE-BCDI reconstruction gave access to the lattice distortion in the interior of this grain with an approximate spatial resolution of $47.5$ nm (see Section 4 of the Supplementary material), pointing to the broader potential of extending the HEDM technique (whose spatial resolution is about $1.5~\mu$m) with coherent diffraction imaging.
This measurement demonstrates the efficacy of using information obtained from ff-HEDM to efficiently execute a complementary HE-BCDI measurement that resolves intra-granular strain fields critical to the behavior and properties of polycrystalline materials.
Our work described a demonstration involving two
equivalent HE-BCDI images of a single grain obtained from centrosymmetric Bragg peaks. However, the full power of the approach will be the ability to measure many more non-equivalent Bragg peaks from a given grain in order to spatially resolve the full $6$-component strain tensor, as has been demonstrated for isolated nanocrystals at lower energies ($< 10$ keV)~\cite{Newton2010,Hofmann2017}.
In this regard, utilizing high-energy X-rays is particularly appealing because it presents convenient access to higher-order Bragg peaks (as shown in Figure \ref{fig:zoomins}), which provide higher strain resolution and which are difficult or impossible to access at lower beam energies.
As a step in this direction,
we present an image reconstruction of another grain in the polycrystal film, obtained from a HE-BCDI measurement from a $\left\langle 200 \right\rangle$ Bragg peak (see Section 5 of the Supplementary Material), that
represents another benchmark in the eventual realization of integrated multi-scale measurements with high-energy coherent X-rays.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure-4}
\caption{
\textbf{(a)} Reconstructed image of the grain of interest from the polycrystalline film, obtained from the HE-BCDI data set corresponding to the $\left[111\right]$ reflection.
The associated strain field component is superposed on its surface.
\textbf{(b)} The same grain imaged from the data set corresponding to the $\left[\bar{1}\bar{1}\bar{1}\right]$ reflection.
As with Figure~\ref{fig:expSchematic}(f), the regions of seemingly sudden drops in strain are actually due to 3D lighting effects.
\textbf{(c)-(e)} Strain profile cross-sections of the grain reconstruction shown in (a), through the center of the grain.
}
\label{fig:strain_cs}
\end{figure}
\section{Conclusions}
\label{S:conclusions}
We have demonstrated key steps towards the realization of a fully integrated multi-modal high energy diffraction microscopy capability at synchrotron light sources.
Such a capability will enable fundamentally new in-situ structural imaging studies of polycrystalline materials over four decades of length scale, in deeply embedded environments, and with strain sensitivity as fine as $1 \times 10^{-5}$.
High-throughput implementation of such an approach, while impractical today due to the very limited coherence of high energy X-rays at today's third generation light sources, will be achievable at fourth generation synchrotrons coming online in the near future that promise up to three orders of magnitude increase in coherent flux within a wide range of X-ray energies.
In this context, HE-BCDI measurements accelerated to few-minutes time scales integrated with HEDM will be one of the many possible means by which to capitalize on the much-improved coherence properties of these fourth-generation sources.
Finally, we note that the HE-BCDI measurements envisioned in such future experiments could differ from the approach presented in this paper, which consisted of a standard-practice variety of BCDI (\emph{i.e.} diffraction pattern oversampling ensured via long sample-to-detector distance, followed by reconstruction with standard algorithms).
In particular, recent work has been devoted to developing new strategies for HE-BCDI that relax the requirement for the very long sample-to-detector distance by employing novel hardware~\cite{Pedersen2018,Pedersen2018a}, phase retrieval~\cite{Maddali2018a}, and signal processing solutions~\cite{Maddali2018} that will significantly aid in the realization of coherence-aided multi-modal high energy materials microscopy methods.
\section{Acknowledgements}
\label{S:acknowledge}
Conceptualization of the high-energy BCDI experiment and its integration with the far-field HEDM experimental modality, as well as subsequent phase retrieval and data analysis was supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357.
The experimental demonstration of the method was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division.
This research uses the resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.
\section{Author contributions}
\label{S:authcontrib}
The manuscript was written by SM and SOH.
The measurements were carried out by SM, SOH, JSP, HS and PK.
SS developed the high-energy X-ray optics and optimized it for coherent diffraction measurements.
The gold film was fabricated by MJH.
The gold nano-particle sample was provided by RH.
The phase retrieval reconstructions and subsequent strain analysis were done by SM with help from SOH, RH, YN, JSP and PK.
All authors contributed to the refining of the manuscript.
\section{Introduction}
The resonance structure with mass $M=4433\pm4\pm2$\,MeV and width
$\Gamma=45^{+18+30}_{-13-13}$ MeV in the charged quarkonium system
$\pi^{\pm}\psi'$ was found by the Belle Collaboration
\cite{Belle:2007}. On the other hand, the BABAR Collaboration \cite{BABAR:2008} did not see significant evidence for a $Z(4430)^-$ signal in any of the processes investigated, neither
in the total $J/\psi \pi^-$ or $\psi(2S)\pi^-$ mass
distribution, nor in the corresponding distributions for the
regions of $K\pi^-$ mass for which observation of the $Z(4430)^-$
signal was reported. Several mechanisms have been proposed to
explain the properties of the new resonance
\cite{Rosner:2007mu}-\cite{Bugg:2007vp}. In particular, Rosner
\cite{Rosner:2007mu} pointed out the nearby threshold of the $D_1(2420)\bar{D}^*(2010)$ state and suggested a mechanism of
production of $\pi\psi'$ in the decay $B\rightarrow KZ(4430)$,
$Z(4430)\rightarrow\pi^+\psi'$. The proximity of the threshold
invokes a possible near-threshold singularity, either due to a
pole of the amplitude (virtual or real loosely coupled bound state
of $D_1D^*$) \cite{Rosner:2007mu}-\cite{Meng:2007fu} or else due
to the threshold cusp \cite{Bugg:2007vp}.
In this letter we are trying to understand whether the Z(4430)
resonance can be due to the pseudoresonance mechanism known for the $\pi d$ system \cite{Simonov_Velde:1978}. We analyze the structure of
the scattering amplitude for the reaction $\pi \psi'\rightarrow\pi
\psi'$ near the $D_1D^*$ threshold in the same way as was done
for the $\pi d$ system near the $\triangle N$ resonance. It is well
known that the peak in the cross section for pion-nucleon ($\pi
N$) scattering around $T_{\pi}=180$ MeV is associated with the
$\triangle(1232)$. An analogous peak is observed in the cross
section for pion-deuteron ($\pi d$) scattering near $\triangle N$
threshold, shifted slightly in position and broadened with respect
to the $\pi N$ peak (see Figure \ref{fig.1}). Therefore, one can
not exclude that the Z(4430) resonance, which lies near the $D_1D^*$ threshold, could be connected to the $D_1(2420)$ resonance, just as the $\pi d$ peak is connected to the $\triangle(1232)$. The $D_1(2420)$ state with
mass $M=2420^{+1+2}_{-2-2}$ MeV and width
$\Gamma=20^{+6+3}_{-5-3}$ MeV was observed in
$D^{*\pm}(2010)\pi^{\mp}$ invariant mass distribution. Therefore, the dynamical picture of pion-charmonium scattering in our approach is as follows: the charmonium undergoes a p-wave, off-energy-shell decay to $D^*\bar{D}^*$, after which $\pi D^*$ scattering proceeds through the $D_1(2420)$ resonance. The diagram corresponding to this reaction
is shown in Figure \ref{fig.2}.
In our paper, we first calculate the scattering amplitude for the $\pi d$ system using a single Breit-Wigner resonance for
$\triangle(1232)$ and obtain a good description of $\triangle N$
resonance. Then we apply the same formulas to $\pi \psi'$
scattering in which the vertex of the $\psi'\rightarrow
D^*\bar{D}^*$ decay is calculated in the many channel formalism
developed in \cite{Simonov_Di-pion_decays:2007}. For simplicity
we did not include rescattering terms, which slightly shift
the peak in the $\pi d$ case.
We pay special attention to the influence of the different
properties of the deuteron and the charmonium family, first of all
their different sizes: the deuteron is a large object with size
$R_d\sim4.3$ fm, while the charmonium $\psi'$ state has a size of
only $R_{\psi'}\sim0.5$ fm. The analysis of the results and a
discussion are given in the last section.
\section{The amplitude for $\pi d$ system}
For the sake of simplicity we neglect any spin dependence and
write the single-scattering non-relativistic term for the $\pi d$
amplitude:
\begin{equation}\label{M}
M(\vec{k'},\vec{k})=\int\frac{d^3p}{(2\pi)^3}~\phi^*(\vec{p}-\frac{1}{2}\vec{k'})~M_{\pi
N}(\vec{x'},\vec{x},W_1)~\phi(\vec{p}-\frac{1}{2}\vec{k})
\end{equation}
where
$\sqrt{s}=\sqrt{\vec{k}^2+m_{\pi}^2}+\sqrt{\vec{k}^2+m_{d}^2}$ is
the total invariant energy of the $\pi d$ system. In (\ref{M}) the
$\pi N$ amplitude depends on
\begin{eqnarray}\label{}\nonumber
&&\vec{x}=\vec{k}-\eta(p)\vec{p}\quad \vec{x'}=\vec{k'}-\eta(p)\vec{p}\\
\nonumber&&\eta(p)=\frac{\sqrt{\vec{p}^2+m_{\pi}^2}}{\sqrt{\vec{p}^2+m_{\pi}^2}+m_{N}}
\end{eqnarray}
and on the total invariant $\pi N$ energy $W_1$
\begin{equation}\label{}
W_1=\sqrt{\left(\sqrt{s}-\sqrt{\vec{p}^2+m_{N}^2}\right)^2-\vec{p}^2}.
\end{equation}
The $\pi N$ amplitude is truncated to include only the
dominant resonant p wave, in the following way:
\begin{equation}\label{}
M_{\pi N}=\frac{64}{3}\pi W_1 \left(\vec{x'}\cdot
\vec{x}\right)\left(-\frac{\Gamma_R}{2q}\right)
\frac{1}{W_1-M_R+\frac{1}{2}i\Gamma_R}
\end{equation}
where $q$ is the momentum of the $\pi N$ system
\begin{equation}\label{}
q=\frac{\sqrt{(W_1^2-(m_{N}+m_{\pi})^2)(W_1^2-(m_{N}-m_{\pi})^2)}}{2W_1}.
\end{equation}
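As a quick numerical check of these formulas (an aside of ours, with standard assumed values for the masses and the $\triangle(1232)$ parameters, none of which are quoted in the text), the following sketch evaluates $q$ at the resonance position and locates the peak of the resonant Breit--Wigner factor.

```python
import math

def cm_momentum(W, m1, m2):
    """Center-of-mass momentum of a two-body system with total energy W (GeV)."""
    return math.sqrt((W**2 - (m1 + m2)**2) * (W**2 - (m1 - m2)**2)) / (2.0 * W)

# Assumed standard values in GeV (not quoted in the text).
m_N, m_pi = 0.9383, 0.1396
M_R, Gamma_R = 1.232, 0.120

# The familiar Delta(1232) decay momentum, about 0.227 GeV.
q_at_peak = cm_momentum(M_R, m_N, m_pi)

def bw_mod2(W1):
    """|1/(W1 - M_R + i*Gamma_R/2)|^2, the resonant factor of M_piN."""
    return 1.0 / ((W1 - M_R)**2 + (Gamma_R / 2.0)**2)

# The resonant factor peaks at W1 = M_R, as expected.
grid = [1.10 + 0.002 * k for k in range(151)]
W_peak = max(grid, key=bw_mod2)
```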
The deuteron wave function contains the deuteron pole:
\begin{equation}\label{}
\phi(\vec{p})=\frac{\sqrt{\alpha}}{(p^2+\alpha^2)(p^2+c^2)}
\end{equation}
with $\alpha=\sqrt{m_N\varepsilon_D}$, $\varepsilon_D$ being the
deuteron binding energy and $c\approx0.4$ GeV.
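The pole parameter $\alpha$ already encodes the large deuteron size quoted in the introduction. As a sketch (assuming standard values $m_N\approx0.938$ GeV and $\varepsilon_D\approx2.22$ MeV, which the text does not quote), converting $1/\alpha$ to femtometers recovers $R_d\sim4.3$ fm.

```python
import math

# Assumed standard inputs in GeV (not quoted in the text).
m_N = 0.9383            # nucleon mass
eps_D = 2.2246e-3       # deuteron binding energy

alpha = math.sqrt(m_N * eps_D)   # pole parameter, ~0.046 GeV
c = 0.4                          # second range parameter from the text

hbar_c = 0.19733                 # GeV*fm, converts natural units to fm
R_d = hbar_c / alpha             # characteristic deuteron size, ~4.3 fm

def phi(p):
    """The deuteron wave function phi(p) of the text (p in GeV), up to normalization."""
    return math.sqrt(alpha) / ((p**2 + alpha**2) * (p**2 + c**2))
```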
\begin{figure}
\includegraphics[angle=270,width=0.45\textwidth]{Figure1.eps}
\caption{The squared $\pi d$ scattering amplitude.}\label{fig.1}
\end{figure}
One can see in Figure \ref{fig.1} that the forward scattering
$(k=k')$ amplitude has quite a good resonance form, which agrees
with the experimental results (see, for example,
\cite{Thomas:1979xu}).
\begin{figure}
\center{\includegraphics[angle=270,width=0.40\textwidth]{Figure2.eps}}
\caption{Representation of the single-scattering $\pi\psi'$ diagram.}\label{fig.2}
\end{figure}
\section{The amplitude for $\pi$ $\psi'$ system}
The $\phi(\vec{p})$ in (\ref{M}) for the $\pi\psi'$ system includes
the propagator and the overlap integral of the process
$\psi'\rightarrow D^*\bar{D}^*$:
\begin{equation}\label{}
\phi(\vec{p})=\frac{J(\vec{p})M_\omega}{E_{\psi'}-E_{D^*}-E_{\bar{D}^*}}=
\frac{J(\vec{p})M_\omega}{M_{\psi'} - 2M_{D^*} - \frac{p^2}{M_{D^*}}}
\end{equation}
where $J(\vec{p})$ is an overlap matrix element between the wave
function $\Psi(nS)$ of the $n$-th charmonium state and the wave
functions $\psi(1S)$ of the $D^*$ ($\bar{D}^*$) mesons, which were
derived in the framework of the many-channel formalism with decay
channel coupling \cite{Simonov_Di-pion_decays:2007}:
\begin{eqnarray}\label{J(p)}
J(\vec{p})&=&\int \bar{y}_{123}\frac{d^3
q}{(2\pi)^3}~\Psi(nS;c\vec{p}+\vec{q})~\psi(1S;\vec{q})~\psi(1S;\vec{q})
\end{eqnarray}
\begin{figure}
\centering
\includegraphics[angle=270,width=0.45\textwidth]{Figure3.eps}
\caption{The squared $\pi\psi'$ scattering amplitude.}\label{fig.3}
\end{figure}
Here $c=\frac{\Omega}{\Omega+\omega}$, where $\Omega$ and $\omega$
are the energies of the heavy and light quarks in the $D^*$ meson,
and $\bar{y}_{123}$ is defined by the Dirac traces of the
amplitude, given in the appendix.
In eq.~(\ref{J(p)}), $\Psi(nS)$ and $\psi(1S)$ are series of
oscillator wave functions fitted to realistic wave functions,
which we obtain from the solution of the Relativistic String
Hamiltonian described in \cite{Simonov_Dub_Kaid:1993fk},
\cite{Danilkin:2008bi}.
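To see the qualitative shape of the overlap, here is a simplified sketch of ours: we replace each fitted series of oscillator functions by a single ground-state Gaussian with assumed ranges $B$ and $b$, so this is an illustration, not the calculation used in the paper. For pure Gaussians the $p$-dependence of $J(\vec{p})$ is itself Gaussian, $J(p)/J(0)=\exp(-c^2p^2/(2B^2+b^2))$, which the snippet verifies against a direct numerical integration.

```python
import math

# Toy ranges in GeV (assumed for illustration only).
B, b, c = 0.5, 0.4, 0.7

def integrand(px, qx, qy, qz):
    """Psi(c*p + q) * psi(q)^2 for ground-state Gaussians, with p along the x axis."""
    shifted = (c * px + qx)**2 + qy**2 + qz**2
    return math.exp(-shifted / (2.0 * B**2)) * math.exp(-(qx**2 + qy**2 + qz**2) / b**2)

def J(px, half_width=2.5, n=40):
    """Midpoint-rule estimate of the 3D overlap integral (constant factors dropped)."""
    h = 2.0 * half_width / n
    pts = [-half_width + (k + 0.5) * h for k in range(n)]
    return sum(integrand(px, x, y, z) for x in pts for y in pts for z in pts) * h**3

p = 0.3
ratio_numeric = J(p) / J(0.0)
ratio_analytic = math.exp(-c**2 * p**2 / (2.0 * B**2 + b**2))
```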
Figure \ref{fig.3} shows the squared $\pi\psi'$ scattering
amplitude averaged over vector polarizations,
$\frac{1}{3}\sum\limits_{ii'}|M|^2$. As can be seen, the structure
has too large a width and a peak located near the energy
$\sqrt{s}\sim4.7$ GeV, and so cannot be associated with the Z(4430).
\section{Discussion}
An important distinction between the $\pi d$ and $\pi \psi'$
systems is the difference between the deuteron and charmonium
sizes; the p-wave character of the $\psi'\to D^*\bar{D}^*$ decay
is also significant. It is interesting that we can obtain the
desired resonance structure if $\psi'$ has an admixture of a
near-threshold state with size $R\sim5$ fm due to the coupling to
the $D^*\bar{D}^*$ channel. In this case the width turns out to be
smaller, $\Gamma\sim60$~MeV, and the peak is shifted to the
position $\sqrt{s}\sim4.5$ GeV. This result is shown in
Figure \ref{fig.4}.
In our paper we have used a dynamical picture of pion interaction
with heavy quarkonia corresponding to the diagram in Figure
\ref{fig.2}. Our analysis shows that there is no resonance near
the energy $\sqrt{s}\sim4430$ MeV in the $\pi\psi'$ system, unless
an admixture of a large-size near-threshold state is taken into
account.
\begin{figure}
\centering
\includegraphics[angle=270,width=0.45\textwidth]{Figure4.eps}
\caption{The squared $\pi\psi'$ scattering amplitude in the case of a large charmonium size.}\label{fig.4}
\end{figure}
We are grateful to Yu.A.~Simonov for useful discussions. This work
is supported by Grant NSh-4961.2008.2. One of the authors
(I.V.D.) is also supported by grants of the {\it Dynasty
Foundation} and the {\it Russian Science Support Foundation}.
\section*{Introduction}
A group is of \emph{type $\operatorname{F}_m$} if it has a classifying space with compact $m$-skeleton. These \emph{finiteness properties} of groups are natural generalizations of finite generation ($\operatorname{F}_1$) and finite presentability ($\operatorname{F}_2$). In 1987 and 1988, Bieri, Neumann, Strebel and Renz introduced a family of geometric invariants $\Sigma^m(G)$ ($m\in\mathbb{N}$), defined whenever $G$ is of type $\operatorname{F}_m$, which reveal a wealth of information about $G$ and $\operatorname{Hom}(G,\mathbb{R})$. However, since the $\Sigma^m(G)$ contain so much information, e.g., they serve as a complete catalog of precisely which subgroups of $G$ containing $[G,G]$ have which finiteness properties, they are in general quite difficult to compute.
Thanks to this difficulty, there are very few groups whose higher $\Sigma$-invariants are completely known.
If $\operatorname{Hom}(G,\mathbb{R})$ is trivial then all $\Sigma^m(G)$ are empty, so in that case the question is uninteresting, e.g., for groups with finite abelianization. Focusing on groups for which $\operatorname{Hom}(G,\mathbb{R})$ is sufficiently large, the only really robust family of groups for which the question of all the higher $\Sigma$-invariants is 100\% solved is the family of right-angled Artin groups, done independently by Bux--Gonzalez \cite{bux99} and Meier--Meinert--VanWyk \cite{meier98}. Other interesting families of groups for which there are substantial partial results about the higher $\Sigma$-invariants include Artin groups \cite{meier01}, solvable $S$-arithmetic groups \cite{bux04}, and metabelian groups \cite{meinert96,meinert97,kochloukova99}. The question of the higher $\Sigma$-invariants of a direct product, in terms of the invariants of the factors, is also solved \cite{bieri10a}.
The generalized Thompson groups $F_{n,\infty}$ ($n\ge2$), \textbf{which we will just denote by $F_n$ from now on}, can be quickly defined by their standard presentations
$$F_n\cong\gen{x_i~(i\in\mathbb{N}_0)\mid x_j x_i = x_i x_{j+(n-1)} \text{ for all }i<j}\text{.}$$
These groups were first introduced by Brown in \cite{brown87} as an ``$F$-like'' version of the Higman--Thompson groups $V_{n,r}$. They generalize Thompson's group $F$, namely $F=F_2$. The $F_n$ are all of type $\operatorname{F}_\infty$ \cite{brown87}. The group $F_n$ can also be described as the group of orientation preserving piecewise linear self homeomorphisms of $[0,1]$ with slopes powers of $n$ and breakpoints in $\mathbb{Z}[1/n]$. These groups are interesting for many reasons; from the perspective of $\Sigma$-invariants they are interesting for instance since every proper quotient of $F_n$ is abelian \cite{brown87,brin98}, and so the $\Sigma$-invariants reveal the finiteness properties of \emph{every} normal subgroup of $F_n$. Also, $F_n$ abelianizes to $\mathbb{Z}^n$, and so homomorphisms to $\mathbb{R}$ become more and more prevalent as $n$ goes up. In contrast, the ``type $V$'' Higman--Thompson groups $V_{n,r}$ are virtually simple \cite{higman74}, so have no non-trivial maps to $\mathbb{R}$ (and their $\Sigma$-invariants are empty).
The main result of the present work is a complete computation of $\Sigma^m(F_n)$ for all relevant $m$ and $n$. The previously known results are as follows. First, $\Sigma^1(F_2)$ was computed in the original Bieri--Neumann--Strebel paper \cite{bieri87}. In \cite{bieri10}, Bieri, Geoghegan and Kochloukova computed $\Sigma^m(F_2)$ for all $m$. In the other ``variable'', $n$, Kochloukova computed $\Sigma^2(F_n)$ for all $n$ in \cite{kochloukova12}. The techniques used there however proved difficult to extend to the cases when $n$ and $m$ are both greater than $2$. Our approach differs from those in \cite{bieri10} and \cite{kochloukova12}. We look at the action of $F_n$ on a proper $\operatorname{CAT}(0)$ cube complex $X_n$, and use topological and combinatorial tools to compute all the $\Sigma^m(F_n)$. This builds off work of the author and Witzel, in \cite{witzel15}, where the $\Sigma^m(F_2)$ computations from \cite{bieri10} were redone using such an action of $F=F_2$.
Taking Kochloukova's computation of $\Sigma^2(F_n)$ for granted, our main result can be phrased succinctly as:
\begin{mainresult}
For any $n,m \ge 2$, we have $\Sigma^m(F_n)=\Sigma^2(F_n)$.
\end{mainresult}
Note that for any group $G$ of type $\operatorname{F}_\infty$ one always has
$$\Sigma^1(G)\supseteq \Sigma^2(G) \supseteq \cdots \supseteq \Sigma^\infty(G) \text{.}$$
A more detailed description of $\Sigma^m(F_n)$ requires a lot of terminology and notation: we show that for $2\le n,m$, if $\chi=a\chi_0+c_0\psi_0+\cdots+c_{n-3}\psi_{n-3}+b\chi_1$ is a character of $F_n$ then $[\chi]$ fails to lie in $\Sigma^m(F_n)$ if and only if all $c_i$ are zero, and both $a$ and $b$ are non-negative. The reader will have to consult Section~\ref{sec:groups_and_chars} to see what all this means.
Computing $\Sigma$-invariants has historically proved difficult, and here one difficulty is in finding a way to realize an arbitrary character of $F_n$ as a height function on $X_n$. We do this by first introducing some measurements (``proto-characters'') on $n$-ary trees and forests, and extrapolating these to characters on $F_n$ and height functions on $X_n$. Once all the characters are cataloged, we use Morse theory and combinatorial arguments to compute all the $\Sigma^m(F_n)$. One key tool, Lemma~\ref{lem:popular_simplex}, is a new technique for proving higher connectivity properties of a simplicial complex, building off of recent work of Belk and Forrest.
A pleasant consequence of Theorem~A is the following, which is immediate from Citation~\ref{cit:bnsr_fin_props} below, plus the aforementioned fact that every proper quotient of $F_n$ is abelian.
\begin{corollary*}
Let $N$ be any normal subgroup of $F_n$. Then as soon as $N$ is finitely presented, it is already of type $\operatorname{F}_\infty$. \qed
\end{corollary*}
It should be noted that it is possible to find subgroups of $F_n$ that are finitely presented but not of type $\operatorname{FP}_3$, and hence not of type $\operatorname{F}_\infty$ \cite[Theorem~B]{bieri10}. However, the corollary says that for normal subgroups this cannot happen.
Another immediate application of Theorem~A comes from \cite{kochloukova14}, namely Kochloukova's Theorem~C in that paper holds for all $F_n$. In words, not only is the deficiency gradient of $F_n$ zero with respect to any chain of finite index subgroups with index going to infinity, but so too are all the higher dimensional analogs. This can be viewed as a strong finiteness property. For more details and background, see \cite{kochloukova14}.
\medskip
At the end of the present work, we discuss the problem of computing the higher $\Sigma$-invariants of the \emph{Houghton groups} $H_n$. The group $H_n$ is of type $\operatorname{F}_{n-1}$ but not $\operatorname{F}_n$ \cite[Theorem~5.1]{brown87}, so one can ask what $\Sigma^m(H_n)$ is for $1\le m\le n-1$. We compute large parts of each $\Sigma^m(H_n)$ (Theorem~\ref{thrm:houghton_pos}), using the action of $H_n$ on a $\operatorname{CAT}(0)$ cube complex, and conjecture that anything not accounted for by the theorem must lie outside $\Sigma^m(H_n)$ (Conjecture~\ref{conj:houghton_neg}). The conjecture holds for $m=1,2$, but it seems that proving it for higher $m$ will require new ideas.
\medskip
The paper is organized as follows. After some topological setup in Section~\ref{sec:prelims}, we define the groups $F_n$ and their characters in Section~\ref{sec:groups_and_chars}. In Section~\ref{sec:stein_farley} we discuss a $\operatorname{CAT}(0)$ cube complex $X_n$ on which $F_n$ acts, and in Section~\ref{sec:links_matchings} we provide a combinatorial model for links in $X_n$. In Section~\ref{sec:computations} we prove Theorem~A. Section~\ref{sec:houghton} is devoted to the Houghton groups $H_n$; we compute lower bounds on $\Sigma^m(H_n)$, and discuss the problem of trying to make this bound sharp.
\subsection*{Acknowledgments} I would first like to acknowledge Stefan Witzel, my coauthor on \cite{witzel15}; some of the tools used here (e.g., Lemma~\ref{lem:morse}) were developed there, and working on that paper spurred me to attempt this problem. I am grateful to Robert Bieri and Desi Kochloukova for first suggesting I try this problem and for helpful conversations along the way, and to Matt Brin for many fruitful discussions as well.
\section{Topological setup}\label{sec:prelims}
Let $G$ be a finitely generated group. A \emph{character} of $G$ is a homomorphism $\chi\colon G\to \mathbb{R}$. If $\chi(G)\cong\mathbb{Z}$, then $\chi$ is \emph{discrete}. The \emph{character sphere} of $G$, denoted $S(G)$, is $\operatorname{Hom}(G,\mathbb{R}) \cong \mathbb{R}^d$ with $0$ removed and modulo positive scaling, so $S(G)\cong S^{d-1}$, where $d$ is the rank of $G/[G,G]$. The \emph{Bieri--Neumann--Strebel (BNS) invariant} $\Sigma^1(G)$ of $G$ is the subset of $S(G)$ defined by:
\[
\Sigma^1(G) \mathrel{\mathop{:}}= \{[\chi]\in S(G)\mid \Gamma_{0\le\chi} \text{ is connected}\} \text{.}
\]
Here $\Gamma$ is the Cayley graph of $G$ with respect to some finite generating set, and $\Gamma_{0\le\chi}$ is the full subgraph spanned by those vertices $g$ with $0\le\chi(g)$. We write $[\chi]$ for the equivalence class of $\chi$ in $S(G)$.
The \emph{Bieri--Neumann--Strebel--Renz (BNSR) invariants}, also called \emph{$\Sigma$-invariants} $\Sigma^m(G)$ ($m\in\mathbb{N}\cup\{\infty\}$), introduced in \cite{bieri88}, are defined for groups $G$ of type $\operatorname{F}_m$. Our working definition for $\Sigma^m(G)$ is almost identical to Definition~8.1 in \cite{bux04}:
\begin{definition}[$\Sigma$-invariants]\label{def:bnsr}
Let $G$ be of type $\operatorname{F}_m$, and let $Y$ be an $(m-1)$-connected $G$-CW complex. Suppose $Y^{(m)}$ is $G$-cocompact and the stabilizer of any $k$-cell is of type $\operatorname{F}_{m-k}$. For $0\ne\chi\in\operatorname{Hom}(G,\mathbb{R})$, there is a \emph{character height function}, denoted $h_\chi$, i.e., a continuous map $h_\chi\colon Y\to\mathbb{R}$, such that $h_\chi(gy)=\chi(g)+h_\chi(y)$ for all $y\in Y$ and $g\in G$. Then $[\chi]\in\Sigma^m(G)$ if and only if the filtration $(Y^{t\le h_\chi})_{t\in\mathbb{R}}$ is essentially $(m-1)$-connected\footnote{Meaning that for all $t\in\mathbb{R}$ there exists $s\le t$ such that the inclusion $Y^{t\le h_\chi} \to Y^{s\le h_\chi}$ induces the trivial map in $\pi_k$ for all $k\le m-1$.}.
\end{definition}
Here $Y^{t\le h_\chi}$ is defined to be the full\footnote{A subcomplex is \emph{full} if as soon as it contains a simplex's vertices, it also contains the simplex.} subcomplex of $Y$ supported on those vertices $y$ with $t\le h_\chi(y)$. The only difference between our definition and \cite[Definition~8.1]{bux04} is that we use $Y^{t\le h_\chi}$ instead of $h_\chi^{-1}([t,\infty))$. However, the first filtration is essentially $(m-1)$-connected if and only if the second is, so our definition is equivalent.
As mentioned in \cite{bux04}, this definition of $\Sigma^m(G)$ is independent of the choices of $Y$ and $h_\chi$. We will sometimes abuse notation and write $\chi$ instead of $h_\chi$, for both the character and the character height function.
One important application of the $\Sigma$-invariants is:
\begin{cit}\cite[Theorem~1.1]{bieri10}\label{cit:bnsr_fin_props}
Let $G$ be a group of type $\operatorname{F}_m$ and $N$ a subgroup of $G$ containing $[G,G]$ (so $N$ is normal). Then $N$ is of type $\operatorname{F}_m$ if and only if for every $\chi\in\operatorname{Hom}(G,\mathbb{R})$ with $\chi(N)=0$ we have $[\chi]\in\Sigma^m(G)$.
\end{cit}
For example, if $\chi \colon G\twoheadrightarrow\mathbb{Z}$ is a discrete character, then $\ker(\chi)$ is of type $\operatorname{F}_m$ if and only if $[\pm\chi]\in\Sigma^m(G)$.
\medskip
The setup of Definition~\ref{def:bnsr} is particularly tractable in the situation where $Y$ is an affine cell complex and $\chi$ is affine on cells. Then discrete Morse theory enters the picture, and higher (essential) connectivity properties can be deduced from higher connectivity properties of ascending/descending links.
An \emph{affine cell complex} $Y$ is the quotient of a disjoint union of euclidean polytopes modulo an equivalence relation that maps every polytope injectively into $Y$, with images called \emph{cells}, such that such cells intersect in faces (see \cite[Definition~I.7.37]{bridson99}). In particular, every cell has an affine structure. The link $\operatorname{lk}_Y v$ of a vertex $v$ of $Y$ is the set of directions in $Y$ emanating out of $v$. The link is naturally a spherical simplicial complex, whose closed cells consist of directions pointing into closed cells of $Y$. If every cell is a cube of some dimension, we call $Y$ an affine cube complex.
The following is taken directly from \cite{witzel15}:
\begin{definition}[Morse function]\label{def:morse}
The most general kind of \emph{Morse function} on $Y$ that we will be using is a map $(h,s) \colon Y \to \mathbb{R} \times \mathbb{R}$ such that both $h$ and $s$ are affine on cells. The codomain is ordered lexicographically, and the conditions for $(h,s)$ to be a Morse function are the following: the function $s$ takes only finitely many values on vertices of $Y$, and there is an $\varepsilon > 0$ such that every pair of adjacent vertices $v$ and $w$ either satisfy $\abs{h(v) - h(w)} \ge \varepsilon$, or else $h(v) = h(w)$ and $s(v)\ne s(w)$.
\end{definition}
Let us summarize some setup from \cite{witzel15}: We call $h$ the \emph{height}, $s$ the \emph{secondary height} and $(h,s)$ the \emph{refined height}. Every cell has a unique vertex of maximal refined height and a unique vertex of minimal refined height. The \emph{ascending star} $\operatorname{st}^{(h,s)\uparrow}_Y v$ of a vertex $v$ (with respect to $(h,s)$) is the subcomplex of $\operatorname{st}_Y v$ consisting of cells $\sigma$ such that $v$ is the vertex of minimal refined height in $\sigma$. The \emph{ascending link} $\operatorname{lk}^{(h,s)\uparrow}_Y v$ of $v$ is the link of $v$ in $\operatorname{st}^{(h,s)\uparrow}_Y v$. The \emph{descending star} and the \emph{descending link} are defined analogously. Since $h$ and $s$ are affine, ascending and descending links are full subcomplexes. We denote by $Y^{p \le h \le q}$ the full subcomplex of $Y$ supported on vertices $v$ with $p \le h(v) \le q$.
With our definition of Morse function as above, we have the following Morse Lemma, which was proved in \cite{witzel15} (compare to \cite[Corollary~2.6]{bestvina97}):
\begin{lemma}[Morse Lemma]\label{lem:morse}
Let $p,q,r \in \mathbb{R} \cup \{\pm \infty\}$ with $p \le q \le r$. If for every vertex $v \in Y^{q < h \le r}$ the descending link $\operatorname{lk}^{(h,s)\downarrow}_{Y^{p \le h}} v$ is $(k-1)$-connected then the pair $(Y^{p \le h \le r},Y^{p \le h \le q})$ is $k$-connected. If for every vertex $v \in Y^{p \le h < q}$ the ascending link $\operatorname{lk}^{(h,s)\uparrow}_{Y^{h \le r}} v$ is $(k-1)$-connected then the pair $(Y^{p \le h \le r},Y^{q \le h \le r})$ is $k$-connected.
\end{lemma}
\begin{proof}
For the sake of keeping things self-contained, we redo the proof from \cite{witzel15}.
The ``ascending'' version is like the ``descending'' version with $(h,s)$ replaced by $-(h,s)$, so we only prove the descending version. Using induction (and compactness of spheres if $r = \infty$) we can assume that $r-q \le \varepsilon$, where $\varepsilon > 0$ is as in Definition~\ref{def:morse}. By compactness of spheres, it suffices to show that there exists a well order $\preceq$ on the vertices of $Y^{q < h \le r}$ such that the pair
\[
(S_{\preceq v},S_{\prec v}) \mathrel{\mathop{:}}= \left(Y^{p \le h \le q} \cup \bigcup_{w \preceq v} \operatorname{st}^{(h,s)\downarrow}_{Y^{p \le h}} w \text{, } Y^{p \le h \le q} \cup \bigcup_{w \prec v} \operatorname{st}^{(h,s)\downarrow}_{Y^{p \le h}} w\right)
\]
is $k$-connected for every vertex $v\in Y^{q < h \le r}$. Let $\preceq$ be any well order satisfying $v\prec v'$ whenever $s(v)<s(v')$ (this exists since $s$ takes finitely many values on vertices). Note that $S_{\preceq v}$ is obtained from $S_{\prec v}$ by coning off $S_{\prec v} \cap \partial \operatorname{st} v$. We claim that this intersection equals the boundary $B$ of $\operatorname{st}^{(h,s)\downarrow} v$ in $Y_{p \le h}^{(h,s)\le (h,s)(v)}$, which is homeomorphic to $\operatorname{lk}^{(h,s)\downarrow}_{Y^{p \le h}} v$ and hence $(k-1)$-connected by assumption. The inclusion $S_{\prec v} \cap \partial \operatorname{st} v \subseteq B$ is evident. Since $S_{\prec v} \cap \partial \operatorname{st} v$ is a full subcomplex of $\partial \operatorname{st} v$, for the converse it suffices to verify that any vertex $w$ adjacent to $v$ with $(h,s)(w) < (h,s)(v)$ lies in $S_{\prec v}$. If $h(w) < h(v)$ then $h(w) \le h(v) - \varepsilon \le r - \varepsilon \le q$, so $w \in Y^{p \le h \le q}$. Otherwise $s(w) < s(v)$ and hence $w \prec v$.
\end{proof}
In practice, the following form is all we will need.
\begin{corollary}\label{cor:morse}
If $Y$ is $(m-1)$-connected and for every vertex $v \in Y^{h < q}$ the ascending link $\operatorname{lk}^{(h,s)\uparrow}_Y v$ is $(m-1)$-connected, then $Y^{q \le h}$ is $(m-1)$-connected.
\end{corollary}
\begin{proof}
This follows from the Morse Lemma using $p=-\infty$ and $r=\infty$.
\end{proof}
\section{The groups and characters}\label{sec:groups_and_chars}
Thompson's group $F$ admits many generalizations. In this paper we will be concerned with a family of groups usually denoted $F_{n,\infty}$, which we abbreviate to $F_n$ ($2\le n\in\mathbb{N}$); the group $F_2$ is $F$. As a warning, when dealing with generalizations of Thompson groups, e.g., in \cite{brown87,brin98}, the notation $F_n$ often refers to a different group, in which $F_{n,\infty}$ sits with finite index (not to mention that $F_n$ also often denotes the free group of rank $n$). We will not be concerned with these though, so here \textbf{the notation $F_n$ will always refer to the group denoted $F_{n,\infty}$ in \cite{brown87,brin98,kochloukova12}}. In this section we give three viewpoints of $F_n$ and its characters. The three viewpoints of $F_n$ are: its standard infinite presentation, as a group of homeomorphisms of $[0,1]$, and as a group of $n$-ary tree pairs. The equivalence of these was proved in the original paper by Brown \cite[Section~4]{brown87}. For all three ways of viewing $F_n$, we also discuss characters of $F_n$ from that viewpoint. The last one will be the most important, since it is the one we use later to compute the $\Sigma^m(F_n)$.
\subsection{Presentation}\label{sec:presentation}
The standard infinite presentation for $F_n$ (\cite[Proposition~4.8]{brown87}) is
$$F_n\cong\gen{x_i~(i\in\mathbb{N}_0)\mid x_j x_i = x_i x_{j+(n-1)} \text{ for all }i<j}\text{.}$$
It is easy to abelianize this presentation, and get that $F_n/[F_n,F_n] \cong \mathbb{Z}^n$. One basis for this is $\bar{x}_0,\dots,\bar{x}_{n-1}$. From this, one could get a basis for $\operatorname{Hom}(F_n,\mathbb{R}) \cong \mathbb{R}^n$ by taking the dual basis. This was one tool used in \cite{kochloukova12} to compute $\Sigma^2(F_n)$.
\subsection{Piecewise linear homeomorphisms}\label{sec:homeos}
A more hands-on basis for $\operatorname{Hom}(F_n,\mathbb{R})$ can be described by viewing $F_n$ as piecewise linear self homeomorphisms of $[0,1]$. We will not prove anything in this subsection, since the model for $F_n$ we will actually use comes in the next subsection; here we are just giving some intuition for $F_n$ and its characters. Each element $f\in F_n$ is an orientation preserving homeomorphism $f\colon[0,1] \to [0,1]$ that is piecewise linear with slopes powers of $n$, and whose finitely many points of non-differentiability lie in $\mathbb{Z}[1/n]$. Already this gives us two interesting characters, usually denoted $\chi_0$ and $\chi_1$. The character $\chi_0$ is the log base $n$ of the right derivative at $0$, and $\chi_1$ is the log base $n$ of the left derivative at $1$.
Any such $f\in F_n$ is determined by certain sets of \emph{breakpoints} in the domain and range, as we now describe. Build a finite set $P \subseteq [0,1]$ by starting with the points $\{0,1\}$, and then do finitely many iterations of the following procedure:
\begin{quote}
Pick two points $x$ and $x'$ already in $P$, with no points in between them yet in $P$, and then add to $P$ the $n-1$ new points $\frac{(n-i)x+ix'}{n}$ for $0<i<n$.
\end{quote}
For example, after one iteration of this, $P$ consists of $\{0,1/n,2/n,\dots,(n-1)/n,1\}$. Call $P$ a \emph{legal set of breakpoints}. If $Q$ is another legal set of breakpoints with $|P|=|Q|$, then we can define $f \colon [0,1] \to [0,1]$ by sending the points of $P$, in order, to the points of $Q$, and then extending affinely between breakpoints. By construction, slopes will be powers of $n$ and breakpoints will lie in $\mathbb{Z}[1/n]$. Moreover, every $f\in F_n$ arises in this way \cite[Proposition~4.4]{brown87}.
One can show that every element of $\mathbb{Z}[1/n] \cap [0,1]$ appears in some legal set of breakpoints. Moreover, while a point can appear in more than one legal set of breakpoints, and have a different ``position'' in different legal sets of breakpoints, the ``position modulo $n-1$'' is a well defined measurement. The equivalence classes induced by this measurement are in fact the $F_n$-orbits in $\mathbb{Z}[1/n] \cap (0,1)$. (Again, proofs are left to the reader.) For each $0\le i\le n-2$, let $O_i$ denote the $F_n$-orbit of points of $\mathbb{Z}[1/n] \cap (0,1)$ appearing in a legal set of breakpoints in a position congruent to $i$ modulo $n-1$.
Now we can define characters on $F_n$. For a point $x\in(0,1]$ define $LD|_x \colon F_n \to \mathbb{Z}$ to be the log base $n$ of the left derivative at $x$. Similarly for $x\in[0,1)$ let $RD|_x$ be the log base $n$ of the right derivative at $x$. These are not group homomorphisms. However, summing these over a complete $F_n$-orbit $O_i$ would define a homomorphism. To get these sums to be finite, we will actually sum up $LD|_x - RD|_x$, since then for a given $f$ this can be nonzero at only finitely many points. For $0\le i\le n-2$ define:
$$\psi_i(f) \mathrel{\mathop{:}}= \sum\limits_{x\in O_i} LD|_x (f) - RD|_x (f) \text{.}$$
This is a group homomorphism $\psi_i \colon F_n \to \mathbb{Z}$. As a remark, the characters $-\chi_0$ and $\chi_1$ are also of this form, namely for $-\chi_0$ we sum over the orbit of $0$ (which is just $\{0\}$) and for $\chi_1$ we sum over the orbit $\{1\}$. (Technically this only makes sense if we declare $LD|_0=0$ and $RD|_1=0$.)
Note that $\sum_{i=0}^{n-2} \psi_i = \chi_0 - \chi_1$. However, one can check that $\chi_0,\psi_0,\dots,\psi_{n-3},\chi_1$ are linearly independent, and so form a basis of $\operatorname{Hom}(F_n,\mathbb{R})\cong \mathbb{R}^n$. In the next subsection we will redefine the $\psi_i$ using a different model for $F_n$, and in particular will prove all of these facts.
\subsection{$n$-ary trees}\label{sec:trees}
This brings us to the descriptions of the $F_n$ and their characters that we will use for the rest of the paper, namely making use of $n$-ary trees.
An \emph{$n$-ary tree} will always mean a finite connected tree with a single vertex of degree $n$ or $0$, its \emph{root}, some number of degree $1$ vertices, the \emph{leaves}, and all other vertices of degree $n+1$. The \emph{trivial tree} $\mathrm{I}$ is the one where the root has degree $0$ (so there are no leaves or other vertices). The \emph{$n$-caret} $\Lambda_n$ is the non-trivial $n$-ary tree in which every vertex is either the root or a leaf. Every $n$-ary tree can be obtained as a union of $n$-carets.
For an $n$-ary tree $T$, each leaf of $T$ has a unique reduced path to the root. The length of this path (i.e., its number of edges) defines the \emph{depth} of that leaf. As a remark, the trivial tree is characterized as having a leaf of depth $0$, and the $n$-caret is characterized as having all its leaves of depth $1$.
For each $n$-ary tree $T$, say with $r$ leaves, we fix a planar embedding of $T$, and hence an order on the leaves. We label the leaves $0$ through $r-1$, left to right. The next definition is of various measurements that we will call \emph{proto-characters} on $T$, which will later be used to define characters on elements of $F_n$.
\begin{definition}[Proto-characters]\label{def:proto_chars}
Let $T$ be an $n$-ary tree with leaves labeled $0$ through $r-1$, left to right. Define $L(T)$ to be the depth of the $0$th leaf. Define $R(T)$ to be the depth of the $(r-1)$st leaf. For each $0\le j\le r-1$, define $d_j(T)$ to be the depth of the $j$th leaf, and then for each $0\le j\le r-2$ define
$$\delta_j(T) \mathrel{\mathop{:}}= d_j(T) - d_{j+1}(T) \text{.}$$
This is the \emph{$j$th change of depth} of $T$. For each $0\le i\le n-2$ define
$$D_i(T) \mathrel{\mathop{:}}= \sum \{\delta_j(T) \mid 0\le j \le r-2 \text{, } j\equiv i \mod (n-1)\} \text{.}$$
\end{definition}
As a quick (and trivial) example, if $\Lambda_n$ is the $n$-caret then $L(\Lambda_n)=R(\Lambda_n)=1$, and $D_i(\Lambda_n)=0$ for all $i$, since $d_j(\Lambda_n)=1$ for all $j$. A less trivial example is given in Figure~\ref{fig:proto}.
\begin{figure}[htb]
\begin{tikzpicture}\centering
\draw (0,0) -- (1,1) -- (2,0) (1,1) -- (1,0) (-1,-1) -- (0,0) -- (1,-1) (0,0) -- (0,-1) (-1,-2) -- (0,-1) -- (1,-2) (0,-1) -- (0,-2);
\filldraw (-1,-1) circle (1.5pt) (0,-2) circle (1.5pt) (1,-1) circle (1.5pt) (2,0) circle (1.5pt);
\filldraw[white] (-1,-2) circle (1.5pt) (1,-2) circle (1.5pt) (1,0) circle (1.5pt);
\draw (-1,-2) circle (1.5pt) (1,-2) circle (1.5pt) (1,0) circle (1.5pt);
\end{tikzpicture}
\caption{A $3$-ary tree $T$ with $r=7$. Leaves $0$, $2$, $4$ and $6$ are labeled with a black dot (those congruent to $0$ mod $2$), and leaves $1$, $3$ and $5$ with a white dot (congruent to $1$ mod $2$). Visibly, $L(T)=2$ and $R(T)=1$. To compute $D_0$, we add $\delta_0+\delta_2+\delta_4$ and get $D_0(T)=-1+0+1=0$. To compute $D_1$ we add $\delta_1+\delta_3+\delta_5$ and get $D_1(T)=0+1+0=1$.}\label{fig:proto}
\end{figure}
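As a sanity check on Figure~\ref{fig:proto}, the proto-characters of Definition~\ref{def:proto_chars} can be computed mechanically from the list of leaf depths. The following Python sketch is purely illustrative (the function name and the depth-list encoding of a tree are our own conventions, not part of the text):

```python
def proto_chars(depths, n):
    """L, R and D_0,...,D_{n-2} of an n-ary tree, given its leaf depths
    listed left to right (following the definition of proto-characters)."""
    r = len(depths)
    delta = [depths[j] - depths[j + 1] for j in range(r - 1)]  # delta_j
    D = [sum(delta[j] for j in range(r - 1) if j % (n - 1) == i)
         for i in range(n - 1)]
    return depths[0], depths[-1], D  # L(T), R(T), (D_0,...,D_{n-2})

# Leaf depths of the 3-ary tree T of the figure, leaves 0 through 6:
L, R, D = proto_chars([2, 3, 3, 3, 2, 1, 1], n=3)
print(L, R, D)  # L(T)=2, R(T)=1, D_0(T)=0, D_1(T)=1
```

The trivial example of the $n$-caret also comes out as expected: `proto_chars([1, 1, 1], 3)` gives $L=R=1$ and $D_0=D_1=0$.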
An \emph{$n$-ary tree pair} $(T_-,T_+)$ consists of $n$-ary trees $T_-$ and $T_+$ such that $T_-$ and $T_+$ have the same number of leaves. Two $n$-ary tree pairs are \emph{equivalent} if they can be transformed into each other via a sequence of reductions and expansions. An \emph{expansion} amounts to adding an $n$-caret to the $k$th leaf of $T_-$ and one to the $k$th leaf of $T_+$, for some $k$. A \emph{reduction} is the reverse of an expansion. We denote the equivalence class of $(T_-,T_+)$ by $[T_-,T_+]$.
These $[T_-,T_+]$ are the elements of $F_n$. The multiplication, say of $[T_-,T_+]$ and $[U_-,U_+]$, written $[T_-,T_+] \cdot [U_-,U_+]$, is defined as follows. First note that $T_+$ and $U_-$ admit an $n$-ary tree $S$ that contains them both, so using expansions we have $[T_-,T_+] = [\hat{T}_-,S]$ and $[U_-,U_+] = [S,\hat{U}_+]$ for some $\hat{T}_-$ and $\hat{U}_+$. Then we define
$$[T_-,T_+] \cdot [U_-,U_+] \mathrel{\mathop{:}}= [\hat{T}_-,S] \cdot [S,\hat{U}_+] = [\hat{T}_-,\hat{U}_+] \text{.}$$
This multiplication is well defined, and it turns out the resulting structure is a group, namely $F_n$.
Having described elements of $F_n$ using the $n$-ary tree pair model, we now describe characters. We make use of the proto-characters from Definition~\ref{def:proto_chars}.
\begin{definition}[Characters]\label{def:chars}
Let $f=[T,U]\in F_n$, represented by the $n$-ary tree pair $(T,U)$. Define
$$\chi_0(f) \mathrel{\mathop{:}}= L(U) - L(T) \text{ and } \chi_1(f) \mathrel{\mathop{:}}= R(U) - R(T) \text{.}$$
For $0\le i\le n-2$ define
$$\psi_i(f) \mathrel{\mathop{:}}= D_i(U) - D_i(T) \text{.}$$
\end{definition}
\begin{lemma}\label{lem:chars_well_def}
The functions $\chi_0$, $\chi_1$ and $\psi_i$ ($0\le i\le n-2$) are well defined group homomorphisms from $F_n$ to $\mathbb{Z}$.
\end{lemma}
\begin{proof}
For well definedness, we need to show that for $\chi\in\{\chi_0,\chi_1,\psi_i\}_{i=0}^{n-2}$, if $T'$ (respectively $U'$) is obtained from $T$ (respectively $U$) by adding an $n$-caret to the $k$th leaf, then $\chi(T',U')=\chi(T,U)$. It suffices to show that for $A\in \{L,R,D_i\}_{i=0}^{n-2}$, the value $A(T')-A(T)$ depends only on $i$, $k$ and $r$, where $r$ is the number of leaves of $T$. Since $U$ has the same number of leaves, this will show that $A(T')-A(T) = A(U')-A(U)$, and so $A(U')-A(T') = A(U)-A(T)$ and $\chi(T',U')=\chi(T,U)$. For $A=L,R$ this is clear: $L(T')-L(T)=1$ if $k=0$ and $L(T')-L(T)=0$ otherwise, and $R(T')-R(T)=1$ if $k=r-1$ and $R(T')-R(T)=0$ otherwise. Now let $A=D_i$. We then have the following:
\begin{enumerate}
\item If $0<k$ and $k-1\equiv_{n-1} i$, then $D_i(T')=D_i(T)-1$.
\item If $k<r-1$ and $k\equiv_{n-1} i$, then $D_i(T')=D_i(T)+1$.
\item Otherwise $D_i(T')-D_i(T) = 0$.
\end{enumerate}
In particular, $D_i(T')-D_i(T)$ depends only on $i$, $k$ and $r$.
It is now easy to check that the $\chi$ are group homomorphisms. If we have two elements to multiply, represent them with a common tree and get $[T,U]\cdot [U,V] = [T,V]$; then for $A\in\{L,R,D_i\}$ we have $A(U) - A(T) + A(V) - A(U) = A(V) - A(T)$, so any $\chi\in\{\chi_0,\chi_1,\psi_i\}$ is a homomorphism.
\end{proof}
As the proof showed, we now know how the measurements $L$, $R$ and $D_i$ change when an $n$-caret is added to the $k$th leaf of an $n$-ary tree. For example if $0<k$ and $k-1\equiv_{n-1} i$ then $D_i$ goes down by $1$, and if $k<r-1$ and $k\equiv_{n-1} i$ then $D_i$ goes up by $1$. See Figure~\ref{fig:proto_change} for an example.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\draw (-0.5,0) -- (4.5,0) -- (2,2) -- (-0.5,0) (0,0) -- (0,-1) (1,0) -- (1,-1) (2,0) -- (2,-1) (3,0) -- (3,-1) (4,0) -- (4,-1);
\filldraw (0,-1) circle (1.5pt) (2,-1) circle (1.5pt) (4,-1) circle (1.5pt);
\filldraw[white] (1,-1) circle (1.5pt) (3,-1) circle (1.5pt);
\draw (1,-1) circle (1.5pt) (3,-1) circle (1.5pt);
\node at (2,0.75) {$T$};
\begin{scope}[xshift=6.5cm]
\draw (-0.5,0) -- (4.5,0) -- (2,2) -- (-0.5,0) (0,0) -- (0,-1) (1,0) -- (1,-1) (2,0) -- (2,-1) (3,0) -- (3,-1) (4,0) -- (4,-1);
\filldraw (0,-1) circle (1.5pt) (4,-1) circle (1.5pt);
\filldraw[white] (1,-1) circle (1.5pt) (3,-1) circle (1.5pt);
\draw (1,-1) circle (1.5pt) (3,-1) circle (1.5pt);
\draw (1.5,-2) -- (2,-1) -- (2.5,-2) (2,-1) -- (2,-2);
\filldraw (1.5,-2) circle (1.5pt) (2.5,-2) circle (1.5pt);
\filldraw[white] (2,-2) circle (1.5pt);
\draw (2,-2) circle (1.5pt);
\node at (2,0.75) {$T'$};
\end{scope}
\end{tikzpicture}
\caption{A $3$-caret is added to the second leaf (counting starts at zero) of some $3$-ary tree $T$ with five leaves, to get a new $3$-ary tree $T'$, so $k=2$ and $r=5$. For $T$ we have some changes of depth $\delta_0,\dots,\delta_3$, and for $T'$ we have some changes of depth $\delta_0',\dots,\delta_5'$. The relationships are $\delta_0'=\delta_0$, $\delta_1'=\delta_1-1$, $\delta_2'=0$, $\delta_3'=0$, $\delta_4'=\delta_2+1$ and $\delta_5'=\delta_3$. Hence $D_0'=D_0+1$ and $D_1'=D_1-1$.}\label{fig:proto_change}
\end{figure}
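The bookkeeping in Figure~\ref{fig:proto_change} can be replayed on a concrete instance. In the following illustrative Python sketch (the depth-list encoding and the particular choice of depths $[2,2,2,1,1]$ for a $5$-leaf $3$-ary tree are our own, not from the text), adding a $3$-caret to leaf $k=2$ indeed sends $D_0\mapsto D_0+1$ and $D_1\mapsto D_1-1$:

```python
def D(depths, n):
    """Proto-characters D_0,...,D_{n-2} from a list of leaf depths."""
    r = len(depths)
    delta = [depths[j] - depths[j + 1] for j in range(r - 1)]
    return [sum(delta[j] for j in range(r - 1) if j % (n - 1) == i)
            for i in range(n - 1)]

def add_caret(depths, k, n):
    """Leaf depths after attaching an n-caret to the k-th leaf:
    depth d_k is replaced by n copies of d_k + 1."""
    return depths[:k] + [depths[k] + 1] * n + depths[k + 1:]

# A concrete 3-ary tree with r=5 leaves (a caret on leaf 0 of the root caret):
T = [2, 2, 2, 1, 1]
Tp = add_caret(T, k=2, n=3)
print(D(T, 3), D(Tp, 3))  # D_0 goes up by 1, D_1 goes down by 1
```

Here $D(T)=[1,0]$ and $D(T')=[2,-1]$, matching the case analysis: $k=2\equiv_2 0$ with $k<r-1$ raises $D_0$, and $k-1=1\equiv_2 1$ lowers $D_1$.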
\begin{proposition}[Basis]\label{prop:char_basis}
As elements of $\operatorname{Hom}(F_n,\mathbb{R})\cong \mathbb{R}^n$, the $n$ characters
$$\chi_0,\psi_0,\dots,\psi_{n-3},\chi_1$$
are linearly independent, and hence form a basis. A dependence involving $\psi_{n-2}$ is that $\psi_0+\cdots+\psi_{n-2}=\chi_0 - \chi_1$.
\end{proposition}
\begin{proof}
For the second statement, just note that for any tree $T$, $D_0(T)+\cdots+D_{n-2}(T) = L(T) - R(T)$.
We turn to the statement about linear independence. For $0\le k\le n-1$, let $T_k$ be the tree consisting of an $n$-caret with another $n$-caret on its $k$th leaf, so $T_k$ has leaves labeled $0$ through $2n-2$. It is straightforward to compute $L(T_0)=2$, $L(T_k)=1$ for $k>0$, $R(T_{n-1})=2$, $R(T_k)=1$ for $k<n-1$, and the following for the $D_i$ ($0\le i\le n-2$):
\begin{align*}
D_i(T_k) = \left\{\begin{array}{ll} -1 & \text{if } i = k-1 \\
1 & \text{if } i = k \\
0 & \text{else.}
\end{array}\right.
\end{align*}
For $0\le i\le n-2$, we therefore have
$$(D_i(T_0),D_i(T_1),\dots,D_i(T_{n-1})) = (0,\dots,0,1,-1,0,\dots,0)$$
with the $1$ at $D_i(T_i)$. We will also need to use trees $T_k'$, obtained by attaching the root of $T_k$ to the last leaf of an $n$-caret. For each $k$ we have $L(T_k')=1$, $R(T_k')=R(T_k)+1$ and $D_i(T_k')=D_i(T_k)$ for all $0\le i\le n-3$.
Consider the $n$ elements $[T_0,T_{n-1}],\dots,[T_{n-2},T_{n-1}],[T_0',T_{n-1}']$ of $F_n$. Our goal now is to hit them with the $n$ characters $\chi_0,\psi_0,\dots,\psi_{n-3},\chi_1$ to get an $n$-by-$n$ matrix, and then show that this matrix is non-singular. In particular this will prove that these $n$ characters are linearly independent. The $\chi_0$ row is $(-1,0,\dots,0)$ and the $\chi_1$ row is $(1,\dots,1)$. For $0\le i\le n-3$, $D_i(T_{n-1})=0$, so $\psi_i([T_k,T_{n-1}])=-D_i(T_k)$ for $0\le k\le n-2$, and similarly $\psi_i([T_0',T_{n-1}'])=-D_i(T_0')$. Hence we can compute the rows for $\psi_i$, using our previous computation of the $D_i(T_k)$. We get that the $\psi_0$ row is $(-1,1,0,\dots,0,-1)$, the $\psi_1$ row is $(0,-1,1,0,\dots,0)$, and so forth up to the $\psi_{n-3}$ row, which is $(0,\dots,0,-1,1,0)$. Arranging these rows into a matrix, we need to show non-singularity of the matrix:
\begin{align*}
\begin{pmatrix}
-1 & 0 & 0 & 0 & \dots & 0 & 0 & 0 \\
-1 & 1 & 0 & 0 & \dots & 0 & 0 & -1\\
0 & -1 & 1 & 0 & \dots & 0 & 0 & 0 \\
0 & 0 & -1 & 1 & \dots & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & \dots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \dots & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & \dots & -1 & 1 & 0 \\
1 & 1 & 1 & 1 & \dots & 1 & 1 & 1
\end{pmatrix}
\end{align*}
This is visibly ``almost'' lower triangular; the second row (the $\psi_0$ row) is the only problem, if $n>2$ (note that if $n=2$ then the only rows are $\chi_0$ and $\chi_1$, and this matrix is lower triangular and non-singular). We hit this row with elementary row operations, namely if $r_i$ is the $i$th row we replace $r_2$ with
$$r_2 + r_n - r_{n-1} - 2r_{n-2} - 3r_{n-3} - \cdots - (n-3)r_3 \text{.}$$
The new second row is $(0,n-1,0,\dots,0)$, and hence the matrix reduces to a lower triangular matrix whose determinant is readily computed to be $-(n-1)$. This matrix is therefore non-singular, and so the characters $\chi_0,\psi_0,\dots,\psi_{n-3},\chi_1$ are linearly independent elements of $\operatorname{Hom}(F_n,\mathbb{R})\cong \mathbb{R}^n$.
\end{proof}
Clearly this proof would have been faster if, instead of $\psi_0$, we used the character $\psi_0 + \chi_1 - \psi_{n-3} - 2\psi_{n-4} - 3\psi_{n-5} - \cdots - (n-3)\psi_1$, but since computing the $\Sigma^m(F_n)$ will involve being able to tell whether our basis characters increase, decrease, or neither under certain moves, it will be advantageous to have basis characters with the easiest possible descriptions.
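The non-singularity claim in the proof is easy to confirm numerically for small $n$. The sketch below (our own, for illustration) builds the matrix row by row exactly as described in the proof and checks that its determinant is $-(n-1)$, using exact rational arithmetic:

```python
from fractions import Fraction

def char_matrix(n):
    """Rows chi_0, psi_0, ..., psi_{n-3}, chi_1 evaluated on the elements
    [T_0,T_{n-1}], ..., [T_{n-2},T_{n-1}], [T_0',T_{n-1}'], as in the proof."""
    M = [[-1] + [0] * (n - 1)]                 # chi_0 row: (-1,0,...,0)
    if n > 2:
        row = [0] * n
        row[0], row[1], row[-1] = -1, 1, -1    # psi_0 row: (-1,1,0,...,0,-1)
        M.append(row)
        for i in range(1, n - 2):              # psi_1, ..., psi_{n-3}
            row = [0] * n
            row[i], row[i + 1] = -1, 1
            M.append(row)
    M.append([1] * n)                          # chi_1 row: (1,...,1)
    return M

def det(M):
    """Determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    size, sign, d = len(M), 1, Fraction(1)
    for c in range(size):
        p = next((r for r in range(c, size) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, size):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return sign * d

for n in range(2, 8):
    assert det(char_matrix(n)) == -(n - 1)
```

For instance $n=4$ gives the $4\times 4$ matrix with rows $(-1,0,0,0)$, $(-1,1,0,-1)$, $(0,-1,1,0)$, $(1,1,1,1)$, whose determinant is $-3$.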
\begin{remark}
The $\psi_i$ here agree with the $\psi_i$ in Subsection~\ref{sec:homeos}, provided the connection between the homeomorphism model and the $n$-ary tree pair model is made correctly. For $(T,U)$ we view $U$ as the ``domain tree'' and $T$ as the ``range tree''. Each tree defines a subdivision of $[0,1]$ into as many subintervals as there are leaves. Then, the subdivision given by the domain tree is taken to the subdivision given by the range tree, defining a homeomorphism as described in Subsection~\ref{sec:homeos}. It is straightforward to check that the two definitions of $\psi_i$ agree.
\end{remark}
\section{Stein--Farley complexes}\label{sec:stein_farley}
In this section we recall the Stein--Farley $\operatorname{CAT}(0)$ cube complex $X_n$ on which $F_n$ acts, and extend the characters $\chi \colon F_n \to \mathbb{R}$ to functions $\chi \colon X_n \to \mathbb{R}$. The complex $X_n$ was first constructed by Stein \cite{stein92} building off ideas of Brown, and shown to be $\operatorname{CAT}(0)$ by Farley \cite{farley03}, who viewed $F_n$ as a \emph{diagram group}, \`a la Guba and Sapir \cite{guba97}. To define $X_n$, we first expand from considering $n$-ary trees to considering $n$-ary forests. An \emph{$n$-ary forest} is a disjoint union of finitely many $n$-ary trees. The roots and leaves of the trees are \emph{roots} and \emph{leaves} of the forest. We fix an order on the trees, and hence on the leaves. An \emph{$n$-ary forest pair} $(E_-,E_+)$ consists of $n$-ary forests $E_-$ and $E_+$ such that $E_-$ and $E_+$ have the same number of leaves. We call the roots of $E_-$ \emph{heads} and the roots of $E_+$ \emph{feet} of the pair (the terminology comes from flipping $E_+$ upside down and identifying the leaves of $E_-$ and $E_+$).
Just like the tree case, we have a notion of equivalence. Two $n$-ary forest pairs are \emph{equivalent} if they can be transformed into each other via a sequence of reductions or expansions. We denote the equivalence class of $(E_-,E_+)$ by $[E_-,E_+]$. Let $\mathcal{P}$ be the set of equivalence classes of $n$-ary forest pairs.
This set has two important pieces of structure. First, it is a groupoid. If $[E_-,E_+]$ has $k$ heads and $\ell$ feet, and $[D_-,D_+]$ has $\ell$ heads and $m$ feet, then we can define their product, written $[E_-,E_+] \cdot [D_-,D_+]$, which is an $n$-ary forest pair with $k$ heads and $m$ feet. Like in $F_n$, with $n$-ary tree pairs, to define the product we first note that $E_+$ and $D_-$ admit an $n$-ary forest $C$ that contains them both. Then applying expansions we can write $[E_-,E_+] = [\hat{E}_-,C]$ and $[D_-,D_+] = [C,\hat{D}_+]$ for some $\hat{E}_-$ and $\hat{D}_+$, and then define
$$[E_-,E_+] \cdot [D_-,D_+] \mathrel{\mathop{:}}= [\hat{E}_-,C] \cdot [C,\hat{D}_+] = [\hat{E}_-,\hat{D}_+] \text{.}$$
For $\mathcal{P}$ to be a groupoid with this multiplication, we need identities and inverses. A forest in which all trees are trivial is called a \emph{trivial forest}. The trivial forest with $\ell$ trees is denoted $\operatorname{id}_\ell$. We can view an $n$-ary forest $E$ as an $n$-ary forest pair via $E \mapsto [E,\operatorname{id}_\ell]$, where $\ell$ is the number of leaves of $E$. It is clear that for any element with $k$ heads and $\ell$ feet, $[\operatorname{id}_k,\operatorname{id}_k]$ is the left identity and $[\operatorname{id}_\ell,\operatorname{id}_\ell]$ is the right identity. We also have inverses, namely the (left and right) inverse of $[E_-,E_+]$ is $[E_+,E_-]$.
Since $F_n$ lives in $\mathcal{P}$ as the set of elements with one head and one foot, we have an action of $F_n$, by multiplication, on the subset $\mathcal{P}_1$ of elements with one head.
The second piece of structure on $\mathcal{P}$ is an order relation. The order is defined by: $[E_-,E_+] \le [D_-,D_+]$ whenever there is an $n$-ary forest $C$ such that $[E_-,E_+] \cdot C = [D_-,D_+]$. We informally refer to right multiplication by an $n$-ary forest pair of the form $[C,\operatorname{id}_\ell]$ as \emph{splitting} the feet of $[E_-,E_+]$. Multiplying by $[\operatorname{id}_\ell,C]$ is called \emph{merging}. This terminology comes from viewing $E_+$ upside down with its leaves attached to those of $E_-$, forming a ``strand diagram'' \`a la \cite{belk14}. It is straightforward to check that $\le$ is a partial order, so $\mathcal{P}$ is a poset. The subset $\mathcal{P}_1$ of elements with one head is a subposet.
The topological realization of the poset $(\mathcal{P}_1,\le)$ is a contractible simplicial complex on which $F_n$ acts, and the \emph{Stein--Farley complex} $X_n$ is a certain invariant subcomplex with a natural cubical structure. Given $n$-ary forest pairs $[E_-,E_+] \le [E_-,E_+] \cdot E$, write $[E_-,E_+] \preceq [E_-,E_+] \cdot E$ whenever $E$ is an \emph{elementary $n$-ary forest}. This means that each $n$-ary tree of $E$ is either trivial or a single $n$-caret. Now $X_n$ is defined to be the subcomplex of $|\mathcal{P}_1|$ consisting of chains $x_0<\cdots<x_k$ with $x_i \preceq x_j$ for all $i\le j$. The cubical structure is given by intervals: given $x\preceq y$, say $y=x\cdot E$ with $E$ an elementary $n$-ary forest containing $r$ carets, the interval $[x,y]\mathrel{\mathop{:}}=\{z\mid x\le z\le y\}$ is a Boolean lattice of rank $r$, and so the simplices in $[x,y]$ form an $r$-cube. Note that $x\prec y$ are adjacent, i.e., share a $1$-cube, if and only if $y=x \cdot E$ for $E$ an elementary $n$-ary forest with just a single $n$-caret.
\begin{theorem}\cite{farley03}
$X_n$ is a $\operatorname{CAT}(0)$ cube complex.
\end{theorem}
Every cube $\sigma$ has a unique vertex $x$ with fewest feet and a unique vertex $y$ with most feet. There is a unique elementary $n$-ary forest $E$ with $y=x\cdot E$, and the other vertices of $\sigma$ are obtained by multiplying $x$ by subforests of $E$. We use the following notation: suppose $x$ has $\ell$ feet and $E=(A_0,\dots,A_{\ell-1})$, where each $A_i$ is either $\mathrm{I}$ or $\Lambda_n$; here $\mathrm{I}$ is the trivial tree and $\Lambda_n$ is the tree with one $n$-caret. Let $\Phi$ be the set of subforests of $E$, written $\Phi \mathrel{\mathop{:}}= \gen{A_0,\ldots,A_{\ell-1}}$. Then the vertex set of $\sigma$ is precisely $x\Phi$.
If we center ourselves at a different vertex $z$ of $\sigma$, then we also have to allow merges. Say $z$ has $r > \ell$ feet. Then we can write $\sigma = z\Psi$ where $\Psi$ is of the form $\gen{A_0,\ldots,A_{\ell-1}}$, where each $A_i$ is either $\mathrm{I}$, $\Lambda_n$ or $\mathrm{V}_n$. Here $\mathrm{V}_n$ is the inverse of the tree with one $n$-caret (so an upside-down $n$-caret). The tuple $(A_0,\ldots,A_{\ell-1})$ can be thought of as an $n$-ary forest pair, with all the $\Lambda_n$ in the first forest and all the $\mathrm{V}_n$ in the second forest (and some $\mathrm{I}$s included if necessary). Then the set $\Psi$ is the set of all $n$-ary forest pairs that can be obtained by removing some of the carets. As before, the vertex set of $\sigma$ is $z\Psi$.
Note that the action of $F_n$ on $X_n$ is free, since the action on vertices is given by multiplication in a groupoid, and if an element stabilizes a cube $[x,y]$ then it fixes $x$.
\subsection{Character height functions}\label{sec:char_height_fxns}
In Subsection~\ref{sec:trees}, we defined characters on $F_n$ by first defining ``proto-characters'' on $n$-ary trees, and then viewing elements of $F_n$ as $n$-ary tree pairs. It is straightforward to extend these proto-characters to be defined on $n$-ary forests. To be precise, each leaf of an $n$-ary forest is connected to a unique root, which gives it a depth, so $n$-ary forests $E$ admit the measurements $L(E)$, $R(E)$, $\delta_j(E)$ and $D_i(E)$. Just as the proto-characters on $n$-ary trees induce the characters (group homomorphisms) $\chi_0$, $\chi_1$ and $\psi_i$ from $F_n$ to $\mathbb{Z}$, the proto-characters on $n$-ary forests induce groupoid homomorphisms $\mathcal{P} \to \mathbb{Z}$ extending these characters.
In particular, the $\chi_i$ and $\psi_i$ can now be evaluated on vertices of $X_n$. Moreover, any character $\chi$ on $F_n$ can be written as a linear combination
\begin{equation}
\label{eq:character_linear_combination}
\chi = a\chi_0 + c_0\psi_0 + \cdots + c_{n-3}\psi_{n-3} + b\chi_1
\end{equation}
thanks to Proposition~\ref{prop:char_basis}. Hence $\chi$ extends to arbitrary $n$-ary forest pairs by interpreting \eqref{eq:character_linear_combination} as a linear combination of the extended characters.
It will be important to know how our basis characters vary between adjacent vertices of $X_n$.
\begin{lemma}[Varying characters]\label{lem:vary_chars}
Let $x$ be a vertex in $X_n$, say with feet numbered $0$ through $r-1$, left to right. Let $\Lambda_n(r,k)$ be the elementary $n$-ary forest with $r$ roots and a single non-trivial tree, namely an $n$-caret on the $k$th root. Let $y=x\cdot \Lambda_n(r,k)$. We have the following:
\begin{enumerate}
\item If $k=0$ then $\chi_0(y)=\chi_0(x)-1$.
\item If $k>0$ then $\chi_0(y)=\chi_0(x)$.
\item If $k<r-1$ then $\chi_1(y)=\chi_1(x)$.
\item If $k=r-1$ then $\chi_1(y)=\chi_1(x)-1$.
\item If $0<k$ and $k-1\equiv_{n-1} i$, then $\psi_i(y) = \psi_i(x) + 1$.
\item If $k<r-1$ and $k\equiv_{n-1} i$, then $\psi_i(y) = \psi_i(x) - 1$.
\item Otherwise $\psi_i(y) = \psi_i(x)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\chi\in\{\chi_0,\chi_1,\psi_i\}_{i=0}^{n-3}$ be a basis character. Let $A\in\{L,R,D_i\}_{i=0}^{n-3}$ be the corresponding proto-character. Since $\chi$ is a groupoid morphism $\mathcal{P} \to \mathbb{Z}$, we have
$$\chi(y) = \chi(x)+\chi([\Lambda_n(r,k),\operatorname{id}_{r+(n-1)}]) = \chi(x) - A(\Lambda_n(r,k)) \text{.}$$
Hence, to check the cases in the statement, it suffices to check the following, all of which are readily verified:
\begin{enumerate}
\item If $k=0$ then $L(\Lambda_n(r,k)) = 1$.
\item If $k>0$ then $L(\Lambda_n(r,k)) = 0$.
\item If $k<r-1$ then $R(\Lambda_n(r,k)) = 0$.
\item If $k=r-1$ then $R(\Lambda_n(r,k)) = 1$.
\item If $0<k$ and $k-1\equiv_{n-1} i$, then $D_i(\Lambda_n(r,k)) = -1$.
\item If $k<r-1$ and $k\equiv_{n-1} i$, then $D_i(\Lambda_n(r,k)) = 1$.
\item Otherwise $D_i(\Lambda_n(r,k)) = 0$.
\end{enumerate}
\end{proof}
Note that since we only consider $0\le i\le n-3$, if $y=x \cdot \Lambda_n(r,r-1)$ (i.e., if we get from $x$ to $y$ by splitting the last foot), then no $\psi_i$ changes: every vertex of $X_n$ has $r\equiv_{n-1} 1$ feet, so $r-2\equiv_{n-1} n-2$, and only $D_{n-2}$ is affected.
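The case analysis in the proof of Lemma~\ref{lem:vary_chars} can be verified by brute force: the forest $\Lambda_n(r,k)$ has leaf depths $0,\dots,0,1,\dots,1,0,\dots,0$ ($k$ zeros, then $n$ ones, then $r-1-k$ zeros). The following Python sketch (our own encoding; for $n=2$ the two $D$-cases can overlap, so we sum their contributions) checks all seven cases for a range of parameters:

```python
def proto(depths, n):
    """L, R, and D_0,...,D_{n-2} from the leaf depths of an n-ary forest."""
    r = len(depths)
    delta = [depths[j] - depths[j + 1] for j in range(r - 1)]
    D = [sum(delta[j] for j in range(r - 1) if j % (n - 1) == i)
         for i in range(n - 1)]
    return depths[0], depths[-1], D

def Lam(r, k, n):
    """Leaf depths of Lambda_n(r,k): r roots, a single n-caret on root k."""
    return [0] * k + [1] * n + [0] * (r - 1 - k)

# Brute-force check of the case analysis in the lemma's proof:
for n in (2, 3, 4, 5):
    for r in range(1, 12):
        for k in range(r):
            L, R, D = proto(Lam(r, k, n), n)
            assert L == (1 if k == 0 else 0)
            assert R == (1 if k == r - 1 else 0)
            for i in range(n - 1):
                exp = 0
                if 0 < k and (k - 1) % (n - 1) == i:
                    exp -= 1                       # case (5)
                if k < r - 1 and k % (n - 1) == i:
                    exp += 1                       # case (6)
                assert D[i] == exp
```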
\medskip
So far we know that any character $\chi$ on $F_n$ can be extended to all the vertices of $X_n$. Now we extend it to the entire complex.
\begin{lemma}\label{lem:affine_extend}
Any character $\chi$ extends to an affine map $\chi \colon X_n \to \mathbb{R}$.
\end{lemma}
Before proving this, we reduce the problem using the following:
\begin{lemma}\cite[Lemma~2.4]{witzel15}\label{lem:affine_on_cubes}
Let $\varphi \colon \{0,1\}^r \to \mathbb{R}$ be a map that can be affinely extended to the $2$-faces of the cube $[0,1]^r$. Then $\varphi$ can be affinely extended to all of $[0,1]^r$.
\end{lemma}
\begin{proof}
This was proved in \cite{witzel15}, and we repeat the proof here for the sake of being self-contained. There is a unique affine function $\tilde{\varphi} \colon \mathbb{R}^r \to\mathbb{R}$ that agrees with $\varphi$ on the zero vector and the $r$ standard basis vectors. We claim that $\tilde{\varphi}$ agrees with $\varphi$ on all the other vertices of $[0,1]^r$ as well, and hence defines an affine extension of $\varphi$ to all of $[0,1]^r$. Let $v=(v_1,\dots,v_r)$ be a vertex with at least two entries equal to $1$ (and the others all $0$). Pick $i\ne j$ with $v_i=v_j=1$. For any $w$ obtained from $v$ by zeroing out $v_i$, $v_j$, or both, we have by induction on the number of non-zero entries that $\tilde{\varphi}(w)=\varphi(w)$. These three $w$ vertices, plus $v$, define a $2$-face of $[0,1]^r$. By assumption, $\varphi$ can be affinely extended to this $2$-face, and the value on $v$ is uniquely determined by the values on the other three vertices. Hence $\tilde{\varphi}(v)=\varphi(v)$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:affine_extend}]
Let $\square_2=v \Phi$ be a $2$-cube in $X_n$, say $\Phi=\gen{A_0,\dots,A_{r-1}}$, with exactly two $A_i$ being $\Lambda_n$ and all others being $\mathrm{I}$. Thanks to Lemma~\ref{lem:affine_on_cubes}, we just need to show that $\chi$ extends affinely to $\square_2$. Say $A_j=A_k=\Lambda_n$ for $j<k$, and let $v_j=v\gen{\mathrm{I},\dots,A_j,\dots,\mathrm{I}}$, $v_k=v\gen{\mathrm{I},\dots,A_k,\dots,\mathrm{I}}$ and $v_{j,k}=v\gen{\mathrm{I},\dots,A_j,\dots,A_k,\dots,\mathrm{I}}$. Hence the vertices of $\square_2$ are $v$, $v_j$, $v_k$ and $v_{j,k}$. Now we just need to show that $\chi(v_j)-\chi(v) = \chi(v_{j,k})-\chi(v_k)$.
It suffices to do this for $\chi\in\{\chi_0,\chi_1,\psi_i\}_{i=0}^{n-3}$. It is clear that $\chi_0(v_j)-\chi_0(v) = \chi_0(v_{j,k})-\chi_0(v_k)$, namely they equal $-1$ if $j=0$ and equal $0$ otherwise, and similarly we always have $\chi_1(v_j)-\chi_1(v) = \chi_1(v_{j,k})-\chi_1(v_k) = 0$. Next consider $\psi_i$. By Lemma~\ref{lem:vary_chars}, we have that $\psi_i(v_j)-\psi_i(v)=1$ if and only if $0<j$ and $j-1\equiv_{n-1} i$, which also holds if and only if $\psi_i(v_{j,k})-\psi_i(v_k)=1$. Also, $\psi_i(v_j)-\psi_i(v)=-1$ if and only if $j\equiv_{n-1} i$ if and only if $\psi_i(v_{j,k})-\psi_i(v_k)=-1$ (since $j<k$, we know $j$ cannot be the highest index of a foot of either $v$ or $v_k$). The only other option is $\psi_i(v_j)-\psi_i(v) = \psi_i(v_{j,k})-\psi_i(v_k) = 0$.
\end{proof}
These extended characters $\chi$ will be our height functions. Our secondary height will be given by the number of feet function $f$.
\begin{observation}\label{obs:feet_affine}
There is a map $f \colon X_n \to \mathbb{R}$ that is affine on cubes and assigns to any vertex its number of feet. It is a Morse function.
\end{observation}
\begin{proof}
That $f$ extends affinely is straightforward. When we say that $f$ is a Morse function, in the language of Definition~\ref{def:morse} this means that $(f,0)$ is a Morse function. This is true because adjacent vertices $v$ and $w$ satisfy $\abs{f(v)-f(w)}=n-1$.
\end{proof}
Let $X_n^{p\le f\le q}$ be the subcomplex of $X_n$ supported on vertices $v$ with $p\le f(v)\le q$.
\begin{proposition}\label{prop:char_morse}
Let $\chi$ be a character. The pair $(\chi,f)$ is a Morse function on $X_n^{p\le f\le q}$ for any $p\le q<\infty$.
\end{proposition}
\begin{proof}
We check the conditions required by Definition~\ref{def:morse}. We have extended $\chi$ and $f$ to affine functions in Lemma~\ref{lem:affine_extend} and Observation~\ref{obs:feet_affine}. By construction $f$ takes finitely many values on $X_n^{p\le f\le q}$. Write $\chi=a\chi_0+c_0\psi_0+\cdots+c_{n-3}\psi_{n-3}+b\chi_1$. Let
$$\varepsilon\mathrel{\mathop{:}}= \min\{\abs{d}\mid d=\alpha a + \beta b + \gamma_0 c_0 +\cdots+\gamma_{n-3} c_{n-3} \ne 0 \text{ for } \alpha,\beta,\gamma_i\in\{-1,0,1\}\}\text{.}$$
Since we only consider such $d$ that are non-zero, and there are finitely many, we have $0<\varepsilon$. For any pair of adjacent vertices $v$ and $w$, we know from Lemma~\ref{lem:vary_chars} that for any basis character $\phi\in\{\chi_0,\chi_1,\psi_i\}_{i=0}^{n-3}$, we have $\phi(v) - \phi(w) \in \{-1,0,1\}$. Hence for any character $\chi$ we have $\chi(v) - \chi(w) = \alpha a + \beta b + \gamma_0 c_0 +\cdots+\gamma_{n-3} c_{n-3}$ for some $\alpha,\beta,\gamma_i\in\{-1,0,1\}$. In particular, either $\abs{\chi(v) - \chi(w)} \ge \varepsilon$ or else $\chi(v)=\chi(w)$. The condition $f(v)\ne f(w)$ is always satisfied anyway for adjacent vertices, so we conclude that $(\chi,f)$ is a Morse function.
\end{proof}
\section{Links and matchings}\label{sec:links_matchings}
We will use Morse theory to reduce the computation of $\Sigma^m(F_n)$ to questions about ascending links in $X_n$. In this section we discuss a useful model for links in $X_n$.
\begin{definition}
Let $\Delta$ be a simplicial complex, say of dimension $d$. Let $D\subseteq\{0,\dots,d\}$. A \emph{$D$-matching} is a subset $\mu$ of $\Delta$, consisting of $k$-simplices for $k\in D$ such that any two simplices in $\mu$ are disjoint. If $D=\{k\}$ is a singleton we may write ``$k$-matching'' instead of ``$\{k\}$-matching''. For example a $0$-matching is just any collection of $0$-simplices, and a $1$-matching is what is usually called a \emph{matching} on the graph $\Delta^{(1)}$. For our purposes, we will be interested in certain $(n-1)$-dimensional complexes $\Delta=\Delta^n(r)$, defined below, and $D=\{0,n-1\}$, so $D$-matchings are collections of pairwise disjoint $0$-simplices and $(n-1)$-simplices. In general, the $D$-matchings of $\Delta$ form a simplicial complex, denoted $\mathcal{M}_D(\Delta)$, with face relation given by inclusion, called the \emph{$D$-matching complex} of $\Delta$.
\end{definition}
Define $\Delta^n(r)$ as follows. It is a simplicial complex on $r$ vertices, labeled $v_0$ through $v_{r-1}$, such that a collection of vertices spans a simplex precisely when $\abs{i-j}<n$ for all vertices $v_i$ and $v_j$ in the collection. For example, $\Delta^1(r)$ is a discrete set of $r$ vertices, and $\Delta^2(r)$ is the linear graph on $r$ vertices. The complex $\Delta^3(9)$ is shown in Figure~\ref{fig:Delta^3(9)}. To keep notation straight, we reiterate that $r$ is the number of vertices of $\Delta^n(r)$, and $n$ is the maximum number of vertices that may share a simplex.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\filldraw[lightgray] (0,0) -- (8,0) -- (7,1) -- (1,1);
\draw (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,0) -- (5,1) -- (6,0) -- (7,1) -- (8,0) (0,0) -- (8,0) (1,1) -- (7,1);
\filldraw (0,0) circle (1.5pt) (2,0) circle (1.5pt) (4,0) circle (1.5pt) (6,0) circle (1.5pt) (8,0) circle (1.5pt);
\filldraw[white] (1,1) circle (1.5pt) (3,1) circle (1.5pt) (5,1) circle (1.5pt) (7,1) circle (1.5pt);
\draw (1,1) circle (1.5pt) (3,1) circle (1.5pt) (5,1) circle (1.5pt) (7,1) circle (1.5pt);
\node at (0,-.3) {$0$}; \node at (1,1.3) {$1$}; \node at (2,-.3) {$2$}; \node at (3,1.3) {$3$}; \node at (4,-.3) {$4$}; \node at (5,1.3) {$5$}; \node at (6,-.3) {$6$}; \node at (7,1.3) {$7$}; \node at (8,-.3) {$8$};
\end{tikzpicture}
\caption{The complex $\Delta^3(9)$. The vertices are numbered $0$ to $8$, left to right, with the even vertices labeled by a black circle and the odd vertices labeled by a white circle. (The distinction will be important later.)}\label{fig:Delta^3(9)}
\end{figure}
For any $0\le i\le j\le r-1$ with $j-i<n$, let $e_{[i,j]}$ denote the $(j-i)$-simplex $\{v_i,v_{i+1},\dots,v_j\}$, so $\{e_{[i,j]}\}$ is a $0$-simplex in the $(j-i)$-matching complex. When a matching $\{e\}$ consists of a single simplex $e$, we will usually abuse notation and just write $e$ for the matching. For example $e_{[i,j]}$ now represents both a $(j-i)$-simplex in $\Delta^n(r)$ and a $0$-simplex in $\mathcal{M}_{j-i}(\Delta^n(r))$, and $v_k$ represents both a $0$-simplex in $\Delta^n(r)$ and a $0$-simplex in $\mathcal{M}_0(\Delta^n(r))$.
\begin{lemma}[$(n-1)$-matchings]\label{lem:top_matching_conn}
For $n,r\in\mathbb{N}$, $\mathcal{M}_{n-1}(\Delta^n(r))$ is $(\lfloor\frac{r-n}{2n-1}\rfloor-1)$-connected.
\end{lemma}
\begin{proof}
Note that $n$ is fixed. We induct on $r$. The base case is that $\mathcal{M}_{n-1}(\Delta^n(r))$ is non-empty when $n\le r$, which is true. Now assume that $3n-1\le r$. In this case, for any $(n-1)$-matching $\mu$ in $\mathcal{M}_{n-1}(\Delta^n(r))$, either $e_{[i,i+(n-1)]}\in \mu$ for some $0\le i\le n-1$, or else every $0$-simplex of $\mu$ is an $(n-1)$-simplex of $\Delta^n(r)$ that is disjoint from $e_{[0,n-1]}$. In particular, $\mathcal{M}_{n-1}(\Delta^n(r))$ is covered by the contractible subcomplexes $S_i\mathrel{\mathop{:}}= \operatorname{st}(e_{[i,i+(n-1)]})$ for $0\le i\le n-1$. The $S_i$ all contain the matching $e_{[r-n,r-1]}$, since $3n-1\le r$ implies $2n-2<r-n$, so the nerve of the covering is contractible (a simplex). Any intersection $S_{i_1}\cap\cdots \cap S_{i_t}$ for $t>1$ is isomorphic to a matching complex of the form $\mathcal{M}_{n-1}(\Delta^n(r'))$ for $r'\ge r-(2n-1)$. By induction this is $(\lfloor\frac{r-(2n-1)-n}{2n-1}\rfloor-1)$-connected, and hence $(\lfloor\frac{r-n}{2n-1}\rfloor-2)$-connected. The result now follows from the Nerve Lemma \cite[Lemma~1.2]{bjoerner94}.
\end{proof}
For example, $\mathcal{M}_2(\Delta^3(9))$ is connected, which is clear from Figure~\ref{fig:Delta^3(9)}.
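The connectedness of $\mathcal{M}_2(\Delta^3(9))$ can also be checked by machine: the $1$-skeleton of $\mathcal{M}_{n-1}(\Delta^n(r))$ has one vertex per $(n-1)$-simplex $e_{[i,i+(n-1)]}$ and an edge between each pair of disjoint ones, i.e., those with $\abs{i-j}\ge n$. The following Python sketch (our own; it only tests $0$-connectivity, not the higher connectivity of the lemma) runs a breadth-first search on this graph:

```python
from collections import deque

def top_matching_graph_connected(n, r):
    """Is the 1-skeleton of M_{n-1}(Delta^n(r)) connected?
    Vertices: the (n-1)-simplices e_[i, i+n-1]; edges: disjoint pairs."""
    verts = list(range(r - n + 1))          # e_[i, i+n-1] for these i
    if not verts:
        return False
    adj = {i: [j for j in verts if abs(i - j) >= n] for i in verts}
    seen, queue = {verts[0]}, deque([verts[0]])
    while queue:
        i = queue.popleft()
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(verts)

print(top_matching_graph_connected(3, 9))  # True, as the lemma predicts
```

By contrast, `top_matching_graph_connected(3, 6)` returns `False`, consistent with the lemma, whose bound for $r=6$ only guarantees non-emptiness.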
\medskip
There is an analogy between $\{0,n-1\}$-matchings on $\Delta^n(r)$ and points in the link of a vertex $x\in X_n$ with $r$ feet. That is, each $0$-matching is a single vertex of $\Delta^n(r)$, so corresponds to splitting a foot of $x$ into $n$ new feet, and each $(n-1)$-matching is a collection of $n$ sequential vertices of $\Delta^n(r)$, so corresponds to merging $n$ sequential feet of $x$ into one new foot. We make this rigorous in the next lemma.
Let $x$ be a vertex of $X_n$ with $r$ feet. The cofaces of $x$ are the cells $\sigma = x\Psi$, for every $\Psi$ such that $x\Psi$ makes sense. If $\Psi=\gen{A_0,\dots,A_{\ell-1}}$ for $A_i\in\{\mathrm{I},\Lambda_n,\mathrm{V}_n\}$ ($0\le i\le \ell-1$), then the rule is that $r$ must equal the number of $A_i$ that are $\mathrm{I}$ or $\Lambda_n$, plus $n$ times the number that are $\mathrm{V}_n$.
\begin{lemma}[Link model]\label{lem:vertex_link}
If a vertex $x\in X_n$ has $r$ feet then $\operatorname{lk} x \cong \mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$.
\end{lemma}
\begin{proof}
Define a map $g\colon \operatorname{lk} x \to \mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$ as follows. For a coface $x\Psi$ with $\Psi=\gen{A_0,\dots,A_{\ell-1}}$, $g$ sends $x\Psi$ to a $\{0,n-1\}$-matching of $\Delta^n(r)$ where each $\Lambda_n$ is a $0$-simplex in $\Delta^n(r)$ and each $\mathrm{V}_n$ is an $(n-1)$-simplex in $\Delta^n(r)$. More precisely, for each $0\le i\le \ell-1$, let $m_i$ be the number of $0\le j<i$ such that $A_j=\mathrm{V}_n$, and then
\[
g(x\Psi) \mathrel{\mathop{:}}= \{v_{k+(n-1)m_k}\mid A_k=\Lambda_n\} \cup \{e_{[j+(n-1)m_j,\,j+(n-1)m_j+(n-1)]}\mid A_j=\mathrm{V}_n\} \text{.}
\]
For example, $g(x\gen{\mathrm{I},\mathrm{V}_n,\Lambda_n})=\{e_{[1,n]},v_{n+1}\}$. It is straightforward to check that $g$ is a simplicial isomorphism.
\end{proof}
See Figure~\ref{fig:lk_model} for an example of the correspondence $\operatorname{lk} x \cong \mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\draw (-0.5,0) -- (4.5,0) -- (2,2) -- (-0.5,0) (0,0) -- (0,-1) (1,0) -- (1,-1) (2,0) -- (2,-1) (3,0) -- (3,-1) (4,0) -- (4,-1);
\filldraw (0,-1) circle (1.5pt) (2,-1) circle (1.5pt) (4,-1) circle (1.5pt);
\filldraw[white] (1,-1) circle (1.5pt) (3,-1) circle (1.5pt);
\draw (1,-1) circle (1.5pt) (3,-1) circle (1.5pt);
\draw[red,dashed] (1,-1) -- (2,-2) -- (3,-1) (2,-2) -- (2,-1);
\draw[blue,dashed] (3.5,-2) -- (4,-1) -- (4.5,-2) (4,-1) -- (4,-2);
\node at (2,0.75) {$x$};
\node at (5.5,0) {$\mapsto$};
\begin{scope}[xshift=6.5cm]
\filldraw[lightgray] (0,0) -- (4,0) -- (3,1) -- (1,1);
\filldraw[red] (1,1) -- (2,0) -- (3,1);
\draw (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,0) (0,0) -- (4,0) (1,1) -- (3,1);
\filldraw (0,0) circle (1.5pt) (2,0) circle (1.5pt);
\filldraw[blue] (4,0) circle (2.5pt);
\filldraw[white] (1,1) circle (1.5pt) (3,1) circle (1.5pt);
\draw (1,1) circle (1.5pt) (3,1) circle (1.5pt);
\end{scope}
\end{tikzpicture}
\caption{The example $x\gen{\mathrm{I},\mathrm{V}_3,\Lambda_3} \mapsto \{e_{[1,3]},v_4\}$ from the proof of Lemma~\ref{lem:vertex_link}. The $\mathrm{V}_3$ and $e_{[1,3]}$ are in red and the $\Lambda_3$ and $v_4$ are in blue.}\label{fig:lk_model}
\end{figure}
For a vertex $x \in X_n$ recall that $f(x)$ denotes its number of feet. The function $f$ extends to an affine Morse function on $X_n$ (Observation~\ref{obs:feet_affine}). Viewing $\operatorname{lk} x$ as $\mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$ for $x$ with $f(x)=r$, the ascending link of $x$ with respect to $f$ is $\mathcal{M}_0(\Delta^n(r))$ and the descending link is $\mathcal{M}_{n-1}(\Delta^n(r))$.
\begin{corollary}[$f$-ascending/descending links]\label{cor:f_lks}
For $x$ a vertex with $f(x)=r$, $\operatorname{lk}^{f\uparrow}_{X_n} x$ is contractible and $\operatorname{lk}^{f\downarrow}_{X_n} x$ is $(\lfloor\frac{r-n}{2n-1}\rfloor-1)$-connected.
\end{corollary}
\begin{proof}
We have $\operatorname{lk}^{f\uparrow}_{X_n} x \cong \mathcal{M}_0(\Delta^n(r))$, which is an $(r-1)$-simplex, hence contractible. We have $\operatorname{lk}^{f\downarrow}_{X_n} x \cong \mathcal{M}_{n-1}(\Delta^n(r))$, which is $(\lfloor\frac{r-n}{2n-1}\rfloor-1)$-connected by Lemma~\ref{lem:top_matching_conn}.
\end{proof}
In Section~\ref{sec:computations} we will need a subcomplex of the form $X_n^{p \le f \le q}$ that is $(m-1)$-connected. It will be convenient to have one of the form $X_n^{p \le f \le pn^2}$.
\begin{lemma}\label{lem:sublevel_conn}
For any $p\ge m$ the complex $X_n^{p \le f \le pn^2}$ is $(m-1)$-connected.
\end{lemma}
\begin{proof}
We first claim that $X_n^{f \le pn^2}$ is $(m-1)$-connected. By the Morse Lemma (specifically Corollary~\ref{cor:morse}) it suffices to show that for any vertex $x$ with $f(x)>pn^2$, the $f$-descending link $\operatorname{lk}^{f\downarrow}_{X_n} x$ is $(m-1)$-connected. Setting $r=f(x)$, we know from Corollary~\ref{cor:f_lks} that the $f$-descending link is $(\lfloor\frac{r-n}{2n-1}\rfloor-1)$-connected. Since $r\ge pn^2+1 \ge mn^2+1$, this is $(\lfloor\frac{mn^2-n+1}{2n-1}\rfloor-1)$-connected. To see that $mn^2-n+1 \ge m(2n-1)$ (which now suffices), we note that the roots of the polynomial $mx^2-(2m+1)x+(m+1)$ are $1$ and $1+\frac{1}{m}$.
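To spell out this arithmetic (a direct expansion, included for convenience): the claimed inequality rearranges to
\[
mn^2-(2m+1)n+(m+1) = m(n-1)\left(n-1-\tfrac{1}{m}\right) \ge 0 \text{,}
\]
which holds since $n\ge 2$ and $m\ge 1$ give $n-1\ge 1\ge \frac{1}{m}$.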
Now we pass from $X_n^{1 \le f \le pn^2}$ to $X_n^{p \le f \le pn^2}$. In fact these are homotopy equivalent, since ascending links of vertices with respect to $f$ are contractible (Corollary~\ref{cor:f_lks}), and for a vertex with fewer than $p$ feet, the entire ascending link is contained in $X_n^{1 \le f \le pn^2}$.
\end{proof}
\begin{observation}\label{obs:cocompact}
For $p,q\in\mathbb{N}$, the action of $F_n$ on $X_n^{p \le f \le q}$ is cocompact.
\end{observation}
\begin{proof}
For each $r$, $F_n$ acts transitively on vertices with $r$ feet. The result is thus immediate since $X_n$ is locally compact.
\end{proof}
\medskip
In particular, we now have highly connected spaces on which our groups act freely and cocompactly, which is part of the setup for Definition~\ref{def:bnsr}. To compute the $\Sigma$-invariants using Morse theory, we will use our knowledge of how characters vary between adjacent vertices (Lemma~\ref{lem:vary_chars}). Since we are modeling vertex links by $\{0,n-1\}$-matching complexes on $\Delta^n(r)$, we need to translate Lemma~\ref{lem:vary_chars} into the language of $\{0,n-1\}$-matchings.
\begin{definition}
Let $\chi$ be a character of $F_n$, extended to $X_n$ as in Lemma~\ref{lem:affine_extend}. Let $x\in X_n$ be a vertex with $r=f(x)$ feet, so $\operatorname{lk} x \cong \mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$. Under this isomorphism, call a vertex of $\mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$ \emph{$\chi$-ascending} if the corresponding vertex $y$ in $\operatorname{lk} x$ has $\chi(y)>\chi(x)$. Analogously define \emph{$\chi$-descending} and \emph{$\chi$-preserving}. Say a simplex $\mu$ in $\mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$ is $\chi$-ascending/descending/preserving if all its vertices are.
\end{definition}
\begin{observation}[Ascending matching complexes]\label{obs:asc_lks_matchings}
Let $(\chi,f) \colon X_n^{p\le f\le q} \to \mathbb{R}\times \mathbb{R}$ be a Morse function as in Proposition~\ref{prop:char_morse}. Let $x$ be a vertex in $X_n^{p\le f\le q}$ with $r=f(x)$ feet. Then the $(\chi,f)$-ascending link of $x$ in $X_n$ is isomorphic to the full subcomplex of $\mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$ supported on those $0$-simplices $v_k$ and $e_{[k,k+(n-1)]}$ such that $v_k$ is either $\chi$-ascending or $\chi$-preserving, and $e_{[k,k+(n-1)]}$ is $\chi$-ascending. The $(\chi,f)$-ascending link of $x$ in $X_n^{p\le f\le q}$ is then obtained by removing any $\{0,n-1\}$-matchings $\mu$ such that $r+(n-1)\mu_0>q$ or $r-(n-1)\mu_{n-1}<p$, where $\mu_i$ is the number of vertices of $\mu$ that are $i$-matchings.
\end{observation}
\begin{proof}
To increase $(\chi,f)$, we must either increase $\chi$ or else preserve $\chi$ and increase $f$. The $v_k$ correspond to vertices in $\operatorname{lk} x$ with $r+(n-1)$ feet, and the $e_{[k,k+(n-1)]}$ to vertices in $\operatorname{lk} x$ with $r-(n-1)$ feet. Hence the first claim follows. For the second claim, just note that $\mu$ corresponds to a simplex in $\operatorname{lk} x$, and hence to a cube in $X_n$ containing $x$, and $r+(n-1)\mu_0$ is the maximum number of feet of a vertex in that cube; similarly $r-(n-1)\mu_{n-1}$ is the minimum number of feet of a vertex in that cube.
\end{proof}
\begin{corollary}\label{cor:char_match}
If $k=0$ then $v_k$ is $\chi_0$-descending and $e_{[k,k+(n-1)]}$ is $\chi_0$-ascending. Otherwise they are both $\chi_0$-preserving. If $k=r-1$ then $v_k$ is $\chi_1$-descending and $e_{[k-(n-1),k]}$ is $\chi_1$-ascending. Otherwise they are both $\chi_1$-preserving. If $0<k$ and $k-1\equiv_{n-1} i$, then $v_k$ is $\psi_i$-ascending and $e_{[k,k+(n-1)]}$ is $\psi_i$-descending. If $k<r-1$ and $k\equiv_{n-1} i$, then $v_k$ is $\psi_i$-descending and $e_{[k-(n-1),k]}$ is $\psi_i$-ascending. Anything not covered by these cases is $\psi_i$-preserving. In all of these cases, ``ascending'' entails an increase by $1$ and ``descending'' entails a decrease by $1$.
\end{corollary}
\begin{proof}
Translating to $\operatorname{lk} x$, $v_k$ corresponds to $x \cdot [\Lambda_n(r,k),\operatorname{id}_{r+(n-1)}]$, $e_{[k,k+(n-1)]}$ corresponds to $x \cdot [\operatorname{id}_r,\Lambda_n(r-(n-1),k)]$ and $e_{[k-(n-1),k]}$ corresponds to $x \cdot [\operatorname{id}_r,\Lambda_n(r-(n-1),k-(n-1))]$. Hence Lemma~\ref{lem:vary_chars} implies all of these facts.
\end{proof}
Note that in particular if $r-1>0$ then $v_{r-1}$ is $\psi_i$-preserving for all $0\le i\le n-3$, since $r-2\equiv_{n-1} n-2$.
Some examples of $\psi_i$-ascending, descending or preserving $0$-simplices in $\mathcal{M}_{\{0,2\}}(\Delta_3(5))$, as governed by Corollary~\ref{cor:char_match}, are shown in Figure~\ref{fig:char_match}.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\filldraw[lightgray] (0,0) -- (4,0) -- (3,1) -- (1,1);
\draw (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,0) (0,0) -- (2,0) -- (4,0) (1,1) -- (3,1);
\filldraw[red] (0,0) circle (2.5pt); \filldraw (2,0) circle (1.5pt) (4,0) circle (1.5pt);
\filldraw[white] (1,1) circle (1.5pt) (3,1) circle (1.5pt);
\draw (1,1) circle (1.5pt) (3,1) circle (1.5pt);
\node at (0,-.4) {$v_0$};
\begin{scope}[xshift=5cm]
\filldraw[lightgray] (0,0) -- (4,0) -- (3,1) -- (1,1);
\draw (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,0) (0,0) -- (2,0) -- (4,0) (1,1) -- (3,1);
\filldraw (0,0) circle (1.5pt) (2,0) circle (1.5pt) (4,0) circle (1.5pt);
\filldraw[white] (1,1) circle (2.5pt) (3,1) circle (1.5pt);
\draw[blue,line width=1.5pt] (1,1) circle (2.5pt); \draw (3,1) circle (1.5pt);
\node at (1,1.3) {$v_1$};
\end{scope}
\begin{scope}[yshift=-2.5cm]
\filldraw[lightgray] (0,0) -- (4,0) -- (3,1) -- (1,1);
\filldraw[green] (1,1) -- (2,0) -- (3,1);
\draw (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,0) (0,0) -- (2,0) -- (4,0) (1,1) -- (3,1);
\filldraw (0,0) circle (1.5pt) (2,0) circle (1.5pt) (4,0) circle (1.5pt);
\filldraw[white] (1,1) circle (1.5pt) (3,1) circle (1.5pt);
\draw (1,1) circle (1.5pt) (3,1) circle (1.5pt);
\node at (2,1.3) {$e_{[1,3]}$};
\end{scope}
\begin{scope}[yshift=-2.5cm,xshift=5cm]
\filldraw[lightgray] (0,0) -- (4,0) -- (3,1) -- (1,1);
\filldraw[orange] (2,0) -- (3,1) -- (4,0);
\draw (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,0) (0,0) -- (2,0) -- (4,0) (1,1) -- (3,1);
\filldraw (0,0) circle (1.5pt) (2,0) circle (1.5pt) (4,0) circle (1.5pt);
\filldraw[white] (1,1) circle (1.5pt) (3,1) circle (1.5pt);
\draw (1,1) circle (1.5pt) (3,1) circle (1.5pt);
\node at (3,-.4) {$e_{[2,4]}$};
\end{scope}
\end{tikzpicture}
\caption{Some $0$-simplices in $\mathcal{M}_{\{0,2\}}(\Delta^3(5))$. First is $v_0$ (red), which is $\psi_0$-descending and $\psi_1$-preserving; then $v_1$ (blue) is $\psi_0$-ascending and $\psi_1$-descending; next $e_{[1,3]}$ (green) is $\psi_0$-descending and $\psi_1$-ascending; finally $e_{[2,4]}$ (orange) is $\psi_0$-preserving and $\psi_1$-descending.}\label{fig:char_match}
\end{figure}
\section{Proof of Theorem~A}\label{sec:computations}
In this section we prove Theorem~A, that $\Sigma^m(F_n)=\Sigma^2(F_n)$ for all $n,m\ge 2$. The forward inclusion $\Sigma^m(F_n)\subseteq\Sigma^2(F_n)$ always holds, so the work is in proving the reverse inclusion. Throughout this section, $\chi$ is a character of $F_n$ with $[\chi]\in\Sigma^2(F_n)$.
For the first three lemmas, we will make use of a certain ascending HNN-extension of $F_n$. (We should mention that there is nothing novel here, and the reduction done over the course of these three lemmas was already contained in the work of Kochloukova \cite{kochloukova12}.) Let $F_n(1)$ be the subgroup of $F_n$ generated by the $x_i$ for $i>0$ (see Subsection~\ref{sec:presentation}). It is well known that $F_n=F_n(1)*_{x_0}$ and $F_n(1)\cong F_n$.
\begin{lemma}\label{lem:poles}
If $\chi=-\chi_i$ for some $i\in\{0,1\}$ then $[\chi]\in\Sigma^\infty(F_n)$.
\end{lemma}
\begin{proof}
By symmetry it suffices to do the $i=0$ case. We know $F_n=F_n(1)*_{x_0}$ and that $F_n(1)\cong F_n$ is of type $\operatorname{F}_\infty$. Also, $-\chi_0(F_n(1))=0$ and $-\chi_0(x_0)=1$, so the result follows from \cite[Theorem~2.1]{bieri10}.
\end{proof}
Now suppose $\chi=a\chi_0+b\chi_1$. Since $[\chi]\in\Sigma^2(F_n)$, we know from \cite[Proposition~9 and Theorem~10]{kochloukova12} that $a<0$ or $b<0$. This could also be deduced using the action of $F_n$ on $X_n$, following the proof of the $n=2$ case in \cite{witzel15}, but since this would take many pages of technical details, we content ourselves with citing Kochloukova for the fact that $a<0$ or $b<0$.
\begin{lemma}\label{lem:equator}
If $\chi=a\chi_0+b\chi_1$ with $a<0$ or $b<0$ then $[\chi]\in\Sigma^\infty(F_n)$.
\end{lemma}
\begin{proof}
By symmetry we can assume $b<0$. Since $\chi_0(F_n(1))=0$, the restriction $\chi|_{F_n(1)}$ is equivalent to $-\chi_1|_{F_n(1)}$. Now, $F_n(1)\cong F_n$ is of type $\operatorname{F}_\infty$ and $F_n=F_n(1)*_{x_0}$, so by \cite[Theorem~2.3]{bieri10} and Lemma~\ref{lem:poles}, $[\chi]\in\Sigma^\infty(F_n)$.
\end{proof}
Now we can assume $n>2$ and $\chi$ has non-zero $\psi_i$ component for some $i$.
\begin{lemma}\label{lem:push_to_hemispheres}
Assume that we already know every non-trivial character of the form $\chi'=\sum_{i=0}^{n-3} c_i \psi_i$ has $[\chi']\in\Sigma^\infty(F_n)$. Then for any $\chi=a\chi_0 + \sum_{i=0}^{n-3} c_i \psi_i + b\chi_1$ with $c_i\ne 0$ for at least one $i$, we have $[\chi]\in\Sigma^\infty(F_n)$.
\end{lemma}
\begin{proof}
Note that such a $\chi$ restricted to $F_n(1)$ is still non-trivial. As in the previous proof, we can restrict to $F_n(1)$ and thereby assume without loss of generality that $a=0$. If $b\ne 0$ then, appealing to symmetry, we can instead assume $a\ne 0$ but $b=0$; applying the first reduction again then brings us to the case $a=b=0$. But this is exactly the case already handled by the assumption.
\end{proof}
This brings us to the final case, where we assume that $\chi$ is a linear combination of the $\psi_i$ for $0\le i\le n-3$, and show that $[\chi]\in\Sigma^\infty(F_n)$. This is where Kochloukova's approach in \cite{kochloukova12} became difficult to extend beyond the $\Sigma^2$ case, and where the setup from the previous sections proves capable of handling all the $\Sigma^m$.
Let $m\in\mathbb{N}$ and set $p\mathrel{\mathop{:}}= 4m+5$. Let $Y\mathrel{\mathop{:}}= X_n^{p\le f\le pn^2}$. Then $Y$ is $(m-1)$-connected (Lemma~\ref{lem:sublevel_conn}), and $F_n$ acts freely and cocompactly on $Y$ (Observation~\ref{obs:cocompact}), so we have the requisite setup of Definition~\ref{def:bnsr}. (Of course $Y$ would have already been $(m-1)$-connected just using $p=m$, but having $p=4m+5$ will be important in the proof of Proposition~\ref{prop:asc_lk_conn}.) According to Definition~\ref{def:bnsr}, we need to show that $(Y^{t\le\chi})_{t\in\mathbb{R}}$ is essentially $(m-1)$-connected and then we will have $[\chi]\in\Sigma^m(F_n)$. In fact we will show that every $Y^{t\le\chi}$ is $(m-1)$-connected.
The proof that $Y^{t\le\chi}$ is $(m-1)$-connected will be quite technical, so we first sketch the proof here, to serve as an outline for what follows.
\begin{proof}[Sketch of proof]
Thanks to Morse theory, it suffices to show that all $(\chi,f)$-ascending links of vertices $x$ are $(m-1)$-connected. Since we are working in $Y$, we know the number of feet of $x$ lies between $p$ and $pn^2$. We consider the cases $p\le f(x)\le pn$ and $pn \le f(x) \le pn^2$ separately. In the first case, even if we split every foot of $x$, we remain in $Y$, so all splittings are ``legal''. In particular if there is some ascending splitting move that is joinable in $X_n$ to every other ascending move, then these joins can even take place in $Y$, and the ascending link is a (contractible) cone. It turns out that the move where we split the rightmost foot serves as such a cone point. Now consider the second case, $pn \le f(x) \le pn^2$. Here there may be splitting moves that push us out of $Y$, but every merging move keeps us inside $Y$. It is too much to hope to find an ascending merging move joinable to every other ascending move. However, we do find a ``large'' simplex $\sigma_q$ of ascending merging moves such that every ascending vertex is joinable to ``almost all'' of $\sigma_q$. We prove in Lemma~\ref{lem:popular_simplex} that this is sufficient to get high connectivity.
\end{proof}
Now we begin the technicalities. First we need a lemma that is a useful tool for proving higher connectivity of certain complexes. Heuristically, if there is a simplex $\sigma$ such that every vertex is joinable to ``most'' of $\sigma$, then we can conclude higher connectivity properties. The case when the complex is finite and flag was proved by Belk and Forrest, and written down by Belk and Matucci in \cite{belk15}. Here we show that the requirement of being finite can easily be relaxed. We also replace the requirement of being flag with something weaker, and rephrase the condition from \cite{belk15} so that in the flag case it is the same. This is a necessary modification, since the complexes we will apply this lemma to in the proof of Proposition~\ref{prop:asc_lk_conn} are not flag.
\begin{definition}
Let $\Delta$ be a simplicial complex. Two simplices $\rho_1$ and $\rho_2$ are \emph{joinable} to each other if they lie in a common simplex. For a fixed simplex $\sigma$ in $\Delta$, we will call $\Delta$ \emph{flag with respect to $\sigma$} if whenever $\rho$ is a simplex and $\sigma'$ is a face of $\sigma$ such that every vertex of $\rho$ is joinable to every vertex of $\sigma'$, already $\rho$ is joinable to $\sigma'$. For example if $\Delta$ is flag with respect to every simplex, then it is flag.
\end{definition}
\begin{lemma}\label{lem:popular_simplex}
Let $\Delta$ be a simplicial complex, and let $k\in\mathbb{N}$. Suppose there exists an $\ell$-simplex $\sigma$ such that $\Delta$ is flag with respect to $\sigma$, and for every vertex $v$ in $\Delta$, $v$ is joinable to some $(\ell-k)$-face of $\sigma$. Then $\Delta$ is $(\lfloor\frac{\ell}{k}\rfloor-1)$-connected.
\end{lemma}
\begin{proof}
First note that our hypotheses on $\Delta$ are preserved under passing to any full subcomplex $\Delta'$ containing $\sigma$: joinability of two simplices is preserved under passing to any full subcomplex containing them, so $\Delta'$ is still flag with respect to $\sigma$ and every vertex of $\Delta'$ is still joinable to some $(\ell-k)$-face of $\sigma$. This lets us assume without loss of generality that $\Delta$ is finite. Indeed, the image of any sphere lies in some finite full subcomplex $\Delta'$ of $\Delta$ containing $\sigma$, which still satisfies our hypotheses by the above, and if the sphere is nullhomotopic in $\Delta'$ then it certainly is nullhomotopic in $\Delta$.
We induct on the number $V$ of vertices of $\Delta$. If $V=\ell+1$ then $\Delta=\sigma$ is contractible. Now suppose $V>\ell+1$, so we can choose a vertex $v\in \Delta \setminus \sigma$. The subcomplex obtained from $\Delta$ by deleting $v$ and all simplices containing it has fewer vertices than $\Delta$, is full, and contains $\sigma$, so by the first paragraph and by induction it is $(\lfloor\frac{\ell}{k}\rfloor-1)$-connected. Since $\Delta$ is obtained from this subcomplex by coning off $L\mathrel{\mathop{:}}= \operatorname{lk} v$ to the vertex $v$, it now suffices to show that $L$ is $(\lfloor\frac{\ell}{k}\rfloor-2)$-connected.
Let $\tau \mathrel{\mathop{:}}= \sigma\cap \operatorname{st} v$, so $\tau$ also equals $\sigma\cap L$. Since $\Delta$ is flag with respect to $\sigma$, $\tau$ is a face of $\sigma$. Say $\tau$ is an $(\ell-k')$-simplex, which since $v$ is joinable to an $(\ell-k)$-face of $\sigma$ tells us that $k'\le k$. Now let $w$ be a vertex in $L$. By similar reasoning we know that $\tau_w \mathrel{\mathop{:}}= \sigma \cap \operatorname{st} w$ is an $(\ell-k'_w)$-simplex for $k'_w\le k$. Intersecting the two faces $\tau$ and $\tau_w$ of $\sigma$ thus yields a face $\omega_w$ that is an $(\ell-k'-k'')$-simplex for $k''\le k'_w\le k$. Since $v$ and $w$ are joinable to $\omega_w$, and $\Delta$ is flag with respect to $\sigma$, the edge connecting $v$ and $w$ is also joinable to $\omega_w$. In particular $\omega_w$ is joinable to $w$ in $L$. We have shown that there is an $(\ell-k')$-simplex, $\tau$, in $L$ such that every vertex $w$ of $L$ is joinable in $L$ to an $(\ell-k'-k)$-face of $\tau$, namely any $(\ell-k'-k)$-face of $\omega_w$. We also claim that $L$ is flag with respect to $\tau$. Indeed, if $\rho$ is a simplex in $L$ and $\tau'$ is a face of $\tau$ such that every vertex of $\rho$ is joinable to every vertex of $\tau'$, then $\rho*v$ is a simplex in $\Delta$ all of whose vertices are joinable in $\Delta$ to all the vertices of $\tau'$, so $\rho*v$ is joinable in $\Delta$ to $\tau'$ and indeed $\rho$ is joinable in $L=\operatorname{lk} v$ to $\tau'$. Now we can apply the induction hypothesis to $L$, and conclude that $L$ is $(\lfloor\frac{\ell-k'}{k}\rfloor-1)$-connected, and hence $(\lfloor\frac{\ell}{k}\rfloor-2)$-connected.
\end{proof}
As a trivial example (which works for any $\Delta$), if there exists a simplex $\sigma$ such that every vertex is joinable to some vertex of $\sigma$, so we can use $k=\ell$, then $\Delta$ is $0$-connected. To tie Lemma~\ref{lem:popular_simplex} to the version in \cite{belk15}, note that if $\Delta$ is flag, then $v$ being joinable to an $(\ell-k)$-face of $\sigma$ is equivalent to $v$ being joinable to all but at most $k$ vertices of $\sigma$.
\medskip
We return to the complex $Y=X_n^{p\le f\le pn^2}$ (recall $p=4m+5$) and the problem of showing that every $Y^{t\le\chi}$ is $(m-1)$-connected. Consider
$$h\mathrel{\mathop{:}}=(\chi,f) \colon Y \to \mathbb{R} \times \mathbb{R} \text{,}$$
ordered lexicographically. This is a Morse function by Proposition~\ref{prop:char_morse}, so by the Morse Lemma~\ref{lem:morse} (specifically Corollary~\ref{cor:morse}), it suffices to show the following:
\begin{proposition}\label{prop:asc_lk_conn}
Let $x$ be a vertex in $Y$. Then the $h$-ascending link $\operatorname{lk}^{h\uparrow}_Y x$ is $(m-1)$-connected.
\end{proposition}
\begin{proof}
Let $r\mathrel{\mathop{:}}= f(x)$. We view $\operatorname{lk} x$ as $\mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$, so the $h$-ascending link is as described in Observation~\ref{obs:asc_lks_matchings}. We will split the problem into two cases: when $r$ is ``small'' and when $r$ is ``big''. First suppose $p\le r\le pn$. In this case, for any $\{0,n-1\}$-matching $\mu$, with $\mu_0$ the number of $0$-simplices in $\mu$ that are $0$-matchings (so $\mu_0 \le pn$), we have $r+(n-1)\mu_0 \le pn + (n-1)pn = pn^2$. In particular the addition of $0$-matchings to a $\{0,n-1\}$-matching will never push us out of $Y$.
We know from Corollary~\ref{cor:char_match} that $v_{r-1}$ is $\psi_i$-preserving for all $0\le i\le n-3$, and hence $\chi$-preserving. If $e_{[r-n,r-1]}$ represents a vertex of $\operatorname{lk}_Y x$, i.e., if $p\le r-(n-1)$, then $e_{[r-n,r-1]}$ is $\chi$-preserving since $r-1>0$. Hence $v_{r-1}$ is $h$-ascending and $e_{[r-n,r-1]}$ is not, by Observation~\ref{obs:asc_lks_matchings}. But $e_{[r-n,r-1]}$ is the only $0$-simplex of $\mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$ not joinable to $v_{r-1}$, so $\operatorname{lk}^{h\uparrow}_Y x$ is contractible, via the conical contraction $\mu \le \mu\cup\{v_{r-1}\} \ge \{v_{r-1}\}$.
Now suppose $pn \le r\le pn^2$. In this case, for any $\{0,n-1\}$-matching $\mu$, with $\mu_{n-1}$ the number of $0$-simplices in $\mu$ that are $(n-1)$-matchings (so $n\mu_{n-1} \le r$), we claim that $r-(n-1)\mu_{n-1} \ge p$. Indeed, if $\mu_{n-1} \ge p$, then $r-(n-1)\mu_{n-1} \ge \mu_{n-1} \ge p$, and if $\mu_{n-1}<p$ then $r-(n-1)\mu_{n-1}> pn - (n-1)p =p$. In analogy to the previous case, this means that the addition of $(n-1)$-matchings to a $\{0,n-1\}$-matching will never push us out of $Y$.
For $0\le q\le n-2$, let $\sigma_q$ be the $((s/2)-1)$-simplex
$$\sigma_q \mathrel{\mathop{:}}= \{e_{[q+(n-1),q+2(n-1)]},e_{[q+3(n-1),q+4(n-1)]}, \dots, e_{[q+(s-1)(n-1),q+s(n-1)]}\} \text{,}$$
where $s\in 2\mathbb{N}$ is as large as possible such that $q+s(n-1)<r-1$; see Figure~\ref{fig:sigma_0} for an example. Since $r\ge pn \ge 9n$, such an $s$ certainly exists. By maximality of $s$, we must have $q+(s+2)(n-1) \ge r-1$, and since $r \ge pn$ and $q\le n-2$, we then have $s\ge \lfloor\frac{(p-3)n+3}{n-1}\rfloor$. By definition $p=4m+5$, and it is straightforward to check that this yields $s\ge 4m+2$.
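To unpack the last check: with $p=4m+5$ we have
\[
(p-3)n+3 = (4m+2)n+3 = (4m+2)(n-1)+(4m+5) \text{,}
\]
so that $\left\lfloor\tfrac{(p-3)n+3}{n-1}\right\rfloor = (4m+2)+\left\lfloor\tfrac{4m+5}{n-1}\right\rfloor \ge 4m+2$.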
We now want to cleverly choose $q$ so that $\sigma_q$ is $\chi$-ascending, and hence $h$-ascending. Recall that $\chi=c_0\psi_0+\cdots+c_{n-3}\psi_{n-3}$, and now also set $c_{n-2}\mathrel{\mathop{:}}= 0$. Let $0\le q\le n-2$ be any value such that, with subscripts considered mod $(n-1)$, we have $c_{q-1}<c_q$. Since the $c_i$ cannot all be zero, such a $q$ exists. For this choice of $q$, and any $1\le t\le s-1$, we claim that $e_{[q+t(n-1),q+(t+1)(n-1)]}$ is $\chi$-ascending, which will then imply that $\sigma_q \in \operatorname{lk}^{h\uparrow}_Y x$. By Corollary~\ref{cor:char_match}, since $0<q+(n-1)$ and $q+s(n-1)<r-1$, we know that $e_{[q+t(n-1),q+(t+1)(n-1)]}$ is $\psi_{q-1}$-descending (subscript taken mod $(n-1)$), $\psi_q$-ascending, and $\psi_i$-preserving for all other $0\le i\le n-2$. Then since $c_{q-1}<c_q$, Corollary~\ref{cor:char_match} tells us that $e_{[q+t(n-1),q+(t+1)(n-1)]}$ is indeed $\chi$-ascending, and so $h$-ascending, namely it increases $\chi$ by $c_q-c_{q-1}>0$.
With this $h$-ascending $((s/2)-1)$-simplex $\sigma_q$ in hand, we want to apply Lemma~\ref{lem:popular_simplex} to $\operatorname{lk}^{h\uparrow}_Y x$. Note that $\operatorname{lk}^{h\uparrow}_{X_n} x$ is flag, but $\operatorname{lk}^{h\uparrow}_Y x$ might not be, since filling in missing simplices might require pushing $f$ above $pn^2$. However, we claim $\operatorname{lk}^{h\uparrow}_Y x$ is flag with respect to $\sigma_q$. Indeed, if $\rho$ is a simplex and $\sigma'_q$ is a face of $\sigma_q$ such that every vertex of $\rho$ is joinable to every vertex of $\sigma'_q$, then since $X_n$ is flag we can consider the simplex $\rho*\sigma'_q$ in $X_n$, and since $\sigma_q$ consists only of $(n-1)$-matchings, $f$ achieves its maximum on $\rho*\sigma'_q$ already on $\rho$. Hence if $\rho$ came from $Y$, then $\rho*\sigma'_q$ is also in $Y$, and so $\operatorname{lk}^{h\uparrow}_Y x$ is flag with respect to $\sigma_q$.

Now we want to show that every vertex of $\operatorname{lk}^{h\uparrow}_Y x$ is joinable to ``most'' of $\sigma_q$. Let $\mu$ be any $0$-simplex in $\mathcal{M}_{\{0,n-1\}}(\Delta^n(r))$. We claim that $\mu$ is joinable to all but at most two vertices of $\sigma_q$. Indeed, if $\mu=\{v_j\}$ then $\mu$ fails to be joinable to $\{e_{[k,k+(n-1)]}\}$ if and only if $k\le j\le k+(n-1)$, and there is at most one such $\{e_{[k,k+(n-1)]}\}$ in $\sigma_q$ with this property. Similarly if $\mu=\{e_{[j,j+(n-1)]}\}$ then $\mu$ fails to be joinable to $\{e_{[k,k+(n-1)]}\}$ if and only if $k\le j\le k+(n-1)$ or $k\le j+(n-1)\le k+(n-1)$, and there are at most two such $\{e_{[k,k+(n-1)]}\}$ in $\sigma_q$ with this property. We now know that for any $0$-simplex $\mu$ in $\operatorname{lk}^{h\uparrow}_Y x$, $\mu$ is joinable in $\operatorname{lk}^{h\uparrow}_Y x$ to an $((s/2)-3)$-face of $\sigma_q$. By Lemma~\ref{lem:popular_simplex}, we conclude that $\operatorname{lk}^{h\uparrow}_Y x$ is $(\lfloor\frac{(s/2)-1}{2}\rfloor-1)$-connected.
Recall that $s\ge 4m+2$, and so $\operatorname{lk}^{h\uparrow}_Y x$ is $(m-1)$-connected.
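Explicitly, $s\ge 4m+2$ gives $\tfrac{s}{2}-1\ge 2m$, so
\[
\left\lfloor\frac{(s/2)-1}{2}\right\rfloor-1 \ge \left\lfloor\frac{2m}{2}\right\rfloor-1 = m-1 \text{.}
\]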
\end{proof}
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\filldraw[lightgray] (0,0) -- (14,0) -- (13,1) -- (1,1);
\filldraw[red] (2,0) -- (3,1) -- (4,0) (6,0) -- (7,1) -- (8,0) (10,0) -- (11,1) -- (12,0);
\draw (0,0) -- (14,0) -- (13,1) -- (1,1) -- (0,0) (1,1) -- (2,0) -- (3,1) -- (4,0) -- (5,1) -- (6,0) -- (7,1) -- (8,0) -- (9,1) -- (10,0) -- (11,1) -- (12,0) -- (13,1);
\filldraw (0,0) circle (1.5pt) (2,0) circle (1.5pt) (4,0) circle (1.5pt) (6,0) circle (1.5pt) (8,0) circle (1.5pt) (10,0) circle (1.5pt) (12,0) circle (1.5pt) (14,0) circle (1.5pt);
\filldraw[white] (1,1) circle (1.5pt) (3,1) circle (1.5pt) (5,1) circle (1.5pt) (7,1) circle (1.5pt) (9,1) circle (1.5pt) (11,1) circle (1.5pt) (13,1) circle (1.5pt);
\draw (1,1) circle (1.5pt) (3,1) circle (1.5pt) (5,1) circle (1.5pt) (7,1) circle (1.5pt) (9,1) circle (1.5pt) (11,1) circle (1.5pt) (13,1) circle (1.5pt);
\node at (3,-.3) {$e_{[2,4]}$}; \node at (7,-.3) {$e_{[6,8]}$}; \node at (11,-.3) {$e_{[10,12]}$};
\end{tikzpicture}
\caption{The $2$-simplex $\sigma_0$ in $\mathcal{M}_{\{0,2\}}(\Delta^3(15))$. Here $s=6$.}\label{fig:sigma_0}
\end{figure}
We summarize this section by writing down the proof of Theorem~A.
\begin{proof}[Proof of Theorem~A]
Let $\chi=a\chi_0+c_0\psi_0+\cdots+c_{n-3}\psi_{n-3}+b\chi_1$ with $[\chi]\in\Sigma^2(F_n)$. If all the $c_i$ are zero then, since $[\chi]\in\Sigma^2(F_n)$, we know either $a<0$ or $b<0$, and so $[\chi]\in\Sigma^\infty(F_n)$ by Lemma~\ref{lem:equator}. Now suppose the $c_i$ are not all zero. First consider the case $a=b=0$. By Proposition~\ref{prop:asc_lk_conn} and Corollary~\ref{cor:morse}, $Y^{t\le\chi}$ is $(m-1)$-connected for all $t\in\mathbb{R}$, so by Definition~\ref{def:bnsr} we conclude $[\chi]\in\Sigma^m(F_n)$; since $m$ was arbitrary, $[\chi]\in\Sigma^\infty(F_n)$. The case of general $a$ and $b$ now follows from Lemma~\ref{lem:push_to_hemispheres}.
\end{proof}
\section{Houghton groups}\label{sec:houghton}
Let $(H_n)_{n\in\mathbb{N}}$ be the family of Houghton groups, introduced in \cite{houghton78}. An element $\eta$ of $H_n$ is a bijection of $\{1,\dots,n\}\times\mathbb{N}$ to itself such that for each $1\le i\le n$ there exist $m_i\in\mathbb{Z}$ and $N_i\in\mathbb{N}$ such that for all $x\ge N_i$ we have $(i,x)\eta=(i,x+m_i)$. That is, $\eta$ ``eventually acts as a translation'' on each ray $\{i\}\times\mathbb{N}$. We have that $H_n$ is of type $\operatorname{F}_{n-1}$ but not of type $\operatorname{F}_n$ \cite[Theorem~5.1]{brown87}.
It is known that $\operatorname{Hom}(H_n,\mathbb{R})$ is generated by characters $\chi_1,\dots,\chi_n$, given by $\chi_i(\eta)=m_i$ for each $i$ (with $m_i$ as above). Since $\eta$ is an automorphism, $\sum m_i = 0$ for any $\eta$, and hence $\chi_1+\cdots+\chi_n=0$ as characters. In fact for $n\ge 2$, $\chi_1,\dots,\chi_{n-1}$ form a basis of $\operatorname{Hom}(H_n,\mathbb{R})\cong\mathbb{R}^{n-1}$. Since $H_n$ is of type $\operatorname{F}_{n-1}$, one can ask about $\Sigma^m(H_n)$ for $m\le n-1$. Bieri and Strebel [unpublished], and independently Brown \cite[Proposition~8.3]{brown87bns}, proved that for $n\ge2$ the complement of $\Sigma^1(H_n)$ is $\{[-\chi_i]\}_{i=1}^n$. Note that when $n=2$, $S(H_2)=\{[\chi_1],[-\chi_1]\}$ and $\chi_1=-\chi_2$ so in fact $\Sigma^1(H_2)=\emptyset$.
In this section we prove:
\begin{theorem}\label{thrm:houghton_pos}
Let $n\in\mathbb{N}$ and let $\chi = a_1\chi_1 + \cdots + a_n \chi_n$ be a non-trivial character, i.e., the $a_i$ are not all equal. Up to symmetry, we can assume $a_1\le \cdots \le a_n$. Let $1\le m(\chi)\le n-1$ be maximal such that $a_{m(\chi)}\ne a_n$. Then $[\chi]\in\Sigma^{m(\chi)-1}(H_n)$.
\end{theorem}
For example, $[\chi_n]\in \Sigma^{n-2}(H_n)$. Note that since $\chi_1+\cdots+\chi_n=0$, without loss of generality $a_{m(\chi)+1}=a_n=0$. With this convention we would for example not write $\chi_n$ but rather $-\chi_1-\cdots-\chi_{n-1}$. Also note that the only $\chi$ with $m(\chi)=1$ are those equivalent to $-\chi_i$, so we recover the fact that the $[-\chi_i]$ are the only things in the complement of $\Sigma^1(H_n)$.
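To illustrate the indexing in Theorem~\ref{thrm:houghton_pos} concretely: for $\chi=-\chi_1-\cdots-\chi_{n-1}$ (that is, $\chi_n$) we have
\[
(a_1,\dots,a_{n-1},a_n)=(-1,\dots,-1,0) \text{,}
\]
already in non-decreasing order, so the largest index $i\le n-1$ with $a_i\ne a_n$ is $i=n-1$. Thus $m(\chi)=n-1$ and the theorem yields $[\chi_n]\in\Sigma^{n-2}(H_n)$.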
This leaves open the question of whether $[\chi]\not\in\Sigma^{m(\chi)}(H_n)$ always holds, which we expect should be true.
\begin{conjecture}\label{conj:houghton_neg}
With the setup of Theorem~\ref{thrm:houghton_pos}, moreover $[\chi]\not\in\Sigma^{m(\chi)}(H_n)$.
\end{conjecture}
This conjecture holds for low values of $m(\chi)$. When $m(\chi)=1$, without loss of generality $\chi=-\chi_1$, and $[-\chi_1]\not\in\Sigma^1(H_n)$ as mentioned above. When $m(\chi)=2$ (so $n\ge3$), without loss of generality $[\chi]$ lies in the convex hull in $S(H_n)$ of $[-\chi_1]$ and $[-\chi_2]$. Since these are not in $\Sigma^1(H_n)$, \cite[Theorem~A1]{kochloukova02} tells us that $[\chi]$ is not in $\Sigma^2(H_n)$. In general, Conjecture~\ref{conj:houghton_neg} is equivalent to conjecturing that the complement of $\Sigma^m(H_n)$ is the union of all convex hulls of all $\le m$-tuples from the complement of $\Sigma^1(H_n)$; for metabelian groups this is conjectured to always hold, and is called the $\Sigma^m$-Conjecture (see, e.g., \cite[Section~1.3]{bieri10}).
\medskip
We will prove Theorem~\ref{thrm:houghton_pos} by inspecting the proper action of $H_n$ on a proper $\operatorname{CAT}(0)$ cube complex $X_n$. Our reference for $X_n$ is \cite{lee12} (this is a preprint including the author's PhD thesis results). This cube complex was also remarked upon by Brown in \cite{brown87}, though not explicitly constructed. We will not prove everything in the setup here, but will sometimes just cite \cite{lee12}. The vertices of $X_n$ are elements of the monoid $M$ of injective maps $\{1,\dots,n\}\times\mathbb{N} \hookrightarrow \{1,\dots,n\}\times\mathbb{N}$ that are eventually translations. In particular $H_n$ sits in $X_n$ as a discrete set of vertices, namely those maps $\phi$ that are bijective. To describe the higher dimensional cells of $X_n$, we need to discuss $M$ in a bit more detail.
There are $n$ elements of $M$ of particular interest, namely for each $1\le i\le n$ we have a map
$$t_i \colon \{1,\dots,n\}\times\mathbb{N} \to \{1,\dots,n\}\times\mathbb{N}$$
given by sending $(j,x)$ to itself if $j\ne i$ and $(i,x)$ to $(i,x+1)$ for all $x\in\mathbb{N}$. It is clear that for any $\phi\in M$, there exists a product of $t_i$s, say $\tau$, such that $\tau \circ \phi$ is a product of $t_i$s. Here our maps act on the right, so this composition means first do $\tau$, then do $\phi$. Heuristically, $\phi$ acts as translations outside of some finite region $S\subseteq \{1,\dots,n\}\times\mathbb{N}$, so just choose $\tau$ such that the range of $\tau$ lies outside $S$.
Back to defining the higher cells of $X_n$, we now declare that two vertices $\phi,\psi$ share an edge whenever $\phi = t_i \circ \psi$ or $\psi = t_i \circ \phi$ for some $1\le i\le n$. Already we have that $X_n$ is connected, thanks to the discussion in the previous paragraph. Now for $2\le k\le n$, we declare that we have a $k$-cube supported on every set of vertices of the following form: start with a vertex $\phi$, let $K$ be a subset of $\{1,\dots,n\}$ with $|K|=k$, and look at the set of $2^k$ vertices
$$\left\{\left(\prod_{i\in J} t_i\right)\circ\phi \mid J\subseteq K\right\} \text{.}$$
Since the $t_i$ all commute, we do not need to specify an order in which to compose them. These vertices span a $k$-cube in $X_n$. For example, when $n\ge2$ any vertex $\phi$ lies in the square $\{\phi,t_1\circ\phi,t_2\circ\phi, t_1\circ t_2\circ \phi\}$.
It is known that $X_n$ is a $\operatorname{CAT}(0)$ cube complex \cite{lee12}. The group $H_n$ acts on the vertices of $X_n$ via $(\phi)\eta \mathrel{\mathop{:}}= \phi \circ \eta$, and this extends to an action of $H_n$ on $X_n$. There is an $H_n$-invariant height function $f$ (called $h$ in \cite{lee12}) on the vertices of $X_n$, namely if $\phi\in M$ and $F(\phi) \mathrel{\mathop{:}}= (\{1,\dots,n\}\times\mathbb{N}) \setminus \image(\phi)$, so $F(\phi)$ is finite, then
$$f(\phi) \mathrel{\mathop{:}}= |F(\phi)| \text{.}$$
Note that $f(\phi)=0$ if and only if $\phi$ is bijective, i.e., $\phi\in H_n$. It is clear that $f$ is $H_n$-invariant. Also note that for any cube $\sigma$ in $X_n$, there is a unique vertex of $\sigma$ with minimal $f$-value, and so any cube stabilizer is contained in a vertex stabilizer. Vertex stabilizers are finite, since if $\phi \circ \eta = \phi$ then $\eta$ must fix all points outside $F(\phi)$.
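To make the height function concrete, consider the vertices that are products of the translations $t_i$: such a map $\phi = t_1^{c_1}\circ\cdots\circ t_n^{c_n}$ misses exactly the first $c_i$ points of the $i$th ray, so $f(\phi)=c_1+\cdots+c_n$. The following sketch (our own toy model, not part of the construction in \cite{lee12}, with each ray indexed by $\{0,1,2,\dots\}$) just counts these missed points.

```python
# Toy model (ours) of the height function f on vertices of X_n: a product of
# translations phi = t_1^{c_1} ... t_n^{c_n} acts by (i, x) -> (i, x + c_i),
# so F(phi) consists of the first c_i points of each ray and f(phi) = sum c_i.

def height(shifts):
    """f(phi) for phi = prod_i t_i^{shifts[i]}: count the missed points."""
    missed = {(i, x) for i, c in enumerate(shifts) for x in range(c)}
    return len(missed)
```

In particular `height([0, ..., 0]) == 0` recovers, for this family of vertices, the fact that $f$ vanishes exactly on the bijections.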
In summary, $H_n$ acts properly on the $n$-dimensional proper $\operatorname{CAT}(0)$ cube complex $X_n$. The action is not cocompact, since it is $f$-invariant and $f$ takes infinitely many values, but it is cocompact on $f$-sublevel sets:
\begin{lemma}\label{lem:houghton_cocpt}
The action of $H_n$ on any $X_n^{p\le f\le q}$ is cocompact.
\end{lemma}
\begin{proof}
Since $X_n$ is locally compact, we just need to see that $H_n$ is transitive on vertices with the same $f$ value. Let $\phi$ and $\psi$ be vertices with $f(\phi)=f(\psi)$. Let $\alpha\in S_\infty\le H_n$ be any bijection taking $F(\phi)$ bijectively to $F(\psi)$ (since $f(\phi)=f(\psi)$ such an $\alpha$ exists). Define $\eta\in H_n$ via:
$$(x)\eta \mathrel{\mathop{:}}= \left\{\begin{array}{ll}
(y)\psi & \text{if } x=(y)\phi \\
(x)\alpha & \text{if } x\in F(\phi) \text{.}
\end{array}\right.$$
Now $\phi \circ \eta = \psi$ by definition, and $\eta$ clearly eventually acts by translations, so we just need to show $\eta$ is bijective. Note that $\eta$ takes $\image(\phi)$ bijectively to $\image(\psi)$, and also takes $F(\phi)$ bijectively to $F(\psi)$, so indeed $\eta$ is bijective.
\end{proof}
Extending $f$ to a Morse function on $X_n$ (technically the Morse function is $(f,0)$, if we use our definition of Morse function in Definition~\ref{def:morse}), to figure out the higher connectivity properties of the $X_n^{p\le f\le q}$ it suffices to look at $f$-descending links of vertices.
\begin{cit}\cite[Lemma~3.52]{lee12}\label{cit:houghton_f_desc_lk_conn}
Let $\phi$ be a vertex in $X_n$. If $f(\phi)\ge 2n-1$ then the descending link of $\phi$ is $(n-2)$-connected.
\end{cit}
In particular, Corollary~\ref{cor:morse} says that $X_n^{f\le q}$ is $(n-2)$-connected for $q\ge 2n-2$. Setting $Y\mathrel{\mathop{:}}= X_n^{f\le 3n-3}$, we have the whole setup of Definition~\ref{def:bnsr}, namely $H_n$ acts properly and cocompactly on the $(n-2)$-connected complex $Y$ (it will become clear later why we use $3n-3$ instead of $2n-2$). Hence to understand $\Sigma^m(H_n)$, we can inspect filtrations of the form $(Y^{t\le \chi})_{t\in\mathbb{R}}$ for $\chi\in\operatorname{Hom}(H_n,\mathbb{R})$.
We have to explain what $\chi$ means as a function $Y \to \mathbb{R}$. The generating characters $\chi_i$ of $H_n$ measure the length of the eventual translations that every element of $H_n$ must have. Of course vertices of $X_n$ are also functions on $\{1,\dots,n\}\times\mathbb{N}$ that act as eventual translations, so the $\chi_i$ are all naturally defined on vertices of $X_n$, and extend affinely to $X_n$. Note that whereas $\chi_1+\cdots+\chi_n=0$ as characters on $H_n$, now more generally $\chi_1+\cdots+\chi_n=f$ as functions on $X_n$.
Let $\chi=a_1\chi_1+\cdots+a_n\chi_n$ be a non-trivial character of $H_n$, so the $a_i$ are not all equal. Up to symmetry assume $a_1\le\cdots\le a_n$. Choose $1\le m(\chi)\le n-1$ maximal with $a_{m(\chi)}\ne a_n$. Since $\chi_1+\cdots+\chi_n=0$ as a character of $H_n$, without loss of generality $a_{m(\chi)+1}=\cdots=a_n=0$. For instance, instead of $\chi_n$, we consider the equivalent character $-\chi_1-\cdots-\chi_{n-1}$. In general now the first $m(\chi)$ many coefficients of $\chi$ are negative, and all the coefficients from the $(m(\chi)+1)$st one on are zero.
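As a small illustration of this normalization (our own helper, not from the text), one can sort the coefficients, subtract a multiple of $\chi_1+\cdots+\chi_n=0$ to zero out the largest block, and read off $m(\chi)$ as the number of remaining negative coefficients:

```python
# Normalize a character chi = a_1 chi_1 + ... + a_n chi_n of H_n following the
# convention in the text, and compute m(chi). (Illustrative sketch; chi is
# assumed non-trivial, i.e., the a_i are not all equal.)

def normalize_character(coeffs):
    """Return the normalized coefficient list and m(chi)."""
    a = sorted(coeffs)                 # a_1 <= ... <= a_n, up to symmetry
    shifted = [x - a[-1] for x in a]   # subtract a multiple of chi_1+...+chi_n
    m = sum(1 for x in shifted if x != 0)
    return shifted, m
```

For example, $\chi_n$ on $H_3$ has coefficients $(0,0,1)$, which normalize to $(-1,-1,0)$ with $m(\chi)=2$, matching the equivalent character $-\chi_1-\chi_2$ mentioned above.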
Consider the function $h\mathrel{\mathop{:}}= (\chi,f)\colon Y \to \mathbb{R}$ with $(\chi,f)$ ordered lexicographically. This is a Morse function, \`a la Definition~\ref{def:morse}, for reasons similar to those in the proof of Proposition~\ref{prop:char_morse} for $F_n$. The key property is that the basis characters vary by $0$, $1$, or $-1$ between adjacent vertices. We now claim that all $h$-ascending links of vertices in $Y$ are $(m(\chi)-2)$-connected, and then Theorem~\ref{thrm:houghton_pos} will follow from Corollary~\ref{cor:morse}.
\begin{lemma}\label{lem:houghton_asc_lk_model}
Let $\phi$ be a vertex in $Y$. An adjacent vertex $\psi$ is in the $h$-ascending link of $\phi$ if and only if either
\begin{enumerate}
\item $\psi=t_i \circ \phi$ for some $m(\chi)+1\le i\le n$, or else
\item $\phi=t_i \circ \psi$ for some $1\le i\le m(\chi)$.
\end{enumerate}
\end{lemma}
\begin{proof}
That $\psi$ is in the link of $\phi$ means there is $1\le i\le n$ such that $\psi=t_i \circ \phi$ or $\phi=t_i \circ \psi$. In the former case, $\psi$ has higher $\chi_i$ and $f$ values than $\phi$ and equal $\chi_j$ values (for $j\ne i$), and in the latter case $\psi$ has lower $\chi_i$ and $f$ values than $\phi$ and equal $\chi_j$ values (for $j\ne i$). Hence in the former case $\psi$ is in the $h$-ascending link of $\phi$ if and only if $m(\chi)+1\le i\le n$, since then $\chi$ does not change but $f$ goes up, and in the latter case $\psi$ is in the $h$-ascending link of $\phi$ if and only if $1\le i\le m(\chi)$, since then $\chi$ goes up.
\end{proof}
Note that if $\phi=t_i \circ \psi$ and $\psi'=t_j \circ \phi$ ($i\ne j$) then $\psi$ and $\psi'$ share an edge in $\operatorname{lk}\phi$. In particular $\operatorname{lk}^{h\uparrow}_Y \phi$ is a join, of its intersection with $\operatorname{lk}^{f\uparrow}_Y \phi$ and its intersection with $\operatorname{lk}^{f\downarrow}_Y \phi$. Call the former the \emph{ascending up-link} and the latter the \emph{ascending down-link}. The two cases in Lemma~\ref{lem:houghton_asc_lk_model} are thus complete descriptions of the vertices in, respectively, the ascending up-link and ascending down-link.
\begin{proposition}\label{prop:houghton_asc_lk_conn}
Let $\phi$ be a vertex in $Y$. Then $\operatorname{lk}^{h\uparrow}_Y \phi$ is $(m(\chi)-2)$-connected.
\end{proposition}
\begin{proof}
We know $0\le f(\phi)\le 3n-3$. First suppose $0\le f(\phi)\le 2n+m(\chi)-3$. The subscripts $i$ for which $t_i \circ \phi$ is ascending are those satisfying $m(\chi)+1\le i\le n$, so there are $n-m(\chi)$ of them, and since $(2n+m(\chi)-3) + (n-m(\chi)) = 3n-3$ we have in this case that the entire $f$-ascending link of $\phi$ in $X_n$ is contained in $Y$. This tells us that the ascending up-link of $\phi$ consists of the $(n-m(\chi)-1)$-simplex $\{t_{m(\chi)+1},\dots,t_n\}$, which is contractible, and hence $\operatorname{lk}^{h\uparrow}_Y \phi$ is contractible.
Now suppose $2n+m(\chi)-2 \le f(\phi)\le 3n-3$, so $Y$ does not contain the entire ascending up-link of $\phi$ in $X_n$, but rather only its $(3n-f(\phi)-4)$-skeleton. This is the $(3n-f(\phi)-4)$-skeleton of an $(n-m(\chi)-1)$-simplex, so it is $(3n-f(\phi)-5)$-connected. Since $f(\phi)\ge 2n+m(\chi)-2 \ge 2n-1$ though, in this case we have that the entire ascending down-link of $\phi$ in $X_n$ is contained in $Y$. Lemma~\ref{lem:houghton_asc_lk_model} tells us that this $h$-ascending down-link is isomorphic to the $f$-descending link in $X_{m(\chi)}$ of a vertex with $f$ value equal to $f(\phi)$. Since $f(\phi)\ge 2n+m(\chi)-2 \ge 2m(\chi)-1$, this is $(m(\chi)-2)$-connected by Citation~\ref{cit:houghton_f_desc_lk_conn}. In this case, taking the join, we see that $\operatorname{lk}^{h\uparrow}_Y \phi$ is $((3n-f(\phi)-4)+(m(\chi)-1))$-connected, and hence $(3n-f(\phi)+m(\chi)-5)$-connected. The result now follows since $f(\phi)\le 3n-3$.
\end{proof}
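The connectivity bookkeeping in the second case can be double-checked mechanically. The snippet below (our own check, using the standard fact that a join of a $p$-connected and a $q$-connected complex is $(p+q+2)$-connected) verifies that over the whole range $2n+m(\chi)-2 \le f(\phi) \le 3n-3$, joining the $(3n-f(\phi)-5)$-connected ascending up-link with the $(m(\chi)-2)$-connected ascending down-link yields something $(3n-f(\phi)+m(\chi)-5)$-connected, which never drops below $m(\chi)-2$:

```python
# Mechanical check (ours) of the connectivity arithmetic in the second case of
# the proof of Proposition houghton_asc_lk_conn.
for n in range(2, 10):
    for m in range(1, n):                              # m = m(chi)
        for fphi in range(2 * n + m - 2, 3 * n - 2):   # f(phi) up to 3n - 3
            p = 3 * n - fphi - 5    # connectivity of the ascending up-link
            q = m - 2               # connectivity of the ascending down-link
            assert p + q + 2 == 3 * n - fphi + m - 5
            assert p + q + 2 >= m - 2   # holds since fphi <= 3n - 3
```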
\begin{proof}[Proof of Theorem~\ref{thrm:houghton_pos}]
The superlevel sets $Y^{t\le \chi}$ are all $(m(\chi)-2)$-connected by Corollary~\ref{cor:morse} and Proposition~\ref{prop:houghton_asc_lk_conn}, so by Definition~\ref{def:bnsr}, $[\chi]\in\Sigma^{m(\chi)-1}(H_n)$.
\end{proof}
\medskip
As for negative properties, i.e., Conjecture~\ref{conj:houghton_neg}, it is difficult in general to tell using Morse theory that a filtration is \emph{not} essentially $(m-1)$-connected. Even if we know the ascending link of a vertex is not $(m-1)$-connected, we do not know whether gluing in that vertex served to kill a pre-existing $(m-1)$-sphere, or served to create a new $m$-sphere. For example if a vertex's ascending link is two points, we do not know whether gluing in that vertex connects up two previous disconnected components, or creates a loop. This is basically what makes it so difficult to prove that character classes lie in $S(G)\setminus\Sigma^m(G)$; for example even when $G$ is metabelian this problem remains open in general.
As a remark, to show that $(Y^{t\le \chi})_{t\in\mathbb{R}}$ is not essentially $(m(\chi)-1)$-connected, it suffices to prove that $Y^{0\le\chi}$ is not $(m(\chi)-1)$-connected, by tricks for negative properties discussed in \cite{witzel15}. Also, thanks to how we have realized $\chi$ as a linear combination of the $\chi_i$ using non-positive coefficients, for any vertex $x\in X_n^{0\le\chi}$ the whole $f$-descending link of $x$ lies in $X_n^{0\le\chi}$. Hence $Y^{0\le\chi}$ is $(m(\chi)-1)$-connected if and only if $X_n^{0\le\chi}$ is. This reduces Conjecture~\ref{conj:houghton_neg} to proving that $X_n^{0\le\chi}$ is not $(m(\chi)-1)$-connected, but this is still a hard problem when $m(\chi)>2$, beyond the scope of our present techniques.
As a final remark, one typical trick for deducing negative properties is finding a retract onto a more manageable quotient with negative properties. However, for $H_n$, every proper quotient $Q$ has $\Sigma^1(Q)=S(Q)$, and so it seems very unlikely this trick could work. To see this fact about such $Q$, we note that for $n\ge3$, $[H_n,H_n]=S_\infty$, the infinite symmetric group, and the second derived subgroup $H_n^{(2)}$ is $A_\infty$, the infinite alternating group. One can check that every non-trivial normal subgroup of $H_n$ contains $A_\infty$, so any $Q$ as above is a quotient of $H_n/A_\infty$. But every kernel of a character on $H_n$ becomes finitely generated when taken mod $A_\infty$, so indeed $\Sigma^1(Q)=S(Q)$.
A fundamental task of quantum simulation is to perform an
experiment \textit{in silico}.
Like traditional experimentalists, researchers using
quantum computers will often be interested in efficiently measuring a collection of properties.
For example, the electronic ground state problem is frequently cited as a
motivation for quantum simulation of chemistry, but determining the ground state
energy is only a starting point in most chemical applications.
Depending on context, it may be essential to measure the dipole moment and
polarizability, the electron density, the forces experienced by the classical
nuclei, or various other quantities~\cite{Pulay1979-gd, Gregory1997-yy}.
Similarly, in condensed matter physics and beyond, correlation functions play a
central role in the theory of quantum many-body phenomena due to
their interpretability and measurability in the lab~\cite{Damascelli2004-yk, Rickayzen2013-xj}.
In this letter, we consider the problem of accurately and efficiently estimating
multiple properties from a quantum computation. We focus on evaluating the
expectation values of a collection of \(M\) Hermitian operators \(\{O_j\}\) with
respect to a pure state \(\ket{\psi}\). We aim to evaluate each expectation
value to within additive error \(\varepsilon\) using as few calls as possible
to a state preparation oracle for \(\ket{\psi}\) (or its inverse). One simple
approach is to repeatedly prepare \(\ket{\psi}\) and projectively measure mutually commuting subsets of \(\{O_j\}\). Alternatively, strategies
based on amplitude estimation achieve a quadratic speedup with respect to
\(\varepsilon\) but entail measuring each observable separately~\cite{Brassard2000-oi, Knill2007-ci, Rall2020-in}. A range of newer
``shadow tomography'' techniques use joint measurements of multiple copies
of $\ket{\psi}$ to achieve polylogarithmic scaling with respect to \(M\) at the
expense of an unfavorable \(1/\varepsilon^4\) scaling~\cite{Aaronson2020-ei,
Brandao2017-gl, Van_Apeldoorn2019-xl, Huang2021-pr}. In certain situations,
randomized methods based on the idea of ``classical shadows'' of the state obtain
\(1/\varepsilon^2\) scaling while improving upon sampling
protocols with deterministic measurement settings~\cite{Huang2020-kn, Zhao2021-fv}. We review these existing approaches in \app{prior_estimation_work}{I} and compare them to our new strategy in \tab{cost_comparison} and \app{applications}{II}.
Our main contribution is an algorithm that achieves the
same \(1/\varepsilon\) scaling as methods based on amplitude estimation, but
also improves the scaling with respect to \(M\) from \(\bigot{M}\)
to \(\bigot{\sqrt{M}}\), where the tilde in \(\bigot{\cdot}\) hides logarithmic factors.
Our approach is to construct a function $f$ whose gradient yields the
expectation values of interest and encode $f$ in a
parameterized quantum circuit.
We can then apply Gily\'{e}n \emph{et al.}'s quantum algorithm for gradient
estimation~\cite{Gilyen2017-gk} to obtain the desired scaling.
The following theorem formalizes our result.
\begin{theorem}\label{thm:ub}
Let $\{O_j\}$ be a set of $M$ Hermitian operators on \(N\) qubits, with
spectral norms \(\|O_j\| \leq 1\) for all \(j\).
There exists a quantum algorithm that, for any $N$-qubit quantum state
$\ket{\psi}$ prepared by a unitary $U_\psi$, outputs estimates
$\widetilde{o_j}$ such that $ | \widetilde{o_j}
- \bra{\psi} O_j \ket{\psi}| \le \varepsilon$ for all \(j\) with probability at
least $2/3$, using \(\bigot{\sqrt{M}/\varepsilon}\) queries to $U_\psi$ and
$U_\psi^\dagger$, along with $\bigot{\sqrt{M} /\varepsilon}$ gates of the form
controlled-$e^{-ix O_j}$ for each \(j\), for various values of $x$ with \(|x| \in
\bigo{1/\sqrt{M}}\).
\end{theorem}
As we show in \cor{lower_bounds}, this query complexity is worst-case
optimal (up to logarithmic factors) in the high-precision regime where \(\varepsilon \in (0, \frac{1}{3\sqrt{M}})\).
After establishing this lower bound for our problem, we review the gradient
algorithm of \citen{Gilyen2017-gk} and present the proof of \thm{ub}.
We then discuss several extensions of our approach, including a strategy for estimating multiple dynamic correlation functions and a method that handles
observables with arbitrary norms (or precision requirements) based on a
generalization of the gradient algorithm.
\begin{table}[]
\begin{tabularx}{.483\textwidth}{@{}lXXX@{}}
\textbf{\;\;\;\;\;\;\;\;\;\;\;\;\;\;} & \textbf{Comm.} & \textbf{Non-comm.} & \textbf{\(k\)-RDM} \\ \toprule
Sampling & \(\bigo{\frac{\log M}{\varepsilon^2}}\) & \(\bigot{\frac{M}{\varepsilon^2}}\) & \(\bigot{\frac{N^k}{\varepsilon^2}}\)~\cite{Zhao2021-fv} \\ \midrule
Amp. Est.~\cite{Knill2007-ci} & \(\bigot{\frac{M}{\varepsilon}}\) & \(\bigot{\frac{M}{\varepsilon}}\) & \(\bigot{\frac{N^{2k}}{\varepsilon}}\) \\ \midrule
Shadow Tom.~\cite{Huang2021-pr} & \(\bigo{\frac{\log M}{\varepsilon^4}}\) & \(\bigo{\frac{\log{M}}{\varepsilon^4}}\) & \(\bigo{\frac{k \log{N}}{\varepsilon^4}}\) \\ \midrule \midrule
Gradient & \(\bigot{\frac{\sqrt{M}}{\varepsilon}}\) & \(\bigot{\frac{\sqrt{M}}{\varepsilon}}\) & \(\bigot{\frac{N^k}{\varepsilon}}\) \\
\end{tabularx}
\caption{ A comparison of the (worst-case) complexities, in terms of state
preparation oracle queries, of different approaches for measuring multiple
observables. We consider three applications: estimating the expectation
values of $M$ commuting or non-commuting observables, and
determining the fermionic \(k\)-RDM of an \(N\)-mode system. Here,
\(\varepsilon\) denotes the additive error to which each quantity is
estimated. We compare strategies based on naive sampling, amplitude
estimation, and shadow tomography to our gradient-based approach.
We cite the
specific works used to determine these complexities, including the
Pauli-specific shadow protocol of~\citen{Huang2021-pr}. Note that methods based on sampling and shadow tomography also work under a weaker input model where only copies of the state are provided.
}
\label{tab:cost_comparison}
\end{table}
\subsection*{Lower Bounds}
\label{sec:lower_bounds}
In \citen{Van_Apeldoorn2021-sk}, van Apeldoorn proved a lower bound for a task that
is essentially a special case of our quantum expectation value problem.
We explain how a lower bound for our problem can be obtained as a
corollary.
Their results are expressed in terms of a particular quantum access model for
classical probability distributions:
\begin{definition}[Sample oracle for a probability distribution]
\label{def:probability_oracle_classical_distribution}
Let \(\vb{p}\) be a probability distribution over \(M\) outcomes, i.e.,
\(\vb{p} \in [0,1]^M\) with \(\|\vb{p}\|_1 = 1\).
A \textit{sample oracle} \(U_{\vb{p}}\) for \(\vb{p}\) is a unitary operator that acts
as
\begin{equation}
\label{eq:multi_dimensional_probability_oracle}
U_{\vb{p}}: \ket{0}\ket{0} \mapsto \sum_{j=1}^M \sqrt{p_j} \ket{j} \otimes \ket{\phi_j},
\end{equation}
where the \(\ket{\phi_j}\) are arbitrary normalized quantum states.
\end{definition}
We rephrase Lemma 13 of \citen{Van_Apeldoorn2021-sk} below.
Here and throughout this paper, we count queries to a unitary oracle \(U\) and to
its inverse \(U^\dagger\) as equivalent in cost.
\begin{theorem}[Lemma 13,~\citen{Van_Apeldoorn2021-sk} (rephrased)]
\label{thm:lower_bound_apeldoorn}
Let \(M\) be a positive integer power of \(2\) and let \(\varepsilon \in (0, \frac{1}{3\sqrt{M}})\).
There exists a known matrix \(A \in \{-1,+1\}^{M \times M}\) such that the
following is true.
Suppose \(\mathcal{A}\) is an algorithm that, for every probability distribution
\(\vb{p}\), accessed via a sample oracle \(U_{\vb{p}}\), outputs (with
probability at least \(2/3\)) a \(\vb{\tilde{q}}\) such that \(\|A \vb{p} -
\vb{\tilde{q}}\|_\infty \leq \varepsilon\).
Then \(\mathcal{A}\) must use \(\bigomega{{\sqrt{M}}/{\varepsilon}}\) queries to \(U_{\vb{p}}\) in the worst case.
\end{theorem}
We can use this theorem to derive the following corollary, establishing the near-optimality of the algorithm in~\thm{ub} in certain regimes.
\begin{corollary}
\label{cor:lower_bounds}
Let \(M\) be a positive integer power of \(2\) and let \(\varepsilon \in (0, \frac{1}{3\sqrt{M}})\).
Let \(\mathcal{A}\) be any algorithm that takes as an input an arbitrary
set of \(M\) observables \(\{O_j\}\).
Suppose that, for every quantum state \(\ket{\psi}\), accessed via a state
preparation oracle \(U_{\psi}\), \(\mathcal{A}\) outputs estimates of each
\(\ev{O_j}{\psi}\) to within additive error \(\varepsilon\) (with probability at
least \(2/3\)).
Then, there exists a set of observables \(\{O_j\}\) such that \(\mathcal{A}\)
applied to \(\{O_j\}\) must use \(\bigomega{{\sqrt{M}}/{\varepsilon}}\)
queries to \(U_{\psi}\).
\end{corollary}
\begin{proof}
Assume for the sake of contradiction that for any $\{O_j\}$ and $U_\psi$, the algorithm $\mathcal{A}$ uses $o(\sqrt{M}/\varepsilon)$
queries to $U_\psi$ to estimate every $\bra{\psi}O_j\ket{\psi}$ to within
error $\varepsilon$ (with success probability at least $2/3$).
For any sample oracle $U_{\vb{p}}$ of the form in
\eq{multi_dimensional_probability_oracle}, consider the state
\begin{equation}
\ket{\psi(U_{\vb{p}})} \defeq \sum_{j=1}^M \sqrt{p_j} \Big( \bigotimes_{i=1}^M \ket{\frac{1 - A_{ij}}{2}}\Big) \otimes \ket{j} \otimes \ket{\phi_j}.
\end{equation}
A quick computation verifies that the $i$-th entry of the vector $A\vb{p}$ is
equal to \(\ev{Z_i}{\psi(U_{\vb{p}})}\), where $Z_i$
denotes the Pauli $Z$ operator acting on the $i$-th qubit.
Since the matrix $A$ is known, it is clear that $\ket{\psi(U_{\vb{p}})} = U_A
(I\otimes U_{\vb{p}}) \ket{0}$ for a known unitary $U_A$:
\begin{equation}
\label{eq:O_ap}
U_{A} = \sum_{j} \big(\bigotimes_{i=1}^M X_{i}^{\delta_{A_{ij}, -1}}\big) \otimes \ketbra{j} \otimes \idmat.
\end{equation}
Therefore, we can apply algorithm \(\mathcal{A}\) with \(O_j = Z_j\) for \(j
\in \{1, \cdots, M\}\) and \(U_\psi = U_A (\idmat \otimes U_{\vb{p}})\).
By our assumption, this constitutes an algorithm that for every $U_{\vb{p}}$, estimates each entry of \(A\vb{p}\) to within
error \(\varepsilon\) using \(o(\sqrt{M}/\varepsilon)\)
queries to \(U_{\vb{p}}\), contradicting \thm{lower_bound_apeldoorn}, and
completing the proof.
\end{proof}
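The reduction in this proof is easy to check numerically. The sketch below (our own illustration, dropping the irrelevant $\ket{\phi_j}$ registers) builds the state $\ket{\psi(U_{\vb{p}})}$ explicitly for a small random instance and confirms that the $i$-th entry of $A\vb{p}$ equals $\ev{Z_i}{\psi(U_{\vb{p}})}$:

```python
# Numerical illustration (ours) of the reduction: qubit i of the column-j
# branch holds (1 - A_ij)/2, weighted by sqrt(p_j), so <Z_i> = (A p)_i.
import numpy as np

M = 4
rng = np.random.default_rng(7)
A = rng.choice([-1, 1], size=(M, M))
p = rng.random(M)
p /= p.sum()                                  # a probability distribution

# basis |b_1 ... b_M> (x) |j>, flattened as index = bits * M + j
psi = np.zeros(2 ** M * M)
for j in range(M):
    bits = 0
    for i in range(M):
        if A[i, j] == -1:                     # qubit i holds (1 - A_ij)/2
            bits |= 1 << (M - 1 - i)
    psi[bits * M + j] = np.sqrt(p[j])

# <Z_i> computed directly from the amplitudes equals (A p)_i
for i in range(M):
    z = sum((1 - 2 * ((idx // M) >> (M - 1 - i) & 1)) * psi[idx] ** 2
            for idx in range(len(psi)))
    assert np.isclose(z, (A @ p)[i])
```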
\subsection*{Background on Gily\'{e}n et al.'s gradient algorithm}
Our framework for simultaneously estimating multiple expectation values uses the improved quantum algorithm for gradient estimation of
Gily\'{e}n, Arunachalam, and Wiebe (henceforth, Gily\'{e}n \emph{et
al.})~\cite{Gilyen2017-gk}.
Gily\'{e}n \textit{et al.}~built on earlier work by Jordan~\cite{Jordan2005-hs},
which demonstrated an exponential quantum speedup for computing the gradient in
a particular black-box access model.
Specifically, Jordan's algorithm uses one query to a \textit{binary oracle} (see
\app{more_gradient_details}{III}) for a function $f$, along with phase
kickback and the quantum Fourier transform, to obtain an approximation
of the gradient $\nabla f$.
While we defer a technical discussion of Gily\'{e}n \emph{et al.}'s algorithm to \app{more_gradient_details}{III} (and refer the reader also to~\citen{Gilyen2017-gk}), we give a brief, colloquial description of their algorithm here.
It is helpful to review their definition for a \textit{probability oracle},
\begin{definition}[Probability oracle]
\label{def:probability_oracle_main_text}
Consider a function \(f: \mathbb{R}^M \rightarrow [0, 1]\).
A probability oracle \(U_f\) for \(f\) is a unitary operator that acts
as
\begin{align} \label{eq:U_f}
U_f:& \ket{\bm{x}}\ket{\bm{0}} \mapsto \\
&\ket{\bm{x}}\left(\sqrt{f(\bm{x})} \ket{1} \ket{\phi_1(\bm{x})} + \sqrt{1 - f(\bm{x})}\ket{0}\ket{\phi_0(\bm{x})}\right), \nonumber
\end{align}
where \(\ket{\bm{x}}\) denotes a discretization of the variable \(\bm{x}\)
encoded into a register of qubits, \(\ket{\bm{0}}\) denotes the all-zeros state of a register of ancilla qubits, and \(\ket{\phi_0(\bm{x})}\) and \(\ket{\phi_1(\bm{x})}\) are
arbitrary quantum states.
\end{definition}
Gily\'{e}n \textit{et al.} show how such a probability oracle can be used to encode a finite-difference approximation to a directional derivative of \(f\) in the phase of an ancilla register, e.g., a first-order approximation is implemented by
\begin{equation}
A_{f'_1}: \ket{\bm{x}}\ket{\bm{0}} \mapsto e^{i \left(f\left(\bm{x}\right) - f\left(\bm{-x}\right)\right)} \ket{\bm{x}}\ket{\bm{0}}.
\end{equation}
As in Jordan's original algorithm, a quantum Fourier transform can then be used to extract an approximate gradient from the phases accumulated on an appropriate superposition of basis states. By using higher-order finite-difference formulas,
Gily\'{e}n \textit{et al.} are able to estimate the gradient with a scaling that is optimal (up to logarithmic factors) for a particular family of smooth functions.
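As a toy illustration of the phase-kickback-plus-Fourier-transform step (our own, heavily simplified: the phases are exactly linear, so there is no finite-difference error and no probability-to-phase conversion), one can recover the slope of a linear function on a one-dimensional grid in a single shot:

```python
# Toy 1-D numpy illustration (ours) of the Jordan/Gilyen readout step: for a
# linear f with slope g / 2^n, the Fourier transform of the phase-kicked
# uniform superposition is concentrated entirely on |g>.
import numpy as np

n, g = 5, 7                                            # 5-bit grid, hidden slope g
N = 2 ** n
xs = np.arange(N)
state = np.exp(2j * np.pi * g * xs / N) / np.sqrt(N)   # phase-kicked superposition
amps = np.fft.fft(state) / np.sqrt(N)                  # Fourier transform
assert int(np.argmax(np.abs(amps))) == g               # all weight sits on |g>
```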
We restate the formal properties of their algorithm in the theorem below.
\begin{theorem}[Theorem 25, \citen{Gilyen2017-gk} (rephrased)]
\label{thm:gradient_algorithm}
Let \(\varepsilon\), \(c \in \mathbb{R}_{+}\) be fixed constants, with \(\varepsilon
\leq c\).
Let $M \in \mathbb{Z}_{+}$ and $\bm{x} \in \mathbb{R}^M$.
Suppose that \(f: \mathbb{R}^M \rightarrow \mathbb{R}\) is an analytic
function such that for every \(k \in \mathbb{Z}_+\), the following bound holds
for all \(k\)-th order partial derivatives of \(f\) at $\bm{x}$ (denoted by
\(\partial_{{\bm{\alpha}}} f(\bm{x})\)): $|\partial_{{\bm{\alpha}}} f(\bm{x})
| \leq c^kk^{\frac{k}{2}}$.
Then, there is a quantum algorithm that outputs an estimate
\(\widetilde{\bm{g}} \in \mathbb{R}^M\) such that $\|\nabla f(\bm{x}) -
\widetilde{\bm{g}}\|_\infty \leq \varepsilon$, with probability at least \(1 -
\delta\).
This algorithm makes \(\bigot{c \sqrt{M} \log(M/\delta)/\varepsilon }\) queries
to a probability oracle for \(f\).
\end{theorem}
\subsection*{Expectation values via the gradient algorithm}
\label{sec:speedup_by_gradient}
To construct our algorithm and prove \thm{ub}, we build a probability oracle for a function whose gradient encodes the expectation values of interest and apply the quantum algorithm for the gradient.
\begin{proof}[Proof of~\thm{ub}]
We begin by defining the parameterized unitary
\begin{equation}
\label{eq:unitary_of_x}
U(\bm{x}) \defeq \prod_{j=1}^M e^{-2 i x_j O_j}
\end{equation}
for $\bm{x} \in \mathbb{R}^M$.
The derivative of this unitary with respect to \(x_\ell\) is
\begin{equation}
\label{eq:d_unitary_of_x}
\frac{\partial U}{\partial x_\ell} = -2 i \Bigg(\prod_{j=1}^\ell e^{-2 i x_j O_j}\Bigg) O_\ell \Bigg( \prod_{k=\ell + 1}^M e^{-2 i x_k O_k}\Bigg).
\end{equation}
We are interested in the expectation of the \(O_j\) with respect to the state
\(\ket{\psi}\), so we define the following function \(f\):
\begin{equation}
\label{eq:f_def}
f(\bm{x}) \defeq -\frac{1}{2}\mathrm{Im}[\bra{\psi} U(\bm{x})\ket{\psi}] +\frac{1}{2}.
\end{equation}
Using \eq{d_unitary_of_x}, we have
\begin{equation}
\label{eq:derivative_magic_evaluation}
\frac{\partial f}{\partial x_\ell}\Bigg|_{\bm{x} = \bm{0}} = \ev{O_\ell}{\psi}.
\end{equation}
Therefore, the gradient $\nabla f(\bm{0})$ is precisely the collection of expectation values of interest.
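This identity is easy to check numerically. The sketch below (our own sanity check, computing the matrix exponentials by eigendecomposition rather than on a quantum device) draws random Hermitian observables with $\|O_j\|\le 1$ and confirms by central differences that $\partial f/\partial x_\ell$ at $\bm{x}=\bm{0}$ matches $\ev{O_\ell}{\psi}$:

```python
# Finite-difference check (ours) that grad f(0) recovers <psi|O_l|psi>, with
# f(x) = 1/2 - Im<psi|U(x)|psi>/2 and U(x) = prod_j exp(-2i x_j O_j).
import numpy as np

rng = np.random.default_rng(0)
dim, M = 4, 3

def random_obs():
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    H = (A + A.conj().T) / 2
    return H / np.linalg.norm(H, 2)      # enforce spectral norm <= 1

Os = [random_obs() for _ in range(M)]
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

def evolve(O, t):
    """exp(-1j * t * O) for Hermitian O, via eigendecomposition."""
    w, V = np.linalg.eigh(O)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def f(x):
    U = np.eye(dim, dtype=complex)
    for xj, Oj in zip(x, Os):
        U = U @ evolve(Oj, 2 * xj)       # U(x) = prod_j exp(-2i x_j O_j)
    return 0.5 - 0.5 * np.imag(psi.conj() @ U @ psi)

eps = 1e-6
for ell in range(M):
    e = np.zeros(M)
    e[ell] = eps
    grad = (f(e) - f(-e)) / (2 * eps)    # central difference at x = 0
    assert abs(grad - np.real(psi.conj() @ Os[ell] @ psi)) < 1e-6
```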
Now, we verify that \(f\) satisfies the conditions of
\thm{gradient_algorithm}.
Observe that $f$ is analytic and that the \(k\)-th order partial derivative of \(f\) with respect to any
collection of indices \(\alpha \in \{1,\dots, M\}^k\) takes the form
\begin{equation}
\label{eq:kth_derivative}
\partial_\alpha f(\bm{x}) = (-2)^{k-1} \mathrm{Im}(i^{k} \ev{V(\bm{x}, \alpha)}{\psi}),
\end{equation}
for some operator \(V(\bm{x}, \alpha)\) which depends on both \(\alpha\)
and \(\bm{x}\).
Note that \(V\) is a product of terms which are either unitary, or from
\(\{O_j\}\).
Since $\|O_j\| \leq 1$ for all $j$, we have \(\|V\| \leq 1\), and therefore
$|\partial_\alpha f(\bm{0})| \leq 2^{k-1}$ for all $k$ and $\alpha$.
By setting \(c=2\), we satisfy the derivative conditions of
\thm{gradient_algorithm}.
To construct a probability oracle for \(f\)
(see~\defi{probability_oracle_main_text}), we need a
quantum circuit that encodes \(f(\bm{x})\) into the amplitudes of an
ancilla.
We construct such a circuit using the Hadamard test for the imaginary component
of \(\bra{\psi}U(\bm{x})\ket{\psi}\)~\cite{Yu_Kitaev1995-zv, Aharonov2009-nk}.
Let
\begin{equation}
\label{eq:param_circuit_def}
F(\bm{x}) \defeq
\big(H \otimes \idmat \big) \big( \controlled{U(\bm{x})} \big)
\big(S^\dagger H \otimes U_\psi\big),
\end{equation}
where \(H\) denotes the Hadamard gate, \(\controlled{U(\bm{x})}\) the
\(U(\bm{x})\) gate controlled on the first qubit, and \(S \defeq \ket{0}\!\!\bra{0} + i\ket{1}\!\!\bra{1}\) the phase gate.
Applied to $\ket{0}\otimes \ket{\bm{0}}$, this circuit encodes \(f(\bm{x})\)
in the amplitudes with respect to the computational basis states of the first qubit:
\begin{align}
\label{eq:amplitude_encode}
F(\bm{x})\ket{0}\otimes\ket{\bm{0}} = & \sqrt{f(\bm{x})}\ket{1}\otimes \ket{\phi_1(\bm{x})} + \\ & \sqrt{1 - f(\bm{x})}\ket{0}\otimes \ket{\phi_0(\bm{x})}, \nonumber
\end{align}
for some normalized states $\ket{\phi_0(\bm{x})}$ and $\ket{\phi_1(\bm{x})}$ (see \app{details}{IV} for more details).
Note that \(F(\bm{x})\) uses a single call to
the oracle \(U_\psi\).
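The amplitude encoding in \eq{amplitude_encode} can be verified with a small matrix model (our own illustration; we fold $U_\psi$ applied to $\ket{\bm{0}}$ into the vector $\ket{\psi}$ and let a random unitary stand in for $U(\bm{x})$):

```python
# Direct numpy model (ours) of the Hadamard-test circuit F(x): applied to
# |0>|0...0>, the probability of finding the ancilla in |1> equals
# f(x) = 1/2 - Im<psi|U(x)|psi>/2, for any unitary U standing in for U(x).
import numpy as np

rng = np.random.default_rng(1)
dim = 4
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
U = Q                                          # random unitary standing in for U(x)
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)                     # stands in for U_psi |0...0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
Sdag = np.diag([1.0, -1.0j])                   # S^dagger
I = np.eye(dim)
zero = np.zeros((dim, dim))
cU = np.block([[I, zero], [zero, U]])          # U controlled on the ancilla

state = np.kron(Sdag @ H, I) @ np.kron([1.0, 0.0], psi)  # (S^dag H (x) I)|0>|psi>
state = np.kron(H, I) @ (cU @ state)           # F(x)|0>|0...0>

p1 = np.linalg.norm(state[dim:]) ** 2          # Pr[ancilla measured in |1>]
f = 0.5 - 0.5 * np.imag(psi.conj() @ U @ psi)
assert abs(p1 - f) < 1e-12
```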
\begin{figure}
\centering
\includegraphics[width=\linewidth]{circuit_diagram.pdf}
\caption{Schematic depiction of the quantum circuit for \(U_f\), the probability oracle for
the function \(f(\bm{x})\) defined in \eq{U_f_def}.
The top registers encode the ($n=3$ bit in this case) binary representations of \(x_1, x_2, \cdots, x_M\).
The ancilla qubit whose amplitudes encode \(f(\bm{x})\) (cf.~Eq.~\eqref{eq:param_circuit_def}) is indicated below the \(\bm{x}\) registers.
The final line represents the \(N\)-qubit system register.
The gates that act on the system register with colored circles represent the doubly-controlled time evolution by the various observables.
Estimating the expectation values of the \(M\) observables \(\{O_j\}\) requires executing this circuit and its inverse \(\bigot{\sqrt{M}/\varepsilon}\) times.
}
\label{fig:circuit_diagram}
\end{figure}
All that remains is to add quantum controls to the rotations in
\(F(\bm{x})\), so that
$F(\bm{x})$ is controlled on a register encoding $\bm{x}$.
Specifically, we consider the unitary
\begin{align}
\label{eq:U_f_def}
U_f \defeq \sum_{\bm{k} \in G_n^M} \ketbra{\bm{k}} \otimes F(\bm{k} x_{\max}),
\end{align}
where \(G_n^{M}\) is a set of \(2^{nM}\) points distributed in an
\(M\)-dimensional unit hypercube, with \(n = \bigo{\log(1/\varepsilon)}\), and
\(x_{\max}\) is a rescaling factor.
The values of \(x_{\max}\) and \(n\) are chosen to
satisfy the requirements of the gradient algorithm (see
\app{more_gradient_details}{IV}).
Here, \(\ket{\bm{k}} = \ket{k_1}\dots \ket{k_M}\) for \(\bm{k} \in G_n^M\)
denotes the basis state storing the binary representation of $\bm{k}$ in \(M\)
\(n\)-qubit index registers.
The controlled time evolution operator for each \(O_j\) can be implemented
efficiently as a product of \(n\) controlled-$e^{-ix O_j}$ gates with
exponentially spaced values of \(x\), each controlled on the appropriate
qubit of the \(j\)th index register.
We illustrate an example of such a \(U_f\) in \fig{circuit_diagram}.
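The exponential spacing can be spelled out concretely: writing $x = x_{\max}\, k/2^n$ for an $n$-bit integer $k$, the evolution $e^{-ixO_j}$ factors into at most $n$ commuting gates $e^{-i x_{\max} 2^{b-n} O_j}$, with gate $b$ applied exactly when bit $b$ of $k$ is set. A numpy sketch (ours, with the evolutions computed by eigendecomposition) checks the factorization:

```python
# Sketch (ours) of the bitwise decomposition of the index-controlled evolution:
# with x = x_max * k / 2^n, e^{-i x O} is a product of one gate per set bit of k.
import numpy as np

def evolve(O, t):
    """exp(-1j * t * O) for Hermitian O, via eigendecomposition."""
    w, V = np.linalg.eigh(O)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def indexed_evolution(O, x_max, k, n):
    """e^{-i x O} with x = x_max * k / 2^n, one gate per set bit of k."""
    U = np.eye(len(O), dtype=complex)
    for b in range(n):
        if (k >> b) & 1:
            U = U @ evolve(O, x_max * 2 ** b / 2 ** n)
    return U

O = np.diag([1.0, -0.5])                 # a simple Hermitian "observable"
k, n, x_max = 11, 4, 0.3
U = indexed_evolution(O, x_max, k, n)
assert np.allclose(U, evolve(O, x_max * k / 2 ** n))
```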
\(U_f\) is a probability oracle for the function
\(f\), and each call to \(U_f\) involves a single call to the state preparation
oracle \(U_\psi\).
\thm{gradient_algorithm} then implies that with probability at least $2/3$, every component of the gradient of
$f$, and hence all of the expectation values $\bra{\psi}O_j\ket{\psi}$, can be
estimated to within an error \(\varepsilon\) using $\bigot{\sqrt{M}/\varepsilon}$ queries to $U_f$.
The complexity in terms of the controlled time evolutions follows from
multiplying the number of controlled time evolutions required for each query to
\(U_f\), i.e.,~\(\bigo{\log(M/\varepsilon)}\) per observable, by the total number of queries, i.e.,~\(\bigot{\sqrt{M}/\varepsilon}\).
As discussed in \app{more_gradient_details}{IV}, we have \(x_{\max} \in
\bigo{1/\sqrt{M}}\) as a consequence of the details of the proof of
\thm{gradient_algorithm} in \citen{Gilyen2017-gk}.
This completes the proof of \thm{ub}.
\end{proof}
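Outside the formal proof, the bitwise decomposition of the controlled evolution used above can be checked numerically. The sketch below (with illustrative names and a diagonal observable of our choosing, not code from this work) verifies that applying \(e^{-i x_{\max} (2^j/2^n) O}\) for each set bit \(j\) of the index \(k\) reproduces \(e^{-i x_{\max} (k/2^n) O}\):

```python
import numpy as np

# Illustrative check (not the paper's code): for a diagonal observable O and an
# n-bit index k, e^{-i x_max (k/2^n) O} equals the product of the gates
# e^{-i x_max (2^j/2^n) O}, each applied iff bit j of k is set.
n = 4
x_max = 0.7
O = np.diag([1.0, -1.0])  # Pauli-Z, a simple diagonal observable

def evolve(x, O):
    # matrix exponential e^{-i x O} for a *diagonal* observable
    return np.diag(np.exp(-1j * x * np.diag(O)))

def bitwise_product(k, n, x_max, O):
    U = np.eye(O.shape[0], dtype=complex)
    for j in range(n):
        if (k >> j) & 1:  # controlled on the j-th qubit of the index register
            U = evolve(x_max * 2**j / 2**n, O) @ U
    return U

k = 11  # binary 1011
direct = evolve(x_max * k / 2**n, O)
```

Because the factors commute, the exponents of the set bits simply add up to \(x_{\max} k / 2^n\), which is why \(n\) controlled gates suffice per observable.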
Furthermore (see \app{more_gradient_details}{IV}), the space complexity of the
gradient algorithm is the same as that of the probability oracle up to an
additive logarithmic factor~\footnote{To achieve this space complexity we
actually need to compile the circuits for a logarithmic (in \(M\) and
\(\varepsilon^{-1}\)) number of probability oracles across a series of hypercubes
of varying sizes.
Otherwise there would be additional multiplicative logarithmic factors in the
space complexity.}.
Therefore, our algorithm uses \(\bigo{M\log(1/\varepsilon) + N}\) qubits.
\subsection*{Discussion}
In this letter, we considered the problem of simultaneously estimating the
expectation values of multiple observables with respect to a pure state
$\ket{\psi}$.
We presented an algorithm that uses
\(\widetilde{\mathcal{O}}(\sqrt{M}\varepsilon^{-1})\) applications of $U_\psi$ and
its inverse, where \(M\) denotes the number of observables and \(\varepsilon\) the
target error, and $U_\psi$ is a unitary that prepares $\ket{\psi}$.
We explained how a lower bound on a closely related problem posed
in~\citen{Van_Apeldoorn2021-sk} implies that, for algorithms given black-box
access to $U_\psi$, this query complexity is worst-case optimal up to logarithmic factors when \(\varepsilon \in (0, \frac{1}{3 \sqrt{M}})\).
In fact, our algorithm affirmatively resolves an open question
from~\citen{Van_Apeldoorn2021-sk} regarding the achievability of this bound for
the simultaneous estimation of classical random
variables~\footnote{Specifically, any matrix $A \in [-1,1]^M$ can be encoded in
a known unitary $U_A$ along similar lines as in the proof of
Corollary~\ref{cor:lower_bounds}; then $A\vb{p}$ can be estimated by applying
our algorithm to the single-qubit $Z$ operators $\{Z_j\}$, with state
preparation oracle $U_A(\mathbb{I}\otimes U_{\vb{p}})$.}.
These results imply that the optimal cost for expectation value
estimation can become exponentially worse with respect to \(M\) when one demands
a scaling that goes as \(\varepsilon^{-1}\) instead of \(\varepsilon^{-2}\).
Furthermore, the instances used in establishing our lower bounds involve a set of mutually commuting observables, implying that commutativity is not necessarily helpful when one demands \(\varepsilon^{-1}\) scaling.
We presented a comparison with other approaches for
the estimation of expectation values in \tab{cost_comparison}, which we elaborate on in \app{prior_estimation_work}{I} and \app{applications}{II}.
For example, we find that our algorithm is capable of estimating each element of
the \(k\)-body fermionic reduced density matrix ($k$-RDM) of an \(N\)-mode system to within error \(\varepsilon\) using
\(\bigot{N^k/\varepsilon}\) state preparation queries.
This offers an unconditional asymptotic speedup compared to existing methods when \(\varepsilon = o(N^{-k/3})\).
This may be particularly useful in practical applications where we wish to
achieve a fixed error in extensive quantities by measuring the \(1\)- or
\(2\)-RDM and summing \(\bigomega{N}\) elements.
Our gradient-based approach to estimating expectation values can be extended to other properties.
For example, consider the task of evaluating a collection of two-point dynamic
correlation functions.
These functions take the form
\begin{equation}
\label{eq:dynamic_correlation_func}
C_{A, B}(t) \defeq \ev{U(0,t) A^\dagger U(t, 0) B}{\psi},
\end{equation}
where \(A\) and \(B\) are some simple operators and \(U(t, t')\) is the time evolution
operator that maps the system from time \(t'\) to time \(t\).
These correlation functions are often directly accessible in experiment, as in the case of angle-resolved photoemission
spectroscopy~\cite{Damascelli2004-yk}, and are also central to hybrid
quantum-classical methods based on dynamical mean-field
theory~\cite{Bauer2016-hy, Georges1992-qf, Kotliar2006-rf}.
In \app{dynamic}{V}, we explain how a generalization of our approach can reduce the
number of state preparations required for estimating a collection of these
correlation functions.
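As a toy illustration of this quantity (on a two-level system we made up for this sketch, not part of the method itself), \(C_{A,B}(t)\) can be evaluated directly with dense linear algebra:

```python
import numpy as np

# Toy evaluation of C_{A,B}(t) = <psi| U(0,t) A^dag U(t,0) B |psi> for a
# two-level system with a diagonal Hamiltonian; all values are illustrative.
H = np.diag([0.0, 1.3])                         # toy Hamiltonian
A = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
B = A
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

def U(t):
    # U(t, 0) = e^{-iHt}; trivial to compute for a diagonal H
    return np.diag(np.exp(-1j * np.diag(H) * t))

def corr(t):
    # U(0, t) = U(t, 0)^dagger = U(-t) for a time-independent H
    return psi.conj() @ U(-t) @ A.conj().T @ U(t) @ B @ psi
```

For this particular choice, `corr(t)` reduces analytically to `cos(1.3 * t)`, a simple oscillation at the level splitting.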
Although we focused on quantifying the number of state preparation oracle queries, we also considered two other complexity
measures.
Our approach requires time evolution by each of the \(M\) observables.
The total duration of time evolution required scales as \(\bigot{M / \varepsilon}\).
We also need an additional \(\bigot{M \log(1/ \varepsilon)}\) qubits, although we can modify our approach to trade off between space and query complexities (see ~\app{time-space-trade-offs}{VI}).
When we are interested in simultaneously estimating \(O(N)\) expectation values, the asymptotic scaling of the space complexity is only logarithmically larger than that of storing the system itself.
This is the case in a variety of contexts, for example, in the evaluation of the momentum distribution~\cite{Meckel2008-mz}.
In other situations, the space overhead may be more substantial, though the capability of
modern simulation algorithms to use so-called ``dirty ancilla'' (temporarily
borrowing qubits in an arbitrary state) may offset this challenge in some
contexts~\cite{Lee2021-su, Von_Burg2021-gu, Low2018-dl}.
As a concrete example, we consider the double-factorized simulation of the electronic structure Hamiltonian proposed in \citen{Von_Burg2021-gu}. Von Burg \emph{et al.} find that the time complexity of their simulation algorithm can be minimized by using \(\bigot{N^{3/2}}\) qubits for data-lookup. These same qubits could be used by our algorithm for expectation value estimation to parallelize the measurement of \(\bigot{N^{3/2}}\) observables, offering a \(\bigot{N^{3/4}}\) asymptotic speedup without any additional qubit overhead.
Another potential reason for modifying our approach arises when the observables
of interest have different norms, or when the desired precision varies.
In
\app{eps_1_2_trade-offs}{VII}, we consider addressing this situation by measuring
certain observables using our strategy and measuring others using a
sampling-based method.
In \app{arbitrary_norms}{VIII}, we take a different approach,
and generalize Gily\'{e}n \emph{et al.}'s gradient estimation algorithm to
accommodate functions whose gradient components are not necessarily uniformly
bounded.
This allows us to simultaneously estimate the expectation values of
observables $\{O_j\}$ with arbitrary norms $\|O_j\|$ (possibly greater than $1$)
using $\widetilde{\mathcal{O}}(\sqrt{\sum_j \|O_j\|^2}/\varepsilon)$ queries.
By
rescaling the individual observables we can then also vary how precisely we
estimate each expectation value, thereby extending Theorem~\ref{thm:ub} to the
most general setting.
Our focus has been on the asymptotic scaling of our approach, but it will also be
desirable to understand the actual costs.
Performing a fault-tolerant resource estimate and a comparison against
other measurement strategies in the context of a practical application would be a useful
line of future work.
It is possible that our approach could be modified to obtain a further speedup
by taking advantage of the structure of the states and/or observables for
particular problems of interest.
Another potentially fruitful direction would be to explore extensions of the
gradient algorithm to yield quantum algorithms for the Hessian or even
higher-order derivatives.
Extracting useful information from a quantum computation, especially a quantum
simulation, is a bottleneck for many applications.
This is especially true in fields such as quantum chemistry and materials
science, where it may be necessary to couple high-level quantum calculations
with coarser approximations at other length scales in order to describe
macroscopic physical phenomena.
We expect that our gradient-based approach to the estimation of expectation values will be a useful tool and a
starting point for related approaches to other problems.
\subsection*{Acknowledgements}
The authors thank Bryan O'Gorman, Yuan Su, and Joonho Lee for helpful discussions and various referees for their constructive input.
NW worked on this project under a research grant from Google Quantum AI and was
also supported by the U.S.~Department of Energy, Office of Science, National
Quantum Information Science Research Centers, Co-Design Center for Quantum
Advantage under contract number DE-SC0012704.
Some discussion and collaboration on this project occurred while using facilities at the Kavli Institute for Theoretical Physics, supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
\nocite{Lin2020-iy,wan2020fast,Verteletskyi2020-ei, Huggins2021-vu,Chen2021-cq,
Hadfield2020-lg, Brandao2017-gl, Huang2021-prm, Davidson2012-ci, bonet2020nearly, Somma2002-wq, NCbook}
\input{main.bbl}
\widetext
\section{Introduction}
Face image super-resolution (FSR), also known as face hallucination, aims to recover a high-resolution (HR) face image from its low-resolution (LR) input, which is helpful for numerous facial analysis applications.
Image super-resolution (SR) has been a challenging research problem for many years, and data-driven deep learning has drawn considerable attention recently. Deep learning has been employed to build end-to-end single image super-resolution models, such as the Convolutional Neural Network (CNN) based methods (e.g. SRCNN \cite{SRCNN}, SRResnet \cite{SRResnet}) and recent approaches using Generative Adversarial Networks (GAN) \cite{GAN} (e.g. SRGAN \cite{SRGAN}, ESRGAN \cite{ESRGAN}), which outperform most traditional methods.
Specifically for the face image super-resolution task, many types of face priors are combined with and applied to deep-learning-based SR methods. Most of them learn a direct mapping with a variety of regularizations, including facial masks \cite{fsrnet}, attributes \cite{attributes}, etc.
\begin{figure}[tp]
\centerline{
\includegraphics[width=0.49\textwidth,height=0.21\textwidth]{images/introduction2.png}} %
\caption{The reconstruction results of our PCA-SRGAN, obtained by incremental discrimination along with the gradually increasing subspace. The corresponding subspace dimensions and energy proportions are listed above the images.
} %
\label{Fig 1} %
\end{figure}
Compared to CNN-based models, which aim at lower distortion, GAN-based SR models have demonstrated superior perceptual performance. An extra perceptual loss, such as the VGG loss \cite{perceptual,SRGAN}, is often added to regularize the generator for more stable results with fewer artifacts.
However, GAN-based models are still weak at generating realistic and precise texture and easily produce fake details, especially for face images, which leads to distortion.
To enhance the performance of GAN-based models, cumulative learning is considered an effective strategy and has been introduced to the super-resolution task to lower the reconstruction difficulty.
Typical cumulative learning for super-resolution concentrates on incrementally growing the network structure -- progressively growing both the networks and the upscale factors \cite{ProSR} or pyramid levels \cite{LAPSRN}.
Different from this kind of cumulative training, in which the whole network grows across stages, we can maintain the network structure and seek a new cumulative training method for adversarial learning --- incrementally
discriminating different levels of information or components for easier and more precise reconstruction.
As one of the classical statistical methods, principal component analysis (PCA) \cite{PCA} can give a perfect hierarchical representation of face images, namely the orthogonal projections on the PCA orthogonal subspace.
We can train the GAN-based SR model by discriminating the projections of face images on the PCA subspace cumulatively to reconstruct the super-resolved images.
Therefore, we propose a novel cumulative discrimination and reconstruction approach in this paper, referred to as \textit{PCA-SRGAN}, and focus on \textit{Incremental Orthogonal Projection Discrimination} in the PCA orthogonal subspace.
The orthogonal projections are obtained by projecting both the generated SR images and the HR images into the PCA subspace. The corresponding orthogonal projections, containing hierarchical facial components from structure to details, are fed into the discriminator to guide the precise optimization and fine-grained reconstruction of the generator.
In this way, our proposed model lightens the difficulty of discrimination, stabilizes the adversarial training procedure, and enhances the performance of the generator, which ensures the ability to produce perceptual and realistic texture while bringing a relatively low level of distortion and artifacts without the help of any auxiliary perceptual regularization.
An example of the cumulative training process is shown in Figure 1. With the growth of the subspace proportion, our model gradually generates super-resolved faces with more realistic details.
In summary, the main contributions of this work are presented as follows:
\begin{itemize}
\item We propose a novel cumulative learning strategy for GAN-based SR methods, utilizing incremental orthogonal projection discrimination in the PCA subspace to enhance the face SR task.
\item Our model provides a precise and stable adversarial training method without the need for auxiliary perceptual (VGG) loss, which achieves compelling visual effect with better perception-distortion trade-off than other methods.
\end{itemize}
\begin{figure*}[tp]
\centerline{
\includegraphics[width=0.80\textwidth,height=0.175\textwidth]{images/pca_projection_3.png}} %
\caption{
Visualization of different proportions of principal component projections, which have been added to the average face $\overline{x}$. With the growth of the proportion, the projections contain increasing facial information from coarse to fine.
} %
\label{Fig 1} %
\end{figure*}
\section{Related Works}
\subsection{Face super resolution}
Face super-resolution is a special case of single image super-resolution (SISR).
The SISR problem has been extensively studied in the literature using a variety of deep-learning-based models. SRCNN, EDSR \cite{EDSR}, and SRResnet employ deep convolutional networks with various structures for super-resolution. Many works, e.g. SRGAN, ESRGAN, EPSR \cite{EPSR}, and RankSRGAN \cite{RankSRGAN}, also introduce the generative adversarial network to produce perception-oriented results.
The PIRM2018 challenge \cite{PIRM} has also been held to evaluate SR methods by jointly quantifying accuracy and perceptual quality, namely the trade-off between distortion and perception.
Based on the commonly used SISR methods above, facial priors are often exploited and combined for face super-resolution.
Many types of priors, such as masks, landmarks, and heatmaps, have been applied to face super-resolution \cite{mask,component}.
Yang et al. decompose face images into facial components which can be super-resolved separately and fused into the complete face. Bulat et al. learn to localize facial landmarks as a constraint for face super-resolution \cite{landmarks}. Chen et al. employ an extra network to predict priors such as facial masks to enhance the performance for very low-resolution faces \cite{fsrnet}. Besides the structural priors mentioned above, statistical information has been explored for face hallucination. Principal component analysis is a classical statistical prior, especially useful for human face problems. Wang et al. replace the base vectors of the PCA dictionary for LR images with those for HR images and learn the coefficient mapping between LR and HR \cite{Eigentrans}. In the view of the multi-scale space-frequency domain, Huang et al. design a wavelet-based network to reconstruct facial high-frequency information \cite{WaveletSR}.
Among all kinds of face priors, PCA contains all the information necessary to build a real face and can be applied to end-to-end networks easily.
\subsection{Cumulative learning}
Cumulative learning is an effective strategy for complex learning tasks, which refers to starting from an easier subtask and gradually increasing the task difficulty. It is well suited to image generation tasks, including image super-resolution.
ProGAN \cite{ProGAN} enlarges the network and simultaneously increases the resolution of generated images for easier convergence and better image generation with GAN models. Similarly, Wang et al. perform SR starting from $2\times$ upsampling and then blend in the portions of $4\times$ or larger scaling factors \cite{ProSR}, which is progressive in both architecture and training procedure. Kim et al. \cite{ProFSR} introduce an attention mechanism to progressive face SR for very small face images.
Different from progressive methods that increase the up-sampling scale,
LapGAN \cite{LAPGAN} and LapSRN \cite{LAPSRN} decompose the generation process by means of the Laplacian pyramid and train the generator progressively at every level of the pyramid.
Compared to the common training procedure, the curriculum learning strategy can greatly reduce the training difficulty.
Inspired by this, we propose a new type of cumulative learning -- incrementally learning the discrimination of the PCA projections of face images.
\section{Proposed Method}
GAN-based models perform weakly at super-resolving realistic and perceptually convincing face images.
To address this issue, we lower the learning difficulty of the GAN by incrementally discriminating the SR image's projection on the subspace spanned by the elements of the PCA dictionary generated for HR images.
In this section, we first introduce the properties of the orthogonal projection on the principal component space, then describe the model structure and loss functions. Finally, the overall algorithm is summarized.
\subsection{PCA orthogonal projection}
Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.
Consider a set of HR face samples $\left \{ x_{1}^{h},x_{2}^{h},x_{3}^{h},...,x_{m}^{h}\right \}$, where $x_{i}^{h}$ is an image vector containing $n$ pixels and $m$ is the number of samples.
First, all samples are centered by zero-mean normalization.
\setlength\abovedisplayskip{0.002\textwidth}
\setlength\belowdisplayskip{0.002\textwidth}
\begin{align}
\begin{split}
X = \left [ x_{1}^{h} - \overline{x},x_{2}^{h} - \overline{x},...,x_{i}^{h} - \overline{x} \right ],
\end{split}
\end{align}%
where $\overline{x}$ denotes the average face.
We construct the covariance matrix and perform its eigendecomposition.
\setlength\abovedisplayskip{0.002\textwidth}
\setlength\belowdisplayskip{0.002\textwidth}
\begin{align}
\begin{split}
XX^{T}p_{i}=\lambda_{i} p_{i}.
\end{split}
\end{align}
The projection matrix can be obtained from the eigenvectors of the covariance matrix.
\setlength\abovedisplayskip{0.001\textwidth}
\setlength\belowdisplayskip{0.001\textwidth}
\begin{align}
\begin{split}
P = (p_{1},p_{2},...,p_{d}),
\end{split}
\end{align}
where $p_{i}$ is the $i$th eigenvector and $\lambda_{i}$ denotes the corresponding eigenvalue.
A sample can be projected into the orthogonal PCA subspace to obtain its orthogonal projection.
The dimension of the orthogonal subspace is determined by the number of vectors in the projection matrix.
As shown in Figure $2$, the principal components, or equivalently the orthogonal projections of a face image on different dimensions of the orthogonal subspace, contain different levels of facial information.
It is therefore reasonable to feed these projections, which represent the face from structure to details, into the discriminator to improve the reconstruction of super-resolved faces.
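To make the construction above concrete, here is a minimal numpy sketch (on random stand-in data, with illustrative names and sizes, not the paper's implementation) of building $P$, splitting it into $P_{W}$ and $P_{V}$, and projecting a sample:

```python
import numpy as np

# Minimal sketch of the PCA subspace split described above, on random
# stand-in data; names and sizes are illustrative.
rng = np.random.default_rng(0)
m, d = 50, 16                      # 50 samples, each a 16-"pixel" vector
samples = rng.normal(size=(m, d))
mean_face = samples.mean(axis=0)   # the average face, x-bar
X = (samples - mean_face).T        # d x m matrix of centered sample columns

# Eigendecomposition of the covariance X X^T, sorted by decreasing eigenvalue;
# the rows of P are the principal directions p_1, ..., p_d.
eigvals, eigvecs = np.linalg.eigh(X @ X.T)
P = eigvecs[:, np.argsort(eigvals)[::-1]].T

n_w = 6                            # current dimension of the subspace W
P_W, P_V = P[:n_w], P[n_w:]        # W: first n_w components; V: the rest

x = X[:, 0]                        # one centered sample
x_W, x_V = P_W @ x, P_V @ x        # orthogonal projections on W and V
```

The two subspaces are orthogonal complements ($P_{W}P_{V}^{T}=0$), and the projections jointly reconstruct the sample ($P_{W}^{T}x_{W}+P_{V}^{T}x_{V}=x$), which is what lets the shrinking subspace $V$ act as a repository for the residual information.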
\subsection{Incremental orthogonal projection discrimination}
\begin{figure*}[t]
\centerline{
\includegraphics[width=0.88\textwidth,height=0.45\textwidth]{images/model.png}} %
\caption{The pipeline of PCA-SRGAN. The progress bar reveals the changing process of the subspaces; the part above it illustrates the discrimination or constraints applied to the orthogonal projections on the subspaces.
} %
\label{Fig 1} %
\end{figure*}
The pipeline is illustrated in Figure $3$. We perform PCA on the HR dataset and obtain the orthogonal space spanned by the PCA projection matrix $P$. The orthogonal space can be split into two orthogonal complementary subspaces $W$ and $V$, determined by $P_{W}$ and $P_{V}$ respectively. $P_{W}$ is a projection matrix containing the first $n$ vectors of $P$, and $P_{V}$ contains the remaining vectors. $G$ and $D$ denote the generator and discriminator, respectively.
Given a generated SR image $x^{s}$, normalized by subtracting the mean, $x_{W}^{s}=P_{W}x^{s}$ is the orthogonal projection of $x^{s}$ on the subspace $W$, and
$x_{V}^{s}=P_{V}x^{s}$ is the orthogonal projection of $x^{s}$ on the subspace $V$. The same operations are applied to the HR ground truth $x^{h}$.
$x_{W}^{s}$ is sent to the discriminator to guide the adversarial learning for accurate face SR reconstruction. Both $x_{W}^{s}$ and $x_{V}^{s}$ are constrained by the L1 loss.
The LR input is bicubic-upsampled and used as a condition to assist the discrimination of the GAN.
We also illustrate the incremental training procedure with the progress bar shown in the lower part of Figure $3$.
The vernier slides from the left to the right side during training, which means the subspace $W$ expands from zero to almost the full space, while the opposite holds for the subspace $V$. Corresponding to the visualized projections in Figure $2$, the growing subspace $W$ absorbs more and more detailed and precise face components for discrimination, which helps reduce the learning difficulty of the discriminator and lets the generator capture the distribution of the original HR data progressively.
Meanwhile, the shrinking subspace $V$ is regarded as a repository that stores the constrained residual information to be added into $W$.
This curriculum learning strategy guarantees a stable and precise learning process and enables the synthesis of compelling facial details without visible distortion.
Note that the subspace $W$ is increased to $99.9\%$ of the full space; the last $0.1\%$ can be treated as random noise space that need not be regularized by the GAN model.
\noindent
{\bfseries{Network structure}}\quad
The proposed model consists of only two networks, the generator and the discriminator, which form the basic structure of a GAN.
As in the baseline ESRGAN, RRDBNet is used as the generator, combining a multilevel residual network and dense connections without BN layers, and the discriminator contains eight convolutional layers and two linear layers that produce the classification value.
\subsection{Loss functions}
We use a pixel-wise L1 loss and a GAN loss, without any auxiliary feature-based or perceptual loss, to obtain the final results. The GAN loss is applied to the orthogonal projection on the subspace $W$.
\noindent
{\bfseries{Conditional GAN loss}}\quad
LSGAN \cite{LSGAN} is applied as the basic GAN loss because it provides smooth gradients, which are more suitable for incremental training than the vanilla GAN.
Based on the relativistic discrimination used by ESRGAN, we further lower the difficulty of discriminating the orthogonal projections by introducing the conditional LR input. The input LR images are bicubic-upsampled and concatenated with the HR or SR images to construct $\left \{x_{W}^{h},x_{W}^{c} \right\}$ and $\left\{x_{W}^{s},x_{W}^{c} \right\}$, which are then sent to the discriminator.
We formulate the loss of the generator as:
\setlength\abovedisplayskip{0.002\textwidth}
\setlength\belowdisplayskip{0.002\textwidth}
\begin{align}
\begin{split}
L_{G}=&E_{x_{W}^{s}}[(D(\left\{x_{W}^{s},x_{W}^{c}\right\})- E_{x_{W}^{h}}[ D(\left \{ x_{W}^{h},x_{W}^{c} \right\})] -1)^{2}] + \\ &E_{x_{W}^{h}}[(D(\left \{x_{W}^{h},x_{W}^{c}\right\})- E_{x_{W}^{s}}[ D(\left \{ x_{W}^{s},x_{W}^{c}\right\})])^{2} ],
\end{split}
\end{align}
and the loss of the discriminator is the symmetric form:
\setlength\abovedisplayskip{0.002\textwidth}
\setlength\belowdisplayskip{0.002\textwidth}
\begin{align}
\begin{split}
L_{D}=&E_{x_{W}^{h}}[(D(\left\{x_{W}^{h},x_{W}^{c}\right\}) - E_{x_{W}^{s}}[ D(\left \{ x_{W}^{s},x_{W}^{c} \right\})] -1)^{2}] + \\ &E_{x_{W}^{s}}[(D(\left \{x_{W}^{s},x_{W}^{c}\right\})- E_{x_{W}^{h}}[ D(\left \{ x_{W}^{h},x_{W}^{c}\right\})])^{2} ].
\end{split}
\end{align}
The final GAN loss is:
\setlength\abovedisplayskip{0.002\textwidth}
\setlength\belowdisplayskip{0.002\textwidth}
\begin{align}
\begin{split}
L_{GAN}(G,D)=L_{G} + L_{D}.
\end{split}
\end{align}
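A compact numpy sketch of these relativistic least-squares losses, written directly in terms of batches of discriminator scores (a hedged stand-in for real networks; all variable names and values are illustrative):

```python
import numpy as np

# Sketch of the relativistic least-squares GAN losses above, operating on
# batches of discriminator scores instead of real networks.
def relativistic_lsgan_losses(d_real, d_fake):
    # d_real: D scores for {x_W^h, x_W^c}; d_fake: D scores for {x_W^s, x_W^c}
    L_G = np.mean((d_fake - d_real.mean() - 1) ** 2) \
        + np.mean((d_real - d_fake.mean()) ** 2)
    L_D = np.mean((d_real - d_fake.mean() - 1) ** 2) \
        + np.mean((d_fake - d_real.mean()) ** 2)
    return L_G, L_D

d_real = np.array([0.9, 1.1, 1.0])   # toy scores on HR projections
d_fake = np.array([0.1, -0.1, 0.0])  # toy scores on SR projections
L_G, L_D = relativistic_lsgan_losses(d_real, d_fake)
```

When the discriminator scores the real projections well above the fake ones (as in this toy batch), $L_{D}$ is small and $L_{G}$ is large, driving the generator to close the gap.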
\noindent
{\bfseries{Pixel-constrained L1 loss}}\quad
We apply the 1-norm distance to the orthogonal projections on the two subspaces between SR and HR images, where $\alpha$ controls the relative strength of the constraints placed on the two orthogonal projections.
\setlength\abovedisplayskip{0.005\textwidth}
\setlength\belowdisplayskip{0.005\textwidth}
\begin{align}
\begin{split}
L_{1}(G) =E_{x_{W}^{s}}[\left \| x_{W}^{s}-x_{W}^{h} \right \|_{1}]+\alpha E_{x_{V}^{s}}[\left \| x_{V}^{s}-x_{V}^{h} \right \|_{1}].
\end{split}
\end{align}
\noindent
{\bfseries{Total loss}}\quad
The total loss is
\setlength\abovedisplayskip{0.005\textwidth}
\setlength\belowdisplayskip{0.005\textwidth}
\begin{align}
\begin{split}
L_{Total}(G,D) = L_{1}(G) + \beta L_{GAN}(G,D),
\end{split}
\end{align}
where the weight $\beta$ is used to balance different loss terms.
\subsection{Algorithm summary}
Let $\phi $ denote the training set, which contains $m$ samples of LR face images $\left \{x_{1}^{l},x_{2}^{l},...,x_{m}^{l}\right \}$ and HR face images $\left \{x_{1}^{h},x_{2}^{h},...,x_{m}^{h}\right \}$.
$\bar{x}$ denotes the average face of all HR face images in the training set.
$d$ denotes the dimension of the full space, while ${d}'$ denotes the dimension of the subspace $W$ when it has increased to $99.9\%$ of the full space. The whole procedure is summarized in Algorithm~1.
\begin{algorithm}[!h]
\caption{Incremental orthogonal projection discrimination.}
\label{alg:Framwork}
\begin{algorithmic}[1]
\REQUIRE ~~\\
$\phi $, $P$, $d$, ${d}'$, $\bar{x}$, $G$, $D$; The step size $k$ for each epoch.\\
\ENSURE ~~\\
$G$.\\
\STATE Initialize the dimension of subspace $W$: $n = 0$.\\
\WHILE{$n \leq {d}'$}
\STATE Start the new epoch: $n=n+k$.
\STATE Separate the principal component vectors to construct orthogonal complementary subspace $W$ and $V$:
\quad $P_{W}=\left \{p_{1},p_{2},...,p_{n} \right \}$;
\quad $P_{V}=\left \{p_{n+1},p_{n+2},...,p_{d} \right \}$.
\FOR{each batch of samples $\left \{x^{l},x^{h}\right \}$ in $\phi $}
\STATE Generate the super-resolved face images:
\quad $x^{s}=G(x^{l})$.
\STATE Take the bicubic-upsampled $x^{l}$ as condition $x^{c}$.
\STATE Subtract the average face:
\quad $x^{s}\leftarrow x^{s}-\bar{x}$, \quad $x^{c}\leftarrow x^{c}-\bar{x}$, \quad $x^{h}\leftarrow x^{h}-\bar{x}$.
\STATE Get the orthogonal projections of $x^{s}$, $x^{c}$ and $x^{h}$ on the subspace $W$:
\quad $x_{W}^{s}=P_{W}x^{s}$, $x_{W}^{c}=P_{W}x^{c}$, $x_{W}^{h}=P_{W}x^{h}$,
\STATE Get the residues on the complementary subspace $V$:
\quad $x_{V}^{s}=P_{V}x^{s}$, $x_{V}^{c}=P_{V}x^{c}$, $x_{V}^{h}=P_{V}x^{h}$,
\STATE Apply the losses according to Eq.~(8).
\STATE Back-propagate and update $G$ and $D$.
\ENDFOR
\ENDWHILE
\RETURN $G$;
\end{algorithmic}
\end{algorithm}
\begin{figure*}[!ht]
\centerline{
\includegraphics[width=1.0\textwidth,height=0.87\textwidth]{images/result1.png}} %
\caption{Comparisons of related methods on two datasets.
} %
\label{Fig 1} %
\end{figure*}
\begin{figure*}[!htp]
\centerline{
\includegraphics[width=0.92\textwidth,height=0.29\textwidth]{images/trade-off.png}} %
\caption{
The trade-off boundaries between perception and distortion for ESRGAN, RankSRGAN and our PCA-SRGAN on two datasets.
} %
\label{Fig 1} %
\end{figure*}
\begin{table*}[]
\scriptsize
\centering
\renewcommand\tabcolsep{4pt}
\label{Tab03}
\begin{tabular}{ccccccccccccccc}
\toprule
\multirow{3}{*}{Method} & \multicolumn{4}{c}{CelebA} & \multicolumn{4}{c}{FFHQ} \\
\cmidrule(r){2-5} \cmidrule(r){6-9}
& Region 1 & Region 2 & Region 3 & Region 4 & Region 1 & Region 2 & Region 3 & Region 4 \\
\midrule
ESRGAN & \textcolor{blue}{5.46/8.98/29.95} & \textcolor{blue}{5.21/9.26/29.65} & 4.97/9.91/29.18 &4.90/10.01/29.07 & \textcolor{blue}{5.02/8.99/29.69} &\textcolor{blue}{4.74/9.34/29.31} & 4.64/9.99/28.70 &\textcolor{red}{4.54/10.19/28.59}\\
RankSRGAN & 5.54/9.00/29.86 & 5.25/9.41/29.49 & \textcolor{blue}{4.92/9.85/29.11} & \textcolor{blue}{4.89/10.01/28.90} & 5.10/8.89/29.63 & 4.79/9.35/29.28 & \textcolor{red}{4.59/9.93/28.75} & \textcolor{blue}{4.59/10.03/28.65} \\
PCA-SRGAN &\textcolor{red}{5.33/8.76/29.96} & \textcolor{red}{4.96/9.37/29.38} &\textcolor{red}{4.90/9.52/29.04} & \textcolor{red}{4.84/10.19/28.71} & \textcolor{red}{4.73/8.93/29.66} &\textcolor{red}{4.63/9.40/29.34} & \textcolor{blue}{4.60/9.87/28.77} & 4.60/10.10/28.60 \\
\midrule
Groundtruth & \multicolumn{4}{c}{4.88/0/-} & \multicolumn{4}{c}{4.68/0/-} \\
\bottomrule
\end{tabular}
\caption{The values of PI/RMSE/PSNR for the compared methods in the preset regions. We highlight the best performance in red and the second best in blue.}
\end{table*}
\section{Experiments}
\subsection{Datasets and preprocess}
We have conducted the experiments on two face datasets.
\noindent
{\bfseries{CelebA}}\quad CelebA consists of a large number of celebrity face images cropped from the web \cite{CelebA}. Face images in CelebA are close to images taken in real scenes and contain rich texture, which brings realistic perceptual effects.
\noindent
{\bfseries{FFHQ}}\quad FFHQ is a high-quality human face image dataset \cite{FFHQ} released recently, containing considerable variation in terms of age, ethnicity, and background. Compared to CelebA, the face images in this dataset are clearer and smoother with less noise.
Similar to FSRGAN, we select 18000 cropped and aligned face images of size $128\times 128$ as training sets, and an additional 100 images are used as validation sets for both CelebA and FFHQ.
PCA is applied to the training sets of these two datasets to obtain the mean and projection matrices for orthogonal projection discrimination on the corresponding subspaces.
\subsection{Implementation details}
Following SRGAN and ESRGAN, all experiments are performed with a scaling factor of $4\times$ between LR and HR images. PSNR-oriented models are first obtained by pre-training on the corresponding face datasets. We then employ the pre-trained models as an initialization for the generator and use Adam \cite{Adam} as the optimizer.
The training process contains 200 epochs, divided into two stages according to the dimension of the subspace $W$.
In the first 100 epochs, the subspace grows to $99\%$ of the full space by increasing the dimension evenly, and we set the learning rate to 0.0002 with loss weights $\alpha =1.0$ and $\beta =0.02$.
In the next 100 epochs, the subspace grows from $99\%$ to $99.9\%$ while the learning rate decays linearly from 0.0002 to 0.00001, and the weights are set to $\alpha =0.0$ and $\beta =0.05$ to strengthen the effect of the GAN loss and avoid the over-smoothing caused by the L1 loss.
With this weight setting, our model achieves an excellent visual effect, as shown in Figure 4. Furthermore, the trade-off between perception and distortion can be explored by applying a series of different weight settings.
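The two-stage growth of the subspace dimension can be sketched as a simple epoch schedule. This is our own illustrative helper, assuming the quoted percentages count dimensions (they may instead refer to energy proportions):

```python
# Hypothetical helper for the two-stage schedule described above: the
# dimension of W grows evenly to 99% of the full space over epochs 1..100,
# then from 99% to 99.9% over epochs 101..200.
def subspace_dim(epoch, d, total_epochs=200):
    half = total_epochs // 2
    if epoch <= half:
        frac = 0.99 * epoch / half
    else:
        frac = 0.99 + 0.009 * (epoch - half) / half
    return round(frac * d)

d = 128 * 128          # dimension of the full (vectorized grayscale) space
dims = [subspace_dim(e, d) for e in (1, 50, 100, 150, 200)]
```

The schedule is monotone, reaches $99\%$ of $d$ at epoch 100, and $99.9\%$ at epoch 200, matching the two stages above.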
\subsection{Evaluation}
We evaluate our PCA-SRGAN in terms of qualitative visual effects and the quantitative perception-distortion trade-off. The PI and RMSE used in the PIRM2018 challenge are chosen as the quantitative indices for perception and distortion, respectively.
PI is a no-reference image quality measure, and a lower value indicates better perceptual performance. RMSE is similar to PSNR and positively correlated with the degree of distortion.
\noindent
{\bfseries{Qualitative results}}\quad
We make comparisons with related methods, among which FSRGAN, ESRGAN and RankSRGAN have the aid of the perceptual (VGG) loss.
Figure 4 displays examples for visual comparison, with the PI and RMSE values provided as references. Considering the visual effect of whole faces, our PCA-SRGAN
outperforms previous approaches, reflected in more natural perception, clearer texture and more realistic facial components.
For instance, in all displayed examples PCA-SRGAN produces visually pleasing, sharper contours and facial components than SRResnet, which outputs blurry eyes, teeth or hair, and WaveletSR, which generates some texture but still yields smooth, unclear edges.
PCA-SRGAN obtains less distortion and generates desired texture closer to the ground truth than other GAN-based methods, including the perceptual-loss-aided models.
LapGAN-SR produces unnatural texture containing unpleasant noise (see Face \uppercase\expandafter{\romannumeral2} and \uppercase\expandafter{\romannumeral3}), which increases the distortion.
FSRGAN produces visibly warped faces in all examples, likely caused by inaccurate facial prior prediction.
ESRGAN and RankSRGAN achieve good perceptual results but our model obtains better visual effect on local patches, illustrated by clearer shape of the teeth (see Face \uppercase\expandafter{\romannumeral1},
\uppercase\expandafter{\romannumeral2}, and
\uppercase\expandafter{\romannumeral3}), more lifelike details in the eyes
(see Face \uppercase\expandafter{\romannumeral2} and \uppercase\expandafter{\romannumeral4})
as well as the vivid texture of hair (see Face \uppercase\expandafter{\romannumeral1}) and cheek (see Face \uppercase\expandafter{\romannumeral4}).
The superior visual performance also approximately agrees with the reference values below the images.
\noindent
{\bfseries{Quantitative performance}}\quad
Among all compared methods,
we mainly seek and compare the boundaries of the PI-RMSE trade-off for three models --- RankSRGAN, ESRGAN and our PCA-SRGAN, which show obviously better visual quality in Figure 4 than the other models.
We choose a number of network weights that yield the PI values on the border over a certain range of RMSE which has been divided into four regions following the rules of the PIRM2018 Challenge.
The PI and RMSE values construct the plotted points and we draw a trend line along the boundaries in the Figure 5. For each model the best PI value in each region is also selected in the Table 1 for comparison.
In general, our model achieves a superior trade-off compared to ESRGAN and RankSRGAN.
For the CelebA dataset, PCA-SRGAN surpasses them with a much better trade-off boundary over all regions.
For the FFHQ dataset, PCA-SRGAN performs comparably, ranking 1st on the first two regions and 2nd on Region 3.
The slight weakness in Regions 3 and 4 on this dataset is also reasonable:
referring to the PI values of the ground truth in Table 1, our model, which aims to capture enough components of the HR face, converges in perceptual index and does not drive the PI far below that of the ground truth.
\section{Ablation study}
We have conducted experiments on the CelebA dataset as the ablation study. All of them share the same network structure, learning rate and training process.
\begin{figure}[!htp]
\centerline{
\includegraphics[width=0.50\textwidth,height=0.41\textwidth]{images/Ablation_study.png}} %
\caption{
Overall comparisons for showing the effects of each part in
PCA-SRGAN. Each column represents a model with its configurations in the top.
} %
\label{fig:ablation} %
\end{figure}
\begin{figure}[!h]
\centerline{
\includegraphics[width=0.50\textwidth,height=0.3\textwidth]{images/Ablation_study_2.png}} %
\caption{
The comparison of quantitative Perception-Distortion trade-off. The GAN-based configurations can be plotted as the curves for the trade-off and other two configurations are plotted as single points in the figure.
} %
\label{fig:tradeoff} %
\end{figure}
The overall visual comparison is illustrated in Figure 6.
With only the Pixel-L1 loss (see the 1st column), the model generates a smooth face without any texture. On the basis of the Pixel-L1 loss, clear edges are produced by the cumulative learning of the PCA projection (see the 2nd column). Combining the L1 loss with a GAN loss applied to the fixed 99.9\% PCA projection improves the perceptual performance, but the GAN loss brings obvious noise and distortion (see the 3rd column).
Finally, under the same weighting of the GAN regularization, our model with cumulative learning obtains better visual performance and generates realistic texture (see the 4th column).
Furthermore, a quantitative comparison of the perception-distortion trade-off for the GAN-based configurations has also been given to validate the cumulative learning strategy. As shown in Figure 7, the model with cumulative discrimination (the red curve) clearly surpasses the same model without the cumulative strategy (the green curve), with a much better trade-off boundary over all regions.
In summary, the cumulative learning of the PCA projection together with the adversarial discrimination demonstrates convincing effectiveness in enhancing the face super-resolution task.
\section{Conclusion}
In this paper, a method named PCA-SRGAN using incremental orthogonal projection discrimination is proposed to enhance the performance of GAN on face SR task.
We perform cumulative discrimination of orthogonal projections on PCA subspace to reduce the training difficulty and achieve precise and stable reconstruction without the help of perceptual (VGG) loss.
Qualitative and quantitative comparisons have revealed the compelling performance of our model on super-resolving realistic faces and on the trade-off between perception and distortion. In future research, we will develop our model by introducing other types of dictionary space or applying it to other image processing tasks.
\vfill\pagebreak
\bibliographystyle{named}
\section{Motivation}
The Atiyah-Singer index theorem \cite{AtSi71} relates the topological charge of
the background gauge field configuration to the number of fermionic zero-modes
of the Dirac operator. The theorem states a particular connection between the
background gauge field and the fermionic fields. It is a theorem derived on
differentiable manifolds, however. Quantum field configurations and in
particular lattice fields are non-differentiable. Hence it is of interest to
study on the lattice the topology of the gauge background as well as the
properties of the zero-modes of the Dirac operator.
Major progress in understanding the manifestation of the index theorem on the
lattice was the realization that the eigenvectors of a $\gamma_5$-hermitian
lattice Dirac operator with real eigenvalues should be interpreted as the
lattice counterparts of the continuum zero-modes. This can be understood from
the fact that only eigenvectors $\psi$ with real eigenvalues can have
non-vanishing pseudoscalar matrix elements $\langle\psi|\gamma_5|\psi\rangle$,
like the zero-modes in the continuum.
Continuum QED in 4D in its usual non-compact realization for the gauge field
has no topological charge. For the lattice formulation with compact
representation of the gauge fields on the other hand the situation changes.
Lattice QED exhibits a twofold phase structure: a physical one, containing the
massless photon and a confining phase with properties similar to QCD. Some
phenomena appearing in the confinement phase are: zero modes, magnetic
monopole-antimonopole pairs and the occurrence of a non-zero chiral condensate.
Chiral symmetry breaking is a key feature of the theory of strong interactions,
QCD. The order parameter is the chiral condensate $\langle \bar{\psi}\psi
\rangle$; it is related to the density of eigenvalues $\rho (\lambda)$ of the
Dirac operator near the origin via the Banks-Casher relation \cite{BaCa80}. As
compact lattice QED shows a confining phase for certain values of the coupling
constant one also expects \cite{SaSe} and indeed observes
\cite{SaSe} the appearance of a chiral condensate.
For a study of zero-modes and chiral symmetry breaking it is advisable to work
with chirally symmetric Dirac operators. On the lattice chiral symmetry is
realizable in a locally violated form, expressed through the Ginsparg-Wilson
relation \cite{GiWi82} which in its original form reads:
\begin{equation}
\gamma_5 \,\mathcal{D}+\mathcal{D}\,\gamma_5=2\,a\,\mathcal{D}\,\gamma_5\, R\,\mathcal{D}\;. \label{eq:gw2}
\end{equation}
where $\mathcal{D}$ denotes the massless Dirac operator, $a$ is the lattice spacing and
$R$ a local function of the gauge fields. In the continuum limit $\mathcal{D}$
anticommutes with $\gamma_5$ thus showing chiral symmetry. Dirac operators satisfying
the Ginsparg-Wilson relation preserve a lattice version of chiral symmetry \cite{Lu98}
without fermion doubling. For a given local operator $R$ there may be many Dirac
operators. The eigenvalue spectrum of a Dirac operator for $R=\frac{1}{2}$ lies on a
circle of radius $1/a$ centered at $1/a$, and complex eigenvalues come in complex conjugate
pairs. Only for real $\lambda$ the expectation value of $\gamma_5$ between the
eigenstates $\bra{\psi_\lambda}\gamma_5 \ket{\psi_\lambda}$ is non-zero. Therefore
exact zero-modes have definite chirality.
Although there are several realizations of the Dirac operator satisfying the
Ginsparg-Wilson relation approximately, only the numerically costly overlap
operator \cite{NaNe} is an exact realization. Within QCD there have been
several studies on the density of near-zero-modes and the properties of exact
zero-modes and their relationship to topological excitations (see e.g.
\cite{GaGoLa01,Ga02a,HoDoDr02,Ga03}).
In compact lattice QED there have been studies with the overlap operator
\cite{BeHeMa01} demonstrating the existence of zero-modes in the confined
phase and relating the density of the near-zero-modes to the universal distributions
from Random Matrix Theory. Here we want to continue the study of such modes with an
emphasis on the chirality and the localization properties of exact zero-modes and
possible correlations to topological structures like monopoles.
\section{Formalism}
The Wilson gauge action reads
\begin{equation}
S[U]=\beta \sum_{x, \mu>\nu} (1-\cos \theta_{x,\mu\nu})\;,
\end{equation}
where the gauge fields are represented by group elements
$U_{x,\mu}=exp(i\theta_{x,\mu})\in U(1)$ with $\theta_{x,\mu}\in (-\pi,\pi]$. The
plaquette angles are given by $\theta_{x,\mu\nu} =
\theta_{x,\mu}+\theta_{x+\hat{\mu},\nu}- \theta_{x+\hat{\nu},\mu}-\theta_{x,\nu}\in
(-4\pi,4\pi)$. This compact realization of the gauge fields and the action introduces
higher order self-interactions. The pure gauge theory has a confinement phase for small
$\beta$ and a Coulomb phase above a phase transition which for this action is close to
$\beta\approx 1$ and weakly 1st order \cite{CaCrTa,ArLiSc01}
(for a recent study of the phase transition suggesting a new
order parameter cf. Ref. \cite{VeFo04}). One observes an
abundance of monopoles in the confined phase. Monopoles in compact $U(1)$ are defined
following Ref. \cite{DeTo80} from deficit angles of the plaquettes bordering 3-cubes,
corresponding to links on the dual lattices. Due to current conservation the links form
closed loops (better: networks \cite{KeReWe94}) on the dual lattice.
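The DeGrand--Toussaint construction can be illustrated with a minimal sketch (our notation, not the authors' code): each plaquette angle is split into a physical part in $(-\pi,\pi]$ and an integer number of Dirac strings, and the monopole charge of a 3-cube is the net number of strings through its oriented faces.

```python
import numpy as np

def reduce_angle(theta):
    """Split a plaquette angle as theta = theta_bar + 2*pi*n with
    theta_bar in (-pi, pi]; n counts Dirac strings through the plaquette."""
    n = int(np.ceil((theta - np.pi) / (2.0 * np.pi)))
    return theta - 2.0 * np.pi * n, n

def cube_monopole_charge(face_angles, orientations):
    """DeGrand-Toussaint monopole number of a 3-cube: net number of
    Dirac strings through its six oriented faces."""
    return sum(o * reduce_angle(t)[1] for t, o in zip(face_angles, orientations))

# A cube with one face carrying more than half a flux quantum: one string.
faces = [3.5, 0.1, -0.2, 0.05, 0.0, -0.1]   # plaquette angles of the six faces
orient = [+1, -1, +1, -1, +1, -1]           # outward orientations
q = cube_monopole_charge(faces, orient)
```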
The massless overlap operator $D_{\textrm{\scriptsize ov}}$ may be written
\begin{equation}
\mathcal{D}_{\textrm{\scriptsize ov}}= 1+\gamma_5\,\epsilon(H) \;, \label{eq:of1}
\end{equation}
where $\epsilon$ denotes the operator $sign$-function and $H=\gamma_5 (s-H_{0})$ is some
Hermitian Dirac operator, constructed from an arbitrary Dirac operator. It is convenient
to use for $H_0$ the usual Wilson Dirac operator with negative mass term and $s$ is a
real parameter which can be adjusted such as to optimize the convergence in the
construction of $\epsilon(H)$. If $H_0$ is already an overlap operator then
$\mathcal{D}_{\textrm{\scriptsize ov}}=H_0$.
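A toy numerical check (not a lattice implementation) of the construction in Eq. (\ref{eq:of1}): building $\epsilon(H)$ by eigendecomposition of an arbitrary Hermitian stand-in for $H$, the resulting operator satisfies the Ginsparg--Wilson relation with $R=\frac{1}{2}$ (in units $a=1$) and its spectrum lies on the circle $|\lambda-1|=1$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 4x4 setup: "gamma_5" and an arbitrary Hermitian stand-in for
# H = gamma_5 (s - H_0); this is a consistency check, not lattice code.
g5 = np.diag([1.0, 1.0, -1.0, -1.0])
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2.0

def matrix_sign(H):
    """Operator sign function via eigendecomposition: eps(H) = H / sqrt(H^2)."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.sign(w)) @ v.T

eps = matrix_sign(H)
D = np.eye(4) + g5 @ eps          # massless overlap operator, a = 1

# Ginsparg-Wilson relation with R = 1/2:  g5 D + D g5 = D g5 D
lhs = g5 @ D + D @ g5
rhs = D @ g5 @ D
```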
For a $U(1)$ gauge theory on manifolds with torus topology there is no topological
charge in the sense of a non-vanishing Pontryagin index. So the Atiyah-Singer index
theorem may be realized only trivially, i.e. allowing for a cancellation between the
numbers of left-handed and right-handed zero-modes. However, on the lattice we have
non-differentiable fields and therefore we may expect other violations as well.
It has been demonstrated in QCD that for instanton configurations the Dirac operator
shows a zero-mode localized in space-time \cite{GaGoLa01,GaGoRa01,GaLa01,Ga02a}. To
our knowledge nothing is known about the localization properties of zero-modes in
QED.
In order to study the localization properties a simple set of gauge invariant
quantities has been introduced. For an eigenvector $\psi(x)$ of the lattice Dirac
operator one defines a local density
\begin{equation}
p_\sigma(x)=\sum_{d}\psi (x)^*\Gamma_\sigma\psi (x) \;,
\label{eq:loc1}
\end{equation}
where $\Gamma_\sigma$ is an element of the Clifford algebra and the local
sum runs over the Dirac indices. The
eigenvectors are normalized to unit norm, i.e.
\begin{equation}
\sum_{x} p_0(x) =\langle\psi^{\dag} \psi \rangle=1\;. \label{eq:loc3}
\end{equation}
For the cases of particular interest we abbreviate the scalar density $p(x)=p_0(x)$
(for $\Gamma_0$ the unit matrix) and the chiral density $p_5(x)$ (for $\gamma_5$). We
also determined the vector densities $p_\mu$ (for $\Gamma=\gamma_\mu$), the axial
vector densities $p_{5\,\mu}$ (for $\Gamma=\gamma_5\gamma_\mu$) and the tensor
densities $p_{\mu\nu}$ (for $\Gamma=\gamma_\mu\gamma_{\nu\neq\mu}$).
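The densities can be computed directly from the eigenvector components. The following sketch (our notation) also verifies the pointwise bound $p(x)\ge|p_5(x)|$ discussed below, and the equality $p(x)=p_5(x)$ for a chiral spinor field:

```python
import numpy as np

g5 = np.diag([1.0, 1.0, -1.0, -1.0])     # gamma_5 in the chiral representation

def densities(psi):
    """Scalar density p(x) and chiral density p_5(x) of a spinor field.

    psi: complex array of shape (V, 4), one Dirac spinor per site.
    """
    p = np.einsum('xa,xa->x', psi.conj(), psi).real
    p5 = np.einsum('xa,ab,xb->x', psi.conj(), g5, psi).real
    return p, p5

rng = np.random.default_rng(2)
psi = rng.normal(size=(10, 4)) + 1j * rng.normal(size=(10, 4))
psi /= np.linalg.norm(psi)                # normalization: sum_x p(x) = 1
p, p5 = densities(psi)

# A chiral (right-handed) spinor field: p(x) = +p_5(x) locally.
chi = psi.copy()
chi[:, 2:] = 0.0
chi /= np.linalg.norm(chi)
pc, pc5 = densities(chi)
```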
Exact zero-modes are also eigenmodes of $\gamma_5$ with eigenvalues $\pm 1$.
Therefore for such eigenmodes we expect even locally
\begin{equation} \label{pp5relation}
p(x)=\pm p_5(x)\;.
\end{equation}
The integral over the chiral density provides a measure for the amount of chiral
symmetry breaking because it takes on its largest value for the zero-modes, namely $\pm
1$ for normalized eigenvectors, and vanishes otherwise.
The so-called \emph{inverse participation ratio} (IPR) is introduced for further
quantification of the localization (see for example Ref. \cite{GaGoLa01}). We define it for
both, $p$ and $p_5$:
\begin{equation}\label{eq:loc4}
I=V\sum_{x}\;p(x)^2\;,\;\;
I_5=V\sum_{x}\;p_5(x)^2,
\end{equation}
where $V$ is the lattice volume.
Assume that the density is evenly distributed on a subvolume $V_f$ with $p(x)=1/V_f$
and vanishing elsewhere. Then we find $I=V/V_f$, with the limiting cases $I=1$ for
distribution over the whole lattice and $I=V$ for localization at one point. It is
essentially the inverse fraction of the volume contributing to the mode. This makes it
an appropriate measure for the localization of eigenmodes.
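A short numerical illustration of Eq. (\ref{eq:loc4}) and its limiting cases (uniform mode, point-like mode, and a mode evenly spread over half the volume):

```python
import numpy as np

def ipr(p):
    """Inverse participation ratio I = V * sum_x p(x)^2 of a normalized density."""
    p = np.asarray(p, dtype=float)
    return p.size * float(np.sum(p**2))

V = 16
uniform = np.full(V, 1.0 / V)                 # spread over the whole lattice
point = np.zeros(V); point[3] = 1.0           # localized on a single site
half = np.zeros(V); half[:V // 2] = 2.0 / V   # evenly spread on half the volume
```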
Let us try to understand why the inverse participation ratio $I_5$ may be considerably
smaller than $I$ for near-zero-modes, while the two agree for exact zero-modes. The local Dirac sum may
be written as a sum of two positive terms $p(x)=p_+(x)+p_-(x)$ where
$p_5(x)=p_+(x)-p_-(x)$. Therefore $p(x)\ge |p_5(x)|$. Due to this property $I_5$ is
bounded for all eigenmodes: $I \ge I_5$. For exact zero-modes we have $I=I_5$ due to
(\ref{pp5relation}).
Consider now eigenmodes corresponding to eigenvalues far from the origin: $p_5(x)$ will
be much smaller than $p(x)$ and fluctuate around zero. In this case $I_5$ is expected
to be significantly smaller than $I$. When the eigenvalues approach zero $I_5$
increases and the ratio $I_5/I$ is expected to approach $1$.
We also determine different densities $p_\sigma$ for exact zero-modes of
chirality +1 or -1. This amounts to checking relations like e.g.
\begin{equation}
\braket{\bar{\psi}}{\gamma_\mu\gamma_5\psi}=\pm
\braket{\bar{\psi}}{\gamma_\mu\psi}\;,
\end{equation}
which we could verify.
\section{Simulation and results}
The background gauge fields were obtained by using the Metropolis and overrelaxation
updating algorithm. All configurations are well decorrelated, separated by $5000$
updating sweeps. We analyzed $400$ configurations on the $4^4$, $6^4$ and $8^4$
lattices and $100$ on $12^4$ lattices. For the computation of the eigenvalues and
eigenvectors the so-called implicitly restarted Arnoldi method \cite{SoLeSoYa}
was used. The overlap operator was computed by an appropriate series expansion. Only
the smallest $10$ eigenvalues and their eigenvectors were calculated. Real eigenmodes
at zero have exact values $\braket{\bar{\psi}}{\gamma_5\psi} = \pm 1$.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\parbox[b][7.1cm][t]{6cm}{\includegraphics*[width=6cm]{09coll.eps}}&
\includegraphics*[width=6cm]{099coll.eps}
\end{tabular}
\end{center}
\caption{The plots show the normalized histogram
(probability distribution) for the topological charge
$\nu$ for the configurations at
$\beta=0.9$ (left-hand plot) and $\beta=0.99$ (right-hand plot).
\label{fig-zmd}}
\end{figure}
In the confined phase many exact zero-modes were observed whereas no zero-modes
appeared in the Coulomb phase. Zero modes are chiral eigenmodes and are related to the
topological charge via the Atiyah-Singer index theorem. In our discussion we therefore
identify the number of zero-modes with positive or negative chirality in a
configuration with the positive or negative topological charge $\nu$. We never observed
configurations which have zero-modes of different chirality. To our knowledge there is
no formal theorem explaining this feature in 4D (there exists such a vanishing theorem
in 2D \cite{AnKiNiSc}).
In Fig.\ \ref{fig-zmd} we show the number $\nu$ of exact zero-modes with chirality
$\pm 1$ for two different values of $\beta$. We observe obvious volume dependence. The
number of zero modes as well as the average percentage of zero-mode configurations
compared to all configurations grows with the volume. The integrated number of
zero-modes (i.e. adding up all $|\nu|$) $n_{zeros}/n_{configs}$ reaches values
$\mathcal{O}(0.9)$ for the large lattices. This can be observed for both $\beta=0.99$
and $\beta=0.9$. On $4^4$ lattices at $\beta=0.99$ we found only 3 such modes on all
400 configurations (compared to e.g. 24 zero-modes on $6^4$). This small number can
possibly be attributed to the situation that the pseudocritical point (i.e. the peak
position of the specific heat and other cumulants) on smaller lattices effectively
occurs already at smaller $\beta$-values and thus we may have been already in or at
least closer to the Coulomb phase for that lattice size. For $6^4$ the position of the
pseudocritical point is near 1.002 whereas for $12^4$ it is near 1.010, cf. Ref.
\cite{ArLiSc01}.
\begin{table}
\begin{tabular}{rrrrrrl}
\hline
$l$ &
$\#_{conf}$ &
$n(\nu=0)$ &
$n(\nu=1)$ &
$n(\nu=2)$ &
$n(\nu=3)$ &
$10^4\times\langle \nu^2\rangle/V$\\
\hline
4 & 400 &397 & 3 & 0 & 0 &\quad 0.29(17)\\
6 & 400 &376 & 24 & 0 & 0 &\quad 0.46(9)\\
8 & 500 &323 &173 & 4 & 0 &\quad 0.92(6)\\
12 & 100 & 36 & 44 &15 & 5 &\quad 0.72(10)\\
\hline
\end{tabular}
\caption{\label{tab:susc}
Summary of configuration number with topological charge for $\beta=0.99$
and the corresponding topological susceptibility.}
\end{table}
Table\ \ref{tab:susc} summarizes the results for $\beta=0.99$ together with the
topological susceptibility. Its behavior is compatible with
approaching a constant for large volumes.
Following the definitions from Refs. \cite{DeTo80,GrJaJe85} we determined the monopoles
(which are closed loops on the dual lattices). We find no correlation bet\-ween the
monopole density and the number of exact zero-modes on the corresponding
configurations. For $\beta=0.9$ the monopole density is 0.19(1) for both lattices
sizes $6^4$ and $8^4$, for $\beta=0.99$ it varies from 0.118 to 0.121 for $|\nu|$
ranging from 0 to 3 on the largest lattice size $12^4$. For $\beta>\beta_c$ the density
decreases exponentially. We see no correlation between the number of zero-modes and
the integrated monopole loop lengths at $\beta=0.9$ and 0.99; this observation
confirms \cite{BeHeMa01}.
The scaling of the density of small near-zero-modes in the sector of different zero
mode number has been compared with Random Matrix Theory expectations in Ref.
\cite{BeHeMa01}. The universality class was identified to be the unitary
ensemble. We also studied the densities for our data and agree with these findings.
Due to our choice of a different offset parameter value $s$ in the Wilson action
entering the overlap operator construction, the condensate value (and with it the
scaling parameter) has to be renormalized multiplicatively \cite{KiNaNe98} for the
comparison.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics*[width=6cm,clip=true]{rho_b090_L8_finest_pluscurve.eps}
\includegraphics*[width=6cm]{rho_b103_b110_L8.eps}
\end{tabular}
\end{center}
\caption{We compare the densities of the smallest eigenvector for $\beta=0.9$
(in the $\nu=0$ sector) and $\beta=1.03$ and $1.1$ for lattices size $8^4$.
\label{fig:density}}
\end{figure}
Fig. \ref{fig:density} compares the densities for the confinement region with
those for the Coulomb region, exhibiting a different behavior. In the
confinement region one expects for $\nu=0$ a distribution following the chiral
unitary ensemble of Random Matrix Theory
\cite{NiDaWe98} and we overlay a fit to that functional behavior
\begin{equation}
\rho(z)=\frac{z}{2}\,\exp{(-z^2/4)}\;.
\end{equation}
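As a consistency check (ours, not part of the original analysis), the quoted density is normalized to one and has first moment $\sqrt{\pi}$:

```python
import numpy as np

def rho(z):
    """Chiral unitary ensemble prediction for the distribution of the
    rescaled smallest eigenvalue in the nu = 0 sector."""
    return 0.5 * z * np.exp(-z**2 / 4.0)

def trapezoid(f, z):
    """Composite trapezoid rule on the grid z for sampled values f."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(z)) / 2.0)

z = np.linspace(0.0, 20.0, 200001)
norm = trapezoid(rho(z), z)        # analytic value: 1
mean = trapezoid(z * rho(z), z)    # analytic value: sqrt(pi)
```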
In the Coulomb region the results for $\beta=1.03$ and $\beta=1.1$ are compatible with
each other indicating that there is no gap, different to the observed behavior in the
deconfined phase in QCD. Our statistics is not sufficient to exclude a small gap,
though.
For the zero-modes we observe $p(x)=\pm p_5(x)$ as expected for exact GW-fermions and
thus also the IPRs agree. For chiral eigenmodes we expect local agreement of those
densities which correspond to relations for the gamma matrices like
$\gamma_1\,\gamma_2=-\gamma_3\,\gamma_4\,\gamma_5$, i.e. $p_{12}(x)=-p_{34}(x)$. Indeed
the tensor densities for the chiral modes do obey this relation within their group. The
local densities for vector and axial vector both vanish. For non-zero-modes these
relations are not valid.
In general the integrated tensor densities are comparatively small in absolute
magnitude ($0.1\ldots 0.01$) for zero-modes. For near-zero-modes all integrated vector,
and tensor densities are small ($0.1\ldots 0.01$). The integrated pseudoscalar density
was exactly $\pm 1$ as expected for exact zero-modes.
\begin{figure}[t]
\begin{center}
\includegraphics*[height=6cm]{fig_ipr_8.eps}
\end{center}
\caption{\small The graphs show the mean inverse participation ratio for the
near-zero-modes (in the $\nu=0$ sector) at $\beta=0.99$ and $\beta=1.03$ (lattice size
$8^4$): mean values of IPR (l.h.s.) and IPR-histograms (r.h.s.). The exact zero modes
in the $\nu=1$ sector have a wider distribution (shaded area) extending to higher
values; the $\langle I\rangle$ and s.d. for those is indicated by the filled circle
(arrow) and error bars in the upper l.h.s. plot. \label{fig:ipr}}
\end{figure}
The inverse participation ratio should allow some information on the space-time
localization of the eigenmodes. In Fig.\ \ref{fig:ipr} we summarize this
quantity for the modes for {\em non-zero} eigenvalues, binned according to
Im($\lambda$) and also show the IPR histograms. In the confinement region we
find no increase of the localization for the lowest bins. This observation
differs from results in QCD studies \cite{GaGoRa01}, where also the near-zero
modes show stronger localization.
In the Coulomb region no modes at very small $\textrm{Im}(\lambda)$ were observed and
the low-lying modes show small IPR, close to the minimum of 1.
Let us now concentrate on the {\em exact} zero-modes in the confining phase. These are
well localized objects with comparatively large individual IPR values up to 4.2. For
fixed $\beta$ the $\langle I\rangle$ show little if any volume dependence with values
of 1.92(29) for $L=8$ and 2.03(69) for $L=12$ (this volume independence was also
observed in recent QCD studies \cite{AuBeGo04}). These values are, however, clearly
larger than the mean values for the small non-zero modes. The difference is mainly due
to an IPR distribution extending to much higher individual IPR values, as seen in Fig.
\ref{fig:ipr}. The values of IPR for the lowest non-zero modes for the sector with no
or that with one exact zero mode agree within small errors.
In an attempt to visualize the geometrical shape of these localized zero-modes we
studied 3D cuts of the density distribution. Fig.\ \ref{fig:MCdensity} gives an example
of the observed structure. We plot the surfaces of constant density $p(x)$ for an
exact zero-mode for the eight 3D time slices ($n_t=1\ldots 8$) of the $8^4$ lattice.
The shape describes a tubular structure in 4D. However, most of the observed shapes are
less clear and often made up from disconnected pieces. Contrary to QCD \cite{GaLa01} no
clear instanton-like picture (i.e. 4D ``blobs'') evolves. The structure we observe for
the zero-mode eigenvectors are sometimes ball-like in 4D, sometimes tube-like, even
tubes closed in some direction due to the periodic boundary conditions. Although a
relation to the concept of closed monopole loops (cf. Ref. \cite{GrJaJe85}) may be
conjectured, no stringent conclusion can be drawn.
\begin{figure}[tp]
\begin{center}
$\begin{array}{c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}c}
\epsfig{file=cut8yzt1.ps,
bb=40 70 520 610, clip=, scale=0.12} &
\epsfig{file=cut8yzt2.ps,
bb=40 70 520 610, clip=, scale=0.12} &
\epsfig{file=cut8yzt3.ps,
bb=40 70 520 610, clip=, scale=0.12} &
\epsfig{file=cut8yzt4.ps,
bb=40 70 520 610, clip=, scale=0.12} \\
\epsfig{file=cut8yzt5.ps,
bb=40 70 520 610, clip=, scale=0.12} &
\epsfig{file=cut8yzt6.ps,
bb=40 70 520 610, clip=, scale=0.12} &
\epsfig{file=cut8yzt7.ps,
bb=40 70 520 610, clip=, scale=0.12} &
\epsfig{file=cut8yzt8.ps,
bb=40 70 520 610, clip=, scale=0.12} \\
\end{array}$
\end{center}
\caption{\label{fig:MCdensity}The density shape for zero-mode eigenvector of a Monte
Carlo generated gauge configuration (at $\beta=0.99$, lattice size $8^4$). The 3D cuts
are for time slices $n_t=1\ldots 4$ (first row) and $n_t=5\ldots 8$ (second row). This
zero-mode has $I=2.75$.}
\end{figure}
This evidence supports the notion, that the zero-modes are not instanton-like lumps,
but extended and periodically closed in one or more directions. In a recent study
\cite{AuBeGo04} of the zero-modes in QCD it was argued, that the scaling of $\langle
I\rangle$ with the lattice spacing may be used to identify the co-dimension of these
objects. Due to lack of results at several different lattice spacing we cannot attempt
this here. In Ref. \cite{AuBeGo04} values of $\langle I\rangle\approx 2.8$ were observed,
i.e. larger than our values $\langle I\rangle\approx 2$, indicating even stronger
localization.
In order to gain further insight, we determined for some gauge configurations the
eigenmodes of the Dirac operator applying also anti-periodic boundary conditions in the
time direction by multiplying all temporal gauge links of a timeslice with -1. This
leaves the gauge action invariant (although it is not a gauge transformation on the
finite lattice). The number of monopoles and Dirac plaquettes does not change,
either.
However, the characteristic polynomial coefficients of the eigenvalue equation for the
Dirac matrix involve traces over closed loops and thus may differ due to the
periodically closed loops. The Dirac matrix was diagonalized again for the gauge
configuration modified in this way, and the eigenvalues were compared with the ones without an
added phase: Quite often the zero-modes disappeared. Where the zero-mode survived the
transformation, the $3$-D density of the new eigenvectors had a completely different
structure. This confirms the suspicion that for $U(1)$ gauge theory the zero modes are
related to lumps extending in at least one direction. Changing the boundary conditions
in QCD sometimes also repositions topological objects
(see e.g. \cite{GaPuGaSo} and references therein) and in a recent analytical study it was
reported \cite{Ga04} that zero modes with constant curvature may be constructed in the
$U(1)$ subgroup. These modes may be ``switched'' on and off by changing the boundary
conditions. This is in agreement with our observation.
\begin{figure}[tp]
\begin{center}
$\begin{array}{c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}c}
\epsfig{file=cut8zmxyt1.ps,
bb=15 30 290 330, clip=, scale=0.25} &
\epsfig{file=cut8zmxyt2.ps,
bb=15 30 290 330, clip=, scale=0.25} &
\epsfig{file=cut8zmxyt3.ps,
bb=15 30 290 330, clip=, scale=0.25} &
\epsfig{file=cut8zmxyt4.ps,
bb=15 30 290 330, clip=, scale=0.25} \vspace{-3mm} \\
\epsfig{file=cut8zmxyt5.ps,
bb=15 30 290 330, clip=, scale=0.25} &
\epsfig{file=cut8zmxyt6.ps,
bb=15 30 290 330, clip=, scale=0.25} &
\epsfig{file=cut8zmxyt7.ps,
bb=15 30 290 330, clip=, scale=0.25} &
\epsfig{file=cut8zmxyt8.ps,
bb=15 30 290 330, clip=, scale=0.25} \\
\end{array}$
\end{center}
\caption{\label{fig:artificialmode}The density shape for zero-mode eigenvector of
the constructed gauge configuration (lattice size $8^4$).
The 3D cuts are for slices $n_t=1\ldots 4$ (first row) and $n_t=5\ldots 8$ (second row).
The inverse participation ratio for that mode is
$I=1.85$.}
\end{figure}
In 2D the Pontryagin index is constructed from $ \int d\sigma_{\mu\nu} F_{\mu\nu} $.
On the lattice for compact $U(1)$ gauge theory and with torus topology various
explicit configurations with such topological charge have been explicitly constructed
as well as observed in Monte Carlo simulations (e.g. in Ref. \cite{GaHiLaGaHi}). In 4D the
topological charge is proportional to $\int d^4x F_{\mu\nu} \tilde{F}_{\mu\nu}$ and a
possible method \cite{Ba82,SmVi87a} to construct objects with non-vanishing
topological charge is e.g. to combine two-dimensional submanifolds, both with
topological charge in the 2D definition. Intersections of 2D surfaces in 4D may be
points as well as more complicated, e.g. line-like objects. In Ref. \cite{BeHeMa01} such a
configuration, as suggested in Ref. \cite{SmVi87a}, has been studied and identified as
indeed leading to exact zero-modes. For unit topological charge (lattice size $L^4$,
$\omega=2\pi/L^2$) it has the form
\begin{eqnarray}
U_1(x)&=&\exp{(i\,\omega \,x_2)}\,,\\
U_2(x)&=&1 \quad\textrm{for~}x_2=1\ldots L-1\,\,, \\
&&\exp{(-i\,\omega \,L\, x_1)} \quad\textrm{for~}x_2=L\,,
\end{eqnarray}
and for $U_3$ and $U_4$ equivalent, with $x_3$ and $x_4$ replacing $x_1$ and $x_2$.
Fig.\ \ref{fig:artificialmode} shows the geometric shape of the eigenvector density of
this mode and we observe tubular structures. Note, however, that the tubes are in the
3D intersections, i.e. extend like planes in 4D.
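The 2D building block of this construction can be checked numerically. The sketch below (our notation; function names hypothetical) sums the reduced plaquette angles over one $x_1$-$x_2$ plane and finds a total flux of $\pm 2\pi$, i.e. topological charge of unit magnitude, with the sign fixed by the orientation convention.

```python
import numpy as np

L = 8
omega = 2.0 * np.pi / L**2

def theta1(x1, x2):
    """Link angle in direction 1 at site (x1, x2); sites run 1..L."""
    return omega * x2

def theta2(x1, x2):
    """Link angle in direction 2: trivial except on the x2 = L boundary."""
    return -omega * L * x1 if x2 == L else 0.0

def plaquette(x1, x2):
    """Plaquette angle reduced to (-pi, pi], with periodic site labels."""
    x1p, x2p = x1 % L + 1, x2 % L + 1
    th = theta1(x1, x2) + theta2(x1p, x2) - theta1(x1, x2p) - theta2(x1, x2)
    return th - 2.0 * np.pi * np.ceil((th - np.pi) / (2.0 * np.pi))

Q = sum(plaquette(a, b) for a in range(1, L + 1)
        for b in range(1, L + 1)) / (2.0 * np.pi)
```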
\section{Conclusion}
We have studied zero-mode properties of the overlap Dirac operator for $U(1)$ gauge
theory in the confined phase. We find no correlation between the occurrence of these
modes and the monopole density confirming \cite{BeHeMa01}.
We do observe individual zero-modes with definite chirality. This would contradict
the Atiyah-Singer index theorem if applied naively for lattice systems. However,
for $U(1)$ lattice gauge theory for compact gauge fields the situation changes. In
particular the confinement phase of this system has no analytic continuation to the
weak coupling continuum limit. In the confinement phase one can explicitly construct
configurations with topological charge and zero-modes \cite{SmVi87a}, essentially by
combining two 2D sub-manifolds with non-vanishing 1st Chern number.
In the Coulomb phase there are no zero-modes and also the near-zero-modes are
suppressed. There we find no pronounced localization signal. In the confinement
phase the exact zero-modes are definitely more localized than other eigenmodes (as
exhibited by the inverse participation ratio). The IPR of the nearby non-zero modes
does not seem to depend on the Im($\lambda$). The geometrical structure of the zero
eigenmodes has no clear signature singling out 4D blobs, in contrast to, e.g., the case of instantons in QCD, which have been observed in distorted hyperspherical
shapes. All evidence points to tubular or even planar structures supporting the
density function $p(x)$ for a zero-mode.
{\bf Acknowledgment:}
We want to thank Elizabeth\ Gasparim, Christof\ Gattringer and Pushan\ Majumdar
for discussions and Wolfgang Bietenholz for helpful comments.
Support by Fonds zur F\"orderung der Wissen\-schaft\-li\-chen Forschung in \"Osterreich,
project P16310-N08 is gratefully acknowledged.
\section{Introduction}
Algebraic curves in finite-dimensional Grassmannians are a
classical subject in algebraic geometry and other branches of
mathematics (see e.g. \cite{HP}-\cite{Sot}). In contrast, the
story of the interplay between infinite-dimensional Grassmannians and algebraic curves has been quite non-classical.
\par The interest of recent years in infinite-dimensional Grassmannians is mainly due to Sato's papers \cite{Sat,SS}. He
demonstrated that the Kadomtsev-Petviashvilii (KP) hierarchy and
hierarchies of other nonlinear partial differential equations
integrable by the inverse scattering transform method discovered
in \cite{GGKM} (see e.g. \cite{ZMNP,AS}), have a beautiful
geometrical interpretation in terms of a special infinite-dimensional Grassmannian (the Sato Grassmannian). Since the papers \cite{Sat,SS} the Sato Grassmannian has become a powerful tool in
many branches of mathematics and theoretical and mathematical
physics from algebraic geometry to quantum field theory, string
theory, and theory of integrable equations (see e.g.
\cite{DJKM}-\cite{AM}).\par Significance of algebraic curves in
the construction of solutions of integrable equations has been
understood in the middle of seventies (see e.g. \cite{DMN,BBEIM}).
In particular Krichever \cite{Kri1,Kri2} demonstrated that for any
complex algebraic curve (with some additional data) one can
construct a (Baker-Akhiezer) function $\psi$ which obeys a
compatible pair of linear differential equations. Compatibility of
these equations is equivalent to a nonlinear integrable equation,
for instance, to the KP equation. After the Sato's results
\cite{Sat,SS} on the identification of solutions of integrable
equations with subspaces $W$ in Grassmannian it became clear that
the correspondence discovered in \cite{Kri1,Kri2} can be extended
to a map between algebraic curves and subspaces in Sato
Grassmannian \cite{SW}. This paper of Segal and Wilson and the
paper \cite{AdCKP} coined the name of Krichever map
(correspondence) for such a map.
\par Since then the Krichever map, its inversion and extensions
have been studied within different contexts in a number of papers
(see e.g. \cite{Mul2,Mul1},\cite{Mul3}-\cite{BK}). In particular, in the
papers \cite{Mul2,Mul1,Mul3} it was shown that the so-called
Schur pair $(A,W)$ plays a central role in the construction and
analysis of Krichever map. With all the diversity of the results
obtained, the constructions associated with the Krichever map
share a common feature. It is the fact that an algebraic curve,
though closely connected, is an object exterior to Sato
Grassmannian. It seems that there are very few results concerning
the study of algebraic curves in Sato Grassmannian itself. We note
the study of rational curves in Gr$_1$ and in Schur cells of
Gr$^{(2)}$ \cite{SW} (section $7$) and brief analysis of
hyperelliptic curves in Birkhoff strata of Gr$^{(2)}$ \cite{KK}.
\par In the present paper we will follow a classical way adopted
in \cite{SW,KK} and we will look for algebraic curves inside Sato
Grassmannian itself. Our main result is that each Birkhoff stratum
$\Sigma_{1,2,\dots,n}$ of the Sato Grassmannian Gr contains a
subset $W_{1,2,\dots,n}$ closed with respect to pointwise multiplication.
Algebraically all $W_{1,2,\dots,n}$ are infinite families of infinite dimensional
associative commutative algebras with $n+1$ generators.
Geometrically each point of $W_{1,2,\dots,n}$
is an algebraic variety and the whole $W_{1,2,\dots,n}$ is an algebraic $ind$-variety
with each finite-dimensional subvariety being a family of algebraic curves.
For the big cell $\Sigma_\varnothing$ the variety $W_\varnothing$
is the collection of families of normal rational curves (Veronese
curves) of the all orders $2,3,4,\dots$.
For the stratum $\Sigma_1$, each point of the subset $W_1$ is the coordinate
ring of the elliptic curve and $W_1$ is equivalent to the infinite family
of such rings.
Set $W_{1,2}$ is equivalent to the families of coordinate rings of a
special space curve with pretty interesting properties. This
family of curves in $W_{1,2}$ contains plane trigonal curve of
genus two and index$(\overline{\partial}_{W_{1,2}})=-2$. We
conjecture that the closed subspaces $W_{1,2,\dots,n}$ in higher
strata $\Sigma_{1,2,\dots,n}$ $(n=3,4,5,\dots)$ have similar
properties. In particular, $W_{1,2,\dots,n}$ contains plane
$(n+1,n+2)$ curve of genus $n$ and
index$(\overline{\partial}_{W_{1,2,\dots,n}})=-n$.\par It is
shown, that the projections of basic algebraic curves in each
stratum to lower dimensional subspaces are given by singular
higher order curves. Two ways of their regularization are
discussed. The first is the standard blow-up by quadratic
transformation within the same stratum without change of genus.
The second way consists in transition to the higher stratum. In
such a regularization procedure genus of a curve increases.\par
Local and Poisson structures associated with the subspaces
$W_{1,2,\dots,n}$ are discussed. It is shown that the tangent
bundles of $W_{1,2,\dots,n}$ and $W_{1,2,\dots,n}$ modules
$E_{1,2,\dots,n}$ are isomorphic to the linear spaces of
$2-$coboundaries and Harrison's cohomology modules $H^2(W,E)$ and
$H^3(W,E)$ vanish. Special classes of $2-$cocycles and
$2-$coboundaries are described by the systems of integrable
quasilinear PDEs. For example, a class of $2-$coboundaries
associated with the subspace $W_\varnothing$ in the big cell is
provided by the dispersionless KP (dKP) hierarchy.\par We give
also an interpretation of the families of ideals
$I(\Gamma_{\infty})$ for families of algebraic curves in
$W_{1,2,\dots,n}$ as the Poisson ideals. It is shown that the
family of ideals for the family of normal rational curves in the
big cell is the Poisson ideal with respect to a Poisson structure
obeying certain constraints. Two sets of canonical variables in
such Poisson ideals are used. It is demonstrated that in the
Darboux coordinates the above constraints are nothing else than
the dKP hierarchy. Similar results remain valid for other strata
too. \par Finally an interrelation between cohomological and
Poisson structures of $W_{1,2,\dots,n}$ is observed.
\section{Birkhoff strata and index$(\overline{\partial}_{W_{\widehat{S}}})$}
Here we recall briefly basic facts about Sato Grassmannian and its stratifications (see e.g. \cite{SW,PS}).\par
Let $H=\mathbb{C}((z))$ be the set of all formal Laurent series with coefficients in $\mathbb{C}$, and let $H_+=\mathbb{C}[z]$ be the set of all formal polynomials in $z$. The Sato Grassmannian Gr is the set of closed vector subspaces $W \subset H$ such that the projection $W \to H_+$ is Fredholm. Each $W \in$ Gr possesses an algebraic basis ($w_0(z),w_1(z),\dots$) with the basis elements
\begin{equation}
\label{basiselem}
w_n=\sum_{i=-\infty}^n a_i z^i
\end{equation}
of finite order $n$. The set $W_{fin}$ of finite-order elements of $W$ is dense in $W$. \par
Grassmannian Gr is a connected Banach manifold which exhibits a stratified structure \cite{PS}. To describe this structure one introduces the set $\mathcal{I}$. It is the family of all sets $S \subset \mathbb{Z}$ which are bounded from below and contain all sufficiently high integers. The canonical form of such $S$ is
\begin{equation}
S=\{s_0,s_1,s_2\dots \}
\end{equation}
such that $s_0 < s_1 < s_2 <\dots$ and $s_n =n$ for large $n$. Then for the subspace $W \in$ Gr one defines
\begin{equation}
\label{setS}
S_W=\{s \in \mathbb{Z}:\ W\ \mathrm{contains\ an\ element\ of\ order}\ s
\}.
\end{equation}
Given $S \in \mathcal{I}$ the subset $\Sigma_S$ of Gr defined by
\begin{equation}
\Sigma_S=\{ W \in Gr :\ S_W=S\}
\end{equation}
is called the Birkhoff stratum associated with the set $S$. The
closure of $\Sigma_S$ (Birkhoff variety) is an
infinite-dimensional irreducible $ind$-variety of the finite
codimension $l(s) = \sum_{k \geq 0}(k-s_k)$. In particular, if
$S=\{0,1,2,\dots \}$ the corresponding stratum has codimension
zero and it is a dense open subset of Gr which is called the
principal stratum or big cell. Lower Birkhoff strata correspond to
the sets $S$ of type (\ref{setS}) different from $\{0,1,2,\dots
\}$. For instance, the set $S=\{-1,0,2,3,4,\dots \}$ corresponds
to stratum $\Sigma_1$, while the set $\{-2,-1,0,3,4,\dots \}$ is
associated with $\Sigma_{1,2}$. Here and below, for convenience,
we will use also the notation $\Sigma_{\widehat{S}}$ for the
Birkhoff strata where $\widehat{S}=\{\mathbb{N}-S\}$ denotes a set
of holes in the positive part of $S$ with respect to $\mathbb{N}$.
Note that Grassmannian Gr has also the Schubert or Bruhat
decomposition which is dual to Birkhoff stratification. Schubert
cells $C_S$ are subsets of the elements of the form
$\sum_{k=-N}^Nb_kz^k$ numerated by the same sets $S$ as Birkhoff
strata and have finite dimensions $l(s)$. Schubert cell $C_S$ and
Birkhoff stratum $\Sigma_S$ intersect transversally in a single
point. \par Schubert varieties in finite and infinite dimensional
Grassmannians have been studied pretty well while it seems that
the Birkhoff varieties have attracted considerable interest mainly
within the theory of integrable systems (with few exceptions (see
e.g. \cite{GM})). It was shown in \cite{SS,AvM} that the flows
generated by the standard KP hierarchy belong to the big cell. On
the other hand, singular solutions of the KP hierarchy for which
the $\tau$-function and its derivatives vanish, are associated
with higher strata. A method of desingularization of wave
functions near blowup locus (Birkhoff strata) has been proposed in
\cite{AvM}. In the papers \cite{MMM,KMAM0,KMAM} it was
demonstrated that there are infinite hierarchies of integrable
equations associated with each Birkhoff strata. \par In addition
to algebraic and geometrical aspects the Birkhoff stratification
exhibits also an interesting analytic structure. It was observed
in \cite{SW} (section 7.3) that the Laurent series (\ref{basiselem}) are the boundary values of certain functions holomorphic in the domain $\Omega = \mathbb{C} - \mathcal{D}_\infty $, where $\mathcal{D}_\infty$ is a small disk around the point $z=\infty$.
Formalizing these observations Witten \cite{Wit} suggested to view
Sato Grassmannian as the space of boundary conditions for the
$\overline{\partial}$ operator. Namely, let $\mathcal{H}$ be the
Hilbert space of square integrable functions $w(z,\overline{z})$
with respect to the bilinear form
\begin{equation}
\label{skewbilform}
\langle u,v \rangle =\iint_{\Omega} \frac{dz \wedge d \overline{z}}{ 2\pi i} u(z,\overline{z}) v(z,\overline{z}).
\end{equation}
Then, given $W \in$ Gr, there is an associated domain $\mathcal{D}_W$ in $\mathcal{H}$ for $\overline{\partial}$, given by those functions $w$ for which $\overline{\partial}w \in \mathcal{H}$ and whose boundary values on $S_\infty={\partial} \mathcal{D}_\infty$ are in $W$. This elliptic
boundary value problem is defined correctly if
$\overline{\partial}$ is the skew-symmetric operator, i.e.
\begin{equation}
\label{bilform}
\langle v, \overline{\partial} u \rangle = -\langle \overline{\partial} v ,u \rangle, \forall u \in \mathcal{D}_{W},\ \forall v \in \mathcal{D}_{\widetilde{W} }
\end{equation}
where $\widetilde{W} $, the dual of an element $W$ in Gr, is the space of formal Laurent series $v(z)$, $z \in S_{\infty}$, obeying the condition
\begin{equation}
\int_{S_{\infty}} \frac{dz}{2 \pi i z} v(z)u(z)=0,\quad \forall u \in W.
\end{equation}
Let $\overline{\partial}_W$ denote the $\overline{\partial}$ operator acting on the domain $\mathcal{D}_{W}$. The index of this operator is defined by (see e.g. \cite{Wit})
\begin{equation}
\label{dbarindex}
\mathrm{index}\ \overline{\partial}_W = \mathrm{dim} (\mathrm{ker} \overline{\partial}_W)-
\mathrm{dim} (\mathrm{coker} \overline{\partial}_W).
\end{equation}
Taking into account that for given $S_W$ one has $S_{\widetilde{W} }=\{-n|n \notin S_{W}\}$, one finds \cite{KMAM}
\begin{equation}
\mathrm{index}\ \overline{\partial}_W = \mathrm{card} (S_W - \mathbb{N}) -\mathrm{card} (S_{\widetilde{W} } - \mathbb{N}).
\end{equation}
where $\mathbb{N}=\{0,1,2,3,\dots\}$.\par
For the hidden KP hierarchies the index of the $\overline{\partial}$ operator has been calculated in \cite{KMAM}.
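For illustration only, the counting behind the index formula above can be coded in a few lines. The sketch below (our notation) represents $S_W$ by its non-generic part and assumes $s_n=n$ beyond a cutoff.

```python
def dbar_index(S_W, n_max=50):
    """index(dbar_W) = card(S_W - N) - card(S_Wdual - N), where
    S_Wdual = {-n : n not in S_W} and N = {0, 1, 2, ...}."""
    S = set(S_W) | set(range(max(S_W) + 1, n_max))
    negatives = sum(1 for s in S if s < 0)           # card(S_W - N)
    # card(S_Wdual - N) counts the positive integers missing from S_W
    holes = sum(1 for n in range(1, n_max) if n not in S)
    return negatives - holes

print(dbar_index([0]))           # big cell S = {0,1,2,...}: index 0
print(dbar_index([-1, 0, 2]))    # stratum Sigma_1, S = {-1,0,2,3,...}: index 0
```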
\section{Big cell $\Sigma_{\varnothing}$. Families of normal rational (Veronese) curves} \label{sect-bigcell}
We begin with the principal stratum $\Sigma_\varnothing$. Since in
this case the basis (\ref{basiselem}) is composed of the Laurent series of all non-negative orders $n=0,1,2,3,\dots$, there exists a
canonical basis $\{ p_0,p_1,p_2,\dots \}$ in $\Sigma_\varnothing$
with the basis elements of the form
\begin{equation}
\label{bigcellbasis}
p_i(z)=z^i+\sum_{k=1}^\infty \frac{H^i_k}{z^k}, \qquad i=0,1,2,\dots.
\end{equation}
Basis elements (\ref{bigcellbasis}) are parameterized by the infinite set of arbitrary $H^i_k \in \mathbb{C}$.\par
Points of $\Sigma_{\varnothing}$ are represented by the subspaces which are spans of $\{p_0(z),p_1(z),p_2(z),\dots\}$
with all $H^i_k$ fixed. $\Sigma_{\varnothing}$ itself is a family of such subspaces parameterized by $H^j_k$. \par
In this paper we will study particular situations when such subspaces have a specific algebraic property, namely,
when they admit a multiplication of elements. The following Lemma is the starting point of the analysis.
\begin{lem}
\label{lembigcell}
Laurent series $p_i(z)$ (\ref{bigcellbasis}) with fixed $H^i_k$ obey the equations
\begin{equation}
\label{bigcellalg}
p_j(z)p_k(z)=\sum_{l=0}^{j+k}C_{jk}^lp_l(z), \qquad j,k=0,1,2,\dots
\end{equation}
if and only if the parameters $H^i_k$ satisfy the constraints
\begin{equation}
\label{bigcell0coeff}
H^0_j=0, \qquad j=0,1,2,\dots
\end{equation}
and
\begin{equation}
\label{bigcellsercoeff}
H^{j+k}_m-H^{j}_{m+k}-H^k_{j+m}+\sum_{l=1}^{j-1} H^k_{j-l}H^l_m+\sum_{l=1}^{k-1} H^j_{k-l}H^l_m
-\sum_{l=1}^{m-1} H^k_{m-l}H^j_l=0, \qquad j,k,m=1,2,3,\dots.
\end{equation}
\end{lem}
{\bf Proof} Let us require that Laurent series
(\ref{bigcellbasis}) obey the condition (\ref{bigcellalg}).
Comparing the coefficients in front of positive powers of $z$ in
both sides of (\ref{bigcellalg}), one concludes that
\begin{equation}
\label{bigcellstructcoeff}
C^l_{jk}=\delta^l_{j+k}+H^k_{j-l}+H^j_{k-l}, \qquad j,k=0,1,2,3,\dots
\end{equation}
and all $H^0_j=0$. Counting of negative powers of $z$ gives the relations (\ref{bigcellsercoeff}). The conditions (\ref{bigcell0coeff}) and (\ref{bigcellsercoeff}) are obviously also sufficient.
$\square$\par
Note that in this case $p_0=1$ and the conditions $p_0p_i(z)=p_i(z)$ are identically satisfied.\par
Lemma \ref{lembigcell} has an immediate consequence.
\begin{prop}
\label{W0W0CW0}
The big cell $\Sigma_\varnothing$ contains the subset $W_\varnothing$ closed with respect to pointwise multiplication, i.e. for any fixed $H^j_k$ obeying (\ref{bigcell0coeff}) and (\ref{bigcellsercoeff})
and any two $q_1(z,H),q_2(z,H)\in W_\varnothing$ their product $q_1(z,H)q_2(z,H) = q_3(z,H)\in W_\varnothing$.
This subset $W_\varnothing$ is an infinite family of infinite-dimensional commutative associative algebras.
\end{prop}
{\bf Proof} The conditions (\ref{bigcellalg}) guarantee that for any two elements $q_1=\sum_{j=0}^\infty \alpha_j p_j(z,H)$, $q_2=\sum_{j=0}^\infty \beta_j p_j(z,H)$ with fixed $H^j_k$ obeying the constraints
(\ref{bigcell0coeff}) and (\ref{bigcellsercoeff}), the product $q_1q_2$ is of the form $\sum_{j=0}^\infty \gamma_j p_j(z,H)$ with some $\gamma_j$.
Equations (\ref{bigcellalg}) represent the table of multiplication
for a commutative algebra with the basis $(1,p_1,p_2,\dots)$ and
the structure coefficients $C^j_{kl}$ (\ref{bigcellstructcoeff})
for each $z$. It is a direct check that the conditions
(\ref{bigcell0coeff}) and (\ref{bigcellsercoeff}) are equivalent
to the associativity condition
\begin{equation}
\label{bigcelass}
\sum_{l=0}^{\infty}C^l_{jk}C^p_{lm}=\sum_{l=0}^{\infty}C^l_{mk}C^p_{lj}, \qquad j,k,m,p=0,1,2,\dots\ .
\end{equation}
for the structure coefficients $C^l_{jk}$. So, at fixed $H^j_k$, the span of $\{p_0(z),p_1(z),p_2(z),\dots \}$,
i.e. the point of the big cell is an infinite-dimensional associative commutative algebra with the structure constants
(\ref{bigcellstructcoeff}). The subset $W_{\varnothing}$ is a family of such algebras.
$\square$\par
In a different context the formulae (\ref{bigcellalg})-(\ref{bigcelass}) first appeared in the paper \cite{KM} devoted to the coisotropic deformations of associative algebras. In the rest of the paper we will refer to the conditions (\ref{bigcell0coeff}) and (\ref{bigcellsercoeff}) and similar ones as the associativity conditions.\par
The algebra $A_{\Sigma_\varnothing}$ at fixed $H^j_k$ described above is the polynomial algebra $\mathbb{C}[p_1]$ in the basis of Fa\`a di Bruno polynomials \cite{KM}. Indeed, it is easy to see that the relations (\ref{bigcellalg}) and (\ref{bigcellstructcoeff}) are equivalent to the following
\begin{equation}
\label{bigcellcurr}
\begin{split}
p_2=&{p_1}^2-2H^1_1, \\
p_3=& {p_1}^3-3H^1_1p_1-3H^1_2, \\
p_4=& {p_1}^4-4H^1_1{p_1}^2-4H^1_2p_1-4H^1_3+2{H^1_1}^2, \\
p_5=& {p_1}^5-5H^1_1{p_1}^3-5H^1_2{p_1}^2- \left( 5\,H^1_{{3}}-5\,{H^1_{{1}}}^{2} \right)p_1-5\,H^1_{{4}}+5\,H^1_{{1}}H^1_{{2}},\\
\dots & \\
p_n=& {p_1}^n+\sum_{k=0}^{n-2}u_{nk}{p_1}^k, \qquad n=6,7,8,\dots
\end{split}
\end{equation}
where $u_{nk}$ are certain polynomials of $H^1_m$, $m=1,2,\dots,n-1$. The polynomials in the r.h.s. of (\ref{bigcellcurr}) have been called Fa\`a di Bruno polynomials in \cite{KM}. \par
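The first of the relations (\ref{bigcellcurr}) can be checked directly on truncated Laurent series: imposing $p_1^2=p_2+2H^1_1$ fixes $H^2_1=2H^1_2$ and $H^2_2=(H^1_1)^2+2H^1_3$, in agreement with the associativity conditions (\ref{bigcellsercoeff}). A sympy sketch (symbol names are ours):

```python
import sympy as sp

z = sp.symbols('z')
H11, H12, H13, H21, H22 = sp.symbols('H11 H12 H13 H21 H22')

# Truncated canonical basis elements; terms of order z^{-3} and lower
# in the comparison are not used
p1 = z + H11/z + H12/z**2 + H13/z**3
p2 = z**2 + H21/z + H22/z**2

# Impose the multiplication law p1*p1 = p2 + 2*H11*p0 with p0 = 1
diff = sp.expand(p1**2 - (p2 + 2*H11))
c1 = diff.coeff(z, -1)   # 2*H12 - H21
c2 = diff.coeff(z, -2)   # H11**2 + 2*H13 - H22
sol = sp.solve([c1, c2], [H21, H22])
print(sol)   # {H21: 2*H12, H22: H11**2 + 2*H13}
```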
The pointwise constraints (\ref{bigcellalg}) and (\ref{bigcellcurr}) for the basis elements $p_i(z)$ have a simple geometrical interpretation. Indeed, if one treats
$p_1(z),p_2(z),p_3(z),\dots$, for given $H^i_k$ and variable $z$,
as the local affine coordinates, then the conditions
(\ref{bigcellalg}) become the constraints on coordinates
$p_1,p_2,p_3,\dots$ of the form
\begin{equation}
\label{bigcellconstrcoord}
f_{jk}=p_jp_k-\sum_{l=0}^{j+k}C_{jk}^lp_l=0.
\end{equation}
These relations define an algebraic variety for the fixed $H^j_k$. So, under the constraint (\ref{bigcellconstrcoord}) one has an algebraic variety at each point of $W_\varnothing$.
Varying $H^j_k$, one gets $W_\varnothing$, and thus we have
\begin{prop}
\label{towerbigcell}
The big cell $\Sigma_\varnothing$ contains an infinite family $\Gamma_\infty$ of algebraic varieties which are intersections of the quadrics
\begin{equation}
\label{towerVeronese}
f_{jk}=p_jp_k-p_{j+k}-\sum_{l=1}^jH^k_l p_{j-l} - \sum_{l=1}^k H^j_l p_{k-l} =0, \qquad j,k=1,2,3,\dots
\end{equation}
and parameterized by the variables $H^j_k$ obeying the algebraic equations (\ref{bigcell0coeff}) and (\ref{bigcellsercoeff}). This family $\Gamma_\infty$ is the infinite tower of infinite families of rational normal curves (Veronese curves) of all orders.
\end{prop}
{\bf Proof} By virtue of the equivalence of the set of equations (\ref{bigcellcurr}) to the set
\begin{equation}
\label{bigcellcurrcurv}
\begin{split}
h_2=&p_2- {p_1}^2+2H^1_1=0, \\
h_3=&p_3- {p_1}^3+3H^1_1p_1+3H^1_2=0, \\
\dots& \\
h_n=&p_n- {p_1}^n+\sum_{k=0}^{n-2}u_{nk}{p_1}^k=0, \qquad n=4,5,6, \dots
\end{split}
\end{equation}
the variety $\Gamma_{\infty}$ has dimension $1$ for each fixed
$H^1_m$, $m=1,2,3,\dots$ . The ideal of $\Gamma_{\infty}$ is
$I(\Gamma_{\infty})=\langle h_2,h_3,h_4,\dots\rangle$. For each
finite-dimensional subspace with coordinates $p_1,p_2,\dots,p_n$
and fixed $H^1_m$, $m=1,2,\dots,n-1$ the corresponding variety
$\Gamma_d$ is a rational normal curve of order $d$. For instance,
$\Gamma_3$ is the twisted cubic. Formulae (\ref{bigcellcurrcurv})
represent the canonical parameterization of rational normal curve
(Veronese curve) (see e.g. \cite{Har}). Due to the associativity
conditions (\ref{bigcellsercoeff}) and their consequence
$nH^i_n=iH^n_i$ all $H^i_k$ are polynomial functions of $H^1_m$,
$m=1,2,3,\dots$. For example $H^n_1=nH^1_n$,
$H^2_2={H^1_1}^2+2H^1_3$. Thus, the family of algebraic varieties
$\Gamma_d$ parameterized by $H^1_1,H^1_2,\dots, H^1_{d-1}$, i.e.
the family of rational normal curves of the order $d$, is itself
the affine algebraic variety in the $2d-1$--dimensional space.
Finally, the variety $\Gamma_{\infty}$ is the $ind$-variety since
$\Gamma_2 \subset \dots \subset \Gamma_{d-1} \subset \Gamma_d
\subset \Gamma_{d+1} \subset \dots$. $\square$\par
We note that within the theory of schemes (see e.g. \cite{Sha}-\cite{Don}) one can define the algebraic variety associated with a point of the subset $W_\varnothing$ (at fixed $H^j_k$) as the spectrum $\mathrm{Spec}(R_A)$ of the ring $R_A$ corresponding to the algebra $A_{\Sigma_{\varnothing}}$. \par
In the theory
of ideals and algebraic geometry, the so-called canonical basis, generated by elements of the form $q_n-a_n$ with some $a_n$, plays a distinguished role (see e.g. \cite{Har}, Lecture 5). For the ideal $I(\Gamma_\infty)$ such a basis can be found in the following way. First, from the constraint $h_2=0$, one has
$H^1_1=\frac{1}{2}\left({p_1}^2-p_2\right)$. Substituting this
expression for $H^1_1$ into $h_3$, one gets
\begin{equation}
\tilde{h}_3=p_3+\frac{1}{2}{p_1}^3-\frac{3}{2}p_1p_2+3H^1_2=0.
\end{equation}
From this relation one obtains $H^1_2$ in terms of $p_1,p_2,p_3$ and then substitutes into $h_4$ getting $\tilde{h}_4$. Continuing this procedure, one finds (see also \cite{KM2})
\begin{equation}
\tilde{h}_n=-n\left(P_n(\tilde{p})-H^1_{n-1}\right), \qquad n=2,3,4, \dots
\end{equation}
where $\tilde{p}_k=-\frac{1}{k}p_k$ and $P_n(\tilde{p})$ are the standard Schur polynomials defined by the formula
\begin{equation}
e^{\sum_{n=1}^\infty z^n t_n}=\sum_{m=0}^\infty z^m P_m(t_1,t_2,t_3,\dots).
\end{equation}
Thus one has
\begin{prop}
\label{propII=I}
The canonical basis for the ideal $I(\Gamma_{\infty})$ is composed of the elements
\begin{equation}
\label{bigcell-h*}
h^*_n=p^*_n-H^1_{n-1}, \qquad n=2,3,4,\dots
\end{equation}
where $p^*_n=P_n\left(-p_1,-\frac{1}{2}p_2,-\frac{1}{3}p_3,\dots\right)$.
\end{prop}
This observation reveals that the variables $H^1_k$, $k=1,2,3,\dots$ play the distinguished role in the parameterization of the associativity conditions (\ref{bigcellsercoeff}).\par
The proposition \ref{propII=I} has an obvious
\begin{cor}
In the variables $p_n^*$ and $u_n=H^1_n$, $n=1,2,3,\dots$ the variety $\Gamma_{\infty}$ is given by intersection of the hyperplanes (\ref{bigcell-h*}).
\end{cor}
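The relation $\tilde{h}_n=-n\left(P_n(\tilde{p})-H^1_{n-1}\right)$ can be verified symbolically for small $n$. The sympy sketch below (our notation) builds the Schur polynomials from the generating function and reproduces $h_2$ and $\tilde{h}_3$:

```python
import sympy as sp

z = sp.symbols('z')
t = sp.symbols('t1:5')                      # t1, t2, t3, t4
p1, p2, p3, H11, H12 = sp.symbols('p1 p2 p3 H11 H12')

# Schur polynomials P_m from exp(sum_n z^n t_n) = sum_m z^m P_m(t)
gen = sp.exp(sum(z**(n + 1) * t[n] for n in range(4)))
ser = sp.series(gen, z, 0, 4).removeO()
P = [sp.expand(ser.coeff(z, m)) for m in range(4)]   # P[2] = t1**2/2 + t2, etc.

# Substitute tilde p_k = -p_k/k and compare with h_2 and tilde h_3
subs = {t[0]: -p1, t[1]: -p2/2, t[2]: -p3/3}
h2 = sp.expand(-2 * (P[2].subs(subs) - H11))   # p2 - p1**2 + 2*H11
h3 = sp.expand(-3 * (P[3].subs(subs) - H12))   # p3 + p1**3/2 - 3*p1*p2/2 + 3*H12
print(h2, h3)
```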
As far as the $\overline{\partial}-$operator
is concerned one easily shows that for the big cell
\begin{equation}
\label{indexGr0}
\mathrm{index}\ \overline{\partial}_{W_\varnothing}=0.
\end{equation}
\par Rational normal curves in $\Gamma_\infty$ defined by (\ref{bigcellcurrcurv}) are regular for all $d=2,3,4,\dots$. Their projections to lower-dimensional subspaces are singular algebraic curves of different types.\par
For the twisted cubic defined by the first two equations (\ref{bigcellcurrcurv}) the projection along the axis $p_1$ to the subspace with coordinates $p_2,p_3$ is given by
\begin{equation}
\label{singcubcurv}
\mathcal{F}^0_{23}={p_3}^2-{p_2}^3+6H^1_2p_3+3{H^1_1}^2p_2+9{H^1_2}^2-2{H^1_1}^3=0
\end{equation}
or in the standard form
\begin{equation}
\label{singcubcurv-stand}
\mathcal{F}^0_{23}={\tilde{p}_3}^2-{p_2}^3+3{H^1_1}^2p_2-2{H^1_1}^3=0
\end{equation}
where ${\tilde{p}_3}=p_3+3H^1_2$. Since the discriminant of the curve vanishes
\begin{equation}
\label{Discr-0}
\Delta=\frac{1}{1728}\left(\left( \frac{{H^1_1}^2}{9} \right)^3-\left(- \frac{{H^1_1}^3}{27} \right)^2 \right)=0
\end{equation}
it has an ordinary double point and zero genus.\par
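The singularity is easy to exhibit explicitly: in the standard form (\ref{singcubcurv-stand}) the right-hand side cubic factors with a double root, so the curve has a node at $(p_2,\tilde{p}_3)=(H^1_1,0)$. A sympy check (symbol names are ours):

```python
import sympy as sp

x, y, H = sp.symbols('x y H')   # x = p2, y = tilde p3, H = H^1_1

# Standard form (\ref{singcubcurv-stand}): y^2 = x^3 - 3H^2 x + 2H^3
F = y**2 - x**3 + 3*H**2*x - 2*H**3

# The cubic factors with a double root at x = H -> ordinary double point
rhs = sp.factor(x**3 - 3*H**2*x + 2*H**3)    # (x - H)**2*(x + 2*H)

# F and both partials vanish at the node (x, y) = (H, 0)
sing = [sp.simplify(e.subs({x: H, y: 0})) for e in (F, sp.diff(F, x), sp.diff(F, y))]
print(rhs, sing)
```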
Curves (\ref{singcubcurv}) belong to the ideal $I(\Gamma_{\infty})$ and
\begin{equation}
\mathcal{F}^0_{23}= \left(h_3 +b_3 \right) h_3+ \left(-{h_2}^2 + b_4 h_2 +b_2 \right) h_2
\end{equation}
with
\begin{equation}
\begin{split} \
b_4 =& -3\,{p_1}^{2}+6\,H^1_{{1}}=-3p_2,\\
b_3 =& 2\,{p_1}^{3}-6\,H^1_{{1}}p_1=2(p_3+3H^1_2),\\
b_2 =& -9\,{H^1_{{1}}}^{2}+12\,H^1_{{1}}{p_1}^{2}-3\,{p_1}^{4}=-3{p_2}^2+3{H^1_1}^2
\end{split}
\end{equation}
and
\begin{equation}
h_3+b_3=p_3+3H^1_2+\left({p_1}^2-3H^1_1 \right)p_1, \qquad -{h_2}^2+b_4h_2+b_2=-3{p_2}^2+3{H^1_1}^2.
\end{equation}
We note that in terms of $h_2$ and $h_3$ the projected twisted cubic is a plane cubic too. \par
The projection parallel to the axis $p_1$ of the fourth order rational curve defined by first three equations (\ref{bigcellcurrcurv}) into the two dimensional space $(p_3,p_4)$ is represented by the singular trigonal curve
\begin{equation}
\label{singtrigcurv}
\begin{split}
\mathcal{F}_{34}^0={p_{{3}}}^{4}&-{p_{{4}}}^{3}-12\,H^1_{{3}}{p_{{4}}}^{2}
-12\,H^1_{{1}}H^1_{{2}}p_{{3}}p_{{4}}
- \left( 6\,(H^1_{{2}})^{2}+4\,(H^1_{{1}})^{3} \right) {p_{{3}}}^{2} \\
&- \left( 48\,(H^1_{{3}})^{2}-12\,H^1_{{1}}(H^1_{{2}})^{2}-3\,(H^1_{{1}})^{4} \right) p_{{4}}
- \left( 48\,H^1_{{1}}H^1_{{2}}H^1_{{3}}-8\,(H^1_{{2}})^{3}-12\,(H^1_{{1}})^{3}H^1_{{2}} \right) p_{{3}}\\
&+2\,(H^1_{{1}})^{6}-64\,(H^1_{{3}})^{3}+12\,H^1_{{3}}(H^1_{{1}})^{4}-24\,(H^1_{{1}})^{3}(H^1_{{2}})^{2}
-3\,(H^1_{{2}})^{4}+48\,(H^1_{{2}})^{2}H^1_{{3}}H^1_{{1}}=0.
\end{split}
\end{equation}
For the curve (\ref{singtrigcurv}) one has
\begin{equation}
\label{singtrigcurv-0-ideal}
\mathcal{F}_{34}^0=\left({h_4}^2+a_8h_4+a_4\right)h_4+\left(-{h_3}^3+a_9{h_3}^2+a_7{h_4}+a_6{h_3}+a_3\right)h_3
\end{equation}
where the coefficients $a_3,a_4,a_6,a_7,a_8,a_9$ are given in the
Appendix \ref{App-bigcell}.\par Finally, let us consider the fifth
order rational normal curve defined by first four equations
(\ref{bigcellcurrcurv}). Its projection into the plane $(p_2,p_5)$
is the singular genus zero quintic
\begin{equation}
\label{0F25}
\begin{split}
\mathcal{F}^0_{25}=&
{p_{{5}}}^{2}-{p_{{2}}}^{5}+10\,H^1_{{2}}p_{{2}}p_{{5}}- \left( -10\,H^1_{
{3}}-5\,{H^1_{{1}}}^{2} \right) {p_{{2}}}^{3}- \left( -10\,H^1_{{2}}H^1_{{1}
}-10\,H^1_{{4}} \right) p_{{5}} \\&- \left( -10\,H^1_{{1}}H^1_{{3}}-25\,{H^1_{{2}}
}^{2} \right) {p_{{2}}}^{2}- \left( 25\,{H^1_{{3}}}^{2}-50\,H^1_{{2}}H^1_{{4
}}+5\,{H^1_{{1}}}^{4}+30\,{H^1_{{1}}}^{2}H^1_{{3}}-50\,H^1_{{1}}{H^1_{{2}}}^{2}
\right) p_{{2}}\\&+25\,{H^1_{{2}}}^{2}{H^1_{{1}}}^{2}+25\,{H^1_{{4}}}^{2}-2\,{
H^1_{{1}}}^{5}+50\,H^1_{{2}}H^1_{{1}}H^1_{{4}}-20\,H^1_{{3}}{H^1_{{1}}}^{3}-50\,{H^1
_{{3}}}^{2}H^1_{{1}}=0.
\end{split}
\end{equation}
A projection of the fifth order rational curve into the two dimensional space $(p_4,p_5)$ defined by first four equations (\ref{bigcellcurrcurv}) is the plane singular $(4,5)$ curve in terminology of \cite{BEL}
\begin{scriptsize}
\begin{equation}
\label{0F45}
\begin{split}
\mathcal{F}^0_{45}=&p_5^4-p_4^5 -20\,H^1_{{4}}{p_5}^3 +\left(
20\,H^1_{{1}}H^1_{{3}}+10\,{H^1_{{2}}}^{2} \right) {p_4}{p_5}^2
+\left( 20\,H^1_{{2}}{H^1_{{1}}}^{2}+20\,H^1_{{2}}H^1_{{3}}\right)
{p_4}^2{p_5} \\& +\left(
5\,{H^1_{{1}}}^{4}+10\,{H^1_{{3}}}^{2}+20\,{H^1_{{2}}}^{2}H^1_{{1}}\right)
{p_4}^3 +\left(
150\,{H^1_{{4}}}^{2}-20\,{H^1_{{2}}}^{2}H^1_{{3}}-30\,{H^1_{{2}}}^{2}{H^1_{{1}}}
^{2}-20\,H^1_{{1}}{H^1_{{3}}}^{2}-4\,{H^1_{{1}}}^{5}\right)
{p_5}^2\\& +\left(
-20\,H^1_{{2}}{H^1_{{1}}}^{4}-60\,{H^1_{{1}}}^{2}H^1_{{2}}H^1_{{3}}+200\,H^1_{{3}}
H^1_{{4}}H^1_{{1}}-40\,H^1_{{2}}{H^1_{{3}}}^{2}+100\,{H^1_{{2}}}^{2}H^1_{{4}}-40\,
{H^1_{{2}}}^{3}H^1_{{1}}\right) {p_4}{p_5}\\& +\left(
-40\,{H^1_{{1}}}^{3}{H^1_{{2}}}^{2}-20\,H^1_{{3}}{H^1_{{1}}}^{4}-20\,{H^1_{{3}}}
^{3}+5\,{H^1_{{2}}}^{4}+50\,{H^1_{{3}}}^{2}{H^1_{{1}}}^{2}-40\,{H^1_{{2}}}^{2}
H^1_{{1}}H^1_{{3}}\right. \\& \left.
+100\,H^1_{{2}}{H^1_{{1}}}^{2}H^1_{{4}}+100\,H^1_{{4}}H^1_{{2}}H^1_{
{3}}\right) {p_4}^2+c_5p_5+c_4p_4+c_0=0
\end{split}
\end{equation}
\end{scriptsize}
where the coefficients $c_5,c_4,c_0$ are given in Appendix \ref{App-bigcell}.\par
In the next sections we will see that curves (\ref{singcubcurv}), (\ref{singtrigcurv}) are singular limits of regular algebraic curves from the strata $\Sigma_1$ and $\Sigma_{1,2}$. \par
The system (\ref{bigcellalg})-(\ref{bigcellstructcoeff}) admits infinitely many simple reductions.
The first $p_1=z$ is obviously trivial. The constraint $p_2=z^2$, i.e. all $H^2_n=0$, due to (\ref{bigcellsercoeff}) implies that $H^{2m}_n=0$, $m=1,2,3,\dots,\ n=1,2,3,\dots$, i.e. $p_{2n}={p_2}^n=z^{2n}$, $n=1,2,3,\dots$. It is easy to show that the system (\ref{bigcellalg})-(\ref{bigcellstructcoeff}) admits the reductions $p_{ln}={p_l}^n=z^{ln}$, $l=1,2,3,\dots$, $n=2,3,4,\dots$.
\section{Stratum $\Sigma_0$. Family of centered normal rational curves}
The first stratum different from $\Sigma_\varnothing$ is associated with $S=\{-1,1,2,\dots \}$. In the absence of a zero-order element, the positive-order elements of the canonical basis are
\begin{equation}
\label{modbigcell-curr}
p_i=z^i+H^i_0+\sum_{k \geq 1} \frac{H^i_k}{z^k}, \qquad i=1,2,3,\dots\ .
\end{equation}
Since $(p_{-1})^2 \notin \langle p_i\rangle_{i=-1,1,2,\dots}$ the element $p_{-1}$ cannot belong to a point in the subset of $\Sigma_0$
closed with respect to multiplication. Considering only $p_j$ of positive orders one has
\begin{lem}
Laurent series (\ref{modbigcell-curr}) obey the equations
\begin{equation}
\label{alg-modbig}
p_j(z)p_k(z)=\sum_{l=1}^{j+k} C_{jk}^lp_l(z), \qquad j,k=1,2,3,\dots
\end{equation}
if and only if the parameters $H^j_k$, $k=0,1,2,\dots$ obey the constraints
\begin{equation}
\label{algcoeff-modbig}
H^j_{k+m}+H^k_{j+m}+\sum_{s=0}^m H^k_sH^j_{m-s}=H^{j+k}_m+\sum_{s=0}^{j-1}H^k_sH^{j-s}_m+\sum_{s=0}^{k-1}H^j_sH^{k-s}_m.
\end{equation}
\end{lem}
{\bf Proof} Proof is similar to that of Lemma \ref{lembigcell}. The constants $C^l_{jk}$ are given by (\ref{bigcellstructcoeff}) with $j,k,l=1,2,3,\dots$ . $\square$ \par
As the consequence of this Lemma one gets
\begin{prop}
\label{tower0strat}
The stratum $\Sigma_0$ with $S=\{-1,1,2,3,\dots \}$ contains the subset $W_0$ closed with respect to pointwise multiplication, i.e. $W_0(H)\cdot W_0(H) \subset W_0(H)$. Elements of $W_0$ are vector spaces with basis $\langle p_i \rangle_i$ with $H^i_j$ obeying the constraints (\ref{algcoeff-modbig}). Algebraically the subspace $W_0$ is the infinite family of infinite-dimensional commutative associative algebras $A_{\Sigma_0}$ without unity element.
\end{prop}
{\bf Proof } Proof is analogous to that of Proposition \ref{W0W0CW0}. $\square$ \par
Algebra $A_{\Sigma_0}$ with fixed $H^i_j$ is a polynomial algebra since
\begin{equation}
\label{pi-modbig}
\begin{split}
p_2=&{p_1}^2-2H^1_0p_1, \\
p_3=&{p_1}^3-3H^1_0{p_1}^2-3(H^1_1-{H^1_0}^2)p_1, \\
\dots& \\
p_n=&{p_1}^n-\sum_{k=1}^{n-1}u_k{p_1}^k, \qquad n=4,5,6,\dots
\end{split}
\end{equation}
The geometrical interpretation of the subspace $W_0$ is similar to that given in Section \ref{sect-bigcell}.
\begin{prop}
The stratum $\Sigma_0$ contains an infinite family of algebraic varieties defined as intersection of the quadrics
\begin{equation}
\widetilde{\mathcal{F}}_{jk}= p_j+p_k-p_jp_k +\sum_{l=0}^{j-1}H^k_lp_{j-l}+\sum_{l=0}^{k-1}H^j_lp_{k-l}=0, \qquad j,k=1,2,3,\dots
\end{equation}
which is parameterized by $H^j_k$ obeying the equations (\ref{algcoeff-modbig}). This family is an infinite tower of normal rational curves of all orders passing through the origin.
\end{prop}
{\bf Proof } The ideal of this family of algebraic varieties is $I_0(\Gamma_\infty)=\langle \tilde{h}_2, \tilde{h}_3,\tilde{h}_4,\dots \rangle$ where
\begin{equation}
\begin{split}
\tilde{h}_2 =& p_2 -{p_1}^2+2H^1_0p_1, \\
\tilde{h}_3 =& p_3 - {p_1}^3+3H^1_0{p_1}^2+3(H^1_1-{H^1_0}^2)p_1, \\
\dots &\\
\tilde{h}_n =& p_n- {p_1}^n+\sum_{k=1}^{n-1}u_k{p_1}^k, \qquad n=4,5,6,\dots\ .
\end{split}
\end{equation}
In contrast to the big cell, all these normal rational curves pass
through the origin $p_1=p_2=p_3=\dots=0$. $\square$ \par Since
$S_{\widetilde{W}_0}=\{0,1,2,\dots\}$ for the subspace $W_0$, one
has index$(\overline{\partial}_{W_0})=0$.\par Similarly to the big
cell, all normal rational curves given by (\ref{pi-modbig}) are
regular and have zero genus, while their projections to the lower
dimensional subspaces are singular algebraic curves. For instance,
the projection of the Veronese curve of order $3$ defined by
the first two equations (\ref{pi-modbig}) onto the subspace with
coordinates $(p_2,p_3)$ is the singular plane cubic
\begin{equation}
\label{degEll-modbig}
\mathcal{F}_{23}^{(0)}={p_{{3}}}^{2}-{p_{{2}}}^{3}- \left( 3\,{H^1_{{0}}}^{2}
-6\,H^1_{{1}} \right) {p_{{2}}}^{2}- \left( -6\,H^1_{{0}}H^1_{{1}}
+2\,{H^1_{{0}}}^{3} \right) p_{{3}}- \left(
-12\,H^1_{{1}}{H^1_{{0}}}^{2}
+9\,{H^1_{{1}}}^{2}+3\,{H^1_{{0}}}^{4} \right) p_{{2}}=0.
\end{equation}
In the standard form it is
\begin{equation}
{\tilde{p}_3}^2-{\tilde{p}_2}^2+3\,{H^1_{{1}}}^{2}\tilde{p}_2-2{H^1_1}^3=0
\end{equation}
where $\tilde{p}_3=p_{{3}}+3\,H^1_{{0}}H^1_{{1}}-{H^1_{{0}}}^{3}$
and $\tilde{p}_2=p_{{2}}+{H^1_{{0}}}^{2}-2\,H^1_{{1}}$.
Analogously, the projections of the Veronese curves to the subspaces
$(p_2,p_5)$, $(p_3,p_4)$, $(p_4,p_5)$ are singular algebraic
curves of genus zero.\par Comparing the formulas of this and the
previous section, one observes that they are quite close to each
other and that the algebraic curves in the big cell and in the
stratum $\Sigma_0$ are essentially of the same type. Moreover, one
can easily see that they are transformed into each other by the
simple change of ``coordinates''
\begin{equation}
p_i^{\mathrm{big\ cell}} = p_i^{\Sigma_0}-H^i_0, \qquad i=1,2,3,\dots
\end{equation}
where
\begin{equation}
\begin{split}
H^2_0=& 2\,H^1_{{1}}-{H^1_{{0}}}^{2},\\
H^3_0=& -3\,H^1_{{0}}H^1_{{1}}+{H^1_{{0}}}^{3}+3\,H^1_{{2}},\\
H^4_0=& -2\,{H^1_{{1}}}^{2}+4\,H^1_{{1}}{H^1_{{0}}}^{2}
-{H^1_{{0}}}^{4}-4\,H^1_{{2}}H^1_{{0}}+4\,H^1_{{3}}, \\
\dots & \ .
\end{split}
\end{equation}
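The first two of these shift constants can be verified directly from the Laurent expansions: with $p_1=z+\sum_{k\geq 0}H^1_kz^{-k}$, the constant terms of the right-hand sides of the first two equations (\ref{pi-modbig}) must reproduce $H^2_0$ and $H^3_0$. A minimal numerical sketch in Python (exact rational arithmetic; the sample values of $H^1_k$ are arbitrary):

```python
from fractions import Fraction as F

def mul(a, b):
    """Product of finite Laurent series given as {exponent: coefficient}."""
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            out[ea + eb] = out.get(ea + eb, F(0)) + ca * cb
    return out

def add(a, b, s):
    """a + s*b for series a, b and a scalar s."""
    out = dict(a)
    for e, c in b.items():
        out[e] = out.get(e, F(0)) + s * c
    return out

# p_1 = z + H^1_0 + H^1_1/z + ... with arbitrary rational sample values
H = {0: F(2), 1: F(3), 2: F(5), 3: F(7), 4: F(11)}
p1 = {1: F(1)}
for k, v in H.items():
    p1[-k] = v

p1sq = mul(p1, p1)
p2 = add(p1sq, p1, -2 * H[0])                    # p_2 = p_1^2 - 2 H^1_0 p_1
p3 = add(add(mul(p1sq, p1), p1sq, -3 * H[0]),    # p_3 = p_1^3 - 3 H^1_0 p_1^2
         p1, -3 * (H[1] - H[0] ** 2))            #  - 3 (H^1_1 - (H^1_0)^2) p_1

H2_0 = 2 * H[1] - H[0] ** 2                      # claimed shift constants
H3_0 = -3 * H[0] * H[1] + H[0] ** 3 + 3 * H[2]
print(p2[0] == H2_0, p3[0] == H3_0)              # -> True True
```

Higher shift constants can be checked in the same way once the corresponding coefficients $u_k$ of $p_4,p_5,\dots$ are specified.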
So all the results for the stratum $\Sigma_0$ are easily obtainable from those for the big cell. Formally, one can consider these two cases as two special reductions of a more general family of normal rational curves defined by the equations
\begin{equation}
\begin{split}
p_2=& {p_1}^2+u_{21}{p_1}+u_{20}, \\
p_3=& {p_1}^3+u_{32}{p_1}^2+u_{31}{p_1}+u_{30}, \\
\dots &\\
p_n=& {p_1}^n+\sum_{k=0}^{n-1}u_{nk}{p_1}^k, \qquad n=4,5,6,\dots
\end{split}
\end{equation}
where $u_{nm}$ are parameters. Such a class of normal rational curves is invariant under the shifts
\begin{equation}
p_n\to p_n+\alpha_n, \qquad n=1,2,3,\dots
\end{equation}
where $\alpha_n$ are arbitrary parameters. This invariance allows us to fix the infinite (countable) set of parameters
$u_{nm}$. The gauge in which $u_{n\ n-1}=0$ corresponds to normal rational curves from the big cell. In the gauge $u_{n0}=0$ one has the normal rational curves from $\Sigma_0$.\par
A similar situation takes place for other strata for which $0 \notin S$. For this reason, in the rest of the paper we will consider only strata for which $0 \in S$.
\section{Stratum $\Sigma_1$. Elliptic curve and its coordinate ring.}
\label{firststrat}
For the stratum $\Sigma_1$ one has $S=\{-1,0,2,3,\dots\}$ and the element of the first order $z+O(z^{-1})$ is absent in the basis. Hence, positive order elements of the canonical basis in $\Sigma_1$ have the form
\begin{equation}
\label{1stratser}
\begin{split}
p_0(z)=&1+\sum_{k=1}^{\infty} \frac{H^0_k}{z^k}, \\
p_i(z)=&z^i+H^i_{-1}z+\sum_{k=1}^{\infty} \frac{H^i_k}{z^k}, \qquad i=2,3,4,\dots.
\end{split}
\end{equation}
Similarly to the stratum $\Sigma_0$, the element $p_{-1}$ cannot belong to a point in the subset of $\Sigma_1$
closed with respect to multiplication.
\begin{lem}
\label{lem1strat}
Laurent series (\ref{1stratser}) obey the equations
\begin{equation}
\label{1stratalg}
p_i(z) p_j(z) = \sum_{l=0,2,3,\dots} C^l_{ij} p_l(z), \qquad i,j=0,2,3,4,\dots
\end{equation}
if and only if the parameters $H^i_k$ satisfy
\begin{equation}
\label{1strat0coeff}
H^0_k=0, \qquad k=1,2,3,\dots
\end{equation}
and
\begin{equation}
\label{1stratsercoeff}
\begin{split}
H^i_{j+l}&+H^j_{i+l}+H^j_{-1}H^i_{l+1}+H^i_{-1}H^j_{l+1}+\sum_{n=1}^{l-1}H^j_n H^i_{l-n} = \\
&
H^{i+j}_l+H^j_{-1}H^{i+1}_l+H^i_{-1}H^{j+1}_l+\sum_{n=2}^{i-1}H^{j}_{i-n}H^n_l+\sum_{n=2}^{j-1}H^{i}_{j-n}H^n_l+
H^i_{-1}H^j_{-1}H^2_l+\\&(H^i_j+H^j_i+H^i_{-1}H^j_1+H^i_{1}H^j_{-1})\delta^l_0,
\qquad i,j=2,3,4,\dots, \ l=-1,1,2,3,\dots.
\end{split}
\end{equation}
\end{lem}
{\bf Proof} The proof is similar to the case of $\Sigma_\varnothing$. Considering positive powers of $z$ on both sides of (\ref{1stratalg}), one gets
\begin{equation}
\label{1stratstructcoeff}
C^l_{ij}=\delta^l_{i+j}+ H^j_{-1} \delta^l_{i+1} + H^i_{-1} \delta^l_{j+1}+H^j_{i-l}+H^i_{j-l}+H^i_{-1}H^j_{-1}\delta^l_2 +\left( H^i_j+ H^j_i + H^i_{-1} H^j_1+H^i_1 H^j_{-1} \right)\delta^l_0 .
\end{equation}
Comparison of negative powers gives formula (\ref{1stratsercoeff}).
$\square$\par
As a consequence of Lemma \ref{lem1strat} one has
\begin{prop}
\label{prop1strat}
The stratum $\Sigma_1$ contains the subset $W_1$ closed with respect to pointwise multiplication $W_1(H) \cdot W_1(H) \subset W_1(H)$. Elements of $W_1$ are vector spaces with basis $\langle p_i \rangle_i$ with $H^i_k$ satisfying the conditions (\ref{1strat0coeff}) and (\ref{1stratsercoeff}). Moreover Codim$(W_1)=$card$(\mathbb{N}-S_{W_1})=1$.
\end{prop}
The subset $W_1$ is the infinite family of infinite-dimensional commutative associative algebras $A_{\Sigma_1}$ with the basis $(1,p_2,p_3,p_4,\dots)$ and corresponding structure constants $C^l_{ij}$ given by (\ref{1stratstructcoeff}).\par
An analysis of the multiplication table (\ref{1stratalg}), i.e.
\begin{eqnarray}
\label{1strat22}
{p_2}^2&=&p_4+2H^2_{-1}p_3+(H^2_{-1})^2p_2 +2H^2_2+2H^2_1H^2_{-1}, \\
\label{1strat23}
{p_2}p_3&=&p_{{5}}+H^2_{{-1}}p_{{4}}+H^3_{{-1}}p_{{3}}+ \left( H^2
_{{-1}}H^3_{{-1}}+H^2_{{1}} \right) p_{{2}}+H^2_{{-1}}H^3_{{1}}+H^2_{{1}}H^3_{{-1}}
+H^2_{{3}}+H^3_{{2}},\\
\label{1strat24}
{p_2}p_4&=&p_{{6}}+H^2_{{-1}}p_{{5}}+ \left( H^2_{{1}}+H^4_{{-1}}
\right) p_{{3}}+ \left( H^2_{{-1}}H^4_{{-1}}+H^2_{{2}}
\right) p_{{2}}+H^2_{{4}}+H^2_{{-1}}H^4_{{1}}\\&&+H^2
_{{1}}H^4_{{-1}}+H^4_{{2}}, \nonumber\\
\label{1strat33}
{p_3}^2&=&p_{{6}}+2\,H^3_{{-1}}p_{{4}}+ \left( {H^3_{{-1}}}^{2}+2\,H^3_{{1}} \right) p_{{2}}
+2\,H^3_{{3}}+2\,H^3_{{-1}}H^3_{{1}}, \\
{p_2}p_5 &= & p_{{7}}+H^2_{{-1}}p_{{6}}+H^2_{{1}}p_{{4}}+
\left( H^2_{{2}}+H^5_{{-1}} \right) p_{{3}}
+ \left( H^2_{{3}}+H^2_{{-1}}H^5_{{-1}} \right) p_{{2}}+H^2_{{-1}}
H^5_{{1}}+H^5_{{2}} \label{1strat25} \\ &&
+H^2_{{5}}+H^2_{{1}}H^5_{{-1}}, \nonumber \\
{p_3}p_4 &= & p_{{7}}+H^3_{{-1}}p_{{5}}+H^4_{{-1}}p_{{4}}+
H^3_{{1}}p_{{3}}+ \left( H^3_{{2}}+H^3_{{-1}}H^4_{{-1}}
+H^4_{{1}} \right) p_{{2}}+H^4_{{3}}+H^3_{{4}}\label{1strat34} \\&&
+H^3_{{-1}}H^4_{{1}}+H^3_{{1}}H^4_{{-1}} \nonumber
\end{eqnarray}
and so on, shows that the algebra $A_{\Sigma_1}$ at fixed $H^j_k$ is the polynomial algebra generated by $p_0=1,p_2,p_3$.
However, the formulae (\ref{1strat24}) and (\ref{1strat33}) immediately indicate that these generators are not free. Indeed, subtracting (\ref{1strat33}) from (\ref{1strat24}), one first eliminates $p_6$ and then, using (\ref{1strat22}) and (\ref{1strat23}), gets
\begin{equation}
\label{1stratellcurv}
\begin{split}
\mathcal{C}_6=& {p_3}^{2}-{p_2}p_4+H^2_{-1}p_5+2H^3_{-1}p_4+(H^2_{1}+H^4_{-1})p_3
+(-{H^3_{{-1}}}^{2}-2\,H^3_{{1}}+H^2_{{-1}}H^4_{{-1}}+H^2_{{2}})p_2\\&
-2\,H^3_{{-1}}p_{{4}}-2\,H^3_{{3}}
-2\,H^3_{{-1}}H^3_{{1}}+H^2_{{-1}}p_{{5}}+H^2_{{4}}
+H^2_{{-1}}H^4_{{1}}+H^2_{{1}}H^4_{{-1}}+H^4_{{2}}
\\=&{p_3}^{2}-{p_2}^{3}+3\,H^2_{-1}p_3\,p_2-2\,
H^3_{{-1}}{p_2}^{2}+ \left( {H^2_{{-1}}}^{3}+3\,H^2
_{{1}}+H^2_{{-1}}H^3_{{-1}} \right) p_3 \\
&- \left( {H^3_{{-1}}}^{2}+2\,H^3_{{1}}-3\,H^2_{{-1}}H^2_{{1}}
-3\,H^2_{{2}}+H^3_{{-1}}{H^2_{{-1}}}^{2} \right) {p_2}\\
&+3\,H^2_{{4}}+3\,H^2_{{3}}H^2_{{-1}}
+3\,H^2_{{2}}{H^2_{{-1}}}^{2}+3\,{H^2_{{1}}}^{2}-2\,H^3_{{-1}}H^3
_{{1}}-2\,H^3_{{3}}-3\,H^2_{{-1}}H^3_{{2}}
-3\,{H^2_{{-1}}}^{2}H^3_{{1}}\\&
+H^2_{{-1}}H^3_{{-1}}H^2_{{1}}+4\,H^2_{{2}}H^3_{{-1}} =0.
\end{split}
\end{equation}
This constraint, due to (\ref{1stratalg}), leads to the following
constraints on $H^2_i$ and $H^3_i$
\begin{equation}
\label{1strat23rel}
\begin{split}
H^3_2=&\frac{3}{2}\,H^2_{{3}}-\frac{1}{2}\,H^2_{{-1}}H^3_{{1}}+\frac{1}{2}\,H^3_{{-1}}H^2_{{1}},\\
H^3_4=& \frac{1}{2}\,H^2_{{-1}}H^2_{{2}}H^3_{{-1}}
+\frac{1}{2}\,H^3_{{-1}}H^2_{{3}}+\frac{1}{4}\,{H^2_{{-1}}}^{3}H^3_{{1}}
-\frac{1}{2}\,H^3_{{1}}H^2_{{1}}+\frac{3}{2}\,H^2_{{4}}H^2_{{-1}}
+\frac{3}{2}\,H^2_{{2}}H^2_{{1}}+\frac{3}{2}\,H^2_{{5}}
\\&-\frac{3}{4}\,H^2_{{3}}{H^2_{{-1}}}^{2}-\frac{3}{2}\,H^2_{{-1}}H^3_{{3}}
-\frac{1}{4}\,{H^2_{{-1}}}^{2}H^3_{{-1}}H^2_{{1}},\\
H^3_5=& \frac{3}{4}\,H^2_{{1}}H^2_{{3}}-\frac{3}{4}\,H^2_{{-1}}H^2_{{5}}
+\frac{3}{2}\,H^2_{{6}}-\frac{3}{4}\,H^2_{{4}}{H^2_{{-1}}}^{2}
+2\,H^3_{{-1}}H^2_{{4}}+\frac{1}{4}\,H^3_{{-1}}{H^2_{{1}}}^{2}
+\frac{1}{2}\,{H^3_{{-1}}}^{2}H^2_{{2}}-H^3_{{-1}}H^3_{{3}}
\\&-\frac{1}{4}\,H^2_{{-1}}H^2_{{1}}{H^3_{{-1}}}^{2}
-\frac{1}{4}\,H^3_{{-1}}H^2_{{2}}{H^2_{{-1}}}^{2}
-H^3_{{-1}}H^2_{{-1}}H^2_{{3}}+\frac{1}{4}\,H^3_{{-1}}{H^2_{{-1}}}^{2}H^3_{{1}}
-\frac{3}{4}\,H^2_{{1}}H^2_{{-1}}H^2_{{2}}
\\&+\frac{1}{8}\,{H^2_{{-1}}}^{3}H^3_{{-1}}H^2_{{1}}
+\frac{3}{4}\,{H^2_{{-1}}}^{2}H^3_{{3}}-\frac{1}{2}\,{H^3_{{1}}}^{2}
-\frac{1}{8}\,{H^2_{{-1}}}^{4}H^3_{{1}}+\frac{3}{8}\,H^2_{{3}}{H^2_{{-1}}}^{3}
+H^2_{{2}}H^3_{{1}},\\
&\dots
\end{split}
\end{equation}
It is not difficult to see that the conditions (\ref{1strat23rel}) form a subset of the system (\ref{1stratsercoeff}). The coefficient $H^3_2$ appears in the curve (\ref{1stratellcurv}) itself which, after the substitution, becomes
\begin{equation}
\label{1stratellcurv-red}
\begin{split}
\mathcal{F}^1_{23}=&{p_3}^{2}-{p_2}^{3}+3\,H^2_{-1}p_3\,p_2-2\,
H^3_{{-1}}{p_2}^{2}+ \left( {H^2_{{-1}}}^{3}+3\,H^2
_{{1}}+H^2_{{-1}}H^3_{{-1}} \right) p_3 \\
&- \left( {H^3_{{-1}}}^{2}+2\,H^3_{{1}}-3\,H^2_{{-1}}H^2_{{1}}
-3\,H^2_{{2}}+H^3_{{-1}}{H^2_{{-1}}}^{2} \right) {p_2}\\
&-2\,H^3_{{3}}-2\,H^3_{{-1}}H^3_{{1}}+3\,H^2_{{4}}+
3\,H^2_{{2}}{H^2_{{-1}}}^{2}-\frac{3}{2}\,H^2_{{-1}}H^2_{{
3}}+3\,{H^2_{{1}}}^{2}-\frac{3}{2}\,{H^2_{{-1}}}^{2}H^3_{{1}}\\
&-\frac{1}{2}\,H^2_{{-1}}H^2_{{1}}H^3_{{-1}}+4\,H^3_{{-1}}
H^2_{{2}} =0.
\end{split}
\end{equation}
The constraint (\ref{1stratellcurv}) also implies that any element $p_n \in A_{\Sigma_1}$ has the form
\begin{equation}
\label{1stratunique}
p_n=\alpha_n(p_2)+\beta_n(p_2)p_3, \qquad n=2,3,4,5,6,\dots
\end{equation}
where $\alpha_n$ and $\beta_n$ are certain polynomials of orders
$\left[\frac{n}{2}\right]$ and $\left[\frac{n-3}{2}\right]$,
respectively. \par The system of equations (\ref{1stratsercoeff})
gives rise to infinitely many other constraints between $p_2$ and
$p_3$. For instance, subtracting formula (\ref{1strat34}) from
(\ref{1strat25}) and expressing $p_4,p_5,p_6$ via $p_2$ and $p_3$,
one gets
\begin{equation}
\label{1stratC7}
\begin{split}
\mathcal{C}_7=& p_2p_5-p_3p_4 +\dots\\
=& H^2_{-1}{p_3}^2-H^2_{-1}{p_2}^3-3\,{H^2_{{-1}}}^{2}p_2p_3 +2\,H^2_{{-1}}H^3_{{-1}}{p_2}^2+
(-3\,H^2_{{1}}H^2_{{-1}}-{H^2_{{-1}}}^{2}H^3_{{-1}}
-{H^2_{{-1}}}^{4})p_3\\&
+(2\,H^2_{{-1}}H^3_{{1}}-3\,H^2_{{-1}}H^2_{{2}}
-3\,H^2_{{1}}{H^2_{{-1}}}^{2}+{H^2_{{-1}}}^{3}H^3_{{-1}
}+H^2_{{-1}}{H^3_{{-1}}}^{2})p_2+
2\,H^2_{{-1}}H^3_{{-1}}H^3_{{1}}\\&-3\,{H^2_{{-1}}}^{
2}H^2_{{3}}+3\,{H^2_{{-1}}}^{3}H^3_{{1}}+3\,{H^2_{
{-1}}}^{2}H^3_{{2}}-3\,H^2_{{2}}{H^2_{{-1}}}^{3}
-{H^2_{{-1}}}^{2}H^3_{{-1}}H^2_{{1}}-3\,H^2_{{4}}H^2
_{{-1}}\\&+2\,H^2_{{-1}}H^3_{{3}}-3\,H^2_{{-1}}{H^2_{
{1}}}^{2}-4\,H^2_{{-1}}H^2_{{2}}H^3_{{-1}}
=0.
\end{split}
\end{equation}
It is easy to see that
\begin{equation}
\mathcal{C}_7=H^2_{-1}\mathcal{F}^1_{23},
\end{equation}
i.e. the constraint (\ref{1stratC7}) is satisfied due to the constraint (\ref{1stratellcurv}).\\
One observes that other simple constraints obtained in this way
have similar properties
\begin{equation}
\label{strat1-C8C9}
\begin{split}
\mathcal{C}_8=& {p_4}^2-p_{3}p_5+\dots={p_2}^4-{p_3}^2p_2+\dots = -p_2 \mathcal{F}^1_{23},\\
\mathcal{C}_9=& {p_3}p_6-p_{4}p_5+\dots={p_3}^3-{p_2}^3{p_3}+\dots = \left( p_{{3}}-3\,H^2_{{-1}}p_{{2}}-3\,H^2_{{1}}-{H^2
_{{-1}}}^{3}-H^2_{{-1}}H^3_{{-1}} \right) \mathcal{F}^1_{23}.
\end{split}
\end{equation}
For other examples see Appendix \ref{App-1strat}.\par
In general one has the following
\begin{lem}
\label{lemC6}
For any constraint
\begin{equation}
\label{effe}
f(p_2,p_3)=0
\end{equation}
arising from the system (\ref{1stratsercoeff}) the polynomial $f(p_2,p_3)$ is in the ideal generated by $\mathcal{F}^1_{23}$.
\end{lem}
{\bf Proof}
Let us assume that $f(p_2,p_3)$ is not in the ideal generated by $\mathcal{F}^1_{23}$, i.e.
\begin{equation}
\label{effeqG}
f(p_2,p_3)=q(p_2,p_3) \mathcal{F}^1_{23}+R(p_2,p_3)
\end{equation}
where $q(p_2,p_3)$ is a certain polynomial and the remainder $R(p_2,p_3)$
is not identically zero. Since
$R(p_2,p_3)=f(p_2,p_3)|_{\mathcal{F}^1_{23}=0}$, the remainder
$R(p_2,p_3)$ has the form
\begin{equation}
\label{Gpol}
R(p_2,p_3)=A(p_2)+B(p_2)p_3
\end{equation}
where $A$ and $B$ are certain polynomials. So our assumption, due to (\ref{effe},\ref{effeqG},\ref{Gpol}), is equivalent to the existence of $A$ and $B$, not both identically zero, such that
\begin{equation}
\label{restozero}
A(p_2)+B(p_2)p_3=0.
\end{equation}
The point is that such polynomials $A$ and $B$ do not exist. Indeed, the l.h.s. of (\ref{restozero}) is a polynomial in $p_2$ and $p_3$ of certain order and, hence, can be written as $\sum_{k=0,2,3,\dots}^n \gamma_k p_k$. Since $p_0,p_2,p_3,p_4,\dots$ are elements of a basis in ${\Sigma_1}$, the condition (\ref{restozero}) is satisfied iff all $\gamma_k=0$. One arrives at the same conclusion considering the representation of $p_2$ and $p_3$ as Laurent series
(\ref{1stratser}).
$\square$\par
This lemma leads to
\begin{prop}
The algebra $A_{\Sigma_1}$ at fixed $H^j_k$ is isomorphic to the algebra $\mathbb{C}[p_2,p_3]\slash \mathcal{F}^1_{23}$.
\end{prop}
Similarly to $\Sigma_\varnothing$, one can treat $p_2(z),p_3(z),\dots$
for given $H^i_k$ and variable $z$ as local coordinates in
$\Sigma_1$. In such an interpretation the condition (\ref{1stratalg})
becomes a set of constraints on the coordinates, and one has
\begin{prop}
\label{tower-1strat}
The stratum $\Sigma_1$ contains an infinite family $\Gamma^1_{\infty}$ of infinite dimensional varieties which are intersection of the quadrics
\begin{equation}
\label{1stratquadcurv}
\begin{split}
f_{ij}^{(1)}=&p_ip_j-p_{i+j}- H^j_{-1}p_{i+1} - H^i_{-1}p_{j+1} -
\sum_{l=2}^{i-1} H^j_{i-l} p_l - \sum_{l=2}^{j-1} H^i_{j-l} p_l \\
&- H^i_{-1}H^j_{-1}p_2 - \left( H^i_j+ H^j_i + H^i_{-1}
H^j_1+H^i_1 H^j_{-1} \right)=0,
\quad i,j = 2,3,4, \dots .
\end{split}
\end{equation}
and parameterized by the variables $H^j_k$ $(j=2,3,\dots)$ obeying
the algebraic equations (\ref{1stratsercoeff}). This family
$\Gamma^1_\infty$ is the infinite tower of algebraic curves of
genus $1$ with the elliptic curve in the base.
\end{prop}
{\bf Proof} As was shown above, the relations
(\ref{1stratquadcurv}) are equivalent to the following
\begin{equation}
\label{1strat-h1n}
\begin{split}
\mathcal{F}^1_{23}&=0, \qquad
h^{(1)}_n=p_n-\alpha_n(p_2)-\beta_n(p_2)p_3 =0, \qquad n=4,5,6,\dots .
\end{split}
\end{equation}
So, in the subspace $(p_2,p_3)$ one has, for given $H^i_j$, an
elliptic curve which generically has genus $1$. In the
three-dimensional space $(p_2,p_3,p_4)$ one has a curve which is
the intersection of the cylindrical surface generated by the
elliptic curve and the quadric
\begin{equation}
h^{(1)}_4=p_4-{p_{{2}}}^{2}+2\,H^2_{{-1}}p_{{3}}+{H^2_{{-1}}}^{2}p_{{2}}+2\,H^2_{{-1}}H^2_{{1}}+2\,H^2_{{2}}.
\end{equation}
In the $d$-dimensional subspace one has the curve with the ideal
\begin{equation}
I(\Gamma_d^1)=\langle \mathcal{F}^1_{23},h^{(1)}_4,h^{(1)}_5,\dots,h^{(1)}_{d+1}\rangle.
\end{equation}
$\square$\par
Moduli $g_2$, $g_3$ (see e.g. \cite{HC,Sil}) of the elliptic curves (\ref{1stratellcurv}) are equal to
\begin{equation}
\label{1stratg2g3}
\begin{split}
g_2=& \frac{3}{2}\,H^2_{{1}}H^2_{{-1}}-\frac{1}{3}\,{H^3_{{-1}}}^{2}-3\,H^2_{{2}}
-\frac{1}{2}\,{H^2_{{-1}}}^{2}H^3_{{-1}}+2\,H^3_{{1}}-\frac{3}{16}\,{H^2_{{-1}}}^{4},\\
g_3=&2\,H^3_{{3}}-3\,H^2_{{4}}-\frac{3}{4}\,{H^2_{{1}}}^{2}
+\frac{2}{3}\,H^3_{{-1}}H^3_{{1}}+\frac{3}{2}\,H^2_{{-1}}H^2_{{3}}
+H^2_{{-1}}H^2_{{1}}H^3_{{-1}}-2\,H^3_{{-1}}H^2_{{2
}}-\frac{3}{4}\,H^2_{{2}}{H^2_{{-1}}}^{2}\\
&-{\frac
{2}{27}}\,{H^3_{{-1}}}^{3}-\frac{1}{8}\,{H^2_{{-1}}}^{4}H^3_{{-1}}-\frac{1}{6}\,{H^2_
{{-1}}}^{2}{H^3_{{-1}}}^{2}+\frac{3}{8}\,{H^2_{{-1}}}^{3}H^2_{{
1}}-\frac{1}{32}\,{H^2_{{-1}}}^{6}
\end{split}
\end{equation}
and the $J$-invariant is $J=1728\frac{4{g_2}^3}{\Delta}$ where the discriminant is $\Delta=-16\,(4{g_{{2}}}^{3}+27\,{g_{{3}}}^{2})$; its explicit form in terms of $H^i_j$ is given in Appendix \ref{App-1strat}. \par
It follows from equations (\ref{1stratsercoeff}) that all $H^i_j$ can be expressed (polynomially) in terms of $H^2_i$ $(i=-1,1,2,3,\dots)$, $H^3_{-1},H^3_{1}$, and $H^3_{3}$. For instance
\begin{equation}
H^3_2=\frac{3}{2}\,H^2_{{3}}-\frac{1}{2}\,H^2_{{-1}}H^3_{{1}}+\frac{1}{2}\,H^3_{{-1}}H^2_{{1}}.
\end{equation}
Thus the family of curves $\Gamma^1_{\infty}$ is parameterized by
$H^2_{-1},H^2_1,H^2_2,\dots, H^3_{-1},H^3_1,$ and $H^3_3$.\par
We emphasize that the elliptic curve $\mathcal{F}^1_{23}$ and its coordinate ring
correspond to a point in $W_1$. Hence, in the stratum $\Sigma_1$ one may refer to such a point
as an {\it elliptic point} and $W_1$ as an {\it elliptic subset} of $\Sigma_1$.
\begin{prop}
Index$({\overline{\partial}_{W_1}})=-1$.
\end{prop}
{\bf Proof } Since $S_{W_1}=\mathbb{N}-\{1\}$, one has $S_{\widetilde{W}_1}=\{-1,1,2,3,\dots \}$. Hence,
Index$({\overline{\partial}_{W_1}})=$card$(\varnothing)$-card$(\{-1\})$=-1. $\square$\par
The coordinate ring of the elliptic curve (\ref{1stratellcurv-red})
contains various higher order singular algebraic curves. One
example is the singular hyperelliptic curve
\begin{equation}
\label{1F25}
\begin{split}
\mathcal{F}^1_{25}=&{p_5}^2-{p_2}^5+5\,H^2_{{-1}}{p_2}^2p_5+\left(5\,H^2_{{1}}+5\,{H^2_{{-1}}}^{3}\right)p_2p_5+\left(-2\,{H^2_{{-1}}}^{2}H^3_{{-1}}+11\,H^2_{{-1}}H^2_{
{1}}+2\,{H^3_{{-1}}}^{2}\right.\\&\left.-2\,{H^2_{{-1}}}^{4}+3\,H^2_{{2
}}-2\,H^3_{{1}}
\right){p_2}^3+\left(-H^2_{{-1}}{H^3_{{-1}}}^{2}+{H^2_{{-1}}}^{3}H^3_{{
-1}}+2\,H^2_{{1}}{H^2_{{-1}}}^{2}-4\,H^2_{{2}}H^2_
{{-1}}\right.\\&\left.+H^2_{{-1}}H^3_{{1}}+2\,{H^2_{{-1}}}^{5}
+5\,H^2_{{3}}\right)p_5+D_4\,p_4+D_2\,p_2 +D_0=0
\end{split}
\end{equation}
whose coefficients are given in Appendix \ref{App-1strat}.
This plane quintic has genus $1$ and
\begin{equation}
\begin{split}
&\mathcal{F}^1_{25}=\left(p_{{5}}+4\,H^2_{{-1}}{p_{{2}}}^{2}
+ \left( -H^2_{{-1}}H^3_{{-1}}+p_{{3}}+6\,{H^2_{{-1}}}^{3}+4\,H^2_{{1}}
\right) p_{{2}}+\frac{1}{2}\,H^2_{{-1}}H^3_{{1}}-H^2_{{-1}}{
H^3_{{-1}}}^{2}\right.\\&\left.+{H^2_{{-1}}}^{3}H^3_{{-1}}+4\,H^2_
{{1}}{H^2_{{-1}}}^{2}-2\,H^2_{{2}}H^2_{{-1}}+\frac{5}{2}\,H^2_{{3}}-p_{{3}}H^3_{{-1}}-\frac{3}{2}\,H^3_{{-1}}H^2_{{1}}+
2\,p_{{3}}{H^2_{{-1}}}^{2}+2\,{H^2_{{-1}}}^{5}\right)h_5^{(1)}
\\&
+\left({p_{{2}}}^{2}+ \left( -2\,H^3_{{-1}}+4\,{H^2_{{-1}}}^{2}
\right) p_{{2}}-4\,{H^2_{{-1}}}^{2}H^3_{{-1}}+{H^3_{
{-1}}}^{2}+4\,{H^2_{{-1}}}^{4}\right)\mathcal{F}^1_{23}.
\end{split}
\end{equation}
A different type of curve contained in the family $\Gamma^1_{\infty}$ is given, for instance, by the trigonal curve
\begin{equation}
\label{1stratTrig}
\begin{split}
&\mathcal{F}^{1}_{34}=
\\&{p_4}^3-{p_3}^4
+4\,H^3_{{-1}}{p_3}{p_4}^2 +\left(3\,{H^2_{{-1}}}^{3}+6\,H^2_{{-1}}H^3_{{-1}}
-6\,H^2_{{1}}\right){p_3}^3+\left(-2\,{H^3_{{-1}}}^{2}+4\,H^3_{{1}}\right){p_4}^2\\&
+\left(-6\,H^2_{{-1}}H^2_{{2}}+3\,H^2_{{1}}{H^2_{{-1}}}^{2}
+4\,H^2_{{-1}}H^3_{{1}}+12\,H^3_{{-1}}H^2_{{1}}-
5\,H^3_{{-1}}{H^2_{{-1}}}^{3}-10\,H^2_{{-1}}{H^3_{{-1}}}^{2}\right){p_3}p_4\\&
+\left(-3\,{H^2_{{-1}}}^{6}-12\,{H^2_{{-1}}}^{4}H^3_{{-1}}
+12\,{H^2_{{-1}}}^{3}H^2_{{1}}
-12\,{H^2_{{-1}}}^{2}{H^3_{{-1}}}^{2}+27\,H^2_{{-1}}H^2_{{1}}H^3_{{-1}}
-15\,{H^2_{{1}}}^{2}
\right.\\&\left.
+3\,H^2_{{3}}H^2_{{-1}}-6\,H^2_{{4}}
-3\,{H^2_{{-1}}}^{2}H^3_{{1}}+3\,H^2_{{2}}{H^2_{{-1}}}^{2}
+4\,H^3_{{-1}}H^3_{{1}}+4\,H^3_{{3}}\right){p_3}^2\\&
+A_4p_4\, +A_3 p_3\, +A_0=0
\end{split}
\end{equation}
where $A_4,A_3$ and $A_0$ are given in the Appendix \ref{App-1strat}. This plane curve has genus $1$. For this curve one has
\begin{equation}
\label{1F34-ideal}
\mathcal{F}^1_{34} =\left({p_4}^2+ap_4p_3+b{p_{3}}^2+cp_4+dp_3+f \right)h_4^{(1)}+ \left(-{p_3}^2+h p_3 + j \right)\mathcal{F}^1_{23}
\end{equation}
where the coefficients $a,b,c,d,f,h,j$ are given in Appendix \ref{App-1strat}.\par
Finally, the projection of the curve in the four-dimensional space $(p_2,p_3,p_4,p_5)$ defined by the equations
\begin{equation}
\mathcal{F}^1_{23}=0, \qquad h^{(1)}_4=0, \qquad h^{(1)}_5=0
\end{equation}
to the subspace $(p_4,p_5)$ is given by the $(4,5)$ curve
\begin{equation}
\label{1F45}
\mathcal{F}^1_{45}={p_5}^4-{p_4}^5+\dots=0
\end{equation}
where the coefficients are too long to be written here; they can be easily computed by an algebraic manipulator. The curve (\ref{1F45}) is singular and has genus one.\par
\section{Stratum $\Sigma_1$. Weierstrass function reduction}
Here we will study the reduction of the system (\ref{1stratalg})-(\ref{1stratsercoeff}) associated with the celebrated Weierstrass $\wp$-function given by the series (see e.g. \cite{HC})
\begin{equation}
\wp(u)=\frac{1}{u^2}+\sum_{n=2}^\infty c_n u^{2n-2}
\end{equation}
where the coefficients $c_n$ are defined by the recurrence
relation
\begin{equation}
c_n=\frac{3}{(n-3)(2n+1)}\sum_{k=2}^{n-2}c_kc_{n-k}, \qquad n=4,5,6,\dots\ .
\end{equation}
The Weierstrass function $\wp(u)$ and its derivative $\wp'(u)$ obey the equation
\begin{equation}
\label{WPeqn}
\wp'(u)^2=4\wp(u)^3-g_2 \wp (u)-g_3.
\end{equation}
where $g_2=20 c_2$ and $g_3=28c_3$. Equation (\ref{WPeqn}) clearly
indicates that equation (\ref{1stratellcurv-red})
$\mathcal{F}^1_{23}=0$ should admit a reduction for which $p_2$
and $p_3$ are connected with $\wp(u)$ and $\wp'(u)$, respectively.
This is indeed the case, and for such a reduction
\begin{equation}
\label{p23wp}
p_2(z)=\wp(u)|_{u=z^{-1}}\qquad \mathrm{and} \qquad p_3(z)=-\frac{1}{2}\wp'(u)|_{u=z^{-1}},
\end{equation}
i.e. $H^2_{2n}=c_{n+1}$, $n=1,2,3,\dots$, $H^2_{2m+1}=0$, $m=-1,0,1,\dots$, $H^{3}_{k}=-\frac{k+1}{2}H^{2}_{k+1}$, $k=-1,0,1,\dots$ . Then it is straightforward to check that the whole system (\ref{1stratalg})-(\ref{1stratsercoeff}) admits the reduction
\begin{equation}
\label{WPred}
\begin{split}
p_{n+2}=-\frac{1}{(n+1)!}\partial_u^n \wp(u)|_{u=z^{-1}}, \qquad n \ odd \\
p_{n+2}=\frac{1}{(n+1)!}\partial_u^n \wp(u)|_{u=z^{-1}}-\frac{1}{n+1}c_{n/2+1}, \qquad n \ even .
\end{split}
\end{equation}
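The recurrence for the coefficients $c_n$ and equation (\ref{WPeqn}) can be verified order by order with truncated Laurent series. A small Python sketch using exact rational arithmetic (the sample invariants $g_2=4$, $g_3=6$ are arbitrary; only coefficients below the truncation order are compared):

```python
from fractions import Fraction as F

def wp_series(g2, g3, N):
    """Laurent series of wp(u) = u^{-2} + sum_{n>=2} c_n u^{2n-2},
    with c_n from the classical recurrence, as {exponent: coefficient}."""
    c = {2: F(g2, 20), 3: F(g3, 28)}
    for n in range(4, N + 1):
        s = sum(c[k] * c[n - k] for k in range(2, n - 1))
        c[n] = F(3, (2 * n + 1) * (n - 3)) * s
    ser = {-2: F(1)}
    for n in range(2, N + 1):
        ser[2 * n - 2] = c[n]
    return ser

def mul(a, b, cut):
    """Product of truncated Laurent series, keeping exponents <= cut."""
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            if ea + eb <= cut:
                out[ea + eb] = out.get(ea + eb, F(0)) + ca * cb
    return out

g2, g3, N = 4, 6, 8            # arbitrary sample invariants
p = wp_series(g2, g3, N)
dp = {e - 1: e * c for e, c in p.items()}          # wp'(u)
cut = 2 * N - 6                # orders unaffected by the truncation
lhs = mul(dp, dp, cut)                             # wp'^2
rhs = {e: 4 * c for e, c in mul(mul(p, p, cut + 2), p, cut).items()}
for e, c in p.items():                             # 4 wp^3 - g2 wp - g3
    if e <= cut:
        rhs[e] = rhs.get(e, F(0)) - g2 * c
rhs[0] = rhs.get(0, F(0)) - g3
print(all(lhs.get(e, F(0)) == rhs.get(e, F(0))
          for e in set(lhs) | set(rhs) if e <= cut))   # -> True
```

The check is independent of the chosen invariants, since the compared coefficients are polynomial identities in $g_2$, $g_3$.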
Under this reduction equations (\ref{1strat22})-(\ref{1strat34}) take the form
\begin{equation}
\begin{split}
p_2p_2=& p_4 +\frac{1}{10}g_2, \\
p_2p_3=& p_5, \\
p_3p_3=& p_6 -\frac{g_2}{10}p_2-\frac{g_3}{7}, \\
p_2p_4=& p_6 +\frac{g_2}{20}p_2+\frac{3g_3}{28}, \\
\dots &.
\end{split}
\end{equation}
Due to (\ref{WPred}), they are nothing but the classical
equations (see e.g. \cite{HC}) for the Weierstrass function
$\wp(u)$
\begin{equation}
\label{wprel}
\begin{split}
\wp''=&6\wp^2-\frac{g_2}{2},\\
\wp'''=&12\wp \wp', \\
\wp''''=&30 {\wp'}^2+12g_2\wp+18 g_3, \\
\wp''''=& 20 \wp''\wp -8 g_2 \wp -12 g_3,\\
\dots\ . &
\end{split}
\end{equation}
In particular, the last two equations in (\ref{wprel}) together with the first one give equation (\ref{WPeqn}). One can also observe that formula (\ref{1stratunique}) under this reduction becomes the well-known expression (see e.g. \cite{HC})
\begin{equation}
\label{WPpoly}
\alpha_n(\wp(u))+\beta_n(\wp(u))\wp'(u)
\end{equation}
for an elliptic function whose poles lie at the lattice points.\par
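The reduced multiplication table and the relations (\ref{wprel}) can also be checked at a sample point: choosing rational values of $\wp$, $\wp'$, $g_2$, $g_3$ compatible with (\ref{WPeqn}) and using (\ref{wprel}) for the higher derivatives, the first four reduced products can be tested directly. A minimal Python sketch (the rational point is an arbitrary assumption):

```python
from fractions import Fraction as F

# A rational point of y^2 = 4x^3 - g2*x - g3 (arbitrary assumption:
# here 4 - 2 - 1 = 1, so x = 1, y = 1 works).
g2, g3 = F(2), F(1)
x, y = F(1), F(1)              # x = wp(u), y = wp'(u)

wp2 = 6 * x**2 - g2 / 2                    # wp''  from the classical relations
wp3 = 12 * x * y                           # wp'''
wp4 = 30 * y**2 + 12 * g2 * x + 18 * g3    # wp''''

c2, c3 = g2 / 20, g3 / 28
p2, p3 = x, -y / 2
p4 = wp2 / 6 - c2 / 3
p5 = -wp3 / 24
p6 = wp4 / 120 - c3 / 5

print(p2 * p2 == p4 + g2 / 10,
      p2 * p3 == p5,
      p3 * p3 == p6 - g2 / 10 * p2 - g3 / 7,
      p2 * p4 == p6 + g2 / 20 * p2 + 3 * g3 / 28)   # -> True True True True
```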
A purely algebraic characterization of the reduction (\ref{WPred}) is an interesting open problem. \par
For the Weierstrass reduction
\begin{equation}
\label{WP2P5}
p_2=\wp(u)|_{u=1/z}, \qquad p_5=-\frac{1}{24}\wp'''(u)|_{u=1/z}
\end{equation}
the hyperelliptic curve (\ref{1F25}) has the form
\begin{equation}
\label{WP125}
{\wp'''(u)}^2-576{\wp(u)}^5+144g_2{\wp(u)}^3+144g_3{\wp(u)}^2=0,\qquad u=1/z
\end{equation}
which is a consequence of the formula (\ref{WPeqn}). It has,
obviously, genus one. The formula (\ref{WP2P5}) reproduces the
well-known parametrization of the fifth-order hyperelliptic curve
of genus one in terms of the Weierstrass $\wp$-function.\par
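The identity (\ref{WP125}) follows from $\wp'''=12\wp\wp'$ and (\ref{WPeqn}) alone, and can be checked at any rational point of the curve $y^2=4x^3-g_2x-g_3$; a short Python sketch (the sample values are arbitrary):

```python
from fractions import Fraction as F

# Arbitrary rational point of y^2 = 4x^3 - g2*x - g3: here 4 - 2 - 1 = 1
g2, g3 = F(2), F(1)
x, y = F(1), F(1)              # x = wp(u), y = wp'(u)
assert y**2 == 4 * x**3 - g2 * x - g3

wp3 = 12 * x * y               # wp''' = 12 wp wp'
print(wp3**2 - 576 * x**5 + 144 * g2 * x**3 + 144 * g3 * x**2 == 0)   # -> True
```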
The Weierstrass reduction (\ref{WPred}) of the trigonal curve
(\ref{1stratTrig}) is given by
\begin{equation}
\begin{split}
p_3=&-\frac{1}{2}\wp'(u)\Big{|}_{u=z^{-1}}, \\
p_4=&\frac{1}{6} \wp''(u)\Big{|}_{u=z^{-1}}-\frac{g_2}{60} =\wp^2(u)\Big{|}_{u=z^{-1}}-\frac{g_2}{10}
\end{split}
\end{equation}
while the curve (\ref{1stratTrig}) takes the form
\begin{equation}
\label{trigwp}
\left({\wp'(u)}^2+g_3\right)^2=\frac{2}{27}\left(\wp''(u)-g_2\right)^2\left(\wp''(u)+\frac{g_2}{2}\right).
\end{equation}
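Equation (\ref{trigwp}) is likewise an algebraic consequence of $\wp''=6\wp^2-g_2/2$ and (\ref{WPeqn}); a quick numerical check in Python with exact rationals (the chosen sample point is arbitrary):

```python
from fractions import Fraction as F

# Same kind of sample point; w plays the role of wp''(u)
g2, g3 = F(2), F(1)
x, y = F(1), F(1)              # y^2 = 4x^3 - g2*x - g3 holds: 1 = 1
w = 6 * x**2 - g2 / 2          # wp'' = 6 wp^2 - g2/2

lhs = (y**2 + g3)**2
rhs = F(2, 27) * (w - g2)**2 * (w + g2 / 2)
print(lhs == rhs)              # -> True
```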
Finally, for the Weierstrass reduction, i.e. for
\begin{equation}
\label{p4p5WP}
\begin{split}
p_4=&\frac{1}{6}\wp''(u)|_{u=z^{-1}}-\frac{g_2}{60},\\
p_5=&-\frac{1}{24}\wp'''(u)|_{u=z^{-1}}
\end{split}
\end{equation}
the curve (\ref{1F45}) is
\begin{equation}
\begin{split}
&3(\wp'''(u))^4 - 128(\wp''(u))^5+64\,g_{{2}}(\wp''(u))^4+144\,g_{{3}}(\wp'''(u))^2\wp''(u)
+160\,{g_{{2}}}^{2}(\wp''(u))^3\\&+72\,g_{{2}}g_{{3}}(\wp'''(u))^2+(-16\,{g_{{2}}}^{3}
+1728\,{g_{{3}}}^{2})(\wp''(u))^2+(-64\,{g_{{2}}}^{4}+1728\,{g_{{3}}}^{2}g_{{2}})\wp''(u)
\\&+432\,{g_{{3}}}^{2}{g_{{2}}}^{2}-16\,{g_{{2}}}^{5}=0.
\end{split}
\end{equation}
It is straightforward to check that this equation is satisfied due
to equations (\ref{wprel}) and (\ref{1stratTrig}). The formula
(\ref{p4p5WP}) gives us a parameterization of the genus one
$(4,5)$ curve (\ref{1F45}) in terms of the Weierstrass $\wp$-function.
\par In a similar manner one can get the Weierstrass function
parameterization of $(n,n+1)$ curves, $n=5,6,7,\dots$, of genus one.
\section{Stratum $\Sigma_{1,2}$. Trigonal curve of genus two}
Now we will consider the stratum $\Sigma_{1,2}$ which corresponds to the set $S=\{-2,-1,0,3,4,\dots\}$. The first and second order elements are absent and, hence, the positive order elements of the canonical basis are
\begin{equation}
\label{2stratLaurbas}
\begin{split}
p_0&=1+\sum_{k=1}^{\infty} \frac{H^0_{k}}{z^k},\\
p_i&=z^i+H^i_{-2}z^2+H^i_{-1}z+\sum_{k=1}^{\infty}\frac{H^i_{k}}{z^k}, \qquad i=3,4,5,\dots\ .
\end{split}
\end{equation}
In this case $(p_{-2})^2,(p_{-1})^2 \notin \langle p_i\rangle_{i=-2,-1,0,3,4,\dots}$,
and, hence, only the elements of positive order can be involved in the subset closed with respect to pointwise multiplication.
\begin{lem}
\label{lem2stratalg}
Laurent series (\ref{2stratLaurbas}) obey the equations
\begin{equation}
\label{2stratalg}
p_n(z) p_m(z)=\sum_{l=0,3,4,\dots}C_{nm}^l p_l(z), \qquad n,m=0,3,4,5,\dots
\end{equation}
if and only if
\begin{equation}
H^0_k=0,\qquad k=1,2,\dots
\end{equation}
and
\begin{equation}
\label{alg2-comp}
\begin{split}
&H^m_{n+l}+H^n_{m+l}+\sum_{i=1}^2 (H^n_{-i}H^m_{l+i}+H^m_{-i}H^n_{l+i})+\sum_{i=1}^{l-1} (H^n_{i}H^m_{l-i}+H^m_{i}H^n_{l-i})
\\
&=H^{n+m}_l+\sum_{i=m+1}^{m+2} H^n_{m-i} H^{i}_l+\sum_{i=3}^{m-1} H^n_{m-i} H^{i}_l
+\sum_{i=n+1}^{n+2}H^m_{n-i} H^{i}_l+\sum_{i=3}^{n-1} H^m_{n-i} H^{i}_l+H^n_m+H^m_n \\
&+H^n_{-2}H^m_{-2}H^{4}_l+(H^n_{-1}H^m_{-2}+H^n_{-2}H^m_{-1})H^{3}_l+\sum_{i=1}^2(H^n_{-i}H^m_i+H^m_{-i}H^n_i)\delta^0_l, \qquad n,m=3,4,5,\dots\ .
\end{split}
\end{equation}
The constants $C^l_{nm}$ have the form
\begin{equation}
\label{2stratstructconst}
\begin{split}
C^l_{nm}=&\delta_{n+m}^l+\sum_{i=m+1}^{m+2} H^n_{m-i} \delta_i^l+\sum_{i=3}^{m-1} H^n_{m-i} \delta_i^l
+\sum_{i=n+1}^{n+2}H^m_{n-i} \delta_i^l+\sum_{i=3}^{n-1} H^m_{n-i} \delta_i^l +H^n_{-2}H^m_{-2}\delta_4^l\\
&+(H^n_{-1}H^m_{-2}+H^n_{-2}H^m_{-1})\delta_3^l+(H^n_m+H^m_n)\delta_0^l+\sum_{i=1}^2(H^n_{-i}H^m_i+H^m_{-i}H^n_i)
\delta_0^l, \qquad n,m=3,4,5,\dots\ .
\end{split}
\end{equation}
\end{lem}
An analog of Proposition \ref{prop1strat} is
\begin{prop}
\label{prop2strat}
The stratum $\Sigma_{1,2}$ contains the subset $W_{1,2}$ of codimension $2$ closed with respect to the pointwise multiplication $W_{1,2}(H) \cdot W_{1,2}(H) \subset W_{1,2}(H).$ Elements of $W_{1,2}$ are vector spaces with basis $\langle p_i \rangle_i$ with $H^i_k$ obeying the condition (\ref{alg2-comp}). This subset is the infinite family of infinite-dimensional associative algebras $A_{\Sigma_{1,2}}$ with the basis $(1,p_3,p_4,p_5,\dots)$ and the structure constants given by (\ref{2stratstructconst}).
\end{prop}
Analysis of the equations (\ref{2stratalg}), i.e. the equations
\begin{equation}
\begin{split}
{p_3}^2=&p_{{6}}+2\,H^3_{{-2}}p_{{5}}
+ \left( 2\,H^3_{{-1}}+{H^3_{{-2}}}^{2} \right) p_{{4}}+2\,H^3_{{-1}}H^3
_{{-2}}p_{{3}}+2\,H^3_{{2}}H^3_{{-2}}
+2\,H^3_{{1}}H^3_{{-1}}+2\,H^3_{{3}}, \\
p_{3}p_{4}=&p_{{7}}+H^3_{{-2}}p_{{6}}+ \left( H^4_{{-2}}
+H^3_{{-1}} \right) p_{{5}}+ \left( H^3_{{-2}}H^4_{{-2}
}+H^4_{{-1}} \right) p_{{4}}+ \left( H^3_{{1}}
+H^3_{{-1}}H^4_{{-2}}+H^3_{{-2}}H^4_{{-1}} \right) p_{{3}} \\&
+H^3_{{2}}H^4_{{-2}}+H^3_{{1}}H^4_{{-1}}+H^3_{{-1}}
H^4_{{1}}+H^3_{{-2}}H^4_{{2}}+H^3_{{4}}+H^4_{{3}},\\
\dots
\end{split}
\end{equation}
shows that $A_{\Sigma_{1,2}}$ for fixed $H^j_k$ is the polynomial algebra generated by the four elements $1,p_3,p_4,p_5$. \par
Analogously to $A_{\Sigma_1}$, these generators are not free and obey certain constraints. Considering equations (\ref{2stratalg}) with $n+m=8$, $n+m=9$, and $n+m=10$, one gets the following constraints
\begin{equation}
\label{2stratC8}
\begin{split}
\mathcal{C}_8=&p_3p_5-{p_4}^2-
H^3_{{-2}}p_3p_4
- \left( H^3_{{-1}}-2\,H^4_{{-2}}-{H^3_{{-2}}}^{2} \right) {p_3}^{2}\\
&- \left( H^5_{{-2}}-2\,H^4_{{-1}}+3\,H^3_{{-2}}H^4
_{{-2}}-3\,H^3_{{-2}}H^3_{{-1}}+2\,{H^3_{{-2}}}^{3}
\right) p_5\\
&- \left( H^3_{{-2}}{H^5}_{{-2}}+H^5_{{-1}}+H^3_{{1}}-{H^4_{{-2}}}^{2}+{
H^3_{{-2}}}^{2}H^4_{{-2}}-H^3_{{-2}}H^4_{{-1}}-2\,{
H^3_{{-1}}}^{2}+H^3_{{-1}}{H^3_{{-2}}}^{2}
\right.\\&\left.+4\,H^4_{{-2}}H^3_{{-1}}+{H^3_{{-2}}}^{4} \right) p_4- \left( H^3_{{-1}}H^5_{{-2}}+H^3_{{-2}}H^5_
{{-1}}+H^3_{{2}}-2\,H^4_{{1}}-2\,H^4_{{-1}}H^4_{{-
2}}-H^3_{{-2}}H^3_{{1}}\right.\\&\left.+3\,H^3_{{-2}}H^3_{{-1}}{
H^4}_{{-2}}-{H^3_{{-2}}}^{2}H^4_{{-1}}-2\,H^3_{{-2}}
{H^3_{{-1}}}^{2}+2\,{H^3_{{-2}}}^{3}H^3_{{-1}} \right)
p_3\\
&+2\,H^3_{{-1}}H^3_{{-2}}H^3_{{2}}+H^3_{{-2
}}H^3_{{1}}H^4_{{-1}}+H^3_{{-2}}H^3_{{-1}}H^4
_{{1}}-3\,H^3_{{-2}}H^3_{{2}}H^4_{{-2}}
-H^3_{{5}}-H^3_{{-1}}H^5_{{1}}-H^3_{{2}}H^5_{{-2}}\\&-H^3_{{1}}H^5_{{-1}}
+{H^3_{{-2}}}^{2}H^4_{{2}}+H^3_{{-2}}H^3_{{4}}+H^3_{{-2}}H^4_{{3}}
-H^5_{{3}}+2\,H^4_{{4}}-H^3_{{-2}}{
H^5}_{{2}}-2\,{H^3_{{-2}}}^{2}H^3_{{3}}+2\,H^4_{{2}}
H^4_{{-2}}\\&+2\,H^4_{{1}}H^4_{{-1}}+2\,H^3_{{1}}{{
H^3}_{{-1}}}^{2} +2\,H^3_{{3}}H^3_{{-1}}-4\,H^4_{{-2}
}H^3_{{3}}-2\,{H^3_{{-2}}}^{3}H^3_{{2}}-4\,H^4_{{-
2}}H^3_{{1}}H^3_{{-1}}-2\,{H^3_{{-2}}}^{2}H^3_{{1} }H^3_{{-1}}=0
\end{split}
\end{equation}
and
\begin{equation}
\label{2stratC9}
\begin{split}
\mathcal{C}_9=&p_3p_6-p_4p_5+\dots \\
=&{{p_3}}^{3}-{p_4}\,{p_5}-3\,H^3_{{-2}}{p_3}\,{p_5}-
\left( 3\,H^3_{{-1}}-H^4_{{-2}} \right) {p_3}\,{p_4}
- \left( {H^3_{{-2}}}^{3}-H^5_{{-2}}-H^4_{{-1}}+{H^3
}_{{-2}}H^4_{{-2}} \right) {{p_3}}^{2}\\
&- \left( 3\,H^3_{{1}}+3
\,H^3_{{-1}}{H^3_{{-2}}}^{2}-H^5_{{-1}}-H^3_{{-2}}
H^5_{{-2}}-2\,H^4_{{-2}}H^3_{{-1}}+{H^4_{{-2}}}^{2
}-2\,{H^3_{{-2}}}^{4}+2\,H^3_{{-2}}H^4_{{-1}}\right.\\&\left.-2\,{{
H^3}_{{-2}}}^{2}H^4_{{-2}} \right) {p_5} +N_4\, p_4+N_3\,
p_3+N_0=0\end{split}
\end{equation}
and
\begin{equation}
\label{2stratC10}
\begin{split}
\mathcal{C}_{10}=&{p_5}^2-p_4p_6+\dots \\=& {p_{{5}}}^{2}-p_{{4}}{p_{{3}}}^{2}+2\,H^3_{{-2}}{p_{{3}}}^{3}-
\left( -H^4_{{-2}}-2\,H^3_{{-1}}+5\,{H^3_{{-2}}}^{2}
\right) p_{{3}}p_{{5}}
\\&- \left( 2\,H^5_{{-2}}+6\,H^3_{{-1}}H^3_{{-2}}-H^4_{{-1}}-H^4_{{-2}}H^3_{{-2}}+{ H^3_{{-2}}}^{3} \right) p_{{3}}p_{{4}}
\\&- \left( 2\,H^5_{{-1}}-H^4_{{-1}}H^3_{{-2}}-2\,H^3_{{1}}+{H^3_{{-1}}}^{2}-
H^4_{{-2}}H^3_{{-1}}+{H^3_{{-2}}}^{2}H^3_{{-1}}+{
H^3_{{-2}}}^{4}-2\,H^3_{{-2}}H^5_{{-2}} \right) {p_{{3}}
}^{2}
\\&- \left( -H^4_{{1}}+H^4_{{-2}}H^4_{{-1}}-4\,H^3_{{-2}}H^5_{{-1}}+8\,H^3_{{1}}H^3_{{-2}}-H^5_{{
-2}}{H^3_{{-2}}}^{2}-H^5_{{-2}}H^4_{{-2}}-5\,H^4_{
{-2}}H^3_{{-1}}H^3_{{-2}}-2\,H^3_{{2}}\right.
\\&\left.-2\,{H^3_{{-2}}}^{5}-2\,{H^3_{{-1}}}^{2}H^3_{{-2}}+{H^4_{{-2}}}^{2}
H^3_{{-2}}-H^4_{{-2}}{H^3_{{-2}}}^{3}-H^4_{{-1}}
H^3_{{-1}}+H^4_{{-1}}{H^3_{{-2}}}^{2}+3\,{H^3_{{-2}
}}^{3}H^3_{{-1}} \right) p_{{5}}
\\& +B_4\, p_4+B_3\, p_3+B_0=0
\end{split}
\end{equation}
where the coefficients $N_i$ and $B_i$ are given in Appendix
\ref{App-2strat}. These three constraints are not independent
since
\begin{equation}
\begin{split}
&(p_{{3}}+2\,H^4_{-1}-H^5_{-2}-2\,{H^3_{-2}}^{3}-3\,H^3_{-2}H^4_{-2}+3\,H^3_{-1}H^3_{-2})\mathcal{C}_{10}\\
&+ (p_4-2\,H^3_{-2}p_{3}
+2\,H^5_{-1}+3\,{H^3_{-2}}^{4}-2\,H^3_{1}-2\,{H^3_{-1}}^{2}-3\,H^3_{-2}H^4_{-1}-2\,{H^4_{-2}}^{2}+6\,H^4_{-2}
H^3_{-1}\\
&+2\,H^3_{-2}H^5_{-2}+3\,H^4_{-2}{H^3_{-2}}^{2}-2\,H^3_{-1}{H^3_{-2}}^{2}
)\mathcal{C}_9 \\
&+(-p_5+(-3\,H^3_{-1}+H^4_{-2})p_3
+(H^4_{-2}H^5_{-2}+3\,H^3_{-2}H^5_{-1}-3\,{H^3_{-1}}^{2}H^3_{-2}+2\,{H^3_{-2}}^{3}H^3_{-1}
\\
&-2\,H^3_{-1}H^5_{-2}+{H^3_{-2}}^{5}+H^4_{1}-3\,H^3_{2}+5\,H^3_{-1}H^3_{-2}H^4_{-2}
+{H^3_{-2}}^{3}H^4_{-2}+2\,H^5_{-2}{H^3_{-2}}^{2}-H^4_{-1}H^4_{-2}\\
&-{H^3_{-2}}^{2}H^4_{-1}-H^3_{-2}{H^4_{-2}}^{2}-3\,H^3_{1}H^3_{-2}+H^4_{-1}H^3_{-1}))\mathcal{C}_8
=0 .
\end{split}
\end{equation}
There are infinitely many other constraints between $p_3,p_4$, and $p_5$. Two of them are given by
\begin{equation}
\begin{split}
\tilde{\mathcal{C}}_{10}=&p_3p_7-p_4p_6+\dots =-2H^3_{-2}\mathcal{C}_9+(2H^3_{-1}+{H^3_{-1}}^2)\mathcal{C}_8, \\
{\mathcal{C}}_{11}=&p_3p_8-p_5p_6+\dots
=2\,H^3_{-2}\mathcal{C}_{10}+ \left( -2H^3_{-1}-{H^3_{-2}}^{2}
\right) \mathcal{C}_9.
\end{split}
\end{equation}
An important constraint is given by
\begin{equation}
\label{2strattrigcurv}
\begin{split}
\mathcal{C}_{12}=& p_4p_8-{p_6}^2+\dots \\
=&\mathcal{F}^2_{34}={p_4}^3-{p_3}^4
+4\,H^3_{{-2}}{p_3}{p_4}^2
+\left(-3\,H^4_{{-2}}+4\,H^3_{{-1}}+2\,{H^3_{{-2}}}^{2}\right){p_3}^2p_4
+\left(-3\,H^4_{{-1}}-2\,H^3_{{-2}}H^4_{{-2}}\right){p_3}^3\\&
+Q_8 {p_{4}}^2 +Q_7p_4p_3+Q_6 {p_{3}}^2 +Q_4 p_4+Q_3p_3+Q_0=0.
\end{split}
\end{equation}
The coefficients $Q_i$ of this trigonal curve are given in Appendix
\ref{App-2strat}. One has
\begin{equation}
\begin{split}
\mathcal{F}^2_{34} =&(p_{{4}}+3\,H^3_{{-2}}p_{{3}}+3\,H^3_{{-1}}{H^3_{{-2}}}^{2}+3\,
H^3_{{1}}-H^5_{{-1}}-H^3_{{-2}}H^5_{{-2}}-2\,H^4_{{-2}}H^3_{{-1}}
+{H^4_{{-2}}}^{2}-2\,{H^3_{{-2}}}^{4}\\
&+2\,H^3_{{-2}}H^4_{{-1}}-2\,H^4_{{-2}}{H^3_{{-2}}}^{2})\mathcal{C}_8+
(-p_{{3}}-2\,H^4_{{-1}}+3\,H^3_{{-2}}H^4_{{-2}}+2\,{H^3_{{-2}}}^{3}
+H^5_{{-2}}-3\,H^3_{{-1}}H^3_{{-2}})\mathcal{C}_9.
\end{split}
\end{equation}
i.e.
$\mathcal{F}^2_{34}$ belongs to the ideal $\langle \mathcal{C}_8,\mathcal{C}_9 \rangle$. \par
Constraints (\ref{2stratC8}),(\ref{2stratC9}) and (\ref{2stratC10}) show that any element of the algebra $A_{\Sigma_{1,2}}$ can be represented in the form
\begin{equation}
\label{2stratpn}
p_n=a_n(p_3)+b_n(p_3)p_4+c_n(p_3)p_5, \qquad n=3,4,5,\dots
\end{equation}
where $a_n,b_n,$ and $c_n$ are polynomials.\par
This observation allows us to prove the following
\begin{prop}
\label{propAS2}
The algebra $A_{\Sigma_{1,2}}$ with fixed $H^j_k$ is equivalent to the polynomial algebra $\mathbb{C}[p_3,p_4,p_5] /
\langle \mathcal{C}_8,\mathcal{C}_9,\mathcal{C}_{10} \rangle $.
\end{prop}
The proof is based on the
\begin{lem}
For any constraint
\begin{equation}
f(p_3,p_4,p_5)=0
\end{equation}
arising from the system of equations (\ref{2stratalg}), the polynomial
$ f(p_3,p_4,p_5)$ belongs to the ideal generated by $\mathcal{C}_8$, $\mathcal{C}_9$, and $\mathcal{C}_{10}$.
\end{lem}
{\bf Proof} The proof is similar to that of Lemma \ref{lemC6}.
Indeed, assume that $f$ does not belong to the ideal $\langle
\mathcal{C}_8,\mathcal{C}_9,\mathcal{C}_{10} \rangle$. Then
\begin{equation}
f(p_3,p_4,p_5)=q_8(p_3,p_4,p_5)\mathcal{C}_8+q_9(p_3,p_4,p_5)\mathcal{C}_9+q_{10}(p_3,p_4,p_5)\mathcal{C}_{10}+R(p_3,p_4,p_5)
\end{equation}
where $q_8,q_9,q_{10}$ are some polynomials and $R(p_3,p_4,p_5)$ is not identically zero. Since
\begin{equation}
R(p_3,p_4,p_5)=f(p_3,p_4,p_5)\Big{|}_{\mathcal{C}_8=\mathcal{C}_9=\mathcal{C}_{10}=0}
\end{equation}
the remainder $R(p_3,p_4,p_5)$ has the form
\begin{equation}
R(p_3,p_4,p_5)=A(p_3)+B(p_3)p_4+C(p_3)p_5
\end{equation}
where $A,B$, and $C$ are certain polynomials in $p_3$. In view of (\ref{2stratC8})--(\ref{2stratC10}), our assumption is equivalent to the existence of $A,B$, and $C$, not all identically zero, such that
\begin{equation}
\label{2stratcondrest}
A(p_3)+B(p_3)p_4+C(p_3)p_5=0.
\end{equation}
Since the numbers $3n$, $3m+4$, and $3l+5$ never coincide for nonnegative integers $n,m,l$, counting the gradation (i.e.\ the powers of the Laurent series) shows that the three terms in (\ref{2stratcondrest}) always have different degrees. Consequently, equation (\ref{2stratcondrest}) has no nontrivial solutions.
$\square$\par
As in the previous section, one can treat $p_3,p_4,p_5,\dots$, for given $H^i_k$ and $z$, as local affine coordinates in $\Sigma_{1,2}$. Thus one has
\begin{prop}
\label{prop2tower}
The stratum $\Sigma_{1,2}$ contains an infinite family $\Gamma^2_{\infty}$ of infinite-dimensional algebraic varieties defined by the quadrics
\begin{equation}
\begin{split}
f_{nm}=&p_{n+m}+\sum_{i=m+1}^{m+2} H^n_{m-i} p_i+\sum_{i=3}^{m-1} H^n_{m-i} p_i
+\sum_{i=n+1}^{n+2}H^m_{n-i} p_i+\sum_{i=3}^{n-1} H^m_{n-i} p_i+H^n_m+H^m_n \\
&+H^n_{-2}H^m_{-2}p_4+(H^n_{-1}H^m_{-2}+H^n_{-2}H^m_{-1})p_3+\sum_{i=1}^2(H^n_{-i}H^m_i+H^m_{-i}H^n_i)=0, \qquad n,m=3,4,5,\dots
\end{split}
\end{equation}
and varying with parameters $H^n_m$ obeying the algebraic equation (\ref{alg2-comp}). The prime ideal $I(\Gamma_\infty^2)$ of $\Gamma_\infty^2$ is generated by $\mathcal{C}_8,\mathcal{C}_9,\mathcal{C}_{10}$ and
\begin{equation}
h^{(2)}_n = p_n -a_n(p_3)-b_n(p_3)p_4-c_n(p_3)p_5, \qquad n= 6,7,8,\dots\ .
\end{equation}
\end{prop}
{\bf Proof } The proof is based on the equivalence of the set of
equations (\ref{2stratalg}) to the equations
(\ref{2stratC8})--(\ref{2stratC10}) and (\ref{2stratpn}). The
constraint $\mathcal{C}_{10}=0$ is necessary to guarantee the
irreducibility of the varieties. For a discussion of this point in
the case of all $H^n_m=0$ see e.g. \cite{CLOS}. $\square$\par
The family $\Gamma^2_{\infty}$ of algebraic curves contains the plane
trigonal curve given by the equation $\mathcal{F}^2_{34}=0$; it
has genus two. The family $\Gamma^2_{\infty}$ also includes the
plane $(4,5)$ curve $\mathcal{C}_{20}=\mathcal{F}^2_{45}=0$, which is
too cumbersome to be presented here. In the three-dimensional
space with coordinates $p_3,p_4,p_5$, equations
(\ref{2stratC8})--(\ref{2stratC10}) define an irreducible algebraic
curve $\Gamma$, which is the intersection of well-known surfaces. For
instance, the surface defined by the equation
$\mathcal{C}_{10}=0$ is the celebrated Whitney umbrella (see e.g.
\cite{CLOS,Har}), while the equation $\mathcal{F}^2_{34}=0$
defines the cylindrical surface generated by the trigonal curve.
So the curve $\Gamma$ is the intersection of the Whitney umbrella
and the cylindrical surface generated by the trigonal curve, and one
expects that the curve $\Gamma$ has genus two.
\begin{prop}
Index$(\overline{\partial}_{W_{1,2}})=-2$.
\end{prop}
{\bf Proof }
$S_{\widetilde{W}_{1,2}}=\{-2,-1,1,2,3,\dots\}$, and hence card$\{S_{W_{1,2}}-\mathbb{N}\}=0$ and card$\{S_{\widetilde{W}_{1,2}}-\mathbb{N}\}=2$.
$\square$\par
We note that, as for the first stratum, for a generic curve $\Gamma$ one has
\begin{equation}
\label{indexconj}
\mathrm{genus}(\Gamma)+\mathrm{index}(\overline{\partial}_{W_{1,2}})=0.
\end{equation}
Now let us consider the stratum $\Sigma^*_2$ with
$S_{\Sigma^*_2}=\{-1,0,1,3,\dots\}$. It has codimension $3$,
i.e. one half of codim$(\Sigma_{1,2})$. On the other hand, a basis
in $\Sigma^*_2$ contains an element of first order. Hence in
$\Sigma^*_2$ there is the canonical basis given by formulae
(\ref{2stratLaurbas}) with $H^i_{-1}=0$, $i=3,4,5,\dots$. Thus, the
results formulated above in Lemmas (\ref{lem2stratalg}),
(\ref{lemC6}) and Propositions (\ref{prop2strat}), (\ref{propAS2}),
(\ref{prop2tower}) with $H^i_{-1}=0$, $i=3,4,5,\dots$ are valid
for the stratum $\Sigma^*_2$ too. In contrast, codim$(W^*_2)=1$
and index$(\overline{\partial}_{W^*_2})=$
card$(\varnothing)-$card$(\{-2\})=-1$.
\section{Higher strata. Plane $(n+1,n+2)$ curves}
For the higher strata all calculations and formulas become much
more involved. For the stratum $\Sigma_{1,2,3}$ with
$S=\{-3,-2,-1,0,4,5,6,\dots\}$ the canonical basis has the form
\begin{equation}
\label{3stratLaurser}
\begin{split}
p_0=&1+\sum_{k=1}^\infty\frac{H^0_k}{z^k}, \\
p_i=&z^i+\sum_{k=-3}^{-1}\frac{H^i_k}{z^k}+\sum_{k=1}^{\infty}\frac{H^i_k}{z^k}, \qquad i=4,5,6,\dots\ .
\end{split}
\end{equation}
Again only the elements of positive order may be involved in $W_{1,2,\dots,n}$.\par
The Laurent series (\ref{3stratLaurser}) obey the analogue of
equations (\ref{2stratalg}) if $H^0_k=0$ and $H^i_k,\
i=4,5,6,\dots$ satisfy a system of quadratic equations analogous to
(\ref{alg2-comp}). As a consequence, $\Sigma_{1,2,3}$ contains the
subspace $W_{1,2,3}$ closed with respect to multiplication,
$W_{1,2,3}\cdot W_{1,2,3} \subset W_{1,2,3}$. This subspace is the
infinite-dimensional algebra $A_{\Sigma_{1,2,3}}$ with the basis
$(1,p_4,p_5,p_6,\dots)$. The algebra $A_{\Sigma_{1,2,3}}$ is
generated by the four elements $p_4,p_5,p_6,p_7$. These elements are
not free and obey several constraints, which can be obtained
exactly in the same manner as for $\Sigma_{1,2}$. The corresponding
expressions are rather long. To give an idea of their form and
number, we present them in the case when all $H^i_k=0$, i.e.
$p_i=z^i$. Since $p_ip_j=p_{i+j},$ $i,j=4,5,6,\dots$, the simplest
constraints are
\begin{equation}
\label{3stratcurv}
\begin{split}
& C_{10}={p_5}^2-p_4 p_6, \quad C_{11}={p_5}p_6-p_4 p_7, \quad C_{12}={p_6}^2-{p_4}^3, \quad \widetilde{C}_{12}=p_7 p_5-{p_4}^3, \\
& C_{13}={p_6}p_7-{p_4}^2 p_5, \quad C_{14}={p_7}^2-{p_4}^2p_6,
\quad \widetilde{C}_{14}={p_7}^2-{p_5}^2p_4.
\end{split}
\end{equation}
The first three of the constraints (\ref{3stratcurv}) are independent. The others are not, since
\begin{equation}
\begin{split}
p_4 \widetilde{C}_{12} =&p_4 C_{12}+p_6C_{10}-p_5C_{11}, \\
p_4 {C}_{13} =&p_5 C_{12}-p_6C_{11}, \\
p_4C_{14}=& -p_7C_{11}+p_6\widetilde{C}_{12}, \qquad {p_4}^{2}C_{14}={p_6}^2C_{10}-(p_4p_7+p_5p_6)C_{11}+p_4p_6C_{12}, \\
\widetilde{C}_{14}=&C_{14}-p_4C_{10}.
\end{split}
\end{equation}
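These dependence relations can be checked symbolically; the following minimal sketch (in Python with SymPy, treating $p_4,\dots,p_7$ as free commuting variables) verifies them, with $C_{12}={p_6}^2-{p_4}^3$ and the factor ${p_4}^{2}$ in the third relation, as degree counting with $p_i=z^i$ requires.

```python
import sympy as sp

p4, p5, p6, p7, z = sp.symbols('p4 p5 p6 p7 z')

C10  = p5**2 - p4*p6
C11  = p5*p6 - p4*p7
C12  = p6**2 - p4**3
C12t = p7*p5 - p4**3          # \tilde{C}_{12}
C13  = p6*p7 - p4**2*p5
C14  = p7**2 - p4**2*p6
C14t = p7**2 - p5**2*p4       # \tilde{C}_{14}

# each dependence relation vanishes identically in p4,...,p7
rels = [
    p4*C12t - (p4*C12 + p6*C10 - p5*C11),
    p4*C13 - (p5*C12 - p6*C11),
    p4*C14 - (-p7*C11 + p6*C12t),
    p4**2*C14 - (p6**2*C10 - (p4*p7 + p5*p6)*C11 + p4*p6*C12),
    C14t - (C14 - p4*C10),
]
assert all(sp.expand(r) == 0 for r in rels)

# for p_i = z^i all constraints vanish, and the (4,5) relation
# C20 = p4^5 - p5^4 = 0 holds as well
sub = {p4: z**4, p5: z**5, p6: z**6, p7: z**7}
assert all(c.subs(sub).expand() == 0
           for c in (C10, C11, C12, C12t, C13, C14, C14t))
assert sp.expand((p4**5 - p5**4).subs(sub)) == 0
```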
It is also easy to show that the constraints (\ref{3stratcurv})
imply that
\begin{equation}
\label{3stratC20}
C_{20}={p_4}^5-{p_5}^4=0.
\end{equation}
Constraints (\ref{3stratcurv}) imply that the general element of the algebra $A_{\Sigma_{1,2,3}}$ in this case has the form
\begin{equation}
p_k=A_k(p_4)+B_k(p_4)p_5+C_k(p_4)p_6+D_k(p_4)p_7
\end{equation}
where $A_k,B_k,C_k,D_k$ are certain polynomials. This observation allows us to prove that for any constraint $f(p_4,p_5,p_6,p_7)=0$ arising from the equations $p_ip_j=p_{i+j}$ the polynomial $f(p_4,p_5,p_6,p_7)$ belongs to the ideal generated by $C_{10},C_{11}$, and $C_{12}$. Indeed, $f(p_4,p_5,p_6,p_7)|_{C_{10}=C_{11}=C_{12}=0} = A(p_4)+B(p_4)p_5+C(p_4)p_6+D(p_4)p_7$ with some polynomials $A,B,C,D$, and the r.h.s. of this formula is identically zero due to the fact that the integers $4n$, $4m+5$, $4l+6$, $4k+7$ are always distinct.\par
So the algebra $A_{\Sigma_{1,2,3}}(H^i_k=0)$ is equivalent to the polynomial algebra $\mathbb{C}[p_4,p_5,p_6,p_7]/ \langle C_{10},C_{11},C_{12} \rangle$. Geometrically the subspace $W_{1,2,3}$ is the infinite family of the algebraic varieties with the $(4,5)$ curve (\ref{3stratC20}) in the basis.\par
Taking into account these observations, it is natural to conjecture that in the general case $H^i_k\neq 0$ one has similar results for the algebra $A_{\Sigma_j}$ and the affine algebraic variety.\par
In the general case one has
\begin{prop}
Index$(\overline{\partial}_{W_{1,2,3}})=-3$.
\end{prop}
{\bf Proof }
$S_{W_{1,2,3}}-\mathbb{N}=\varnothing$, $S_{\widetilde{W}_{1,2,3}}-\mathbb{N}=\{-3,-2,-1\}$. $\square$\par
This result suggests the conjecture that the curve $\mathcal{F}^3_{45}=0$ and the basic curve $\Gamma$ in the four-dimensional space with the coordinates $(p_4,p_5,p_6,p_7)$ defined by the equations $C_{10}=C_{11}=C_{12}=0$ have genus $3$.\par
For the $n$-th stratum $\Sigma_{1,2,\dots,n}$ associated with the set $S=\{-n,-n+1,\dots,-1,0,n+1,n+2,\dots \}$ the closed subspace $W_{1,2,\dots,n}$ ($W_{1,2,\dots,n}\cdot W_{1,2,\dots,n} \subset W_{1,2,\dots,n}$) has the basis $(1,p_{n+1},p_{n+2},\dots)$ with
\begin{equation}
p_i=z^i+\sum_{k=1}^iH^i_{-k}z^k+\sum_{k=1}^\infty \frac{H^i_{k}}{z^k}
\end{equation}
and $H^i_j$ obeying a system of quadratic algebraic equations analogous to (\ref{alg2-comp}). Algebraically, $W_{1,2,\dots,n}$ is an infinite family of infinite-dimensional algebras generated by the $n+1$ elements $(p_{n+1},p_{n+2},\dots,p_{2n+1})$ modulo the $n$ independent constraints
\begin{equation}
\label{generalC}
\begin{split}
C_{2n+4}=& p_{n+1}p_{n+3}-{p_{n+2}}^2+\dots=0,\\
C_{2n+5}=& p_{n+1}p_{n+4}-{p_{n+2}}p_{n+3}+\dots=0,\\
\dots&\\
C_{3n+3}=& p_{n+1}p_{2n+2}-{p_{n+2}}p_{2n+1}+\dots=0.
\end{split}
\end{equation}
These constraints imply that any element of the algebra $A_{\Sigma_{1,2,\dots,n}}$ can be represented as
\begin{equation}
p_m=\alpha_{m0}(p_{n+1})+\sum_{k=n+2}^{2n+1}\alpha_{mk}(p_{n+1})p_k, \qquad m=n+1,n+2,\dots
\end{equation}
where $\alpha_{mk}$ are certain polynomials.
Geometrically, $W_{1,2,\dots,n}$ is an infinite family of infinite-dimensional algebraic varieties varying with the parameters $H^j_k$ ($j=n+1,n+2,\dots,\ k=1,2,\dots$). In the base of this family there is an algebraic curve in the $(n+1)$-dimensional subspace with coordinates $(p_{n+1},p_{n+2},\dots,p_{2n+1})$ defined by the $n$ constraints mentioned above. The ideal of this curve contains the element
\begin{equation}
\label{n+1n+2curve}
\mathcal{F}^n_{n+1,n+2}={p_{n+1}}^{n+2}-{p_{n+2}}^{n+1}+\dots
\end{equation}
which defines an $(n+1,n+2)$ curve in the two-dimensional subspace
with coordinates $(p_{n+1},p_{n+2})$.\par These statements are
easily provable in the trivial case when $p_i=z^i$. The general case
will be considered elsewhere. In the general case one has
\begin{prop}
Index$(\overline{\partial}_{W_{1,2,\dots,n}})=-n$.
\end{prop}
The proof is straightforward.\par
This observation and the results obtained in the previous section suggest the following
\begin{conj}
\label{Theconjecture}
The stratum $\Sigma_{1,2,\dots,n}$ contains the subset $W_{1,2,\dots,n}$ of codimension $n$ closed with respect to pointwise multiplication. Algebraically $W_{1,2,\dots,n}$ is the infinite family of polynomial algebras equivalent to the coordinate ring
\begin{equation}
\label{nstrat-conj}
\mathbb{C}[p_{n+1},p_{n+2},p_{n+3},\dots,p_{2n+1}]/\langle C_{2n+4},C_{2n+5},
\dots, C_{3n+3}\rangle
\end{equation}
with $C_{2n+4},C_{2n+5},\dots,C_{3n+3}$ given by
(\ref{generalC}). Geometrically, $W_{1,2,\dots,n}$ is an infinite
family of algebraic curves with the basic algebraic curve $\Gamma$
in the $(n+1)$-dimensional subspace with the local affine
coordinates ($p_{n+1}$, $p_{n+2}$, $\dots$, $p_{2n+1}$) defined by
equations (\ref{generalC}). At fixed $H^j_k$, $W_{1,2,\dots,n}$ contains the plane
$(n+1,n+2)$ curves given by the equation
$\mathcal{F}^n_{n+1,n+2}=0$ of genus $n$. The curve $\Gamma$ has
genus $n$ too. Moreover,
Index$(\overline{\partial}_{W_{1,2,\dots,n}})=-n$.
\end{conj}
\par We will analyze this conjecture and study other Birkhoff strata in a separate paper.
\section{Birkhoff strata for Gr$^{(2)}$ and hyperelliptic curves}
\label{sect-Gr2} The universal Sato Grassmannian Gr contains various
special Grassmannians. The best known and most studied are the
Grassmannians Gr$^{(n)}$ of all subspaces $W$ of the space of
formal Laurent series obeying the constraint $z^n \cdot W \subset
W$ \cite{PS,SW}. For such Grassmannians Gr$^{(n)}$ the analysis
performed in the previous sections simplifies
drastically.\par Here we consider the simplest case, the
Grassmannian Gr$^{(2)}$. The condition
\begin{equation}
\label{Z2Ww}
z^2 \cdot W \subset W
\end{equation}
imposes severe constraints on the elements $p_i$ of the basis and
on the structure of the Birkhoff strata. The Birkhoff strata $\Sigma_S$ in
Gr$^{(2)}$ are associated with the sets $S$ with $S+2 \subset S$, i.e. all
of them have the form \cite{PS,SW}
\begin{equation}
S_m=\{-m,-m+2,-m+4,\dots,m,m+1,m+2,\dots \}
\end{equation}
with $m=0,1,2,\dots$. The codimension of the stratum $\Sigma_{S_m}$ is equal to $\frac{m(m+1)}{2}$. As in the general case, we consider only the strata for which $0 \in S_m$, i.e. the strata $\Sigma_{S_{2n}}$. In these cases
\begin{equation}
\label{S2n-genGr2}
S_{2n}=\{-2n,-2n+2,\dots,0,2,4,\dots,2n,2n+1,2n+2,\dots \}
\end{equation}
with $n=0,1,2,\dots$. So, the basis in the stratum $\Sigma_{S_{2n}}$ does not contain elements $p_1,p_3,\dots,p_{2n-1}$.\par
For the big cell in Gr$^{(2)}$ the canonical basis is given by (\ref{bigcellbasis}). One has
\begin{lem}
Laurent series (\ref{bigcellbasis}) obey the equations (\ref{bigcellalg}) and condition (\ref{Z2Ww}) iff
\begin{equation}
\begin{split}
H^{2n}_i=&0,\qquad i=1,2,3,\dots, \quad n=0,1,2,\dots \\
H^{2n+1}_{2m}=&0, \qquad n,m=1,2,3 \dots
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
p_{2n}p_{2m}=&p_{2(n+m)}, \qquad n,m=0,1,2,\dots \\
p_{2n+1}p_{2m}=&p_{2(n+m)+1}+\sum_{i=1}^{m-1}H^{2n+1}_{2i+1} p_{2(m-i)-1}, \qquad n,m=0,1,2,\dots \\
p_{2n+1}p_{2m+1}=& p_{2(n+m+1)}+\sum_{i=1}^n
H^{2m+1}_{2i+1}p_{2(n-i)}+\sum_{i=1}^m H^{2n+1}_{2i+1}p_{2(m-i)},
\end{split}
\end{equation}
i.e. $p_{2n}=z^{2n}$, $n=0,1,2,3,\dots$.
\end{lem}
These properties are obviously those given in
(\ref{bigcellalg}), (\ref{bigcellstructcoeff}) under the reduction
$p_2=z^2$. Consequently, for the big cell $\Sigma^{(2)}_0$ in
Gr$^{(2)}$ one has a reduced version of Proposition
\ref{W0W0CW0}. A version of the relation (\ref{bigcellcurr}) is
\begin{equation}
\begin{split}
z^2=&{p_1}^2-2H^1_1, \\
p_3=& {p_1}^3-3H^1_1p_1,\\
z^4=& {p_1}^4-4H^1_1{p_1}^2+4{H^1_1}^2, \\
p_5=& {p_1}^5-5H^1_1{p_1}^3+\frac{15}{2}{H^1_1}^2p_1
\end{split}
\end{equation}
or equivalently
\begin{equation}
\label{lambda-bigcell}
\lambda={p_1}^2-2H^1_1, \qquad p_{2n+1}=\alpha_n(\lambda)p_1
\end{equation}
where $\lambda=z^2$ and $\alpha_n(\lambda)=\lambda^n+\sum_{k=0}^{n-1}\beta_k\lambda^k$ are certain polynomials.
\begin{prop}
The big cell contains an infinite family of normal rational curves associated with ideal
\begin{equation}
I^{(2)}_0=\langle\lambda-{p_1}^2+2H^1_1,\, l_1^{(2)} ,\, l_2^{(2)}, \dots \rangle
\end{equation}
where $l_n^{(2)}=p_{2n+1}-\alpha_{n}(\lambda)p_1$.
This family includes an infinite family of singular hyperelliptic curves of genus zero given by
\begin{equation}
\label{FnGr2-bigcell}
\mathcal{G}_n^{(2)}={p_{2n+1}}^2-(\lambda+2H^1_1)\alpha_n^2(\lambda)=0, \qquad n=1,2,3,\dots\ .
\end{equation}
\end{prop}
{\bf Proof } The proof is the same as for Proposition \ref{towerbigcell}.
Formula (\ref{FnGr2-bigcell}) is an immediate consequence of (\ref{lambda-bigcell}). $\square$\par
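For illustration, the lowest member $n=1$ of the family (\ref{FnGr2-bigcell}) can be checked directly: from $p_3={p_1}^3-3H^1_1p_1$ and $\lambda={p_1}^2-2H^1_1$ one finds $\alpha_1(\lambda)=\lambda-H^1_1$, and a short SymPy sketch (with $H$ standing for $H^1_1$) confirms $\mathcal{G}^{(2)}_1=0$.

```python
import sympy as sp

p1, H = sp.symbols('p1 H')     # H stands for H^1_1
lam = p1**2 - 2*H              # lambda = z^2 expressed through p1
p3 = p1**3 - 3*H*p1
alpha1 = lam - H               # alpha_1(lambda)

assert sp.expand(p3 - alpha1*p1) == 0                 # p3 = alpha_1(lambda) p1
assert sp.expand(p3**2 - (lam + 2*H)*alpha1**2) == 0  # G_1^(2) = 0
```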
For the first stratum $\Sigma^{(2)}_2$ the results formulated in Section \ref{firststrat} remain valid with the additional constraint $p_2=z^2$. So one has $p_{2n}=(z^2)^n$ and
\begin{equation}
p_{2n+1}=z^{2n+1}+H^{2n+1}_{-1}z+\sum_{m=1}^\infty \frac{H^{2n+1}_{2m+1}}{z^{2m+1}}, \qquad n=1,2,3,\dots\ .
\end{equation}
One also has
\begin{equation}
\label{p=p3}
p_{2n+1}=\beta_n(\lambda)p_3, \qquad n=2,3,\dots
\end{equation}
where $\beta_n(\lambda)$ are certain polynomials of order $n-1$. As an analogue of Proposition \ref{tower-1strat} one has
\begin{prop}
The stratum $\Sigma^{(2)}_2$ contains an infinite family of algebraic varieties associated with the ideal
\begin{equation}
I^{(2)}=\langle C_6^{(2)},\ l_5^{(2)},\ l_7^{(2)}, \dots \rangle
\end{equation}
where
\begin{equation}
\label{2C6}
C_6^{(2)}={p_3}^2-\left(\lambda^3 +2H^3_{-1} \lambda^2 + \left( {H^3_{-1}}^2+2H^3_1\right) \lambda + 2H^3_3 +2H^3_{-1}H^3_1\right)
\end{equation}
and
\begin{equation}
l_{2n+1}^{(2)}=p_{2n+1}-\beta_n(\lambda)p_3, \qquad \beta_n(\lambda) \in \mathbb{C}[\lambda], \quad n=2,3,4,\dots\ .
\end{equation}
This family contains the elliptic curve $C_6^{(2)}=0$ of genus one in the base and an infinite family of genus one hyperelliptic curves
\begin{equation}
\label{p=C-1Gr2}
{p_{2n+1}}^2=\beta_n^2(\lambda) \left( \lambda^3 +2H^3_{-1} \lambda^2 + \left( {H^3_{-1}}^2+2H^3_1\right) \lambda + 2H^3_3 +2H^3_{-1}H^3_1 \right), \qquad n=2,3,4,\dots\ .
\end{equation}
\end{prop}
{\bf Proof } The proof again is the same as for Proposition \ref{tower-1strat}, while (\ref{p=C-1Gr2}) is a consequence of
(\ref{p=p3}) and (\ref{2C6}). $\square$ \par
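The coefficients in (\ref{2C6}) can be reproduced directly from the truncated Laurent series $p_3=z^3+H^3_{-1}z+H^3_1z^{-1}+H^3_3z^{-3}$; the higher coefficients $H^3_5,H^3_7,\dots$ contribute only to the Laurent tail. A short SymPy sketch (with $a,b,c$ standing for $H^3_{-1},H^3_1,H^3_3$):

```python
import sympy as sp

z = sp.symbols('z')
a, b, c = sp.symbols('a b c')        # a = H^3_{-1}, b = H^3_1, c = H^3_3
p3 = z**3 + a*z + b/z + c/z**3       # truncation of the Laurent series
lam = z**2
curve = lam**3 + 2*a*lam**2 + (a**2 + 2*b)*lam + 2*c + 2*a*b
tail = sp.expand(p3**2 - curve)      # what remains must be a pure Laurent tail
cleared = sp.expand(tail * z**6)
# only negative powers of z survive in p3^2 - curve
assert sp.degree(cleared, gen=z) < 6
```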
In the general case of the stratum $\Sigma^{(2)}_{2n}$ the positive order elements of the canonical basis are
\begin{equation}
\label{pn-nGr2}
\begin{split}
p_0=&1+\sum_{k=1}^\infty\frac{H^0_k}{z^k},\\
p_{2m}=&z^{2m}p_0, \qquad m=1,2,3,\dots \\
p_{2m+1}=&z^{2m+1}+\sum_{k=1}^{2n-1} H^{2m+1}_{-k} z^k+\sum_{k=1}^\infty\frac{H^{2m+1}_k}{z^k}, \qquad m=n,n+1,n+2,\dots\ .
\end{split}
\end{equation}
\begin{lem}
Laurent series (\ref{pn-nGr2}) satisfy the equations
\begin{equation}
p_i(z)p_j(z)=\sum_lC^l_{ij}p_l(z)
\end{equation}
iff
\begin{equation}
\label{evenH2Gr}
\begin{split}
H^0_k=&0, \qquad k=1,2,3,\dots \\
H^{2m+1}_{2l}=&0, \qquad m=n,n+1,n+2,\dots, \quad l=-2n,-2n-2,-2n-4,\dots \\
\end{split}
\end{equation}
and
\begin{equation}
\label{oddH2Gr}
\begin{split}
\sum_{a=0}^r H^{2i+1}_{2a+1} H^{2j+1}_{{2(r-a)}}=&H^{2(i+j+1)}_{2r+1}+ \sum_{q=-n-1}^jH^{2i+1}_{2q+1}H^{2(j-q)}_{2r+1}+ \sum_{q=-n-1}^jH^{2j+1}_{2q+1}H^{2(i-q)}_{2r+1}\\&+
\sum_{\substack{s,q=-n-1 \\ s+q \geq 1}}^\infty
H^{2i+1}_{2s+1}H^{2j+1}_{2q+1}H^{2(s+q+1)}_{2r+1}.
\end{split}
\end{equation}
\end{lem}
{\bf Proof } The proof is based on the following relations
\begin{equation}
\label{pp-genGr2}
\begin{split}
p_{2j} p_{2k} =& p_{2(j+k)}, \\
p_{2k} p_{2j+1} =& p_{2(j+k)+1} +\sum_{s=-n}^{k-1}H^{2j+1}_{2s+1}p_{2(k-s)-1},\\
p_{2j+1} p_{2k+1} =& p_{2(j+k+1)}
+\sum_{s=-n}^j H^{2k+1}_{2s+1}p_{2(j-s)}
+\sum_{s=-n}^k H^{2j+1}_{2s+1}p_{2(k-s)} \\
&+\sum_{s=-n}^{-1}\sum_{r=-n}^{-1}H^{2j+1}_{2s+1}H^{2k+1}_{2r+1}
p_{-2(s+r+1)}+\sum_{s=-n}^{-1}\sum_{r=0}^{-s-1}H^{2j+1}_{2s+1}H^{2k+1}_{2r+1}p_{-2(s+r+1)}\\
&+\sum_{r=-n}^{-1}\sum_{s=0}^{-r-1}H^{2j+1}_{2s+1}H^{2k+1}_{2r+1}p_{-2(s+r+1)}
\end{split}
\end{equation}
In particular these relations imply that $p_{2m}={p_2}^m$ and
\begin{equation}
\label{pm=pn-Gr2}
p_{2m+1}=\alpha_m(\lambda) p_{2n+1}, \qquad m=n+1,n+2,n+3,\dots
\end{equation}
where $\alpha_m(\lambda)$ are certain polynomials of order $m-n$. $\square$ \par
Consequently one has
\begin{prop}
The stratum $\Sigma^{(2)}_{2n}$ contains the subset $W^{(2)}_{2n}$ closed with respect to pointwise multiplication, $W^{(2)}_{2n}(H) \cdot W^{(2)}_{2n}(H) \subset W^{(2)}_{2n}(H)$. Elements of $W^{(2)}_{2n}$ are vector spaces with the basis $\langle \lambda,\lambda^2,\dots,p_{2n+1},p_{2n+3},\dots\rangle$ with parameters $H^{2m+1}_k$ obeying the conditions (\ref{evenH2Gr}) and (\ref{oddH2Gr}).
\end{prop}
\begin{prop}
The subset $W^{(2)}_{2n}$ is an infinite family of infinite-dimensional associative algebras $A_{W^{(2)}_{2n}}$ with the basis $z^{2m}$ ($m=0,1,2,\dots$), $p_{2n+1},p_{2n+3},\dots$, and $A_{W^{(2)}_{2n}}=\mathbb{C}[z^2,p_{2n+1}]/\langle \mathcal{G}_{2,2n}^{(2)}\rangle$
where
\begin{equation}
\label{2F22n-Gr2}
\mathcal{G}_{2,2n}^{(2)}={p_{2n+1}}^2-\lambda^{2n+1}-\sum_{m=0}^{2n}u_m\lambda^m
\end{equation}
where $\lambda=z^2$ and $u_m$ are certain polynomials in $H^{2n+1}_{2k+1}$, $k=-n,-n+1,\dots, 0,1,2,\dots$\ .
\end{prop}
{\bf Proof} The proof is based on the relation (\ref{pm=pn-Gr2}). The relation (\ref{2F22n-Gr2}) follows from (\ref{pp-genGr2}) at $j=k=n$, ${p_{2n+1}}^2=p_{2(2n+1)}+u_{2n}p_{2(2n)}+\dots$. The fact that all other constraints
\begin{equation}
\label{p2-genGr2}
{p_{2l+1}}^{2}=\sum_{k=0}^{2l+1}v_k\lambda^k, \qquad l=n+1,n+2,\dots
\end{equation}
are satisfied due to the constraint (\ref{2F22n-Gr2}) is a consequence of the divisibility of all the polynomials $\sum_{k=0}^{2l+1}v_k\lambda^k$ given in (\ref{p2-genGr2}) by the polynomial $\sum_{k=0}^{2n+1}u_k\lambda^k$ given in
(\ref{2F22n-Gr2}), namely
\begin{equation}
\label{u=v-genGr2}
\sum_{k=0}^{2l+1}v_k\lambda^k={\alpha_l}^2(\lambda)\ \sum_{k=0}^{2n+1}u_k\lambda^k.
\end{equation}
$\square$ \par
Interpreting $z,p_{2n+1},p_{2n+3},\dots$ as local affine coordinates, one has
\begin{prop}
\label{2strat-tower}
The subset $W^{(2)}_{2n}$ is an infinite family $\Gamma^{(2)}_{\infty}$ of algebraic varieties corresponding to the ideal $I^{(2)}$ generated by $\mathcal{G}_{2,2n}^{(2)}$ and $l_{m}^{(2)}=p_{2m+1}-\alpha_m(\lambda)p_{2n+1}$, $m=n+1,n+2,\dots$ .
It contains the hyperelliptic curve (\ref{2F22n-Gr2}) of genus $n$ in the base and an infinite tower of hyperelliptic curves given by (\ref{p2-genGr2}) of orders $2(n+1)+1, 2(n+2)+1, \dots$ and genus $n$. Codim$(W^{(2)}_{2n})=n$ and Index$(\overline{\partial}_{W^{(2)}_{2n}})=-n$.
\end{prop}
{\bf Proof } The hyperelliptic curves given by (\ref{p2-genGr2}) have genus $n$ due to (\ref{u=v-genGr2}). Due to (\ref{S2n-genGr2}), $S_{\widetilde{W}_{2n}}=\{ -2n+1,-2n+3,\dots,-3,-1,\dots,2n+1,2n+2,\dots \}$ and Index$(\overline{\partial}_{W^{(2)}_{2n}})=$card$(\varnothing)-$card$(\{ -2n+1,-2n+3,\dots,-3,-1\})=-n$.
$\square$ \par
Part of the results presented here were obtained earlier in a different way in the paper \cite{KK}. \par
Thus, the stratum $\Sigma_{2n}$ is characterized by the presence of families of plane hyperelliptic curves $C_{2n+1}$ of genus $n$ in the closed subset $W_{2n}$. This is due to the presence of $n$ gaps (the elements $p_1(z)$, $p_3(z), \dots$, $p_{2n-1}(z)$) in the basis of $W_{2n}$. The fact that hyperelliptic curves (Riemann surfaces) of genus $n$ have $n$ (Weierstrass) gaps at a generic point is well known in the theory of abelian functions (see e.g. \cite{Baker}). A probably less known observation is that these gaps, and consequently the properties of the corresponding algebraic curves, are prescribed by the structure of the Birkhoff strata $\Sigma_{2n}$ in Gr$^{(2)}$.\par
Stratification of Grassmannians Gr$^{(n)}$ and associated algebraic varieties and curves will be studied elsewhere.
\section{Resolution of singularities and transitions between strata}
\label{sect-desing} In the previous sections it was shown that each
Birkhoff stratum contains infinite towers of families of algebraic
curves. Generically these curves are regular. On the other
hand, it was also noted that the projections of these regular curves
on lower-dimensional subspaces in the same stratum are
higher-order singular curves, which appear in the higher strata as
regular curves. This observation clearly indicates that there is an
intimate interconnection between the curves of the same type in
different strata. It also suggests adopting a wider approach in
analyzing the possible mechanisms of resolution of singularities of
such curves.\par Let us begin with the simplest example of the
twisted cubic in the big cell, defined by the equations
\begin{equation}
\label{bigcellparam}
\begin{split}
q_2=&{q_1}^2-2H^1_1, \\
q_3=&{q_1}^3-3H^1_1q_1-3H^1_2.
\end{split}
\end{equation}
To avoid confusion, we denote here the coordinates in $\Sigma_\varnothing$ by $(q_1,q_2,q_3)$.
Its general projection on the two-dimensional subspace of
$\Sigma_\varnothing$ with coordinates $(k_2,k_3)$ is given by the
plane cubic
\begin{equation}
\label{degEllzeros}
\left( k_{{3}}+\frac{3}{2}\,\alpha\,k_{{2}}+3\,H^1_{{2}}
+\frac{3}{2}\,H^1_{{1}}\alpha+\frac{1}{2}\,{\alpha}^{3}+\frac{1}{2}\,\beta\,\alpha \right)^{2}
=\left( k_{{2}}+2\,H^1_{{1}}+\frac{1}{4}\,{\alpha}^{2} \right) \left( k_{{2}}-H^1_{{1}}+
\beta+{\alpha}^{2} \right)^{2}
\end{equation}
The nodal cubic (\ref{degEllzeros}) has the polynomial parameterization
\begin{equation}
\begin{split}
\label{k2k3}
k_2=&{k_1}^2+\alpha k_1-2H^1_1, \\
k_3=&{k_1}^3+(\beta-3H^1_1) k_1-3H^1_2.
\end{split}
\end{equation}
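That the parameterization (\ref{k2k3}) satisfies the cubic (\ref{degEllzeros}) identically in $k_1$, $\alpha$, $\beta$ can be checked by a short SymPy computation (with H1, H2 standing for $H^1_1$, $H^1_2$):

```python
import sympy as sp

k1, al, be, H1, H2 = sp.symbols('k1 alpha beta H1 H2')
k2 = k1**2 + al*k1 - 2*H1                      # parameterization of k2
k3 = k1**3 + (be - 3*H1)*k1 - 3*H2             # parameterization of k3

# left- and right-hand sides of the plane cubic
lhs = (k3 + sp.Rational(3, 2)*al*k2 + 3*H2
       + sp.Rational(3, 2)*H1*al + al**3/2 + be*al/2)**2
rhs = (k2 + 2*H1 + al**2/4)*(k2 - H1 + be + al**2)**2
assert sp.expand(lhs - rhs) == 0
```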
It has an ordinary double point at
$k_{{2}}=H^1_{{1}}-\beta-{\alpha}^{2}$,
$k_3=-3\,H^1_{{2}}-{3}\,H^1_{{1}}\alpha+{\alpha}^{3}+\,\beta\,\alpha
$ and genus zero.\par A standard way to resolve this singularity
is to blow it up by a quadratic transformation (see e.g.
\cite{Har,Wal}). For simplicity we consider the case
$\alpha=\beta=0$. An appropriate quadratic transformation in this
case is of the form
\begin{equation}
\label{00blowup}
k_3=\tilde{k}\left(k_2-H^1_1\right)-3H^1_2.
\end{equation}
By virtue of equation (\ref{degEllzeros}) with $\alpha=\beta=0$, the new variable $\tilde{k}$ also obeys the equation
\begin{equation}
\label{newtildeq00}
\tilde{k}^2-\left(k_2+2H^1_1\right)=0.
\end{equation}
The system of equations (\ref{00blowup}) and (\ref{newtildeq00}) defines the curve in the three-dimensional space $(\tilde{k},k_2,k_3)$. This system is equivalent to the system
\begin{equation}
\label{equiv00sys}
\begin{split}
k_2-H^1_1=&\tilde{k}^2-3H^1_1, \\
k_3+3H^1_2=&\tilde{k}^3-3H^1_1\tilde{k}.
\end{split}
\end{equation}
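A short SymPy sketch confirming this equivalence for $\alpha=\beta=0$ (with H1, H2 standing for $H^1_1$, $H^1_2$): eliminating $k_2$ and $k_3$ through (\ref{newtildeq00}) and (\ref{00blowup}) reproduces the twisted-cubic parameterization and satisfies the nodal cubic identically.

```python
import sympy as sp

kt, H1, H2 = sp.symbols('kt H1 H2')   # kt is the blow-up variable \tilde{k}
k2 = kt**2 - 2*H1                      # from kt^2 - (k2 + 2 H1) = 0
k3 = kt*(k2 - H1) - 3*H2               # the quadratic transformation

# the twisted-cubic parameterization is reproduced
assert sp.expand(k3 - (kt**3 - 3*H1*kt - 3*H2)) == 0
# and the nodal cubic with alpha = beta = 0 is satisfied identically
assert sp.expand((k3 + 3*H2)**2 - (k2 + 2*H1)*(k2 - H1)**2) == 0
```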
So the two points $\left(\sqrt{3H^1_1},H^1_1,-3H^1_2\right)$ and
$\left(-\sqrt{3H^1_1},H^1_1,-3H^1_2\right)$ on the three-dimensional
curve defined by (\ref{equiv00sys}) correspond to the
ordinary double point $\left(H^1_1,-3H^1_2\right)$ of the plane
curve (\ref{degEllzeros}). Moreover, comparing (\ref{equiv00sys})
and (\ref{bigcellparam}), one concludes that the three-dimensional
curve is nothing but the twisted cubic (\ref{bigcellparam}).
The twisted cubic is regular. So the transformation (\ref{00blowup})
represents a resolution of singularity (blowing-up) of the curve
(\ref{degEllzeros}). This observation is very natural and almost
trivial, since the curve (\ref{degEllzeros}) has been obtained as
the projection of the original twisted cubic.\par An important
feature of such a regularization is that the regularized curve
is a curve in three-dimensional space. It
belongs to the same stratum $\Sigma_\varnothing$, and the genus of
the regularized curve remains zero.\par The presence of the
elliptic curve (\ref{1stratellcurv-red}) in the first stratum
$\Sigma_1$ indicates the existence of a different regularization
procedure. Generically the curve (\ref{1stratellcurv-red}) has
genus one, and $p_2,p_3$ are full Laurent series (\ref{1stratser})
with $H^i_k$ obeying the constraints (\ref{1stratsercoeff}). An
important property of these constraints is that the system
(\ref{1stratsercoeff}) does not admit reductions for which
$H^j_m=0$ for $m\geq n$. This is a well-known fact for the
Weierstrass reduction (\ref{WPred}). So $p_2$ and $p_3$ are either
full Laurent series or the polynomials $p_2^s=z^2+H^2_{-1}z$,
$p_3^s=z^3+H^3_{-1}z$. In the latter case the cubic curve
(\ref{1stratellcurv-red}) is singular and has the form
\begin{equation}
\label{singpolyn-cubcurve}
\left(p_{{3}}+\frac{3}{2}\,H^2_{{-1}}p_{{2}}
+\frac{1}{2}\,H^2_{{-1}}H^3_{{-1}}+\frac{1}{2}\,{H^2_{{-1}}}^{3}\right)^2-\left(p_2+\frac{1}{4}{H^2_{-1}}^2\right)\left(p_2+{H^3_{-1}}+{H^2_{-1}}^2\right)^2=0.
\end{equation}
Now let us compare the singular curves (\ref{singpolyn-cubcurve}) and (\ref{degEllzeros}). Taking into account (\ref{k2k3}), one readily concludes that they represent the same curve under the correspondence
\begin{equation}
\begin{split}
& \alpha \leftrightarrow H^2_{-1}, \quad \beta-3H^1_1\leftrightarrow H^3_{-1}, \quad p_1 \leftrightarrow z,\quad
p_2 \leftrightarrow k_2+2H^1_1,\quad p_3 \leftrightarrow k_3 +3H^1_2.
\end{split}
\end{equation}
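This identification can also be verified symbolically; a minimal SymPy sketch (with A, B standing for $H^2_{-1}$, $H^3_{-1}$ and H1, H2 for $H^1_1$, $H^1_2$) checks that (\ref{degEllzeros}) turns into (\ref{singpolyn-cubcurve}) under the correspondence above:

```python
import sympy as sp

p2, p3, A, B, H1, H2 = sp.symbols('p2 p3 A B H1 H2')

# the correspondence: k2 = p2 - 2H1, k3 = p3 - 3H2,
# alpha = A, beta = B + 3H1
k2, k3, al, be = p2 - 2*H1, p3 - 3*H2, A, B + 3*H1

nodal = ((k3 + sp.Rational(3, 2)*al*k2 + 3*H2 + sp.Rational(3, 2)*H1*al
          + al**3/2 + be*al/2)**2
         - (k2 + 2*H1 + al**2/4)*(k2 - H1 + be + al**2)**2)
singular = ((p3 + sp.Rational(3, 2)*A*p2 + A*B/2 + A**3/2)**2
            - (p2 + A**2/4)*(p2 + B + A**2)**2)
assert sp.expand(nodal - singular) == 0
```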
So, the boundary form of the elliptic curve
(\ref{1stratellcurv-red}) on the boundary $\Delta_{01}$ between
strata $\Sigma_1$ and $\Sigma_\varnothing$ coincides with the
projection of the twisted cubic (\ref{bigcellparam}) on this
boundary from the side of $\Sigma_\varnothing$.\par This
observation suggests the following mechanism for transition
between the subspaces $W_\varnothing$ and $W_1$ of the strata
$\Sigma_\varnothing$ and $\Sigma_1$. Inside $W_\varnothing$ one
has the twisted cubic (\ref{bigcellparam}). Its form on the boundary
$\Delta_{01}$ from the side of $\Sigma_\varnothing$ is given by
the nodal cubic (\ref{degEllzeros}). It coincides (under the
identification) with the boundary form (\ref{singpolyn-cubcurve})
of the elliptic curve (\ref{1stratellcurv-red}). In order to move
inside $\Sigma_1$ the polynomials (\ref{k2k3}) should become the
Laurent series
\begin{equation}
\label{01tailgrowth}
\begin{split}
k_2+2H^1_{1} \to& {k_1}^2+\alpha k_1+\sum_{n=1}^\infty \frac{H^2_n}{{k_1}^{n}},\\
k_3+3H^1_{2} \to& {k_1}^3+\left(\beta-3H^1_1\right)k_1+\sum_{n=1}^\infty \frac{H^3_n}{{k_1}^{n}},
\end{split}
\end{equation}
where $H^2_j$ and $H^3_j$ should obey the constraints (\ref{1stratsercoeff}). We emphasize that in the transition $W_\varnothing \to W_1$ the variable $k_1=p_1$ in $\Sigma_\varnothing$ becomes the formal variable $z$ in $\Sigma_1$.\par
The transition from the elliptic curve (\ref{1stratellcurv-red}) to the twisted cubic in $\Sigma_\varnothing$ is just the inverse process. The boundary form (\ref{singpolyn-cubcurve}) of the elliptic genus one curve (\ref{1stratellcurv-red}) on $\Delta_{01}$ from the side of $\Sigma_1$ is obtained by cutting the Laurent tails of $p_2$ and $p_3$. Passing to $\Sigma_\varnothing$ one has the curve (\ref{degEllzeros}). Then blowing-up the singularity, one gets the twisted cubic. \par
In such a transition mechanism both generic curves in the strata
$\Sigma_\varnothing$ and $\Sigma_1$, i.e. the twisted cubic
(\ref{bigcellparam}) and elliptic curve (\ref{1stratellcurv-red})
are regular but have different genus. This mechanism provides us
with a method of regularization of the nodal cubic
(\ref{singpolyn-cubcurve}) by instantaneously growing the full
Laurent tail according to (\ref{01tailgrowth}).\par This mechanism
is also valid for the transition between the entire infinite-dimensional
subspaces $W_\varnothing$ and $W_1$. \par Now let us consider
the quintic (\ref{0F25}). It has two ordinary double points.
Complete resolution of these singularities without changing the
genus (zero) is performed by quadratic transformations in two
steps. In the final form it is given by the first four equations
(\ref{bigcellcurr}) and the fifth order Veronese curve in the
space with coordinates $(p_1,p_2,p_3,p_4,p_5)$ is the corresponding
regularized curve.\par The regularization of the quintic
(\ref{0F25}) by increasing the genus to one is provided by the
transition to the quintic (\ref{1F25}) via the procedure of raising
the Laurent tail of the type (\ref{01tailgrowth}). Cutting the
Laurent tail one passes from the stratum $\Sigma_1$ to
$\Sigma_\varnothing$.\par A similar procedure of resolution of
singularities takes place for the trigonal curve (\ref{singtrigcurv})
and the $(4,5)$ curve (\ref{0F45}). In the big cell their genus one
regularized versions are given by the curves (\ref{1stratTrig}) and
(\ref{1F45}). \par Singularities of the trigonal curve
(\ref{1stratTrig}) in the first stratum can again be resolved in
two ways. The first way consists in performing quadratic
transformations from $p_4$ to the new variable $p_2$ defined by
\begin{equation}
\label{p4p2-1strat}
p_4={p_2}^2-2H^2_{-1}p_3-{H^2_{-1}}^2 p_2 -2H^2_{-1}H^2_1-2H^2_2.
\end{equation}
The corresponding regularized curve in the three dimensional space
$(p_2,p_3,p_4)$ is the genus one curve which is the intersection
of the cylindrical surfaces generated by the elliptic curve
(\ref{1stratellcurv-red}) and the surface defined by equation
(\ref{p4p2-1strat}). The second regularization is provided by the
transition from the curve (\ref{1stratTrig}) to the genus two
trigonal curve (\ref{2strattrigcurv}) in the stratum
$\Sigma_{1,2}$.\par An analogous mechanism of resolution of
singularities takes place for other algebraic curves in the Sato
Grassmannian. It has a particularly simple form for the Grassmannian
Gr$^{(2)}$. In the big cell $\Sigma_\varnothing^{(2)}$ the
blowing-up of singular hyperelliptic curves (\ref{FnGr2-bigcell})
is performed by the series of quadratic transformations in the
same way as that described above for the general big cell
$\Sigma_\varnothing$.\par As far as the regularization with the
change of genus is concerned we again consider general projections
of the normal rational curves (\ref{FnGr2-bigcell}) onto the
two-dimensional subspaces with coordinates $(p_2,p_{2n+1})$. For the
twisted cubic it is given by equation (\ref{degEllzeros}) with
$H^1_2=0$ and has the polynomial parameterization (\ref{k2k3}).
Comparing the curve (\ref{degEllzeros}) with the elliptic curve
(\ref{2C6}) in $\Sigma^{(2)}_2$, we observe that the
regularization of the curve (\ref{degEllzeros}) is achieved by the
procedure of the instant growing-up of the full Laurent tail given
by
\begin{equation}
\begin{split}
z^2 \to& \tilde{z}^2 = {k_1}^2,\\
k_3 \to &p^{(1)}_3 = \tilde{z}^3+ (\beta-3H^1_1)\tilde{z} + \sum_{n=1}^\infty \frac{H^3_{2n+1}}{\tilde{z}^{2n+1}}
\end{split}
\end{equation}
where $H^3_{2n+1}$ should obey the constraints (\ref{oddH2Gr}). So again $k_1(p_1)$ from $\Sigma_\varnothing^{(2)}$ becomes a formal parameter in the Laurent series in $\Sigma_1^{(2)}$. The inverse process of degeneration of the elliptic curve (\ref{2C6}) consists in cutting down the Laurent tail of $p_3$ (i.e. putting $H^3_{2n+1}=0$, $n=1,2,3,\dots$). In such a procedure higher order singular hyperelliptic curves (\ref{FnGr2-bigcell}) from $\Sigma^{(2)}_0$ become genus one singular hyperelliptic curves (\ref{p=C-1Gr2}) from $\Sigma^{(2)}_2$.\par
In order to regularize the curves (\ref{FnGr2-bigcell}) for any $n$ completely one applies the growing-up mechanism to $p_{2n+1}^{(0)}$ from $\Sigma_\varnothing^{(2)}$. Namely, by the substitution
\begin{equation}
\begin{split}
p_1 \to& \tilde{z}, \\
p_{2n+1}^{(0)} \to& p_{2n+1}= \tilde{z}^{2n+1}+\sum_{l=-n+1}^\infty \frac{H^{2n+1}_{2l+1}}{\tilde{z}^{2l+1}}
\end{split}
\end{equation}
one transforms genus zero curve (\ref{FnGr2-bigcell}) into genus $n$ hyperelliptic curve (\ref{2F22n-Gr2}).
\section{Tangent cohomology of varieties $W_{1,2,...,n}$ and systems of integrable quasilinear PDEs}
Subspaces $W_{1,2,\dots,n}$ described in the previous sections are
infinite-dimensional algebraic varieties defined by the set of
polynomial equations for $p_i$, $i\in S_{1,2,\dots,n}$ and
components $H^i_j$ of the structure constants. In compact form all
these equations are given by
\begin{eqnarray}
&&p_jp_k-\sum_lC_{jk}^lp_l=0 \label{alg}, \\
&&\sum_l\left(C_{jk}^lC_{lm}^p-C_{mk}^lC_{lj}^p\right)=0, \qquad j,k,m,p \in S_{1,2,\dots,n} \label{ass}
\end{eqnarray}
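As an illustration (ours, not part of the original text), the associativity condition (\ref{ass}) can be checked directly in a toy finite-dimensional analogue: the truncated polynomial algebra $\mathbb{C}[t]/(t^3)$ with basis $p_l=t^l$, $l=0,1,2$, and structure constants $C_{jk}^l=\delta_{j+k}^l$, with products past the truncation set to zero.

```python
# Toy check of the associativity condition (ass) in C[t]/(t^3):
# basis p_l = t^l, l = 0, 1, 2, structure constants C_{jk}^l = delta_{j+k,l},
# and products beyond the truncation discarded.
N = 3
C = [[[1 if j + k == l else 0 for l in range(N)]
      for k in range(N)] for j in range(N)]

def assoc_defect(j, k, m, p):
    # left-hand side of (ass) for fixed indices j, k, m, p
    return sum(C[j][k][l] * C[l][m][p] - C[m][k][l] * C[l][j][p]
               for l in range(N))

ok = all(assoc_defect(j, k, m, p) == 0
         for j in range(N) for k in range(N)
         for m in range(N) for p in range(N))
print(ok)  # -> True
```

The same check applies verbatim to any finite truncation; for the actual strata the constants $C_{jk}^l$ carry the $H^j_k$-parameterization described above.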
where, for given $W_{1,2,\dots,n}$, $C_{jk}^l$ have the
corresponding parameterization by $H^j_k$. One can treat $p_i$
and $H^i_j$ as the local affine coordinates in the
infinite-dimensional space $\Gamma_{W_{1,2,\dots,n}}$.\par A
standard method to analyze local properties of the varieties
defined by equations (\ref{alg},\ref{ass}) is to deal with their
tangent bundle $T_W$ (\cite{HP}-\cite{Har}). Let us denote by
$\pi_i$ and $\Delta^l_{ik}$ the elements of $T_W$ at a point. They
are defined, as usual, by the system of linear equations
\begin{eqnarray}
&&\pi_jp_k+p_j\pi_k-\sum_l\Delta_{jk}^lp_l-\sum_l C_{jk}^l\pi_l=0 \label{Talg}, \\
&&\sum_l \left(\Delta_{jk}^lC_{lm}^p+C_{jk}^l\Delta_{lm}^p-\Delta_{mk}^lC_{lj}^p-C_{mk}^l\Delta_{lj}^p\right)=0, \qquad j,k,m,p \in S_{1,2,\dots,n}. \label{Tass}
\end{eqnarray}
In a more general setting these equations also define a
$W_{1,2,\dots, n}$-module $E$.\par Equations (\ref{Talg}) and
(\ref{Tass}) imply certain cohomological properties of the variety
$W_{1,2,\dots, n}$. Indeed, if one introduces the bilinear map
$\psi(\alpha,\beta)$ with $\alpha,\beta \in W_{1,2,\dots, n}$
defined by (see e.g. \cite{Sha})
\begin{equation}
\psi(p_i,p_j)=\sum_l\Delta_{ij}^lp_l,
\end{equation}
then equations (\ref{Tass}) become
\begin{equation}
\label{2cocy}
\alpha \psi(\beta,\gamma)- \psi(\alpha\beta,\gamma)+ \psi(\alpha ,\beta\gamma)-\gamma \psi(\alpha,\beta)=0
\end{equation}
where $\alpha,\beta,\gamma \in W_{1,2,\dots,n}$. Bilinear maps of such type are called Hochschild $2-$cocycles \cite{Hoc}. So, the tangent bundle to the variety of the structure constants $C_{jk}^l$ is isomorphic to the linear space of the $2-$cocycles on $W_{1,2,\dots,n}$ (see e.g. \cite{Sha}).\par
Equation (\ref{Talg}) gives us additional information about the $2-$cocycle $\psi(\alpha,\beta)$. Introducing a linear map $g(\alpha)$ defined by $g(p_i)=\pi_i$, one rewrites equation (\ref{Talg}) as
\begin{equation}
\label{1cob}
\psi(\alpha,\beta)=\alpha g(\beta)+\beta g(\alpha) -g(\alpha\beta)
\end{equation}
with $\alpha,\beta \in W_{1,2,\dots,n}$. Thus
\begin{equation}
\psi(\alpha,\beta)=\delta g(\alpha,\beta)
\end{equation}
where $\delta$ is the Hochschild coboundary operation. Hence, $\psi(\alpha,\beta)$ is a $2-$coboundary and one has
\begin{prop}
\label{prop-coc-cob}
The tangent bundle of the variety $W_{1,2,\dots,n}$ is isomorphic to the linear space of $2-$coboundaries and Harrison's cohomology modules $H^2(W_{1,2,\dots,n},E)$ and $H^3(W_{1,2,\dots,n},E)$ vanish.
\end{prop}
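As a hedged sanity check (our illustration), the fact that every $2-$coboundary (\ref{1cob}) satisfies the cocycle condition (\ref{2cocy}) can be verified symbolically on a polynomial algebra with an arbitrary linear map $g$; the particular $g$ below is just one admissible choice.

```python
import sympy as sp

x = sp.symbols('x')
# sample elements of a commutative associative algebra (polynomials in x)
a, b, c = 1 + 2*x + x**3, 3 - x**2, x + 5*x**4

def g(f):
    # an arbitrary linear map (not a derivation); any linear choice works
    return sp.expand(x**2 * sp.diff(f, x) + 3*f)

def psi(u, v):
    # 2-coboundary psi = delta g, cf. eq. (1cob)
    return sp.expand(u*g(v) + v*g(u) - g(u*v))

# Hochschild 2-cocycle condition, eq. (2cocy)
cocycle = sp.expand(a*psi(b, c) - psi(a*b, c) + psi(a, b*c) - c*psi(a, b))
print(cocycle)  # -> 0
```

The cancellation is purely algebraic, so it holds for every linear $g$, in agreement with the proposition.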
This statement is essentially a reformulation for the subspaces $W_{1,2,\dots,n}$ of well-known results concerning the cohomology of commutative associative algebras (see e.g. \cite{Hron}-\cite{Fro2}). In particular, the existence of the $2-$cocycle and $H^2(W_{1,2,\dots,n},E)=0$ is a sufficient condition for the regularity of the point at which it is calculated (see e.g. \cite{Hron}-\cite{NR}).\par
The above results are valid for a general commutative associative
algebra and the corresponding $W$. For the Birkhoff strata the
structure constants $C_{jk}^l$ have a special structure, being
parameterized by $H^j_k$. Consequently $\Delta^l_{jk}$ are also
parameterized by $\Delta_{jk}$, i.e. by the images of $H^j_k$ under the
map $W\to E$, and equations (\ref{ass}) become the linear equations
(\ref{Tass}) for $\Delta_{jk}$. Being elements of $E$ (in
particular, of the tangent space at a point) $\Delta_{ij}$ admit a
natural Ansatz
\begin{equation}
\label{D-ans}
\Delta_{jk}=\frac{\partial u_k}{\partial x_j}
\end{equation}
where $u_k$ is a set of independent coordinates for the variety
defined by the associativity condition (\ref{ass}) and $x_j$ is a
set of new independent parameters. Under the Ansatz (\ref{D-ans})
equations (\ref{Tass}) take the form of quasilinear PDEs for
$u_i$. Solutions of these systems provide us with particular
$2-$cocycles and, hence, $2-$coboundaries associated with the
subspaces $W_{1,2,\dots,n}$.\par These systems of quasilinear PDEs
are very special. Let us consider the subspace $W_{\varnothing}$
in the big cell. Since in this case
$C_{jk}^l=\delta_{j+k}^l+H^k_{j-l}+H^j_{k-l}$ one has
$\Delta^l_{kj}=\Delta_{k,j-l}+\Delta_{j,k-l}$ and the system
(\ref{Tass}) is
\begin{equation}
\label{bigcelllin}
\begin{split}
\Delta_{j+k,m}&+
\left(
-\Delta_{j, m+k}
+\sum_{l=1}^{j-1}H^k_{j-l}\Delta_{l m}
+\sum_{l=1}^{k-1}H^j_{k-l}\Delta_{l m}
-\sum_{l=1}^{m-1}H^k_{m-l}\Delta_{j l}
\right)\\
&+
\left(
-\Delta_{k, m+j}
+\sum_{l=1}^{j-1}\Delta_{k,j-l}H^l_{m}
+\sum_{l=1}^{k-1}\Delta_{j,k-l}H^l_{m}
-\sum_{l=1}^{m-1}\Delta_{k,m-l}H^j_{l}
\right)=0.
\end{split}
\end{equation}
This system implies
\begin{equation}
\label{bigcellsymmetrizz}
k \Delta_{ik}-i\Delta_{ki}=0.
\end{equation}
Let us choose $H^1_k$ as the variables $u_k$.
\begin{prop}
\label{prop-bigcellansatz}
Under the Ansatz
\begin{equation}
\label{bigcellansatz}
\Delta_{ik}=\frac{\partial u_k}{\partial x_i}, \qquad i,k=1,2,3,\dots\ .
\end{equation}
the system (\ref{bigcelllin}) coincides with the dKP hierarchy.
\end{prop}
{\bf Proof } For $j=1,k=2,m=1$ the system (\ref{bigcelllin}) is
\begin{equation}
\Delta_{31}-\Delta_{13}-\Delta_{22}+2H^1_1\Delta_{11}=0
\end{equation}
while the relations (\ref{bigcellsymmetrizz}) at $i=1,k=2$ and
$i=1,k=3$ are
\begin{equation}
2\Delta_{12}-\Delta_{21}=0, \qquad 3\Delta_{13}-\Delta_{31}=0.
\end{equation}
The ansatz (\ref{bigcellansatz}) gives
\begin{equation}
\label{dKP-KZ}
\begin{split}
&\partial_{x_3} u_1 -\frac{3}{2} \partial_{x_2} u_2 +3u_1\partial_{x_1} u_1=0,\\
& 2\partial_{x_1} u_2-\partial_{x_2} u_1=0.
\end{split}
\end{equation}
These are the celebrated dKP (Khokhlov-Zabolotskaya) equations (see e.g. \cite{Zak1}-\cite{Kri5},\cite{KM}). The higher equations (\ref{bigcelllin}), (\ref{bigcellsymmetrizz}) give rise to the higher dKP equations under the ansatz (\ref{bigcellansatz}). $\square$ \par
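The elimination carried out in the proof can be verified symbolically. The following is our illustrative check (not part of the original argument): under the ansatz $\Delta_{ik}=\partial u_k/\partial x_i$, combining the relation at $j=1,k=2,m=1$ with $3\Delta_{13}-\Delta_{31}=0$ reproduces the first equation of (\ref{dKP-KZ}).

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
u1, u2, u3 = (sp.Function(n)(x1, x2, x3) for n in ('u1', 'u2', 'u3'))
D = sp.diff

# (bigcelllin) at j=1, k=2, m=1 under the ansatz Delta_{ik} = d u_k / d x_i:
# Delta_31 - Delta_13 - Delta_22 + 2 H^1_1 Delta_11, with H^1_1 = u1
eq = D(u1, x3) - D(u3, x1) - D(u2, x2) + 2*u1*D(u1, x1)

# (bigcellsymmetrizz) at i=1, k=3 gives Delta_13 = Delta_31 / 3
eq_elim = eq.subs(D(u3, x1), D(u1, x3)/3)

# first dKP (Khokhlov-Zabolotskaya) equation of (dKP-KZ)
dkp1 = D(u1, x3) - sp.Rational(3, 2)*D(u2, x2) + 3*u1*D(u1, x1)
print(sp.simplify(sp.Rational(3, 2)*eq_elim - dkp1))  # -> 0
```

The remaining relation $2\Delta_{12}-\Delta_{21}=0$ translates directly into the second equation of (\ref{dKP-KZ}).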
As an immediate consequence of Proposition \ref{prop-bigcellansatz} one has
\begin{prop}
Solutions of the dKP hierarchy provide us with the class of $2-$cocycles and $2-$coboundaries
defined by
\begin{equation}
\label{dKP-cob}
\psi(p_j,p_k)=\sum_l \left(\frac{\partial u_{j-l}}{\partial x_k}+\frac{\partial u_{k-l}}{\partial x_j}\right)p_l
\end{equation}
for the subspace $W_\varnothing$ in the big cell.
\end{prop}
We will refer to such $2-$coboundaries as dKP $2-$coboundaries. These dKP $2-$coboundaries describe local properties of the family of normal rational curves discussed in section \ref{sect-bigcell}.\par
In terms of the dKP tau-function $F$ defined by (see e.g. \cite{TT,KM})
\begin{equation}
u_k=H^1_k=-\frac{1}{k}\frac{\partial^2 F}{\partial x_1\partial x_k}
\end{equation}
the whole dKP hierarchy is represented by the celebrated dispersionless Hirota-Miwa equations (see e.g. \cite{TT},\cite{KM})
\begin{gather}
\begin{aligned}\label{Hirota}
&-\frac{1}{m}F_{i+k,m}+\frac{1}{m+k}F_{i,k+m}+\frac{1}{i+m}F_{k,i+m}
+\sum_{l=1}^{i-1}\frac{1}{m(i-l)}F_{k,i-l}F_{l,m}\\&+\sum_{l=1}^{k-1}\frac{1}{m(k-l)}F_{i,k-l}F_{l,m}
-\sum_{l=1}^{m-1}\frac{1}{i(m-l)}F_{k,m-l}F_{i,l}=0
\end{aligned}
\end{gather}
where $F_{i,k}$ stands for the second-order derivative of $F$ with
respect to $x_{i}$ and $x_{k}$. So any solution $F$ of the system (\ref{Hirota}) provides us with the dKP $2-$cocycles (and $2-$coboundaries) given by
\begin{equation}
\psi(p_j,p_k)=-\sum_{l=1}\left(\frac{1}{j-l}\frac{\partial^2}{\partial x_k \partial x_{j-l}}
+\frac{1}{k-l}\frac{\partial^2}{\partial x_j \partial x_{k-l}}\right)\frac{\partial F}{\partial x_1} p_l.
\end{equation}
This formula shows that the choice (\ref{D-ans}) corresponds to a
simple realization of the map $W_\varnothing \to E$, namely, $F
\to \frac{\partial F}{\partial x_1}$ or $H^j_k \to \frac{\partial
H^j_k}{\partial x_1}$.
\par
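One consistency check (ours) of the tau-function substitution: with $u_k=-\frac{1}{k}\frac{\partial^2 F}{\partial x_1 \partial x_k}$, the constraint $2\partial_{x_1}u_2-\partial_{x_2}u_1=0$ from (\ref{dKP-KZ}) and the symmetry $k\,\partial_{x_i}u_k=i\,\partial_{x_k}u_i$ behind (\ref{bigcellsymmetrizz}) hold identically, simply by commutativity of mixed partial derivatives.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
F = sp.Function('F')(x1, x2, x3)
D = sp.diff

# u_k = -(1/k) d^2 F / (dx_1 dx_k)
u = {k: -sp.Rational(1, k)*D(F, x1, xk) for k, xk in ((1, x1), (2, x2), (3, x3))}

# second equation of (dKP-KZ) holds identically for tau-function data
print(sp.simplify(2*D(u[2], x1) - D(u[1], x2)))    # -> 0
# symmetry k d u_k / d x_i = i d u_i / d x_k, e.g. (i, k) = (2, 3)
print(sp.simplify(3*D(u[3], x2) - 2*D(u[2], x3)))  # -> 0
```

So the tau-function parameterization automatically resolves the exactness-type constraints, leaving only the genuinely dynamical equations (\ref{Hirota}).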
It is evident that all above expressions are well defined only
for bounded $\frac{\partial u_i}{\partial x_k}$. When
$\frac{\partial u_i}{\partial x_k} \to \infty$ the formulas
presented above break down and $H^2(W,E) \neq 0$.\par For the dKP
hierarchy the points where $\frac{\partial u_i}{\partial x_k} \to
\infty$ are the so-called breaking points (or points of gradient
catastrophe). Such points form the singular sector of the space of
solutions of the dKP hierarchy. In this sector the space of
variables $x_1,x_2,\dots$ is stratified and such stratification is
closely connected with the Birkhoff stratification. For
Burgers-Hopf hierarchy ($2-$reduction of the dKP hierarchy) and
the Grassmannian Gr$^{(2)}$ such situation has been analyzed in
\cite{KK}.\par We are confident that similar results hold for
other strata too.
\section{Families of curves, Poisson ideals and coisotropic deformations}
Families of curves, algebraic varieties and families of their
ideals considered above can also be viewed as embedded in larger
spaces with certain specific properties, for instance, as
coisotropic submanifolds of Poisson manifolds and Poisson ideals,
respectively. Recall that a submanifold of a Poisson manifold
equipped with the Poisson bracket $\{\ ,\ \}$ is a coisotropic
submanifold if its ideal $\mathcal{I}$ is a Poisson ideal (see
e.g. \cite{Wei}), i.e.
\begin{equation}
\label{IdId=Id}
\{ \mathcal{I} , \mathcal{I} \} \subset \mathcal{I}.
\end{equation}
The relevance of Poisson ideals in the study of (quantum) cohomology of manifolds was observed in the paper \cite{GK}. A theory of coisotropic deformations of commutative associative algebras based on the property (\ref{IdId=Id}) was proposed in \cite{KM}. An extension of this theory to general algebraic varieties was given in \cite{KO}.\par
Thus let us consider an infinite-dimensional Poisson manifold $P$ with local coordinates $p_1,p_2,p_3,\dots,y_1,y_2,y_3,\dots$ endowed with the Poisson bracket defined by the relations
\begin{equation}
\label{py=J}
\{p_i,p_k\}=0, \quad \{y_i,y_k\}=0, \quad \{y_i,p_k\}=J_{ki}, \qquad i,k=1,2,3,\dots
\end{equation}
where $J_{ki}$ are certain functions of $p$ and $y$. This choice of the Poisson structure is suggested by the roles that the variables $p_i$ and $y_k$ play in our construction. The Jacobi identities for the Poisson structures (\ref{py=J}) are given by the system
\begin{equation}
\label{bigcell-Jacobi}
\begin{split}
&\sum_s J_{ls} \partial_{y_s} J_{kj}-\sum_s J_{ks} \partial_{y_s} J_{lj}=0,\\
&\sum_s J_{sj} \partial_{p_s} J_{lk}-\sum_s J_{sk} \partial_{p_s} J_{lj}=0.
\end{split}
\end{equation}
Then, we consider ideals $\mathcal{I}(\Gamma_\infty)$ of the families of algebraic varieties in $W_{1,2,\dots,n}$ discussed in sections \ref{sect-bigcell}-\ref{sect-Gr2} as ideals in $P$ and require that they are Poisson ideals, i.e.
\begin{equation}
\label{IGIG=IG}
\{ \mathcal{I}(\Gamma_\infty) , \mathcal{I}(\Gamma_\infty) \} \subset \mathcal{I}(\Gamma_\infty).
\end{equation}
The property (\ref{IGIG=IG}) means, in particular, that the
Hamiltonian vector fields generated by each member of
$\mathcal{I}(\Gamma_\infty)$ are tangent to the coisotropic
submanifold with the ideal $\mathcal{I}(\Gamma_\infty)$.\par The
crucial question now is whether a Poisson structure exists such that the
ideals $\mathcal{I}(\Gamma_\infty)$ obey (\ref{IGIG=IG}). Let us
begin with the big cell. For the subspace $W_\varnothing$ the
answer is given by
\begin{prop}
The family of ideals $I(\Gamma_\infty)$ of the family of normal rational curves in the big cell represents the Poisson ideal in the Poisson manifold endowed with the Poisson brackets (\ref{py=J}) with $J_{ik}$ obeying the constraints
\begin{equation}
\label{bigcellJcond}
(J_{i\ k-1}-J_{k\ i-1})|_{\Gamma_\infty}=0\qquad i,k=2,3,4,\dots\ .
\end{equation}
\end{prop}
{\bf Proof } To prove (\ref{IGIG=IG}) it is sufficient to show that for the elements $h_n$ of the basis of $I(\Gamma_\infty)$ one has $\{h_n,h_m\}\subset I(\Gamma_\infty)$. The local coordinates $p^*_n=P_n(-p_1,-\frac{1}{2}p_2,-\frac{1}{3}p_3,\dots)$, $n=2,3,4,\dots$ and $u_n=H^1_n$ and the canonical basis $h^*_2,h^*_3,h^*_4,\dots$ given by (\ref{bigcell-h*}), i.e. $h^*_n=p^*_n-u_{n-1}$, $n=2,3,4,\dots$, are the most convenient for this purpose. In these coordinates one has the identity
\begin{equation}
\{h^*_n,h^*_m\}=J^*_{n\ m-1}-J^*_{m\ n-1}, \qquad n,m=2,3,4,\dots
\end{equation}
where $J^*_{n m}$ denotes the Poisson tensor in these coordinates. So the conditions $\{h_n,h_m\}\subset I(\Gamma_\infty)$ are satisfied if and only if the conditions (\ref{bigcellJcond}) are valid.
$\square$ \par
On $\Gamma_\infty$ one has $p_n^*=u_{n-1}$, $n=2,3,4,\dots$ and, hence,
\begin{equation}
\label{bigcellJ*=ab}
J^*_{ik}|_{\Gamma_\infty}=\alpha_{ik}(u)+\beta_{ik}(u)p^*_1, \qquad i,k=1,2,3\dots
\end{equation}
where $\alpha_{ik}$ and $\beta_{ik}$ are functions of $u_k$ only.
Since $p^*_1 \notin I(\Gamma_{\infty})$ the conditions
(\ref{bigcellJcond}) are equivalent to
\begin{equation}
\label{bigcella=b=0}
\alpha_{i\ k-1}=\alpha_{k\ i-1}, \quad \beta_{i\ k-1}=\beta_{k\ i-1}, \qquad i,k=1,2,3,\dots \ .
\end{equation}
The property (\ref{bigcellJ*=ab}) indicates that Poisson tensors $J^*_{ik}$ linear in the variables $p^*_k$ could be of particular relevance. Thus let us consider the following class of tensors $J^*_{ik}$
\begin{equation}
\label{bigcellJ*}
J^*_{lk}=-\sum_m\frac{1}{m} J_{mk}(u)p^*_{l-m}
\end{equation}
where $J_{mk}(u)$ depend only on the variables $u_1,u_2,u_3,\dots$ . The conditions (\ref{bigcellJcond}) or (\ref{bigcellJ*=ab}) are equivalent to the following
\begin{equation}
\label{bigcellJcond2}
\begin{split}
& \frac{1}{m}J_{mn}-\frac{1}{n}J_{nm}=0, \\
& \frac{1}{m}J_{m\ n-1}-\frac{1}{n}J_{n\ m-1} + \sum_{k=1}^{m-2}\frac{1}{k}u_{m-k-1}J_{k\ n-1}
- \sum_{k=1}^{n-2}\frac{1}{k}u_{n-k-1}J_{k\ m-1}=0, \quad n,m=1,2,3,\dots\ .
\end{split}
\end{equation}
Using the well-known property of Schur polynomials, i.e. $\partial_{p_k}P_n(p)=P_{n-k}(p)$, which implies that $\partial_{p_i} h =-\frac{1}{i}\sum_{l\geq i}p^*_{l-i}\partial_{p_l^*}h$, one easily concludes that the Poisson structure (\ref{py=J}) with $J^*_{lk}$ of the form (\ref{bigcellJ*}) in the coordinates $p_1,p_2,\dots;u_1,u_2,\dots$ has the form
\begin{equation}
\label{pu=Ju}
\{p_i,p_k\}=0, \quad \{u_i,u_k\}=0, \quad \{u_i,p_k\}=J_{ki}(u), \qquad i,k=1,2,3,\dots\ .
\end{equation}
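The Schur-polynomial identity $\partial_{p_k}P_n(p)=P_{n-k}(p)$ used in the preceding computation can be checked symbolically. We assume here the standard generating-function definition $\exp\left(\sum_{k\geq1}p_kz^k\right)=\sum_{n\geq0}P_n(p)z^n$ (an assumption on our part; the paper's definition is given in an earlier section).

```python
import sympy as sp

z = sp.symbols('z')
N = 6
p = sp.symbols('p1:7')  # p1, ..., p6

# Schur polynomials P_n from exp(sum_k p_k z^k) = sum_n P_n z^n (assumed definition)
gen = sp.exp(sum(p[k-1]*z**k for k in range(1, N+1)))
ser = sp.series(gen, z, 0, N+1).removeO()
P = [sp.expand(ser.coeff(z, n)) for n in range(N+1)]  # P[0] = 1

# check d P_n / d p_k = P_{n-k} for all admissible n, k
ok = all(sp.expand(sp.diff(P[n], p[k-1]) - P[n-k]) == 0
         for n in range(1, N+1) for k in range(1, n+1))
print(ok)  # -> True
```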
\begin{obs}
The system (\ref{bigcellJcond2}) is equivalent to the system (\ref{bigcelllin}) modulo the associativity conditions (\ref{bigcellsercoeff}) and $J_{nm}=\Delta_{nm}$. So there is a strong interrelation between cohomological and Poisson structures associated with the subspace $W_\varnothing$ in the big cell.
\end{obs}
This fact has been checked by computer calculations up to
$n,m=11$. We do not have a formal proof of this statement.\par Note
that due to the properties of the Schur polynomials the Poisson
tensor (\ref{pu=Ju}) is of the form
\begin{equation}
\label{bigcellJJ*}
J^*_{ik}=\sum_{m} J_{mk}(u)\frac{\partial p_i^*}{\partial p_m}.
\end{equation}
A subclass of the Poisson tensors (\ref{bigcellJJ*}) for which $J_{mk}(u)=\frac{\partial u_k}{\partial x_m}$, i.e.
\begin{equation}
\label{bigcellJJ*-sub}
J^*_{ik}=\sum_{m} \frac{\partial p_i^*}{\partial p_m}\frac{\partial u_k}{\partial x_m} , \qquad i,k=1,2,3,\dots
\end{equation}
where $x_1,x_2,x_3,\dots$ are new coordinates on $\mathcal{M}$, is
of particular interest. First, in the coordinates $x_i$, $p_i$ the
Poisson structures (\ref{py=J}), (\ref{pu=Ju}) take the form
\begin{equation}
\label{py=Darb}
\{p_i,p_k\}=0, \quad \{x_i,x_k\}=0, \quad \{x_i,p_k\}=\delta_{ki}, \qquad i,k=1,2,3,\dots\ .
\end{equation}
i.e., the coordinates $p_i,x_i$, $i=1,2,3,\dots$ are the Darboux
coordinates in $\mathcal{M}$. Second, the Jacobi conditions
(\ref{bigcell-Jacobi}) are identically satisfied for the Ansatz
$J_{ik}(u)=\frac{\partial u_k}{\partial x_i}$ while the algebraic
constraints (\ref{bigcellJcond2}) become the system of quasilinear
equations
\begin{equation}
\label{bigcelluxcond}
\begin{split}
& \frac{1}{m}\frac{\partial u_n}{\partial x_m}-\frac{1}{n} \frac{\partial u_m}{\partial x_n} =0, \\
& \frac{1}{m}\frac{\partial u_{n-1}}{\partial x_m}-\frac{1}{n}\frac{\partial u_{m-1}}{\partial x_n}
+ \sum_{k=1}^{m-2}\frac{1}{k}u_{m-k-1} \frac{\partial u_{n-1}}{\partial x_k}
- \sum_{k=1}^{n-2}\frac{1}{k}u_{n-k-1} \frac{\partial u_{m-1}}{\partial x_k}=0, \quad n,m=1,2,3,\dots\ .
\end{split}
\end{equation}
This system of equations coincides with that derived in \cite{KM2}
in a different manner. It was shown in \cite{KM2} that the system
(\ref{bigcelluxcond}) is equivalent to the dKP hierarchy. This
fact provides us with an alternative proof of Proposition
\ref{prop-bigcellansatz}.
\par
Thus we have
\begin{obs}
In the Darboux coordinates the system of equations (\ref{pu=Ju}), (\ref{bigcellJcond2}) characterizing the Poisson structure for the family of ideals $I(\Gamma_{\infty})$ is equivalent to the dKP hierarchy with $x_1,x_2,x_3,\dots$ and $u_1,u_2,u_3,\dots$ playing the role of independent and dependent variables, respectively.\par
The sets of variables $(p^*_k,u_k)$ and $(p^*_k,x_k)$ play dual roles in the description of the families of ideals $I(\Gamma_{\infty})$. The former are canonical from the algebraic viewpoint while the latter are canonical within the interpretation of the family of ideals $I(\Gamma_{\infty})$ as a Poisson ideal. By virtue of the formulas (\ref{bigcellJcond2}),(\ref{bigcellJJ*-sub}) the connection between these two sets of variables is provided by solutions of the dKP hierarchy.
\end{obs}
This observation points out the deep interrelation between the
theory of Poisson ideals for the families of algebraic curves in
the Sato Grassmannian and the theory of integrable hierarchies and the
role of Darboux coordinates in such an interconnection.\par The
Darboux coordinates have been used in \cite{KM} within the study of
coisotropic deformations of the relations (\ref{alg},\ref{ass})
viewed as equations defining structure constants of associative
algebras. It was shown in \cite{KM} that for the infinite-dimensional
polynomial algebra in the Fa\`{a} di Bruno basis, for which
structure constants $C^l_{jk}$ are given by
(\ref{bigcellstructcoeff}) the coisotropy condition
(\ref{IGIG=IG}) is equivalent to the associativity conditions
(\ref{bigcellsercoeff}) plus the exactness conditions
\begin{equation}
\label{bigcell-exacta}
\frac{\partial H^i_n}{\partial x_l}=\frac{\partial H^l_n}{\partial x_i}, \qquad i,l,n=1,2,3,\dots \ .
\end{equation}
These conditions together with the algebraic relations $nH^i_n=iH^n_i$ imply the existence of a function $F$ such that \cite{KM}
\begin{equation}
H^i_m=-\frac{1}{m}\frac{\partial^2 F}{\partial x_i\partial x_m}.
\end{equation}
With such a form of $H^i_k$ the associativity conditions (\ref{bigcellsercoeff}) are equivalent to the celebrated Hirota-Miwa bilinear equations (\ref{Hirota}).\par
This result indicates once more the importance of the Darboux coordinates in our whole approach. A detailed analysis of the Poisson structures for ideals of the families of algebraic curves in Birkhoff strata and their connection with the hierarchy of integrable equations will be given in a forthcoming paper.\par
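A quick symbolic check (ours) that the representation $H^i_m=-\frac{1}{m}\frac{\partial^2F}{\partial x_i\partial x_m}$ automatically satisfies both the exactness conditions (\ref{bigcell-exacta}) and the algebraic relations $nH^i_n=iH^n_i$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = {1: x1, 2: x2, 3: x3}
F = sp.Function('F')(x1, x2, x3)

# H^i_m = -(1/m) d^2 F / (dx_i dx_m)
H = {(i, m): -sp.Rational(1, m)*sp.diff(F, xs[i], xs[m])
     for i in xs for m in xs}

# exactness (bigcell-exacta): d H^i_n / d x_l = d H^l_n / d x_i
exact = all(sp.simplify(sp.diff(H[(i, n)], xs[l]) - sp.diff(H[(l, n)], xs[i])) == 0
            for i in xs for l in xs for n in xs)
# algebraic relations n H^i_n = i H^n_i
sym = all(sp.simplify(n*H[(i, n)] - i*H[(n, i)]) == 0
          for i in xs for n in xs)
print(exact, sym)  # -> True True
```

Both properties reduce to the commutativity of mixed partial derivatives of $F$, consistent with the discussion above.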
Here we will present only a few illustrative examples. In the stratum $\Sigma_1$ one has the ideal
\begin{equation}
\label{1strat-I}
I(\Gamma_{\infty}^1)=\langle \mathcal{F}^1_{23},h^{(1)}_4,h^{(1)}_5,\dots\rangle
\end{equation}
where $\mathcal{F}^1_{23}$ and $h^{(1)}_n$ are given by (\ref{1stratellcurv-red}) and (\ref{1strat-h1n}). The requirement that the family of ideals (\ref{1strat-I}) is a Poisson ideal gives rise to an infinite hierarchy of systems of PDEs. The simplest of them, which is equivalent to the condition $\{\mathcal{F}^1_{23},h^{(1)}_4\}\Big{|}_{\Gamma_{\infty}^1}=0$ with the canonical Poisson bracket and
\begin{equation}
\begin{split}
\mathcal{F}^1_{23}=&{p_{3}}^2-{p_{2}}^3-\mu_4p_2p_3-\mu_3{p_2}^2-\mu_2p_3-\mu_1p_2-\mu_0,\\
h^{(1)}_4=&p_4-{p_2}^2-v_2p_3-v_1p_2-v_0
\end{split}
\end{equation}
is given by (see also \cite{KO})
\begin{equation}
\label{KP-1}
\begin{split}
{\frac {\partial \mu_{{4}}}{\partial x_4}} =& -\frac{2}{3}\frac{\partial}{\partial x_2}(\mu_2\mu_3) -\frac{5}{9}{\mu_4}^2 \frac{\partial \mu_4}{\partial x_2}
+\frac{4}{9} \mu_4 \frac{\partial \mu_4}{\partial x_3} +2\frac{\partial \mu_2}{\partial x_2} +\frac{4}{3} \frac{\partial \mu_3}{\partial x_3} +\frac{4}{9}\frac{\partial \mu_4}{\partial x_2} {\frac{\partial}{\partial x_2}}^{-1} \frac{\partial \mu_4}{\partial x_3} +\frac{8}{9} {\frac{\partial}{\partial x_2}}^{-1} {\frac{\partial^2 \mu_4}{\partial {x_3}^2}}, \\
{\frac {\partial \mu_{{3}}}{\partial x_4}} =& -\frac{2}{3} \mu_{{4}} \mu_{{3}} {\frac {\partial \mu_{{4}}}{\partial x_2}} +v_{{1}} {\frac {\partial \mu_{{3}}}{\partial x_2}}+2\,{\frac {\partial \mu_{{1}}}{\partial x_2}}
-3\,{\frac {\partial v_{{0}}}{\partial x_2}} -2\,\mu_{{3}} {\frac {\partial v_{{1}}}{\partial x_2}}
+\frac{2}{3} \mu_4 {\frac {\partial \mu_{{3}}}{\partial x_3}}
-\mu_{{4}} {\frac {\partial v_{{1}}}{\partial x_3}}
+\frac{4}{3} \,\mu_{{3}} {\frac {\partial \mu_4}{\partial x_3}}, \\
{\frac {\partial \mu_{{2}}}{\partial x_4}} =& -\frac{2}{3}\mu_{{4}} \mu_{{2}} {\frac {\partial \mu_{{4}}}{\partial x_2}}
+2\,{\frac {\partial v_{{0}}}{\partial x_3}} + v_{{1}} {\frac {\partial \mu_{{2}}}{\partial x_2}}
-\mu_{{4}} {\frac {\partial v_{{0}}}{\partial x_2}} -\frac{2}{3}\mu_{{1}} {\frac {\partial \mu_{{4}}}{\partial x_2}}
+\frac{2}{3}\mu_{{4}} {\frac {\partial \mu_{{2}}}{\partial x_3}} +\frac{2}{3}\mu_{{2}} {\frac {\partial \mu_{{4}}}{\partial x_3}}, \\
{\frac {\partial \mu_{{1}}}{\partial x_4}} =& -\frac{2}{3}\mu_{{4}} \mu_{{1}} {\frac {\partial \mu_{{4}}}{\partial x_2}}
+2\,{\frac {\partial \mu_{{0}}}{\partial x_2}}
+ v_{{1}} {\frac {\partial \mu_{{1}}}{\partial x_2}}
-2\,\mu_{{3}} {\frac {\partial v_{{0}}}{\partial x_2}}
-\mu_{{1}} {\frac {\partial v_{{1}}}{\partial x_2}}
+\frac{2}{3}\mu_{{4}} {\frac {\partial \mu_{{1}}}{\partial x_3}}
-\mu_{{4}} {\frac {\partial v_{{0}}}{\partial x_3}}
-\mu_{{2}} {\frac {\partial v_{{1}}}{\partial x_3}}
+\frac{4}{3}\, \mu_{{1}} {\frac {\partial \mu_{{4}}}{\partial x_3}}, \\
{\frac {\partial \mu_{{0}}}{\partial x_4}} =& v_{{1}} {\frac {\partial \mu_{{0}}}{\partial x_2}}
-\mu_{{1}} {\frac {\partial v_{{0}}}{\partial x_2}}
+\frac{2}{3}\mu_{{4}} {\frac {\partial \mu_{{0}}}{\partial x_3}}
-\mu_{{2}} {\frac {\partial v_{{0}}}{\partial x_3}}
-\frac{2}{3}\mu_{{4}} \mu_{{0}} {\frac {\partial \mu_{{4}}}{\partial x_2}}
+\frac{4}{3}\, \mu_{{0}} {\frac {\partial \mu_{{4}}}{\partial x_3}}
\end{split}
\end{equation}
where $v_1=\frac{2}{3}\mu_3-\frac{2}{9}{\mu_4}^2+\frac{4}{3}{\frac{\partial}{\partial x_2}}^{-1}\frac{\partial \mu_4}{\partial x_3}$ and $v_0$ is associated with a gauge freedom of the system.\par
For the stratum $\Sigma_{1,2}$ the coisotropy condition (\ref{IGIG=IG}) is given by a pretty large system of equations. For example, the condition
\begin{equation}
\label{C8C9=0}
\{\mathcal{C}_8,\mathcal{C}_9\}\Big{|}_{\Gamma_{\infty}^2}=0
\end{equation}
with the Poisson bracket in the Darboux coordinates $(p_3,p_4,p_5,\dots,x_3,x_4,x_5,\dots)$ and the cyclic variable $x_3$, is equivalent to the system
\begin{equation}
\partial_{x_4}H^5_i=\partial_{x_5}H^4_i, \qquad i=1,2,4,5
\end{equation}
where
\begin{equation}
\begin{split}
H^4_4=&3\,U_{x_4}U_{x_5}V_{x_4}
-2\,U_{x_4}{V_{x_4}}^{2}-H^4_{{2}}U_{x_4}
-2\,{U_{x_4}}^{2}V_{x_5}-U_{x_4}{U_{x_5}}^{2}
+{U_{x_4}}^{4}-H^4_{{1}}U_{x_5} +{V_{x_5}}^{2}
+V_{x_4}H^4_{{1}},\\
H^4_5=&2\,H^4_{{2}}V_{x_4}-6\,U_{x_5}{V_{x_4}}^{2}
+5\,V_{x_4}{U_{x_5}}^{2}-\frac{5}{3}\,V_{x_4}{U_{x_4}}^{3}
-2\,U_{x_5}H^4_{{2}}+\frac{4}{3}\,U_{x_5}{U_{x_4}}^{3}
-H^4_{{1}}V_{x_5}\\&
+3\,V_{x_4}U_{x_4}V_{x_5}
-2\,U_{x_5}U_{x_4}V_{x_5}+\frac{7}{3}\,{V_{x_4}}^{3}-\frac{4}{3}\,{U_{x_5}}^{3},\\
H^5_1=&{V_{x_4}}^{2}+2\,H^4_{{2}}+U_{x_4}V_{x_5}
-2\,U_{x_5}V_{x_4}+{U_{x_5}}^{2}-{U_{x_4}}^{3},\\
H^5_2=&2\,H^4_{{1}}U_{x_4}-V_{x_5}V_{x_4}
+V_{x_5}U_{x_5}-V_{x_4}{U_{x_4}}^{2},\\
H^5_4=&2\,H^4_{{2}}V_{x_4}-7\,U_{x_5}{V_{x_4}}^{2}
+6\,V_{x_4}{U_{x_5}}^{2}-\frac{4}{3}\,V_{x_4}{U_{x_4}}^{3}
-2\,U_{x_5}H^4_{{2}}+\frac{5}{3}\,U_{x_5}{U_{x_4}}^{3}
-H^4_{{1}}V_{x_5} \\& +4\,V_{x_4}U_{x_4}V_{x_5}
-3\,U_{x_5}U_{x_4}V_{x_5}+\frac{8}{3}\,{V_{x_4}}^{3}
-\frac{5}{3}\,{U_{x_5}}^{3}-H^4_{{1}}{U_{x_4}}^{2},\\
H^5_5=&5\,U_{x_5}V_{x_4}{U_{x_4}}^{2}-2\,{V_{x_4}}^{2}{U_{x_4}}^{2}
-3\,H^4_{{2}}{U_{x_4}}^{2}-4\,{U_{x_4}}^{3}V_{x_5}
-2\,{U_{x_4}}^{2}{U_{x_5}}^{2}+2\,{U_{x_4}}^{5} \\&
+2\,U_{x_4}{V_{x_5}}^{2}-2\,V_{x_4}H^4_{{1}}U_{x_4}
+{H^4_{{1}}}^{2}+2\,V_{x_5}{V_{x_4}}^{2}
-3\,V_{x_4}V_{x_5}U_{x_5}+V_{x_5}{U_{x_5}}^{2}
+H^4_{{2}}V_{x_5}
\end{split}
\end{equation}
and
$\partial_{x_4}H^5_{-2}=\partial_{x_5}H^4_{-2}:=\partial_{x_4}\partial_{x_5}U$
and
$\partial_{x_4}H^5_{-1}=\partial_{x_5}H^4_{-1}:=\partial_{x_4}\partial_{x_5}V$.\par
Finally, the requirement that the family of ideals
$I^2(\Gamma^2_{\infty})$ for hyperelliptic curves described in
Proposition \ref{2strat-tower} is the Poisson ideal with respect
to the canonical Poisson bracket gives rise to an infinite
hierarchy of hydrodynamic type systems which is equivalent to
that found in the paper \cite{KK}.
\section{Conclusion}
The approach to algebraic curves and associated integrable systems \emph{via} an analysis of the algebro-geometric structure of the Birkhoff strata of the Sato Grassmannian seems to be rather natural. Properties of the Birkhoff strata $\Sigma_s$ essentially fix the properties of algebraic curves at each point of the corresponding subsets $W_s$ and associated integrable systems. \par
Our approach is apparently different from those discussed before, in particular from the methods of Krichever \cite{Kri1,Kri2,Kri3}, Segal-Wilson \cite{SW}, Mulase \cite{Mul1,Mul2,Mul3} and Takasaki \cite{Tak1}. We shall try to clarify the principal differences (if any) and the possible interrelations between our method and those mentioned above in a future publication.
\subsubsection*{Acknowledgements}
The authors thank Marco Pedroni and Andrea
Previtali for many useful discussions. This work has been
partially supported by PRIN grant no. 28002K9KXZ and by FAR 2009 (\emph{Sistemi dinamici Integrabili e Interazioni fra campi e particelle}) of the University of Milano Bicocca.
\section{Introduction}
As a microblogging website allowing users to post messages of 140 characters or less called ``tweets'', Twitter has become the biggest daily source of news, public opinions, and personal discussions. For example, on the day of the 2016 U.S. presidential election, Twitter had nearly 40 million messages sent by midnight that day \cite{nytimes_url}. Twitter has 310 million monthly active users, posting hundreds of millions of tweets per day \cite{twitter_url}. People on Twitter share, exchange, and discuss events of their lives, including various issues relating to their health conditions or health-related behaviors. This phenomenon has given rise to a research area in which public health scientists achieve public health analytic outcomes by harnessing the vast amount of publicly available health-related data.
That said, there is a growing interest in exploring a wide range of topics over Twitter for epidemiology and surveillance. The study of Culotta et al. \cite{culotta2013lightweight} uses a Twitter corpus to estimate influenza rates and the sales volume of alcohol with high accuracy. The researchers in \cite{myslin2013using} present a content and sentiment analysis of tobacco-related Twitter messages and categorize tobacco-relevant posts, with a focus on emerging products like hookah and electronic cigarettes. Xu et al. \cite{xu2016leveraging} observe the usage patterns of the terms ``cancer'', ``breast cancer'', ``prostate cancer'', and ``lung cancer'' between Caucasian and African American groups on Twitter to understand their knowledge and awareness of specific topics in real time. Clearly, social media-based surveillance highlights the potential of online user-generated data as a powerful and valuable medium for monitoring and tracking, in near real time, health-related issues and health risk behaviors towards addictive substances like alcohol or tobacco in the public community.
Marijuana has been legalized for medical and recreational use in eight states (Colorado, Washington, Alaska, Oregon, California, Nevada, Maine, and Massachusetts) and in Washington, D.C. Despite some medicinal benefits, marijuana usage has also had a myriad of detrimental impacts on public health. For example, according to data provided by the U.S. National Survey on Drug Use and Health \cite{macleod2004psychological}, youth with poor academic outcomes were more than four times as likely to have consumed marijuana in the past year than youth with higher grades. Additionally, the study in \cite{fried2002current} reveals a strong correlation between marijuana usage and a decline in intelligence quotient (IQ). Cannabis possession and consumption are still illegal under federal law because of their significant health and safety risks, especially to young individuals \cite{url_mmj_wh}. Thus, the threats that marijuana usage poses to public health and our society should be taken seriously. Consequently, surveillance of actual marijuana use and concerns would be necessary and useful for legislators in enacting appropriate public health laws.
Our research leverages Twitter's public data of marijuana-related tweets exchanged among users to reveal hidden patterns of marijuana-related discussion. In particular, we collected more than 300,000 marijuana-related tweets during November 2016. We use unigrams and bigrams of marijuana-related terms and hashtags to compute word frequencies. Furthermore, we apply text-mining sentiment techniques to analyze users' attitudes based on their tweets. Our data indicates a strong correlation between tweets with outer links and positive attitudes, and between the number of marijuana tweets and political events in November 2016.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{twitter_streaming}
\caption{The workflow of the tweet-collecting server, which aggregates tweets according to the vocabulary of marijuana-related keywords. The server is written in Python.}
\label{fig:twitter_streaming}
\end{figure}
\section{Related Work}
Together with the rapid expansion of social media such as Facebook, Instagram, and Snapchat, microblogging websites such as Twitter have evolved into an enormous source of many kinds of information. Many studies have used these sites for event monitoring and event-based surveillance. For instance, using information garnered from Twitter, the authors in \cite{pak2010twitter} build a sentiment classifier based on an n-gram model that is able to determine positive, negative, and neutral emotions of Twitter users. With the same purpose, \cite{agarwal2011sentiment} conduct their research with three types of models: a unigram model, a feature-based model, and a tree kernel-based model. Besides, Wang et al. \cite{wang2014hurricane}, using a Twitter corpus, reveal a high correlation between the movement of the 2012 Hurricane Sandy and the hashtags and tweets posted by users living in the affected locations. Researchers have also begun to investigate various ways of automatically collecting Twitter training data. For example, \cite{kouloumpis2011twitter} take advantage of Twitter hashtags as training data for their three-way sentiment classifiers.
Studies on Twitter data are also notable in the health care sector. For example, the work of \cite{li2016discovering}, for the first time, seeks to recommend relevant Twitter hashtags for health-related keywords based on distributed language representations. Analyzing data from Twitter, blogs, and forums, the authors in \cite{denecke2013exploit} attempt to detect hints of public health threats as well as to monitor the population's health status. In 2012, the researchers in \cite{paul2012model} introduced a new approach to discovering a large number of meaningful ailment descriptions by using machine learning and natural language processing techniques. In \cite{cavazos2015twitter}, the authors examine the sentiment and themes of marijuana-related chatter on Twitter sent by influential users and describe the demographics of these users. Another noteworthy work is \cite{cavazos2014characterizing}, in which the authors estimate user demographic characteristics based on the content of tweets of a popular pro-marijuana Twitter handle and its followers. However, these studies do not consider other meaningful properties of tweet metadata, such as geographical features, external links, and users' device types, or their correlations with each other and with other social phenomena.
\section{Data Collection and Dataset}
\subsection{Data Collection}
We collected and processed more than 300,000 marijuana-related tweets in the English language posted during November 2016. First, we built the marijuana vocabulary, i.e., the list of marijuana-related keywords, with the help of the Online Slang Dictionary (\textit{http://onlineslangdictionary.com}). Next, we developed a data collection tool that garnered all tweets containing one or more marijuana-related terms from cities and states in the United States. The workflow of the data collection mechanism is illustrated in Fig. \ref{fig:twitter_streaming}. The server, written in Python, interacts with the Twitter Search API. While the Twitter Search API usually serves tweets only from the past week, our system bypasses this time constraint by replicating the way the Twitter search engine works in browsers. The server calls the API with the command \textit{``https://twitter.com/i/search/timeline?f=realtime''} and the following parameters:
\begin{itemize}
\item \textit{q}: a query text in which searched tweets will contain. It is a word and phrase relating to marijuana, pre-stored in our vocabulary.
\item \textit{since}: the lower bound of the posting date of searched tweets.
\item \textit{until}: the upper bound of the posting date of searched tweets.
\item \textit{lang}: the language of searched tweets.
\end{itemize}
This API call retrieves a list of matched tweets in the form of an HTML string. The server then extracts the useful data from the HTML string and saves them to our tweet database server.
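The request construction described above can be sketched as follows. The endpoint and the parameter names (\textit{f}, \textit{q}, \textit{since}, \textit{until}, \textit{lang}) come from the text; the helper name is ours, and the endpoint is an internal Twitter one that may change or disappear, so this only illustrates how the query URL is assembled, not a supported API.

```python
from urllib.parse import urlencode

SEARCH_ENDPOINT = "https://twitter.com/i/search/timeline"

def build_search_url(q, since, until, lang="en"):
    # f=realtime asks for tweets ordered by recency rather than relevance
    params = {"f": "realtime", "q": q, "since": since,
              "until": until, "lang": lang}
    return SEARCH_ENDPOINT + "?" + urlencode(params)

# One query for one vocabulary keyword over the collection window
url = build_search_url("marijuana", "2016-11-01", "2016-11-30")
```

An HTTP client would then fetch this URL and hand the returned HTML to the extraction step.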
\subsection{Data Description and Processing}
The resulting tweets are stored in a NoSQL database containing $316,191$ documents relating to marijuana. Each document represents a tweet with 15 different fields extracted from the JSON object produced by our data collection process, including username, URL, external links, text, the numbers of retweets and favorites, keyword, state, posting time, type of device, etc. These records are also linked to a separate table of n-gram sequences processed from the tweet text. We use unigrams (1-grams) and bigrams (2-grams) for generating text-mining clouds. For sentiment analysis, we utilized machine learning techniques from a third party to code the tweets into different types: positive sentiment about marijuana, negative sentiment about marijuana, and neutral sentiment about marijuana. In addition, since hashtags (symbol \#) are likely to precede key words or phrases in a tweet, we extract hashtags from tweets as well. We also count the total number of external links of each user and analyze the users who post the most tweets containing external links.
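The n-gram sequences feeding the clouds can be generated with a few lines of standard text processing; this is a minimal sketch (the tokenization rule is ours, and the sample tweets are purely illustrative):

```python
import re
from collections import Counter

def ngrams(text, n):
    # lowercase, keep alphanumeric tokens, emit space-joined n-grams;
    # n=1 gives unigrams, n=2 gives bigrams
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tweets = ["Smoke weed every day",
          "legalize medical marijuana",
          "medical marijuana helps"]
unigrams = Counter(g for t in tweets for g in ngrams(t, 1))
bigrams = Counter(g for t in tweets for g in ngrams(t, 2))
```

The two counters are exactly the word frequencies a cloud generator consumes.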
\begin{figure*}
\centering
\subfloat[]{\includegraphics[width=.9\columnwidth]{mj_cloud}}
\subfloat[]{\includegraphics[width=1.1\columnwidth]{mmj_bigram_cloud} } \\
\caption{Unigram cloud (a) and Bigram cloud (b) of marijuana-related tweets collected during November 2016.}
\label{fig:mj_cloud_chart}
\end{figure*}
\begin{table}[t]
\centering
\caption{Top 20 users who post marijuana-related tweets with external links}
\label{tab:outerlink}
\begin{tabular}{|l|l|l|ll}
\cline{1-3}
\textbf{User} & \textbf{Number of links} & \textbf{Place} & & \\ \cline{1-3}
Potnetworkcom & 1638 & Denver, CO & & \\ \cline{1-3}
eatin\_n\_streets & 1582 & Denver, CO & & \\ \cline{1-3}
DenverCP & 764 & Denver, CO & & \\ \cline{1-3}
\_DiegoPellicer\_ & 654 & Seattle, WA & & \\ \cline{1-3}
MME\_MESA & 424 & Mesa, AZ & & \\ \cline{1-3}
ermphd & 410 & Austin, TX & & \\ \cline{1-3}
OG\_Chino & 397 & Los Angeles, CA & & \\ \cline{1-3}
ABG\_Marketplace & 343 & Kansas City, MO & & \\ \cline{1-3}
WeedFeed & 271 & Chicago, IL & & \\ \cline{1-3}
Boston\_CP & 270 & Boston, MA & & \\ \cline{1-3}
SLM420LOVE & 269 & California, CA & & \\ \cline{1-3}
CoCannabisCo & 266 & Oregon, OR & & \\ \cline{1-3}
SpeakEasy\_SEVL & 252 & Colorado Springs, CO & & \\ \cline{1-3}
Chance\_Takers & 243 & Atlanta, GA & & \\ \cline{1-3}
greco\_james & 238 & Phoenix, AZ & & \\ \cline{1-3}
Diabetes\_Newzz & 236 & New York, NY & & \\ \cline{1-3}
PhoenixCP & 233 & Phoenix, AZ & & \\ \cline{1-3}
420digitalweb & 224 & Denver, CO & & \\ \cline{1-3}
Cannabis\_Card & 212 & San Diego, CA & & \\ \cline{1-3}
StartupCannabis & 206 & New York, NY & & \\ \cline{1-3}
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{sentiment_and_tweet_type_correlation_1}
\caption{Sentiment analysis of $158,814$ tweets without outer links and $157,377$ tweets with outer links: tweets without outer links tend to be more negative than those with outer links. }
\label{fig:mmj_sentiment}
\vspace{-0.1in}
\end{figure}
\section{Results and Discussion}
\subsection{Marijuana Unigram and Bigram Clouds}
Unigrams and bigrams allow us to generate tag clouds that illustrate popular terms. Fig. \ref{fig:mj_cloud_chart}(a) shows the most frequent unigrams (after removing some of the most common terms in English). Generally, ``dope'', ``weed'', ``pot'', and ``marijuana'' are the highlighted words. Besides these four popular words, there are many action words associated with marijuana consumption, such as ``smoke'', ``smoking'', ``buy'', ``like'', ``love'', and ``smell''.
Interestingly, the data extracted by our text-mining algorithm indicate that many terms with provocative meanings, such as ``ass'', ``bitch'', ``shit'', or ``dam'', are frequently used in marijuana tweets. In addition, our data also show a strong correlation between the number of tweets and certain geographical locations that appear to have substantial cannabis-related activity. For example, Colorado, which was the first state to legalize marijuana use for adults 21 years of age or older, is mentioned most often. This suggests that the legalization of marijuana in many areas has sparked a controversy, with both positive and negative opinions.
\begin{figure*}
\includegraphics[width=7in,height=3.2in]{mmj_tweet_on_each_day_in_month}
\caption{Daily distribution of the number of tweets relating to marijuana in November 2016: there is an exponential increase in the number of marijuana-related tweets during the week of the US presidential election and the legalization votes in several more states.}
\label{fig:mmj_tweet_on_each_day_in_month}
\end{figure*}
Fig. \ref{fig:mj_cloud_chart}(b) reveals more detailed information on marijuana use via the bigram cloud. This type of cloud clearly shows many marijuana-related terms such as ``legal'', ``melting'', ``dope'', ``super'', ``crock'', etc. Word pairs related to legalization are also very frequent, which may reflect the state legalization votes during November. The frequency of ``medical marijuana'' indicates that more and more users want to promote the benefits of using marijuana for medical purposes.
\subsection{Identifying Users' Attitudes via Tweets}
Our data indicate that we can distinguish users' attitudes towards cannabis use via the number of outer links in their tweets. Outer links, or external links, are identified based on the total number of URLs in the tweet metadata, including full URLs and shortened URLs. Of the more than 300,000 tweets in our database, $158,814$ have no outer links and $157,377$ have outer links. Table \ref{tab:outerlink} shows the top 20 users with outer links in their tweets. Our analysis reveals that most of these users (17/20) were from states where the use of marijuana for medical or recreational purposes is legal (e.g., Colorado, Washington, Illinois, Massachusetts, California, and New York). For example, the top three users with the most tweets containing external links, unsurprisingly, come from Denver, Colorado, one of the first states where marijuana is legal for both medical and recreational purposes. More specifically, we find that most of these users are likely to be news and magazine organizations, such as Potnetworkcom, DenverCP, PhoenixCP, Boston\_CP, WeedFeed, and MME\_MESA. For example, the user Potnetworkcom, with 1638 tweets, has a website \textit{http://potnetwork.com} that publishes all things marijuana and entertains other users ``with up to date information about marijuana pop culture'', while DenverCP belongs to the website \textit{http://toplocalnow.com/}, which tweets breaking news and weather updates from Denver and many other cities. Our data reveal that many organizations that provide services and products associated with marijuana tend to utilize Twitter to promote their products and generate publicity.
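The per-user link tally behind Table \ref{tab:outerlink} amounts to grouping tweets by author and summing URL counts; a minimal sketch (the record field names are illustrative, and the sample records only mimic real usernames from the table):

```python
from collections import Counter

def link_counts(tweets):
    # tally the number of external links posted by each user
    counts = Counter()
    for t in tweets:
        counts[t["user"]] += len(t.get("links", []))
    return counts

sample = [
    {"user": "Potnetworkcom", "links": ["http://potnetwork.com", "http://a.example"]},
    {"user": "OG_Chino",      "links": ["http://b.example"]},
    {"user": "Potnetworkcom", "links": ["http://potnetwork.com"]},
]
top = link_counts(sample).most_common(1)
```

Sorting the counter and joining each username with its recorded location yields the table directly.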
We use a tweet sentiment analysis tool by Mashape \cite{mashape_url} to estimate Twitter users' attitudes towards cannabis. The tool works by examining individual words and short sequences of words (n-grams) and comparing them with a probability model. For evaluation, we analyze two sets: tweets with outer links and tweets without outer links. Fig. \ref{fig:mmj_sentiment} presents the proportions of each sentiment. Overall, for the set of tweets with URLs, the percentage of positive tweets is higher than that of negative tweets; for the set of tweets without URLs, however, the percentage of positive tweets is much lower. Within the group of positive tweets, the proportion of positive tweets with external links is 62\%, compared with 32\% for tweets without external links. This implies that many users who attach external links to websites try to deliver information about the benefits of marijuana, such as for medical use; they might want other people to perceive the advantages of marijuana. Users who do not attach URLs to their tweets may be individual marijuana smokers. However, because of the many offensive terms (e.g., ``bitch'' and similar words) included in their tweets, they are identified as having negative sentiments towards marijuana.
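Once each tweet carries a sentiment label, the proportions plotted in Fig. \ref{fig:mmj_sentiment} are simple class shares; a sketch (only the two positive shares, 62\% and 32\%, come from our results, while the negative/neutral split below is purely illustrative):

```python
def sentiment_shares(labels):
    # percentage of each sentiment class in a list of labels
    total = len(labels)
    return {s: round(100.0 * labels.count(s) / total)
            for s in ("positive", "negative", "neutral")}

# Illustrative label lists reproducing the positive shares from the text
with_links = ["positive"] * 62 + ["negative"] * 18 + ["neutral"] * 20
without_links = ["positive"] * 32 + ["negative"] * 45 + ["neutral"] * 23
```

Running `sentiment_shares` on the two real label sets produces the bars of the figure.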
\begin{figure*}
\centering
\subfloat[]{\includegraphics[width=1.0\columnwidth]{mmj_tweet_on_each_weekday}}
\subfloat[]{\includegraphics[width=1.1\columnwidth]{marijuana_state_chart} } \\
\caption{(a) Daily distribution of marijuana-related tweets over the days of the week during November 2016 (averaged over the month, excluding the days of the presidential election and the marijuana legalization votes); (b) the state map of marijuana-related tweet frequencies (the number of tweets over the population of each state). }
\label{fig:time_space_chart}
\end{figure*}
\subsection{Temporal and Spatial Distribution of Tweets}
The volume of marijuana-related discussion is largely driven by political events. Fig. \ref{fig:mmj_tweet_on_each_day_in_month} shows the daily distribution of the number of tweets relating to marijuana in November 2016. Clearly, in the first week of the month, the number of tweets increased at an exponential rate and reached a peak on November 8. The tweets express users' emotions and opinions about marijuana policy reforms. For example, on November 8 four more states (California, Nevada, Maine, and Massachusetts, in addition to Colorado, Washington, Alaska, Oregon, and Washington, D.C.) voted to legalize marijuana consumption for both recreational and medical purposes \cite{four_state_mmj_url}. Another important reason is that, in the same period, the outcome of the US presidential election was decided, and the elected president had shown support for using cannabis for medical purposes and was considered likely to encourage the federal government to allow more states to vote on legalizing recreational marijuana \cite{hillary_trump_url}.
It is interesting to estimate the tweet frequency during regular weeks, i.e., without the special effects of the presidential election or the marijuana legalization events. We therefore consider the period from November 15 to 30. The research of \cite{kypri2014effects} shows that there are more tweets about alcohol during the weekend; this is also true for marijuana. Fig. \ref{fig:time_space_chart}(a) presents the daily distribution of marijuana-related tweets in regular weeks, i.e., excluding the abnormal weeks of the 2016 U.S. Election Day and the cannabis legalization votes. We observe a clear trend of a significant uptick in the number of tweets at the weekend compared to weekdays. We conjecture that at the weekend users tend to have more spare time to enjoy recreational activities. Also, the Twitter accounts of celebrities, the media, and businesses might exploit the value of weekend tweeting to post more tweets, since their audiences have more time to consume and share content.
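The weekday aggregation behind Fig. \ref{fig:time_space_chart}(a) is a bucketing of posting dates by day of the week; a minimal sketch over the November 15--30 window (the one-tweet-per-day input is illustrative, not our data):

```python
from collections import Counter
from datetime import date, timedelta

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def per_weekday(dates):
    # bucket posting dates by day of the week
    c = Counter(WEEKDAYS[d.weekday()] for d in dates)
    return {w: c.get(w, 0) for w in WEEKDAYS}

# Illustrative input: one tweet per day over November 15-30, 2016
window = [date(2016, 11, 15) + timedelta(days=i) for i in range(16)]
dist = per_weekday(window)
```

With the real timestamps, averaging each bucket over the number of occurrences of that weekday gives the per-day means plotted in the figure.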
By the end of November 2016, eight states (California, Nevada, Maine, Massachusetts, Colorado, Washington, Alaska, and Oregon) and Washington, D.C. had legalized marijuana for both recreational and medical purposes \cite{mmj_governing_url}. Our spatial graph in Fig. \ref{fig:time_space_chart}(b) shows that there are more tweets from those states, matching the marijuana state law map in \cite{mmj_governing_url}. The number of tweets, however, is also quite high in some states such as Georgia. Based on the federal and state marijuana laws, we would expect fewer marijuana-related tweets in this state, because the area allows marijuana only for limited medical purposes. Surprisingly, our data indicate a contrary observation. This can be interpreted as some level of marijuana use beyond medical purposes. Further study of such issues is left as future work.
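The state map normalizes raw tweet counts by state population; a sketch of that normalization (the tweet counts below are hypothetical, and the populations are rough 2016 figures used only for illustration):

```python
def tweets_per_capita(counts, populations, per=100_000):
    # normalize raw per-state tweet counts by state population
    return {s: per * counts[s] / populations[s] for s in counts}

# Hypothetical counts; approximate 2016 populations for illustration
counts = {"CO": 9000, "GA": 7000}
pops = {"CO": 5_540_000, "GA": 10_310_000}
rates = tweets_per_capita(counts, pops)
```

Plotting `rates` on a choropleth yields a map of the kind shown in Fig. \ref{fig:time_space_chart}(b).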
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{distribution_user_device}
\caption{The proportions of device types among users who post marijuana-related tweets on Twitter.}
\label{fig:distribution_user_device}
\vspace{-0.1in}
\end{figure}
\subsection{ Types of Devices Used for Marijuana Tweeting}
\begin{figure*}
\includegraphics[width=7in,height=3.8in]{mmj_hashtag_chart}
\caption{Top 20 hashtags in marijuana-related tweets: \#marijuana, \#cannabis, \#dope, and \#weed are most common hashtags.}
\label{fig:mmj_hashtag_chart}
\end{figure*}
It is known that 82\% of active Twitter users are on mobile phones \cite{twitter_url}. This raises the question of which device types marijuana-tweeting users employ. Of the more than 91,000 users we process, about 67\% use mobile phones (51\% iPhone and 16\% Android) via the Twitter mobile application to post their tweets (Fig. \ref{fig:distribution_user_device}). About $8,695$ users use Internet browsers such as Chrome, Firefox, and Safari, which fall into the Twitter Web Client group. Importantly, many users (the remaining $23.5\%$) employ third-party services to publish their marijuana-related tweets. Two such popular services are IFTTT and TweetDeck. IFTTT, an abbreviation of ``If This Then That'', is a web-based service that allows users to tweet automatically based on schedules or particular events. Similarly, TweetDeck is a Twitter tool for real-time tracking, organizing, and engagement that helps users reach their audiences with automatic postings. Thus, the share of users posting from mobile devices is unusually low compared to the average of 82\%. This can be explained by the fact that, among marijuana-related tweeters, many users employ automated posting services to promote their products or implement marketing strategies.
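The device breakdown comes from mapping each tweet's raw source string to a device group; the group labels below follow Fig. \ref{fig:distribution_user_device}, while the substring matching rules are our own simplification:

```python
def classify_source(source):
    # map a tweet's raw "source" string to a device group
    s = source.lower()
    if "iphone" in s:
        return "Twitter for iPhone"
    if "android" in s:
        return "Twitter for Android"
    if "web client" in s:
        return "Twitter Web Client"
    if "ifttt" in s:
        return "IFTTT"
    if "tweetdeck" in s:
        return "TweetDeck"
    return "Other"
```

Counting the groups over all users' tweets gives the proportions shown in the figure.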
\subsection{Marijuana-related Hashtags}
Topics on Twitter are categorized using hashtags, labeling words or phrases preceded by the pound sign (\#). By using hashtags, Twitter users can signal their tweet's content, and thus specific subjects of discussion among users can be found more quickly. Fig. \ref{fig:mmj_hashtag_chart} illustrates the most common hashtags among marijuana-related discussions. Unsurprisingly, \#marijuana, \#cannabis, \#dope, and \#weed are the most ubiquitous terms. Besides, other marijuana-related hashtags are also frequently used, such as \#pot, \#kush, \#mmj, \#hemp, \#cbd, and \#thc. The terms \#cbd and \#thc refer to cannabidiol and tetrahydrocannabinol, respectively, two main ingredients of the marijuana plant, and \#mmj stands for ``medical marijuana''.
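Hashtag extraction is a small regular-expression pass over the tweet text; a minimal sketch (the sample tweets are illustrative, using hashtags of the kind discussed here):

```python
import re
from collections import Counter

def hashtags(text):
    # pull #-prefixed tags out of a tweet, lowercased for counting
    return [h.lower() for h in re.findall(r"#(\w+)", text)]

tweets = ["Vote #YESon64! #marijuana #legalizeit",
          "#Marijuana on the ballot tonight #electionnight"]
tag_counts = Counter(t for tw in tweets for t in hashtags(tw))
```

Ranking `tag_counts` over the full corpus produces the top-20 chart of Fig. \ref{fig:mmj_hashtag_chart}.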
We also notice that some political hashtags appear frequently, such as \#legalizeit, \#electionnight, \#electionday, \#vote, and \#YESon. \#YESon means ``Yes on'', a typical phrase used by Twitter users during election campaigns. There might be a variety of reasons for this. First, our data set was collected during November 2016; during this time, nine states were voting on marijuana legalization: Florida, Massachusetts, North Dakota, Maine, Arkansas, Montana, Arizona, Nevada, and California. On November 8, California, Nevada, Maine, and Massachusetts all voted in favor of the legalized use, sale, and consumption of recreational marijuana. Second, the appearance of the political hashtags \#electionnight, \#electionday, and \#vote reflects the US presidential election happening at the same time. It is clear that the presidential candidates' attitudes and the new government's policy toward the state marijuana legalization trend will dramatically affect every marijuana business and every individual who consumes marijuana, provoking a lot of discussion on this topic.
\section{Conclusion}
We address the challenges of unstructured tweet content by implementing efficient and accurate text-mining algorithms. As a result, many interesting and valuable features of the data are extracted. First, by analyzing the unigrams and bigrams of tweet content and the distribution of marijuana-related tweets within a week and a month, we show that tweet content tends to reflect users' opinions about current related topics such as medical marijuana, marijuana legalization, and the US presidential election. Second, the data show the geographical distribution of marijuana use across the 50 U.S. states, with some unexpected observations. In addition, our results reveal some level of association between users' attitudes and tweets with and without external links. Finally, our study spots the differences between the ways and purposes of individual users and organizational users. These findings suggest valuable patterns that could serve as a marijuana surveillance approach for federal authorities and public health agencies in developing policy and regulations.
\bibliographystyle{IEEEtran}
\section{Introduction}
In a recent paper, Singh et al. \cite{sam} have reported results for energy levels, radiative rates, and lifetimes among 209 levels of four Ne-like ions, namely Hf~LXIII, Ta~LXIV, W~LXV, and Re~LXVI. These levels belong to the 2s$^2$2p$^6$, 2s$^2$2p$^5$$n\ell$ ($n \le$ 7, but for $n$ = 6 and 7, $\ell \le$ 2), and 2s2p$^6$$n\ell$ ($n \le$ 7, but for $n$ = 6 and 7, $\ell \le$ 2) configurations. For their calculations, they have adopted the general-purpose relativistic atomic structure package (GRASP0 version of P.H.~Norrington and I.P.~Grant) and the flexible atomic code (FAC), which are available on the websites {\tt http://amdpp.phys.strath.ac.uk/UK\_APAP/codes.html} and {\tt https://www-amdis.iaea.org/FAC/}, respectively. By performing and comparing the two sets of calculations, they have assessed their energy levels to be accurate to $\sim$0.5~Ryd. However, some of the levels (particularly the higher ones) differ between the two calculations by up to $\sim$2~Ryd. In our long experience with a wide range of ions, such large differences in energy levels between these two codes (i.e. GRASP and FAC) have normally not been found, and we have therefore performed our own calculations with the same configurations as adopted by them. Unfortunately, we note that some of their results with FAC cannot be reproduced, hence the large discrepancies. In addition, the 209 levels listed above are not the lowest ones, and hence there is scope for improvement, particularly for the lifetimes, because some of the neglected levels from other configurations, such as 2p$^5$6f/g/h and 2p$^5$7f/g/h/i, intermix with these and hence contribute to the calculations.
\section {Energy levels}
Singh et al. \cite{sam} have performed two sets of calculations, with GRASP and FAC, and have included CI (configuration interaction) among 64 configurations, namely 2s$^2$2p$^6$, 2s$^2$2p$^5$$n\ell$ ($n \le$ 7, but for $n$ = 6 and 7, $\ell \le$ 2), 2s2p$^6$$n\ell$ ($n \le$ 7, but for $n$ = 6 and 7, $\ell \le$ 2), 2s$^2$2p$^4$3$\ell$3$\ell'$, 2s$^2$2p$^4$3$\ell$4$\ell'$, and 2s$^2$2p$^4$3$\ell$5$\ell'$. These configurations generate 3948 levels in total, but results have been reported only for the 209 levels of the lowest 31 configurations listed above. However, some of the energies obtained in the two calculations differ by up to $\sim$2~Ryd; see, for example, the (2p$^5$) 7p and 7d levels of Ta~LXIV, W~LXV, and Re~LXVI in their Tables~2--4, or the present Table~1. In the absence of measurements and other theoretical results, it is difficult to know which set of data is more accurate. Therefore, we have performed calculations with both codes to verify their reported results as well as to further assess the accuracy of the energy levels.
In our calculations with GRASP, we have included the same 3948 levels from 64 configurations as Singh et al. \cite{sam}. With FAC, however, we have performed two calculations: one (FAC1) with the same configurations as with GRASP, and another, much larger one (FAC2) with 6619 levels generated by the 2s$^i$2p$^j$ ($i$+$j$ = 8), (2s$^i$2p$^j$, $i$+$j$ = 7) 3$\ell$, 4$\ell$, 5$\ell$, 6$\ell$, 7$\ell$, and (2s$^i$2p$^j$, $i$+$j$ = 6) 3$\ell$4$\ell$, 3$\ell$5$\ell$, and 3$\ell$6$\ell$ configurations. This is to assess the effect of higher-lying levels on the accuracy of the lower level energies. Results obtained from these three calculations, along with those reported by Singh et al., are listed in Table~1 for a ready comparison, but only for the highest four levels each of (2p$^5$) 7p and 7d, for which the discrepancies are largest.
There are no appreciable differences between our GRASP calculations and those of Singh et al. \cite{sam} for the levels of the Ne-like ions under discussion, apart from occasional minor differences for a few levels; see, for example, the 7p~$^3$P$_0$ level of W~LXV in Table~1c. The same is unfortunately not true for the calculations with FAC, because the differences for the levels shown in Table~1 are up to $\sim$1.8~Ryd. The reasons for these differences are hard to determine, particularly since the corresponding discrepancies for the lower levels are not as striking.
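The size of these FAC discrepancies can be checked directly from the tabulated energies; a minimal sketch using the Re~LXVI row of Table~1 (values transcribed from the FAC1a and FAC1b entries, in Ryd):

```python
# Re LXVI energies (Ryd) for the eight levels of Table 1d
fac1a = [1196.608, 1196.768, 1196.807, 1196.934,
         1197.972, 1197.985, 1199.002, 1199.008]
fac1b = [1196.465, 1196.629, 1198.702, 1198.715,
         1199.540, 1199.644, 1200.118, 1200.134]

diffs = [abs(a - b) for a, b in zip(fac1a, fac1b)]
max_diff = max(diffs)  # largest FAC discrepancy among these levels, ~1.9 Ryd
```

The largest discrepancy occurs for the 7p~$^3$S$_1$ level, consistent with the level-ordering problems discussed above.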
Finally, a comparison between our FAC1 and FAC2 energies indicates that there is no appreciable impact of the larger CI on the 209 levels of the Ne-like ions, because the two sets of calculations agree to within $\sim$0.05~Ryd. Therefore, for these levels the CI included in the GRASP1 and FAC1 calculations is sufficient to produce accurate energy levels. Furthermore, as expected, both calculations produce comparable results for a majority of levels, and the differences (if any) are within 0.25~Ryd. This is in contrast to what Singh et al. \cite{sam} have shown, mainly because some of their FAC results are not only incorrect but also cannot be reproduced. It is worth mentioning here that we have noted similar problems in the past with their calculations with both the GRASP and FAC codes; see, for example, the energy levels of five Br-like ions \cite{brlike} with 38 $\le$ Z $\le$ 42 and of F-like W~LXVI \cite{w66a},\cite{w66b}.
\begin{table}
\caption{Comparison of some energy levels of Ne-like ions.}
\begin{tabular}{lllllllll} \hline
Level (2p$^5$) & 7p~$^3$D$_1$ & 7p~$^3$P$_0$ & 7p~$^3$S$_1$ & 7p~$^1$D$_2$ & 7d~$^3$F$^o_2$ & 7d~$^3$D$^o_1$ & 7d~$^3$P$^o_2$ & 7d~$^1$F$^o_3$ \\
\hline
{\bf a.} Hf~LXIII\\
GRASP1a & 1083.665 & 1083.837 & 1085.502 & 1085.506 & 1086.274 & 1086.377 & 1086.750 & 1086.762 \\
GRASP1b & 1083.651 & 1083.821 & 1085.489 & 1085.493 & 1086.261 & 1086.364 & 1086.737 & 1086.749 \\
FAC1a & 1083.972 & 1084.129 & 1084.972 & 1085.168 & 1085.835 & 1085.841 & 1086.250 & 1086.259 \\
FAC1b & 1083.707 & 1083.861 & 1085.554 & 1085.567 & 1086.350 & 1086.450 & 1086.831 & 1086.846 \\
FAC2 & 1083.658 & 1083.815 & 1085.526 & 1085.520 & 1086.303 & 1086.411 & 1086.781 & 1086.798 \\
\hline
{\bf b.} Ta~LXIV \\
GRASP1a & 1120.443 & 1120.620 & 1122.404 & 1122.407 & 1123.189 & 1123.293 & 1123.695 & 1123.707 \\
GRASP1b & 1120.429 & 1120.603 & 1122.391 & 1122.393 & 1123.175 & 1123.279 & 1123.682 & 1123.693 \\
FAC1a & 1120.758 & 1120.918 & 1121.396 & 1121.590 & 1122.694 & 1122.707 & 1122.745 & 1122.745 \\
FAC1b & 1120.486 & 1120.644 & 1122.456 & 1122.469 & 1123.267 & 1123.368 & 1123.778 & 1123.793 \\
FAC2 & 1120.436 & 1120.597 & 1122.422 & 1122.422 & 1123.219 & 1123.324 & 1123.728 & 1123.745 \\
\hline
{\bf c.} W~LXV \\
GRASP1a & 1158.024 & 1158.205 & 1160.113 & 1160.116 & 1160.912 & 1161.017 & 1161.450 & 1161.462 \\
GRASP1b & 1158.009 & 1158.187 & 1160.099 & 1160.102 & 1160.897 & 1161.003 & 1161.436 & 1161.448 \\
FAC1a & 1158.343 & 1158.509 & 1158.609 & 1158.803 & 1159.928 & 1159.941 & 1160.461 & 1160.464 \\
FAC1b & 1158.067 & 1158.228 & 1160.167 & 1160.180 & 1160.991 & 1161.094 & 1161.535 & 1161.551 \\
FAC2 & 1158.013 & 1158.179 & 1160.130 & 1160.133 & 1160.943 & 1161.049 & 1161.485 & 1161.503 \\
\hline
{\bf d.} Re~LXVI\\
GRASP1a & 1196.421 & 1196.606 & 1198.647 & 1198.650 & 1199.458 & 1199.565 & 1200.031 & 1200.042 \\
GRASP1b & 1196.406 & 1196.588 & 1198.633 & 1198.635 & 1199.443 & 1199.551 & 1200.016 & 1200.028 \\
FAC1a & 1196.608 & 1196.768 & 1196.807 & 1196.934 & 1197.972 & 1197.985 & 1199.002 & 1199.008 \\
FAC1b & 1196.465 & 1196.629 & 1198.702 & 1198.715 & 1199.540 & 1199.644 & 1200.118 & 1200.134 \\
FAC2 & 1196.430 & 1196.596 & 1198.663 & 1198.668 & 1199.492 & 1199.598 & 1200.067 & 1200.086 \\
\hline
\end{tabular}
\begin{flushleft}
{\small
GRASP1a: calculations of Singh et al. \cite{sam} with GRASP for 3948 levels \\
GRASP1b: present calculations with GRASP for 3948 levels \\
FAC1a: calculations of Singh et al. with FAC for 3948 levels \\
FAC1b: present calculations with FAC for 3948 levels \\
FAC2: present calculations with FAC for 6619 levels \\
}
\end{flushleft}
\end{table}
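As an illustrative check of the $\sim$0.05~Ryd agreement between FAC1 and FAC2 noted above, the largest difference can be computed directly from the Hf~LXIII entries of the table (a sketch in Python; the numbers are copied from the FAC1b and FAC2 rows of part~{\bf a}):

```python
# Energies (Ryd) of the eight 2p^5 7p/7d levels of Hf LXIII, taken
# from the FAC1b (3948 levels) and FAC2 (6619 levels) rows above.
fac1b = [1083.707, 1083.861, 1085.554, 1085.567,
         1086.350, 1086.450, 1086.831, 1086.846]
fac2  = [1083.658, 1083.815, 1085.526, 1085.520,
         1086.303, 1086.411, 1086.781, 1086.798]

# The extra CI of FAC2 shifts no level by more than ~0.05 Ryd.
max_diff = max(abs(a - b) for a, b in zip(fac1b, fac2))
print(f"max |FAC1b - FAC2| = {max_diff:.3f} Ryd")  # -> 0.050 Ryd
```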
As discussed above, there is not much scope for improvement in the calculations of energy levels for the four Ne-like ions considered here. However, the corresponding calculations for lifetimes ($\tau$) can certainly be improved, because levels from some of the neglected configurations, such as 2p$^5$6f/g/h and 2p$^5$7f/g/h/i, intermix with these and hence contribute to the determination of $\tau$. Similarly, the limited results for radiative rates (A-values) reported by Singh et al. \cite{sam}, mainly from the ground to higher excited levels, are insufficient for the accurate modelling of plasmas, because a complete set of data for {\em all} transitions is often required.
\section{Conclusions}
Recently, Singh et al. \cite{sam} reported results for energy levels, A-values, and $\tau$ among 209 levels of four Ne-like ions with 72 $\le$ Z $\le$ 75. For the energy levels in particular, they performed two sets of calculations with the GRASP and FAC codes. This was to assess the accuracy of energies, because prior similar data, experimental or theoretical, are almost non-existent, except for W~LXV. For many levels their two sets of energies differ by $\sim$0.5~Ryd, but for a few (particularly the higher ones) the discrepancies are up to $\sim$2~Ryd. This is in spite of adopting the same level of CI and including the contribution of relativistic effects in both calculations. Since such large differences between any two independent calculations have neither been noted earlier nor are expected, we have performed fresh calculations with the same two codes and with the same level of CI. On the basis of detailed comparisons made among our various calculations, as well as with the work of Singh et al., our conclusion is that there is no (appreciable) discrepancy for any level between the energies obtained with GRASP and FAC. Conversely, some of the level energies reported by Singh et al. with the FAC code are incorrect and cannot be reproduced.
With the inclusion of even larger CI than considered by Singh et al. \cite{sam}, there is no significant change, either in magnitude or orderings, for the 209 levels of Ne-like ions, which belong to the 2s$^2$2p$^6$, 2s$^2$2p$^5$$n\ell$ ($n \le$ 7, but for $n$ = 6 and 7, $\ell \le$ 2), and 2s2p$^6$$n\ell$ ($n \le$ 7, but for $n$ = 6 and 7, $\ell \le$ 2) configurations. However, some levels of higher neglected configurations, such as (2p$^5$)~6f/g/h and 7f/g/h/i, intermix with these and hence their A-values contribute to the determination of $\tau$. Therefore, there is scope for improvement over the calculations of $\tau$ for some (about a third of the) levels, i.e. those higher than 137 -- see Tables~1--4 of \cite{sam}. Similarly, the limited A-values reported by Singh et al. are insufficient for reliable plasma modelling, for which a complete set of data is preferably required. A complete set of energies and A-values for three Ne-like ions, namely Hf~LXIII, Ta~LXIV and Re~LXVI, is reported in our recent paper \cite{nelike}, and for W~LXV in an earlier one \cite{w65}.
time-travel.} \label{time-section}
The following series of figures represents G\"odel's famous rotating
universe. One of the many interesting features of G\"odel's universe
is that it contains closed time-like curves (CTC's for short), i.e.\
it permits ``time-travel''. In the following figures we use
geodesics and light-cones in the spirit of e.g.\ \cite[sections
3.1-3.3]{AMN07} for visualizing G\"odel's universe together with
some of its main features. For these notions cf.\ p.\pageref{gr-p}
herein. In Figures~\ref{godel-fig} and \ref{godelnagy-fig},
\underbar{null-geodesic} \index{null-geodesic} means the same as
photon-like geodesic and ``null-cone'' \index{null-cone} the same
as light-cone in the present paper.
\begin{figure}[hbtp]
\setlength{\unitlength}{0.67 truemm} \small
\begin{center}
\begin{picture}(250,200)(0,0)
\put(130,160){\makebox(0,0)[l]{$p'$}}
\put(60,176){\makebox(0,0)[l]{\shortstack[l]{Matter world-line\\
$(r,\varphi)$ constant}}}
\put(132,183){\makebox(0,0)[b]{\shortstack[l]{$r=0$\\ (coordinate
axis)}}}
\put(148,177){\makebox(0,0)[lt]{\shortstack[l]{$p'$'s future null cone\\
(refocuses at $p''$)}}}
\put(152,147){\makebox(0,0)[lb]{$p$'s null cone refocuses at $p'$}}
\put(152,132){\makebox(0,0)[lb]{Null geodesics}}
\put(195,129){\makebox(0,0)[lb]{\shortstack[l]{Caustic on $p$'s\\
future null cone}}}
\put(183,97){\makebox(0,0)[l]{\shortstack[l]{Null cone\\ tangent
to\\ circle}}} \put(201,68){\makebox(0,0)[l]{$L$}}
\put(174,75){\makebox(0,0)[t]{\shortstack[l]{Null cone\\includes
circle}}} \put(180,47){\makebox(0,0)[l]{$t=0$}}
\put(140,52){\makebox(0,0)[lt]{\shortstack[lt]{$p$'s future\\ null
cone}}} \put(129,18){\makebox(0,0)[l]{$t$}}
\put(132,12){\makebox(0,0)[l]{$\varphi$}}
\put(142,4){\makebox(0,0)[l]{$r$}}
\put(105,45){\makebox(0,0)[rt]{\shortstack[l]{$r<\log(1+\sqrt{2})$\\
(closed spacelike\\ curve)}}}
\put(34,85){\makebox(0,0)[t]{\shortstack[l]{$\quad r>\log(1+\sqrt{2})$\\
(closed timelike\\ curve)}}}
\put(58,78){\makebox(0,0)[lt]{\shortstack[l]{$r=\log(1+\sqrt{2})$\\
(closed null curve)}}} \put(129,40){\makebox(0,0)[l]{$p$}}
\put(125,98){\makebox(0,0)[rt]{O}}
\put(67,135){\makebox(0,0)[b]{\shortstack[l]{Null cone\\
includes\\circle}}}
\put(101,138){\makebox(0,0)[b]{\shortstack[l]{Null cone\\ tangent to\\
circle}}} \epsfysize = 200 \unitlength \epsfbox{ujgodel.eps}
\end{picture}
\end{center}
\caption[haho]{\label{ujgodel-fig} G\"odel's universe
\label{godel-fig} in co-rotating cylindric-polar coordinates
$\langle t,r,\varphi\rangle$. Irrelevant coordinate $z$ suppressed.
Light-cones (null-cones) and photon-geodesics indicated. Light-cone
{\em opens up} and tips over as $r$ increases (see line $L$)
resulting in closed time-like curves (CTC's). Drag effect (of
rotation) illustrated. Photons emitted at $p$ spiral out, reach CTC
and reconverge at $p'$. This is a slightly corrected version of
Figure~31 in Hawking-Ellis~\cite[p.169]{Hawel} (cf.\
p.\pageref{corr-p}). (null cone = light-cone, null curve = photon
curve)}
\end{figure}
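The critical radius $r=\log(1+\sqrt 2)$ appearing in Figure~\ref{godel-fig} is exactly where $\sinh r=1$, so that the quantity $\sinh^4 r-\sinh^2 r$ (which, in the standard form of G\"odel's metric, governs whether a coordinate circle of constant $t,r,z$ is space-like or time-like) changes sign there. A short numerical sketch (our own illustration, not part of the original figure):

```python
import math

# Critical radius from the figure: sinh(r_crit) = 1.
r_crit = math.log(1 + math.sqrt(2))
assert abs(math.sinh(r_crit) - 1.0) < 1e-12

def q(r):
    """sinh^4 r - sinh^2 r: negative on closed space-like circles,
    zero on the closed null curve, positive on closed time-like
    curves (CTC's)."""
    s2 = math.sinh(r) ** 2
    return s2 * (s2 - 1.0)

print(q(0.5 * r_crit) < 0)  # True: closed space-like curve
print(q(2.0 * r_crit) > 0)  # True: closed time-like curve
```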
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{1.2 truemm} \small
\begin{center}
\begin{picture}(130,150)(0,0)
\put(21,139){\makebox(0,0)[l]{\shortstack[l]{matter
world-line\\$(r,\varphi)$ constant}}}
\put(36,112){\makebox(0,0)[b]{\shortstack[l]{light-cone\\ tangent to
\\circle}}}
\put(47,128){\makebox(0,0)[r]{\shortstack[r]{$r=0$\\ (coordinate
axis)}}} \put(87,134){\makebox(0,0)[l]{$p$'s light-cone refocuses at
$p'$}} \put(100,118){\makebox(0,0)[b]{photon geodesics}}
\put(119,103){\makebox(0,0)[b]{\shortstack[l]{$r$ = critical\\
(closed photon
\\curve)}}}
\put(115,51){\makebox(0,0)[t]{\shortstack[l]{light-cone\\ includes\\
circle}}}
\put(92,50){\makebox(0,0)[t]{\shortstack[l]{light-cone\\tangent to\\
circle}}} \put(64,16){\makebox(0,0)[l]{$t$}}
\put(69,7){\makebox(0,0)[l]{$\varphi$}}
\put(78,2){\makebox(0,0)[lb]{$r$}}
\put(13,53){\makebox(0,0)[t]{\shortstack[l]{closed time-like\\ curve
(a CTC)}}} \put(45,35){\makebox(0,0)[r]{$p$'s future light-cone}}
\put(67,23){\makebox(0,0)[l]{$p$}}
\put(67,143){\makebox(0,0)[l]{$p'$}} \epsfysize = 150 \unitlength
\epsfbox{nagygodel.eps}
\end{picture}
\end{center}
\caption[haho]{\label{godelnagy-fig} A closer look at G\"odel's
universe.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.67 truemm} \small
\begin{center}
\begin{picture}(250,310)(0,0)
\epsfysize = 310 \unitlength \epsfbox{5god.eps}
\end{picture}
\end{center}
\caption[godel5]{\label{god5-fig}G\"odel's universe as on previous
figure but with an ``$r$=constant'' (and $z$=constant) hypersurface
indicated. This hypersurface is parallel with the $t$-axis.
Throughout this work, $z$=constant, i.e.\ we suppress the
irrelevant spatial coordinate $z$. In
Figures~\ref{god5-fig}-\ref{gode1-fig}, $\Phi$ is the same as
$\varphi$ in the rest of the paper.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.67 truemm} \small
\begin{center}
\begin{picture}(250,310)(0,0)
\epsfysize = 310 \unitlength \epsfbox{4god.eps}
\end{picture}
\end{center}
\caption[godel4]{\label{god4-fig}G\"odel's universe with a
time-traveler's (time-like) life-line indicated. The time-traveler's
acceleration is bounded (but cannot be zero). The time-like curve
$C$ stays always inside the light-cones and spirals back to the past
as $m$ observes it. This is possible because the light-cones far
away from the $t$-axis are so much tilted that they reach below the
horizontal plane. See the explanation on p.\pageref{time-expl}.}
\end{figure}
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.67 truemm} \small
\begin{center}
\begin{picture}(250,310)(0,0)
\epsfysize = 310\unitlength \epsfbox{1gode.eps}
\end{picture}
\end{center}
\caption[haho]{\label{timetrav} \label{gode1-fig} Time-traveler
starting at time $s$ and arriving at time $h$, where $h$ is earlier
than $s$.}
\end{figure}
\vfill\eject\newpage
\bigskip
\noindent \emps{Explanation for
Figures~\ref{god4-fig},\ref{timetrav}:} \label{time-expl}
Figures~\ref{god4-fig},\ref{timetrav} illustrate the time-travel
aspect in G\"odel's universe. Assume observer $m$ lives on the time
axis $\bar t$. Assume $p$ is a point far enough from $\bar t$. I.e.\
the radius $r$ of $p$ is large enough. Then at $p$ the light-cones
are so much tilted that a time-like curve $C$ can spiral back into
the past as observed by $m$. $C$ involves only bounded acceleration.
An observer, say $k$, can live on $C$. Then in $m$'s view, $k$ moves
towards the past. Moreover, $k$ can go back to the past as far as he
wishes.
It is an entertaining exercise to prolong curve $C$ such that it
starts at $s\in\bar t$ and ends at $h\in\bar t$ such that $h\prec
s$, i.e.\ $h$ is in the past of $s$, see Figure~\ref{timetrav}. Then
our observer $k$ can start his journey at $s$, spiral outwards to
radius $r$, then spiral back along $C$ and then spiral inwards to
$h$. Then $k$ can wait on the time axis $\bar t$ to meet himself at
point $s$. We leave the details to the reader, but see
Figure~\ref{timetrav}.
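For the reader who wants to check the tilting of the light-cones quantitatively: in the co-rotating cylindric coordinates of Figure~\ref{godel-fig}, G\"odel's line element can be written (following Hawking-Ellis~\cite{Hawel}, with signature $(-,+,+,+)$ and a constant $a>0$; the irrelevant coordinate $z$ retained for completeness) as

```latex
\[
  ds^2 \;=\; 4a^2\Bigl(-dt^2 + dr^2 + dz^2
      - (\sinh^4 r - \sinh^2 r)\,d\varphi^2
      + 2\sqrt{2}\,\sinh^2 r\,d\varphi\,dt\Bigr).
\]
```

The coefficient of $d\varphi^2$ is positive for $\sinh r<1$ (the circles of constant $t,r,z$ are space-like), vanishes at $\sinh r=1$, i.e.\ at $r=\log(1+\sqrt 2)$ (the closed null curve of Figure~\ref{godel-fig}), and is negative for larger $r$, which is exactly the tipping over of the light-cones that produces the CTC's used by the time-traveler above.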
Cf.\ also Figure 28 on p.113 in Horwich~\cite{Horw}, which we
include below.
\begin{figure}[!hbtp]
\setlength{\unitlength}{1.2 truemm} %
\begin{center}
\begin{picture}(134,115)(0,0)
\epsfysize = 115\unitlength
\epsfbox{Horwich.eps}
\end{picture}
\end{center}
\caption{\label{Horwich-fig} Figure from Horwich~\cite[Figure 28
(p.113)]{Horw}.}
\end{figure}
\vfill\eject\newpage
\section{Preparation for constructing G\"odel style rotating universes. The Naive Spiral World.}
\label{spiral-section}
\vspace*{36pt}
In this part we populate Newtonian space with massive observers
$m_i$ for $i\in I$ which carry equal mass and are evenly distributed
(where we understand ``even'' in the common sense). We will call
these $m_i$'s {\em distinguished observers} or
{\em mass-carriers} or {\em galaxies}%
\footnote{We use the word ``galaxy'' only in a metaphorical sense
and it means nothing more than our distinguished observers carrying
mass. Cf.\ Rindler~\cite[p.203]{Rin} for more on our usage of
galaxies.} Then we rotate this inhabited space around the $z$
axis. The galaxy in the origin is called $m_0$. We will make sure
that nothing happens in the direction $z$, therefore we can suppress
direction $z$ in our pictures and discussion. So space-time becomes
three-dimensional with axes $t,x,y$. We concentrate on the
$xy$-plane inhabited by the galaxies (or distinguished observers)
$m_i$. We rotate this plane of galaxies around the origin, i.e.\
around $m_0$. The rotation is rigid, i.e.\ the distances between the
galaxies do not change. The {\em angular velocity} of this rotation
is denoted by $\omega$. We call the plane inhabited by the $m_i$'s
the {\em universe}. Hence $\omega$ is called the angular velocity of
the universe. The rotation takes place in a Newtonian inertial frame
of reference.%
\footnote{Here we use the expression ``inertial frame of reference''
in the most classical (Newtonian) way, namely as it was given by L.\
Lange in 1885: ``A reference frame in which a mass point thrown from
the same point in three different (non co-planar) directions follows
rectilinear paths each time it is thrown, is called an inertial
frame.''} The angular velocity $\omega$ is chosen such that the
resulting centrifugal force exactly balances the gravitational
attraction between the $m_i$'s. This is possible, cf.\ G\"odel's
paper \cite[second half of p.270]{Go96} for a proof. (Cf.\
\cite[pp.261-289]{Go96} for more detail.)
So our first pictures will show space-time diagrams in which the
life-lines%
\footnote{What we call life-line is called world-line in most of
the literature of general relativity.} of the galaxies $m_i$ appear
as spirals around the $t$-axis (which happens to be the life-line of
$m_0$). An extra feature is that, similarly to G\"odel's papers, we
assume the existence of certain kinds of {\em cosmic compasses}. Our
cosmic compasses need not agree with what are called gyroscopes in
physics. For the time being cosmic compasses constitute only certain
conventions. Equivalently, they can be regarded as distinguished
{\em local coordinate frames} or ``local coordinate systems'' for
our distinguished observers or mass-carriers (the $m_i$'s). These
local frames need not be inertial. For the time being we do not
associate any tangible or
observational physical meaning to our compasses and local frames.%
\footnote{What they represent is mainly a logical ``stage'' in our
construction of rotating universes. Though, in principle we could
associate (a fairly complicated) observational meaning to them. We
do not go into this here.} In Section~\ref{gyroscope-section} we
will turn our attention to gyroscopes and local inertial frames,
too.
We assume that all the $m_i$'s agree with each other in that they
have two cosmic compasses for carrying the original spatial
directions $x$ and $y$ of our original Newtonian inertial reference
frame with which we began our construction. This makes them
equivalent (with each other) in the sense that any of them, say $m$,
may think that he is at the center, he is not rotating and it is the
rest of the observers who are rotating around $m$.
\bigskip
This \label{gr-p} paper is based on general relativity but we do not
assume that the reader is familiar with the details of general
relativity. What we do assume is familiarity with (i) the basics of
special relativity and (ii) awareness of some of the basic
principles of general relativity explained in items (1)-(2) below.
All this can be found in \cite{AMN07}. All that we need to know
about special relativity in this paper can be found in
\cite[sections 2.1-2.4]{AMN07}. What we need to know about general
relativity theory, summarized in items (1)-(2) below, can be found
in \cite[sections 3.1-3.3]{AMN07}.
(1) General relativity assumes that {\em special relativity holds
locally}. This means, roughly, that in a general relativistic
space-time, every point (event) is ``surrounded'' by a small, local
coordinate frame (LF for short) and in each LF special relativity
holds in some sense (cf.\ e.g.\ Rindler~\cite{Rin} for a simple
explanation of this). The LF's are local in the topological sense
that space-time $M$ comes together with a topology and then LF's are
local in the sense that the ``closer'' we go to the point $p\in M$
the more accurately the local special relativity frame LF describes
the behavior of light-signals and moving bodies. (For a precise
formulation see \cite[sec.3.3, e.g., Def.3.3]{AMN07}.)
In the case of G\"odel's universe, $M$ together with this topology
is just the original (Newtonian) space-time $\Reals^4$. Thus, in the
case of G\"odel's universe $\langle M,\dots\rangle$ a single
``global" coordinate system can cover the whole of $M$. This means
that there exist coordinatizations $Co:\Reals^4\longrightarrow M$
with $Co$ a bijection which satisfy some natural requirements which
we do not list here. E.g.\ $Co$ involves one ``time coordinate'' and
three ``space coordinates'', hence at first glance it looks similar
to the familiar coordinatization of Newtonian space-time or special
relativity. Further, one of the space coordinates turns out to be
irrelevant, hence $Co:\Reals^4\longrightarrow M$ will admit a
3-dimensional representation (via suppressing the irrelevant
coordinate). So in our pictures there will be {\em one big
coordinate system} $Co$ covering the whole picture and there will be
{\em many small coordinate systems} representing the LF's or other
local coordinate systems. The big coordinate system represents the
whole of our manifold $M$ to be described.
When we describe a space-time $M$, the key ingredient is specifying
how the little LF's are glued together to form the whole of $M$. We
will do this by specifying a (fairly arbitrary) coordinatization $C$
of $M$ and then to each point $p\in M$ we describe how the LF at $p$
is fitted into $M$ at point $p$.%
\footnote{The effect is somewhat similar to an Escher painting,
e.g.\ he glues little birds together and there emerges an over-all
pattern which has nothing to do with birds.} When specifying which
LF is glued to what point, we use the coordinate system $C$ as a
tool for communication. Most of the time we will use geometric
constructions for presenting the above data. In such a picture, the
LF at $p$ is represented by drawing the {\em light-cone} at $p$
together with the {\em unit vectors}
$\langle t_p, x_p,y_p\rangle$ of the LF at $p$. Sometimes we
indicate only the future light-cones, sometimes we indicate both
the future and the past light-cones. Most of the time we indicate
the local simultaneity of the LF, too.%
\footnote{To specify the LF, it is enough to specify the unit vectors
$\langle t_p, x_p, y_p\rangle$. These determine the light-cones and the local simultaneity.
However, the latter are very helpful in visualizing the space-time, that's why we
indicate them in the pictures.}
These pictures, beginning with Figure~\ref{spi-fig}, represent {\em
precise geometrical constructions}, hence they intend to specify the
space-time in question completely (as opposed to being a mere
``sketch'' conveying only intuitive ideas). In
Sections~\ref{technical-section},\ref{literature-section} which
contain the technical details we present the constructions behind
the pictures together with the metric tensor field of the space-time
in question. (To explain the latter, we note that a model of general
relativity is usually given in the form $\langle M,{\sf g}\rangle$
where $M$ is a manifold and ${\sf g}$ is a tensor field defined on
$M$. We will not need these tensor-fields until
Section~\ref{literature-section}.) We note that ${\sf g}$ can be
reconstructed from the way the LF's are glued together in our
pictures, hence if the reader understands the geometry of these
pictures, he will automatically understand the space-time (or
general relativity model) they represent.
\smallskip
(2) Occasionally we will mention so-called {\em geodesics}.
Geodesics are the general relativistic counterparts of straight
lines of special relativity, in particular, the life-lines of
inertial bodies or freely falling bodies are called geodesics. The
same applies to life-lines of photons. {\em Curves} are understood
in the usual sense, e.g.\ geodesics are special curves. Properties
of curves are generalized from special relativity to general
relativity by saying that curve $\ell$ has property $P$ if it has
$P$ locally (in the sense of special relativity). E.g.\ $\ell$ is
{\em time-like} if for each $p\in\ell$ the LF surrounding $p$
``thinks'' that $\ell$ is time-like in the sense of special
relativity. Similarly for {\em space-like}, {\em photon-like} (and
for other properties of geodesics).
We note that time-like curves are the possible life-lines of
arbitrary bodies, i.e.\ of not necessarily inertial bodies. These
may undergo acceleration. Both geodesics and time-like curves are
curves in the usual sense. A curve is time-like if it always stays
inside the light-cones. A curve $\ell$ is photon-like if for any
point $p\in\ell$, $\ell$ is tangent to the light-cone at $p$.
\bigskip
\begin{figure}[!hp]
\setlength{\unitlength}{0.2 truemm} \small
\begin{center}
\begin{picture}(769,455)(0,0)
\epsfysize = 455 \unitlength \epsfbox{gode2i.eps}
\end{picture}
\end{center}
\caption{\label{inercelo} Observers $m', m'', m'''$ perform a rigid
rotation around observer $m$. Such observers are the only
mass-carriers in this universe. Because of this rotation, $m'''$
moves so fast that his light-cone tilts over so much that it is
almost horizontal.}
\end{figure}
\vfill\eject
\begin{figure}[!hp]
\setlength{\unitlength}{0.18 truemm} \small
\begin{center}
\begin{picture}(875,1035)(0,0)
\epsfysize = 1035\unitlength \epsfbox{gode2h.eps}
\end{picture}
\end{center}
\caption[G\"odel's universe GU with emphasis on inertial
observers]{\label{godinerc} G\"odel's Universe with emphasis on
\underbar{inertial} observers instead of photons (the rotation is
``rigid''). The coordinate system $\langle t',x',y'\rangle$ of say $m'$ does
\underbar{not} follow the rotation of the matter in this universe.
The life-lines of $m,\ldots,m'''$ are (special) geodesics. $\langle
t,x,y\rangle$, $\langle t',x',y'\rangle$ etc.\ are distinguished local
coordinate systems. E.g.\ $\langle t'',x'',y''\rangle$ is the local
coordinate system of observer $m''$.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.13 truemm} \small
\begin{center}
\begin{picture}(827,1590)(0,0)
\epsfysize = 1590\unitlength \epsfbox{gode2ee.eps}
\end{picture}
\end{center}
\caption{\label{hajnalka} Previous figure copied on top of itself.
It goes on like this in both directions forever. $m',m'',m'''$ are
(time-like life-lines of) observers ``equivalent with'' the observer
$m$ living on $\bar t$.}
\end{figure}
\vfill\eject
\begin{figure}[!hp]
\setlength{\unitlength}{0.13 truemm} \small
\begin{center}
\begin{picture}(874,1630)(0,0)
\epsfysize = 1630\unitlength \epsfbox{gode2hh.eps}
\end{picture}
\end{center}
\caption{\label{double} Previous figure with non-rotating local
coordinate systems $\langle t',x',y'\rangle$, $\langle t'',x'',y''\rangle$ etc.\
emphasized.}
\end{figure}
\vfill\eject
\begin{figure}[!hp]
\setlength{\unitlength}{0.25 truemm} \small
\begin{center}
\begin{picture}(220,730)(0,10)
\epsfysize = 730 \unitlength
\epsfbox{gode2g.eps}
\end{picture}
\end{center}
\caption{\label{notrot} The coordinate system $\langle t',x',y'\rangle$ of
say $m'$ does \underbar{not} follow the rotation of the matter in
this universe. The reader is asked to check that in a certain sense
the direction $x'$ remains parallel with the original direction $x$.
This is why $m'$ thinks that $m$ is rotating around $m'$.}
\end{figure}
\vfill\eject
\begin{figure}[!hp]
\setlength{\unitlength}{0.08 truemm} \small
\begin{center}
\begin{picture}(2111,2476)(0,0)
\epsfysize = 2476 \unitlength \epsfbox{spi.eps}
\end{picture}
\end{center}
\caption{\label{spi-fig} Each $m_i$ can measure the time needed for
a single turn of the universe. (I.e.\ each $m_i$ can measure the
angular velocity $\omega$ of the universe.) To ensure this we have
to calibrate the $t_i$ vectors of the $m_i$'s such that in $m_0$'s
view the vertical components of all the $t_i$'s are {\em equal} with
that of $t_0$. $\omega=\pi/30$, Map 2 applies. Cf.\
p.\pageref{map2-fig}.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.055 truemm} \small
\begin{center}
\begin{picture}(3066,3759)(0,0)
\epsfysize = 3759 \unitlength \epsfbox{spi1.eps}
\end{picture}
\end{center}
\caption{\label{spi1-fig} Previous figure with past-light-cones
indicated. $\omega=\pi/45$, Map 1 applies. Cf.\
p.\pageref{map1-fig}.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.085 truemm} \small
\begin{center}
\begin{picture}(1679,2603)(0,0)
\epsfysize = 2603 \unitlength \epsfbox{torta2.eps}
\end{picture}
\end{center}
\caption{\label{torta2-fig} $\omega=\pi/30$, Map 2 applies. Cf.\
p.\pageref{map2-fig}.}
\end{figure}
\newpage
G\"odel wanted the distinguished massive observers $m_0,\dots,
m_i,\dots$ of his universe to be equivalent with each other. So far
they are equivalent from the point of view that each of them thinks
that the rest of the universe rotates around himself. This is so
because the local coordinate systems (hence the cosmic compasses) of
the distinguished observers $m_i$ do not rotate, do not follow the
rotation of the universe. At this point we can ensure one more
symmetry property of the $m_i$'s.
Each $m_i$ can measure the time needed for a single turn of the
universe, for example as follows: $m_i$ picks a distinguished
observer, say $m_0$, such that $m_i$'s $y$-compass points in the
direction of $m_0$ at an instant, and then measures the time passed
until his
$y$-compass again points in $m_0$'s direction.%
\footnote{What does it mean that $m_i$'s $y$-compass points in
$m_0$'s direction at some time $t$? We may use the following
definition: there is a curve $\ell$ connecting $m_i$'s life-line
(starting with the event at $t$) with $m_0$'s life-line such that at
each point $p$ of the curve $\ell$ the following holds: $\ell$ lies
in the local simultaneity of the distinguished observer $m$ passing
through $p$ and $m$'s $y$-compass points in $\ell$'s direction in
$p$.}
This is how $m_i$ can measure the angular velocity
$\omega$ of the universe. To ensure that all the distinguished
observers get the same value for the angular velocity, we have to
calibrate the $t_i$ vectors of the $m_i$'s such that in $m_0$'s view
the vertical components of all the $t_i$'s are {\em equal} with that
of $t_0$. This is ensured in
Figure~\ref{spi-fig}, and from now on we will always ensure this.%
\footnote{This will also ensure that each $m_i$ will measure the
same angular velocity for the universe, no matter which ``partner''
he chooses (in place of $m_0$) for the measurement.} This choice of
the local time-unit vectors ensures also that the local LF's measure
a kind of ``universal time'', namely that of the big global reference
frame. However, this ``universal time'' does not satisfy natural
requirements about ``time'' presented in the next section.
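As a concrete illustration of this calibration (our own numerical sketch, using the angular velocities quoted in the captions of Figures~\ref{spi-fig} and~\ref{spi1-fig}): every distinguished observer then measures the same period $2\pi/\omega$ for a single turn of the universe.

```python
import math

def period(omega):
    """Time for one full turn of the universe, as measured by any of
    the calibrated distinguished observers m_i."""
    return 2.0 * math.pi / omega

# Angular velocities used in the figures above:
print(period(math.pi / 30))  # omega = pi/30 -> about 60 time units
print(period(math.pi / 45))  # omega = pi/45 -> about 90 time units
```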
\label{NGU-page} Above we specified the time-unit-vectors of the
local frames. Let us now specify three other unit-vectors at each
point $p$; these will specify the light-cone and the local special
relativity at $p$. All that we say below in specifying the three
unit vectors is meant in the big global reference frame. The
$r$-unit-vector at $p$ points in the radial direction parallel to the
$xy$-plane and has length 1. The (suppressed) $z$-unit-vector points
in the direction of the (suppressed) $z$-axis and has length 1.
Finally, the last unit-vector is orthogonal to the three
unit-vectors given so far and has the same length as the
$t$-unit-vector. In the local frame at $p$, these 4 vectors constitute
an orthonormal system. By this, we have
fully specified our general relativistic space-time.%
\footnote{The corresponding metric tensor is given in
section~\ref{literature-section}.}
The {\em preliminary} version of G\"odel's universe GU constructed
above and depicted in Figures~\ref{inercelo}-\ref{torta2-fig} will
be referred to as ``Naive GU'' (NGU) or more specifically, ``{\em
Naive Spiral World}''. The reason for this is that so far we have
chosen the simplest possible arrangement of light-cones without
checking whether they will satisfy certain properties we have in
mind. Indeed, Section~\ref{tilting-section} will lead to some
refinement/fine-tuning of the light-cone structure. However, the
Naive GU has many of the desired properties already. Namely, the
life-lines of the galaxies are geodesics, i.e., the distinguished
observers $m_i$ are really inertial observers. The radial straight
lines parallel to the $xy$-plane are all geodesics, too.
\vfill\eject \noindent
\section{\bf Non-existence of a global time in G\"odel's universe.}
\label{folia-section}
\noindent \label{folia-expl} Figures~\ref{nonfolia}--\ref{folia}
below form an informal illustration of the idea of
``non-foliasibility'' of G\"odel's universe GU. I.e.\
Figures~\ref{nonfolia}--\ref{folia} intend to illustrate the claim
that there is no global natural simultaneity in GU.
By a \emps{potential simultaneity} of GU we can understand a
hyper-surface $S$ in the usual sense and we can require it to
satisfy conditions like (i)--(vi) below.
\smallskip
\begin{description}
\item[(i)]
$(\forall p,q\in S)[p\ne q\ \Rightarrow\ (\exists$ {\it maximal
space-like geodesic } $\ell)(p,q\in\ell\subseteq S)]$.
\item[(ii)]
$(\forall$ {\it space-like geodesic }$\ell)[$ {\it a nonempty open
segment of $\ell$ lies in }$S\ \ \Rightarrow\ \ \ell\subseteq S]$.
\item[(iii)]
Every maximal time-like geodesic $\ell$ intersects $S$ (i.e.\
$\ell\cap S\ne\emptyset$).
\item[(iv)]
$S$ ``avoids'' the light-cones, i.e.\ no nonempty segment of a
photon-geodesic lies inside $S$. (Note that any open segment of a
geodesic is a geodesic again.)
\item[(v)] There is no time-like curve connecting two points of $S$.
\item[(vi)] There is no time-like geodesic connecting two points of
$S$.
\end{description}
\smallskip
Note that (i)-(iii) are ``closure conditions'', i.e.\ they try to
make $S$ big, while condition (iv) points in the direction that $S$
is only $(n-1)$-dimensional (in some sense), hence it tries to make
$S$ ``thin'' like a usual surface.
In the pictures we start out from the origin $\bar 0$ and try to
build a simultaneity containing $\bar 0$ first by moving along the
$\bar y$--axis and then by moving along the negative $\bar
x$--axis. Then we try to combine the two. While the figure does not
prove the nonexistence theorem, it illustrates ideas about its
plausibility.
For a more careful formulation and proof of non-existence of global
time in GU cf.\ \cite[p.263 (written by Malament),
pp.269--287]{Go96}, Hawking-Ellis~\cite[p.170]{Hawel}.
Earman~\cite[Lemma 4.1]{Earm} is also (remotely) relevant here, but
it proves less than what G\"odel claims, namely, we do not require
$S$ to satisfy all properties of a Cauchy hypersurface (cf.\
\cite[p.44]{Earm} for definition of Cauchy
hypersurfaces).%
\footnote{ The general relativistic computer constructed in
Etesi-N\'emeti~\cite{EN} (cf.\ also Hogarth~\cite{HogDis},
Earman~\cite{Earm}, N\'emeti-D\'avid ~\cite{ND06}) can be realized
in the G\"odel-type universes, too, because of their special causal
structure. This is interesting because we do not know whether the
GU's enjoy the so called Malament-Hogarth property (in the
literature general relativistic computers are usually constructed in
Malament-Hogarth space-times).}
\bigskip
\vfill\eject\newpage
\begin{figure}[!ht]
\setlength{\unitlength}{0.25 truemm}
\bigskip\bigskip
\small
\begin{center}
\begin{picture}(597,781)(0,0)
\epsfysize = 781 \unitlength \epsfbox{gode2ja.eps}
\end{picture}
\bigskip\bigskip
\end{center}
\caption{\label{nonfolia} Idea of ``non-foliasibility'' of G\"odel's
space-time. I.e.\ nonexistence of a global, natural simultaneity (or
global time) in G\"odel's universe. See explanation on
p.\pageref{folia-expl}.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.26 truemm} \small
\begin{center}
\begin{picture}(598,740)(0,0)
\epsfysize = 740 \unitlength \epsfbox{gode2k.eps}
\end{picture}
\end{center}
\caption{\label{folia} Previous figure but with the two strips of
constructed simultaneity closer to each other, $p,\bar 0$ and
$q,\bar 0$ are still simultaneous. The ``informal logic'' of these
two figures generates a simultaneity connecting all points of
space-time with each other. This is in contradiction with the
intuitive notion of simultaneity.}
\end{figure}
\vfill\eject\newpage
\section{G\"odel's universe in co-rotating coordinates, ``whirling
dervishes''. Transforming the rotation away.}
\label{dervish-section} \vspace*{36pt}
Gott~\cite[p.91]{Gott} writes ``You could equally well view
G\"odel's universe as static and non-rotating, as long as
self-confessed ``nondizzy observers'' would be spinning like
whirling dervishes with
respect to the universe as a whole.''%
\footnote{G\"odel~\cite[p.271]{Go49} writes: ``Of course, it is also
possible and even more suggestive to think of this world as a rigid
body at rest and of the compass of inertia as rotating everywhere
relative to this body.''} Below we will introduce new coordinates
$\langle T^r, X^r, Y^r, Z^r\rangle$ co-rotating with the matter
content $m_0,\dots,m_i,\dots$ of the universe. In $\langle
T^r,\dots\rangle$ the massive bodies $m_i$ appear as static with
their life-lines vertical lines. We will call $\langle
T^r,\dots\rangle$ ``{\em Dervish World}'' motivated by the above
quotation from Gott. The transformation between the old spiral
coordinates and the new rotating coordinates is elaborated later, on
pp.\pageref{coord1}--\pageref{coord2}.
\bigskip
In the Spiral World, the ``galaxies'' $m_1, m_2,\dots, m_i$ appear
as rotating around $m_0$ in direction $\varphi$ with angular
velocity $\omega$ while their cosmic compasses $x_i, y_i$ appear
fixed (non-rotating). By contrast, the Dervish World shows
$m_1,\dots, m_i$ as motionless, while it shows their cosmic
compasses as rotating in direction $-\varphi$ with angular velocity
$\omega$.
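Schematically (the transformation actually used is elaborated on
pp.\pageref{coord1}--\pageref{coord2}; the cylindrical-type
coordinates $\langle t,r,\varphi,z\rangle$ below serve for
illustration only), the passage from the Spiral World to the Dervish
World is a rotation with angular velocity $\omega$:
\[
t^d=t^s,\qquad r^d=r^s,\qquad \varphi^d=\varphi^s-\omega\, t^s,\qquad
z^d=z^s .
\]
A galaxy $m_i$ with spiral life-line $\varphi^s=\varphi_0+\omega t^s$
then gets the vertical dervish life-line $\varphi^d=\varphi_0$, while
a compass direction kept fixed in the spiral world acquires angular
velocity $-\omega$ in the dervish world, as described above.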
\bigskip
We will indicate on page~\pageref{Mach-page} how this dervish world
can be used to demonstrate that General Relativity (in its present
form) does not imply the full version of Mach's principle.
\bigskip
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{2 truemm}
\begin{center}
\begin{picture}(60,100)(9,-1)
\epsfysize = 100\unitlength
\epsfbox{der5a.eps}
\end{picture}
\end{center}
\caption[haho]{\label{dervis1}
G\"odel's universe GU in {\em rotating} coordinates $T^r=t,\; X^r,\;
Y^r$. These coordinates co-rotate with GU, hence GU appears as {\em
being at rest}. As a price, the local coordinate systems like $\langle
t',x',y'\rangle$ appear as rotating backwards (in direction $-\varphi$)
in the new coordinate system. The transformation between the old
spiral coordinates and new rotating ones is elaborated on
p.\pageref{coord1}.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{2 truemm}
\begin{center}
\begin{picture}(60,100)(9,-5)
\epsfysize = 100\unitlength
\epsfbox{der4a.eps}
\end{picture}
\end{center}
\caption[haho]{\label{dervis2}
We have a system of static, non-moving massive observers $m,m',m''$
etc.\ (the same as in Figures~\ref{inercelo}--\ref{hajnalka}) whose
cosmic compasses i.e.\ whose local coordinate systems are spinning
around creating a whirling effect. Gott~\cite[p.91]{Gott} called
these ``whirling dervishes''. This arrangement can be used to show
that Mach's principle is violated. See p.\pageref{Mach-page} for
explanation.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{0.1 truemm} \small
\begin{center}
\begin{picture}(1000,2060)(0,0)
\epsfysize = 2060 \unitlength \epsfbox{nade0.eps}
\end{picture}
\end{center}
\caption{\label{nade0-fig} A typical dervish consisting of massive
observer (or galaxy) $m_0$ and its cosmic compasses $\langle
x_0,y_0,z_0\rangle$. In other words, $m_0$'s dervish is $m_0$'s {\em
local} coordinate system. $\omega=\pi/15$. }
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{0.062 truemm} \small
\begin{center}
\begin{picture}(2460,3360)(0,0)
\epsfysize = 3360 \unitlength \epsfbox{nade1.eps}
\end{picture}
\end{center}
\caption{\label{nade1-fig} Dervishes $m_0,\dots,m_7$ involving
greater radii, hence more ``violent'' whirling effects.
$\omega=\pi/15$. Re-calibrated version of Map 2 applies, cf.\
p.\pageref{map2-fig}.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{2 truemm}
\begin{center}
\begin{picture}(60,100)(9,-5)
\epsfysize = 100\unitlength
\epsfbox{uder1.eps}
\end{picture}
\end{center}
\caption{\label{uder1-fig} Light-cones and local unit vectors of
spiral world above, and their counterparts in dervish world $\langle
T^r,\dots, Z^r\rangle$ below. Detailed representation of upper part
is in Figures~\ref{spi-fig}, \ref{spi1-fig}, \ref{torta2-fig} and
that of lower part is in next Figure~\ref{torta-fig}. See also
Figures~\ref{dervis1}-\ref{nade1-fig}. The transformation between
the two worlds is described on
pp.\pageref{coord1}-\pageref{coord2}.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{0.085 truemm} \small
\begin{center}
\begin{picture}(1914,2603)(0,0)
\epsfysize = 2603 \unitlength \epsfbox{torta.eps}
\end{picture}
\end{center}
\caption{\label{torta-fig} Light-cones with local unit vectors in
dervish world $\langle T^r,\dots\rangle$. $\omega=\pi/30$, Map 2
applies.}
\end{figure}
\vfill\eject\newpage
\section{Fine-tuning the space-time structure of the Naive GU obtained
so far. Tilting the light-cones.} \label{godel-section}
\label{tilting-section}
First we show two pictures hinting at the fact that the lengths of
unit-vectors etc.\ in our Naive Dervish World might be of
inconvenient proportions.
\begin{figure}[!hp]
\setlength{\unitlength}{0.085 truemm} \small
\begin{center}
\begin{picture}(1980,2260)(0,0)
\epsfysize =2260 \unitlength \epsfbox{naivtorta1.eps}
\end{picture}
\end{center}
\caption{\label{naivtorta1-fig} Naive Dervish World: proportions of
the local unit-vectors. $\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.057 truemm} \small
\begin{center}
\begin{picture}(2940,3340)(0,0)
\epsfysize = 3340 \unitlength \epsfbox{nade2.eps}
\end{picture}
\end{center}
\caption{\label{nade2-fig} Whirling dervishes on larger radii.
Re-calibrated version of Map 2 applies as follows. $r'(m_i)=2\cdot
r(m_i), v'(m_i) = v(m_i), \omega' = \omega/2$; where $r',v',\omega'$
belong to the present figure while $r,v,\omega$ belong to Map 2.}
\end{figure}
\newpage
The fact that the $x_i$ vector of $m_i$ has a much longer component
parallel to the coordinate $X^r$ than $x_0$ does (illustrated in the
previous two figures) is the visual manifestation of the following
fact, seen better in the spiral world. In the spiral world, $m_i$
can send a photon $ph$ upward almost parallel to the $t$ axis such
that $ph$ reaches $m_i$ again in a ``rigidly bounded'' time (an
upper bound is $4\pi/\omega$), where the bound is independent of the
choice of $i$. We choose the path of $ph$ such that its distance
from $m_0$ remains constant (namely, the $m_0$--$m_i$ distance).
This path need not be a geodesic but, as G\"odel wrote, we can use
mirrors to force $ph$ to follow this path. See
Figure~\ref{bizony-fig}.

In G\"odel's universe the return-time of photons sent around
$m_0$ in a circle of radius $r$ tends to infinity as $r$ tends to
infinity.
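As an illustration of why such a bound can hold (the angular
velocity $\omega_{ph}$ below is a hypothetical bookkeeping quantity,
not part of the construction): if in the spiral world the mirrored
photon circles $m_0$ with coordinate angular velocity $\omega_{ph}$
while $m_i$ rotates with $\omega$, then the photon overtakes $m_i$
after the coordinate time
\[
t_{\mathrm{ret}}=\frac{2\pi}{\omega_{ph}-\omega}\, ,
\]
so any uniform lower bound of the form
$\omega_{ph}\ge\frac{3}{2}\,\omega$, independent of $i$, already
yields $t_{\mathrm{ret}}\le 4\pi/\omega$ for every $i$, in accordance
with the bound quoted above.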
\smallskip
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.044 truemm} \small
\begin{center}
\begin{picture}(3380,4820)(0,0)
\epsfysize = 4820 \unitlength \epsfbox{bizony.eps}
\end{picture}
\end{center}
\caption{\label{bizony-fig} The time needed for a photon sent out by
$m_7$ and kept with mirrors on a circle around $m_0$ to come back is
a little more than the time needed for the universe to make a turn.}
\end{figure}
Let us see how we can remove this difference from G\"odel's universe
without destroying the logic of our construction. How can we
fine-tune our construction? We are aiming at the ``smallest'' and
simplest change, so that the logic of our construction remains
intact. Changing the lengths of the $x_i$ vectors while keeping the
other unit-vectors as they were results in making the light-cones
narrower. Since this would not lead to CTC's, we will ``tilt'' the
light-cones instead. So, in fine-tuning the Naive GU we will speak
about tilting the light-cones, and we will call the new space-time
Tilted GU.
Let us work in the dervish world.
\bigskip
\noindent \underbar{Choice 1} \label{choice1-p}
We can tilt the light-cones forwards (in the
positive $\varphi$ direction) such that with increasing $r$ (radius)
we also increase the tilting. This can be done in such a manner that
the difference we talked about disappears. Such tilting results in a
version of NGU represented in
Sections~\ref{tilting-section}--\ref{refined-section}
(Figures~\ref{godtorta2-fig}--\ref{2vis-fig}). The so obtained
tilted universe resembles very closely the universes presented in
G\"odel's papers. (E.g.\ they agree in many structural properties
[in G\"odel's sense].)
\bigskip
\begin{figure}[!ht]
\setlength{\unitlength}{0.7 truemm}
\begin{center}
\begin{picture}(87,60)(0,0)
\put(20,60){\makebox(0,0){t}} \put(23,45){\makebox(0,0){$m_i$}}
\put(90,5){\makebox(0,0){$\varphi$}}
\epsfysize = 60\unitlength
\epsfbox{forward.eps}
\end{picture}
\end{center}
\caption{\label{forward-fig} Choice 1 is that we tilt the
light-cones forwards.}
\end{figure}
\begin{figure}[!ht]
\setlength{\unitlength}{0.7 truemm}
\begin{center}
\begin{picture}(88,60)(0,0)
\put(35,60){\makebox(0,0){t}} \put(38,45){\makebox(0,0){$m_i$}}
\put(90,5){\makebox(0,0){$\varphi$}}
\epsfysize = 60\unitlength
\epsfbox{backward.eps}
\end{picture}
\end{center}
\caption{\label{backward-fig} Choice 2 is that we tilt the
light-cones backwards.}
\end{figure}
\noindent \underbar{Choice 2} We can also tilt the light-cones (in
dervish world) backwards, opposite to the $\varphi$ direction,
carefully enough such that the difference goes away and we do not
induce other undesirable effects. See Figure~\ref{backward-fig}.
This Choice~2 tilting is just Choice~1 tilting seen from another
coordinate system (namely by using the coordinate transformation
$\varphi\to-\varphi$). Below we will explore Choice~1, and in
Section~\ref{gyroscope-section} (p.\pageref{gyroscope-section}) we
explore Choice~2. We will see that both Choice~1 and Choice~2 have
their advantages.\label{choicevege-p}
\bigskip
From now on, we concentrate on Choice 1.
\bigskip
\vfill\eject
We will call the tilting in Choice~1 ``{\em
forward-tilting}'', the so obtained dervishes {\em tilted
dervishes}, and the so obtained (tilted) dervish world {\em Tilted
Dervish World} or {\em Choice~1 Dervish World}. Recall that we
describe a simple transformation between the spiral world $\langle
t^s,\dots\rangle$ and the dervish world $\langle t^d,\dots\rangle$
in Section~\ref{technical-section} (p.\pageref{technical-section}).
We use this transformation for transforming the new, tilted universe
from the dervish world to the spiral world. We call the result {\em
Tilted Spiral World} or use simply the adjective ``new spiral'' or
``refined-spiral'' for referring to the so obtained light-cones as
new spiral cones or back-rotated ones. The expression ``rotating
back'' or ``back-rotating'' intends to refer to application of the
inverse transformation $\langle
t^d,\dots\rangle\longrightarrow\langle t^s,\dots\rangle$ described
in Section~\ref{technical-section}. In such contexts the inverse
transformation is applied to the result of forward-tilting.
The result of the above outlined forward-tilting is the G\"odel-type
universe which we will describe in more detail in the coming parts.
We will call this space-time Tilted GU (or sometimes new GU).
Instead of defining the tilting of the cones at each point, we will
give details of the tilting for the cones occurring in the figures
only. These tilted light-cones (with local unit-vectors) and their
new spiral versions are depicted and constructed in detail in
Section~\ref{technical-section}. These objects (light-cones, $m_i$'s
etc) are systematically arranged in space-time (i.e.\ are
coordinatized) in Maps 1,2 (pp.\pageref{map1-fig},
\pageref{map2-fig}). These maps also include angular and
tangential velocities.\bigskip
In this section we describe ``Tilted Dervish World'', and in the
next section, Section~\ref{refined-section}, we describe ``Tilted
Spiral World''.
\bigskip
\label{goduniv1-vege}
\newpage
\section*{Tilted dervishes (fuller description of new GU in dervish world).}
\label{fuller-section}
\begin{figure}[!hp]
\setlength{\unitlength}{0.1 truemm} \small
\begin{center}
\begin{picture}(1600,1660)(0,0)
\epsfysize = 1660 \unitlength \epsfbox{godtorta2.eps}
\end{picture}
\end{center}
\caption{\label{godtorta2-fig} Tilted-dervish universe or Choice~1
Dervish World. Light-cones, local unit-vectors along the $y$-axis.
$\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[hp]
\setlength{\unitlength}{0.1 truemm} \small
\begin{center}
\begin{picture}(1600,1760)(0,0)
\epsfysize = 1760 \unitlength \epsfbox{godtorta2a.eps}
\end{picture}
\end{center}
\caption{\label{godtorta2a-fig} Tilted Dervish World (Choice 1
Dervish World). $\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[hp]
\setlength{\unitlength}{0.165 truemm} \small
\begin{center}
\begin{picture}(1020,1220)(0,0)
\epsfysize = 1220 \unitlength \epsfbox{godtorta3a.eps}
\end{picture}
\end{center}
\caption{\label{godtorta3a-fig} Tilted Dervish World.
$\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[hp]
\setlength{\unitlength}{0.1 truemm} \small
\begin{center}
\begin{picture}(1600,1760)(0,0)
\epsfysize = 1760 \unitlength \epsfbox{godtorta1.eps}
\end{picture}
\end{center}
\caption{\label{godtorta1-fig} Tilted Dervish World.
$\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[p]
\setlength{\unitlength}{0.046 truemm} \small
\begin{center}
\begin{picture}(2240,4500)(0,0)
\epsfysize = 4500\unitlength \epsfbox{goder0.eps}
\end{picture}
\end{center}
\caption{\label{goder0-fig} Tilted-dervish universe. (Choice 1
Dervish World.) Light-cones, local unit-vectors on the $xy$-plane.
$\omega=\pi/45$, Map 1 applies.}
\end{figure}
\begin{figure}[p]
\setlength{\unitlength}{0.065 truemm} \small
\begin{center}
\begin{picture}(2020,3260)(0,0)
\epsfysize =3260 \unitlength \epsfbox{Goder1.eps}
\end{picture}
\end{center}
\caption{\label{Goder1-fig} Tilted Dervish World. $\omega=\pi/30$,
Map 2 applies.}
\end{figure}
\begin{figure}[p]
\setlength{\unitlength}{0.085 truemm} \small
\begin{center}
\begin{picture}(1280,2480)(0,0)
\epsfysize = 2480 \unitlength \epsfbox{Goder3_1.eps}
\end{picture}
\end{center}
\caption{\label{Goder3-fig} Tilted Dervish World. $\omega=\pi/30$,
Map 2 applies.}
\end{figure}
\begin{figure}[p]
\setlength{\unitlength}{0.065 truemm} \small
\begin{center}
\begin{picture}(2020,3260)(0,0)
\epsfysize =3260 \unitlength \epsfbox{1goder.eps}
\end{picture}
\end{center}
\caption{\label{1goder-fig} Tilted Dervish World. Compare with
Figure 61 on p.169 in Hawking-Ellis~\cite{Hawel} (cf.\ also
Fig.\ref{ujgodel-fig} herein). $\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[p]
\setlength{\unitlength}{0.061 truemm} \small
\begin{center}
\begin{picture}(2560,3260)(0,0)
\epsfysize = 3260 \unitlength \epsfbox{5hastanc.eps}
\end{picture}
\end{center}
\caption{\label{5hastanc-fig} \label{6hastanc-fig} Tilted Dervish
World.
``$\omega$ of universe'' $=\ \pi/60$ (recalibrated version
of Map 2 applies). Spinning dervishes are artificially sped up
(``artificial $\omega$ of dervishes'' $=\ \pi/15$).}
\end{figure}
\begin{figure}[p]
\setlength{\unitlength}{0.074 truemm} \small
\begin{center}
\begin{picture}(2268,2927)(0,0)
\epsfysize = 2927 \unitlength \epsfbox{gyors2.eps}
\end{picture}
\end{center}
\caption{\label{gyors2-fig} Tilted Dervish World. $\omega=\pi/30$,
Map 2 applies.}
\end{figure}
\label{goduniv2-vege}
\begin{figure}[!hp]
\setlength{\unitlength}{0.074 truemm} \small
\begin{center}
\begin{picture}(2268,2927)(0,0)
\epsfysize = 2927 \unitlength \epsfbox{gyors7.eps}
\end{picture}
\end{center}
\caption{\label{gyors7-fig} Tilted Dervish World. $\omega=\pi/30$,
Map 2 applies.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.072 truemm} \small
\begin{center}
\begin{picture}(2280,3020)(0,0)
\epsfysize = 3020 \unitlength \epsfbox{has4a.eps}
\end{picture}
\end{center}
\caption{\label{has4a-fig} Tilted dervishes with original angular
velocity. $\omega=\pi/30$, Map 2 applies.}
\end{figure}
\newpage
\section{Tilted Spiral World, i.e.\ Choice~1 Spiral World.}
\label{refined-section}
\begin{figure}[!hp]
\setlength{\unitlength}{0.16 truemm} \small
\begin{center}
\begin{picture}(880,1280)(0,0)
\epsfysize = 1280 \unitlength \epsfbox{vistorta5.eps}
\end{picture}
\end{center}
\caption{\label{vistorta5-fig} Tilted spiral world, i.e.\ Choice~1
Spiral World. Light-cones, unit-vectors along the $y$-axis.
$\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.073 truemm} \small
\begin{center}
\begin{picture}(1400,2860)(0,0)
\epsfysize =2860 \unitlength \epsfbox{vistorta4a.eps}
\end{picture}
\end{center}
\caption{\label{vistorta4a-fig} Tilted Spiral World.
$\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.052 truemm} \small
\begin{center}
\begin{picture}(1740,4060)(0,0)
\epsfysize =4060 \unitlength \epsfbox{nagyt1.eps}
\end{picture}
\end{center}
\caption{\label{nagyt1-fig} \label{nagyt0-fig} Tilted Spiral World.
$\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.079 truemm} \small
\begin{center}
\begin{picture}(2120,2880)(0,0)
\epsfysize = 2880 \unitlength \epsfbox{vis.eps}
\end{picture}
\end{center}
\caption{\label{vis-fig} Tilted Spiral World, full view.
Light-cones, life-lines, unit-vectors etc. Cf.\
Hawking-Ellis~\cite[Figure 61]{Hawel}. Cf.\ also
Figure~\ref{ujgodel-fig} herein. $\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.05 truemm} \small
\begin{center}
\begin{picture}(2560,4480)(0,0)
\epsfysize = 4480 \unitlength \epsfbox{visa.eps}
\end{picture}
\end{center}
\caption{\label{visa-fig} Tilted Spiral World, full view.
$\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.075 truemm} \small
\begin{center}
\begin{picture}(2120,2880)(0,0)
\epsfysize = 2880 \unitlength \epsfbox{2vis.eps}
\end{picture}
\end{center}
\caption{\label{2vis-fig} Full view of new spiral world. Cf.\
Hawking-Ellis~\cite{Hawel}, Figure~\ref{ujgodel-fig} herein and the
figure in Malament~\cite{Mal84}.
$\omega=\pi/30$, Map 2 applies.}
\end{figure}
\newpage
\section{Giving physical meaning to cosmic compasses. What rotates in
which direction (relative to whom).} \label{gyroscope-section}
\begin{figure}[!h]
\setlength{\unitlength}{0.61 truemm} %
\begin{center}
\begin{picture}(178,96)(0,0)
\epsfysize = 96\unitlength
\epsfbox{Pickover.eps}
\end{picture}
\end{center}
\caption{\label{Pickover-fig} What rotates in which direction? The
above is a picture from Pickover~\cite[p.185]{Picktime} from the
chapter on G\"odelian Universe implicitly offering a natural answer
to this question. This is also Figure 7.5 in
Gribbin~\cite[p.215]{Gri}.}
\end{figure}
In our ``Tilted Spiral World''
(Figures~\ref{nagyt0-fig},\ref{vis-fig}) the light cones are very
strongly tilted forwards with increasing radius $r$. Therefore, if
$m_0$ throws a ball, say in the $y$ direction, the ball will start
moving in the $y$ direction but with increasing radius it will {\em
have} to turn in the $\varphi$ direction because the life-line of
the ball has to stay inside the light-cones (i.e.\ it has to be a
time-like curve). The same applies even to a photon in place of the
ball. This
effect is called the {\em gravitational drag effect}%
\footnote{What we call drag effect is often called {\em dragging of
inertial frames}. For references on gravitational drag effect see
p.\pageref{drag-p}.}
and is illustrated e.g.\ in our Figure~\ref{ujgodel-fig} or
equivalently in Figure~31 of Hawking-Ellis~\cite{Hawel} as the
curving of the photon-geodesics. The drag effect affects those and
only those inertial bodies which are not at rest relative to one of
the $m_i$'s. This drag effect is present in the Naive GU, too, but
in a less dramatic way. To study the drag effect in our Tilted GU
(in Figures~\ref{vis-fig}, \ref{6hastanc-fig}), we notice that our
Tilted Dervish World (Figure~\ref{6hastanc-fig}) is structurally
very close to G\"odel's original universe described and studied in
G\"odel~\cite{Go96}, Hawking-Ellis~\cite[pp.168-170]{Hawel} and
later papers. Hence the results about the drag effect in G\"odel's
universe obtained in these works are applicable to our version of GU
in Figure~\ref{6hastanc-fig}. The drag effect can be analyzed and
described by studying the behavior of geodesics. Indeed,
Figure~\ref{ujgodel-fig} represents ``dragging'' of some
characteristic geodesics. Let us be in dervish world. Then
Figure~\ref{ujgodel-fig} indicates the following. A ball thrown by
$m_0$ will start out radially, then will make a big circle and will
come back to $m_0$ from a new direction. From now on, we will call
the circular motion or rotation traced out by this circle the {\em
drag rotation}. In Figure~\ref{ujgodel-fig} the direction of the
drag rotation coincides with the $\varphi$-direction which in turn
coincides with the direction of CTC's. All this remains true in our
Tilted Dervish World (Figure~\ref{6hastanc-fig}). In the Tilted
Spiral World, matter (the $m_i$'s) is seen to rotate in the same
direction $\varphi$. Therefore in the Tilted Spiral World what we
said above about the drag rotation, CTC's etc.\ remains true. Hence,
in the Tilted Spiral World the drag rotation is even stronger than
in the dervish world and points in the same direction $\varphi$ in
which the matter content of the universe rotates. Hence in the
Tilted Spiral World, we have an {\em increased drag effect}. As a
curiosity we note that in the Tilted Spiral World everything rotates
in the same direction $\varphi$.
Next we turn to replacing our cosmic compasses%
\footnote{which were ``abstract directions'' so far} with physically
tangible compasses of an ``observational'' kind (i.e.\ subject to
testing by thought experiment). In general relativity, the devices
used for this purpose are called {\em gyroscopes} or {\em compasses
of inertia}. The nonspecialist reader does not need to recall the
definition, what we write below is amply enough for the present
paper. The most important property (for us) of gyroscopes is that
their working is based on inertial motion, hence the behavior of
geodesics will also influence the behavior of gyroscopes. For the
non-physicist reader we note the following.
In Newtonian physics it is provable that certain devices called
gyroscopes preserve their directions despite our moving them
around; in other words, they behave like ``cosmic
compasses''.\footnote{See e.g.\ Epstein~\cite[p.128]{Eps} for a nice
illustration.} We do
not recall the definition of gyroscopes in detail.%
However, we note that
they can be made smaller and smaller in some sense such that their
Newtonian property of preserving direction (whatever this means)
remains true in general relativity (here the basic idea is that
general relativity agrees with Newtonian mechanics for small enough
speeds [with sufficient precision]). The essential idea behind
gyroscopes is that a rigid body rotating fast enough tends to
preserve its axis of rotation (in Newtonian physics). If we make the
body small enough, then the tangential velocities of its parts will
tend to zero. Hence the tangential velocities involved can be made
small enough for the Newtonian approximation to be satisfactory.
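The last remark can be quantified roughly as follows (the spin rate
$\Omega$ and flywheel radius $\rho$ are illustrative quantities):
the largest tangential speed occurring in a flywheel of radius
$\rho$ spinning with fixed angular velocity $\Omega$ is
\[
v_{\max}=\Omega\,\rho\, ,
\]
hence $v_{\max}\to 0$ as $\rho\to 0$; so by shrinking the gyroscope
we can keep all internal speeds far below the speed of light, where
the Newtonian approximation applies.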
It is natural to assume that the increased drag effect in Tilted GU
described above will ``drag'' the gyroscopes, too, in the $\varphi$
direction. Indeed, an analysis of the geodesics of G\"odel's universe in
Lathrop-Teglas~\cite{Lath} suggests that this is so.
Our next goal is to find a new coordinatization $C^+$ for our Tilted
GU in which the gyroscope directions do not rotate.%
\footnote{Below by gyroscopes we always mean gyroscopes of $m_0$.}
One need not regard this new coordinatization $C^+$ as superior in
some sense to e.g.\ our Tilted Spiral World, or as more ``real''
than the Tilted one; instead, $C^+$ is a coordinatization with some
interesting
and useful properties. $C^+$ will be a (new) spiral world. We will
call this new spiral world Refined (or Choice 2) Spiral World. After
constructing $C^+$, it will be worthwhile to reconstruct the dervish
world in such a form that the {\em new local frames} (i.e.\
``veils'' or ``hands'' of the whirling dervishes) will be frames
co-rotating with the gyroscopes. Then the local frames will be what
are called {\em local inertial frames} in general relativity. A
representation of the dervish world with these new local inertial
frames represented as the ``veils'' of the dervishes will be called
Refined (or Choice~2) Dervish World. The two tilted spiral worlds
(Choices~1,2) and the two tilted dervish worlds (Choices~1,2)
represent the same space-time in different coordinates.
In the Refined Dervish World all the mass-carrier observers $m_i$
are at rest, they are evenly distributed and they are completely
alike, yet their compasses of inertia are rotating. This violates
Mach's principle that the state of zero rotation of an inertial
frame should coincide with the state of zero rotation with respect
to the distribution of matter in the universe. \label{Mach-page} For
Mach's principle see e.g.\ Barbour~\cite{Bar} and \cite{Mach}. For
more references on the drag effect and its connection with Mach's
principle see page~\pageref{drag-p}.
Above (p.\pageref{Pickover-fig}) we recalled a picture from
Pickover~\cite{Picktime} because it ``addresses'' the question of
what rotates in which direction. (E.g.\ does the universe rotate in
the same direction as the time-travelers (CTC's) do?) To make the
question meaningful, one has to tell relative to what coordinate
system is the question understood.%
\footnote{E.g.\ {\em relative} to the coordinates of our Tilted
Spiral World everything rotates in the same direction $\varphi$.} Of
course, one would like to name an ``observable" coordinate system
for asking such a question. A possibility is to choose that
coordinate system in which the gyroscopes do not rotate.%
\footnote{Technically, we have Fermi coordinates in mind.} This is
$C^+$ of our Choice~2 Spiral World. We will see that in $C^+$ the
directions of the various rotations are essentially different from
the ones in Pickover's picture. If one looks at $C^+$ without any
preparation, then the directions of rotations appear as ad hoc,
almost counter-intuitive. However, at least in our opinion, the
train of thought outlined in this paper may provide an explanation
for the arrangement of these directions. For more on this question
of counter-rotation in the case of rotating (Kerr-Newman) black
holes see \cite{ANW08}.
Let us return to our goal of finding a coordinatization $C^+$ of our
Spiral World in which gyroscope directions do not rotate.%
\footnote{This means that in $C^+$, gyroscopes of $m_0$ preserve
their directions (relative to the coordinate system).} We have
already observed that gyroscopes do rotate in our Tilted Spiral
World (Figure~\ref{vis-fig}). There are two equivalent ways for
finding $C^+$:
(i) We analyze the rotation of gyroscopes as seen from the Tilted
Dervish World, we observe that they rotate in the
$\varphi$-direction. This means that in the spiral world gyroscopes
rotate faster than the dervish world itself does (i.e.\ faster than
$\omega$). We choose the {\em refined spiral coordinates} to
co-rotate with these gyroscopes. Hence the ``gyroscope''-directions
will be fixed when viewed from the Refined Spiral World as we
wanted.
(ii) The following turns out to be equivalent to what we outlined
in (i) above. Let us go back to Section~\ref{tilting-section}
p.\pageref{choice1-p}, where we refined our Naive GU to get Tilted
GU. There, on p.\pageref{choice1-p}, we found two possible choices
(Choices~1,2) for the desired fine-tuning. Of the two, so far we
took the simpler one, Choice~1. Choice~2 consists of tilting the
light-cones in the dervish world {\em backwards} i.e.\ in a
direction {\em opposite} to that of $\varphi$ (in Choice~1 we tilted
them forwards). What we claim here is that the result of choosing
Choice~2 in Section~\ref{tilting-section} is equivalent to the
result of the refinements outlined in item (i) above. This is the
reason why we call our newest refined spiral and dervish worlds
outlined in item (i) above {\em Choice~2} worlds as well as Refined
worlds.
The new Choice~2 spiral and dervish worlds are illustrated and
elaborated (constructed) in the figures below. A natural question
comes up: If we had to refine our Choice~1 worlds because the drag
effect made the gyroscope directions rotate, how do we know that the
same problem will not come up in the new Choice~2 worlds? The answer
is two-fold. (1) The extremely strong drag effect in Choice~1 Spiral
World was caused by tilting the light-cones forwards extremely with
increasing radius $r$. Cf.\ Figure~\ref{nagyt0-fig} for this effect.
Now, in our Choice~2 Spiral World the light-cones are not tilted
forwards so much, actually recall that Choice~2 was obtained from
Choice~1 by tilting light-cones backwards (relative to our naive
GU). So, this very strong drag effect affecting even the gyroscopes
need not arise (more precisely, need not be strong enough for
affecting the gyroscopes). Indeed, as we said earlier, our dervish
world is very close structurally to G\"odel's original space-time
(GU). Therefore results about the original GU are applicable to our
versions (calibrated slightly differently). Now, the results in
Lathrop-Teglas~\cite{Lath} can be used to conclude that in our
Choice~2 Spiral World gyroscope directions are fixed, i.e.\ they do
not rotate. This can be seen by their characterization of geodesics
in basically%
\footnote{Our Choice~2 Spiral World is structurally very close to
the coordinatization $\langle t,r,\theta,z\rangle$ of GU given in
Lathrop-Teglas~\cite{Lath}.} Choice~2 Spiral World, as well as from
their claim that Choice~2 Spiral coordinates are so-called Fermi
coordinates.
\begin{figure}[hbtp]
\setlength{\unitlength}{0.07 truemm} \small
\begin{center}
\begin{picture}(2100,2820)(0,0)
\epsfysize = 2820 \unitlength \epsfbox{ujvis.eps}
\end{picture}
\end{center}
\caption{\label{ujvis-fig} Choice~2 GU spiral view (i.e., Refined
Spiral World). Here gyroscope directions are fixed (they do not
change). (We are in Fermi coordinates in the sense of e.g.\
Lathrop-Teglas~\cite{Lath}.) $\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[hbtp]
\setlength{\unitlength}{0.07 truemm} \small
\begin{center}
\begin{picture}(2100,2820)(0,0)
\epsfysize = 2820 \unitlength \epsfbox{ujvis1.eps}
\end{picture}
\end{center}
\caption{\label{ujvis1-fig} Choice~2 spiral view (Refined Spiral
World). Here gyroscope directions are fixed (they do not change).
(We are in Fermi coordinates in the sense of e.g.\
Lathrop-Teglas~\cite{Lath}.) $\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[hbtp]
\setlength{\unitlength}{0.09 truemm} \small
\begin{center}
\begin{picture}(1380,2360)(0,0)
\epsfysize = 2360 \unitlength \epsfbox{ujt.eps}
\end{picture}
\end{center}
\caption{\label{ujt-fig} Choice~2 spiral view. Cf.\
Fig.\ref{ujvis-fig} for more information. $\omega=\pi/30$, Map 2
applies.}
\end{figure}
\begin{figure}[hbtp]
\setlength{\unitlength}{0.09 truemm} \small
\begin{center}
\begin{picture}(1380,2360)(0,0)
\epsfysize = 2360 \unitlength \epsfbox{ujt1.eps}
\end{picture}
\end{center}
\caption{\label{ujt1-fig} Choice~2 spiral view. Cf.\
Fig.\ref{ujvis1-fig} for more information. $\omega=\pi/30$, Map 2
applies.}
\end{figure}
\begin{figure}[hbtp]
\setlength{\unitlength}{0.081 truemm} \small
\begin{center}
\begin{picture}(1860,2580)(0,0)
\epsfysize = 2580 \unitlength \epsfbox{2agoder.eps}
\end{picture}
\end{center}
\caption{\label{2agoder-fig} Dervish view in dual GU (Choice~2).
Compare with Figure 61 on p.169 in Hawking-Ellis~\cite{Hawel} (cf.\
also Fig.\ref{ujgodel-fig} herein). $\omega=\pi/30$, Map 2 applies.}
\end{figure}
\begin{figure}[hbtp]
\setlength{\unitlength}{0.073 truemm} \small
\begin{center}
\begin{picture}(2280,2980)(0,0)
\epsfysize = 2980 \unitlength \epsfbox{goder3a.eps}
\end{picture}
\end{center}
\caption{\label{goder3a-fig} Dervish view in dual GU (Choice~2).
$\omega=\pi/45$, Map 1 applies.}
\end{figure}
\begin{figure}[hbtp]
\setlength{\unitlength}{0.072 truemm} \small
\begin{center}
\begin{picture}(2280,3040)(0,0)
\epsfysize = 3040 \unitlength \epsfbox{4ahas.eps}
\end{picture}
\end{center}
\caption{\label{4ahas-fig} Tilted-dervishes, Choice~2 with original
angular velocity. $\omega=\pi/30$, Map 2 applies. }
\end{figure}
\begin{figure}[hbtp]
\setlength{\unitlength}{0.065 truemm} \small
\begin{center}
\begin{picture}(2580,3320)(0,0)
\epsfysize = 3320 \unitlength \epsfbox{5ahastanc.eps}
\end{picture}
\end{center}
\caption{\label{5ahastanc-fig} Dervish view in dual GU (Choice~2).
``$\omega$ of universe'' $=\ \pi/60$ (recalibrated version of Map 2
applies). Spinning dervishes are artificially sped up (``artificial
$\omega$ of dervishes'' $=\ \pi/15$).}
\end{figure}
\begin{figure}[hbtp]
\setlength{\unitlength}{0.065 truemm} \small
\begin{center}
\begin{picture}(2580,3320)(0,0)
\epsfysize = 3320 \unitlength \epsfbox{5bhastanc.eps}
\end{picture}
\end{center}
\caption{\label{5bhastanc-fig} Dervish view in dual GU (Choice~2).
``$\omega$ of universe'' $=\ \pi/60$ (recalibrated version of Map 2
applies). Spinning dervishes are artificially sped up (``artificial
$\omega$ of dervishes'' $=\ \pi/15$).}
\end{figure}
\begin{figure}[hbtp]
\setlength{\unitlength}{0.065 truemm} \small
\begin{center}
\begin{picture}(2380,3300)(0,0)
\epsfysize = 3300 \unitlength \epsfbox{ujlabda1.eps}
\end{picture}
\end{center}
\caption{\label{ujlabda1-fig} Choice~2 dervish view. ``Fast''
gyroscope lines. ``$\omega$ of universe'' $=\ \pi/60$ (recalibrated
version of Map 2 applies). Spinning dervishes are artificially sped
up (``artificial $\omega$ of dervishes'' $=\ \pi/15$).}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.938 truemm} \small
\begin{center}
\begin{picture}(180,200)(0,0)
\epsfysize = 200 \unitlength \epsfbox{ujlabda3.eps}
\end{picture}
\end{center}
\caption{\label{ujlabda3-fig}
This belongs to the previous two figures involving
gyroscope lines: Schematic paths of \underbar{gyroscopes-directed}
test-particles. Such particles can be visualized as small spaceships
whose pilots follow (the direction shown by) their gyroscopes
strictly. Choice~2.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.1 truemm} \small
\begin{center}
\begin{picture}(1600,1860)(0,0)
\epsfysize = 1860 \unitlength \epsfbox{2agodtorta.eps}
\end{picture}
\end{center}
\caption{\label{2agodtorta-fig} GU, Choice~2. $\omega=\pi/30$, Map 2
applies.}
\end{figure}
\vfill\eject
\newpage
\section{Metric tensors and some literature.}
\label{literature-section}
\subsection{The metric tensor of the Naive GU.}
The linear element in the Naive Spiral World is
$$
{\sf ds}^2=-\frac{1-r^2\omega^2}{(1+r^2\omega^2)^2}\,{\sf dt}^2+{\sf
dr}^2+{\sf dz}^2 +
\frac{r^2(1-r^2\omega^2)}{(1+r^2\omega^2)^2}\,{\sf d\varphi}^2
-\frac{4r^2\omega}{(1+r^2\omega^2)^2}\,{\sf d\varphi}{\sf dt}\,.
$$
\noindent Thus the components of the metric tensor {\sf g} of the
Naive GU in the Naive Spiral World are
$$
{\sf g}_{tt} = -\frac{1-r^2\omega^2}{(1+r^2\omega^2)^2},\quad {\sf
g}_{rr} = 1,\quad {\sf g}_{zz} = 1,\quad
{\sf g}_{\varphi\varphi} =
\frac{r^2(1-r^2\omega^2)}{(1+r^2\omega^2)^2},\quad {\sf g}_{\varphi
t} = {\sf g}_{t\varphi} = -\frac{2r^2\omega}{(1+r^2\omega^2)^2}\,,
$$
\noindent and the rest of the ${\sf g}_{ij}$'s are 0.
The nonzero Christoffel symbols ${\Gamma}^i_{jk}$ are
$$
{\Gamma}^{r}_{tt} =
\frac{r\omega^2(r^2\omega^2-3)}{(1+r^2\omega^2)^3}\,,\qquad
{\Gamma}^{t}_{tr} =
\frac{(1-r^2\omega^2)r\omega^2}{(1+r^2\omega^2)^2}\,,\qquad
{\Gamma}^{\varphi}_{tr} = \frac{-2\omega}{(1+r^2\omega^2)^2r}\,,
$$
$${\Gamma}^{r}_{t\varphi} =
\frac{2r\omega(1-r^2\omega^2)}{(1+r^2\omega^2)^3}\,,\qquad
{\Gamma}^{t}_{r\varphi} =
\frac{2r^3\omega^3}{(1+r^2\omega^2)^2}\,,\qquad
{\Gamma}^{\varphi}_{r\varphi} =
\frac{1-r^2\omega^2}{(1+r^2\omega^2)^2r}\,,\qquad
$$
$${\Gamma}^r_{\varphi\varphi} =
\frac{r(3r^2\omega^2-1)}{(1+r^2\omega^2)^3}\,,\qquad \mbox{ and the
${\Gamma}^i_{kj}={\Gamma}^i_{jk}$ for the nonzero ${\Gamma}^i_{jk}$
listed above}.
$$
\noindent The scalar curvature is
$$R = 2\omega^2\frac{(2r^2\omega^2-7)}{(r^2\omega^2+1)^2}\,. $$
Now, $\Gamma_{rr}=\bar 0 =\langle 0,0,0,0\rangle$ shows that the
radial straight lines in the $xy$-planes (i.e., the lines with
direction ``{\sf dr}'') are geodesics. The life-lines of the galaxies
are of direction $\omega\mbox{\sf d}\varphi+\mbox{\sf dt}$, hence
$$
\omega^2\Gamma_{\varphi\varphi}+2\omega\Gamma_{\varphi
t}+\Gamma_{tt}=\bar 0
$$
shows that the life-lines of the distinguished observers $m_i$ are
geodesics in the Naive GU.
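The Christoffel symbols listed above, together with the two geodesic claims, can be checked mechanically with a computer algebra system. The following sympy sketch (symbol and function names are ours, chosen for illustration) recomputes them directly from the metric tensor:

```python
import sympy as sp

# Coordinates (t, r, phi, z) of the Naive Spiral World; omega is the parameter.
t, r, phi, z = sp.symbols('t r phi z')
w = sp.symbols('omega', positive=True)
X = [t, r, phi, z]
D = 1 + r**2 * w**2

# Metric tensor of the Naive GU, as listed in the text.
g = sp.zeros(4, 4)
g[0, 0] = -(1 - r**2 * w**2) / D**2           # g_tt
g[1, 1] = 1                                    # g_rr
g[2, 2] = r**2 * (1 - r**2 * w**2) / D**2      # g_phiphi
g[3, 3] = 1                                    # g_zz
g[0, 2] = g[2, 0] = -2 * r**2 * w / D**2       # g_tphi
ginv = g.inv()

def Gamma(i, j, k):
    """Christoffel symbol Gamma^i_{jk} of the metric g."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[i, l] *
        (sp.diff(g[l, k], X[j]) + sp.diff(g[l, j], X[k]) - sp.diff(g[j, k], X[l]))
        for l in range(4)))

# Spot-check three of the listed symbols.
assert sp.simplify(Gamma(2, 1, 2) - (1 - r**2*w**2) / (D**2 * r)) == 0    # Gamma^phi_{r phi}
assert sp.simplify(Gamma(1, 2, 2) - r*(3*r**2*w**2 - 1) / D**3) == 0      # Gamma^r_{phi phi}
assert sp.simplify(Gamma(0, 0, 1) - (1 - r**2*w**2)*r*w**2 / D**2) == 0   # Gamma^t_{tr}

# Radial lines are geodesics: Gamma^i_{rr} = 0 for every i.
assert all(Gamma(i, 1, 1) == 0 for i in range(4))

# Life-lines of the galaxies (direction omega*dphi + dt) are geodesics.
assert all(sp.simplify(w**2*Gamma(i, 2, 2) + 2*w*Gamma(i, 2, 0) + Gamma(i, 0, 0)) == 0
           for i in range(4))
```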
\smallskip
G\"odel wanted the distinguished observers $m_0,\dots,m_i$ to be
fully ``equivalent'' with each other. This means that $m_i$ and
$m_0$ should be indistinguishable for any choice of $m_i$. This
means that there should exist an automorphism $h_{i,0} :\langle
\Reals,{\sf g}\rangle\longrightarrow\langle \Reals,{\sf g}\rangle$
such that $h_{i,0}$ takes the life-line of $m_i$ to that of $m_0$.
Since the scalar curvature is preserved by automorphisms, this
implies that the scalar curvature should not depend on $r$ (as it
really does not depend on $r$ in G\"odel's universe as we will see
soon). This implies that in the Naive GU, the distinguished
observers $m_i$ are not fully equivalent with each other, because
the scalar curvature depends on $r$.
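The $r$-dependence of the scalar curvature used in this argument is immediate from the formula above; a minimal numerical sketch (the choice $\omega=1$ is ours and arbitrary):

```python
def R(r, w=1.0):
    """Scalar curvature of the Naive GU: R = 2 w^2 (2 r^2 w^2 - 7) / (r^2 w^2 + 1)^2."""
    return 2 * w**2 * (2 * r**2 * w**2 - 7) / (r**2 * w**2 + 1) ** 2

# R is not constant in r, so no isometry can carry the life-line of one
# distinguished observer onto that of another at a different radius.
print(R(0.0), R(1.0))  # -14.0 -2.5
```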
\smallskip
We note that the linear element in the Naive Dervish World is
$${\sf ds}^2=-\,{\sf dt}^2+{\sf dr}^2+{\sf dz}^2 +
\frac{r^2(1-r^2\omega^2)}{(1+r^2\omega^2)^2}\,{\sf d\varphi}^2 +
\frac{2r^2\omega}{(1+r^2\omega^2)}\,{\sf d\varphi}{\sf dt}\,.
$$
\bigskip
\subsection{The metric tensor of G\"odel's universe GU.}
G\"odel in \cite[p.275]{Go96}, \cite[p.195]{Gcwii} and elsewhere
defines his universe by presenting the ``linear element'' (i.e.\ the
``metric tensor field'') as
\begin{description}
\item[$(\star)$]
${\sf ds}^2=\frac{2}{\omega^2}[-{\sf dt}^2+{\sf dr}^2+{\sf dz}^2 +
(\sinh^2r-\sinh^4r){\sf d\varphi}^2 + 2\sqrt{2}\sinh^2r{\sf
d\varphi}{\sf dt}]\,.$
\end{description}
This is understood in the
{\em cylindric-polar coordinates} $\langle t^d, r^d, \varphi^d,
z^d\rangle$ of the dervish world we discussed in
Sections~\ref{dervish-section},\ref{technical-section}. Cf.\
Figure~\ref{koor-fig}. Instead of $\frac{2}{\omega^2}$, G\"odel
writes $4a^2$ but in our notational system these two constants are
basically the same. (One can interpret G\"odel's $a$ as
$a=\frac{1}{\sqrt{2}}\omega$.%
\footnote{Cf.\ item (9) on p.191 in G\"odel~\cite{Gcwii}.} Anyway,
$a$ and $\omega$ are only ``parameters''.) Other differences are
that G\"odel used the $+---$ sign-convention and we also made a
$\varphi\to-\varphi$ coordinate transformation so as to use the same
form of G\"odel's metric that Lathrop-Teglas~\cite{Lath} uses. In
tensorial form, $(\star)$ can be written by specifying that
G\"odel's metric tensor field $\frac{2}{\omega^2}{\sf g}$ is defined
by
\begin{description}
\item[]
${\sf g}_{tt} = 1\,,\qquad {\sf g}_{rr} = -1$\,,\qquad
${\sf g}_{\varphi\varphi} = (\sinh^4r - \sinh^2r)$\,,\qquad ${\sf
g}_{\varphi t}=\sqrt{2}\sinh^2r$\,,\qquad ${\sf g}_{zz}=-1$\,,
\item[]
${\sf g}_{t\varphi}={\sf g}_{\varphi t}$, and the rest of the ${\sf
g}_{ij}$'s are 0.
\end{description}
Clearly, ${\sf g}(p)$ is a function of $p=\langle
t,r,\varphi\rangle$, but only ${\sf g}_{\varphi\varphi}$ and ${\sf
g}_{\varphi t}$ depend on $p$\,. Further, of the parts of $p$, they
depend only on $r_p$\,. This is caused by the
symmetries of our space-time, i.e.\ rotation along $\varphi$ and
translation along $t$ are automorphisms of GU (both for all versions
of GU herein as well as in G\"odel's
quoted%
\footnote{There are papers of G\"odel in which these symmetries fail
(for rotating universes), cf.\ e.g.\ \cite[p.208]{Gcwii}.}
papers).
Notice that in the Naive Dervish World, both {\sf
g}$_{\varphi\varphi}$ and {\sf g}$_{t\varphi}$ tend to constants as
$r$ tends to infinity while in G\"odel's Dervish World they both
tend to infinity as $r$ tends to infinity. This is why we refined
our Naive GU to obtain the Tilted GU.
Lathrop-Teglas~\cite{Lath} presents G\"odel's universe in so-called
Fermi coordinates. This means that the $t$ axis as well as the radial
lines are geodesics and the gyroscopes (i.e., compasses of inertia)
of $m_0$ are not rotating. This is a spiral world where the cosmic
compasses are replaced with compasses of inertia. It is very similar
to Refined (Choice 2) Spiral World depicted in
Figure~\ref{ujvis1-fig}. Indeed, \cite{Lath} obtains this metric
from $(\star)$ above by the following coordinate transformation.
Below $t',r',z',\varphi'$ are the new coordinates, $t,r,z,\varphi$
are the coordinates used in ($\star$) and
$c=\frac{\sqrt{2}}{\omega}$.
$$t'=ct,\qquad r'=cr,\qquad z'=cz,\qquad \varphi'=\omega
t'-\varphi\,.$$
This is the transformation from forward tilted (Choice 1) Dervish
World to backward tilted (Choice 2) Spiral World (apart from
multiplying with a constant $c$). From now on, for simplicity, we
write $t,r,\varphi,z$ for $t',r',\varphi',z'$.
Let us use the notation
$$\mbox{sh}=\sinh(\frac{\omega}{\sqrt{2}}\,r)\qquad\mbox{and}\qquad
\mbox{ch}=\cosh(\frac{\omega}{\sqrt{2}}\,r)\,.
$$ Now, the ``linear element'' (i.e.\ the ``metric tensor field'') of
G\"odel's universe in Fermi coordinates is
$${\sf ds}^2=-(1+2\mbox{sh}^2\mbox{ch}^2){\sf dt}^2+{\sf dr}^2+{\sf dz}^2 +
\frac{2}{\omega^2}\,\mbox{sh}^2(1-\mbox{sh}^2){\sf d\varphi}^2 +
\frac{4}{\omega}\,\mbox{sh}^4{\sf d\varphi}{\sf dt}\,.$$
\goodbreak
The nonzero Christoffel symbols ${\Gamma}^i_{jk}$ are
$$
{\Gamma}^{r}_{tt} = \omega\sqrt{2}\mbox{sh}\mbox{ch}(2\mbox{ch}^2-1)\,,\qquad
{\Gamma}^{t}_{tr} = \omega\sqrt{2}\mbox{sh}\mbox{ch}\,,\qquad
{\Gamma}^{\varphi}_{tr} = \omega^2\sqrt{2}\mbox{sh}\mbox{ch}\,,
$$
$${\Gamma}^{r}_{t\varphi} = -2\sqrt{2}\mbox{sh}^3\mbox{ch}\,,\qquad
{\Gamma}^{t}_{r\varphi} = -\frac{\sqrt{2}\,\mbox{sh}^3}{\mbox{ch}}\,,\qquad
{\Gamma}^{\varphi}_{r\varphi} =
\frac{-\omega(2\mbox{ch}^4-4\mbox{ch}^2+1)}{\sqrt{2}\mbox{sh}\mbox{ch}}\,,\qquad
$$
$${\Gamma}^r_{\varphi\varphi} =
\frac{\sqrt{2}\mbox{sh}\mbox{ch}(2\mbox{ch}^2-3)}{\omega}\,,\qquad \mbox{ and the
${\Gamma}^i_{kj}={\Gamma}^i_{jk}$ for the nonzero ${\Gamma}^i_{jk}$
listed above}.
$$
\noindent The scalar curvature is
$$R = 2\omega^2\,. $$
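The Fermi-coordinate Christoffel symbols can be spot-checked in the same mechanical way; the sketch below (symbol names ours) probes three of the listed entries, falling back on a numerical probe because hyperbolic simplification in sympy can be brittle:

```python
import sympy as sp

# Fermi-coordinate form of Goedel's metric, as displayed above.
t, r, phi, z = sp.symbols('t r phi z')
w = sp.symbols('omega', positive=True)
X = [t, r, phi, z]
sh = sp.sinh(w * r / sp.sqrt(2))
ch = sp.cosh(w * r / sp.sqrt(2))

g = sp.zeros(4, 4)
g[0, 0] = -(1 + 2 * sh**2 * ch**2)
g[1, 1] = 1
g[2, 2] = (2 / w**2) * sh**2 * (1 - sh**2)
g[3, 3] = 1
g[0, 2] = g[2, 0] = (2 / w) * sh**4
ginv = g.inv()

def Gamma(i, j, k):
    """Christoffel symbol Gamma^i_{jk} of the metric g."""
    return sum(sp.Rational(1, 2) * ginv[i, l] *
               (sp.diff(g[l, k], X[j]) + sp.diff(g[l, j], X[k]) - sp.diff(g[j, k], X[l]))
               for l in range(4))

def is_zero(expr):
    """Try symbolic simplification first; fall back to a numeric probe."""
    e = sp.simplify(sp.expand(expr.rewrite(sp.exp)))
    if e == 0:
        return True
    return float(abs(e.subs({w: 1, r: sp.Rational(7, 10)}).evalf())) < 1e-10

# Spot-check three of the listed symbols.
assert is_zero(Gamma(0, 0, 1) - sp.sqrt(2) * w * sh * ch)                # Gamma^t_{tr}
assert is_zero(Gamma(1, 0, 2) + 2 * sp.sqrt(2) * sh**3 * ch)             # Gamma^r_{t phi}
assert is_zero(Gamma(1, 2, 2) - sp.sqrt(2)*sh*ch*(2*ch**2 - 3) / w)      # Gamma^r_{phi phi}
```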
\bigskip A sample of papers investigating G\"odel's universe is
Chakrabarti-Geroch-Liang~\cite{Chak},
Chandrasekhar-Wright~\cite{Chan}, Dorato~\cite{Dorato},
G\"odel~\cite{Go49}, \cite{Go96}, Heckmann-Sch\"ucking~\cite{Heck},
Kundt~\cite{Kundt}, Lathrop-Teglas~\cite{Lath},
Malament~\cite{Mal84}, Obukhov~\cite{Obuk}, Plaue-Scherfner-de
Sousa~\cite{PlaueSS}, Sklar~\cite{Sklar}, Stein~\cite{Stein}. A
sample of books about general relativity and time (especially
relevant to the present paper) is Earman~\cite{Earm},
Gibilisco~\cite{Gib}, Gott~\cite{Gott}, Horwich~\cite{Horw},
Novikov~\cite{Novi}, O'Neil~\cite{O95}, Pickover~\cite{Picktime},
Yourgrau~\cite{Yourg}.
For more on the drag effect and its connections with Mach's
principle cf.\ e.g.\ Wald~\cite[p.89 item 3.(c), p.187 Problem 3(b),
p.319 immediately below item (12.3.17)]{Wal}. For more detail on
``drag'' and Mach cf.\ Misner-Thorne-Wheeler~\cite[\S 21.12
(entitled ``Mach's...'') and especially pp.546-548, also item B on
p.879, pp.1117, 699, 893, 1120]{MTW}. Cf.\ also d'Inverno~\cite[\S
9.2 (pp.121-124)]{Din}, Gibilisco~\cite[pp.19-123 (subtitle: Alone
in the universe)]{Gib}. Cf.\ also \cite[pp.880-1]{MTW} for nice
drawings of rotating black holes.
\label{drag-p} For the gravitational drag effect we refer to
Rindler~\cite[pp.10-13, \S\S 1.15, 1.16]{Rin}, Wald
\cite[pp.9,71,89,183,319]{Wal}, Wald~\cite[pp.32-33]{Wal77},
together with Misner-Thorne-Wheeler~\cite[\S 40.7 (pp.1117-1120), \S 33.4
(p.892), \S 21.12 (in particular p.547), p.1120 (footnote)]{MTW}.
The gravitational drag effect is related to Mach's principle as is
explained e.g.\ in \cite[\S 21.12]{MTW} and in \cite[\S 1.15 (e.g.\
p.12)]{Rin}.
Figure~\ref{ujgodel-fig} is a slightly corrected version of
Figure~31 in \label{corr-p} Hawking-Ellis~\cite{Hawel}. This picture
can also be found in Yourgrau~\cite{Yourg}.
Malament~\cite[p.99]{Mal84} pointed out that the light-cones on that
figure are tilted so much that they do not contain the vertical
lines which are the life-lines of the distinguished observers in the
dervish-world (which the figure represents). Below we include the
Figure from Malament's paper (in which the light-cones are corrected
already).
\begin{figure}[hbtp]
\setlength{\unitlength}{0.68 truemm} %
\begin{center}
\begin{picture}(209,93)(0,0)
\epsfysize = 93\unitlength
\epsfbox{Malament.eps}
\end{picture}
\end{center}
\caption{\label{Malament-fig} Figure from Malament's paper
\cite{Mal84}.}
\end{figure}
The present work is part of a broader effort for what we could
bluntly call demystifying general relativity theory and its
relatives like wormhole-theory and cosmology. More concretely, we
try to provide a purely logic based conceptual analysis for general
relativity and its relatives. One of the aims is to provide a
technically correct but easily understandable introduction to
general relativity including its most exotic reaches for the
questioning mind of the nonspecialist. A sample of works in this
general direction is \cite{AMN07}, \cite{ANW08}, \cite{Maddis},
\cite{Szdis}, \cite{SzGVC08}.\bigskip
\noindent {\bf Acknowledgements.} Thanks are due to more people for
their encouragement and help than can be listed here. Special thanks
go to Mark Hogarth, Endre Szab\'o and Csaba T\H oke. We want to
express very special thanks to Csilla N\'emeti who gave invaluable
help in getting this project going, e.g.\ she put a lot of energy,
enthusiasm, and care into drawing, discussing, and redrawing the
first versions of the first few figures of this project. Research
supported by National Fund for Basic Research OTKA No.\ 73601. A.\
Andai was supported by Japan Society for the Promotion of Science,
contract number P 06917.
\bigskip
\hfill\eject
\section{Appendix: technical details for the constructions.}
\label{technical-section}
\noindent {\bf Connections between our spiral coordinate system}
$\langle t,x,y,z\rangle=\langle t^s,\dots,z^s\rangle$ {\bf and
co-rotating (dervish) coordinate system} $\langle t',x',y',z'\rangle
= \langle T^r,X^r, Y^r, Z^r\rangle$:
\bigskip
\bigskip
\label{coord1}
By definition, $t'=t$ and $z'=z$. Throughout we suppress the
irrelevant spatial coordinate $z$. Below, instead of the Cartesian
systems $\langle t,\dots,y\rangle,\langle t',\dots,y'\rangle$ we use
their cylindric-polar-coordinates variants $\langle
t,\varphi,r\rangle$
and $\langle t',\varphi',r'\rangle$.%
\footnote{Cf.\ e.g.\ d'Inverno~\cite[Fig.19.2 (p.253)]{Din}.} The
connections are the usual standard ones, e.g.\ $r=\sqrt{x^2+y^2}$,
$y=r\cdot\cos(\varphi)$, $x=r\cdot\sin(\varphi)$,
$\varphi=\arctan(x/y)$. In more detail, $r(p)=\sqrt{x(p)^2+y(p)^2}$
etc. $\langle t^s,\varphi^s,r^s\rangle:=\langle t,\varphi,r\rangle$
and $\langle T^r,\varphi^r,r^r\rangle=\langle
t^{der},\varphi^{der},r^{der}\rangle=\langle t',\varphi',r'\rangle$.
Here $s$ abbreviates ``spiral'' and ``der'' abbreviates ``dervish''.
The ``galaxies'' $m_1, m_2,\dots, m_i$ appear as rotating around
$m_0$ in direction $\varphi$ with angular velocity $\omega$ in
$\langle t^s,\dots\rangle$ while their cosmic compasses $x_i, y_i$
appear fixed (non-rotating). By contrast, $\langle
T^r,\dots\rangle$ shows $m_1,\dots, m_i$ as motionless, while it
shows their cosmic compasses
as rotating in direction $-\varphi$ with angular velocity
$\omega$. We use $p$ to denote an arbitrary point which has
coordinates $t(p), \varphi(p), r(p)$ etc. We represent these simple
connections in Figures~\ref{koor-fig}--\ref{koor4-fig}. As we said,
we suppress coordinate $z$. In Figure~\ref{koor-fig} below
(p.\pageref{koor-fig}) we regarded only such points $p$ which are on
the cylinder $r(p)=1$. Generalizing to arbitrary points is trivial
since $r$ does not change. As it is obvious from the picture, the
transformation ``spiral'' $\mapsto$ ``dervish'' is
\begin{description}
\item[]
$\varphi^d(p) = \varphi^s(p) - \omega\cdot t^s(p)$
\item[]
$r^d(p) = r^s(p)$
\item[]
$t^d(p) = t^s(p)$
\item[]
$z^d(p) = z^s(p)$. Clearly,
\item[]
$\varphi^s(p) = \varphi^d(p) + \omega\cdot t^d(p)$.
\end{description}
The angular velocity of the rotation of the universe as seen by
$\langle t^s,\dots\rangle$ is $\omega$.
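The transformation and its inverse are simple enough to exercise in a few lines of code; a minimal sketch (function names and the sample values are ours):

```python
OMEGA = 0.25  # arbitrary illustrative angular velocity (exactly representable)

def spiral_to_dervish(t, phi, r, z, omega=OMEGA):
    """phi^d = phi^s - omega * t; the coordinates t, r, z are unchanged."""
    return t, phi - omega * t, r, z

def dervish_to_spiral(t, phi, r, z, omega=OMEGA):
    """phi^s = phi^d + omega * t; the coordinates t, r, z are unchanged."""
    return t, phi + omega * t, r, z

# The round trip returns the original coordinates of a point p.
p = (12.0, 0.5, 1.0, 0.0)
assert dervish_to_spiral(*spiral_to_dervish(*p)) == p
```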
\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{0.7 truemm} \small
\begin{center}
\begin{picture}(240,280)(0,0)
\put(92,144){\makebox(0,0)[lb]{$t^{\rm d}(p)=t^{\rm s}(p)$}}
\put(160,123){\makebox(0,0)[l]{$p$}}
\put(76,71){\makebox(0,0)[rb]{$1_{r}^{\rm d}=1_{r}^{\rm s}$}}
\put(88,80){\makebox(0,0)[rb]{$\bar 0$}}
\put(157,60){\makebox(0,0)[lt]{$\varphi^{\rm s}(p)$}}
\put(88,48){\makebox(0,0)[lt]{$\varphi^{\rm d}(p)$}}
\put(70,42){\makebox(0,0)[t]{$\varphi^{\rm d}(p)$}}
\put(110,34){\makebox(0,0)[l]{$\varphi^{\rm s}(p)$}}
\put(176,138){\makebox(0,0)[l]{\shortstack[l]{life-line of $m_i$\\(a
galaxy)}}} \put(86,160){\makebox(0,0)[r]{$1_{t}^{\rm s}=1_{t}^{\rm
d}$}} \put(95,280){\makebox(0,0)[lt]{$t^{\rm s}=t^{\rm d}$}}
\put(26,15){\makebox(0,0)[l]{\shortstack[l]{direction of expected
rotation of \\cosmic compasses\\
in the $\langle t^{\rm d},\varphi^{\rm d},r^{\rm d}\rangle$
coordinate system}}}
\put(160,15){\makebox(0,0)[l]{\shortstack[l]{direction of rotation
of universe\\ (i.e. of
distant galaxies) w.r.t.\\
cosmic compasses i.e.\ in $\langle t^{\rm s},
\varphi^{\rm s},r^{\rm s}\rangle$}}}
\put(2,278){\makebox(0,0)[lt]{\shortstack[l]{View \underbar{from}
the
\underbar{spiral coordinate}\\
\underbar{system} $\langle t^{\rm s},\varphi^{\rm s},r^{\rm s}\rangle$:}}}
\put(122,184){\makebox(0,0)[b]{\framebox{\large $\varphi^{\rm d}=0$}}}
\put(35,126){\makebox(0,0)[r]{\framebox{\normalsize $\varphi^{\rm
s}=0$}}} \put(182,95){\makebox(0,0)[l]{$t^{\rm s}(p)$}}
\put(198,40){\makebox(0,0)[r]{$\varphi^{\rm d}(p)=\varphi^{\rm
s}(p)-\omega\cdot t^{\rm s}(p)$}}
\put(132,57){\makebox(0,0)[b]{$1_{\varphi}^{\rm s}$}}
\put(138,138){\makebox(0,0)[b]{$\omega$}}
\put(35,35){\makebox(0,0)[r]{$r$}} \put(53,236){\makebox(0,0)[r]{the
$\varphi^{\rm s}=0$ plane}} \epsfysize = 280 \unitlength
\epsfbox{koor.eps}
\end{picture}
\end{center}
\caption{\label{koor-fig}
As throughout this work, here too, the irrelevant spatial coordinates
$z^{\rm d}=z^{\rm s}=z_{\rm i}=z$ are suppressed.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{0.069 truemm} \small
\begin{center}
\begin{picture}(2420,2980)(0,0)
\put(1270,1300){\makebox(0,0)[rb]{$p$}}
\put(20,2960){\makebox(0,0)[lt]{\shortstack[l]{View \underbar{from}
the \underbar{dervish
coordinate}\\
\underbar{system} $\langle t^{\rm d},\varphi^{\rm d},r^{\rm d}\rangle$:}}}
\put(1380,2970){\makebox(0,0)[lt]{$t_0=t^{\rm d}=t^{\rm s}$}}
\put(1620,2560){\makebox(0,0)[l]{\shortstack[l]{direction of rotation\\
of dervishes i.e.\ of\\
cosmic compasses}}}
\put(1590,1760){\makebox(0,0)[l]{\framebox{\shortstack[l]{a dervish
co-rotating \\ with ``$\varphi^{\rm s}=0$ surface''\\ i.e. with spiral\\
coordinate system}}}}
\put(340,1190){\makebox(0,0)[r]{\framebox{\large $\varphi^{\rm s} =0$}}}
\put(870,1460){\makebox(0,0)[r]{\framebox{\normalsize $\varphi^{\rm
d}=0$}}} \put(1145,1000){\makebox(0,0)[r]{$t^{\rm d}(p)=t^{\rm
s}(p)$}} \put(1450,850){\makebox(0,0)[l]{$1_{r}^{\rm d}=1_{r}^{\rm
s}$}} \put(1790,760){\makebox(0,0)[lb]{$1_{\varphi}^{\rm d}$}}
\put(2040,780){\makebox(0,0)[lt]{$\varphi^{\rm s}(p)$}}
\put(1310,660){\makebox(0,0)[lt]{$\varphi^{\rm d}(p)$}}
\put(820,540){\makebox(0,0)[r]{$r$}}
\put(1150,560){\makebox(0,0)[t]{$\varphi^{\rm d}(p)$}}
\put(1560,430){\makebox(0,0)[t]{$\varphi^{\rm s}(p)$}}
\put(260,400){\makebox(0,0)[l]{\shortstack[l]{direction of \\
rotation of \\
cosmic compasses
in the \\$\langle t^{\rm d},\varphi^{\rm d},r^{\rm d}\rangle$
coordinate system}}}
\put(260,120){\makebox(0,0)[l]{\shortstack[l]{direction of rotation
of universe\\
(i.e.\ of distant galaxies) w.r.t.\ cosmic compasses \\ i.e.\
in $\langle t^{\rm s},\varphi^{\rm s},r^{\rm s}\rangle$}}}
\put(1270,2810){\makebox(0,0)[r]{\large $x_0$}}
\put(1410,2820){\makebox(0,0)[lb]{\large $y_0$}}
\put(1280,1890){\makebox(0,0)[r]{\large $y_0$}}
\put(1320,1850){\makebox(0,0)[rt]{\large $x_0$}}
\put(1360,1080){\makebox(0,0)[r]{$t_0$}}
\put(1450,965){\makebox(0,0)[l]{\large $x_0$}}
\put(1330,940){\makebox(0,0)[t]{\large $y_0$}}
\put(1380,1740){\makebox(0,0)[lb]{$1_t$}}
\put(1370,1545){\makebox(0,0)[lb]{$m_0$}}
\put(1370,2545){\makebox(0,0)[lb]{$m_0$}}
\put(920,1800){\makebox(0,0)[r]{life-line of $m_i$}}
\put(1800,500){\makebox(0,0)[l]{$\varphi^{\rm s}(p)=\varphi^{\rm
d}(p)+\omega\cdot t^{\rm d}(p)$}}
\put(760,1600){\makebox(0,0)[lb]{$-\omega$}}
\epsfysize =2980 \unitlength \epsfbox{koor5.eps}
\end{picture}
\end{center}
\caption{\label{koor5-fig} Dervish view of spiral world, i.e.\
backward transformation $\langle t^{\rm der},\ldots\rangle\longrightarrow\langle
t^{\rm spi},\ldots\rangle$. Notice that the $t=0$ plane in this figure
coincides with that of previous figure (e.g.\ marked points are the
same on the two).}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{0.57 truemm} \small
\begin{center}
\begin{picture}(280,370)(0,0)
\epsfysize = 370\unitlength \epsfbox{koor4.eps}
\end{picture}
\end{center}
\caption{\label{koor4-fig} \label{coord2} }
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.3 truemm} \small
\begin{center}
\begin{picture}(540,680)(0,0)
\epsfysize = 680 \unitlength \epsfbox{1negy.eps}
\end{picture}
\end{center}
\caption{\label{1negy-fig} Details of observer $m_1$.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.3 truemm} \small
\begin{center}
\begin{picture}(540,700)(0,0)
\epsfysize = 700 \unitlength \epsfbox{10negy.eps}
\end{picture}
\end{center}
\caption{\label{10negy-fig} Details of observer $m_2$. }
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.24 truemm} \small
\begin{center}
\begin{picture}(680,880)(0,0)
\epsfysize = 880 \unitlength \epsfbox{unegyzet.eps}
\end{picture}
\end{center}
\caption{\label{unegyzet-fig} Details of observer $m_3$.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.248 truemm} \small
\begin{center}
\begin{picture}(680,720)(0,0)
\epsfysize = 720 \unitlength \epsfbox{13negy.eps}
\end{picture}
\end{center}
\caption{\label{13negy-fig} Details of observer $m_4$.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.2 truemm} \small
\begin{center}
\begin{picture}(840,960)(0,0)
\epsfysize =960 \unitlength \epsfbox{3negy.eps}
\end{picture}
\end{center}
\caption{\label{3negy-fig} Details for observer $m_5$.}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hbtp]
\setlength{\unitlength}{0.17 truemm} \small
\begin{center}
\begin{picture}(960,1200)(0,0)
\epsfysize = 1200 \unitlength \epsfbox{4negy.eps}
\end{picture}
\end{center}
\caption{\label{4negy-fig} Details for observer $m_6$.}
\end{figure}
\begin{figure}[!hp]
\setlength{\unitlength}{0.21 truemm} \small
\begin{center}
\begin{picture}(390,985)(0,0)
\epsfysize = 985 \unitlength \epsfbox{map1.eps}
\end{picture}
\end{center}
\caption{\label{map1-fig} Map 1}
\end{figure}
\vfill\eject\newpage
\begin{figure}[!hp]
\setlength{\unitlength}{0.31 truemm} \small
\begin{center}
\begin{picture}(260,656)(0,0)
\epsfysize =656 \unitlength \epsfbox{map2.eps}
\end{picture}
\end{center}
\caption{\label{map2-fig} Map 2}
\end{figure}
\vfill\eject\newpage
\bibliographystyle{plain}
\label{references}
\section{Introduction}
Duquennoy \& Mayor (1991) found that only 1/3 of the 164 G dwarf
primary stars within 22 pc of the sun might be true single stars, that is,
stars having no companions more massive than 0.01 $M_\odot$. About 2/3 of
the G dwarf primaries are thus members of binary or multiple star systems.
The binary frequency for M dwarf primaries within 20 pc is somewhat lower,
no more than 1/2 (Fischer \& Marcy 1992). As a result, it appears that
roughly half the nearby primary stars are single stars. Considering that
the other half have at least one more stellar companion, less than a third
of all of the nearby stars are single stars like the sun. Given the drive
to detect and characterize Earth-like planets around the closest stars,
it is clear that binary stars need to be as thoroughly scrutinized as
single stars to see if they might also be hospitable abodes for
habitable planets.
Binary stars have been included on radial velocity planet searches for
quite some time, beginning with the pioneering search by Walker et al.
(1995). Over 20 years of data has strengthened the case for a planet with a
minimum mass of 1.7 $M_J$ (Jupiter masses) orbiting with a semimajor axis
of 2.13 AU around $\gamma$ Cephei A (Walker et al. 1992; Hatzes et al. 2003).
The $\gamma$ Cephei binary system has an orbital period of $\sim$ 57 yrs,
implying an orbital separation of $\sim$ 18.5 AU (Hatzes et al. 2003).
Several other binary systems with separations of $\sim$ 20 AU appear to have
planetary companions, Gl 86 and HD 41004 A (Eggenberger et al. 2004).
However, of the binary or multiple systems with planets detected to
date, most of the systems are considerably wider, with semimajor axes
ranging from $\sim$ 100 AU to $\sim$ 1000 AU or larger (Eggenberger et al.
2004; Mayor et al. 2004; Mugrauer et al. 2004; Halbwachs et al. 2005).
Three of the planet host stars are members of hierarchical triple systems,
HD41004 A, HD 178911 B (Zucker et al. 2002), and 16 Cygni B, with the planet
orbiting the single member of the triple system. Searches are underway for
unknown binary companions to planet host stars, with the consequence
being that the number of planets found in binary or multiple
star systems is likely to increase as more companions are detected
(Patience et al. 2002; Mugrauer et al. 2004). Currently there are
at least 29 known binary or triple star systems with extrasolar planets
(M. Mugrauer 2004, private communication).
Theoretical work on planet formation in binary systems has been
minimal because of the decades-long focus on understanding the
origin of the solar system. The discovery of extrasolar planets
in binary systems has now enlarged the theoretical realm to include
binary stars as well. Marzari \& Scholl (2000) and
Barbieri, Marzari, \& Scholl (2002) modeled the
formation of terrestrial planets in the $\alpha$ Centauri
binary star system, with a separation of $\sim$ 24 AU. They found
that while gravitational perturbations by the binary companion could
excite the eccentricities (and hence relative velocities) of planetesimals
to values high enough to halt growth, the presence of gas drag
introduces an orbital phasing that minimizes their relative
velocities and allow collisions to lead to growth rather than to
fragmentation, at least close ($\sim$ 2 AU) to one of the binary stars.
Using a symplectic integrator developed by Chambers et al. (2002),
Quintana et al. (2002) modeled the final phase of growth of planetary
embryos into terrestrial planets in the $\alpha$ Centauri system,
finding that multiple terrestrial planets could form, provided
that the protoplanetary disk was inclined by no more than 60 degrees
to the plane of the binary system. Kortenkamp, Wetherill, \& Inaba
(2001) found that a binary companion could serve as a
source of orbital eccentricities leading to runaway growth
of planetary embryos into terrestrial planets, hastening the
formation process, as was also found by Quintana et al. (2002).
Moriwaki \& Nakagawa (2004) extended the study of planetesimal
accretion to {\it circumbinary} protoplanetary disks, finding that for
a 1 AU binary separation and eccentricity $e$ = 0.1, planetesimals could only
grow outside of 13 AU. Nelson (2003) studied the orbital evolution
of gas giant planets formed in circumbinary disks, finding that evolution
can lead to either ejection of the planet or to a stable orbit.
Marzari, Weidenschilling, Barbieri, \& Granata (2005) studied the orbital
evolution of gas giant planets orbiting one of the stars in a binary
system, finding that unstable initial conditions resulted in the hyperbolic
ejection of one or more planets, with the remaining planet being left
behind on an eccentric, shorter-period orbit.
Th\'ebault et al. (2004) examined the formation of $\gamma$ Cephei's
gas giant planet in the core accretion scenario (Mizuno 1980), subject
to the gravitational perturbations of the binary companion on a moderately
eccentric ($e$ = 0.36) orbit. They found that with a massive gaseous
disk, needed to achieve orbital phasing, a 10 $M_\oplus$ core
could grow in $\sim$ 10 Myr, but that the core always ended up
at 1.5 AU, rather than out at the observed 2.1 AU.
Nelson (2000) modeled the thermal and hydrodynamical evolution of
protoplanetary disks in an equal-mass binary system with a semimajor
axis of 50 AU and $e$ = 0.3. The model was chosen to represent
the L1551 IRS5 binary protostar system, where 0.05 $M_\odot$ disks orbit a
pair of 0.5 $M_\odot$ protostars (Rodr\'iguez et al. 1998). Nelson (2000)
found that following each periastron, the disks were heated by internal
shocks to such an extent that disk temperatures increased enough (to
$\sim$ 200 K at 10 AU) to not only prevent gas giant planet formation
by disk gravitational instability (Boss 1997), but also enough to
vaporize volatile solids and thereby prevent gas giant planet
formation by core accretion (Mizuno 1980). Nelson (2000) concluded that
``planet formation is unlikely in equal-mass binary systems
with $a \sim$ 50 AU.''
Given the existence of several gas giant planets in binary
systems with separations of 20 AU or less, the negative results of
Th\'ebault et al. (2004) and Nelson (2000) regarding the formation
of gas giant planets in binary systems clearly call for a re-examination
of this important question. The main thrust of this paper is
to present radiative hydrodynamical models of the disk instability
mechanism for giant planet formation (Boss 1997, 2001, 2002a,b,
2003) that add in the effects of a binary star companion.
Recent calculations with very high spatial resolution have shown
that the disk instability mechanism appears to become increasingly
vigorous as the continuum limit is approached (Boss 2005), and
furthermore that planets formed by this mechanism are relatively
immune to loss by orbital migration during a phase of gravitational
instability. We shall see that disk instability appears to be
capable of leading to the rapid formation of gas giant planets
in binary systems with a range of semimajor axes, provided that
the disk midplanes are cooled on an orbital time scale by vertical
convection, as is indicated by similarly detailed models (Boss 2004a).
In fact, binary companions appear to be able to stimulate the
formation of self-gravitating protoplanets in otherwise
stable disks.
\section{Numerical Methods}
The numerical calculations were performed with a finite volume
hydrodynamics code that solves the three dimensional equations of
hydrodynamics and the Poisson equation for the gravitational
potential. The same code has been used in many previous studies of
disk instability (Boss 2001, 2002a,b, 2003, 2004a, 2005) and has been shown
to be second-order-accurate in both space and time through convergence
testing (Boss \& Myhill 1992). The code has been tested on a variety of
test cases (Boss \& Myhill 1992), including the nonisothermal test case for
protostellar collapse (Myhill \& Boss 1993). Bodenheimer et al. (2000)
found that the results obtained with this code agreed well with those
of an adaptive mesh refinement (AMR) code on isothermal collapse
calculations.
The equations are solved on a spherical coordinate grid with $N_r = 101$,
$N_\theta = 23$ in $\pi/2 \ge \theta \ge 0$, and $N_\phi = 256$ or 512.
The radial grid is uniformly spaced with $\Delta r = 0.16$ AU
between 4 and 20 AU. The $\theta$ grid is compressed into the midplane to
ensure adequate vertical resolution ($\Delta \theta = 0.3^\circ$ at the midplane).
The $\phi$ grid is uniformly spaced, to prevent any bias in the azimuthal
direction. The central protostar wobbles in response to the growth of
nonaxisymmetry in the disk, thereby preserving the location of
the center of mass of the star and disk system. The number of terms in the
spherical harmonic expansion for the gravitational potential of the disk
is $N_{Ylm} = 32$ or 48. The Jeans length criterion (Boss 2002b)
is used to ensure that the clumps that form are not numerical
artifacts: even at the maximum clump densities, the numerical grid
spacings in all three coordinate directions remain less than 1/4 of
the local Jeans length.
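The Jeans length check described above can be sketched as follows. The Jeans length formula (isothermal sound speed, mean molecular weight $\mu \approx 2.33$) is an assumed standard form, since the text does not quote the exact expression of Boss (2002b), and the function names are hypothetical:

```python
import math

G, K_B, M_H = 6.674e-8, 1.380649e-16, 1.6726e-24  # cgs constants
MU = 2.33  # assumed mean molecular weight for an H2/He mixture

def jeans_length(rho, temp):
    """lambda_J = c_s * sqrt(pi / (G rho)), an assumed standard form,
    with an isothermal sound speed."""
    c_s = math.sqrt(K_B * temp / (MU * M_H))
    return c_s * math.sqrt(math.pi / (G * rho))

def resolved(spacings_cm, rho, temp):
    """Jeans criterion check: every local grid spacing must stay
    below one quarter of the local Jeans length."""
    quarter = 0.25 * jeans_length(rho, temp)
    return all(h < quarter for h in spacings_cm)
```

For illustrative values of $\rho \approx 2 \times 10^{-9}$ g cm$^{-3}$ and $T \approx 200$ K (assumptions, not quoted clump conditions), $\lambda_J/4$ comes out near 0.22 AU, above the 0.16 AU radial grid spacing.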
The boundary conditions are chosen at both 4 and 20 AU to absorb radial
velocity perturbations rather than to reflect mass and momentum back
into the main grid (Boss 1998). Mass and linear or angular momentum entering
the innermost shell of cells at 4 AU are added to the central protostar
and thereby removed from the hydrodynamical grid. No matter is allowed
to flow outward from the central cell back onto the main grid. Similarly,
mass and momentum that reaches the outermost shell of cells at 20 AU
piles up in this shell with zero radial velocity and is not allowed
to return to the main grid. The outermost gas does however continue to exert
gravitational forces on the rest of the disk.
As in Boss (2001, 2002a,b, 2003, 2004a, 2005), the models treat radiative
transfer in the diffusion approximation, which should be valid near the disk
midplane and throughout most of the disk, because of the high vertical optical
depths. The divergence of the radiative flux term is set equal to zero
in regions where the
optical depth $\tau$ drops below 10, in order to ensure that the diffusion
approximation does not affect the solution in regions where it is not
valid. As a result, it has not been found necessary to include a flux-limiter
in the models (Boss 2001). The energy equation is solved explicitly in
conservation law form, as are the four other hydrodynamic equations.
Further details about the code may be found in Boss (2002b).
\section{Artificial Viscosity}
Artificial viscosity has not been used in the previous disk
instability models published by Boss (2001, 2002a,b, 2003, 2004a,
2005), but it has been included in a few models presented here
in order to explore its effects on clump formation. The implicit
artificial viscosity of this second-order accurate code, coupled
with small time steps (a result in part of the use of the spherical
coordinate system, rather than cylindrical coordinates), is sufficient
to maintain stability of the code even in the presence of the strong
shocks driven by binary companions.
Artificial viscosity can be used to help stabilize numerical schemes
and to provide microphysical heating within shocks. We use a tensor
artificial viscosity (Tscharnuter \& Winkler 1979), which enters into the
momentum equations as follows, where the other source terms on the right
hand sides of these equations are suppressed for clarity,
$${\partial (\rho v_r) \over \partial t} + \nabla \cdot (\rho v_r {\vec v})
= ... - {1 \over r^3} {\partial (r^3 Q^r_r) \over \partial r},$$
$${\partial (\rho v_{\theta}) \over \partial t} + \nabla \cdot
(\rho v_{\theta} {\vec v}) = ... -
{1 \over r \sin\theta } {\partial (\sin\theta Q^\theta_\theta) \over
\partial \theta} + {Q^\phi_\phi \cot\theta \over r},$$
$${\partial (\rho A) \over \partial t} + \nabla \cdot (\rho A {\vec v}) =
... - {\partial Q^\phi_\phi \over \partial \phi},$$
\noindent
where $\rho$ is the mass density, ${\vec v} = (v_r, v_{\theta}, v_{\phi})$
is the velocity, $A = r \sin\theta \, v_{\phi}$ is the specific angular momentum,
and the $Q^r_r$, $Q^\theta_\theta$, and $Q^\phi_\phi$
terms are the components of the artificial viscosity
tensor. The artificial viscosity tensor is set equal to zero when
the divergence of the velocity field ($\nabla \cdot {\vec v}$) is positive
(i.e., in expanding regions), and when the divergence is negative,
is defined to be
$$ Q^r_r = l_r^2 \ \rho \ \nabla \cdot {\vec v} \
( {\partial v_r \over \partial r} - {1 \over 3}
\nabla \cdot {\vec v} ),$$
$$ Q^\theta_\theta = l_\theta^2 \ \rho \ \nabla \cdot {\vec v} \
( {1 \over r} {\partial v_\theta \over \partial \theta}
+ {v_r \over r} - {1 \over 3} \nabla \cdot {\vec v} ),$$
$$ Q^\phi_\phi = l_\phi^2 \ \rho \ \nabla \cdot {\vec v} \
( {1 \over r \sin\theta} {\partial v_\phi \over \partial \phi}
+ {v_r \over r} + {v_\theta \cot\theta \over r}
- {1 \over 3} \nabla \cdot {\vec v} ),$$
\noindent where $l_r^2 = \max (C_r r^2, C_{\Delta r} \Delta r^2)$,
$l_\theta^2 = C_\theta (r \Delta \theta)^2$, and $l_\phi^2 = C_\phi
(r \sin\theta \Delta \phi)^2$. $\Delta r$, $\Delta \theta$, and $\Delta \phi$
are the local grid spacings, $C_{\Delta r}$, $C_\theta$, and $C_\phi$
are free parameters usually set equal to 1, and $C_r$ is a free parameter
usually set equal to $10^{-4}$. The contribution to the right hand side
of the specific internal energy equation is then
$$ E_Q= - Q^r_r \varepsilon^r_r
- Q^\theta_\theta \varepsilon^\theta_\theta
- Q^\phi_\phi \varepsilon^\phi_\phi,$$
\noindent where
$$ \varepsilon^r_r = {\partial v_r \over \partial r},
\varepsilon^\theta_\theta = {1 \over r}
{\partial v_\theta \over \partial \theta} + {v_r \over r},
\varepsilon^\phi_\phi = {1 \over r \sin\theta}
{\partial v_\phi \over \partial \phi} + {v_r \over r} +
{v_\theta \cot\theta \over r}.$$
\noindent $E_Q$ is constrained to be positive or zero,
reflecting the role of the artificial viscosity as a dissipative mechanism.
When artificial viscosity is to be used, the coefficient $C_\phi$
normally is set equal to zero in order to preserve the local
conservation of angular momentum. Only selected terms from the complete
tensor have been employed here. Terms involving coupling the
$r$ and $\theta$ components with the $\phi$ components have been
dropped (i.e., $Q^r_\phi$, $Q^\phi_r$, $Q^\theta_\phi$, and $Q^\phi_\theta$
are neglected), in order to conserve angular momentum locally in a
consistent manner (see test cases in Boss \& Myhill 1992).
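The switch on the sign of $\nabla \cdot {\vec v}$ and the positivity constraint on $E_Q$ can be illustrated with a minimal sketch of the radial diagonal component; the function names are hypothetical, and only the scalar algebra of the formulas above is implemented:

```python
def q_rr(rho, div_v, dvr_dr, l_r):
    """Radial diagonal component Q^r_r: zero in expanding regions
    (div v >= 0), else l_r^2 rho (div v)(dv_r/dr - div v / 3)."""
    if div_v >= 0.0:
        return 0.0
    return l_r**2 * rho * div_v * (dvr_dr - div_v / 3.0)

def e_q(q_components, eps_components):
    """Internal-energy source E_Q = -sum_i Q^i_i eps^i_i, floored at
    zero so the artificial viscosity only ever dissipates."""
    total = -sum(q * e for q, e in zip(q_components, eps_components))
    return max(0.0, total)
```

Note that for a compressive radial flow with $\partial v_r / \partial r < \nabla \cdot {\vec v}/3$, $Q^r_r$ is positive and the resulting $E_Q$ heats the gas, as a shock-capturing viscosity should.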
\section{Initial Conditions}
The standard model consists of a $1 M_\odot$ central protostar
surrounded by a disk with a mass of 0.091 $M_\odot$ between 4 and 20 AU.
Disks with similar masses appear to be necessary to form gas giant
planets by core accretion (e.g., Inaba, Wetherill, \& Ikoma 2003).
Most models also include the gravitational forces associated with
a $1 M_\odot$ binary star companion, as described below. Note that
the initial disk model does not include the gravitational forces
from the binary companion, so the evolution proceeds as if
the binary companion has just been formed, an unrealistic
but necessary assumption that is needed in order to make progress
on this problem.
\subsection{Disk density}
The initial protoplanetary disk structure is based on
the following approximate vertical density distribution (Boss 1993) for
an adiabatic, self-gravitating disk of arbitrary thickness in
near-Keplerian rotation about a point mass $M_s$
$$ \rho(R,Z)^{\gamma-1} = \rho_0(R)^{\gamma-1}
- \biggl( { \gamma - 1 \over \gamma } \biggr) \biggl[
\biggl( { 2 \pi G \sigma(R) \over K } \biggr) Z +
{ G M_s \over K } \biggl( { 1 \over R } - { 1 \over (R^2 + Z^2)^{1/2} }
\biggr ) \biggr], $$
\noindent where $R$ and $Z$ are cylindrical coordinates, $\rho_0(R)$ is
the midplane density, $G$ is the gravitational constant, and $\sigma(R)$
is the surface density. For setting up the initial model only,
$K = 1.7 \times 10^{17}$ (cgs units) and $\gamma = 5/3$. The radial
variation of the midplane density is
$$\rho_0(R) = \rho_{04} \biggl( {R_4 \over R} \biggr)^{3/2}, $$
\noindent where $\rho_{04} = 1.0 \times 10^{-10}$ g cm$^{-3}$ and
$R_4 = 4$ AU.
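A minimal sketch of evaluating this initial structure is given below. The surface density $\sigma(R)$ is passed in as a parameter, since its functional form is not quoted here, and any specific value used with it is an assumption; the function names are hypothetical:

```python
import math

G = 6.674e-8  # cgs

def rho0(r_cm, rho04=1.0e-10, r4=4.0 * 1.496e13):
    """Midplane density law rho_0(R) = rho_04 (R_4 / R)^{3/2}."""
    return rho04 * (r4 / r_cm)**1.5

def rho_disk(r_cm, z_cm, sigma_r, rho0_r,
             m_s=1.989e33, k_const=1.7e17, gamma=5.0 / 3.0):
    """Vertical structure rho(R,Z)^{gamma-1} = rho_0^{gamma-1} - (...)
    from the text; returns zero where the bracketed term exhausts the
    midplane term (i.e., above the disk surface)."""
    bracket = ((2.0 * math.pi * G * sigma_r / k_const) * z_cm
               + (G * m_s / k_const)
               * (1.0 / r_cm - 1.0 / math.sqrt(r_cm**2 + z_cm**2)))
    val = rho0_r**(gamma - 1.0) - ((gamma - 1.0) / gamma) * bracket
    return val**(1.0 / (gamma - 1.0)) if val > 0.0 else 0.0
```

By construction the density reduces to $\rho_0(R)$ at the midplane ($Z = 0$) and falls off monotonically with height.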
\subsection{Disk temperatures}
The initial temperature profile is based on two dimensional
radiative hydrodynamics calculations (Boss 1996) and is the
same as was used in previous models (Boss 2001, 2002a,b, 2004a). A
range of outer disk temperatures are investigated, with $T_o = 40$,
50, 60, 70, or 80 K (Table 1). As a result of the initial temperature and
density profiles, the initial disks have $Q$ gravitational stability
parameters whose minima range from $Q_{min} = 1.3$ for $T_o = 40$K
to $Q_{min} = 1.9$ for $T_o = 80$K. [$Q$ is defined to be
$Q = c_s \Omega /(\pi G \sigma)$, where $c_s$ is the isothermal
sound speed, $\Omega$ is the angular velocity,
and $\sigma$ is the surface mass density of the disk.]
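The bracketed definition translates directly into code. The isothermal sound speed with an assumed $\mu = 2.33$ is an added ingredient (not quoted in the text), used here only to show that $Q$ scales as $T^{1/2}$ at fixed $\Omega$ and $\sigma$:

```python
import math

K_B, M_H, G = 1.380649e-16, 1.6726e-24, 6.674e-8  # cgs

def sound_speed(temp, mu=2.33):
    """Isothermal sound speed; mu = 2.33 is an assumed H2/He value."""
    return math.sqrt(K_B * temp / (mu * M_H))

def toomre_q(c_s, omega, sigma):
    """Q = c_s Omega / (pi G sigma), exactly as defined in the text."""
    return c_s * omega / (math.pi * G * sigma)
```

Doubling the outer disk temperature at fixed $\Omega$ and $\sigma$ therefore raises $Q$ by a factor of $\sqrt{2}$, which is why the hotter disks in Table 1 start out more gravitationally stable.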
$T_o = 80$K is considerably higher than the temperatures at which the
solar system's comets are thought to have formed -- the experiments
of Notesco \& Bar-Nun (2005) imply that cometary nuclei agglomerated
from dust grains at $\sim$ 25 K, while observations of nuclear spin
temperatures of H$_2$O in three Oort Cloud comets suggest formation
temperatures of $\sim$ 20 to $\sim$ 45 K (Dello Russo et al. 2005). The
Oort Cloud comets are thought to have formed between 5 and 40 AU, so they
provide the ground truth for theoretical models of giant planet formation,
at least in our planetary system. The outer disk temperatures of
60, 70, and 80 K were then purposely chosen to be higher than
expected for the solar nebula, in order to err on the conservative
side with regard to the outcome of a phase of disk instability.
Alternatively, models could be run with outer disk temperatures
closer to those inferred from comets, but with lower disk masses,
so that the initial values of $Q$ are again well above $\sim$ 1.5,
implying a relatively gravitationally stable initial disk. Models
starting with the same $Q$ values should evolve very similarly.
In low optical depth regions, such as in the envelope infalling
onto the disk, the temperature is assumed to be 50 K in the models,
consistent with heating by radiation at distances of order 10 AU
from a quiescent, solar-mass protostar (Chick \& Cassen 1997).
I.e., the disk surface is assumed to be immersed in a thermal bath
at a temperature of 50 K; the outer layers of the disk are thus
assumed to be able to radiate at whatever temperature is needed to
maintain this gas temperature. A more detailed calculation of
the thermal structure at the disk surface should be explored in
future models, as the surface temperature throttles disk cooling.
E.g., Chiang et al. (2001) calculated radiative, hydrostatic equilibrium
models of flared protoplanetary disks heated by radiation from their
central stars. Their two-layer disk models consisted of a disk surface
and a disk interior, with the optically thin disk surface being hotter
than the disk interior, given the assumed heat source. At a distance of
10 AU in their standard model, the disk surface temperature is
$\sim$ 100 K and the interior temperature is $\sim$ 50 K. While the
gas and dust temperatures are roughly equal inside the disk, well above
the disk's photosphere the gas temperature can reach temperatures of
$\sim 10^4$ K (Kamp \& Dullemond 2004). Mechanical heating associated
with dynamical processes in the disk midplane may be the source of
the superheated atmospheres inferred for inner protoplanetary disks
(Glassgold, Najita, \& Igea 2004). At distances of 50 AU or more,
observations imply a vertical temperature gradient, with
midplane temperatures of $\sim$ 13-20 K underlying outer layers with
temperatures of $\sim$ 30 K (Dartois, Dutrey, \& Guilloteau 2003).
\section{Binary Star Companion}
The binary models include the gravitational accelerations from a binary
star companion to the solar-mass star around that the disk orbits.
The models neglect any radiation coming from the second star in the
system.
\subsection{Tidal Potential}
The tidal potential at a position $\vec r$ due to a binary star
companion with mass $M_b$ located at $\vec r_b$ is given by
$$ \Phi_{tide}(\vec r) = - {G M_b \over | \vec r - \vec r_b |},$$
\noindent
where the binary star companion is represented
as a single point mass. The tidal
potential may then be expressed in terms of an expansion in
Legendre polynomials $P_l$ of order $l$ as
$$ \Phi_{tide}(\vec r) = - {G M_b \over r_b}
\sum^{\infty}_{l = 0} \biggl( {r \over r_b} \biggr)^l P_l(\cos S),$$
\noindent
where $S$ is the angle between $\vec r$ and $\vec r_b$. The $l = 1$
term in this expansion is responsible for the acceleration of
the primary star and its disk toward the binary companion,
an acceleration that is balanced by the centrifugal force
necessary for orbital motion of the primary star and its disk
around the center of mass of the entire system. Hence we take
as the tidal potential the following
$$ \bar \Phi_{tide}(\vec r) = - {G M_b \over | \vec r - \vec r_b |}
+ {G M_b r \over r_b^2} \cos S.$$
\noindent
The first non-trivial term in the tidal potential expansion will
then be the $l = 2$ term, which forces the disk into a
prolate-ellipsoidal shape. When $\bar \Phi_{tide}(\vec r)$ is
added into the gravitational potential of the disk, obtained from
the solution of Poisson's equation, we have effectively included
the tidal force of the orbiting binary companion (Boss 1981)
as well as the orbital motion of the star/disk around the center of
mass of the entire system. No other changes are needed for the
equations of motion (Mizuno \& Boss 1985).
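A numerical check of this construction: with the indirect term included, the gradient of $\bar \Phi_{tide}$ vanishes at the disk center, i.e., the $l = 1$ acceleration has indeed been removed. The sketch below assumes a planar geometry and hypothetical function names:

```python
import math

G = 6.674e-8  # cgs

def phi_tide(x, y, xb, yb, m_b):
    """Tidal potential of a point-mass companion at (xb, yb), with the
    indirect term G M_b r cos(S) / r_b^2 that cancels the l = 1 part."""
    r_b = math.hypot(xb, yb)
    direct = -G * m_b / math.hypot(x - xb, y - yb)
    # r cos S = (vec r . vec r_b) / r_b, so the indirect term is
    # G M_b (vec r . vec r_b) / r_b^3.
    indirect = G * m_b * (x * xb + y * yb) / r_b**3
    return direct + indirect
```

A finite-difference gradient of this potential at the origin is smaller than the raw companion acceleration $G M_b / r_b^2$ by many orders of magnitude, confirming that only the $l \ge 2$ tidal terms act on the disk in this frame.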
\subsection{Binary Star Orbit}
We employ a nonrotating, noninertial reference frame for the
models with a binary star companion, with the coordinate origin
fixed at the center of mass of the primary star and its disk.
Because of the way that the tidal force of the binary companion
has been included, in this reference frame the binary companion
appears to orbit around the coordinate origin of the disk
whose evolution is being calculated (Mizuno \& Boss 1985). A
similar approach was used by Larwood et al. (1996) in their
models of accretion disks being warped by binary companions.
In the present models, the binary star is assumed
to lie in the same plane as the disk, so that no warps are
created, and the disk retains its symmetry above and below
its midplane.
The binary star companion is assumed to be on an orbit
with eccentricity $e_b$ and semimajor axis $a_b$ (Table 1).
$\phi_{bi}$ defines the initial position angle of the binary in
its eccentric orbit, with $\phi_{bi} = 0$ corresponding to
starting at periastron, and $\phi_{bi} = \pi$ to apoastron.
$\phi_{b}(t)$ denotes the position angle of the companion
as it moves along its orbit (i.e., the true anomaly, $f$).
For Keplerian orbits, $\phi_{b}(t)$ is calculated by
$$ \phi_{b}(t) = \phi_{bi} + \int^t_0 \biggl[ {J_b \over r_b^2(t)}
\biggr] dt, $$
\noindent
where the angular momentum per unit mass $J_b$, a constant, is equal to
$$J_b = \Omega_b a_b^2 (1 - e_b^2)^{1/2}.$$
\noindent
$\Omega_b$, the mean motion, is equal to $\Omega_b = 2 \pi/P_b$, where
$P_b$ is the orbital period of the binary. The mean motion is
also equal to
$$\Omega_b = \biggl( { G (M_s + M_b) \over a_b^3 } \biggr)^{1/2}, $$
\noindent
where $M_s$ is the mass of the star with the disk. The binary separation
$r_b(t)$ is determined from the time evolution of $\phi_b(t)$ through
$$r_b(t) = a_b {(1 - e_b^2) \over (1 + e_b \cos \phi_b(t))}. $$
\noindent
In these models, an equal mass binary system is assumed, i.e.,
$M_s = M_b = 1 M_\odot$. The only free parameters then are
$a_b$, $e_b$, and $\phi_{bi}$, as noted in Table 1.
Models with $\phi_{bi} = 0$ start at periastron, so that
$r_b(t = 0) = a_b (1 - e_b)$, whereas models
with $\phi_{bi} = \pi$ start at apoastron, so that
$r_b(t = 0) = a_b (1 + e_b)$. In order to avoid abrupt initial
changes in the disk when starting a model, the tidal forces
begin at zero strength and increase linearly with time over the first
30 yrs of evolution, when their full strength is reached and
maintained thereafter.
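The orbit-angle integral above can be sketched with a simple fourth-order Runge-Kutta integrator, working in AU/yr/$M_\odot$ units where $G = 4\pi^2$. After one binary period the true anomaly should advance by exactly $2\pi$, and for $a_b = 50$ and 100 AU the periods come out at the 250 yr and 707 yr quoted in the text; the function names are hypothetical:

```python
import math

# Work in units of AU, yr, M_sun, in which G = 4 pi^2.
G_CODE = 4.0 * math.pi**2

def period(a_b, m_tot=2.0):
    """Binary period P_b = 2 pi / Omega_b for total mass m_tot (M_sun)."""
    omega_b = math.sqrt(G_CODE * m_tot / a_b**3)
    return 2.0 * math.pi / omega_b

def binary_orbit(a_b, e_b, phi_bi, t_end, m_tot=2.0, n_steps=20000):
    """Integrate d phi_b / dt = J_b / r_b(phi_b)^2 with classical RK4,
    using the conic-section relation for r_b(phi_b)."""
    omega_b = math.sqrt(G_CODE * m_tot / a_b**3)
    j_b = omega_b * a_b**2 * math.sqrt(1.0 - e_b**2)

    def r_b(phi):
        return a_b * (1.0 - e_b**2) / (1.0 + e_b * math.cos(phi))

    def rate(phi):
        return j_b / r_b(phi)**2

    dt = t_end / n_steps
    phi = phi_bi
    for _ in range(n_steps):
        k1 = rate(phi)
        k2 = rate(phi + 0.5 * dt * k1)
        k3 = rate(phi + 0.5 * dt * k2)
        k4 = rate(phi + dt * k3)
        phi += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return phi
```

Starting at periastron ($\phi_{bi} = 0$), the companion sweeps through angle fastest near $\phi_b = 0$ and slowest near apoastron, as expected from conservation of $J_b$.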
\section{Results}
Table 1 summarizes the disk models with and without binary companions.
The latter models are presented here in order to be able to separate
out the effects of including the binary companions from what the
disks would do in the absence of external forces.
\subsection{Models without Binary Companions}
We begin with several disk instability models that are identical
to those previously published by Boss (2002), except for starting
with higher initial outer disk temperatures ($T_o$). Boss (2002)
presented results for models with initial $T_o =$ 40K and 50K (as in
models eb and ab), leading to initial Toomre (1964) $Q$ stability
values of $Q_{min}$ = 1.3 and 1.5, respectively. In these initially
marginally gravitationally unstable disks, strongly nonaxisymmetric
structures begin to form within a few hundred years of evolution,
equal to about 10 orbital periods at an orbital radius of $\sim$ 10 AU
where clumps first appear in an unperturbed disk of this type. Given
that the orbital period of the $a_b = 50$ AU binary system is 250 yrs,
it is clear that the unperturbed disks with initial $Q_{min}$ = 1.3 and 1.5
can be expected to develop nonaxisymmetry on the same time
scale as the binary perturbations. Hence models were studied
with higher initial temperatures in order to try to see what
would happen in a disk that might not do much on its own prior
to being excited by the binary perturber. Models f, g, and h thus
began with $T_o =$ 60K, 70K, and 80K, respectively,
leading to an initial $Q_{min}$ = 1.6, 1.8, and 1.9. These
models are more gravitationally stable initially than models
with $T_o =$ 40K and 50K, and hence should also represent a
protoplanetary disk that has not yet evolved into a state of
marginal gravitational instability.
Figure 1 shows the initial radial distribution of the surface
density in model f with $T_o =$ 60K, compared to the critical
surface density needed to make the disk have a Toomre $Q = 1$
at that radius, i.e., in order to be strongly gravitationally
unstable initially. Because the initial temperature profile
rises high above $T_o$ inside $\sim$ 8 AU, this critical surface
density rises sharply as well. Hence the innermost regions
are expected to remain gravitationally stable. Figure 2 shows
the surface density for model f after 87.1 yrs of evolution.
The presence of axisymmetric rings and growing spiral arms
can be inferred from the ripples in the surface density.
In addition, it is clear that the region inside $\sim$ 6 AU
has already been significantly modified from the initial
profile, with mass having been transported inward onto the central
protostar as well as outward to the growing ring centered around 6 AU.
Figures 3 and 4 show the further evolution of model f after 160 yrs and
233 yrs, respectively, as the innermost region is severely
depleted of gas and dense ring-like features grow between 8 AU
and 10 AU. The high average surface density at the 4 AU inner boundary
is produced by a few dense cells where disk mass is flowing onto
the central protostar and exiting the hydro grid. These figures
show that the disk evolves to form rings that become increasingly
closer to Toomre $Q = 1$ instability [in fact, the Toomre (1964)
criterion explicitly refers to ring formation as a precursor
to the development of nonaxisymmetry].
This trend is further displayed in Figures 5 and 6, which show the
evolution of the Toomre $Q$ parameter for model h ($T_o = 80$ K).
Starting from a disk with a minimum Toomre $Q$ value of 1.9,
considerably more stable than model f with 1.6, Figure 6 shows that
after 245 yrs of evolution the disk has formed rings around 10 AU
where $Q$ drops to $\sim 1.5$, sufficient for marginal gravitational
instability. The inner regions become even more stable ($Q > 2$)
as a result of their higher temperatures and depleted gas surface
density. Figures 1-6 make it clear why clumps tend to form
preferentially around 10 AU in these models, as interior to that
distance is where midplane temperatures rise to higher values at
smaller radii and where the disk surface density is depleted by
accretion onto the central protostar. Beyond 10 AU the instability
proceeds somewhat more slowly because of the longer orbital
periods.
Figures 7 and 8 show the formation of clumps in models f and h
at times of 233 yrs and 245 yrs, respectively. While the clumps
are not necessarily self-gravitating at this phase of evolution,
it is clear that these disks are trying to form clumps in spite
of their relatively high initial outer disk temperatures, higher
in fact than appears to be appropriate for the solar nebula
based on cometary speciation (e.g., Dello Russo et al. 2005).
Evidently even low amplitude nonaxisymmetry can transfer
mass and angular momentum over times of order 10 orbital
periods sufficient to approach a more robust phase of gravitational
instability. Models f, g, and h suggest that the natural
evolution of gravitationally stable disks is toward marginal
gravitational instability and then on to clump formation,
even in the absence of triggering effects such as binary
companions or secular cooling. Note that in all of these models the
outer disk temperature is not allowed to drop below the initial
value of $T_o$, in an attempt to err on the side of being
conservative with respect to thermal decompression and cooling.
\subsection{Models with Binary Companions}
Table 1 lists final times $t_f$ of the models with binary
star companions on orbits with eccentricity $e_b$ and semimajor
axis $a_b$. For models with $a_b$ = 50 AU, the binary orbital
period is 250 yrs, while for $a_b$ = 100 AU, it is 707 yrs
(note that the binary companion is also a solar-mass star).
The evolution of the models following the first periastron
passage of the binary companion should also be relevant for the
problem of a disk around a single star that undergoes a very
close encounter with another star in the star-forming cluster.
Figures 9-12 show the time evolution of model gbca, where
the binary companion has $a_b$ = 50 AU and $e_b$ = 0.5. The disk
models start out essentially axisymmetric with only a low
level ($\sim$ 1\%) of noise. After 83.8 yrs of evolution,
the disk has become slightly nonaxisymmetric (Figure 10),
primarily as a result of its own evolution (see previous
section). However, by 139 yrs (Figure 11), the binary
companion has completed just over half of an orbital
period and has passed periastron at a distance of 25 AU
from the center of the disk, severely distorting the outer
regions of the disk (note that the density concentrations
at 20 AU are an artifact of the disk boundary conditions
at 20 AU, where disk material is allowed to enter the
outermost shell of cells but cannot flow farther away
as it would in a calculation with a more distant outer
boundary). The binary companion is located at this time
at about 2 o'clock. Periastron was at 3 o'clock, and the
binary companion orbits in a counter-clockwise sense, in the same
direction that the disk gas orbits, consistent with formation of the
entire system from a single rotating, dense cloud core.
While the structures in the outermost disk are strongly influenced
by the outer boundary conditions, the innermost arcs are not.
The tidal forces of the binary companion have forced the disk
into a prolate shape that is beginning to wind up in the
inner regions because of Keplerian rotation (Figure 11). By
the time of Figure 12, at 191 yrs, the binary companion is
approaching apoastron, but the tidally perturbed disk is
still forming spiral arms and clumps, as well as strong
shock fronts in its innermost regions. Clearly the presence
of a binary companion with these orbital parameters has had
a major effect on the evolution of this initially gravitationally
stable disk, inducing the formation of clumps after the first
periastron. This fact makes it clear that starting this model
with an axisymmetric disk is not correct -- in a real disk orbiting
a protostar in a binary system of this type, the outer disk is never
axisymmetric. Axisymmetric initial models are a theoretical
convenience that allows one to jump into the system in an
approximate manner and to follow the subsequent evolution.
Figure 13 shows the midplane density contours for model ab
after 159 yrs. The disk has already been tidally perturbed because
this model began with the binary companion at periastron, though
the tidal forces were turned on over a time period of 30 yrs.
A well-defined clump is evident at 6 o'clock in Figure 13,
containing $\sim 1.5 M_{Jup}$ of gas and dust. This clump is
sufficiently massive to be gravitationally bound, as the
Jeans mass at the mean density ($7.2 \times 10^{-10}$ g cm$^{-3}$)
and temperature (161 K) of this clump is only 1.4 $M_{Jup}$.
The spherically-averaged radius of the clump is 0.66 AU,
only slightly larger than the tidal stability radius of 0.64 AU,
implying marginal tidal stability. The clump is moving on
an orbit with $a = 8.2$ AU and $e = 0.094$ at this time.
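These clump diagnostics can be reproduced with standard formulas. The Jeans-mass expression below (with an isothermal sound speed and an assumed $\mu \approx 2.33$) and the Hill-type tidal radius are assumptions, not formulas quoted in the text, but under them the quoted 1.4 $M_{Jup}$ and 0.64 AU are recovered:

```python
import math

G, K_B, M_H = 6.674e-8, 1.380649e-16, 1.6726e-24  # cgs
M_SUN, M_JUP = 1.989e33, 1.898e30

def jeans_mass(rho, temp, mu=2.33):
    """M_J = (pi^{5/2}/6) c_s^3 / (G^{3/2} rho^{1/2}) with an
    isothermal sound speed -- an assumed standard form."""
    c_s = math.sqrt(K_B * temp / (mu * M_H))
    return (math.pi**2.5 / 6.0) * c_s**3 / (G**1.5 * math.sqrt(rho))

def tidal_radius(a_au, m_clump, m_star=M_SUN):
    """Hill-type tidal stability radius r_t = a (m / (3 M_*))^{1/3},
    in AU for a clump of mass m_clump (g) orbiting at a_au."""
    return a_au * (m_clump / (3.0 * m_star))**(1.0 / 3.0)
```

With $\rho = 7.2 \times 10^{-10}$ g cm$^{-3}$ and $T = 161$ K this gives $M_J \approx 1.4\,M_{Jup}$, and a 1.5 $M_{Jup}$ clump at 8.2 AU has a tidal radius of $\approx 0.64$ AU, matching the values cited above.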
Figure 14 demonstrates that the clump in Figure 13 is properly
resolved with respect to the Jeans length criterion, which
dips to close to the grid resolution at the location of the
clump's density maximum, seen in Figure 15. Figure 16 shows
how the temperatures within the clump have risen considerably
over the initial temperatures as a result of compressional
heating -- the maximum temperature in the clump exceeds 300 K,
compared to a mean temperature of 161 K. Figure 17 shows the
temperature distribution throughout the midplane of model ab
after 159 yrs, showing the effects of heating throughout the
disk. The disk is vertically unstable to convection according
to the Schwarzschild criterion at the location of the dense
clump seen in Figure 13, as well as at a number of other
radii near the midplane in model ab. Convective cooling appears
to be important for transporting thermal energy from the disk
midplane to the disk atmosphere, where it can be radiated
away, allowing a disk instability to produce dense clumps centered
on the midplane (Boss 2004a).
The effect of the binary eccentricity on the models can be
seen by comparing Figures 18 and 19. Models hbcae (Figure 18)
and hbca (Figure 19) are identical except that the binary
eccentricity is 0.25 for the former and 0.5 for the latter
($a_b = 50$ AU for both models). As a consequence, periastron
occurs at a radius of 37.5 AU for model hbcae and at 25 AU
for model hbca, leading to considerably stronger tidal
forces in the latter model. Figures 18 and 19 demonstrate
this point after one binary orbital period has elapsed: while
both disks have formed strong spiral arms and clumps,
model hbca is clearly more strongly distorted and has
developed higher densities along the outer boundary of the disk.
The clump at 10 o'clock in Figure 18 has a mass of
4.7 $M_{Jup}$, sufficiently high to be strongly self-gravitating,
whereas the clump at 2 o'clock in Figure 19 is not
quite self-gravitating with a mass of 0.68 $M_{Jup}$.
This suggests that while binary perturbers can stimulate
clump formation, too strong of a perturbation can make it
harder for the clumps to survive to become true protoplanets.
However, even in model hbca, other clumps form later
in the evolution that are dense enough and massive enough
to be self-gravitating.
The effect of the binary semimajor axis on the models can be
seen by comparing Figures 20 and 21. Models gba (Figure 20)
and gbca (Figure 21) are identical except that the semimajor
axis is 100 AU for the former and 50 AU for the latter
($e_b = 0.5$ for both models). As a consequence, periastron
occurs at a radius of 50 AU for model gba and at 25 AU
for model gbca, again leading to considerably stronger tidal
forces in the latter model. Figures 20 and 21 demonstrate
the effects of these different semimajor axes, shortly after
one binary orbital period has elapsed, in order to compare
these models at an equivalent time with respect to the
effects of the tidal forces. While model gba has evolved and
formed spiral arms, dense clumps have not formed at this time.
The evolution is closer to that of the disk models without
binary companions -- evidently tidal forces from binary companions
at distances of $\sim$ 50 AU or greater have relatively little
effect on the disk inside 20 AU. In model gbca (Figure 21),
on the other hand, the binary's periastron of 25 AU has had
a major effect on the disk, and has induced the formation of
a dense clump at 9 o'clock with a mass of 1.7 $M_{Jup}$.
Strong spiral arms are also evident throughout the disk.
In order to ascertain the effects of the numerical resolution,
model gbca was continued from the time shown in Figure 21 as
model gbcah with double the number of azimuthal grid
points (i.e., $N_\phi = 512$ instead of $N_\phi = 256$),
and more terms in the gravitational potential solution
(i.e., $N_{Ylm} = 48$ instead of $N_{Ylm} = 32$). Model gbcah
is shown in Figure 22 after another 28 yrs of evolution beyond
the point shown in Figure 21, i.e., roughly another orbital
period at $\sim$ 10 AU. A self-gravitating clump orbits at 10 AU
(seen at 8 o'clock) with a mass of 1.2 $M_{Jup}$, well above
the relevant Jeans mass of 0.72 $M_{Jup}$, with a radius (0.76 AU)
comparable to the tidal stability radius (0.75 AU). Clump
formation and survival is enhanced as the spatial resolution
is increased in the critical azimuthal direction (Boss 2000, 2005).
\subsection{Models with Varied Thermodynamical Stability Handling}
Two approaches have been used in these models and in the previous disk
instability models by Boss (2001, 2002a,b, 2003, 2005) for stability of
the radiative transfer solution, given the use of an explicit
time differencing scheme for the solution of the energy equation
in the diffusion approximation. First, taking smaller time steps
(i.e., smaller fractions of the Courant time) is often sufficient
to maintain stability of the thermodynamical solution. The calculations
typically begin with a time step that is 50\% of the minimum
Courant time on the grid. For some models, this fraction is reduced
to maintain stability, to values as small as 1\%, though typically
the fraction remains no smaller than 5\% or 10\%. While sufficient
to maintain stability, clearly this approach slows the calculation
proportionately. Hence it has been found useful to use a numerical
artifice to try to maintain stability of the numerical solution of
the energy equation in the low density regions where it tends
to break down. The artifice is simple: when the density inside
the disk drops below a specified critical value, $\rho_{crit}$,
then the temperature in that cell is forced back to its initial
temperature at the beginning of the evolution. This artifice is
justified to the extent that such regions are low in density
because they are undergoing decompression, and hence should also
be undergoing decompressional cooling. Setting the temperatures
of such regions to a value no lower than their initial temperature
is then a relatively conservative approach.
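The artifice amounts to a conditional temperature reset on the grid. A minimal sketch (array and function names are illustrative, not taken from the actual code, which operates on the full 3D spherical grid):

```python
import numpy as np

def reset_low_density_cells(rho, T, T_initial, rho_crit=1e-13):
    """Stability artifice sketch: wherever the disk density (g cm^-3)
    falls below rho_crit, force the cell temperature back to its value
    at the start of the evolution; all other cells are left unchanged."""
    return np.where(rho < rho_crit, T_initial, T)
```

Cells in decompressing, low-density regions are thus never allowed to cool below their initial temperature, which is the conservative choice described above.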
While the question of the handling of $\rho_{crit}$ may seem to be
largely a technical point, given the sensitivity of
the outcomes of disk instability calculations to the heating and cooling
processes in the disk, it is important to examine any technical
details that might have an unintended effect on the results.
All the models began with $\rho_{crit} = 10^{-13}$ g cm$^{-3}$, compared
to the initial midplane density of $10^{-10}$ g cm$^{-3}$ at 4 AU.
In order to maintain stability with a reasonably-sized time step,
however, in some models $\rho_{crit}$ is increased to values of
$3 \times 10^{-12}$ g cm$^{-3}$ or $10^{-11}$ g cm$^{-3}$.
With these values, even moderately low density regions of the disk
are effectively forced to behave isothermally. With this in mind, all
the models were searched for evidence that the highest value
of $\rho_{crit}$ used had a significant effect on the outcome
of the evolution. The primary criterion employed was looking
for the maximum density produced in the disk midplane around
5 AU to 10 AU, where the dense clumps form. It was found that
the maximum density reached was typically the same
($\sim 2 \times 10^{-9}$ g cm$^{-3}$) independent of whether
$\rho_{crit}$ stayed at a value of $10^{-13}$ g cm$^{-3}$
throughout the calculation, or had to be increased at some point
to $3 \times 10^{-12}$ g cm$^{-3}$ or $10^{-11}$ g cm$^{-3}$
to maintain a stable solution. This result suggests that
the $\rho_{crit}$ artifice is not a major
determinant of the outcome.
\subsection{Models with Artificial Viscosity}
Hydrodynamical calculations where artificial viscosity is employed
generally have not found robust clump formation in either fully three
dimensional (Pickett et al. 2000) or in thin disk models (Nelson 2000).
Here we show that when artificial viscosity is included in three dimensional
disk models with radiative and convective cooling, the tendency to form
clumps is reduced somewhat, but not eliminated, unless the artificial
viscosity is increased by a factor of order ten.
These models have the standard spatial resolution (Boss 2002b) of 100
radial grid points distributed uniformly between 4 and 20 AU, 256 azimuthal
grid points, 22 $\theta$ grid points in a hemisphere (effectively over a million
grid points), and include terms up to $l,m = 32$ in the spherical harmonic
solution for the gravitational potential. The models begin after 322 years
of inviscid evolution of a disk with an initial mass of 0.091 $M_\odot$
(Boss 2002b), an outer disk temperature of 40 K, and a
minimum Toomre $Q = 1.3$.
Figures 23 through 26 show the results for four models that are identical
except for their treatment of artificial viscosity. It can be seen that
in the models with the standard artificial viscosity
(Figure 24: $C_{\Delta r} = C_\theta = 1$, $C_\phi = 0$, $C_r = 10^{-4}$;
Figure 25: same as Figure 24, but with $C_\phi = 1$),
clump formation occurs in a similar manner as in the model without
artificial viscosity (Figure 23, as in Boss 2002b). However, when
the artificial viscosity is increased by a factor of 10 (Figure 26), clump
formation is significantly inhibited because of the heating associated
with the assumed dissipation. These models support the suggestion that
microphysical shock heating can be important for clump formation (Pickett
et al. 2000), though with the standard amount of artificial viscosity,
the effects are relatively minor in these models. Calibrating the
proper amount of artificial viscosity that would be needed to properly
represent the correct level of microphysical (sub-grid) shock heating
remains as a challenge, but it is clear that large amounts of
artificial viscosity can suppress clump formation.
\section{Discussion}
\subsection{HD 188753 triple star system}
Recently Konacki (2005) has claimed the discovery of a hot Jupiter
in orbit around a 1.06 $M_\odot$ star that is a member of
the hierarchical triple star system HD 188753.
The average distance between the primary star and the binary secondary
is 12.3 AU, with the secondary being on an orbit
with $e = 0.5$ and having a total mass of 1.63 $M_\odot$. This means
that at periastron, the secondary passes within $\sim 6$ AU of
the primary, rendering orbits outside of $\sim 1.5$ AU unstable.
Hot Jupiters are thought to form at several AU from solar-mass
stars and then to migrate inward to short-period orbits by gravitational
interactions with the gaseous disk. However, the protoplanetary disk
around the primary star in HD 188753 would be restricted in extent
to $\sim 1.5$ AU and so could not extend out to regions
cool enough for icy grains to contribute to assembling the solid core
required for the core accretion mechanism or cool enough
for a disk instability to occur. Given the difficulty of
forming gas giant planets {\it in situ} on short period
orbits by either core accretion (Bodenheimer, Hubickyj, \& Lissauer 2000)
or disk instability (Boss 1997), the presence of the planet in
HD 188753 is thus puzzling, given the current orbital configuration,
if the discovery can be confirmed.
However, the fact that HD 188753 is a triple system offers a
possible solution. Hierarchical triples can form by the orbital evolution
of an initially equally-spaced multiple protostar system (e.g., Boss 2000).
This evolution proceeds over a period of $\sim 100$ orbital crossing
periods. For a multiple protostar system with an initial separation
of $\sim 100$ AU, the initial orbital period would be $\sim 10^3$ yrs,
so that the initial equally-spaced multiple protostar system would be
expected to undergo a series of close encounters and ejections
leading to the final, stable, hierarchical triple system within a
time period of $\sim 10^5$ yrs. If a gas giant planet could form
within the protoplanetary disk of one of the protostars within
$\sim 10^5$ yrs, it might then survive the subsequent orbital
evolution as a hot Jupiter. Rapid formation is required, suggesting
that a disk instability might be needed to explain HD 188753's
putative hot Jupiter.
\subsection{Previous calculations}
Contrary to the results of Nelson (2000), these models suggest
that tidal forces from binary companions need not prevent the
formation of giant planets, by either the disk instability or
core accretion mechanisms. The key difference is in the midplane
temperatures reached after periastrons, with the Nelson (2000)
models reaching temperatures high enough to sublimate icy dust
grains at $\sim 10$ AU and to prevent a robust disk instability
inside this radius. Here we try to understand why the present
results differ from those of Nelson (2000).
There are several important similarities and differences between the two
sets of calculations. Nelson (2000) used 60,000 SPH (smoothed particle
hydrodynamics) particles in each disk, compared
to effectively over $10^6$ grid points in the present models
with $N_\phi = 256$, though because Nelson's calculations were
restricted to two dimensional (thin) disks, the spatial resolution
was similar to that in the midplane of the present models with
$N_\phi = 512$. Nelson (2000) assumed a thin disk with
an adiabatic vertical temperature gradient, which assumes that
vertical convection is able to keep the vertical temperature gradient
at the adiabatic level. This results in the maximum possible temperature
difference between the disk surface (excluding the disk photosphere)
and the midplane, because if radiative transport were efficient, the
vertical temperature gradient would not be as steep. The present
models start out vertically isothermal, but then develop vertical
convective motions in regions where the vertical temperature gradient
exceeds the adiabatic value (i.e., the Schwarzschild criterion for
convection is met; Boss 2004a). Nelson (2000) also used disk surface
temperatures (100 K) greater than those assumed in the present models
(50 K), leading to higher midplane temperatures, though the higher
surface temperatures should lead to a higher rate of radiative cooling.
Perhaps the most likely source of the discrepancy is the amount
of artificial viscosity assumed in the two sets of models.
Artificial viscosity equivalent to an effective alpha viscosity
with $\alpha = $ 0.002 to 0.005 was intentionally included in the
Nelson (2000) models in an effort to include the effects of shocks
and sub-grid turbulence. In the present models, artificial viscosity is
not used, and the degree of implicit numerical viscosity appears
to be at a level equivalent to $\alpha \sim 10^{-4}$ (Boss 2004b),
a factor of 20 to 50 times lower than that in Nelson (2000).
As we have seen in Figures 23-26, a high level
of artificial viscosity can heat the disk sufficiently to suppress the
formation of clumps, though Figures 24 and 25 show that with a standard
amount of artificial viscosity, clumps can still form. The artificial
viscosity employed in SPH codes can lead to a ``large and unphysical
shear dissipation as a side effect in disk simulations'' (Nelson et al.
2000), though Nelson et al. (2000) and Nelson (2000) used a
formulation that was intended to minimize artificial viscous
dissipation. Nevertheless, the intentional use of a relatively
large amount of artificial viscosity (in order to attempt to
duplicate spectral energy distributions for observed disks)
is likely to be the main source of the discrepancy between the
models. This artificial viscous heating appears to be related to
the difference in cooling times in the two sets of models, as
the cooling time is critical for clump formation and survival.
Relatively short cooling times are obtained in the present models
($\sim 1$ to 2 orbital periods, Boss 2004a), compared to the effective
cooling time obtained in Nelson (2000) of $\sim 5$ to $\sim 15$
orbital periods for distances from 10 AU to 5 AU, respectively
(Nelson 2005, private communication).
One could reasonably ask whether the present models are able
to handle strong shocks properly in the absence of artificial
viscosity, as that is how these models have been run, with the
exception of the models shown in Figures 24-26. In order to
test this possibility, one dimensional shock tests performed with the
same hydrodynamic scheme as used in the present models and first
presented by Boss \& Myhill (1992) were repeated with and without
artificial viscosity. The shock test relies on the analytic
solution for the Burgers equation presented by Harten \& Zwas
(1972). Using the same numerical code and numerical parameters
as presented in Figure 7 of Boss \& Myhill (1992), Figures 27 and
28 depict the results with the standard amount of artificial
viscosity ($C_Q = 1$) and with zero artificial viscosity, respectively.
It can be seen that in both cases, the numerical solution
does an excellent job of reproducing the analytical solution,
including the shock front location. Figure 28 shows that
in the complete absence of artificial viscosity, there is a
similar degree of overshoot/undershoot immediately downstream of the
shock front as in Figure 27 with non-zero artificial viscosity (in
both cases, the overshoot/undershoot is minimal compared to that of
several other differencing schemes; see Figure 7 of Boss \& Myhill 1992).
These results suggest that the present models, even with zero
artificial viscosity, are able to handle strong shocks about as
well as if the standard amount of artificial viscosity were
being employed. It is thus likely that with the standard amount
of artificial viscosity, the effective $\alpha$ of the models
is similar to that caused by the implicit numerical
viscosity ($\alpha \sim 10^{-4}$; Boss 2004b). In that case,
the Nelson (2000) models effectively include viscous dissipation
at a rate roughly 20 to 50 times higher than the present models,
which appears to be sufficient to explain the suppression of clump
formation in the Nelson (2000) models, based on the results presented
in Figures 23-26.
There may be a related discrepancy between these models
and those of Nelson (2000). Nelson (2000) found that the long-wavelength
flux densities from his disk models were below those measured for the
L1551 IRS5 binary disk system upon which his models were based,
implying effective temperatures for the disk surface that were too low.
However, Boss \& Yorke (1993, 1996) found that they were able
to match the spectral energy distributions of the T Tauri system with
the same axisymmetric disk models that form the basis for the three dimensional
disk models used in the present models. It is unclear at present what
this means, but suffice it to say that a higher effective temperature
at the disk surface should increase radiative losses from the disk surface
and thereby reduce the overall disk cooling time, though perhaps
at the expense of higher midplane temperatures.
\section{Conclusions}
These models have shown that initially stable protoplanetary
disks can evolve over time periods of $\sim 10^3$ yrs to become
marginally gravitationally unstable and then begin to form clumps.
When these stable disks are perturbed by strong tidal forces
(i.e., periastrons less than $\sim$ 50 AU), spiral arms form
soon after periastron and typically evolve into self-gravitating,
dense clumps capable of forming gas giant planets. Periastrons
of $\sim$ 50 AU and larger lead to little effect on the
evolution of these disks, which are limited in extent to 20 AU.
Disk cooling processes such as convection appear to remain
effective enough to permit self-gravitating clumps to form,
even in the presence of the strong tidal forcing. As a result,
outer disk temperatures do not become high enough in general
for icy dust grains to be sublimated, meaning that giant planet
formation by core accretion would continue to be aided by
the enhanced surface density of solids associated with the ice
condensation boundary in the disk, even in binary star systems.
Given the tendency for these disks to form self-gravitating clumps
by disk instability on a time scale of $\sim 10^3$ yrs or less,
these models suggest that giant planets should be able to
form in binary systems with periastrons as small as 25 AU,
by either core accretion or disk instability. This general
conclusion seems to be consistent with the growing observational
evidence for giant planets in binary star systems.
Because of the nature of a spherical coordinate grid,
where $\Delta x_\phi = r \sin\theta \, \Delta\phi$ increases linearly
with radius, the present models often fail to properly resolve any
clumps that try to form near the outer edge of the grid. An improved
treatment of disks being strongly perturbed by binary companions
would require the use of an adaptive mesh refinement (AMR) code or
some other technique for better resolving clumps at large radii.
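For the standard grid used here, the linear growth of the azimuthal cell size is easy to quantify (a sketch; $\sin\theta = 1$ corresponds to the midplane):

```python
import math

def dx_phi(r_au, n_phi=256, sin_theta=1.0):
    """Azimuthal cell size dx_phi = r * sin(theta) * dphi, in AU."""
    return r_au * sin_theta * (2.0 * math.pi / n_phi)

inner = dx_phi(4.0)   # ~0.098 AU at the inner grid edge
outer = dx_phi(20.0)  # ~0.49 AU at the outer edge, so a ~0.76 AU clump
                      # spans only about two azimuthal cells there
```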
I thank Andy Nelson for details about the cooling times in his
calculations, the referee for extremely helpful comments and
questions about artificial viscosity and viscous dissipation,
and Sandy Keiser for her continued expert assistance with the
Carnegie Alpha Cluster. This research was supported in part by the NASA
Planetary Geology and Geophysics Program under grant NNG05GH30G,
by the NASA Origins of Solar Systems Program under grant NNG05GI10G,
and by the NASA Astrobiology Institute under grant NCC2-1056.
Calculations were performed on the Carnegie Alpha Cluster, the purchase
of which was supported in part by NSF MRI grant AST-9976645.
\section{Introduction}
The Minimal Supersymmetric Standard Model (MSSM) contains five Higgs bosons:
a light CP-even Higgs boson h, a heavy
CP-even Higgs boson H, a CP-odd Higgs boson A and two charged Higgs bosons
H$^{\pm}$.
At tree-level the h (H) mass is bound to be below (above) the Z boson mass.
The higher order corrections
increase this upper (lower) bound,
the largest possible value being about 135~GeV/$c^2$ \cite{hmass}.
The fact that in the MSSM
one Higgs boson is bound to be light gives a strong prediction
for the mass region where the lightest Higgs boson might be seen.
The LEP and Tevatron results have already constrained the MSSM parameter
space significantly. The measurements yield lower bounds of 91.0 and
91.9~GeV/$c^2$ for the lightest CP-even Higgs boson h and for the CP-odd A,
respectively \cite{lep_mssm}. The excluded tan$\beta$ regions are
0.5~$<$ tan$\beta <$~2.4 for the maximal m$_{\rm h}$ scenario (maximal
mixing scenario)
and 0.7~$<$ tan$\beta <$~10.5 for the no-stop-mixing scenario~\cite{lep_mssm}, assuming a top quark
mass of 174.3 GeV/$c^2$. Recent results from the Tevatron, however, give a world average for
the top mass of 178.0$\pm$4.3~GeV/$c^2$ \cite{hep-ex/0404010}. The larger top mass softens the bounds:
for example, assuming a top mass of 179.3 GeV/$c^2$, the region excluded by LEP in the maximal
mixing scenario is 0.9~$<$ tan$\beta <$~1.5, and for a top mass of about 183 GeV/$c^2$ the
exclusion vanishes \cite{new_lep}.
Some constraints have been derived from the existing data for the other SUSY parameters.
The value of the trilinear coupling $\rm A_{\rm t}$ is limited to 350~GeV/$c^2$
$\lesssim \rm A_{\rm t} \lesssim$~1.5~(2.3)~TeV/$c^2$ for a light stop quark with
$\rm m_{\tilde{t}_1}$~=~200~(400)~GeV/$c^2$ and for the experimental constraint
$\rm m_{\rm h}\gsim$~90~GeV/$c^2$ \cite{hep-ph/9806315}.
The higgsino and
gaugino mass parameters $\mu$ and M$_2$ are related to
neutralino and chargino masses, and experimental mass bounds can be used to
exclude $|\mu|$ and M$_2$ values below 100~GeV/$c^2$ \cite{muM2}.
The present experimental lower bound from LEP for the stop quark mass is
$\sim$~100~GeV/$c^2$ \cite{lepsusy}.
The mass limit from the Tevatron RunII with the ultimate
luminosities ($\sim$~20~fb$^{-1}$) is expected to reach m$_{\tilde{\rm t}_1}
\sim$~240~GeV/$c^2$~\cite{fermilab}.
In this work three SUSY scenarios are considered:
a no-mixing scenario where the mixing of the left and right handed stop
eigenstates do not play any significant role and where all SUSY particles are
assumed to be heavy \cite{scenarios}, a maximal-mixing scenario which maximizes the h mass
\cite{scenarios},
and a light-stop scenario in which the stop quark mass is of the same order as the
top quark mass \cite{hep-ph/9806315}. These scenarios do not assume any particular
model for the soft SUSY-breaking mechanism. The stop mixing parameter is defined as X$_{\rm t}$ =
$\rm A_{\rm t} - \mu\cot\beta$, with $\rm A_{\rm t}$ being the trilinear Higgs-stop coupling.
The stop-mixing is maximized when X$_{\rm t}$ = $\sqrt{6} \times \rm M_{\rm SUSY}$,
where $\rm M_{\rm SUSY}$ is the heavy SUSY scale \cite{scenarios}. In this work,
the maximal stop-mixing scenario is defined taking
A$_{\rm t}$ = $\sqrt{6} \times \rm M_{\rm SUSY}$,
with $\rm M_{\rm SUSY}$ = 1 TeV/$c^2$.
With respect to the standard definition, this choice leads to a deviation of
less than 1\% in the
total production rate of $\rm pp \rightarrow \rm h + \rm X$, $\rm h \rightarrow \gamma\gamma$ at the LHC.
No sbottom mixing is assumed taking A$_{\rm b}$ = 0.
The higgsino mass parameter $\mu$ is set to 300~GeV/$c^2$ and the gaugino mass parameter
M$_2$ is set to 200~GeV/$c^2$,
values chosen large enough not to be already experimentally excluded.
All the soft SUSY breaking mass parameters are set to 1~TeV/$c^2$, and the
gluino mass M$_{\tilde{\rm g}}$ is set to 800~GeV/$c^2$.
The mass of the top quark is set to 175~GeV/$c^2$.
The values of these parameters are taken to be the same in the no-mixing scenario,
except that of the trilinear coupling
$\rm A_{\rm t}$ which is set to zero.
For the light-stop scenario $\rm A_{\rm t}$ is taken to be 1400~GeV/$c^2$
close to the highest possible experimentally allowed value
($\rm A_{\rm t}$~=~1500~GeV/$c^2$) with light stop quarks \cite{hep-ph/9806315}. In this scenario
$\mu$~is set to -250~GeV/$c^2$ and M$_2$ to 250~GeV/$c^2$. The soft SUSY breaking mass
parameters are set to 1~TeV/$c^2$ except the mass parameters of the stop sector,
which are required to be of the order of 500~GeV/$c^2$ to allow the stop quark to be light.
The actual values of the stop sector soft SUSY breaking mass
parameters vary depending on the chosen stop quark mass.
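As a consistency check on the maximal-mixing convention adopted above, the deviation of $X_{\rm t}$ from the standard definition can be evaluated directly (a sketch using the parameter values quoted in the text):

```python
import math

M_SUSY = 1000.0                  # GeV, heavy SUSY scale
A_t = math.sqrt(6.0) * M_SUSY    # ~2449 GeV, convention used in this work
mu, tan_beta = 300.0, 10.0
X_t = A_t - mu / tan_beta        # ~2419 GeV

# Relative deviation from the standard choice X_t = sqrt(6) * M_SUSY:
rel_dev = abs(X_t - A_t) / A_t   # ~1.2% in X_t itself; the resulting
                                 # shift in the production rate is below
                                 # 1%, as quoted in the text
```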
With the above values of the SUSY parameters
the upper bound of $\rm m_{\rm h}$ is about 127~GeV/$c^2$ in
the maximal-mixing scenario and
about 114~GeV/$c^2$ in the no-mixing scenario.
The sign of $\mu$ has only a small effect on the mass of the lightest
Higgs boson.
In the light-stop scenario with $\rm m_{\tilde{\rm t}_1}~=~200$ GeV/$c^2$,
the upper bound of $\rm m_{\rm h}$ is 113 GeV/$c^2$,
as in the no-mixing scenario. For $\rm m_{\tilde{\rm t}_1}~=~300$~GeV/$c^2$
this upper bound increases by 10~GeV/$c^2$, approaching that of
the maximal-mixing scenario.
The $\rm H \rightarrow \gamma\gamma$ channel is considered one of the major discovery
channels for a light Standard Model (SM) Higgs boson and
for the lightest scalar MSSM Higgs boson at the LHC.
There can be, however, regions of the MSSM parameter space where this discovery
potential is reduced.
The effect of a light stop quark in the presence of large mixing has been
calculated and the consequences for the $\rm gg \rightarrow \rm h \rightarrow \gamma\gamma$
channel have been discussed in Ref.~\cite{hep-ph/9806315}. An experimental study
was performed in Ref.~\cite{NOTE2000/043} and
the discovery potential was calculated for the $\rm h \rightarrow \gamma\gamma$ channel in the CMS detector.
In this earlier work, however, only the gluon-gluon fusion production process was
simulated and other Higgs boson decay channels were not considered.
In the present work, all significant production processes are included in the calculation of the
inclusive $\rm h \rightarrow \gamma\gamma$ rate in the MSSM and the discovery potential
is evaluated also in the associated production and weak gauge boson
fusion production processes $\rm qq \rightarrow \rm qqh$. Furthermore, updated programs are
used to calculate the cross sections and branching ratios.
The aim of this paper is
to extend the study of the loop effects in the $\rm h \rightarrow \gamma\gamma$ channel
to the full discovery potential of the lightest scalar Higgs boson at the LHC.
Therefore, the $\rm h \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}$ and
$\rm h/ H \rightarrow \tau^+\tau^-$ decay channels were also studied.
The $\rm h \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}$ has not been
so far considered as a discovery channel in the MSSM at large tan$\beta$. It is
shown, however, in Section 3.2 that this channel can yield a large discovery potential if the
SUSY scenario is such that m$_{\rm h}^{\rm max} \gsim$~125~GeV/$c^2$.
The $\rm h/ H \rightarrow \tau^+\tau^-$ decay channels with lepton+jet
and two-lepton final states in the weak gauge boson
fusion production have been shown to be particularly
interesting and to cover the full MSSM parameter space \cite{zeppenfeld}.
In this paper the discovery potential for this channel is calculated with realistic
detector sensitivities. The CMS detector sensitivities are used and were obtained from the
recent simulations for the discovery potential of a light SM
Higgs boson \cite{summary_note}.
\section{Phenomenology}
\subsection{Production cross sections}
The lightest MSSM Higgs boson h is produced through the gluon fusion
$\rm gg \rightarrow h$,
the associated processes $\rm q\overline{\rm q}/gg \rightarrow \rm t \overline{\rm t}\rm h$,
$\rm q\overline{\rm q}/gg \rightarrow \rm b \overline{\rm b}\rm h$, $\rm qq \rightarrow \rm Wh/Zh$
and through the weak gauge boson fusion process $\rm qq \rightarrow \rm qqh$.
The gluon fusion process dominates
the production over the entire parameter space.
This process is mediated by heavy quark and squark triangle loops.
The cross sections to the leading order (LO) and to the next to leading order
(NLO) are calculated in this work with the program HIGLU~\cite{HIGLU}.
The top and bottom loops are included in the calculation of the Higgs boson coupling
to gluons in this program. Since the squark loops are not included,
the decay width $\Gamma(\rm h \rightarrow \rm gg$), calculated with the HDECAY
program \cite{HDECAY}, is used to include the squark loop effects: the cross section
given by HIGLU
is divided by the decay width $\Gamma(\rm h \rightarrow \rm gg$)
with sparticle loops switched off, and multiplied by the decay width with all
sparticle effects. The Higgs boson mass is kept constant in this procedure.
The corrected gluon fusion cross section with SUSY loop effects can be
presented with the
respective branching ratios and total widths as
\begin{eqnarray}
\sigma\cdot \rm BR & = &
\sigma( {\rm gg} \rightarrow {\rm h})
\cdot
\frac{\rm BR({\rm h}\rightarrow {\rm gg})^{\rm susy}}
{\rm BR(\rm h\rightarrow \rm gg)^{\rm nosusy}}
\frac{\Gamma_{\rm TOT}^{\rm susy}}{\Gamma_{\rm TOT}^{\rm nosusy}}
\cdot
\rm BR(\rm h\rightarrow\gamma\gamma)_{\rm susy},
\label{eq:err}
\end{eqnarray}
where \verb|nosusy| refers to the branching ratio and total width calculated
assuming heavy SUSY particles and \verb|susy| to the same variables with SUSY
spectrum determined by the given scenario.
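The rescaling in Eq.~(\ref{eq:err}) is a simple product of ratios; a sketch (function and argument names are illustrative):

```python
def corrected_gg_h_rate(sigma_gg_h, br_hgg_susy, br_hgg_nosusy,
                        gamma_tot_susy, gamma_tot_nosusy, br_gamgam_susy):
    """Rescale the quark-loop-only gg -> h cross section by the h -> gg
    branching-ratio ratio and total-width ratio (together, the ratio of
    h -> gg partial widths with and without squark loops), then fold in
    BR(h -> gamma gamma) including SUSY effects."""
    return (sigma_gg_h
            * (br_hgg_susy / br_hgg_nosusy)
            * (gamma_tot_susy / gamma_tot_nosusy)
            * br_gamgam_susy)
```

Note that the first two ratios combine into $\Gamma(\rm h \rightarrow gg)^{\rm susy}/\Gamma(\rm h \rightarrow gg)^{\rm nosusy}$, which is exactly the partial-width correction described in the text.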
The cross sections for
the associated production with weak gauge bosons
$\rm qq \rightarrow \rm Wh$ and $\rm qq \rightarrow \rm Zh$ are calculated with the program
V2HV~\cite{spira_web} to both
leading and next to leading order. The cross sections for the production with
associated quark pairs
$\rm q\overline{\rm q}/gg \rightarrow \rm t\overline{\rm t}\rm h$ and
$\rm q\overline{\rm q}/gg \rightarrow \rm b\overline{\rm b}\rm h$ are calculated with the
HQQ program \cite{spira_web}, which presently includes only the LO processes.
The cross sections for the weak gauge boson fusion $\rm qq \rightarrow \rm qqh$
are evaluated with the VV2H program \cite{spira_web}.
The production cross sections and the contributions from the individual production
processes to the total cross section at
$\rm m_{\rm h}$~=~125.8~GeV/$c^2$ ($\rm m_{\rm A}$~=~250 GeV/$c^2$),
tan$\beta$~=~10 are shown in Table~\ref{table:crosssection}.
The gluon fusion process contributes about 80\% to the total cross
section. This fraction is not sensitive to the Higgs boson mass and tan$\beta$.
\begin{table}[t]
\caption{Production cross sections for the lightest MSSM Higgs boson
for $\rm m_{\rm h}$~=~125.8~GeV/$c^2$
($\rm m_{\rm _A}$~=~250~GeV/$c^2$) and tan$\beta$~=~10 with
maximal stop mixing.}
\centering
\vskip 0.1 in
\small
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
process & \small gg~$\rightarrow$~h
& \small qq~$\rightarrow$~qqh
& \small qq~$\rightarrow$~Wh
& \small qq~$\rightarrow$~Zh
& \small $\rm pp \rightarrow$~b$\overline{\rm b}$h
& \small $\rm pp \rightarrow$~t$\overline{\rm t}$h
& $\sigma_{\rm TOT}$ \\
\hline
$\sigma$ (pb) & 27.3 & 4.17 & 1.59 & 0.64 & 0.72 & 0.32 & 34.1 \\
\hline
$\sigma/\sigma_{\rm TOT}$ & 79\% & 12\% & 4.6\% & 1.8\% & 2.1\% & 0.9 \% & 100 \% \\
\hline
\end{tabular}
\label{table:crosssection}
\end{table}
\vspace{2ex}
In the SM, the K-factor (defined as K = $\sigma_{\rm NLO}/\sigma_{\rm LO}$)
for the gluon fusion process is large, varying between 1.5 and 1.7 \cite{NIMB453}.
In the MSSM, this K-factor depends on tan$\beta$: it is about the same as in the SM
at small tan$\beta$ and closer to unity at large tan$\beta$
\cite{NIMB453}. The K-factors do not depend significantly on the squark mass and are
stable against the loop effects in the gluon fusion mechanism even in the extreme situation
when one of the stop mass eigenstates is light while the other squarks are heavy and decouple
\cite{hep-ph/9603423}.
The K-factor in the associated process qq~$\rightarrow$~Wh is almost independent
of $\rm m_{\rm A}$ and tan$\beta$ and is about 1.3 in both the no-mixing and maximal-mixing scenarios.
\subsection{Decay channels}
Figures \ref{fig:brh_10} and \ref{fig:brh_30} show the branching ratios for
the lightest MSSM
Higgs boson as a function of $\rm m_{\rm A}$ and $\rm m_{\rm h}$ with maximal
stop quark mixing for
tan$\beta$ = 10 and 30, respectively. The branching ratios and decay widths are calculated with the program
HDECAY 3.0~\cite{HDECAY}. The next to leading order (NLO) values are used for
the decay modes throughout this study. The h~$\rightarrow \rm b \overline{\rm b}$ decay channel dominates.
The branching ratios to weak gauge bosons h~$\rightarrow$~ZZ$^{\ast}$ and h~$\rightarrow$~WW$^*$
increase rapidly when $\rm m_{\rm h}$ approaches its maximum value reaching $\sim$ 2\% and $\sim$ 20\%,
respectively, at large $\rm m_{\rm A}$.
For $\rm m_{\rm A}\gsim$~200~GeV/$c^2$ the
branching ratio for the h~$\rightarrow \gamma\gamma$ decay channel is
between one and two per mil. The branching ratios for the h~$\rightarrow \tau^+\tau^-$,
h~$\rightarrow \rm b \overline{\rm b}$ and h~$\rightarrow \mu^+\mu^-$ decay channels remain large
also for $\rm m_{\rm A}\lesssim$~200~GeV/$c^2$, where the lightest Higgs boson
is not SM-like, due to the
enhanced couplings to the down type fermions.
\begin{figure}[t]
\centering
\vskip 0.1 in
\begin{tabular}{cc}
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{hl_br10.eps}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{hl_br30.eps}
\end{minipage}\\
\begin{minipage}{7.5cm}
\centering
\caption{Branching ratios for the lightest MSSM Higgs boson as a
function of
$\rm m_{\rm A}$ and $\rm m_{\rm h}$ for tan$\beta$~=~10
with maximal stop quark mixing.}
\label{fig:brh_10}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\caption{The same as in Fig.~\ref{fig:brh_10} but for tan$\beta$~=~30.}
\label{fig:brh_30}
\end{minipage}
\end{tabular}
\end{figure}
\subsection{Effect of SUSY parameters}
In the MSSM, one of the stop quarks may become much
lighter than the other squarks if mixing between different
squark isospin eigenstates is large. The mixing can be described with the following mass
matrix \cite{mixing_matrix}
\begin{eqnarray}
\left(
\begin{array}{cc}
\rm m_{\tilde{\rm t}_{\rm L}}^2 & \rm m_{\rm top}(\rm A_{\rm t}-\mu\cot\beta) \\
\rm m_{\rm top}(\rm A_{\rm t}-\mu\cot\beta) & \rm m_{\tilde{\rm t}_{\rm R}}^2
\end{array}
\right)
\end{eqnarray}
where $\tilde{\rm t}_{\rm L}$ and $\tilde{\rm t}_{\rm R}$ are the left and right handed
eigenstates of the stop quark and
$\rm A_{\rm t}-\mu\cot\beta \equiv \rm X_{\rm t}$
is the squark mixing
parameter with A$_{\rm t}$ being the trilinear coupling and $\mu$ the higgsino mass
parameter. Mixing in the third generation squark sector may be important,
since, as seen from the off-diagonal terms of the mass matrix,
the squark mixing is proportional to the corresponding quark mass.
For a heavy top quark,
mixing in the stop sector may thus produce considerable splitting between the
mass eigenstates~$\tilde{\rm t}_{\rm 1}$,~$\tilde{\rm t}_{\rm 2}$
\begin{eqnarray}
\rm m_{\tilde{\rm t}_{\rm 1,2}}^2 & = &
\frac{1}{2}(\rm m_{\tilde{\rm t}_{\rm L}}^2+m_{\tilde{\rm t}_{\rm R}}^2)
\mp\frac{1}{2}\sqrt{(\rm m_{\tilde{\rm t}_{\rm L}}^2-
m_{\tilde{\rm t}_{\rm R}}^2)^2+
4 {\rm m}_{\rm top}^2(\rm A_{\rm t}-\mu\cot\beta)^2}
\end{eqnarray}
resulting in one very light and one very heavy stop quark.
In the above relations
${\rm m}_{\tilde{\rm t}_{\rm L}}^2 = {\rm M}_{\tilde{\rm Q}}^2+
{\rm m}_{\rm Z}^2{\rm cos}2\beta({\rm I}_3^{\tilde{\rm t}}-
{\rm e}_{\tilde{\rm t}}\sin^2\theta_{\rm W})
+{\rm m}_{\rm top}^2$ and
${\rm m}_{\tilde{\rm t}_{\rm R}}^2 = {\rm M}_{\tilde{\rm U}}^2+
{\rm m}_{\rm Z}^2\cos2\beta {\rm e}_{\tilde{\rm t}}\sin^2\theta_{\rm W}+
{\rm m}_{\rm top}^2$
where ${\rm M}_{\tilde{\rm Q}}$ and ${\rm M}_{\tilde{\rm U}}$ are the
soft-SUSY breaking scalar
masses, I$_3^{\tilde{\rm t}}$ is the squark weak isospin and e$_{\tilde{\rm t}}$ the
squark charge.
It can be seen that in order to have
large splitting between the two stop eigenstates and therefore a light stop
quark, $\rm X_{\rm t}$ must be large.
For a common scalar mass parameter of 1~TeV, the mass of the lighter stop quark
is of the order of 800~GeV/$c^2$. A light stop quark ($\rm m_{\rm stop}\lesssim$~300~GeV/$c^{2}$)
can be obtained choosing the third generation scalar masses M$_{\tilde{\rm U}}$ and
M$_{\tilde{\rm Q}}$ to be of the order of 500~GeV/$c^2$.
For these parameter values, the mass of the lightest supersymmetric
particle LSP is of the order of 100~GeV/$c^2$, well
above the present experimental limit \cite{lep_mssm}.
In the next sections, the lighter of the two stop mass eigenstates is denoted
simply as a stop quark, ${\rm stop}\equiv\tilde{\rm t}_1$ and its mass $\rm m_{\rm stop}$.
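To make the splitting concrete, the $2\times 2$ mass-squared matrix above can be diagonalized numerically. The following sketch uses illustrative parameter values (they are assumptions for this example, not the scan points of this study) and drops the small D-term contributions to the diagonal entries:

```python
import numpy as np

# Illustrative (assumed) parameter values in GeV; D-terms neglected.
m_top = 175.0
MQ = MU = 500.0              # soft SUSY-breaking scalar masses
At, mu, tan_beta = 1500.0, 300.0, 10.0

Xt = At - mu / tan_beta      # stop mixing parameter X_t = A_t - mu*cot(beta)

mL2 = MQ**2 + m_top**2       # m_{stop_L}^2 without the D-term
mR2 = MU**2 + m_top**2       # m_{stop_R}^2 without the D-term

M2 = np.array([[mL2, m_top * Xt],
               [m_top * Xt, mR2]])

# Eigenvalues of the mass-squared matrix give the physical stop masses.
m_stop1, m_stop2 = np.sqrt(np.linalg.eigvalsh(M2))
print(f"m_stop1 ~ {m_stop1:.0f} GeV, m_stop2 ~ {m_stop2:.0f} GeV")
```

With these numbers the light eigenstate comes out near 150~GeV and the heavy one above 700~GeV, reproducing the strong splitting discussed above; setting the mixing parameter to zero instead leaves both masses close to 530~GeV.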
\begin{figure}[h]
\centering
\vskip 0.1 in
\begin{tabular}{cc}
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{brtanb10c.eps}
\caption{Cross section for the gg~$\rightarrow\rm h$ process
with tan$\beta$~=~10 without stop mixing, with maximal stop
mixing and with light stop quark $\rm m_{\rm stop}$~=~200
GeV/$c^{2}$ for $\mu<$~0 and for $\mu>$~0.}
\label{fig:brsigma1}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{brtanb10a.eps}
\caption{The branching ratio for the
$\rm h \rightarrow\rm \gamma\gamma$ decay
with tan$\beta$~=~10 without stop mixing, with maximal stop
mixing and with light stop quark $\rm m_{\rm stop}$~=~200
GeV/$c^{2}$ for $\mu<$~0 and for $\mu>$~0.}
\label{fig:brsigma2}
\end{minipage}
\end{tabular}
\end{figure}
Since no supersymmetric particles have been found so far,
supersymmetry must be broken: the squarks do not have the same masses as
the quarks, and the cancellation at loop level is less significant.
The results from existing experiments indicate that a scenario with large mixing
in the squark (stop) sector is
possible and more likely than a no-mixing scenario \cite{lep_mssm}.
If the mass of the stop quark is small, of the
order of the top quark mass, the cancellation effects become important.
It was first shown in Ref.~\cite{hep-ph/9806315} that the rate
for the gg~$\rightarrow \rm h \rightarrow \gamma\gamma$ process could be strongly reduced
with large mixing
and with a light stop quark (m$_{\rm stop}\sim\rm m_{\rm top}$).
The top--stop interference leads to a
suppression of the top quark contribution in the loops mediating the
Higgs boson production, since the stop loop
interferes destructively with the top loop and the two
partly cancel. The loop-mediated
Higgs boson decay into photons is also affected, but since the dominant contribution comes
now from a W loop, which interferes
destructively with the top loop, a reduction of the top contribution by
interfering stop loops increases the h$\rightarrow\gamma\gamma$ partial width.
As the W loop dominates in this partial width, the interference effect is smaller
than in the gg~$\rightarrow$~h process dominated by a top quark loop.
In addition to the light stop, there are
contributions from the charged Higgs bosons, sfermions and especially from charginos,
but their net effect on the h$\rightarrow \gamma\gamma$ partial width is small, less
than $\sim$~10\% \cite{EurPhysJ_C1}. At large tan$\beta$ the bottom loop also contributes and even
becomes larger than the top loop contribution \cite{harlander}.
As the reduction of the gg~$\rightarrow$~h partial width is significantly stronger than the enhancement
of the h$\rightarrow\gamma\gamma$ partial width, the rate
for gg~$\rightarrow \rm h \rightarrow \gamma\gamma$ is reduced. For m$_{\rm A}\gsim$~100~GeV/$c^2$,
$\rm A_{\rm t}$~=~1.5~TeV/$c^2$,
m$_{\rm stop}$~=~200~GeV/$c^2$ and $\mu$~=~300~GeV/$c^2$ the rate is reduced by a factor of
$\sim$~10 relative to the no-mixing scenario with heavy SUSY particles. The squark
sector can affect the branching ratios of the lightest Higgs boson
only via this interference phenomenon because
the decays into gauginos are not kinematically allowed.
Figure~\ref{fig:brsigma1} shows the cross section for
the $\rm gg \rightarrow \rm h$ process
and Fig.~\ref{fig:brsigma2} the branching ratios for
the h$\rightarrow\gamma\gamma$ decay channel corrected for
the loop effects for the following scenarios:
no stop mixing, maximal stop mixing, light stop quark
$\rm m_{\rm stop}$~=~200~GeV/$c^2$ with $\mu<$~0
and light stop quark $\rm m_{\rm stop}$~=~200~GeV/$c^2$ with $\mu>$~0. The interference
between the stop and top quarks is clearly visible: the lighter the stop quark, the
stronger the interference and the smaller the cross section.
At large tan$\beta$ the b couplings are enhanced, but the contributions from the sbottom
loops are suppressed compared to the bottom loops
by $({\rm m}_{\rm b}/{\rm m}_{\tilde{\rm b}})^2$ \cite{hep-ph/9603423}. With the LEP lower bound
for the sbottom mass \cite{lep_sbottom}, the contribution is at the per mille level.
In this work the mixing in the sbottom sector is not considered.
\subsection{Search strategies}
In the expected mass range, $\rm m_{\rm h} \lesssim$~135~GeV/$c^2$,
the lightest MSSM Higgs boson h can be searched for
through the following decay channels: h~$\rightarrow \gamma\gamma$, h~$\rightarrow \gamma\rm Z$,
$\rm h \rightarrow \mu^+\mu^-$, h~$\rightarrow \rm b \overline{\rm b}$,
$\rm h \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}$,
h~$\rightarrow$~WW$^* \rightarrow \ell^+\ell^-\nu_{\ell}\nu_{\ell}$ and h~$\rightarrow \tau^+\tau^-$. The searches in the
$\rm h \rightarrow \gamma\gamma$, $\rm h \rightarrow \mu^+\mu^-$ and
$\rm h \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}$
channels are based on the small total width of the
Higgs boson in this mass range (in the SM and MSSM) exploiting the precise photon energy and lepton momentum
measurements for the Higgs boson mass reconstruction
\cite{tdr:ecal,tdr:tracker,tdr:muon}.
The $\rm h \rightarrow \gamma\gamma$ and
$\rm h \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}$
channels are expected to yield their largest reaches in the inclusive production,
dominated by the gluon fusion process.
The $\rm h \rightarrow \gamma\gamma$ channel can be searched for also in
the associated production processes $\rm t\overline{\rm t}\rm h$ and
$\rm Wh$ with a requirement of an isolated lepton
from the $\rm W \rightarrow \ell\nu_{\ell}$ decay \cite{lgamma}. The
signal-to-background ratios are larger but the event rates are
smaller than for the inclusive production.
For the $\rm h \rightarrow \rm b \overline{\rm b}$ decay channel, suppression of the
QCD multi-jet background is possible only in the associated production
processes $\rm t \overline{\rm t}\rm h$ and Wh with a requirement of an
isolated lepton from the $\rm W \rightarrow \ell\nu_{\ell}$ decay \cite{volker}.
The $\rm h \rightarrow \gamma\gamma$, $\rm h \rightarrow \mu^+\mu^-$,
$\rm h \rightarrow \rm WW^* \rightarrow \ell^+\ell^-\nu_{\ell}\nu_{\ell}$, $\rm h \rightarrow \tau^+\tau^-$ and possibly
h~$\rightarrow \rm b \overline{\rm b}$ decay channels
can be searched for also in the weak gauge boson fusion production process
$\rm qq \rightarrow \rm qqh$. In this production mechanism tagging of the forward jets and vetoing
on central hadronic jets can be used to efficiently suppress the QCD multi-jet, W+jets and
$\rm t\overline{\rm t}$ backgrounds \cite{sasha}.
The $\rm h \rightarrow \tau^+\tau^-$ channel is particularly interesting in the MSSM as the couplings to
down type fermions are tan$\beta$ enhanced relative to SM couplings. Due to the tiny branching
ratios, the h~$\rightarrow \gamma\rm Z$ and
$\rm h \rightarrow \mu^+\mu^-$ decay channels may be exploited only with integrated luminosities
exceeding 100~$\rm fb^{-1}$.
\vspace{ 3mm}
\section{Inclusive production channels}
\subsection{$\boldmath{\rm h \rightarrow \gamma\gamma}$}
\begin{figure}[h]
\centering
\vskip 0.1 in
\begin{tabular}{cc}
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{inc_nomix_lo.eps}
\caption{Isorate (cross section times branching ratio) curves for
the inclusive $\rm h \rightarrow \gamma\gamma$ channel in the
no-mixing scenario with LO cross sections. The
isomass curves for the lightest MSSM Higgs boson are shown with
dashed lines.}
\label{fig:nomix_gamma_lo}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{inc_nomix_nlo.eps}
\caption{Isorate (cross section times branching ratio) curves for
the inclusive $\rm h \rightarrow \gamma\gamma$ channel in the
no-mixing scenario with NLO cross sections. The
isomass curves for the lightest MSSM Higgs boson are shown with
dashed lines.}
\label{fig:nomix_gamma_nlo}
\end{minipage}
\end{tabular}
\vskip 0.1 in
\centering
\begin{tabular}{cc}
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{inc_maxmix_lo.eps}
\caption{Isorate curves for
the inclusive $\rm h \rightarrow \gamma\gamma$ channel in the
maximal-mixing scenario with LO cross sections. }
\label{fig:maxmix_gamma_lo}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{inc_maxmix_nlo.eps}
\caption{Isorate curves for
the inclusive $\rm h \rightarrow \gamma\gamma$ channel in the
maximal-mixing scenario with NLO cross sections. }
\label{fig:maxmix_gamma_nlo}
\end{minipage}
\end{tabular}
\end{figure}
The isorate (cross section times branching ratio)
curves for the $\rm h \rightarrow \gamma\gamma$ channel in the inclusive production
in the no-mixing scenario are shown in
Fig.~\ref{fig:nomix_gamma_lo} with LO cross sections and in Fig.~\ref{fig:nomix_gamma_nlo} with
NLO cross sections. The isorate curves for the inclusive production in the maximal-mixing scenario are
shown in Fig.~\ref{fig:maxmix_gamma_lo} with LO and in Fig.~\ref{fig:maxmix_gamma_nlo}
with NLO cross sections. Due to the
larger Higgs boson mass $\rm m_{\rm h}$ in the maximal-mixing
scenario for fixed $\rm m_{\rm A}$ and tan$\beta$
the cross section is smaller than that in the no-mixing scenario. This
decrease is compensated by a larger $\rm h \rightarrow \gamma \gamma$ branching ratio
resulting in a $\sim$~3\% lower production (cross section times branching ratio)
rate relative to the no-mixing scenario.
Although in this scenario the stop quark is rather heavy,
$\rm m_{\rm stop} \simeq 800$~GeV/$c^2$, the effect of the
virtual stop loops suppresses the cross section by
approximately 10\% relative to the no-mixing scenario. A negative higgsino
mass parameter would yield a further small suppression.
Figure \ref{fig:expcurves} shows the cross section times branching ratio
required for a 5$\sigma$
statistical significance in the inclusive $\rm h \rightarrow \gamma\gamma$ channel as a
function of the invariant two-photon mass for 30 and 100~$\rm fb^{-1}$
in the CMS detector \cite{tdr:ecal}.
The NLO cross
sections are assumed for the signal and backgrounds.
In the mass range of the lightest MSSM Higgs boson,
$\rm m_{\rm h} \lesssim$~127~GeV/$c^2$, production rates of at least 55 and 33~fb are needed
to obtain a 5$\sigma$ statistical significance with integrated luminosities of 30 and 100~$\rm fb^{-1}$,
respectively.
In the no-mixing scenario with $\rm m_{\rm h} \lesssim$~114~GeV/$c^2$ the minimal production rates
required for these luminosities are 71 and 42~fb, respectively.
Larger rates are needed at lower mass values due to the increasing backgrounds.
With these detector sensitivities a 5$\sigma$-discovery potential is expected
for $\rm m_{\rm A} \gsim$~200 and 300~GeV/$c^2$
with integrated luminosities of 100 and 30~$\rm fb^{-1}$, respectively. The reach
is approximately the same in the no-mixing and maximal-mixing scenario.
\begin{figure}[t]
\centering
\vskip 0.1 in
\begin{tabular}{cc}
\begin{minipage}{7.5cm}
\includegraphics[height=75mm,width=80mm]{per5sigma_nlo_30fb.eps}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{ggh_stop200.eps}
\end{minipage}
\\
\begin{minipage}{7.5cm}
\caption{Cross section times branching ratio to give a
5$\sigma$ statistical significance
for the inclusive $\rm H \rightarrow \gamma\gamma$ channel in the SM
for 30 and 100 fb$^{-1}$ assuming NLO
cross sections for the backgrounds \cite{tdr:ecal}.}
\label{fig:expcurves}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\caption{Isorate curves for the $\rm gg \rightarrow \rm h \rightarrow \gamma\gamma$ channel
with a light stop quark $\rm m_{\rm stop}~=~200$~GeV/$c^2$
with LO cross sections.}
\label{fig:gfusion_stop200}
\end{minipage}
\end{tabular}
\begin{tabular}{cc}
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{inc_stop200.eps}
\caption{Isorate curves for the inclusive $\rm h \rightarrow \gamma\gamma$ channel
with a light stop quark $\rm m_{\rm stop}$~=~200~GeV/$c^2$ with NLO cross sections.}
\label{fig:inc_stop200}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{inc_stop300.eps}
\caption{Isorate curves for the inclusive $\rm h \rightarrow \gamma\gamma$ channel
with a light stop quark $\rm m_{\rm stop}$~=~300~GeV/$c^2$ with NLO cross sections.}
\label{fig:inc_stop300}
\end{minipage}
\end{tabular}
\end{figure}
Figures \ref{fig:gfusion_stop200}, \ref{fig:inc_stop200} and
\ref{fig:inc_stop300} show the isorate curves
for the $\rm h \rightarrow \gamma\gamma$ channel in the light-stop
scenario.
Figure \ref{fig:gfusion_stop200} shows the isorate curves for the
dominant gluon fusion process, which is affected most by a light
stop quark.
Figure~\ref{fig:inc_stop200} shows the isorate curves
in the inclusive
production for a very light stop quark, m$_{\rm stop}~=~200$ GeV/$c^2$.
Since the gluon fusion process is the dominant production mechanism,
the effect of a light stop on the inclusive production is large, too.
A discovery in the inclusive $\rm h \rightarrow \gamma\gamma$ channel with such a
light stop quark
could be possible only for $\rm m_{\rm A} \gg $~500 GeV/$c^2$
for integrated luminosities exceeding 100~$\rm fb^{-1}$.
Figure \ref{fig:inc_stop300} shows the isorate curves for
the inclusive production with m$_{\rm stop}~=~300$ GeV/$c^2$.
For this value of m$_{\rm stop}$ a discovery is possible with 100~$\rm fb^{-1}$
in part of the parameter space for about
$\rm m_{\rm A} \gsim$~400~GeV/$c^2$ and
tan$\beta \gsim$~10. For $\rm m_{\rm stop}\gtrsim$~400~GeV/$c^2$ the
interference
effect is already small and the rate is close to that of the no-mixing and
maximal-mixing scenarios.
\subsection{$\boldmath{\rm h \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}}$}
The four-lepton channel $\rm H \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}$
has been shown to be the major discovery channel over a large mass range in the SM \cite{summary_note}.
In the MSSM, the heavier scalar H could be searched for in the four-lepton channel at small tan$\beta$.
For $\rm m_{\rm H}\lesssim 2 \rm m_{\rm Z}$, where the detector resolution dominates, the
discovery potential could be obtained from that for the SM Higgs boson, while for
$\rm m_{\rm H}\gsim 2 \rm m_{\rm Z}$ dedicated studies are needed due to the difference
in the total Higgs boson widths between the SM and the MSSM in this region. For the lighter
scalar h, a discovery could be possible close to the maximal possible value of $\rm m_{\rm h}$ at
large tan$\beta$ and $\rm m_{\rm A}$.
The discovery potential is strongly dependent on the lowest possible mass
value accessible in the (pure) SM scenario, due to the rapidly decreasing $\rm h \rightarrow \rm ZZ^*$ branching ratio.
The CMS studies have shown that this value could be as low as $\rm m_{\rm H} \sim$~120~GeV/$c^2$
with an integrated luminosity of 100~$\rm fb^{-1}$ combining the electron and muon channels
\cite{4lepton,lassila}. Therefore a significant region at large tan$\beta$ could be covered
in the maximal-mixing scenario while no sensitivity is possible in the MSSM
in the scenarios where the mass of the lighter scalar is below $\sim$~120~GeV/$c^2$.
\section{Associated production channels}
The isorate curves for the $\rm h \rightarrow \gamma\gamma$ channel in the associated production
combining the $\rm qq \rightarrow \rm Wh$ and $\rm q\overline{\rm q}/ \rm gg \rightarrow \rm t \overline{\rm t}\rm h$ processes
are shown in Fig.~\ref{fig:associated_maxmix_lo}
in the maximal-mixing scenario. The branching ratio for the $\rm W \rightarrow \ell\nu_{\ell}$ decay is included.
The $\rm qq \rightarrow \rm Wh$ process dominates the production and is large at small tan$\beta$, enhancing the
total rate in this region.
The cross section of the associated
production is not sensitive to the mixing and stop mass effects. The production rate
can only be affected through the loop-mediated $\rm h \rightarrow \gamma\gamma$ decay process.
In the SM, the $\rm H \rightarrow \gamma\gamma$ decay channel has been shown to be
accessible in the associated
$\rm qq \rightarrow \rm WH$ production process with an integrated luminosity of
100~$\rm fb^{-1}$ \cite{lgamma}.
The total production rate required for a 5$\sigma$ statistical significance
in the $\rm H \rightarrow \gamma\gamma$ decay channel is between 0.8 and 0.6~fb
for 110~$<\rm m_{\rm H}<$~127~GeV/$c^2$. In the MSSM such a rate is expected
only in the region of large $\rm m_{\rm A}$ and small tan$\beta$ as can be seen from
Fig.~\ref{fig:associated_maxmix_lo}. The
$\rm H \rightarrow \rm b\overline{\rm b}$ decay channel has been investigated in the associated
$\rm q\overline{\rm q} \rightarrow \rm t \overline{\rm t}\rm H$ process \cite{volker}.
A 5$\sigma$ statistical significance is reached in the SM with
integrated luminosities exceeding 40~$\rm fb^{-1}$ around $\rm m_{\rm H} \sim $~120~GeV/$c^2$ \cite{volker}.
Due to the enhanced $\rm h \rightarrow \rm b\overline{\rm b}$ couplings at large tan$\beta$,
a significant region has been shown to be covered with this decay channel in the MSSM \cite{volker2}.
\begin{figure}[h]
\centering
\vskip 0.1 in
\begin{tabular}{cc}
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{associated_maxmix_lo.eps}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{VBF_hgammagamma_maxmix.eps}
\end{minipage}\\
\begin{minipage}{7.5cm}
\centering
\caption{Isorate curves for h$\rightarrow\gamma\gamma$ in the
associated production processes qq~$\rightarrow$~Wh and
$\rm q\overline{\rm q}/\rm gg \rightarrow \rm t \overline{\rm t}h$
with $\gamma\gamma\ell$ final states. Maximal stop
mixing and LO cross sections assumed.}
\label{fig:associated_maxmix_lo}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\caption{Isorate curves for weak gauge boson fusion qq~$\rightarrow$~qqh,
h~$ \rightarrow \gamma\gamma$ with maximal stop mixing.}
\label{fig:wwhgg}
\end{minipage}
\end{tabular}
\end{figure}
\section{Weak gauge boson fusion production channels}
The SM Higgs boson is expected to be accessible in the weak gauge boson
fusion production process
$\rm qq \rightarrow \rm qqH$ for m$_{\rm H}\lesssim$~150~GeV/$c^2$ with the
$\rm H \rightarrow \gamma\gamma$ \cite{dubinin},
$\rm H \rightarrow \rm WW^* \rightarrow \ell^+\ell^-\nu_{\ell}\nu_{\ell}$ \cite{qqh_ww} and
$\rm H \rightarrow \tau^+\tau^-$ \cite{sasha} decay channels
for integrated luminosities exceeding $\sim$~60~$\rm fb^{-1}$.
For the $\rm H \rightarrow \gamma \gamma$ decay channel the total rate required
with 60~$\rm fb^{-1}$ is about 8~fb for
$\rm m_{\rm H}$~=~115 GeV/$c^2$ and 6.6~fb for $\rm m_{\rm H}$~=~127~GeV/$c^2$
\cite{dubinin}.
The $\rm H \rightarrow \tau^+\tau^-$ channel has been studied with lepton-plus-jet final states \cite{sasha}.
The total rate required
for a 5$\sigma$ statistical significance with an integrated luminosity of
30~$\rm fb^{-1}$ varies from 0.4 to 0.28~fb for
115~$<\rm m_{\rm H}<$~127~GeV/$c^2$ and from 0.28 to~0.19 fb in the interval of
127~$<\rm m_{\rm H}<$~145~GeV/$c^2$ for the searches of the SM-like heavy scalar H.
Figures \ref{fig:wwhgg} and \ref{fig:hl_tautau} show the isorate curves for the $\rm h \rightarrow \gamma \gamma$
and $\rm h \rightarrow \tau^+\tau^-$ decay channels in the weak gauge boson fusion process
in the maximal-mixing scenario with LO cross sections.
Figure \ref{fig:hh_tautau} shows the corresponding isorate curves for the heavy scalar
MSSM Higgs boson H in the $\rm H \rightarrow \tau^+\tau^-$ decay channel.
As can be
seen from Fig.~\ref{fig:hl_tautau}, the $\rm h \rightarrow \tau^+\tau^-$ channel could be accessible in
a large part of the parameter space already with low integrated luminosities. A sensitivity
at large $\rm m_{\rm A}$ and tan$\beta$ is expected also in the
$\rm h \rightarrow \rm WW^* \rightarrow \ell^+\ell^-\nu_{\ell}\nu_{\ell}$
decay channel because the studies in the SM framework indicate a 5$\sigma$ discovery for
$\rm m_{\rm H} \gsim$~120~GeV/$c^2$ \cite{qqh_ww}.
\begin{figure}[h]
\centering
\vskip 0.1 in
\begin{tabular}{cc}
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{VBF_h_maxmix.eps}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\includegraphics[height=75mm,width=80mm]{VBF_H_maxmix.eps}
\end{minipage}\\
\begin{minipage}{7.5cm}
\centering
\caption{Isorate curves for h$\rightarrow\tau^+\tau^-$ in the
weak gauge boson fusion qq~$\rightarrow$~qqh. Maximal stop
mixing and LO cross sections are assumed.}
\label{fig:hl_tautau}
\end{minipage}
&
\begin{minipage}{7.5cm}
\centering
\caption{Isorate curves for H$\rightarrow\tau^+\tau^-$ in the
weak gauge boson fusion qq~$\rightarrow$~qqH in the region of
the (m$_{\rm A},\tan\beta$) parameter space where H
is SM-like. Maximal stop mixing and LO cross sections
are assumed.}
\label{fig:hh_tautau}
\end{minipage}
\end{tabular}
\end{figure}
\section{Discovery potential}\label{sec:discovery_ranges}
\pagestyle{empty}
\begin{figure}[p]
\centering
\includegraphics[height=100mm,width=150mm]{discovery_h_30fb.eps}
\caption{The 5$\sigma$-discovery potential of CMS for the lightest MSSM Higgs
boson as a function of $\rm m_{\rm A}$ and tan$\beta$ for 30~fb$^{-1}$ with
maximal stop mixing. The reach in the
$\rm qq \rightarrow \rm qqh$, $\rm h \rightarrow \gamma\gamma$ channel
is shown for 60~fb$^{-1}$. The discovery potential for the $\rm t\overline{\rm t}\rm h$,
$\rm h \rightarrow$ b$\overline{\rm b}$ channel
for 60~fb$^{-1}$ is taken from Ref.~\cite{volker2}. The reach of the
$\rm H \rightarrow \tau^+\tau^-$ decay channel of the heavy scalar in the $\rm qq \rightarrow \rm qqH$
production process is also shown.}
\label{fig:5sigma30fb}
\centering
\includegraphics[height=100mm,width=150mm]{discovery_h_100fb.eps}
\caption{The 5$\sigma$-discovery potential for the lightest MSSM Higgs
boson as a function of $\rm m_{\rm A}$ and tan$\beta$ for 100~fb$^{-1}$ with
maximal stop mixing. The discovery potential for the $\rm t\overline{\rm t}\rm h$,
$\rm h \rightarrow$ b$\overline{\rm b}$ channel is taken from Ref.~\cite{volker2}.}
\label{fig:5sigma100fb}
\end{figure}
Figures \ref{fig:5sigma30fb} and \ref{fig:5sigma100fb}
show the discovery potential of CMS for the lightest MSSM
Higgs boson as a function of $\rm m_{\rm A}$ and tan$\beta$ assuming maximal stop mixing,
$\rm m_{\rm top}$~=~175~GeV/$c^2$ and $\rm M_{\rm SUSY}$~=~1~TeV/$c^2$, for 30~$\rm fb^{-1}$
and 100~$\rm fb^{-1}$, respectively.
The $\rm h \rightarrow \gamma\gamma$ and
$\rm h \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}$ decay channels
in the inclusive production are shown with NLO cross sections.
With an integrated luminosity of 100~$\rm fb^{-1}$ these
channels cover a major part of the MSSM parameter space, for $\rm m_{\rm A}\gsim$~200~GeV/$c^2$
and $\rm m_{\rm A}\gsim$~250~GeV/$c^2$, respectively.
In the associated $\rm qq \rightarrow \rm Wh$ production the $\rm h \rightarrow \gamma\gamma$ channel
covers only a small region at
large $\rm m_{\rm A}$ and small ($\lesssim$~5) tan$\beta$ values. The
sensitivity for the $\rm h \rightarrow$ b$\overline{\rm b}$ channel in the associated
$\rm q\overline{\rm q}/ \rm gg \rightarrow \rm t\overline{\rm t}\rm h$ production with
60~$\rm fb^{-1}$ from Ref.~\cite{volker2}
is also shown in the figure.
The reach in the $\rm h \rightarrow \gamma\gamma$ and $\rm h \rightarrow \tau^+\tau^-$ decay channels in the
weak gauge boson fusion production is
shown in Fig.~\ref{fig:5sigma30fb} for 60 and 30~$\rm fb^{-1}$, respectively.
In this production mode, the region tan$\beta \gsim$~5 can be covered
with the $\rm h \rightarrow \gamma\gamma$ channel for
$\rm m_{\rm A}\gsim$~350~GeV/$c^2$ and with the $\rm h \rightarrow \tau^+\tau^-$ channel
for $\rm m_{\rm A}\gsim$~120~GeV/$c^2$.
The $\rm H \rightarrow \tau^+\tau^-$ decay channel of the heavy scalar, shown also in
Fig.~\ref{fig:5sigma30fb}, covers the region
$\rm m_{\rm A}\lesssim$~125~GeV/$c^2$ in the weak gauge boson fusion.
The region 90~GeV/$c^2 \lesssim \rm m_{\rm A}\lesssim$~130~GeV/$c^2$ at large tan$\beta$,
where the lightest Higgs boson is no longer SM-like, is
outside the reach of the channels discussed in this paper.
To explore this region, the $\rm h \rightarrow \mu^+\mu^-$ and $\rm h \rightarrow \tau^+\tau^-$ decay channels
can be used in the associated production with b quarks,
$\rm q\overline{\rm q}/\rm gg \rightarrow \rm b \overline{\rm b}\rm h$, exploiting the enhanced
couplings to down type fermions in the MSSM at large tan$\beta$.
\vspace{ 3mm}
\section{Conclusions}
\vspace{ 3mm}
The production of the lightest MSSM Higgs boson h was studied, and the effects of
SUSY parameters were discussed.
The discovery potential was evaluated for CMS in the maximal-mixing scenario
for the inclusive $\rm h \rightarrow\gamma\gamma$ channel, for the $\rm h \rightarrow\gamma\gamma$
channel in the associated production Wh and $\rm t\overline{\rm t}\rm h$, for
the $\rm h \rightarrow \tau^+\tau^-$ channel in the weak gauge boson fusion and,
for the first time, for the $\rm h \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}$
channel at large tan$\beta$. Consequences of a light stop quark were shown
for the expected discovery regions.
Already with an integrated luminosity of 30~fb$^{-1}$ the parameter space
$\rm m_{\rm A}\gsim$~150~GeV/$c^2$ and tan$\beta \gsim$~5,
apart from a small region at large $\rm m_{\rm A}$ and tan$\beta$,
is covered with the
$\rm h \rightarrow \tau^+\tau^-$ decay channel in the weak gauge boson fusion qq~$\rightarrow~\rm qqh$.
The reach with the $\rm h \rightarrow\gamma\gamma$ decay channel with 30~fb$^{-1}$
is for $\rm m_{\rm A}\gsim$~300~GeV/$c^2$
in the inclusive production and for $\rm m_{\rm A}\gsim$~350~GeV/$c^2$ in the
$\rm qq \rightarrow qqh$ production process.
With 60~fb$^{-1}$ the parameter space 150~$\lesssim \rm m_{\rm A}\lesssim$~400~GeV/$c^2$
($\rm m_{\rm A}\gsim$~150~GeV/$c^2$ for tan$\beta \lesssim$~5)
is covered with the
$\rm h \rightarrow\rm b \overline{\rm b}$ decay channel in the
$\rm q\overline{\rm q}/\rm gg \rightarrow \rm t\overline{\rm t}\rm h$ production process.
With the large integrated luminosity of 100~fb$^{-1}$
the inclusive $\rm h \rightarrow\gamma\gamma$ channel yields a
5$\sigma$-discovery for $\rm m_{\rm A}\gsim$~200~GeV/$c^2$ and the
$\rm h \rightarrow \rm ZZ^* \rightarrow \ell^+\ell^-\ell^{\prime +}\ell^{\prime -}$ channel
for $\rm m_{\rm A}\gsim$~250~GeV/$c^2$, tan$\beta \gsim$~5.
The effects of loop corrections to the cross sections and branching ratios
were studied in a scenario with large mixing and light stop quark. The
consequences of the stop-top interference effects were shown for the
$\rm h \rightarrow\gamma\gamma$ decay channel in the gluon fusion and in the inclusive
production. The reduction
of the total production rate was found to be significant for
$\rm m_{\rm stop}\lesssim$~300~GeV/$c^2$. For
$\rm m_{\rm stop}\lesssim$~200~GeV/$c^2$ the
sensitivity in the inclusive $\rm h \rightarrow\gamma\gamma$ channel could be entirely lost.
In this scenario the production rate
is slightly enhanced for the associated production processes
$\rm q\overline{\rm q}/\rm gg \rightarrow \rm t\overline{\rm t}\rm h$ and $\rm qq \rightarrow \rm W \rm h$
and in the weak gauge boson fusion $\rm qq \rightarrow qqh$ process due to the
positive interference effects on the $\rm h \rightarrow \gamma\gamma$ decay width.
\section{Acknowledgments}
The authors would like to thank Michael Spira for helpful comments and for his efforts
in developing the program HIGLU compatible with the other programs used in this work.
P.S. and S.L. would also like to thank Katri Huitu for helpful discussions.
\pagestyle{plain}
\section{Introduction}\label{sec:introduction}
Complex networks have become a powerful tool for studying complex systems, whose nodes represent elements and edges describe their interactions~\cite{Ne03,Ba16}. In most existing work, complex systems are studied by using network models that only capture pairwise relationships among system elements. However, it was shown~\cite{BeAbScJaKl18,SaCaDaLa18} that in many realistic scenarios, the system structure involves interactions taking place among more than two entities at a time~\cite{BeGlLe16, GrBaMiAl17, LaPeBaLa19}, which are often called higher-order interactions or simplicial interactions. For example, in a scientific collaboration network~\cite{PaPeVa17}, the $q$ authors of a paper form a $q$-clique, which cannot be described by pairwise interactions, but is more adequately represented by a simplicial structure. Other examples involving simplicial interactions include correlations in neuronal spiking activities~\cite{GiPaCuIt15,ReNoScect17}, interactions among proteins~\cite{WuOtBa03}, and so on.
The higher-order interactions in complex systems can be modeled via a collection of simplicial complexes~\cite{CoBi17,PeBa18}. As generalized network structures, simplicial complexes are not only formed by nodes and links but also by triangles, tetrahedra, and other cliques. They have thus become popular for modeling complex systems involving interactions among groups~\cite{GiGhBa16, CoBi17,PeBa18, CoBi18, daBiGiDoMe18,LaPeBaLa19,QiYiZh19}. The models generated by simplicial complexes can simultaneously display scale-free behavior, small-world properties, and finite spectral dimensions~\cite{BiRa17}. In addition to describing higher-order interactions of complex systems, simplicial structure can also be adopted to study the impact of nonpairwise interactions on the collective dynamics of such systems. Recently, it was demonstrated that even three-way interactions can give rise to a host of novel phenomena that are unexpected if only pairwise interactions are considered, for example, the Berezinskii-Kosterlitz-Thouless percolation transition~\cite{BiZi18}, abrupt desynchronization~\cite{SkAr19}, and an abrupt phase transition in epidemic spreading~\cite{MaGoAr20}.
A simplicial complex is a collection of simplices glued along their faces, where a $q$-dimensional simplex is a $(q+1)$-clique. More recently, inspired by simplicial complexes, a deterministic network model was proposed to describe higher-order interactions, based on an edge operation~\cite{WaYiXuZh19}. Given a graph, the edge operation is defined as follows: for each edge, create a $q$-clique and then connect all its $q$ nodes to both ends of the edge. Iteratively applying this edge operation to a $(q+2)$-clique generates a model for complex networks with higher-order interactions characterized by $q$. Since the resulting networks consist of simplices, they are called simplicial networks. They exhibit properties similar to those of simplicial complexes, including scale-free small-world characteristics and a finite $q$-dependent spectral dimension, highlighting the role of simplicial interactions. Moreover, the normalized Laplacian spectrum of the model can be explicitly determined. It thus serves as an exactly solvable model for simplicial interactions, on which dynamical processes (e.g., random walks) can be studied analytically to shed light on the effect of simplicial interactions.
In this paper, we provide an in-depth study on the properties of resistance distances in simplicial networks, which have found a large variety of applications and thus attracted considerable attention~\cite{SpSr11,DoBu12,YoScLe15,ThYaNa18,DoSibu18,ShZh19,SoHiLi19}. We first formulate recursive expressions for some related matrices, based on which we derive evolution relations of two-node resistance distances, expressing the resistance distance between two nodes in the current network in terms of those of the node pairs in the previous generation. We then provide explicit expressions for Kirchhoff index, additive degree-Kirchhoff index, and multiplicative degree-Kirchhoff index for simplicial networks. We show that the average resistance distance converges to a $q$-dependent constant. Thus, when studying complex systems with higher-order organizations, it is necessary to take into account their simplicial interactions. This work provides rich insights for understanding real systems with simplicial interactions.
\section{Preliminaries}\label{RanWalk}
In this section, we give a brief introduction to some basic concepts related to graphs, graph Laplacian, and resistance distances.
\subsection{Graph and Matrix Notation}
Let $\mathcal{G}=(\mathcal{V},\,\mathcal{E})$ be a connected graph with node set $\mathcal{V}$ and edge set~$\mathcal{E} \subset \mathcal{V}\times \mathcal{V}$, for which the number of nodes is $N = |\mathcal{V}|$ and the number of edges is $M=|\mathcal{E}|$. Then, the mean degree of all nodes in $\mathcal{G}$ is $2M /N$.
The $N$ nodes in graph $\mathcal{G}$ will be labeled by $1,2,3,\ldots, N$.
The adjacency matrix $A=(a_{ij})_{N \times N}$ of $\mathcal{G}$ captures the adjacency relation among the $N$ nodes, where the entry at row $i$ and column $j$ is defined as follows: $a_{ij}=1$ if the two nodes $i$ and $j$ are connected by an edge in $\mathcal{G}$, and $a_{ij} = 0$ otherwise. For a node $i \in \mathcal{V}$, let $\mathcal{N}(i) = \{x\,|\,(x,i)\in \mathcal{E}\}$ denote the set of its neighboring nodes. Then, the degree of node $i$ is $d_i=\sum_{j \in \mathcal{N}(i)} a_{ij}= \sum_{j=1}^{N} a_{ij}$. Let $D$ denote the diagonal degree matrix of $\mathcal{G}$, whose $i$th diagonal entry is $d_i$ and whose off-diagonal entries are all zero. The Laplacian matrix of $\mathcal{G}$ is defined as $L = D - A$.
For a real symmetric square matrix $X$, not necessarily invertible, one can define its $\{1\}$-inverse~\cite{Ti94}. A matrix $Y$ is called a $\{1\}$-inverse of $X$ if and only if $XYX = X$.
Let $X^{\dag}$ denote a $\{1\}$-inverse of $X$. The following lemma gives a $\{1\}$-inverse of a block matrix $X$~\cite{Li19}.
\begin{lemma}\label{LemmaBlock1inv}
For a block matrix
$X = \left(
\begin{array}{cc}
A & B\\
B^{\top} & C \\
\end{array}
\right)
$,
where $C$ is nonsingular, if the Schur complement $S = A-BC^{-1}B^{\top}$ has a $\{1\}$-inverse $S^{\dag}$,
then
\begin{align}
X^{\dag} = \left(
\begin{array}{cc}
S^{\dag} & -S^{\dag}BC^{-1}\\
-C^{-1}B^{\top}S^{\dag} & C^{-1}B^{\top}S^{\dag}BC^{-1} + C^{-1}
\end{array}
\right)\notag
\end{align}
is a $\{1\}$-inverse of $X$.
\end{lemma}
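As a sanity check (not part of the formal development), Lemma~\ref{LemmaBlock1inv} can be verified numerically on a small singular symmetric matrix such as a graph Laplacian, taking the Moore-Penrose pseudoinverse as the required $\{1\}$-inverse of the Schur complement $S$. The variable names below are our own illustrative choices.

```python
import numpy as np

# Laplacian of the 4-cycle 0-1-2-3-0: symmetric and singular.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[u, v] = A[v, u] = 1.0
X = np.diag(A.sum(1)) - A

k = 2                                  # alpha block: first k rows/columns
Aa, B, C = X[:k, :k], X[:k, k:], X[k:, k:]
Ci = np.linalg.inv(C)                  # C is nonsingular here
S = Aa - B @ Ci @ B.T                  # Schur complement
Sd = np.linalg.pinv(S)                 # the pseudoinverse is always a {1}-inverse
Xd = np.block([[Sd, -Sd @ B @ Ci],
               [-Ci @ B.T @ Sd, Ci @ B.T @ Sd @ B @ Ci + Ci]])
assert np.allclose(X @ Xd @ X, X)      # Xd is indeed a {1}-inverse of X
```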
\subsection{Resistance Distances and Graph Invariants}
For any graph $\mathcal{G}=(\mathcal{V},\,\mathcal{E})$, if we replace each edge in $\mathcal{E}$ by a unit resistor, we obtain an electrical network~\cite{DoSn84}. For a pair of nodes $i$ and $j$ ($i \neq j$) of $\mathcal{G}$,
the effective resistance $\Omega_{ij}$ between them is defined as the potential difference between $i$ and $j$ when a unit current from $i$ to $j$ is maintained in the corresponding electrical network. In the case $i = j$, $\Omega_{ij}$ is defined to be zero. The effective resistance $\Omega_{ij}$ is called the resistance distance~\cite{KlRa93} between $i$ and $j$ of graph $\mathcal{G}$.
For a graph $\mathcal{G}$, the effective resistance between any node pair can be represented in terms of the elements of any \{1\}\textendash inverse of its Laplacian matrix~\cite{Ba99}.
\begin{lemma}\label{efpro1}
For a graph $\mathcal{G}$, let $L^{\dagger}_{ij}$ denote the $(i,j)$th entry of any \{1\}\textendash inverse $L^{\dagger}$ of its Laplacian matrix $L$. Then, for any pair of nodes $i,j\in \mathcal{V}$, the effective resistance $\Omega_{ij}$ can be obtained from the elements of $L^{\dagger}$ as
\begin{equation}
\Omega_{ij}=L^{\dagger}_{ii}+L^{\dagger}_{jj}-L^{\dagger}_{ij}-L^{\dagger}_{ji}.
\end{equation}
\end{lemma}
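Lemma~\ref{efpro1} is easy to check numerically with the Moore-Penrose pseudoinverse, which is one particular $\{1\}$-inverse of $L$. A minimal sketch for a 4-node path, where the resistances are known from the series rule:

```python
import numpy as np

# Path 0-1-2-3: unit resistors in series, so Omega(0,3) = 3 and Omega(0,2) = 2.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
P = np.linalg.pinv(np.diag(A.sum(1)) - A)   # a {1}-inverse of the Laplacian
omega = lambda i, j: P[i, i] + P[j, j] - P[i, j] - P[j, i]
assert np.isclose(omega(0, 3), 3.0)
assert np.isclose(omega(0, 2), 2.0)
assert np.isclose(omega(1, 2), 1.0)
```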
The properties of resistance distance have been extensively studied, and various sum rules have been established~\cite{Kl02}.
\begin{lemma}\label{Foster}(Foster's First Theorem~\cite{FoRo1949}).
For a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ with $N$ nodes and $M=|\mathcal{E}|$ edges, the sum of resistance distances over all $M$ pairs of adjacent nodes is $N-1$, that
is,
\begin{equation}
\sum_{\substack{i<j\\(i,j)\in\mathcal{E}}}\Omega_{ij}=N-1.
\end{equation}
\end{lemma}
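Foster's first theorem can be confirmed numerically on small graphs, computing all resistances via the Laplacian pseudoinverse (a quick illustrative check, not part of the paper's derivation):

```python
import numpy as np
from itertools import combinations

def edge_resistance_sum(edges, n):
    # Sum of effective resistances over all adjacent node pairs.
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    P = np.linalg.pinv(np.diag(A.sum(1)) - A)
    d = np.diag(P)
    Om = d[:, None] + d[None, :] - 2 * P
    return sum(Om[u, v] for u, v in edges)

# Foster's first theorem: the sum equals N - 1 for any connected graph.
assert np.isclose(edge_resistance_sum(list(combinations(range(5), 2)), 5), 4.0)   # K_5
assert np.isclose(edge_resistance_sum([(i, (i + 1) % 6) for i in range(6)], 6), 5.0)  # C_6
```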
Lemma~\ref{Foster} was later generalized by Foster himself in~\cite{Fo61}; the generalization is known as Foster's second theorem. Further extensions of both theorems were provided in~\cite{ThYaNa18}.
\begin{lemma}\label{basic}(Sum rule~\cite{Ch10}).
For any two different nodes $i$ and $j$ in a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$,
\begin{equation}
d_{i}\Omega_{ij}+\sum_{k\in{\mathcal{N}(i)}}(\Omega_{ik}-\Omega_{jk})=2.
\end{equation}
\end{lemma}
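The sum rule holds for every ordered pair of distinct nodes, which can be checked exhaustively on a small asymmetric graph (an illustrative sketch using the "paw" graph):

```python
import numpy as np

# "Paw" graph: triangle {0,1,2} with a pendant node 3 attached to node 0.
A = np.zeros((4, 4))
for u, v in [(0, 1), (0, 2), (1, 2), (0, 3)]:
    A[u, v] = A[v, u] = 1.0
P = np.linalg.pinv(np.diag(A.sum(1)) - A)
d = np.diag(P)
Om = d[:, None] + d[None, :] - 2 * P
deg = A.sum(1)
for i in range(4):
    for j in range(4):
        if i != j:
            lhs = deg[i] * Om[i, j] + sum(Om[i, k] - Om[j, k]
                                          for k in range(4) if A[i, k])
            assert np.isclose(lhs, 2.0)   # sum rule holds for every ordered pair
```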
The resistance distance is an important quantity~\cite{GhBoSa08}, based on which various graph invariants have been defined and studied. Among these invariants, the Kirchhoff index~\cite{KlRa93} is of vital importance.
For a graph $\mathcal{G}$, its Kirchhoff index is defined as
\begin{equation*}
\mathcal{K}(\mathcal{G})=\sum_{\substack{i\in\mathcal{V}\\j\in\mathcal{V}}}\Omega_{ij}.
\end{equation*}
The Kirchhoff index has found many applications. For example, it can be used as a measure of the overall connectedness of a network~\cite{TiLe10}, of the edge centrality of complex networks~\cite{LiZh18}, and of the robustness of the
first-order consensus algorithm in noisy networks~\cite{PaBa14, QiZhYiLi19,YiZhPa20}.
The first-order and second-order consensus problems have received considerable
attention from the scientific community~\cite{ShCaHu18, ZhXuYiZh22,YiYaZhZhPa22}.
In recent years, several modifications of the Kirchhoff index have been proposed, including multiplicative degree-Kirchhoff index~\cite{ChZh07} and additive degree-Kirchhoff index~\cite{GuFeYu12}. For a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, its multiplicative degree-Kirchhoff index $R^\ast(\mathcal{G})$ and additive degree-Kirchhoff index $R^+(\mathcal{G})$ are defined as
\begin{equation*}
R^\ast(\mathcal{G})=\sum_{\substack{i\in\mathcal{V}\\j\in\mathcal{V}}}(d_id_j)\Omega_{ij}
\end{equation*}
and
\begin{equation*}
R^+(\mathcal{G})=\sum_{\substack{i\in\mathcal{V}\\j\in\mathcal{V}}}(d_i+d_j)\Omega_{ij},
\end{equation*}
respectively.
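With the ordered-pair sums above, the three indices are straightforward to compute numerically. For the complete graph $\mathcal{K}_3$ (every $\Omega_{ij}=2/3$, every degree 2) they evaluate to 4, 16, and 16; a short check (illustrative code, our own helper names):

```python
import numpy as np

# Complete graph K_3: every pairwise resistance is 2/3 and every degree is 2.
A = np.ones((3, 3)) - np.eye(3)
P = np.linalg.pinv(np.diag(A.sum(1)) - A)
d = np.diag(P)
Om = d[:, None] + d[None, :] - 2 * P
deg = A.sum(1)
K = Om.sum()                                      # Kirchhoff index (ordered pairs)
Rmul = (np.outer(deg, deg) * Om).sum()            # multiplicative degree-Kirchhoff index
Radd = ((deg[:, None] + deg[None, :]) * Om).sum() # additive degree-Kirchhoff index
assert np.isclose(K, 4.0)      # 6 ordered pairs, each 2/3
assert np.isclose(Rmul, 16.0)  # each term weighted by d_i d_j = 4
assert np.isclose(Radd, 16.0)  # each term weighted by d_i + d_j = 4
```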
It has been shown that the multiplicative degree-Kirchhoff index $R^\ast(\mathcal{G})$ of a graph $\mathcal{G}$ is equal to $4M$ times the Kemeny constant of the graph~\cite{ChZh07}. The Kemeny constant has been applied to different areas~\cite{Hu14,XuShZhKaZh20}. For example, it can be used as a metric of the efficiency of user navigation through the World Wide Web~\cite{LeLo02}. Also, it was used to measure the efficiency of robotic surveillance in network environments~\cite{PaAgBu15} and to characterize the noise robustness of a class of protocols for formation control~\cite{JaOl19}.
\section{Network Construction and Properties}
The network family studied here is constructed in an iterative way, controlled by two parameters $q$ and $t$ with $q\geqslant1$ and $t \geqslant 0$. Let $\mathcal{G}_q(t)$ be the network after $t$ iterations. Let $\mathcal{K}_q$ ($q\geqslant1$) denote the complete graph with $q$ nodes; for $q=1$, $\mathcal{K}_1$ is simply the graph consisting of a single isolated node. Then, $\mathcal{G}_q(t)$ is constructed as follows.
\begin{definition}
For $t=0$, $\mathcal{G}_q(0)$ is the complete graph $\mathcal{K}_{q+2}$. For $t\geqslant 0$, $\mathcal{G}_q(t+1)$ is obtained from $\mathcal{G}_q(t)$ by performing the following operation (see Fig.~\ref{build}): for every
existing edge of $\mathcal{G}_q(t)$, introduce a copy of the
complete graph $\mathcal{K}_q$ and connect all its nodes to both
end nodes of the edge.
\end{definition}
Figures~\ref{netA} and~\ref{netB} illustrate the networks for two particular cases of $q =1$ and $q =2$, respectively.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Construction.eps}
\caption{Network construction method. The next-iteration network is obtained from the current network by performing the operation on the right-hand side of the arrow for each existing edge.}
\label{build}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.30\textwidth]{Netq1.eps}
\caption{The networks of the first three iterations for $q =1$.}
\label{netA}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{Netq2.eps}
\caption{The networks of the first two iterations for $q =2$.}
\label{netB}
\end{figure}
For network $\mathcal{G}_q(t)$, let $\mathcal{V}_{t}$ denote its node set and $\mathcal{E}_{t}$ its edge set. Let $N_t= |\mathcal{V}_{t}|$ and $M_t=|\mathcal{E}_{t}|$ denote, respectively, the number of nodes and the number of edges in graph $\mathcal{G}_q(t)$. It is easy to verify that, for all $t\geqslant0$,
\begin{gather}
M_t=\left[\frac{(q+1)(q+2)}{2}\right]^{t+1}\label{M},\\
N_t\!=\!\frac{2}{q+3}\left[\frac{(q+1)(q+2)}{2}\right]^{t+1}\!+\!\frac{2(q+2)}{q+3}.\label{N}
\end{gather}
Then, the average degree of graph $\mathcal{G}_q(t)$ is $2M_t/N_t$, which converges to $q + 3$ when $t$ is sufficiently large. Therefore, the graph family $\mathcal{G}_q(t)$ is sparse.
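The iterative construction and the counts in~\eqref{M} and~\eqref{N} can be reproduced with a few lines of code. The function `simplicial_net` below is an illustrative helper of our own (node indices ordered by creation time), not code from the original reference; it also confirms that the initial nodes' degrees grow by a factor $q+1$ per iteration.

```python
import numpy as np
from itertools import combinations

def simplicial_net(q, t):
    # Adjacency matrix of G_q(t): start from K_{q+2}; each iteration attaches
    # a fresh K_q to every existing edge and joins its q nodes to both endpoints.
    n = q + 2
    edges = list(combinations(range(n), 2))
    for _ in range(t):
        for u, v in list(edges):            # snapshot of the current edge set
            new = list(range(n, n + q))
            n += q
            edges += list(combinations(new, 2))
            edges += [(w, u) for w in new] + [(w, v) for w in new]
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

for q in (1, 2, 3):
    for t in (0, 1, 2):
        A = simplicial_net(q, t)
        c = (q + 1) * (q + 2) / 2
        assert np.isclose(A.sum() / 2, c ** (t + 1))                              # M_t
        assert np.isclose(len(A), 2 * c ** (t + 1) / (q + 3) + 2 * (q + 2) / (q + 3))  # N_t
        assert np.allclose(A[:q + 2].sum(1), (q + 1) ** (t + 1))  # degrees of initial nodes
```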
Let $\mathcal{W}_{t+1}=\mathcal{V}_{t+1}\setminus\mathcal{V}_t$ represent the set of new nodes generated at iteration $t+1$, and let $W_{t+1}=|\mathcal{W}_{t+1}|$ stand for the number of these newly generated nodes. Then,
\begin{equation}\label{W}
W_{t+1}=q\left[\frac{(q+1)(q+2)}{2}\right]^{t+1}.
\end{equation}
Let $d^{(t)}_v$ be the degree of a node $v$ in graph $\mathcal{G}_q(t)$, where $t_v$ denotes the iteration at which $v$ was generated. Then,
\begin{gather}\label{d_v}
d^{(t)}_v=(q+1)^{t-t_v+1}.
\end{gather}
By construction, all nodes of $\mathcal{G}_q(t)$ emerging at the same iteration have the same degree. Thus, the number of nodes with degree $(q+1)^{t-t_v+1}$ is $q+2$ for $t_v=0$ and $q\left[\frac{(q+1)(q+2)}{2}\right]^{t_v}$ for $t_v>0$.
By construction, the resulting networks consist of cliques $\mathcal{K}_{q+2}$ or smaller cliques and are thus called simplicial networks, characterized by parameter $q$. They display some remarkable properties observed in most real networks~\cite{Ne03}. They are scale-free, since their node degrees obey a power-law distribution $P(d)\sim d^{-\gamma_q}$ with $\gamma_q=2 +\frac{\ln (q+2)}{\ln (q+1)}-\frac{\ln 2}{\ln (q+1)}$~\cite{WaYiXuZh19}. They are small-world, since their diameters grow logarithmically with the number of nodes and their mean clustering coefficients approach a large constant $\frac{q^2+3q+3 }{q^2+3q+5}$~\cite{WaYiXuZh19}. Moreover, they have a finite spectral dimension $ \frac{2[\ln(q^2+3q+3)-\ln 2]}{\ln (q+1)}$. Thus, many features of simplicial networks are related to the higher-order interactions encoded in parameter $q$. It is expected that other properties, including the resistance distance studied in the following, also depend on group interactions.
\section{ Relations among Various Matrices}
Before determining resistance distances, Kirchhoff index and its invariants in simplicial networks, we first provide some relations among matrices related to simplicial networks. These relations are very useful for deriving the properties of resistance distances, as well as those relevant quantities derived from resistance distances.
Let $A_t$ denote the adjacency matrix of graph $\mathcal{G}_q(t)$, whose entry $A_t(i,j)$ at row $i$ and column $j$ is $A_t(i,j)=1$ if nodes $i$ and $j$ are connected by an edge in $\mathcal{G}_q(t)$, and $A_t (i,j)=0$ otherwise. Let $D_t$ denote the diagonal degree matrix of graph $\mathcal{G}_q(t)$, with the $i$th diagonal element being the degree $d_i^{(t)}$ of node $i$. Let $L_t$ denote the Laplacian matrix of $\mathcal{G}_q(t)$. Then, $L_t=D_t-A_t$. Next, we derive recursion relations for the matrices $A_t$, $D_t$, and $L_t$.
For graph $\mathcal{G}_{q}(t+1)$, let $\alpha$ be the set of old nodes that are already present in $\mathcal{G}_q(t)$, and $\beta$ the set of new nodes generated at iteration $t+1$, namely, those nodes in $\mathcal{W}_{t+1}$. Then, write $A_{t+1}$ in block form as
\begin{equation}
A_{t+1}=
\left
[\begin{array}{cc}
A_{t+1}^{\alpha,\alpha} & A_{t+1}^{\alpha,\beta}\\
A_{t+1}^{\beta,\alpha} & A_{t+1}^{\beta,\beta}
\end{array}
\right],
\end{equation}
where $A^{\alpha,\alpha}_{t+1}=A_t$, $A^{\alpha,\beta}_{t+1}=(A^{\beta,\alpha}_{t+1})^\top$, and $A^{\beta,\beta}_{t+1}$ is a block-diagonal matrix with $W_{t+1}/q$ identical diagonal blocks, $A^{\beta,\beta}_{t+1}={\rm {diag}}(B_q,B_q,\ldots,B_q)$,
with $B_q$ being the adjacency matrix of the complete graph $\mathcal{K}_q$ for $q \geqslant 1$.
In what follows, let $I$ denote the identity matrix of appropriate dimension. Then, the diagonal matrix $D_{t+1}$ is given by
\begin{flalign}
&D_{t+1}\!=\!
\begin{bmatrix}
D^{\alpha,\alpha}_{t+1}\! & \!0\\
0\! &\! (q\!+\!1)I
\end{bmatrix}
\!=\!
\begin{bmatrix}
(q\!+\!1)D_{t}\! &\! 0\\
0 \!& \!(q\!+\!1)I
\end{bmatrix},
\end{flalign}
which is obtained based on the following facts: when the network evolves from iteration $t$ to iteration $t+1$, the degree of each node in set $\alpha$ increases by a factor of $q+1$ as shown in~\eqref{d_v}, while the degree of all nodes in set $\beta$ is equal to $q+1$. Therefore, the Laplacian matrix $L_{t+1}$ satisfies
\begin{small}
\begin{flalign}\label{lg+1}
L_{t+1}=&D_{t+1}-A_{t+1}\notag\\
=&
\begin{bmatrix}
(q\!+\!1)D_{t}\!-\!A_t\! &\! -A^{\alpha,\beta}_{t+1}\\
-A^{\beta,\alpha}_{t+1} \!& \!(q\!+\!1)I\!-\!A^{\beta,\beta}_{t+1}
\end{bmatrix}.
\end{flalign}
\end{small}
In addition to the above-derived recursion relations for adjacency matrix, degree diagonal matrix, and Laplacian matrix, relations for other relevant matrices can also be established.
\begin{lemma}\label{PFProA}
For graph $\mathcal{G}_q(t+1)$, $t\geqslant 0$,
\begin{equation}
A^{\alpha,\beta}_{t+1}A^{\beta,\alpha}_{t+1}=q(D_t+A_t).
\end{equation}
\end{lemma}
The proof of Lemma~\ref{PFProA} is provided in~\ref{AppA}.
\begin{lemma}\label{beauty}
For graph $\mathcal{G}_q(t+1)$, $t\geqslant 0$,
\begin{equation}
A^{\alpha,\beta}_{t+1}\left((q+1)I-A^{\beta,\beta}_{t+1}\right)^{-1}=\frac{1}{2}A^{\alpha,\beta}_{t+1}.
\end{equation}
\end{lemma}
The proof of Lemma~\ref{beauty} is provided in~\ref{AppB}.
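Both identities can be confirmed numerically on small instances by building $\mathcal{G}_q(t)$ and $\mathcal{G}_q(t+1)$ with a shared node ordering and extracting the blocks $A^{\alpha,\beta}_{t+1}$ and $A^{\beta,\beta}_{t+1}$. The generator `simplicial_net` is our own illustrative helper, not code from the paper.

```python
import numpy as np
from itertools import combinations

def simplicial_net(q, t):
    # G_q(t): start from K_{q+2}; each iteration attaches a K_q to every edge.
    n = q + 2
    edges = list(combinations(range(n), 2))
    for _ in range(t):
        for u, v in list(edges):
            new = list(range(n, n + q))
            n += q
            edges += list(combinations(new, 2))
            edges += [(w, u) for w in new] + [(w, v) for w in new]
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

q, t = 2, 1
A1 = simplicial_net(q, t)          # G_q(t): the alpha nodes
A2 = simplicial_net(q, t + 1)      # G_q(t+1), same node ordering
Nt = len(A1)
Aab = A2[:Nt, Nt:]                 # block A^{alpha,beta}
Abb = A2[Nt:, Nt:]                 # block A^{beta,beta}
D1 = np.diag(A1.sum(1))
assert np.allclose(Aab @ Aab.T, q * (D1 + A1))                       # Lemma PFProA
I = np.eye(len(Abb))
assert np.allclose(Aab @ np.linalg.inv((q + 1) * I - Abb), Aab / 2)  # Lemma beauty
```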
\section{ Relations among Effective Resistances}
In this section, we study the relations governing resistance distances. For graph $\mathcal{G}_q(t+1)$, we first establish the evolution rule of resistance distance between any pair of old nodes in $\mathcal{G}_q(t)$. Then, we demonstrate that the resistance distance between two arbitrary nodes in $\mathcal{G}_q(t+1)$ can be exactly determined or expressed in terms of resistance distances of those old node pairs in $\mathcal{G}_q(t)$.
Write $\Omega^{(t)}_{ij}$ to represent the resistance distance between nodes $i$ and $j$ in graph $\mathcal{G}_q(t)$, and write $L^{\dagger}_t$ to denote a \{1\}\textendash inverse of Laplacian matrix $L_t$ for $\mathcal{G}_q(t)$.
\begin{lemma}\label{lemma7}
Let $i,j\in \mathcal{V}_t$ be a pair of old nodes in $\mathcal{G}_{q}(t+1)$ ($t\geqslant0$). Then, $\Omega^{(t)}_{ij}$ satisfies the following recursive relation:
\begin{equation}
\Omega^{(t+1)}_{ij}=\frac{2}{q+2}\Omega^{(t)}_{ij}.
\end{equation}
\end{lemma}
\begin{proof}
Any \{1\}\textendash inverse $L^{\dagger}_{t+1}$ of matrix $L_{t+1}$ can be written as
\begin{equation}
L^{\dagger}_{t+1}=
\begin{bmatrix}
L^{\dagger}_{\alpha,\alpha} & L^{\dagger}_{\alpha,\beta}\\
L^{\dagger}_{\beta,\alpha} & L^{\dagger}_{\beta,\beta}
\end{bmatrix}.
\end{equation}
By~\eqref{lg+1} and Lemmas~\ref{LemmaBlock1inv}, \ref{PFProA}, and~\ref{beauty}, one obtains
\begin{small}
\begin{align}\label{PFoosubmat}
L^{\dagger}_{\alpha,\alpha}=&\left(\!(q\!+\!1)D_t\!-\!A_t\!-\!A^{\alpha,\beta}_{t+1}\!\left((q\!+\!1)I\!-\!A^{\beta,\beta}_{t+1}\right)^{\!-\!1}\!\!\!A^{\beta,\alpha}_{t+1}\right)^{\dagger}\notag\\
=&\left((q+1)D_t-A_t-\frac{q}{2}\left(D_t+A_t\right)\right)^{\dagger}\notag\\
=&\left(\left(\frac{q}{2}+1\right)(D_t-A_t)\right)^{\dagger}\notag\\
=&\frac{2}{q+2}L^{\dagger}_{t}.
\end{align}
\end{small}
By Lemma~\ref{efpro1} and~\eqref{PFoosubmat}, for two nodes~$i,j \in \mathcal{V}_t$, one has
\begin{align}
&\Omega^{(t+1)}_{ij} \nonumber\\
=& L^{\dag}_{\alpha,\alpha}(i,i) + L^{\dag}_{\alpha,\alpha}(j,j)
- L^{\dag}_{\alpha,\alpha}(i,j) - L^{\dag}_{\alpha,\alpha}(j,i)\nonumber\\
=& \frac{2}{q+2} \left( L^{\dag}_{t}(i,i) + L^{\dag}_{t}(j,j)
- L^{\dag}_{t}(i,j) - L^{\dag}_{t}(j,i) \right) \nonumber\\
=& \frac{2}{q+2} \Omega_{ij}^{(t)}\,,
\end{align}
as required.
\end{proof}
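The scaling of Lemma~\ref{lemma7} can be verified numerically by comparing the resistance submatrix of old nodes in $\mathcal{G}_q(t+1)$ with that of $\mathcal{G}_q(t)$; `simplicial_net` and `resist` below are illustrative helpers of our own, with resistances computed via the Laplacian pseudoinverse.

```python
import numpy as np
from itertools import combinations

def simplicial_net(q, t):
    # G_q(t): start from K_{q+2}; each iteration attaches a K_q to every edge.
    n = q + 2
    edges = list(combinations(range(n), 2))
    for _ in range(t):
        for u, v in list(edges):
            new = list(range(n, n + q))
            n += q
            edges += list(combinations(new, 2))
            edges += [(w, u) for w in new] + [(w, v) for w in new]
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

def resist(A):
    # All pairwise effective resistances via the Laplacian pseudoinverse.
    P = np.linalg.pinv(np.diag(A.sum(1)) - A)
    d = np.diag(P)
    return d[:, None] + d[None, :] - 2 * P

for q, t in [(1, 0), (1, 1), (2, 0)]:
    Om_t = resist(simplicial_net(q, t))
    Om_t1 = resist(simplicial_net(q, t + 1))
    Nt = len(Om_t)
    assert np.allclose(Om_t1[:Nt, :Nt], 2 / (q + 2) * Om_t)   # Lemma 7
```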
In addition to the pairs of old nodes, the effective resistance between any other pair of nodes in $\mathcal{G}_q(t+1)$ can be explicitly determined or represented in terms of those for old nodes in $\mathcal{G}_q(t)$.
To this end, we introduce some additional quantities. For any two subsets $X$ and $Y$ of the node set $\mathcal{V}_t$ of graph $\mathcal{G}_q(t)$, define
\begin{equation}
\Omega^{(t)}_{X,Y}=\sum_{i\in X,j\in Y}\Omega^{(t)}_{ij}.
\end{equation}
For a new node $i\in \mathcal{W}_{t+1}$ in $\mathcal{G}_q(t+1)$, let $\Delta_{i}=\{m,n\}$ be the set of two old neighbors of $i$. Define
\begin{equation}
\Omega^{(t+1)}_{\Delta_i}=\Omega^{(t+1)}_{mn}.
\end{equation}
\begin{lemma}\label{lemma8}
For $t\geqslant0$, $i,j\in \mathcal{W}_{t+1}$ that are adjacent to each other, one has
\begin{equation}\label{Omegaijw}
\Omega^{(t+1)}_{ij}=\frac{2}{q+2}.
\end{equation}
\end{lemma}
\begin{proof}
By Lemma~\ref{basic}, for $i\in \mathcal{W}_{t+1}$ and its neighboring node $j\in \mathcal{W}_{t+1}$, one obtains
\begin{equation}\label{Omegaijx}
(q+1)\Omega^{(t+1)}_{ij}+\sum_{k\in{\mathcal{N}(i)}}\left(\Omega^{(t+1)}_{ik}-\Omega^{(t+1)}_{jk}\right)=2.
\end{equation}
By symmetry, for any node $k \in \mathcal{N}(i)$ except $j$, one has
\begin{equation}
\Omega^{(t+1)}_{ik}=\Omega^{(t+1)}_{jk},
\end{equation}
which leads to
\begin{equation}\label{Omegaijy}
\sum_{k\in{\mathcal{N}(i)}}\left(\Omega^{(t+1)}_{ik}-\Omega^{(t+1)}_{jk}\right)=\Omega^{(t+1)}_{ij}.
\end{equation}
With~\eqref{Omegaijx} and~\eqref{Omegaijy}, one obtains~\eqref{Omegaijw}.
\end{proof}
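For $q\geqslant 2$ the first attached $\mathcal{K}_q$ contains adjacent new nodes, so Lemma~\ref{lemma8} can be checked directly; the generator below is our own illustrative helper, with new nodes indexed in creation order.

```python
import numpy as np
from itertools import combinations

def simplicial_net(q, t):
    # G_q(t): start from K_{q+2}; each iteration attaches a K_q to every edge.
    n = q + 2
    edges = list(combinations(range(n), 2))
    for _ in range(t):
        for u, v in list(edges):
            new = list(range(n, n + q))
            n += q
            edges += list(combinations(new, 2))
            edges += [(w, u) for w in new] + [(w, v) for w in new]
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

def resist(A):
    P = np.linalg.pinv(np.diag(A.sum(1)) - A)
    d = np.diag(P)
    return d[:, None] + d[None, :] - 2 * P

for q in (2, 3):
    A = simplicial_net(q, 1)
    Om = resist(A)
    i, j = q + 2, q + 3          # first two members of the first attached K_q
    assert A[i, j] == 1.0        # they are adjacent
    assert np.isclose(Om[i, j], 2 / (q + 2))   # Lemma 8
```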
\begin{lemma}\label{lemma9}
For a node $i\in \mathcal{W}_{t+1}$ with $t\geqslant0$, one has
\begin{equation}
\Omega^{(t+1)}_{i,\Delta_{i}}=\frac{3}{q+2}+\frac{1}{2}\Omega^{(t+1)}_{\Delta_{i}}.
\end{equation}
\end{lemma}
\begin{proof}
By Lemma~\ref{basic}, for $i\in \mathcal{W}_{t+1}$ and its two old neighbors $m$ and $n$ belonging to $\mathcal{V}_{t}$ and forming set $\Delta_{i}=\{m,n\}$, one has
\begin{equation}\label{eq91}
(q+1)\Omega^{(t+1)}_{im}+\sum_{k\in{\mathcal{N}(i)}}\left(\Omega^{(t+1)}_{ik}-\Omega^{(t+1)}_{mk}\right)=2
\end{equation}
and
\begin{equation}\label{eq92}
(q+1)\Omega^{(t+1)}_{in}+\sum_{k\in{\mathcal{N}(i)}}\left(\Omega^{(t+1)}_{ik}-\Omega^{(t+1)}_{nk}\right)=2.
\end{equation}
By symmetry, for any node $k\in\mathcal{N}(i)$ except $m$ and $n$,
$\Omega^{(t+1)}_{mk}=\Omega^{(t+1)}_{mi}$ holds, which implies that
\begin{equation}\label{eq93}
\sum_{k\in{\mathcal{N}(i)}}\Omega^{(t+1)}_{mk}=(q-1)\Omega^{(t+1)}_{im}+\Omega^{(t+1)}_{\Delta_{i}}.
\end{equation}
On the other hand, using Lemma~\ref{lemma8}, one obtains
\begin{equation}\label{eq94}
\sum_{k\in{\mathcal{N}(i)}}\Omega^{(t+1)}_{ik}=\frac{2(q-1)}{q+2}+\Omega^{(t+1)}_{i,\Delta_{i}}.
\end{equation}
With~\eqref{eq91},~\eqref{eq93}, and~\eqref{eq94}, one has
\begin{equation}
2\Omega^{(t+1)}_{im}+\Omega^{(t+1)}_{i,\Delta_{i}}-\Omega^{(t+1)}_{\Delta_{i}}=\frac{6}{q+2}.
\end{equation}
Analogously,
\begin{equation}
2\Omega^{(t+1)}_{in}+\Omega^{(t+1)}_{i,\Delta_{i}}-\Omega^{(t+1)}_{\Delta_{i}}=\frac{6}{q+2}.
\end{equation}
The above two equations show that $\Omega^{(t+1)}_{im}=\Omega^{(t+1)}_{in}$, which is also evident from the symmetry of the network construction.
Summing these two equations gives
\begin{equation}
4\Omega^{(t+1)}_{i,\Delta_{i}}-2\Omega^{(t+1)}_{\Delta_{i}}=\frac{12}{q+2}.
\end{equation}
That is
\begin{equation}
\Omega^{(t+1)}_{i,\Delta_{i}}=\frac{3}{q+2}+\frac{1}{2}\Omega^{(t+1)}_{\Delta_{i}},
\end{equation}
as required.
\end{proof}
\begin{lemma}\label{lemma10}
For $t\geqslant0$ and two nodes $i$ and $j$, with $i\in\mathcal{W}_{t+1}$ and $j\in\mathcal{V}_t$, one has
\begin{equation}\label{lemma10eq}
\Omega^{(t+1)}_{ij}=\frac{1}{2}\left(\frac{3}{q+2}-\frac{1}{2}\Omega^{(t+1)}_{\Delta_{i}}+\Omega^{(t+1)}_{\Delta_{i},j}\right).
\end{equation}
\end{lemma}
\begin{proof}
For the pair of nodes $i\in\mathcal{W}_{t+1}$ and $j\in\mathcal{V}_t$, by Lemma~\ref{basic}, one obtains
\begin{equation}\label{eq101}
(q+1)\Omega^{(t+1)}_{ij}+\sum_{k\in{\mathcal{N}(i)}}\left(\Omega^{(t+1)}_{ik}-\Omega^{(t+1)}_{jk}\right)=2.
\end{equation}
Considering the symmetry, for any node $k\in\mathcal{N}(i)$ except $m$ and $n$, one has
$\Omega^{(t+1)}_{jk}=\Omega^{(t+1)}_{ji}$, which yields
\begin{equation}\label{eq102}
\sum_{k\in{\mathcal{N}(i)}}\Omega^{(t+1)}_{jk}=(q-1)\Omega^{(t+1)}_{ji}+\Omega^{(t+1)}_{j,\Delta_{i}}.
\end{equation}
Moreover, according to Lemma~\ref{lemma8}, one obtains
\begin{equation}\label{eq103}
\sum_{k\in{\mathcal{N}(i)}}\Omega^{(t+1)}_{ik}=\frac{2(q-1)}{q+2}+\Omega^{(t+1)}_{i,\Delta_{i}}.
\end{equation}
Combining~\eqref{eq101},~\eqref{eq102} and~\eqref{eq103} yields
\begin{equation}
2\Omega^{(t+1)}_{ij}+\Omega^{(t+1)}_{i,\Delta_{i}}-\Omega^{(t+1)}_{j,\Delta_{i}}=\frac{6}{q+2},
\end{equation}
which, together with Lemma~\ref{lemma9}, gives
\begin{equation}
2\Omega^{(t+1)}_{ij}+\frac{1}{2}\Omega^{(t+1)}_{\Delta_{i}}-\Omega^{(t+1)}_{j,\Delta_{i}}=\frac{3}{q+2},
\end{equation}
a formula equivalent to~\eqref{lemma10eq}.
\end{proof}
\begin{lemma}\label{lemma11}
For $t\geqslant0$, a pair of nonadjacent nodes $i$ and $ j$, both in $\mathcal{W}_{t+1}$, satisfy
\begin{equation} \Omega^{(t+1)}_{ij}=\frac{3}{q+2}-\frac{1}{4}\left(\Omega^{(t+1)}_{\Delta_{i}}+\Omega^{(t+1)}_{\Delta_{j}}\right)+\frac{1}{4}\Omega^{(t+1)}_{\Delta_{i},\Delta_{j}}.
\end{equation}
\end{lemma}
\begin{proof}
For two nonadjacent nodes $i$ and $j$ in $\mathcal{W}_{t+1}$, by Lemma~\ref{basic}, one has
\begin{equation}\label{eq111}
(q+1)\Omega^{(t+1)}_{ij}+\sum_{k\in{\mathcal{N}(i)}}\left(\Omega^{(t+1)}_{ik}-\Omega^{(t+1)}_{jk}\right)=2.
\end{equation}
Following the same process as in the proof of Lemma~\ref{lemma10}, one obtains
\begin{flalign}
\sum_{k\in{\mathcal{N}(i)}}\Omega^{(t+1)}_{ik}&=\frac{2(q-1)}{q+2}+\Omega^{(t+1)}_{i,\Delta_{i}},\label{eq112}\\
\sum_{k\in{\mathcal{N}(i)}}\Omega^{(t+1)}_{jk}&=(q-1)\Omega^{(t+1)}_{ji}+\Omega^{(t+1)}_{j,\Delta_{i}}.\label{eq113}
\end{flalign}
Combining~\eqref{eq111},~\eqref{eq112} and~\eqref{eq113} yields
\begin{equation}
2\Omega^{(t+1)}_{ij}+\Omega^{(t+1)}_{i,\Delta_{i}}-\Omega^{(t+1)}_{j,\Delta_{i}}=\frac{6}{q+2}\,,
\end{equation}
which, together with Lemmas~\ref{lemma9} and~\ref{lemma10}, leads to
\begin{small}
\begin{align}
&\Omega^{(t+1)}_{ij}=\frac{1}{2}\left(\frac{6}{q+2}-\Omega^{(t+1)}_{i,\Delta_{i}}+\Omega^{(t+1)}_{j,\Delta_{i}}\right)\notag\\
=&\frac{1}{2}\left(\frac{3}{q\!+\!2}\!-\!\frac{1}{2}\Omega^{(t\!+\!1)}_{\Delta_{i}}\!\!+\!\!\sum_{k\in\Delta_{i}}\!\frac{1}{2}\left(\frac{3}{q\!+\!2}\!-\!\frac{1}{2}\Omega^{(t\!+\!1)}_{\Delta_{j}}\!\!+\!\Omega^{(t\!+\!1)}_{\Delta_{j},k}\right)\right)\notag\\
=&\frac{1}{2}\left(\frac{3}{q\!+\!2}\!-\!\frac{1}{2}\Omega^{(t\!+\!1)}_{\Delta_{i}}\!+\!\left(\frac{3}{q\!+\!2}\!-\!\frac{1}{2}\Omega^{(t\!+\!1)}_{\Delta_{j}}\!+\!\frac{1}{2}\Omega^{(t\!+\!1)}_{\Delta_{i},\Delta_{j}}\right)\right)\notag\\
=&\frac{3}{q+2}-\frac{1}{4}\left(\Omega^{(t+1)}_{\Delta_{i}}+\Omega^{(t+1)}_{\Delta_{j}}\right)+\frac{1}{4}\Omega^{(t+1)}_{\Delta_{i},\Delta_{j}}.
\end{align}
\end{small}
Thus, the result follows.
\end{proof}
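Lemmas~\ref{lemma7}--\ref{lemma11} can be spot-checked on $\mathcal{G}_1(1)$, where all resistances follow from series-parallel reductions ($\Omega^{(1)}_{01}=4/9$, $\Omega^{(1)}_{30}=11/18$, $\Omega^{(1)}_{32}=5/6$, $\Omega^{(1)}_{34}=10/9$). The node labeling below reflects our illustrative generator, in which new nodes appear in the order of the edges they are attached to.

```python
import numpy as np
from itertools import combinations

def simplicial_net(q, t):
    # G_q(t): start from K_{q+2}; each iteration attaches a K_q to every edge.
    n = q + 2
    edges = list(combinations(range(n), 2))
    for _ in range(t):
        for u, v in list(edges):
            new = list(range(n, n + q))
            n += q
            edges += list(combinations(new, 2))
            edges += [(w, u) for w in new] + [(w, v) for w in new]
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

def resist(A):
    P = np.linalg.pinv(np.diag(A.sum(1)) - A)
    d = np.diag(P)
    return d[:, None] + d[None, :] - 2 * P

A = simplicial_net(1, 1)   # triangle {0,1,2}; node 3 sits on edge (0,1),
Om = resist(A)             # node 4 on (0,2), node 5 on (1,2)
assert np.isclose(Om[0, 1], 4 / 9)    # old pair (Lemma 7)
assert np.isclose(Om[3, 0], 11 / 18)  # Lemma 9: Om[3,0] + Om[3,1] = 11/9
assert np.isclose(Om[3, 2], 5 / 6)    # Lemma 10 with Delta_3 = {0, 1}
assert np.isclose(Om[3, 4], 10 / 9)   # Lemma 11, nonadjacent new nodes
```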
\section{Exact Solutions to Various Kirchhoff Indices}
In this section, we determine the multiplicative degree-Kirchhoff index, additive degree-Kirchhoff index, and Kirchhoff index for graph $\mathcal{G}_q(t)$. To do so, we define three more resistance-distance-based quantities for
graph $\mathcal{G}_q(t)$. For two subsets $X$ and $Y$ of the node set $\mathcal{V}_t$ of $\mathcal{G}_q(t)$, define
\begin{align}
&R_{X,Y}(t)=\sum_{i\in X,j\in Y}\Omega^{(t)}_{ij},\\
&R^\ast_{X,Y}(t)=\sum_{i\in X,j\in Y}(d_id_j)\Omega^{(t)}_{ij},\\
&R^+_{X,Y}(t)=\sum_{i\in X,j\in Y}(d_i+d_j)\Omega^{(t)}_{ij}.
\end{align}
Then, $R_{\mathcal{V}_t,\mathcal{V}_t}(t)$, $R^\ast_{\mathcal{V}_t,\mathcal{V}_t}(t)$ and $R^+_{\mathcal{V}_t,\mathcal{V}_t}(t)$ are, respectively, the Kirchhoff index, multiplicative degree-Kirchhoff index, and additive degree-Kirchhoff index of graph $\mathcal{G}_q(t)$.
For $t=0$, it is easy to derive $R_{\mathcal{V}_{0},\mathcal{V}_{0}}(0)=2(q+1)$, $R^{\ast}_{\mathcal{V}_0,\mathcal{V}_0}(0)=2(q+1)^3$, and $R^{+}_{\mathcal{V}_{0},\mathcal{V}_{0}}(0)=4(q+1)^2$.
To obtain explicit formulas for $R_{\mathcal{V}_t,\mathcal{V}_t}(t)$, $R^\ast_{\mathcal{V}_t,\mathcal{V}_t}(t)$, and $R^+_{\mathcal{V}_t,\mathcal{V}_t}(t)$ for all $t\geq 0$, some intermediary results are needed.
\begin{lemma}\label{lemma12}
For graph $\mathcal{G}_q(t+1)$ with $t\geqslant0$,
\begin{equation}
\sum_{i\in \mathcal{W}_{t+1}}\Omega^{(t+1)}_{\Delta_{i}}=\frac{2q(N_t-1)}{q+2}.
\end{equation}
\end{lemma}
\begin{proof}
Note that every edge in $\mathcal{G}_q(t)$ creates exactly $q$ new nodes of $\mathcal{G}_q(t+1)$. Summing $\Omega^{(t+1)}_{\Delta_{i}}$ over $\Delta_{i}$ of all nodes $i\in \mathcal{W}_{t+1}$ is thus equivalent to summing $\Omega^{(t+1)}_{xy}$ $q$ times over all edges $(x,y)$ belonging to $\mathcal{E}_t$. Then, by Lemmas~\ref{Foster} and~\ref{lemma7}, one obtains
\begin{equation}
\begin{split}
\sum_{i\in \mathcal{W}_{t+1}}\Omega^{(t+1)}_{\Delta_{i}}&=q\sum_{(x,y)\in \mathcal{E}_t}\Omega^{(t+1)}_{xy}\\
&=q\sum_{(x,y)\in \mathcal{E}_t}\frac{2}{q+2}\Omega^{(t)}_{xy}\\
&=\frac{2q(N_t-1)}{q+2}.
\end{split}
\end{equation}
This completes the proof.
\end{proof}
\begin{lemma}\label{lemma13}
For $t\geqslant0$ and a set $Y\subseteq\mathcal{V}_{t}$, one has
\begin{equation}
\sum_{i\in \mathcal{W}_{t+1}}\Omega^{(t+1)}_{\Delta_{i},Y}=\sum_{x\in \mathcal{V}_{t}}qd_x^{(t)}\Omega^{(t+1)}_{x,Y}.
\end{equation}
\end{lemma}
\begin{proof}
For an arbitrary node $x\in\mathcal{V}_t$, there are $d^{(t+1)}_x-d^{(t)}_x=qd^{(t)}_x$ new nodes in $\mathcal{W}_{t+1}$ that are neighbors of $x$. Thus, $\Omega^{(t+1)}_{x,Y}$ is summed $qd^{(t)}_x$ times for each node $x\in \mathcal{V}_{t}$, which gives the claimed identity.
\end{proof}
Now, we are in a position to prove the main results.
\begin{theorem}\label{lemma14}
For graph $\mathcal{G}_q(t)$ with $t\geqslant0$, its multiplicative degree-Kirchhoff index is
\begin{small}
\begin{equation}\label{mul}
\begin{aligned}
&R^{\ast}_{\mathcal{V}_t,\mathcal{V}_t}(t)\\
=&-(q+4)(q+1)^2\left(\frac{(q+2)(q+1)^2}{2}\right)^t\\
&+\frac{2(q+2)(q+1)^2}{q+3}\left(\frac{(q+1)(q+2)}{2}\right)^t\\
&+\frac{(q+2)(3q+7)(q+1)^2}{q+3}\left(\frac{(q+1)(q+2)}{2}\right)^{2t}.
\end{aligned}
\end{equation}
\end{small}
\end{theorem}
The proof of Theorem~\ref{lemma14} is provided in~\ref{AppC}.
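As an independent numerical check of Theorem~\ref{lemma14}, the closed form~\eqref{mul} can be compared on small instances with a brute-force computation via the Laplacian pseudoinverse; `simplicial_net`, `resist`, and `Rstar` are our own illustrative helpers.

```python
import numpy as np
from itertools import combinations

def simplicial_net(q, t):
    # G_q(t): start from K_{q+2}; each iteration attaches a K_q to every edge.
    n = q + 2
    edges = list(combinations(range(n), 2))
    for _ in range(t):
        for u, v in list(edges):
            new = list(range(n, n + q))
            n += q
            edges += list(combinations(new, 2))
            edges += [(w, u) for w in new] + [(w, v) for w in new]
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

def resist(A):
    P = np.linalg.pinv(np.diag(A.sum(1)) - A)
    d = np.diag(P)
    return d[:, None] + d[None, :] - 2 * P

def Rstar(A):
    # Multiplicative degree-Kirchhoff index over ordered pairs.
    Om = resist(A)
    deg = A.sum(1)
    return (np.outer(deg, deg) * Om).sum()

def Rstar_formula(q, t):
    c = (q + 1) * (q + 2) / 2
    return (-(q + 4) * (q + 1) ** 2 * ((q + 2) * (q + 1) ** 2 / 2) ** t
            + 2 * (q + 2) * (q + 1) ** 2 / (q + 3) * c ** t
            + (q + 2) * (3 * q + 7) * (q + 1) ** 2 / (q + 3) * c ** (2 * t))

for q, t in [(1, 0), (1, 1), (2, 0)]:
    assert np.isclose(Rstar(simplicial_net(q, t)), Rstar_formula(q, t))
```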
\begin{theorem}\label{lemma15}
For graph $\mathcal{G}_q(t)$ with $t\geqslant0$, its additive degree-Kirchhoff index is
\begin{small}
\begin{align}\label{eqxxx}
&R^{+}_{\mathcal{V}_t,\mathcal{V}_t}(t)\notag\\
=&\frac{4(q+2)(q+1)^2}{(q+3)^2}\left(\frac{(q+1)(q+2)}{2}\right)^t\notag\\ &+\frac{2(3q\!+\!7)(q\!+\!1)^2(q^3\!+\!8q^2\!+\!22q\!+\!20)}{(q\!+\!3)^2(q^2\!+\!5q\!+\!8)}\left(\frac{(q\!+\!1)(q\!+\!2)}{2}\right)^{2t}\notag\\
&-\frac{2(q+4)(q+1)^2}{q+3}\left(\frac{(q+2)(q+1)^2}{2}\right)^t\notag\\
&+\frac{2(q+1)^2(q^2+9q+20)}{(q+3)(q^2+5q+8)}\left(q+1\right)^t +\frac{2(q+1)^2}{(q+3)^2}.
\end{align}
\end{small}
\end{theorem}
The proof of Theorem~\ref{lemma15} is provided in~\ref{AppD}.
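Theorem~\ref{lemma15} can likewise be spot-checked against a direct pseudoinverse computation on small instances (for $q=1$ the closed form gives 16 at $t=0$ and 122 at $t=1$); the helper names below are our own illustrative choices.

```python
import numpy as np
from itertools import combinations

def simplicial_net(q, t):
    # G_q(t): start from K_{q+2}; each iteration attaches a K_q to every edge.
    n = q + 2
    edges = list(combinations(range(n), 2))
    for _ in range(t):
        for u, v in list(edges):
            new = list(range(n, n + q))
            n += q
            edges += list(combinations(new, 2))
            edges += [(w, u) for w in new] + [(w, v) for w in new]
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

def Rplus(A):
    # Additive degree-Kirchhoff index over ordered pairs.
    P = np.linalg.pinv(np.diag(A.sum(1)) - A)
    d = np.diag(P)
    Om = d[:, None] + d[None, :] - 2 * P
    deg = A.sum(1)
    return ((deg[:, None] + deg[None, :]) * Om).sum()

def Rplus_formula(q, t):
    c = (q + 1) * (q + 2) / 2
    return (4 * (q + 2) * (q + 1) ** 2 / (q + 3) ** 2 * c ** t
            + 2 * (3 * q + 7) * (q + 1) ** 2 * (q ** 3 + 8 * q ** 2 + 22 * q + 20)
              / ((q + 3) ** 2 * (q ** 2 + 5 * q + 8)) * c ** (2 * t)
            - 2 * (q + 4) * (q + 1) ** 2 / (q + 3) * ((q + 2) * (q + 1) ** 2 / 2) ** t
            + 2 * (q + 1) ** 2 * (q ** 2 + 9 * q + 20)
              / ((q + 3) * (q ** 2 + 5 * q + 8)) * (q + 1) ** t
            + 2 * (q + 1) ** 2 / (q + 3) ** 2)

for q, t in [(1, 0), (1, 1)]:
    assert np.isclose(Rplus(simplicial_net(q, t)), Rplus_formula(q, t))
```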
\begin{theorem}\label{lemma16}
For graph $\mathcal{G}_q(t)$ with $t\geqslant0$, its Kirchhoff index is
\begin{small}
\begin{align}\label{eq160}
&R_{\mathcal{V}_t,\mathcal{V}_t}(t)\notag\\
=&\frac{2(q+2)(q^3+8q^2+15q+8)}{(q+3)^2(q^2+5q+8)}\left(\frac{(q+1)(q+2)}{2}\right)^t\notag\\
&+\frac{(q+2)(q+4)(3q+7)(q+1)^2}{(q+3)^2(q^2+5q+8)}\left(\frac{(q+1)(q+2)}{2}\right)^{2t}\notag\\
&-\frac{(q+4)(q+1)^2}{(q+3)^2}\left(\frac{(q+2)(q+1)^2}{2}\right)^{t}\notag\\
&+\frac{2(q+1)^2(q^2+9q+20)}{(q+3)^2(q^2+5q+8)}\left(q+1\right)^t\notag\\
&+\frac{4(q+1)(q+4)^2}{(q+3)^2(q^2+5q+8)}\left(\frac{2}{q+2}\right)^t -\frac{2(q+1)}{(q+3)^2}.
\end{align}
\end{small}
\end{theorem}
The proof of Theorem~\ref{lemma16} is provided in~\ref{AppE}.
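The closed form~\eqref{eq160} can also be confirmed numerically on small instances (for $q=1$ it evaluates to 4 at $t=0$ and $65/3$ at $t=1$); the generator below is our own illustrative helper.

```python
import numpy as np
from itertools import combinations

def simplicial_net(q, t):
    # G_q(t): start from K_{q+2}; each iteration attaches a K_q to every edge.
    n = q + 2
    edges = list(combinations(range(n), 2))
    for _ in range(t):
        for u, v in list(edges):
            new = list(range(n, n + q))
            n += q
            edges += list(combinations(new, 2))
            edges += [(w, u) for w in new] + [(w, v) for w in new]
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

def resist(A):
    P = np.linalg.pinv(np.diag(A.sum(1)) - A)
    d = np.diag(P)
    return d[:, None] + d[None, :] - 2 * P

def kirchhoff_formula(q, t):
    c = (q + 1) * (q + 2) / 2
    s = q * q + 5 * q + 8
    return (2 * (q + 2) * (q ** 3 + 8 * q ** 2 + 15 * q + 8) / ((q + 3) ** 2 * s) * c ** t
            + (q + 2) * (q + 4) * (3 * q + 7) * (q + 1) ** 2 / ((q + 3) ** 2 * s) * c ** (2 * t)
            - (q + 4) * (q + 1) ** 2 / (q + 3) ** 2 * ((q + 2) * (q + 1) ** 2 / 2) ** t
            + 2 * (q + 1) ** 2 * (q ** 2 + 9 * q + 20) / ((q + 3) ** 2 * s) * (q + 1) ** t
            + 4 * (q + 1) * (q + 4) ** 2 / ((q + 3) ** 2 * s) * (2 / (q + 2)) ** t
            - 2 * (q + 1) / (q + 3) ** 2)

for q, t in [(1, 0), (1, 1)]:
    # resist(...).sum() is the ordered-pair Kirchhoff index (diagonal is zero)
    assert np.isclose(resist(simplicial_net(q, t)).sum(), kirchhoff_formula(q, t))
```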
Using the obtained Kirchhoff index, the average resistance distance $\langle\bar{\Omega}_q(t) \rangle$ for graph $\mathcal{G}_q(t)$ can be easily determined as $\langle\bar{\Omega}_q(t) \rangle=\frac{R_{\mathcal{V}_t,\mathcal{V}_t}(t)}{N_t(N_t-1)}$.
\begin{theorem}\label{th1}
For graph $\mathcal{G}_q(t)$ with $t\geqslant0$, its average resistance distance $\langle\bar{\Omega}_q(t) \rangle$ is
\begin{small}
\begin{align}\label{eq170}
&\langle\bar{\Omega}_q(t) \rangle=\notag\\
&\frac{(q\!+\!3)^2}{(q\!+\!1)^2(q\!+\!2)^2\!\left(\left(\frac{(q\!+\!1)(q\!+\!2)}{2}\right)^t\!\!\!+\!\frac{2}{q\!+\!1}\right)\!\!\left(\left(\frac{(q\!+\!1)(q\!+\!2)}{2}\right)^t\!\!\!+\!\frac{1}{q\!+\!2}\right)}\notag\\
&\bigg(\frac{2(q+2)(q^3+8q^2+15q+8)}{(q+3)^2(q^2+5q+8)}\left(\frac{(q+1)(q+2)}{2}\right)^t\notag\\
&+\frac{(q+2)(q+4)(3q+7)(q+1)^2}{(q+3)^2(q^2+5q+8)}\left(\frac{(q+1)(q+2)}{2}\right)^{2t}\notag\\
&-\frac{(q+4)(q+1)^2}{(q+3)^2}\left(\frac{(q+2)(q+1)^2}{2}\right)^{t}\notag\\
&+\frac{2(q+1)^2(q^2+9q+20)}{(q+3)^2(q^2+5q+8)}\left(q+1\right)^t\notag\\
&+\frac{4(q+1)(q+4)^2}{(q+3)^2(q^2+5q+8)}\left(\frac{2}{q+2}\right)^t -\frac{2(q+1)}{(q+3)^2}\bigg).
\end{align}
\end{small}
\end{theorem}
In the limit of large $t$ ($t\rightarrow\infty$),~\eqref{eq170} converges to a small $q$-dependent constant
\begin{equation}\label{eq170a}
\lim_{t\rightarrow\infty} \langle\bar{\Omega}_q(t) \rangle= \langle\bar{\Omega}_q\rangle=\frac{(q+4)(3q+7)}{(q+2)(q^2+5q+8)},
\end{equation}
demonstrating the impact of network geometry and topology.
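The convergence to~\eqref{eq170a} can be checked by evaluating the closed form of the Kirchhoff index at a large $t$ and dividing by $N_t(N_t-1)$; the code below (our own illustrative sketch) confirms the limit for several values of $q$.

```python
def kirchhoff_formula(q, t):
    # Closed form of the (ordered-pair) Kirchhoff index of G_q(t).
    c = (q + 1) * (q + 2) / 2
    s = q * q + 5 * q + 8
    return (2 * (q + 2) * (q ** 3 + 8 * q ** 2 + 15 * q + 8) / ((q + 3) ** 2 * s) * c ** t
            + (q + 2) * (q + 4) * (3 * q + 7) * (q + 1) ** 2 / ((q + 3) ** 2 * s) * c ** (2 * t)
            - (q + 4) * (q + 1) ** 2 / (q + 3) ** 2 * ((q + 2) * (q + 1) ** 2 / 2) ** t
            + 2 * (q + 1) ** 2 * (q ** 2 + 9 * q + 20) / ((q + 3) ** 2 * s) * (q + 1) ** t
            + 4 * (q + 1) * (q + 4) ** 2 / ((q + 3) ** 2 * s) * (2 / (q + 2)) ** t
            - 2 * (q + 1) / (q + 3) ** 2)

def avg_resistance(q, t):
    c = (q + 1) * (q + 2) / 2
    N = 2 * c ** (t + 1) / (q + 3) + 2 * (q + 2) / (q + 3)
    return kirchhoff_formula(q, t) / (N * (N - 1))

for q in (1, 2, 3):
    lim = (q + 4) * (3 * q + 7) / ((q + 2) * (q * q + 5 * q + 8))
    assert abs(avg_resistance(q, 50) - lim) < 1e-6   # matches the stated limit
```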
In Figure~\ref{EffetRes}, we plot the average resistance distance $\langle\bar{\Omega}_q(t) \rangle$ as a function of $q$ and $t$ according to~\eqref{eq170}. The figure shows that for all $q \geqslant 1$ and $t \geqslant 0$, the average resistance distance $\langle\bar{\Omega}_q(t) \rangle$ remains small. In particular, for massive graphs ($t \to \infty$), the average resistance distance expressed in~\eqref{eq170} tends to the small constant $\langle\bar{\Omega}_q\rangle$ given by~\eqref{eq170a}, which
decreases with increasing parameter $q$.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth,trim=0 0 0 0]{MeanResDist.eps}
\caption{ The average resistance distance $\langle\bar{\Omega} _q(t) \rangle$ of network $\mathcal{G}_q(t)$ for various $q$ and $t$.}\label{EffetRes}
\end{figure}
\section{Conclusion}
Empirical studies have shown that the structures of many realistic networks, such as biological and social networks, involve non-pairwise interactions, that is, simultaneous higher-order interactions among groups of more than two nodes. Such structure has a significant impact on various dynamical processes taking place on networks. In order to capture higher-order interactions, several models have been proposed, for many of which simplices are the elementary building blocks.
In this paper, we presented an extensive analytical study of resistance distances for a family of iteratively grown networks. This network family is built from simplexes representing higher-order interactions among nodes and is characterized by a tunable parameter $q$, which determines many structural properties, such as the power-law degree distribution, the clustering coefficient, and the spectral dimension. We obtained analytical or exact expressions for the two-node resistance distance, the multiplicative degree-Kirchhoff index, the additive degree-Kirchhoff index, the Kirchhoff index, and the average resistance distance. Our explicit results show that the average resistance distance converges to a $q$-dependent constant, implying that the simplicial structure has a strong influence on the structural robustness. Since diverse dynamical processes (for example, noisy consensus~\cite{YiZhShCh17, QiZhYiLi19}) are determined by the average resistance distance, we conclude that higher-order organization in network structure greatly affects network dynamics.
Note that the network $\mathcal{G}_q(t)$ under study can be considered as a pure $(q+1)$-dimensional simplicial complex, by regarding each $(q+1)$-simplex and all its faces as the constituent simplices. Thus, the properties and higher-order interactions of $\mathcal{G}_q(t)$ can be characterized by a corresponding weighted graph $\mathcal{G}'_q(t)$~\cite{CoBi16,CoBi17,CaBaCeFa20}, which is obtained from $\mathcal{G}_q(t)$ by assigning an appropriate weight to each edge in $\mathcal{G}_q(t)$. In the future, we will study various dynamical processes running on $\mathcal{G}_q(t)$, in order to explore the effects of higher-order interactions on dynamics~\cite{BaCeLaLqLuPaYoPe20}.
\section*{Acknowledgements}
This work was supported by the National
Natural Science Foundation of China (Nos. 61872093 and U20B2051), the National
Key R \& D Program of China (No. 2018YFB1305104), Shanghai Municipal Science and Technology
Major Project (Nos. 2018SHZDZX01 and 2021SHZDZX03), ZJ Lab, and Shanghai Center for Brain Science and Brain-Inspired Technology. Mingzhe Zhu was also supported by Fudan's Undergraduate Research
Opportunities Program (FDUROP) under Grant No. 20001.
\section*{Data Availability Statement}
No new data were generated or analysed in support of this research.
\section*{Acknowledgments}
Yu.M. thanks A.Migdal, A.Morozov, G.Semenoff and N.Weiss for e-mail
correspondences.
\section*{Added note}
When this paper was being prepared for publication, there appeared more
papers~\cite{new} on the Kazakov--Migdal model. The results by Gross
agree with ours for the quadratic potential while the Monte--Carlo study
by Gocksch and Shen seems to indicate that for $N=2$ the `large-$N$' phase
transition coincides with the Higgs one.
\vspace*{\fill}\pagebreak
\section{Introduction}
\subsection{The Schur graph and its boundary}
Let $\mathbb{S}_n$, $n\in\mathbb{Z}_{\ge0}$, denote the finite set
consisting of all strict partitions (that is, partitions
without equal parts) of $n$.\footnote{The set $\mathbb{S}_0$ consists of
the empty partition $\varnothing$.} Strict partitions are represented by
shifted Young diagrams \cite[Ch. I, \S1, Example 9]{Macdonald1995}.
The Schur graph
is the graded graph consisting of
all shifted Young diagrams (with edge multiplicities given
by (\ref{mult}) below).
As can be proved exactly as in \cite{Kerov1998}
using the results of \cite{IvanovNewYork3517-3530},\footnote{Another
proof can be found in the paper \cite{Nazarov1992}.}
the Martin boundary of the
Schur graph can be identified with the infinite-dimensional ordered simplex
\begin{equation*}
\Omega_+:=\left\{ \mathsf{x}=(\mathsf{x}_1,\mathsf{x}_2,\dots)\colon \mathsf{x}_1\ge\mathsf{x}_2\ge\dots\ge0,\ \sum_{i}\mathsf{x}_i\le1 \right\}.
\end{equation*}
The simplex $\Omega_+$ viewed as a subspace
of the infinite-dimensional cube $\left[ 0,1 \right]^{\infty}$
(which in turn is equipped with the product topology)
is a compact, metrizable and separable space.
\subsection{Projective representations of symmetric groups}\label{s0.2}
Each $\mathbb{S}_n$, $n\ge1$, may be regarded as a projective dual
object to the symmetric group $\mathfrak{S}_n$
in the sense that $\mathbb{S}_n$ parametrizes the
irreducible projective representations of $\mathfrak{S}_n$ \cite{Hoffman1992,Schur1911}.
The simplex $\Omega_+$ can be viewed as a kind of projective dual to
the infinite symmetric group $\mathfrak{S}_\infty$.
That is, the points of $\Omega_+$ parametrize the indecomposable normalized
projective characters of the
group $\mathfrak{S}_\infty$ \cite{Nazarov1992}.
The theory of projective representations of symmetric groups is in many
aspects similar to the theory of ordinary representations.
Let us indicate some of them:
\begin{itemize}
\item The (ordinary) dual object to $\mathfrak{S}_n$
is the set of all ordinary (i.e., not necessarily strict)
partitions of~$n$;
\item The role of $\Omega_+$ in the theory
of ordinary representations
is played by the {\em{}Thoma simplex\/} $\Omega$ that
consists of couples
$(\omega;\omega')\in\left[ 0,1 \right]^{\infty}\times\left[ 0,1 \right]^{\infty}$
satisfying the following conditions:
\begin{equation*}
\omega_1\ge\omega_2\ge\dots\ge0,\qquad
\omega'_1\ge\omega'_2\ge\dots\ge0,\qquad
\sum_{i}\omega_i+
\sum_{j}\omega'_j\le1.
\end{equation*}
The points of $\Omega$ parametrize the indecomposable normalized
characters of $\mathfrak{S}_\infty$ \cite{Thoma1964}.
\item The role that Schur's $\mathcal{Q}$-functions play in the theory of projective
representations \cite{Hoffman1992} is played in the ordinary theory by ordinary Schur functions.
\end{itemize}
There is a natural embedding
of $\Omega_+$ into $\Omega$ introduced in
\cite{GnedinIntern.Math.ResearchNotices2006Art.ID5196839pp.}.
This map sends $\mathsf{x}=(\mathsf{x}_1,\mathsf{x}_2,\dots)\in\Omega_+$ to $(\omega,\omega')\in\Omega$
with $\omega=\omega'=(\mathsf{x}_1/2,\mathsf{x}_2/2,\dots)$.
In \S\ref{s8.1} below we discuss this embedding in more detail.
\subsection{Multiplicative measures}\label{s0.3}
In \cite{Borodin1997}
A.~Borodin introduced
multiplicative
coherent systems of measures
(or central measures in the sense of \cite{Kerov1990})
on the
Schur graph.
This is a sequence of probability measures $M_n^\alpha$ on $\mathbb{S}_n$, $n\in\mathbb{Z}_{\ge0}$,
depending on one real parameter $\alpha\in(0,+\infty)$.
Below we call this object simply the {\em{}multiplicative measures\/}.
According to a general formalism (explained, e.g., in \cite{Kerov1998})
the multiplicative measures $\left\{ M_n^\alpha \right\}$ give rise to a one-parameter
family of probability measures $\mathsf P^{(\alpha)}$ on $\Omega_+$.
Namely, every set $\mathbb{S}_n$ can be embedded into $\Omega_+$:
a strict partition $\lambda\in\mathbb{S}_n$ maps to a point $(\lambda_1/n,\lambda_2/n,\dots)\in\Omega_+$, where $\lambda_i$
are the components of $\lambda$.
As $\alpha$ remains fixed and $n$ goes to infinity,
the images of $M_n^\alpha$
under these embeddings weakly converge to $\mathsf P^{(\alpha)}$.
\subsection{The model of random walks}
To a coherent system of measures on a graded graph
one can associate a sequence of random walks on the floors of the graph.\footnote{We
assume that a graded graph
satisfies some additional conditions
(listed, e.g., in \cite[\S3]{Fulman2007}) that allow to consider coherent systems on it.
In fact, the Schur graph satisfies them.
See also \cite[\S1]{Borodin2007}.}
These random walks are called the {\em{}up/down Markov chains\/}.
The up/down Markov chains first appeared in a paper by J.~Fulman~\cite{Fulman2005}.
He was interested in such questions as
their eigenstructure, eigenvectors, and convergence rates.
In the papers \cite{Fulman2005,Fulman2007}
many examples of up/down Markov chains associated to various coherent systems
on various graphs are studied,
including
the Plancherel measures and the $z$-measures on the Young
graph, the Ewens-Pitman's partition structures
on the Kingman graph,\footnote{The Young
graph is the graph consisting of all ordinary Young diagrams
(they are identified with ordinary partitions as in \cite[Ch. I, \S1]{Macdonald1995}).
The $z$-measures originated
from the problem of harmonic analysis for the
infinite symmetric group $\mathfrak{S}_\infty$ \cite{Kerov1993,Kerov2004} and were
studied in detail by
A.~Borodin and G.~Olshanski, see the bibliography in \cite{Borodin2007}.
The set of vertices of the Kingman graph is the same as of the Young graph, but the edge
multiplicities are different. The Ewens-Pitman's partition structure
(this is a special name for the coherent system of measures
on the Kingman graph, the term is
due to J.~F.~C.~Kingman \cite{Kingman1978})
was introduced
in \cite{Ewens1979,Pitman1992}. It is closely related to the Poisson-Dirichlet
measure (see, e.g., \cite{Pitman2002} and bibliography therein).}
and the Plancherel
measures on the Schur graph.\footnote{We give the
definition of the Plancherel measures on the Schur graph in \S\ref{s1.4} below.}
In \cite{Borodin2007,Petrov2007,Olshanski2009}
the limit behaviour of various up/down Markov chains
is studied. The paper \cite{Borodin2007} deals with the limit behaviour
of the chains associated to the
$z$-measures on the Young graph. In \cite{Petrov2007}
the chains corresponding to the Ewens-Pitman's partition structure are studied, and
in \cite{Olshanski2009} the general case of the Young graph with Jack edge multiplicities
is considered.
In this paper we consider a sequence of up/down Markov
chains associated to the multiplicative
measures on the Schur graph.
These chains depend on the parameter $\alpha$.
The $n$th chain lives on $\mathbb{S}_n$ and preserves the
probability measure $M_n^\alpha$. We study the limit behaviour of these
random walks as $n\to\infty$.
\subsection{The limit diffusion and its pre-generator}
Assume that $\alpha\in(0,+\infty)$ is fixed.
Let us embed each set $\mathbb{S}_n$ into $\Omega_+$ as described above in \S\ref{s0.3},
and let the discrete
time of the $n$th up/down chain be scaled by the factor $n^{-2}$.
We show that under these space and time scalings
the up/down chains
converge, as $n\to\infty$,
to a diffusion process\footnote{By a diffusion process
we mean a strong Markov process with continuous sample paths.} $\X{\alpha}(t)$, $t\ge0$,
in the simplex $\Omega_+$.
We also show that $\X{\alpha}(t)$ preserves the measure $\mathsf{P}^{(\alpha)}$,
is reversible and ergodic with respect to it.
The main result of the present paper is the formula
for the pre-generator of the process $\X\alpha(t)$. To formulate the result we
need some notation.
By $C(\Omega_+)$ denote the Banach algebra of real-valued continuous functions on $\Omega_+$
with pointwise operations and the uniform norm.
Let $\mathcal{F}$ be a dense subspace of $C(\Omega_+)$ freely
generated (as a commutative unital algebra) by the
algebraically independent continuous functions
$\mathsf{q}_{2k}(\mathsf{x}):=\sum_{i=1}^{\infty}\mathsf{x}_i^{2k+1}$, $k=1,2,\dots$.
Define an operator $A\colon\mathcal{F}\to\mathcal{F}$ depending on the parameter $\alpha$:
\begin{equation}\label{f0.1}
\left.
\begin{array}{l}
\displaystyle
A=\sum_{i,j=1}^{\infty}(2i+1)(2j+1)\left( \mathsf{q}_{2i+2j}-\mathsf{q}_{2i}\mathsf{q}_{2j} \right)
\frac{\partial^2}{\partial\mathsf{q}_{2i}\partial\mathsf{q}_{2j}}\\\displaystyle\qquad+
2\sum_{i,j=0}^{\infty}\left( 2i+2j+3 \right)\mathsf{q}_{2i}\mathsf{q}_{2j}\frac{\partial}{\partial \mathsf{q}_{2i+2j+2}}
-\sum_{i=1}^{\infty}(2i+1)\left( 2i+\frac{\alpha}{2} \right)
\mathsf{q}_{2i}\frac{\partial}{\partial\mathsf{q}_{2i}},
\end{array}
\right.
\end{equation}
where, by agreement, $\mathsf{q}_0=1$. This is a formal differential operator in the polynomial
algebra $\mathcal{F}=\mathbb{R}\left[ \mathsf{q}_2,\mathsf{q}_4,\mathsf{q}_6,\dots \right]$.
We show that
the operator $A$ is closable in $C(\Omega_+)$ and that the process
$\X{\alpha}(t)$ is generated by the closure $\overline A$ of the operator $A$.
\subsection{The method}
The formulation of the results is given in probabilistic terms.
However, the use of probabilistic technique in the proofs
generally reduces to the application of certain results from the paper
\cite{Trotter1958} and the book \cite{Ethier1986} concerning approximations of continuous
semigroups by discrete ones.
The essential part of the paper consists of the
computations in a polynomial algebra.
To obtain the formula (\ref{f0.1}) for the pre-generator
we use the methods similar to those of
\cite{Olshanski2009}.
This involves restating some of the results concerning ordinary
Young diagrams in our setting.
In particular, we introduce Kerov interlacing coordinates of shifted Young diagrams
which are similar to
interlacing coordinates of ordinary Young diagrams
introduced and studied by S.~Kerov in \cite{Kerov2000}.
Kerov interlacing coordinates of shifted Young diagrams are of separate interest.
We also give an alternative
expression
for the pre-generator $A$. Namely,
we
compute the action of
$A$ on Schur's $\mathcal{Q}$-functions
(Proposition \ref{p6.8} (2) below).
This is done in \S\ref{s2} exactly as in
\cite[\S4]{Borodin2007}
with ordinary Schur functions replaced by
Schur's $\mathcal{Q}$-functions.
In this argument we use the
formula (\ref{f36}) for dimension of skew shifted
Young diagrams which is due to V.~Ivanov \cite{IvanovNewYork3517-3530}.
Note that the formula (\ref{f86}) for the action of $A$ on Schur's $\mathcal{Q}$-functions
is not formally necessary for the rest of the results of the present paper
(see Remark \ref{p90.1} below).
\subsection{Organization of the paper}
In \S\ref{s1.1}--\ref{s1.3} we recall the definition of the Schur graph.
We also recall coherent systems
associated to this graph and the corresponding up/down Markov chains.
In \S\ref{s1.4} we recall the multiplicative measures
on the Schur graph
introduced by A.~Borodin \cite{Borodin1997}.
They depend on a parameter $\alpha\in(0,+\infty)$.\footnote{Note that
in \cite{Borodin1997} the parameter $\alpha$ is denoted by $x$.}
In \S\ref{s20} we introduce
Kerov interlacing coordinates of shifted Young diagrams and
study their properties. Here we restate
some of the results of the paper \cite{Kerov2000} and apply them
to our situation.
In \S\ref{s2.1} we consider the polynomial algebra $\Gamma$
generated by the odd Newton power sums.
The basis for $\Gamma$ is formed by Schur's $\mathcal{Q}$-functions $\mathcal{Q}_\lambda$
(indexed by strict partitions).
In \S\ref{s2.2}--\ref{s2.3} we
prove a useful formula for the action of the $n$th up/down Markov chain transition operator
(corresponding to the multiplicative measures on the Schur graph)
on Schur's $\mathcal{Q}$-functions.
The argument here is the same as in \cite[\S4]{Borodin2007}.
In \S\ref{s3} we prove some facts
concerning the algebra $\Gamma$ that are used in \S\ref{s4}--\ref{s5}.
In \S\ref{s4}--\S\ref{s5} we compute the ``differential'' form of
the $n$th up/down Markov chain
transition operator corresponding to the multiplicative measures
(see Theorem \ref{p5.1} below for the exact result).
In \S\ref{s6} we use the general results
of the paper \cite{Trotter1958} and the book \cite{Ethier1986}
to prove the convergence, as $n\to\infty$, of our up/down Markov
chains to a continuous time Markov process $\X{\alpha}(t)$ in $\Omega_+$.
We also prove the differential formula for the pre-generator of this process
and study some other properties
of $\X{\alpha}(t)$.
\subsection{Acknowledgement}
I am very grateful to Grigori Olshanski for the setting of the problem,
permanent attention and fruitful discussions, and to Vladimir Ivanov for helpful discussions.
\section{Multiplicative measures}\label{s1}
\subsection{The Schur graph}\label{s1.1}
A {\em{}partition\/} is an infinite non-increasing sequence of nonnegative integers
\begin{equation*}
\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell(\lambda)},0,0,\dots),\qquad
\lambda_1\ge\lambda_2\ge\dots\ge\lambda_{\ell(\lambda)}>0,\qquad \lambda_i\in\mathbb{Z}_{>0},
\end{equation*}
having only finitely many nonzero members.
Their number $\ell(\lambda)\ge0$ is called the
{\em{}length of the partition\/}.
The {\em{}weight of the partition\/}
is
$|\lambda|:=\sum_{i=1}^{\ell(\lambda)}\lambda_i$.
A partition $\lambda$ is called {\em{}strict\/} if
it has no equal parts:
$\lambda_1>\lambda_2>\dots>\lambda_{\ell(\lambda)}>0$.
We denote strict partitions by $\lambda,\mu,\nu,\varkappa,\dots$,
and ordinary (i.e., not necessarily strict) partitions
by $\sigma,\rho,\tau,\dots$.
As explained in \cite[Ch. I, \S1, Example 9]{Macdonald1995}, to every strict partition corresponds a {\em{}shifted Young diagram\/}.
The shifted Young diagram corresponding to
$\lambda$ consists of $\ell(\lambda)$ rows.
The $i$th row ($i=1,\dots,\ell(\lambda)$) has $\lambda_i$ boxes,
and for $j=1,\dots,\ell(\lambda)-1$ the first box of the $(j+1)$th
row lies directly below the second box of the $j$th row.
We identify strict partitions and corresponding shifted Young diagrams.
For example,
Figure \ref{fig1}
shows the shifted Young diagram of the form $(6,5,3,1)$.
\begin{figure}[htpb]
\begin{center}
\includegraphics{shYoung6531.eps}
\end{center}
\caption{Figure \ref{fig1}.}
\label{fig1}
\end{figure}
If $\lambda$ and $\mu$ are shifted Young diagrams and $\lambda$ is obtained from $\mu$
by adding one box, then we write $\lambda\searrow\mu$ (or, equivalently, $\mu\nearrow\lambda$).
Denote this box (that distinguishes $\lambda$ and $\mu$) by $\lambda/\mu$.
For two shifted Young diagrams $\mu$ and $\lambda$
such that $|\lambda|=|\mu|+1$ we set
\begin{equation}\label{mult}
\kappa(\mu,\lambda):=\left\{
\begin{array}{ll}
2,&\mbox{if $\mu\nearrow\lambda$ and $\ell(\lambda)=\ell(\mu)$};\\
1,&\mbox{if $\mu\nearrow\lambda$ and $\ell(\lambda)=\ell(\mu)+1$};\\
0,&\mbox{otherwise}.
\end{array}
\right.
\end{equation}
All shifted Young diagrams are organized in a graded set
$\mathbb{S}=\bigsqcup_{n=0}^{\infty}\mathbb{S}_n$, where
$\mathbb{S}_n:=\left\{ \lambda\colon|\lambda|=n \right\}$, $n\in\mathbb{Z}_{>0}$, and
$\mathbb{S}_0:=\left\{ \varnothing \right\}$.
This set is equipped with the structure of a graded
graph. It has edges only between consecutive floors $\mathbb{S}_n$ and $\mathbb{S}_{n+1}$, $n\in\mathbb{Z}_{\ge0}$.
If $\mu\in\mathbb{S}_n$ and $\lambda\in\mathbb{S}_{n+1}$, then we draw $\kappa(\mu,\lambda)$
edges between $\mu$ and $\lambda$. Let edges be oriented in the direction from $\mathbb{S}_n$ to $\mathbb{S}_{n+1}$.
We call this oriented graded graph
the {\em{}Schur graph\/}.\footnote{Sometimes (e.g., in \cite{Borodin1997})
the same graph with simple edges
is called the Schur graph.
These two graphs have the same down transition functions (see \S\ref{s1.3} below),
hence for us the difference between them is inessential.}
By $\mathsf{h}(\mu,\lambda)$ denote the total number
of (oriented) paths from $\mu$ to $\lambda$
in the graph $\mathbb{S}$.
Clearly, $\mathsf{h}(\mu,\lambda)$ vanishes unless
$\mu\subset\lambda$ (that is, unless the shifted Young diagram
$\mu$ is a subset of the shifted Young diagram $\lambda$).
Set $\mathsf{h}(\lambda):=\mathsf{h}(\varnothing,\lambda)$.
This function has the form \cite[Ch. III, \S8, Example 12]{Macdonald1995}:\footnote{The factor
$2^{|\lambda|-\ell(\lambda)}$ (that does not enter the corresponding formula
in \cite{Macdonald1995}) appears due to the edge multiplicities (\ref{mult})
in our version of the Schur graph.}
\begin{equation}\label{f1}
\mathsf{h}(\lambda)=2^{|\lambda|-\ell(\lambda)}\cdot\frac{|\lambda|!}{\lambda_1!\lambda_2!\dots\lambda_{\ell(\lambda)}!}
\prod_{1\le i<j\le\ell(\lambda)}\frac{\lambda_i-\lambda_j}{\lambda_i+\lambda_j},\qquad\lambda\in\mathbb{S}.
\end{equation}
Note that if $\lambda$ is not strict, then this formula reduces to $\mathsf{h}(\lambda)=0$.
There is also an explicit
formula for the function $\mathsf{h}(\mu,\lambda)$,
it was proved in \cite{IvanovNewYork3517-3530}.
We recall this result below, see (\ref{f36}).
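For concreteness (this sketch is ours and not part of the paper), the agreement between the product formula (\ref{f1}) and the path-counting definition of $\mathsf{h}(\lambda)$ can be checked on small diagrams. A strict partition is encoded as a decreasing tuple; adding a box means increasing one part by $1$ or appending a new part equal to $1$, with the multiplicities $\kappa$ from (\ref{mult}).

```python
from fractions import Fraction
from math import factorial

def h_product(lam):
    # closed formula (f1) for the number of paths from the empty diagram to lam
    n, l = sum(lam), len(lam)
    val = Fraction(2) ** (n - l) * factorial(n)
    for part in lam:
        val /= factorial(part)
    for i in range(l):
        for j in range(i + 1, l):
            val *= Fraction(lam[i] - lam[j], lam[i] + lam[j])
    return val

def covers(mu):
    # all (lam, kappa(mu, lam)) with lam obtained from mu by adding one box
    out = []
    for i in range(len(mu)):
        lam = list(mu)
        lam[i] += 1
        if i == 0 or lam[i] < lam[i - 1]:
            out.append((tuple(lam), 2))      # same number of rows: kappa = 2
    if not mu or mu[-1] > 1:
        out.append((tuple(mu) + (1,), 1))    # one extra row: kappa = 1
    return out

def h_paths(lam, _memo={(): 1}):
    # h(lam) as the weighted number of oriented paths from the empty diagram
    if lam not in _memo:
        total = 0
        for i in range(len(lam)):
            mu = tuple(p for p in (lam[k] - (k == i) for k in range(len(lam))) if p > 0)
            if all(mu[k] > mu[k + 1] for k in range(len(mu) - 1)):
                total += dict(covers(mu)).get(lam, 0) * h_paths(mu)
        _memo[lam] = total
    return _memo[lam]

for lam in [(1,), (2,), (2, 1), (3, 1), (4, 2, 1), (6, 5, 3, 1)]:
    assert h_product(lam) == h_paths(lam)
```

For instance, $\mathsf{h}((3,1))=8$ arrives both from the product formula and from the recursion $\mathsf{h}((3,1))=2\,\mathsf{h}((2,1))+\mathsf{h}((3))$.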
\subsection{Coherent systems and up/down Markov chains}\label{s1.3}
Here we give definitions of a coherent system on the Schur graph and
of up/down Markov chains associated to it. We follow
\cite[\S1]{Borodin2007}.
The {\em{}down transition function\/}
for $\mu,\lambda\in\mathbb{S}$ such that $|\lambda|=|\mu|+1$
is
\begin{equation}\label{f2}
p^\downarrow(\lambda,\mu):=\frac{\mathsf{h}(\mu)}{\mathsf{h}(\lambda)}\kappa(\mu,\lambda).
\end{equation}
It can be easily checked that
\begin{itemize}
\item $p^\downarrow(\lambda,\mu)\ge0$ for all $\mu,\lambda\in\mathbb{S}$ such that $|\lambda|=|\mu|+1$;
\item $p^\downarrow(\lambda,\mu)$ vanishes unless $\mu\nearrow\lambda$;
\item if $|\lambda|=n\ge1$, then $\sum_{\mu\colon|\mu|=n-1}p^\downarrow(\lambda,\mu)=1$.
\end{itemize}
\begin{df}\rm{}
A coherent system on $\mathbb{S}$
is a system of
probability
measures $M_n$ on $\mathbb{S}_n$, $n\in\mathbb{Z}_{\ge0}$, consistent with the down transition function:
\begin{equation}\label{f3}
M_n(\mu)=\sum_{\lambda\colon\lambda\searrow\mu}p^\downarrow(\lambda,\mu)M_{n+1}(\lambda)\qquad
\mbox{for all $n\in\mathbb{Z}_{\ge0}$ and $\mu\in\mathbb{S}_n$}.
\end{equation}
Here by $M_n(\mu)$ we denote the measure of a singleton $\left\{ \mu \right\}$.
\end{df}
Fix a coherent system $\left\{ M_{n} \right\}$.
The {\em{}up transition function\/}
for $\lambda,\nu\in\mathbb{S}$ such that
$|\lambda|=n$, $|\nu|=n+1$, $n\in\mathbb{Z}_{\ge0}$, and $M_n(\lambda)\ne0$ is
\begin{equation*}
p^\uparrow(\lambda,\nu):=\frac{M_{n+1}(\nu)}{M_{n}(\lambda)}p^\downarrow(\nu,\lambda).
\end{equation*}
The up transition function depends on the choice of a coherent system.
Moreover, $\left\{ M_n \right\}$ and $p^\uparrow$
are consistent in a sense similar to (\ref{f3}):
\begin{equation}\label{f4}
M_{n+1}(\nu)=\sum_{\textstyle\genfrac{}{}{0pt}{}{\lambda\colon\lambda\nearrow\nu}{M_n(\lambda)\ne0}}
p^\uparrow(\lambda,\nu)M_n(\lambda)\qquad\mbox{for all $n\in\mathbb{Z}_{\ge0}$ and $\nu\in\mathbb{S}_{n+1}$}.
\end{equation}
\begin{df}\label{p1.6}\rm{}
A system of measures $\left\{ M_n \right\}$, where $M_n$ is a probability measure on $\mathbb{S}_n$, $n\in\mathbb{Z}_{\ge0}$,
is called {\em{}nondegenerate\/}, if $M_n(\lambda)>0$ for all $n\in\mathbb{Z}_{\ge0}$ and $\lambda\in\mathbb{S}_n$.
\end{df}
Let $\left\{ M_n \right\}$ be a nondegenerate coherent system on $\mathbb{S}$.
For all $n\in\mathbb{Z}_{>0}$ we define a Markov chain $T_n$ on the set $\mathbb{S}_n$
with the following
transition matrix:
\begin{equation*}
T_n(\lambda,\widetilde\lambda):=\sum_{\nu\colon|\nu|=n+1}p^\uparrow(\lambda,\nu)p^\downarrow(\nu,\widetilde\lambda),\qquad
\lambda,\widetilde\lambda\in\mathbb{S}_n.
\end{equation*}
This is the composition of the up and down transition functions,
from $\mathbb{S}_n$ to $\mathbb{S}_{n+1}$ and then back to $\mathbb{S}_n$.
From (\ref{f3}) and (\ref{f4}) it follows that
$M_n$ is a stationary distribution for $T_n$. It can be readily shown
that the matrix $M_n(\lambda)T_n(\lambda,\widetilde\lambda)$
is symmetric with respect to the substitution $\lambda\leftrightarrow\widetilde\lambda$.
This means that the chain $T_n$ is reversible with respect to $M_n$.
\subsection{Multiplicative measures}\label{s1.4}
In this subsection we recall some definitions and results from \cite{Borodin1997}
concerning multiplicative measures on the Schur graph.
\begin{df}\label{p1.7}\rm{}
For $n\in\mathbb{Z}_{\ge0}$ the {\em{}Plancherel measure\/} on the set $\mathbb{S}_n$ is defined as
\begin{equation*}
\mathrm{Pl}_n(\lambda):=\frac{\mathsf{h}(\lambda)^2}{n!}2^{\ell(\lambda)-n},\qquad \lambda\in\mathbb{S}_n,
\end{equation*}
where $\mathsf{h}(\lambda)$ is given by (\ref{f1}).
\end{df}
The Plancherel measures form a nondegenerate
coherent system $\left\{ \mathrm{Pl}_n \right\}$ on $\mathbb{S}$.
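As an added illustration (not taken from the paper), one can verify on small floors that $\mathrm{Pl}_n$ is indeed a probability measure, and that the associated up/down chain, with $p^\uparrow(\lambda,\nu)=\mathrm{Pl}_{n+1}(\nu)\,p^\downarrow(\nu,\lambda)/\mathrm{Pl}_n(\lambda)$ as in \S\ref{s1.3}, is stochastic, stationary, and reversible with respect to $\mathrm{Pl}_n$.

```python
from fractions import Fraction
from math import factorial

def h(lam):
    # product formula (f1)
    n, l = sum(lam), len(lam)
    v = Fraction(2) ** (n - l) * factorial(n)
    for part in lam:
        v /= factorial(part)
    for i in range(l):
        for j in range(i + 1, l):
            v *= Fraction(lam[i] - lam[j], lam[i] + lam[j])
    return v

def covers(mu):
    # (lam, kappa(mu, lam)) for all lam obtained from mu by adding one box
    out = []
    for i in range(len(mu)):
        lam = list(mu)
        lam[i] += 1
        if i == 0 or lam[i] < lam[i - 1]:
            out.append((tuple(lam), 2))
    if not mu or mu[-1] > 1:
        out.append((tuple(mu) + (1,), 1))
    return out

def strict_partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in strict_partitions(n - first, first - 1):
            yield (first,) + rest

def plancherel(lam):
    n = sum(lam)
    return h(lam) ** 2 * Fraction(2) ** (len(lam) - n) / factorial(n)

# Pl_n is a probability measure on S_n
for n in range(1, 9):
    assert sum(plancherel(lam) for lam in strict_partitions(n)) == 1

# one up/down step on S_3
n = 3
Sn = list(strict_partitions(n))

def p_down(nu, lam):
    return Fraction(h(lam), h(nu)) * dict(covers(lam)).get(nu, 0)   # (f2)

def p_up(lam, nu):
    return plancherel(nu) / plancherel(lam) * p_down(nu, lam)

T = {(a, b): sum(p_up(a, nu) * p_down(nu, b) for nu in strict_partitions(n + 1))
     for a in Sn for b in Sn}
for a in Sn:
    assert sum(T[a, b] for b in Sn) == 1                               # stochastic
for b in Sn:
    assert sum(plancherel(a) * T[a, b] for a in Sn) == plancherel(b)   # stationary
for a in Sn:
    for b in Sn:
        assert plancherel(a) * T[a, b] == plancherel(b) * T[b, a]      # reversible
```

On $\mathbb{S}_3=\{(3),(2,1)\}$ this gives, for example, $T\big((3),(3)\big)=3/4$ and $T\big((2,1),(3)\big)=1/2$, consistent with $\mathrm{Pl}_3((3))=2/3$, $\mathrm{Pl}_3((2,1))=1/3$.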
\begin{df}\rm{}\label{p1.8}
Let $M_n$ be a probability measure on $\mathbb{S}_n$ for all $n\in\mathbb{Z}_{\ge0}$.
The system of measures $\left\{ M_n \right\}$ is called {\em{}multiplicative\/} if
\begin{equation}\label{f5}
M_n(\lambda)=\mathrm{Pl}_n(\lambda)\cdot\frac1{Z(n)}\cdot\prod_{\square\in\lambda}
f\left( \mathrm{i}(\square),\mathrm{j}(\square) \right)\quad
\mbox{for all $n\in\mathbb{Z}_{\ge0}$ and $\lambda\in\mathbb{S}_n$}
\end{equation}
for some functions
$f\colon\mathbb{Z}_{\ge0}^2\to\mathbb{C}$ and $Z\colon\mathbb{Z}_{\ge0}\to\mathbb{C}$.
Here the product is taken
over all boxes in the shifted diagram $\lambda$, and the numbers
$\mathrm{i}(\square)$ and $\mathrm{j}(\square)$ are the
row and column numbers of the box $\square$, respectively.\footnote{The
row number is counted from up to down, and the column number is counted from
left to right.}
\end{df}
\begin{thm}[Borodin \cite{Borodin1997}]
A nondegenerate
multiplicative system of probability measures $\left\{ M_n \right\}$
is coherent if and only if
the functions $f(i,j)$ and $Z(n)$ from (\ref{f5}) have the form
\begin{equation}\label{f6}
\begin{array}{rcrcl}
f(i,j)&=& f_\alpha(i,j)&:=& (j-i)(j-i+1)+\alpha,\\
Z(n)&=& Z_\alpha(n)&:=& \alpha(\alpha+2)(\alpha+4)\dots(\alpha+2n-2)
\end{array}
\end{equation}
for some parameter $\alpha\in(0,+\infty]$.
\end{thm}
We denote the multiplicative coherent system corresponding to
$\alpha$ by $\left\{ M_n^\alpha \right\}$.
Below we call this object simply the {\em{}multiplicative measures\/}.\footnote{
Note that for all $\lambda\in\mathbb{S}_n$
the ratio $\prod_{\square\in\lambda}f_{\alpha}(\mathrm{i}(\square),\mathrm{j}(\square))/Z_{\alpha}(n)$ tends to one as $\alpha\to+\infty$.
Thus, one can say that $\left\{ M_n^\infty \right\}$
coincides with the Plancherel coherent system.}
The up transition function
corresponding to $\left\{ M_n^\alpha \right\}$
can be written out explicitly:
\begin{equation}\label{f7}
p_\alpha^\uparrow(\lambda,\nu)=
\frac{\mathrm{c}(\nu/\lambda)\left( \mathrm{c}(\nu/\lambda)+1 \right)+\alpha}{2|\lambda|+\alpha}\cdot
\frac{\mathsf{h}(\nu)}{\mathsf{h}(\lambda)\left( |\lambda|+1 \right)},
\end{equation}
where $\alpha\in(0,+\infty]$ and $\mathrm{c}(\square):=\mathrm{j}(\square)-\mathrm{i}(\square)$ is the {\em{}content\/} of the box $\square$.
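As a hedged numerical check (ours, not the paper's), formula (\ref{f7}) can be confirmed to define a probability distribution on the diagrams $\nu\searrow\lambda$. We use the fact that the content of the added box equals $\lambda_i$ when row $i$ is extended and $0$ when a new one-box row is created.

```python
from fractions import Fraction
from math import factorial

def h(lam):
    # product formula (f1)
    n, l = sum(lam), len(lam)
    v = Fraction(2) ** (n - l) * factorial(n)
    for part in lam:
        v /= factorial(part)
    for i in range(l):
        for j in range(i + 1, l):
            v *= Fraction(lam[i] - lam[j], lam[i] + lam[j])
    return v

def additions(lam):
    # (nu, content of the added box) for all nu obtained by adding one box:
    # extending row i contributes content lam[i], a new one-box row content 0
    out = []
    for i in range(len(lam)):
        nu = list(lam)
        nu[i] += 1
        if i == 0 or nu[i] < nu[i - 1]:
            out.append((tuple(nu), lam[i]))
    if not lam or lam[-1] > 1:
        out.append((tuple(lam) + (1,), 0))
    return out

def p_up(lam, nu, c, alpha):
    # formula (f7)
    return (Fraction(c * (c + 1) + alpha, 2 * sum(lam) + alpha)
            * Fraction(h(nu), h(lam) * (sum(lam) + 1)))

for alpha in (1, 2, 7):
    for lam in [(1,), (2,), (2, 1), (3, 1), (4, 2), (5, 3, 1)]:
        assert sum(p_up(lam, nu, c, alpha) for nu, c in additions(lam)) == 1
```

The exact-arithmetic assertions show that $\sum_{\nu\searrow\lambda}p_\alpha^\uparrow(\lambda,\nu)=1$ on these diagrams for each tested $\alpha$.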
\begin{rmk}\rm{}
One can consider {\em{}degenerate multiplicative measures\/}.
That is, for certain negative values of $\alpha$
the formulas (\ref{f5})--(\ref{f6}) define a system of measures $\left\{ M_n \right\}$
not on the whole Schur graph $\mathbb{S}$, but on a certain finite subset of $\mathbb{S}$.
Namely, if $\alpha=\alpha_N:=-N(N+1)$ for some $N\in\mathbb{Z}_{>0}$,
then $M_n^{\alpha_N}$ is a probability measure on $\mathbb{S}_n$ for
all $n=0,1,\dots,\frac{N(N+1)}2$. The system $\left\{ M_n^{\alpha_N} \right\}$
satisfies (\ref{f3}) for $n=0,1,\dots,\frac{N(N+1)}2$.
It is clear from (\ref{f5})--(\ref{f6}) that $M_n^{\alpha_N}(\lambda)>0$ iff $\lambda_1\le N$.
Thus, one can say that $\left\{ M_n^{\alpha_N} \right\}_{n=0,1,\dots,\frac{N(N+1)}2}$
is a coherent system of measures on the finite graded graph
$\mathbb{S}(N):=\left\{ \lambda\in\mathbb{S}\colon\lambda_1\le N \right\}\subset \mathbb{S}$.
\end{rmk}
The existence of
degenerate multiplicative coherent systems
is a useful observation, but in the present paper we concentrate on the case $\alpha\in(0,+\infty)$.
\begin{df}\label{p10.6}\rm{}
In the rest of the paper the parameter $\alpha$ takes values in
$(0,+\infty)$. From now on by $T_n$ we denote the one-step transition operator
of the $n$th up/down Markov chain corresponding to the multiplicative
measures
with parameter $\alpha$. This operator $T_n$ acts on functions on $\mathbb{S}_n$ (see \S\ref{s2.3} below
for more detail).
\end{df}
\section{Kerov interlacing coordinates\\ of shifted Young diagrams}\label{s20}
In this section we introduce Kerov interlacing coordinates of shifted
Young diagrams and study their basic properties.
These coordinates are similar to interlacing coordinates of ordinary Young diagrams
introduced by S.~Kerov,
see \cite{Kerov2000}.
In \S\ref{s20.2} and \S\ref{s20.3} we express the Schur graph's Plancherel
up transition function $p_{\infty}^{\uparrow}$ and the down transition function
$p^{\downarrow}$, respectively,
in terms of Kerov interlacing coordinates.
This approach is similar to that
explained in \cite{Kerov2000} and used in \cite{Olshanski2009},
but there are some significant differences.
\subsection{Definition and basic properties}\label{s20.1}
Let $\lambda\in\mathbb{S}_n$, $n\ge1$.
Denote by $X(\lambda)$ the set of numbers $\left\{ \mathrm{c}(\nu/\lambda)\colon\nu\searrow\lambda \right\}$,
that is, $X(\lambda)$ is the set of
contents of all boxes that can be added to the shifted Young diagram $\lambda$.
For every $x\in X(\lambda)$ there exists a unique shifted
diagram $\nu\searrow\lambda$ such that $\mathrm{c}(\nu/\lambda)=x$.
Denote this diagram $\nu$ by $\lambda+\square(x)$.
Similarly, let $Y(\lambda):=\left\{ \mathrm{c}(\lambda/\mu)\colon\mu\nearrow\lambda \right\}$ be the
set of contents of all boxes that can be removed from the shifted Young diagram $\lambda$.
For every $y\in Y(\lambda)$
there exists a unique shifted diagram $\mu\nearrow\lambda$ such that $\mathrm{c}(\lambda/\mu)=y$. Denote
this diagram $\mu$ by~$\lambda-\square(y)$.
For $\lambda=\varnothing$ we set
$X(\varnothing):=\left\{ 0 \right\}$,
$Y(\varnothing):=\varnothing$.
\begin{df}\rm{}\label{p1.1}
Let $\lambda\in\mathbb{S}$.
Suppose that the sets $X(\lambda)$ and $Y(\lambda)$ are written in ascending order.
The numbers $\left[ X(\lambda);Y(\lambda) \right]$ are called
{\em{}Kerov coordinates\/} of a shifted Young diagram $\lambda$.
\end{df}
Figure \ref{fig1a} shows Kerov coordinates of two different shifted Young diagrams.
Namely, for $\mu=(6,5,1)$ Kerov coordinates are $X(\mu)=\left\{ 1,6 \right\}$ and
$Y(\mu)=\left\{ 0,4 \right\}$ (Figure \ref{fig1a}a-b); and for $\nu=(6,5,3)$ these coordinates
are
$X(\nu)=\left\{ 0,3,6 \right\}$ and
$Y(\nu)=\left\{ 2,4 \right\}$ (Figure \ref{fig1a}c-d).
\begin{figure}[htpb]
\begin{center}
\includegraphics{abcd.eps}
\end{center}
\caption{Figure \ref{fig1a}.}
\label{fig1a}
\end{figure}
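The coordinates in Figure \ref{fig1a} can be recomputed mechanically. The following Python sketch (an illustration we add for concreteness) uses the facts that the box appended to row $i$ has content $\lambda_i$, while the last box of row $i$ has content $\lambda_i-1$.

```python
def X(lam):
    # contents of the boxes that can be added to the shifted diagram lam
    xs = set()
    for i in range(len(lam)):
        if i == 0 or lam[i] + 1 < lam[i - 1]:
            xs.add(lam[i])            # extend row i+1
    if not lam or lam[-1] > 1:
        xs.add(0)                     # start a new one-box row
    return sorted(xs)

def Y(lam):
    # contents of the boxes that can be removed from lam
    ys = set()
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] - 1 > lam[i + 1]:
            ys.add(lam[i] - 1)        # shorten row i+1
    return sorted(ys)

# the two examples of Figure fig1a
assert X((6, 5, 1)) == [1, 6] and Y((6, 5, 1)) == [0, 4]
assert X((6, 5, 3)) == [0, 3, 6] and Y((6, 5, 3)) == [2, 4]

def interlaced(lam):
    # merged in increasing order, the x's and y's must alternate
    seq = sorted([(v, 'x') for v in X(lam)] + [(v, 'y') for v in Y(lam)])
    return all(seq[k][1] != seq[k + 1][1] for k in range(len(seq) - 1))

for lam in [(), (1,), (3, 2), (6, 5, 1), (6, 5, 3), (7, 4, 2, 1)]:
    assert interlaced(lam)
```

The alternation visible in these examples is exactly the interlacing property formalized just below.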
\begin{prop}[The interlacing property]\label{p1.2}
Let $\lambda$ be a shifted Young diagram.
{\bf{}(a)\/} If $\lambda$ contains a one-box row
(see, for example, Figure \ref{fig1a}a-b), then for some integer
$d\ge1$ we have
\begin{equation*}
X(\lambda)=\left\{ x_1,\dots,x_d \right\},\qquad Y(\lambda)=\left\{ 0,y_2,\dots,y_d \right\}
\end{equation*}
and
\begin{equation*}
0=y_1<x_1<y_2<x_2<\dots<y_d<x_d.
\end{equation*}
{\bf{}(b)\/} If $\lambda$ does not contain a one-box row
(see, for example, Figure \ref{fig1a}c-d), then for some
integer $d\ge0$ we have\footnote{Note that $d=0$ only for $\lambda=\varnothing$.}
\begin{equation*}
X(\lambda)=\left\{ 0,x_1,\dots,x_d \right\},\qquad Y(\lambda)=\left\{ y_1,\dots,y_d \right\}
\end{equation*}
and
\begin{equation*}
0=x_0<y_1<x_1<y_2<x_2<\dots<y_d<x_d.
\end{equation*}
\end{prop}
\begin{proof}
This can be simply proved by induction on the number of boxes of $\lambda$,
by consecutively adding a box to the diagram.
During this procedure the change
of the number of one-box rows in the diagram\footnote{This number is
always zero or one.}
leads to a transition from the case
{\bf{}(a)\/} to the case {\bf{}(b)\/} and vice versa.
\end{proof}
\begin{rmk}\label{p1.3}\rm{}
In the case of ordinary Young diagrams (see, e.g., \cite{Kerov2000,Olshanski2009})
the number of elements in the set $X(\lambda)$ is always greater by one than the number
of elements in the set $Y(\lambda)$.
In our case it is not always true.
Let us define $X'(\lambda):=X(\lambda)\setminus\left\{ 0 \right\}$.
It is clear that
the numbers of elements in the sets
$X'(\lambda)$ and $Y(\lambda)$ are equal for all shifted diagrams $\lambda$.
We will use this fact below.
\end{rmk}
\begin{rmk}\label{p1.4}\rm{}
As can be also proved by induction on the number of boxes (similarly to
Proposition \ref{p1.2}),
a shifted Young diagram $\lambda$ is uniquely
determined by its Kerov coordinates $\left[ X(\lambda);Y(\lambda) \right]$, or,
equivalently, by the pair of sequences $X'(\lambda)$ and $Y(\lambda)$.
\end{rmk}
\begin{rmk}\label{p8.9}\rm{}
Let $\lambda$ be a nonempty shifted Young diagram, and
\begin{equation*}
X'(\lambda)=\left\{ x_1,\dots,x_d \right\},\qquad Y(\lambda)=\left\{ y_1,\dots,y_d \right\}
\end{equation*}
for some integer $d\ge1$.
It can be easily seen that $\lambda$ has the form
\begin{equation*}
\lambda=(x_d,x_d-1,\dots,y_d+1,x_{d-1},x_{d-1}-1,\dots,y_{d-1}+1,\dots,x_1,x_1-1,\dots,y_1+1)
\end{equation*}
(see Figure \ref{fig2}).
Here for all $j$ the numbers $x_j,x_j-1,\dots,y_j+1$ are
consecutive
decreasing integers
(for some $j$ it can happen that $x_j=y_j+1$).
Note that $y_1$ can be zero; this corresponds
to case {\bf{}(a)\/} in Proposition~\ref{p1.2}.
\begin{figure}[htpb]
\begin{center}
\includegraphics{sum-sum.eps}
\end{center}
\caption{Figure \ref{fig2}.}
\label{fig2}
\end{figure}
\end{rmk}
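For example, for $\lambda=(3,1)$ the row lengths split into the blocks $\left\{ 3 \right\}$ and $\left\{ 1 \right\}$ of consecutive integers, so that $d=2$ and
\begin{equation*}
X(\lambda)=X'(\lambda)=\left\{ 1,3 \right\},\qquad Y(\lambda)=\left\{ 0,2 \right\}
\end{equation*}
(case {\bf{}(a)\/} of Proposition \ref{p1.2}, since $\lambda$ contains a one-box row), while for $\lambda=(3,2)$ there is a single block $\left\{ 3,2 \right\}$ and
\begin{equation*}
X(\lambda)=\left\{ 0,3 \right\},\qquad X'(\lambda)=\left\{ 3 \right\},\qquad Y(\lambda)=\left\{ 1 \right\}
\end{equation*}
(case {\bf{}(b)\/}).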
\begin{prop}\label{p1.5}
For every shifted Young diagram $\lambda$ we have
\begin{equation*}
\sum_{x\in X(\lambda)}x(x+1)-\sum_{y\in Y(\lambda)}y(y+1)=2|\lambda|.
\end{equation*}
\end{prop}
\begin{proof}
If $\lambda=\varnothing$, the claim is obvious.
Suppose $\lambda\ne\varnothing$. We have
\begin{equation*}
\sum_{x\in X(\lambda)}x(x+1)-\sum_{y\in Y(\lambda)}y(y+1)=
\sum_{j=1}^{d}\big(x_j(x_j+1)-y_j(y_j+1)\big),
\end{equation*}
where the notation is as in Remark \ref{p8.9}.
For all~$j$ the value $x_j(x_j+1)-y_j(y_j+1)$ clearly equals twice the area
of the part of the shifted Young diagram $\lambda$ formed by the rows of lengths $x_j,x_j-1,\dots,y_j+1$
(see Figure~\ref{fig2}).
This concludes the proof.
\end{proof}
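For example, for $\lambda=(3,1)$ we have $X(\lambda)=\left\{ 1,3 \right\}$ and $Y(\lambda)=\left\{ 0,2 \right\}$, and indeed
\begin{equation*}
1\cdot2+3\cdot4-\left( 0\cdot1+2\cdot3 \right)=14-6=8=2|\lambda|.
\end{equation*}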
\subsection{The Plancherel up transition function}\label{s20.2}
The up transition function corresponding to the Plancherel
coherent system on~$\mathbb{S}$ (see \S\ref{s1.4})
can be written in terms of Kerov interlacing coordinates
of shifted Young diagrams.
Let $\lambda$ be an arbitrary shifted Young diagram and $v$ be a complex variable. By definition, put
\begin{equation}\label{f9.9}
\mathcal{R}^\uparrow(v;\lambda):=\frac{\prod_{y\in Y(\lambda)}(v-y(y+1))}{v\cdot\prod_{x\in X'(\lambda)}(v-x(x+1))}.
\end{equation}
It follows from Remark \ref{p1.3} that
the degree of the denominator always exceeds
the degree of the numerator
by one.
Next, from Proposition \ref{p1.2} it follows that
if $\lambda$ contains a one-box row, then the numerator and the denominator
of $\mathcal{R}^\uparrow(v;\lambda)$ can both be divided by the factor $v$, and if $\lambda$ does not contain
a one-box row, then the fraction in the RHS of (\ref{f9.9}) is irreducible.
In either case, the denominator of the irreducible form of the fraction
$\mathcal{R}^\uparrow(v;\lambda)$
is equal to $\prod_{x\in X(\lambda)}(v-x(x+1))$.
Let $\theta^\uparrow_x(\lambda)$, $x\in X(\lambda)$, be the following expansion coefficients
of
$\mathcal{R}^\uparrow(v;\lambda)$
as a sum of partial fractions:
\begin{equation}\label{f10}
\mathcal{R}^\uparrow(v;\lambda)=
\sum_{x\in X(\lambda)}\frac{\theta_x^\uparrow(\lambda)}{v-x(x+1)}.
\end{equation}
\begin{prop}\label{p1.10}
For every shifted Young diagram $\lambda$ and all $x\in X(\lambda)$ we have
\begin{equation*}
\theta^\uparrow_x(\lambda)=p_{\infty}^{\uparrow}(\lambda,\lambda+\square(x)),
\end{equation*}
where $p_\infty^\uparrow(\cdot,\cdot)$ is the up transition function
corresponding
to the Plancherel coherent system on $\mathbb{S}$ (see \S\ref{s1.4}).
\end{prop}
\begin{proof}
It follows from (\ref{f10}) by the residue formula that
\begin{equation}\label{f800}
\theta_{\widehat x}^\uparrow(\lambda)=\left\{
\begin{array}{ll}
\displaystyle\frac{\prod_{y\in Y(\lambda)}
\left( \widehat x(\widehat x+1)-y(y+1) \right)}
{\widehat x(\widehat x+1)\prod_{x \in X'(\lambda),\ x\ne\widehat x}
\left( \widehat x(\widehat x+1)-x(x+1) \right)},&
\mbox{if $\widehat x\ne0$};
\\\rule{0pt}{22pt}
\displaystyle
\frac{\prod_{y\in Y(\lambda)}y(y+1)}{\prod_{x\in X'(\lambda)}x(x+1)},&
\mbox{if $\widehat x=0$}
\end{array}
\right.
\end{equation}
for every $\widehat x\in X(\lambda)$.
Taking the limit as $\alpha\to+\infty$ in (\ref{f7}),
we obtain the following expression for the Plancherel up transition function:
\begin{equation*}
p_{\infty}^{\uparrow}(\lambda,\lambda+\square(\widehat x))=\frac{\mathsf{h}(\lambda+\square(\widehat x))}{\mathsf{h}(\lambda)\left( |\lambda|+1 \right)},\qquad
\widehat x\in X(\lambda),
\end{equation*}
where $\mathsf{h}$ is given by (\ref{f1}).
Let us check that the two above expressions coincide. Assume first that $\widehat x\ne0$.
Let $\lambda=(\lambda_1,\dots,\lambda_{\ell})$
and $\lambda+\square(\widehat x)=(\lambda_1,\dots,\lambda_{k-1},\lambda_k+1,\lambda_{k+1},\dots,\lambda_{\ell})$
for some $1\le k\le \ell$. Note that $\lambda_k=\widehat x$.
Using (\ref{f1}), we have
\begin{equation*}
\left.
\begin{array}{l}
\displaystyle
p_{\infty}^{\uparrow}(\lambda,\lambda+\square(\widehat x))=\frac{\mathsf{h}(\lambda+\square(\widehat x))}{\mathsf{h}(\lambda)\left( |\lambda|+1 \right)}
\\\displaystyle\qquad=\rule{0pt}{20pt}
\frac{2^{|\lambda|+1-\ell(\lambda+\square(\widehat x))}(|\lambda|+1)!}{(\widehat x+1)!\cdot\prod_{i\ne k}\lambda_i!}
\times\\\displaystyle\qquad\quad\times
\prod_{i=1}^{k-1}\frac{\lambda_i-\widehat x-1}{\lambda_i+\widehat x+1}
\cdot\prod_{j=k+1}^{\ell}\frac{\widehat x+1-\lambda_j}{\widehat x+1+\lambda_j}
\cdot\prod_{\textstyle\genfrac{}{}{0pt}{}{1\le i<j\le \ell}{i,j\ne k}}
\frac{\lambda_i-\lambda_j}{\lambda_i+\lambda_j}\times\\\displaystyle\qquad\quad\times
\frac{\widehat x!\cdot\prod_{i\ne k}\lambda_i!}{2^{|\lambda|-\ell(\lambda)}(|\lambda|+1)\cdot|\lambda|!}
\cdot\prod_{i=1}^{k-1}\frac{\lambda_i+\widehat x}{\lambda_i-\widehat x}
\cdot\prod_{j=k+1}^{\ell}\frac{\widehat x+\lambda_j}{\widehat x-\lambda_j}
\cdot\prod_{\textstyle\genfrac{}{}{0pt}{}{1\le i<j\le \ell}{i,j\ne k}}
\frac{\lambda_i+\lambda_j}{\lambda_i-\lambda_j}\\\displaystyle\qquad=
\frac{2^{\ell(\lambda)-\ell(\lambda+\square(\widehat x))+1}}{\widehat x+1}
\cdot
\prod_{\textstyle\genfrac{}{}{0pt}{}{1\le i\le \ell}{i\ne k}}
\frac{\widehat x(\widehat x+1)-\lambda_i(\lambda_i-1)}{\widehat x(\widehat x+1)-\lambda_i(\lambda_i+1)}.
\end{array}
\right.
\end{equation*}
Using Remark \ref{p8.9}, one can decompose the last product as follows:
\begin{equation*}
\prod_{\textstyle\genfrac{}{}{0pt}{}{1\le i\le \ell}{i\ne k}}
\frac{\widehat x(\widehat x+1)-\lambda_i(\lambda_i-1)}{\widehat x(\widehat x+1)-\lambda_i(\lambda_i+1)}=
\prod_{m=1}^{d}
\prod_{\textstyle\genfrac{}{}{0pt}{}{r=y_m+1}{r\ne \widehat x}}^{x_m}
\frac{\widehat x(\widehat x+1)-r(r-1)}{\widehat x(\widehat x+1)-r(r+1)},
\end{equation*}
where $X'(\lambda)=\left\{ x_1,\dots,x_d \right\}$ and $Y(\lambda)=\left\{ y_1,\dots,y_d \right\}$.
Fix $m=1,\dots,d$. It can be readily verified that
\begin{equation*}
\prod_{\textstyle\genfrac{}{}{0pt}{}{r=y_m+1}{r\ne \widehat x}}^{x_m}
\frac{\widehat x(\widehat x+1)-r(r-1)}{\widehat x(\widehat x+1)-r(r+1)}=
\left\{
\begin{array}{ll}\displaystyle
\frac{\widehat x(\widehat x+1)-y_m(y_m+1)}{\widehat x(\widehat x+1)-x_m(x_m+1)},&\widehat x\ne x_m;\\
\rule{0pt}{22pt}
\displaystyle
\frac1{2\widehat x}\big(\widehat x(\widehat x+1)-y_m(y_m+1)\big),&\widehat x=x_m.
\end{array}
\right.
\end{equation*}
Observe that if $\widehat x\ne0$, then $\ell(\lambda)=\ell(\lambda+\square(\widehat x))$.
It can be readily verified that the above expression (\ref{f800})
for $\theta^\uparrow_{\widehat x}(\lambda)$ coincides with
\begin{equation*}
p_{\infty}^{\uparrow}(\lambda,\lambda+\square(\widehat x))=
\frac{2}{\widehat x+1}
\prod_{m=1}^{d}
\prod_{\textstyle\genfrac{}{}{0pt}{}{r=y_m+1}{r\ne \widehat x}}^{x_m}
\frac{\widehat x(\widehat x+1)-r(r-1)}{\widehat x(\widehat x+1)-r(r+1)}.
\end{equation*}
The case $\widehat x=0$ can be considered similarly
with the observation that in this case $\ell(\lambda+\square(\widehat x))=\ell(\lambda)+1$.
This concludes the proof.
\end{proof}
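Let us illustrate Proposition \ref{p1.10} for $\lambda=(3,1)$, for which $X(\lambda)=\left\{ 1,3 \right\}$ and $Y(\lambda)=\left\{ 0,2 \right\}$. We have
\begin{equation*}
\mathcal{R}^\uparrow(v;\lambda)=\frac{v(v-6)}{v(v-2)(v-12)}=\frac{2/5}{v-2}+\frac{3/5}{v-12},
\end{equation*}
so that $p_{\infty}^{\uparrow}(\lambda,(3,2))=\theta^\uparrow_1(\lambda)=\frac25$ and
$p_{\infty}^{\uparrow}(\lambda,(4,1))=\theta^\uparrow_3(\lambda)=\frac35$.
On the other hand, (\ref{f1}) gives $\mathsf{h}(3,1)=8$, $\mathsf{h}(3,2)=16$, and $\mathsf{h}(4,1)=24$, whence
$p_{\infty}^{\uparrow}(\lambda,(3,2))=\frac{16}{8\cdot 5}=\frac25$ and
$p_{\infty}^{\uparrow}(\lambda,(4,1))=\frac{24}{8\cdot 5}=\frac35$, as expected.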
\begin{rmk}\rm{}\label{p1.11}
If we take the limit transition as $v\to\infty$ in (\ref{f10}), we obtain
\begin{equation*}
\lim_{v\to\infty}v\mathcal{R}^\uparrow(v;\lambda)=1=\lim_{v\to\infty}
\sum_{x\in X(\lambda)}\frac{v\cdot p_{\infty}^{\uparrow}(\lambda,\lambda+\square(x))}{v-x(x+1)}
=\sum_{x\in X(\lambda)}p_{\infty}^{\uparrow}(\lambda,\lambda+\square(x))
\end{equation*}
for all $\lambda\in\mathbb{S}$, as it should be.
\end{rmk}
Note that now (\ref{f7}) can be rewritten as (here $\alpha\in(0,+\infty]$)
\begin{equation}\label{f14}
p_\alpha^\uparrow(\lambda,\lambda+\square(x))=\frac{x(x+1)+\alpha}{2|\lambda|+\alpha}\cdot\theta^\uparrow_x(\lambda)\qquad
\mbox{for all $\lambda\in\mathbb{S}$ and $x\in X(\lambda)$}.
\end{equation}
\subsection{The down transition function}\label{s20.3}
The down transition function of the Schur graph
can be written in terms of Kerov interlacing coordinates of shifted Young diagrams.
Let $\lambda$ be an arbitrary nonempty shifted Young diagram and $v$ be a complex
variable. By definition, put
\begin{equation*}
\mathcal{R}^\downarrow(v;\lambda):=\frac1{v\mathcal{R}^\uparrow(v;\lambda)}=
\frac{\prod_{x\in X'(\lambda)}(v-x(x+1))}{\prod_{y\in Y(\lambda)}(v-y(y+1))}.
\end{equation*}
Observe that the numerator and the denominator
both have $v^d$ as the term of maximal degree in $v$, where $d\ge0$ is the number of elements
in the set $X'(\lambda)$ (or, equivalently, in $Y(\lambda)$, see Remark \ref{p1.3}).
Let $\theta^\downarrow_y(\lambda)$, $y\in Y(\lambda)$,
be the following expansion coefficients of
$\mathcal{R}^\downarrow(v;\lambda)$
as a sum of partial fractions:
\begin{equation}\label{f15}
\mathcal{R}^\downarrow(v;\lambda)=1-\sum_{y\in Y(\lambda)}\frac{\theta_y^{\downarrow}(\lambda)}{v-y(y+1)}.
\end{equation}
\begin{prop}\label{p1.12}
For every nonempty shifted Young diagram $\lambda$ we have
\begin{equation*}
\theta_{y}^\downarrow(\lambda)=2|\lambda|\cdot p^\downarrow(\lambda,\lambda-\square(y)),\qquad y\in Y(\lambda),
\end{equation*}
where $p^\downarrow(\cdot,\cdot)$ is the down transition
function (see \S\ref{s1.3} for the definition).
\end{prop}
\begin{proof}
It follows from (\ref{f15}) by the residue formula that
\begin{equation*}
\theta_{\widehat y}^\downarrow(\lambda)=
\frac{\prod_{x\in X'(\lambda)}\left( x(x+1)-\widehat y(\widehat y+1) \right)}
{\prod_{y\in Y(\lambda),\ y\ne\widehat y}\left( y(y+1)-\widehat y(\widehat y+1) \right)}\qquad
\mbox{for every $\widehat y\in Y(\lambda)$}.
\end{equation*}
Next, we can rewrite the definition of the down transition function (\ref{f2}) as
\begin{equation*}
p^{\downarrow}(\lambda,\lambda-\square(\widehat y))=\frac{\mathsf{h}(\lambda-\square(\widehat y))}{\mathsf{h}(\lambda)}\cdot 2^{1-\delta(\widehat y)},
\end{equation*}
where $\delta(\cdot)$ is the Kronecker delta, and the function $\mathsf{h}$ is given by (\ref{f1}).
It can be shown exactly as in the proof of Proposition \ref{p1.10}
(using Remark~\ref{p8.9})
that the two above expressions coincide.
\end{proof}
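For example, for $\lambda=(3,1)$ (so that $X'(\lambda)=\left\{ 1,3 \right\}$ and $Y(\lambda)=\left\{ 0,2 \right\}$) we have
\begin{equation*}
\mathcal{R}^\downarrow(v;\lambda)=\frac{(v-2)(v-12)}{v(v-6)}=1-\frac{4}{v}-\frac{4}{v-6},
\end{equation*}
whence $\theta^\downarrow_0(\lambda)=\theta^\downarrow_2(\lambda)=4$ and, by Proposition \ref{p1.12},
$p^\downarrow(\lambda,(3))=p^\downarrow(\lambda,(2,1))=\frac{4}{2\cdot 4}=\frac12$.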
\section{The up/down Markov chains\\ and doubly symmetric functions}\label{s2}
In this section we
compute the action of the operators $T_n$ from Definition \ref{p10.6}
on doubly symmetric functions (Theorem \ref{p2.7}).
We argue similarly to \cite[\S4]{Borodin2007}.
\subsection{Doubly symmetric functions}\label{s2.1}
In this subsection we briefly recall the definitions
of the algebra of doubly symmetric functions and some related objects.
Exact definitions and proofs concerning this subject
can be found, e.g., in the paper by V.~Ivanov \cite{IvanovNewYork3517-3530}.
See also \cite[Ch. III, \S8]{Macdonald1995} and~\cite{Stembridge1985}.
Let $\Lambda$ denote the algebra
of real symmetric functions
in (formal) variables $y_1,y_2,\dots$.
This algebra is freely generated (as a commutative unital algebra)
by Newton power sums
$p_k:=\sum_{i=1}^{\infty}y_i^k$, $k\in\mathbb{Z}_{>0}$.
We write $\Lambda=\mathbb{R}\left[ p_1,p_2,p_3,\dots \right]$.
By $\Gamma$ we denote the subalgebra of $\Lambda$ generated by the odd Newton power
sums, $\Gamma=\mathbb{R}\left[ p_1,p_3,p_5,\dots \right]$.
We call $\Gamma$ the {\em{}algebra of doubly symmetric functions\/}.
\begin{rmk}\rm{}\label{p2.1a}
The subalgebra of $\Lambda$
generated by the odd Newton power sums was studied by various authors.
However, there is no common notation for it. For example, in
\cite{IvanovNewYork3517-3530} and
\cite{Macdonald1995}
it is denoted by $\Gamma$, in \cite{Hoffman1992}~---~by~$\Delta$,
in the papers \cite{Stembridge1989,Stembridge1992}
--- by $\Omega$, and in a recent paper \cite{Berele2009} --- by~$\mathcal{D}$.
In \cite{IvanovNewYork3517-3530}
it is called the algebra of supersymmetric functions, and in \cite{Berele2009}
--- the algebra of doubly symmetric functions.
In the present paper we adopt the latter term and
the notation $\Gamma$ for this algebra.
We do not use the term ``supersymmetric functions'' because it was used
by J.~Stembridge \cite{Stembridge1985} in a different sense.
Namely, he studied the unital algebra generated by
the following {\em{}supersymmetric power sums\/}\footnote{The
definitions and related discussions
can also be found in \cite{Macdonald1995}.} in two sets of variables $u_i$ and $v_j$:
\begin{equation*}
p_k(u_1,u_2,\dots;v_1,v_2,\dots)=\sum_{i=1}^{\infty}u_i^k-\sum_{j=1}^{\infty}v_j^k,\qquad
k=1,2,\dots.
\end{equation*}
The algebra $\Gamma$ defined above is generated by
supersymmetric power sums in variables
$\left\{ {y_1},{y_2},\dots \right\}$ and
$\left\{ -y_1,-y_2,\dots \right\}$.
Clearly, $\Gamma$ can also be viewed as a subalgebra of that supersymmetric algebra.
The algebra
$\Gamma$ consists of all $f\in\Lambda$ such that
for every $1\le i<j$ the expression
\begin{equation*}
f(y_1,\dots,y_{i-1},z,y_{i+1},\dots,y_{j-1},-z,y_{j+1},\dots)
\end{equation*}
does not depend on $z$ (here $z$ is another independent formal variable).\footnote{It
is clear that the odd Newton power sums satisfy this property
and the even ones do not. The fact that every $f\in\Lambda$ satisfying
this property is a polynomial in the odd Newton power sums
follows from \cite{Stembridge1985}.}
From \cite{Berele2009} it follows that $\Gamma$ can be viewed as the quotient
of $\Lambda$ by the ideal generated by all $s_\sigma-s_{\sigma'}$, where $s_\sigma$
is the ordinary Schur function, $\sigma$ runs over all ordinary partitions
and $\sigma'$ denotes the conjugate of the partition $\sigma$.
\end{rmk}
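The characterization of $\Gamma$ given in Remark \ref{p2.1a} is transparent on the generators: under the substitution $y_i=z$, $y_j=-z$ a power sum becomes
\begin{equation*}
p_k(y_1,\dots,z,\dots,-z,\dots)=z^k+(-z)^k+\sum_{l\ne i,j}y_l^k,
\end{equation*}
which does not depend on $z$ if and only if $k$ is odd.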
There is a natural filtration of the algebra $\Lambda$ by degrees of polynomials
in formal variables $y_i$. This filtration is
determined by setting $\deg p_k=k$, $k\in\mathbb{Z}_{>0}$.
The subalgebra $\Gamma\subset\Lambda$
inherits this filtration from $\Lambda$ and thus becomes a filtered algebra
with the filtration determined by setting $\deg p_{2m-1}=2m-1$, $m\in\mathbb{Z}_{>0}$.
More precisely,
$$
\Gamma=\bigcup_{m=0}^{\infty}\Gamma^{(m)},\qquad\Gamma^{(0)}\subset\Gamma^{(1)}\subset\Gamma^{(2)}\subset\ldots\subset \Gamma,
$$
where $\Gamma^{(m)}$ is the finite-dimensional subspace of $\Gamma$ consisting of
elements of degree $\le m$:
\begin{equation*}
\Gamma^{(0)}=\mathbb{R}1,\qquad \Gamma^{(m)}=\mathrm{span}\left\{ p_1^{r_1}p_3^{r_3}\dots\colon
r_1+3r_3+\dots\le m\right\},\quad m=1,2,\dots.
\end{equation*}
Finite products of the form
$p_1^{r_1}p_3^{r_3}\dots$
constitute a linear
basis for $\Gamma$ as a vector space over $\mathbb{R}$.
Every element
$p_1^{r_1}p_3^{r_3}\dots$ is homogeneous.
We will need two more linear bases for $\Gamma$,
of which one is also homogeneous and the other is not.
\begin{df}[Schur's $\mathcal{Q}$-functions]\rm{}\label{p2.2}
Let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell(\lambda)},0,0,\dots)$ be an arbitrary strict partition.
For every $n\ge\ell(\lambda)$ set
\begin{equation}\label{f20}
R_{\lambda\mid n}(y_1,\dots,y_n):=y_1^{\lambda_1}\dots y_{\ell(\lambda)}^{\lambda_{\ell(\lambda)}}
\cdot\prod_{\textstyle\genfrac{}{}{0pt}{}{i\le\ell(\lambda)}{i<j\le n}}
\frac{y_i+y_j}{y_i-y_j}.
\end{equation}
If $n\ge\ell(\lambda)$, define\footnote{Here $\mathfrak{S}_n$ is the symmetric group.}
\begin{equation}\label{f21}
\mathcal{Q}_\lambda(y_1,\dots,y_n,0,\dots):=
\frac{2^{\ell(\lambda)}}{(n-\ell(\lambda))!}\sum_{w\in\mathfrak{S}_n}
R_{\lambda\mid n}(y_{w(1)},\dots,y_{w(n)}),
\end{equation}
and
$\mathcal{Q}_\lambda(y_1,\dots,y_n,0,\dots):=0$ otherwise.
The expressions
$\mathcal{Q}_\lambda(y_1,\dots,y_n,0,\dots)$, $n\in\mathbb{Z}_{>0}$,
define a doubly symmetric function
$\mathcal{Q}_\lambda\in\Gamma$.\footnote{This follows from \cite{IvanovNewYork3517-3530}. Note
that that paper deals with
Schur's $\mathcal{P}$-functions. They are linear multiples of the
$\mathcal{Q}$-functions: $\mathcal{P}_\lambda=2^{-\ell(\lambda)}\mathcal{Q}_\lambda$,
$\lambda\in\mathbb{S}$.} It is called {\em{}Schur's $\mathcal{Q}$-function\/}.
\end{df}
Each $\mathcal{Q}_\lambda$, $\lambda\in\mathbb{S}$, is a homogeneous element of degree $|\lambda|$. The system
$\left\{ \mathcal{Q}_\lambda \right\}_{\lambda\in\mathbb{S}}$ is a linear basis for the algebra $\Gamma$ over $\mathbb{R}$.
\begin{df}[Factorial Schur's $\mathcal{Q}$-functions]\rm{}\label{p2.3}
The factorial analogues
of Schur's $\mathcal{Q}$-functions are defined as in (\ref{f20})--(\ref{f21}),
with $R_{\lambda\mid n}$ replaced by
\begin{equation*}
R_{\lambda\mid n}^*(y_1,\dots,y_n):=y_1^{\downarrow\lambda_1}\dots y_{\ell(\lambda)}^{\downarrow\lambda_{\ell(\lambda)}}
\cdot\prod_{\textstyle\genfrac{}{}{0pt}{}{i\le\ell(\lambda)}{i<j\le n}}
\frac{y_i+y_j}{y_i-y_j}.
\end{equation*}
Here $y_i^{\downarrow \lambda_i}$
is the decreasing factorial power defined as
$a^{\downarrow k}:=a(a-1)\dots(a-k+1)$, $k\in\mathbb{Z}_{>0}$, $a^{\downarrow 0}:=1$.
The functions $\mathcal{Q}^*_\lambda$, $\lambda\in\mathbb{S}$, are called {\em{}factorial Schur's $\mathcal{Q}$-functions\/}.
\end{df}
For all $\lambda\in\mathbb{S}$ we have
$\mathcal{Q}_\lambda^*=\mathcal{Q}_\lambda+g$,
where $g$ is a doubly symmetric function with $\deg g<|\lambda|=\deg\mathcal{Q}_\lambda$.
It follows that the system
$\left\{ \mathcal{Q}^*_\lambda \right\}_{\lambda\in\mathbb{S}}$ is also a linear
basis for $\Gamma$ as a vector space over $\mathbb{R}$.
\subsection{A representation of $\mathfrak{sl}(2,\mathbb{C})$}\label{s2.2}
Denote by $\mathrm{Fun}_0(\mathbb{S})$ the algebra of finitely supported real functions
on the Schur graph $\mathbb{S}$ with pointwise operations.
A natural basis for $\mathrm{Fun}_0(\mathbb{S})$ is $\left\{ \varepsilon_\mu \right\}_{\mu\in\mathbb{S}}$,
where
\begin{equation*}
\varepsilon_\mu(\lambda)=\left\{
\begin{array}{ll}
1,&\mbox{if $\lambda=\mu$};\\
0,&\mbox{otherwise}.
\end{array}
\right.
\end{equation*}
Let $E$, $F$, and $H$ be the following operators in $\mathrm{Fun}_0(\mathbb{S})$
which are similar to Kerov's operators (see \cite{Okounkov2001a}
for the definition):\footnote{Here $\delta(\cdot)$
is the Kronecker delta.}
\begin{equation*}
\left.
\begin{array}{rcl}
E\varepsilon_\lambda&:=& \displaystyle\sum_{x\in X(\lambda)}2^{-\delta(x)}
\left( x(x+1)+\alpha \right)\varepsilon_{\lambda+\square(x)};\\
F\varepsilon_\lambda&:=& \displaystyle
-\sum_{y\in Y(\lambda)}\varepsilon_{\lambda-\square(y)};\\
H\varepsilon_\lambda&:=& \displaystyle
\left( \frac\al2+2|\lambda| \right)\varepsilon_\lambda.
\end{array}
\right.
\end{equation*}
\begin{lemma}\label{p2.4}
For all $\alpha\in\mathbb{R}$
these operators satisfy the commutation relations
\begin{equation}\label{f30}
\left[ E,H \right]=-2E,\qquad
\left[ F,H \right]=2F,\qquad
\left[ E,F \right]=H.
\end{equation}
\end{lemma}
\begin{proof}
The proof uses the results of
\S\ref{s20.1} and
is similar to the proof of Lemma 4.2 of the paper \cite{Borodin2007}.
\end{proof}
\begin{corollary}\rm{}\label{p2.5}
The correspondence
\begin{equation*}
\left(
\begin{array}{cc}
0&1\\0&0
\end{array}
\right)\to E,\qquad
\left(
\begin{array}{cc}
0&0\\1&0
\end{array}
\right)\to F,\qquad
\left(
\begin{array}{cc}
1&0\\0&-1
\end{array}
\right)\to H
\end{equation*}
defines a representation of the Lie algebra $\mathfrak{sl}(2,\mathbb{C})$
in the space $\mathrm{Fun}_0(\mathbb{S})$.
\end{corollary}
\begin{lemma}\label{p2.6}
Fix $N\in\mathbb{Z}_{>0}$.
Let
$V_N$
be the finite-dimensional subspace of $\mathrm{Fun}_0(\mathbb{S})$
spanned by the basis vectors $\varepsilon_\lambda$ with $\lambda_1\le N$.\footnote{In fact,
$\dim V_N=2^N$, because strict partitions with $\lambda_1\le N$ are in bijection with subsets of $\left\{ 1,\dots,N \right\}$.}
If $\alpha=-N(N+1)$, then $V_N$ is
invariant under the action of the operators $E$, $F$, and $H$, and
the action of $\mathfrak{sl}(2,\mathbb{C})$ in $V_N$ defined in Corollary \ref{p2.5}
lifts to a representation of the group $SL(2,\mathbb{C})$ in $V_N$.
\end{lemma}
\begin{proof}
This can be proved exactly as Lemma 4.3 of the paper \cite{Borodin2007}.
\end{proof}
\subsection{The action of $T_n$ on factorial Schur's $\mathcal{Q}$-functions}\label{s2.3}
For every set $\mathfrak{X}$, denote by $\mathrm{Fun}(\mathfrak{X})$
the algebra of real-valued
functions on $\mathfrak{X}$
with pointwise operations.
Consider an embedding of the algebra $\Gamma$ described in \S\ref{s2.1}
into the algebra $\mathrm{Fun}(\mathbb{S})$.
This embedding is defined on the generators
of $\Gamma$:
\begin{equation*}
p_k\to p_k(\lambda):=\sum_{i=1}^{\ell(\lambda)}\lambda_i^k,\qquad k=1,3,5,\dots.
\end{equation*}
Thus, to every element $f\in\Gamma$ corresponds a function from $\mathrm{Fun}(\mathbb{S})$.
Denote this function by $f(\lambda)$.
We identify the (abstract) algebra $\Gamma$
with its image under this embedding, that is, with
the algebra of functions
$\left\{ f(\cdot)\in\mathrm{Fun}(\mathbb{S})\colon f\in\Gamma \right\}$.
\begin{rmk}\label{90.9}\rm{}
In \cite{Borodin2007} the role of $\Gamma$ is
played by the algebra generated by supersymmetric
power sums (see Remark \ref{p2.1a}) in
$a_i$ and $-b_j$, where $a_i$ and $b_j$ are the modified Frobenius coordinates
of an ordinary Young diagram.
The paper \cite{Olshanski2009} deals
with Jack deformations of these power sums.
\end{rmk}
For any $f\in\Gamma$, by $f_n$ denote the restriction of the function $f(\cdot)$
to $\mathbb{S}_n\subset\mathbb{S}$. It can be easily checked that the algebra
$\Gamma\subset\mathrm{Fun}(\mathbb{S})$
separates points of $\mathbb{S}$.
It follows that the functions of the form $f_n$, with $f\in\Gamma$,
exhaust the (finite-dimensional) space $\mathrm{Fun}(\mathbb{S}_n)$,
$n\in\mathbb{Z}_{\ge0}$.
Our aim in this section is to prove the following
\begin{thm}\label{p2.7}
Let $T_n\colon\mathrm{Fun}(\mathbb{S}_n)\to\mathrm{Fun}(\mathbb{S}_n)$, $n\in\mathbb{Z}_{>0}$,
be the operator from Definition \ref{p10.6}.
Its action on the functions $(\mathcal{Q}_\mu^*)_n$, $\mu\in\mathbb{S}$,
is as follows:
\begin{equation}\label{f32}
\left.
\begin{array}{l}
\displaystyle
(T_n-{\bf1})(\mathcal{Q}_\mu^*)_n=\frac1{(n+1)(n+\alpha/2)}\Bigg[
-|\mu|\left( |\mu|+\alpha/2-1 \right)(\mathcal{Q}_\mu^*)_n\\\qquad\qquad\qquad\qquad\ \displaystyle+
(n-|\mu|+1)\sum_{y\in Y(\mu)}\left( y(y+1)+\alpha \right)
(\mathcal{Q}_{\mu-\square(y)}^*)_n
\Bigg],
\end{array}
\right.
\end{equation}
where ${\bf1}$ denotes the identity operator.
\end{thm}
\begin{rmk}\rm{}\label{p2.8}
The above theorem states that, for every $\mu\in\mathbb{S}$, the function $(T_n-{\bf1})(\mathcal{Q}_\mu^*)_n$
is a linear combination of the function $(\mathcal{Q}_\mu^*)_n$
and the functions of the form $(\mathcal{Q}_\varkappa^*)_n$,
where $\varkappa$ runs over all shifted diagrams
that can be obtained from $\mu$ by deleting one box.
Recall (\S\ref{s20.1}) that these diagrams are indexed by the set $Y(\mu)$.
\end{rmk}
The proof of Theorem \ref{p2.7} uses the technique
from \cite[\S4]{Borodin2007}. We do not repeat all the details;
in the rest of the section we only outline the proof.
Fix arbitrary $n\in\mathbb{Z}_{>0}$ and $\alpha\in(0,+\infty)$.
We write $T_n$ as the composition of ``down''
$D_{n+1,n}\colon\mathrm{Fun}(\mathbb{S}_{n})\to\mathrm{Fun}(\mathbb{S}_{n+1})$
and ``up''
$U_{n,n+1}\colon\mathrm{Fun}(\mathbb{S}_{n+1})\to\mathrm{Fun}(\mathbb{S}_{n})$
operators acting on functions:
\begin{equation}\label{f33}
\left.
\begin{array}{rcll}
\left( D_{n+1,n}f_n \right)(\lambda)&:=&
\displaystyle
\sum_{\mu\colon\mu\nearrow\lambda}
p^\downarrow(\lambda,\mu)f_n(\mu),&\lambda\in\mathbb{S}_{n+1};\\
\left( U_{n,n+1}f_{n+1} \right)(\nu)&:=&
\displaystyle
\sum_{\varkappa\colon\varkappa\searrow\nu}
p^\uparrow_\alpha(\nu,\varkappa)f_{n+1}(\varkappa),&\nu\in\mathbb{S}_n.
\end{array}
\right.
\end{equation}
The operator $D_{n+1,n}$ is constructed using the down
transition function $p^\downarrow$ and does not depend
on the parameter $\alpha$.
The operator $U_{n,n+1}$ is constructed using the
up transition function $p^\uparrow_\alpha$
and therefore depends on the parameter $\alpha$.
\begin{rmk}\rm{}
These ``down'' and ``up'' operators act on functions.
They are adjoint to the corresponding operators
acting on measures. The latter act in accordance
with their names, for example, the operator
$D_{n+1,n}^*$
maps $\mathcal{M}(\mathbb{S}_{n+1})$ into $\mathcal{M}(\mathbb{S}_n)$,
where
$\mathcal{M}(\mathfrak{X})$ denotes the space of
measures on $\mathfrak{X}$.
\end{rmk}
It clearly follows from the definition of
the $n$th up/down Markov chain (\S\ref{s1.3}) that
$T_n=U_{n,n+1}\circ D_{n+1,n}\colon\mathrm{Fun}(\mathbb{S}_n)\to\mathrm{Fun}(\mathbb{S}_n)$.
We deal with the operators $D_{n+1,n}$ and $U_{n,n+1}$ separately.
\begin{lemma}[The operator $D$]\label{p2.9}
There exists a unique operator
$D\colon\Gamma\to\Gamma$ such that
\begin{equation*}
D_{n+1,n}f_n=\frac1{n+1}\left( Df \right)_{n+1}
\end{equation*}
for all $n\in\mathbb{Z}_{\ge0}$ and $f\in\Gamma$. In the basis
$\{\mathcal{Q}_\mu^*\}_{\mu\in\mathbb{S}}$ for the algebra $\Gamma$ this operator has the form
\begin{equation}\label{f34}
D\mathcal{Q}_\mu^*=(p_1-|\mu|)\mathcal{Q}_\mu^*.
\end{equation}
\end{lemma}
\begin{proof}
The proof is exactly the same as the proof of Theorem 4.1 (1) of the paper \cite{Borodin2007},
but instead of the facts about Frobenius-Schur functions we refer to the following formula
which is due to V.~Ivanov \cite{IvanovNewYork3517-3530}.
Let $|\lambda|=n$, $\mu\in\mathbb{S}$ and $|\mu|\le n$. Then
\begin{equation}\label{f36}
\frac{\mathsf{h}(\mu,\lambda)}{\mathsf{h}(\lambda)}=2^{-|\mu|}
\frac{(\mathcal{Q}_\mu^*)_n(\lambda)}{n(n-1)\dots(n-|\mu|+1)}.
\end{equation}
We also use the recurrence relations for
the function
$\mathsf{h}(\mu,\lambda)$ which directly
follow from its definition (\S\ref{s1.1}):
\begin{equation*}
\mathsf{h}(\mu,\nu)=\sum_{\lambda\colon\lambda\nearrow\nu}
\mathsf{h}(\mu,\lambda)\kappa(\lambda,\nu)
\qquad\mbox{for all $\mu,\nu\in\mathbb{S}$}.
\end{equation*}
The rest of the proof repeats that of \cite[Theorem 4.1 (1)]{Borodin2007}.
\end{proof}
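As a simple consistency check, take $\mu=\varnothing$, so that $\mathcal{Q}_\varnothing^*\equiv1$ and (\ref{f34}) gives $D1=p_1$. This agrees with (\ref{f33}):
\begin{equation*}
\left( D_{n+1,n}1_n \right)(\lambda)=\sum_{\mu\colon\mu\nearrow\lambda}p^\downarrow(\lambda,\mu)=1=\frac{(p_1)_{n+1}(\lambda)}{n+1}\qquad
\mbox{for all $\lambda\in\mathbb{S}_{n+1}$}.
\end{equation*}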
\begin{lemma}[The operator $U$]\label{p2.10}
For every $\alpha\in(0,+\infty)$ there exists a unique operator
$U\colon\Gamma\to\Gamma$ depending on $\alpha$ such that
\begin{equation*}
U_{n,n+1}f_{n+1}=\frac{1}{n+\alpha/2}(Uf)_n
\end{equation*}
for all $n\in\mathbb{Z}_{\ge0}$ and $f\in\Gamma$.
In the basis
$\{\mathcal{Q}_\mu^*\}_{\mu\in\mathbb{S}}$ for the algebra $\Gamma$ this operator has the form
\begin{equation}\label{f37}
\left.
\begin{array}{l}
\displaystyle
U\mathcal{Q}_\mu^*=
\left( p_1+|\mu|+\frac\al2 \right)\mathcal{Q}_\mu^*+
\sum_{y\in Y(\mu)}\big(y(y+1)+\alpha\big)\mathcal{Q}_{\mu-\square(y)}^*.
\end{array}
\right.
\end{equation}
\end{lemma}
\begin{proof}
The proof is similar to that of Theorem 4.1 (2) of the paper \cite{Borodin2007}.
We must prove that
\begin{equation}\label{f40}
\left.
\begin{array}{l}
\displaystyle
\left( n+\frac\al2 \right)
(U_{n,n+1}(\mathcal{Q}_\mu^*)_{n+1})(\lambda)=
\left( n+k+\frac\al2 \right)(\mathcal{Q}_\mu^*)_n(\lambda)
\\\displaystyle\qquad\qquad\qquad
+
\sum_{y\in Y(\mu)}
\big(y(y+1)+\alpha\big)(\mathcal{Q}_{\mu-\square(y)}^*)_n(\lambda)
\end{array}
\right.
\end{equation}
for all $\mu,\lambda\in\mathbb{S}$ such that $|\mu|=k$
and $|\lambda|=n\ge k$.
If $|\mu|=0$, that is, $\mu$ is the empty partition,
then $\mathcal{Q}_\mu^*\equiv1$ and (\ref{f40}) clearly holds.
Now let $|\mu|=k\ge1$. Using (\ref{f7}), (\ref{f36}) and the definition of
$U_{n,n+1}$ one can
reduce (\ref{f40}) to the following equivalent combinatorial
identity:
\begin{equation*}
\begin{array}{l}
\displaystyle
\sum_{x\in X(\lambda)}\big(x(x+1)+\alpha\big)\mathsf{h}(\mu,\lambda+\square(x))\\\displaystyle\qquad
=
2\left( n+k+\frac\al2 \right)\left( n-k+1 \right)\mathsf{h}(\mu,\lambda)\\\displaystyle\qquad\qquad+
\sum_{y\in Y(\mu)}\big(y(y+1)+\alpha\big)\mathsf{h}(\mu-\square(y),\lambda),
\end{array}
\end{equation*}
where $|\lambda|=n$ and $|\mu|=k\le n$.
This combinatorial identity is verified exactly as the corresponding
identity from the proof of \cite[Theorem 4.1 (2)]{Borodin2007}.
In our case one must use the results formulated in \S\ref{s2.2}.
\end{proof}
Theorem \ref{p2.7} now follows from Lemmas \ref{p2.9} and \ref{p2.10}
and the fact that
\begin{equation}\label{f41}
(T_n-{\bf1})f_n=-f_n+
\frac{(UDf)_n}{(n+1)(n+\alpha/2)},\qquad f\in\Gamma.
\end{equation}
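The identity (\ref{f41}) is immediate from the lemmas: for every $f\in\Gamma$,
\begin{equation*}
T_nf_n=U_{n,n+1}\left( D_{n+1,n}f_n \right)=\frac{1}{n+1}\,U_{n,n+1}(Df)_{n+1}=\frac{(UDf)_n}{(n+1)(n+\alpha/2)}.
\end{equation*}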
\section{Doubly symmetric functions\\ on shifted Young diagrams}\label{s3}
In this section we study the algebra $\Gamma\subset\mathrm{Fun}(\mathbb{S})$
(defined in \S\ref{s2}) in more detail.
Let $\lambda$ be an arbitrary shifted Young diagram and $u$ be a complex variable.
By definition, put
\begin{equation*}
\phi(u;\lambda):=\prod_{i=1}^{\infty}\frac{u+\lambda_i}{u-\lambda_i}.
\end{equation*}
Note that this product is actually finite, because any strict partition $\lambda$
has only finitely many nonzero parts.
Note also that
$\phi(u;\lambda)$ is a rational function in $u$
taking value $1$ at $u=\infty$.
\begin{prop}\label{p3.1}
The algebra $\Gamma\subset\mathrm{Fun}(\mathbb{S})$
coincides with the commutative
unital subalgebra of $\mathrm{Fun}(\mathbb{S})$
generated by the Taylor expansion coefficients of $\phi(u;\lambda)$
(or, equivalently, of $\log\phi(u;\lambda)$) at $u=\infty$ with respect to $u^{-1}$.
\end{prop}
\begin{proof}
The Taylor expansion of $\log\phi(u;\lambda)$ at $u=\infty$ has the form
\begin{equation}\label{f44}
\log\phi(u;\lambda)=2\sum_{k\ge1\ \mbox{odd}}\frac{p_k(\lambda)}{k}u^{-k},
\end{equation}
where $p_k(\lambda)=\sum_{i=1}^{\ell(\lambda)}\lambda_i^{k}$ are the Newton power sums.
The algebra $\Gamma$ is freely generated by the functions $p_1,p_3,\ldots\in\mathrm{Fun}(\mathbb{S})$,
see \S\ref{s2}.
\end{proof}
By definition, put\footnote{Here $v$ is an independent
complex variable.}
\begin{equation*}
\Phi(v;\lambda):=\prod_{i=1}^{\infty}\frac{v-\lambda_i(\lambda_i-1)}{v-\lambda_i(\lambda_i+1)}.
\end{equation*}
The product here is also actually finite.
Clearly, $\Phi(v;\lambda)$
is a rational function in $v$ taking value $1$ at $v=\infty$.
It can be readily verified that
\begin{equation*}
\Phi(u^2-u;\lambda)=\frac{\phi(u-1;\lambda)}{\phi(u;\lambda)}.
\end{equation*}
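Indeed, for every $i$
\begin{equation*}
(u-1+\lambda_i)(u-\lambda_i)=(u^2-u)-\lambda_i(\lambda_i-1),\qquad
(u-1-\lambda_i)(u+\lambda_i)=(u^2-u)-\lambda_i(\lambda_i+1),
\end{equation*}
and the desired identity follows by taking the product over all $i$.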
\begin{df}\rm{}\label{p3.2}
Let $\mathbf{p}_m(\cdot), \mathbf{g}_m(\cdot), \mathbf{\hat g}_m(\cdot)\in\mathrm{Fun}(\mathbb{S})$, $m\in\mathbb{Z}_{>0}$,
be the following Taylor expansion coefficients at $v=\infty$ with respect to $v^{-1}$:
\begin{equation*}
\left.
\begin{array}{rcl}
\log\Phi(v;\lambda)&=&\displaystyle
\sum_{m=1}^{\infty}\frac{\mathbf{p}_m(\lambda)}{m}v^{-m};\\
\Phi(v;\lambda)&=& \displaystyle 1+\sum_{m=1}^{\infty}\mathbf{g}_m(\lambda)v^{-m};\\
\displaystyle\frac1{\Phi(v;\lambda)}&=& \displaystyle
1-\sum_{m=1}^{\infty}\mathbf{\hat g}_m(\lambda)v^{-m}.
\end{array}
\right.
\end{equation*}
\end{df}
Recall that the algebra $\Gamma$ has a natural filtration (defined in \S\ref{s2.1}) which is
determined by setting
\begin{equation}\label{f45}
\deg p_{2m-1}=2m-1,\qquad m=1,2,\dots.
\end{equation}
\begin{prop}\label{p3.3}
The functions $\mathbf{p}_m(\lambda)$ belong to the algebra $\Gamma$.
More precisely,
\begin{equation*}
\mathbf{p}_m(\lambda)=2m\cdot p_{2m-1}(\lambda)+\dots,\qquad m\in\mathbb{Z}_{>0},
\end{equation*}
where dots stand for lower degree terms
in the algebra $\Gamma$, which are a
linear combination of
$p_{2l-1}(\lambda)$, where $1\le l\le m-1$.
\end{prop}
\begin{proof}
On one hand,
by the definition of $\Phi$ and by (\ref{f44}) we have
\begin{equation*}
\left.
\begin{array}{rcl}\displaystyle
\log\Phi(u^2-u;\lambda)&=& \log\phi(u-1;\lambda)-\log\phi(u;\lambda)\\&=&\displaystyle
2\sum_{k=1}^{\infty}\frac{p_{2k-1}(\lambda)}{2k-1}
\left( \frac1{(u-1)^{2k-1}}-\frac1{u^{2k-1}} \right)
\end{array}
\right.
\end{equation*}
for all $\lambda\in\mathbb{S}$.
Observe that
\begin{equation*}
\frac1{(u-1)^{2k-1}}-\frac1{u^{2k-1}}=
(2k-1)u^{-2k}\left( 1+\frac{k}{u}+\dots \right),
\end{equation*}
where $k\in\mathbb{Z}_{>0}$ and dots stand for terms containing $u^{-2},u^{-3},\dots$.
On the other hand, by Definition \ref{p3.2} we have
\begin{equation*}
\log\Phi(u^2-u;\lambda)=\sum_{m=1}^{\infty}\frac{\mathbf{p}_m(\lambda)}{m}\frac1{(u^2-u)^m}
\end{equation*}
for all $\lambda\in\mathbb{S}$. Observe that
\begin{equation*}
\frac1{(u^2-u)^{m}}=u^{-2m}\left( 1-\frac mu+\dots \right),
\end{equation*}
where $m\in\mathbb{Z}_{>0}$ and again dots stand for
terms containing $u^{-2},u^{-3},\dots$.
Thus, we get the following identity:
\begin{equation*}
\begin{array}{l}\displaystyle
2\sum_{k=1}^{\infty}u^{-2k}p_{2k-1}(\lambda)\left( 1+\frac ku+\dots \right)
=\sum_{m=1}^{\infty}u^{-2m}\frac{\mathbf{p}_m(\lambda)}{m}\left( 1-\frac{m}{u}+\dots \right).
\end{array}
\end{equation*}
Comparing the coefficients of $u^{-2m}$ on both sides, we get the claim.
\end{proof}
\begin{prop}\label{p3.4}
We have\footnote{Here and below
we sometimes omit the argument $\lambda$ to shorten the notation.}
\begin{equation*}
\mathbf{g}_1=\mathbf{\hat g}_1=\mathbf{p}_1
\end{equation*}
and
\begin{equation*}
k\mathbf{g}_k=\mathbf{p}_k+\mathbf{p}_{k-1}\mathbf{g}_1+\dots+\mathbf{p}_1\mathbf{g}_{k-1},\qquad
\mathbf{\hat g}_k=\mathbf{g}_k-\mathbf{g}_{k-1}\mathbf{\hat g}_1-\dots-\mathbf{g}_1\mathbf{\hat g}_{k-1}
\end{equation*}
for all $k=2,3,\dots$.
\end{prop}
\begin{proof}
The technique of this proof is similar to \cite[Ch. I, \S2]{Macdonald1995}.
Let $w$ be an independent variable.
Observe that
\begin{equation*}
\sum_{m=1}^{\infty}\frac{\mathbf{p}_m(\lambda)}mw^m=\log\left( 1+\sum_{k=1}^{\infty}\mathbf{g}_k(\lambda)w^k \right).
\end{equation*}
Taking $d/dw$ of both sides
and comparing the coefficients of $w^{k-1}$, we get the desired relation between
the $\mathbf{p}_k$'s and the $\mathbf{g}_k$'s.
To prove the remaining
relation between $\mathbf{g}_k$'s and $\mathbf{\hat g}_k$'s observe that
\begin{equation*}
\left( 1+\sum_{k=1}^{\infty}\mathbf{g}_k(\lambda)w^k \right)
\left( 1-\sum_{k=1}^{\infty}\mathbf{\hat g}_k(\lambda)w^k \right)=1.
\end{equation*}
This concludes the proof.
\end{proof}
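For instance, for $k=2$ these relations read
\begin{equation*}
2\mathbf{g}_2=\mathbf{p}_2+\mathbf{p}_1\mathbf{g}_1,\qquad
\mathbf{\hat g}_2=\mathbf{g}_2-\mathbf{g}_1\mathbf{\hat g}_1,
\end{equation*}
whence, using $\mathbf{g}_1=\mathbf{\hat g}_1=\mathbf{p}_1$,
\begin{equation*}
\mathbf{g}_2=\frac{\mathbf{p}_2+\mathbf{p}_1^2}{2},\qquad
\mathbf{\hat g}_2=\frac{\mathbf{p}_2-\mathbf{p}_1^2}{2}.
\end{equation*}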
\begin{corollary}\label{p3.5}
Each of the three families
$\left\{ \mathbf{p}_1,\mathbf{p}_2,\mathbf{p}_3,\dots \right\}$,
$\left\{ \mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3,\dots \right\}$ and
$\left\{ \mathbf{\hat g}_1,\mathbf{\hat g}_2,\mathbf{\hat g}_3,\dots \right\}$
is a system of algebraically independent
generators of the algebra $\Gamma$. Under the identification
of $\Gamma$ with any of the algebras of polynomials
\begin{equation*}
\mathbb{R}\left[ \mathbf{p}_1,\mathbf{p}_2,\dots \right],\quad
\mathbb{R}\left[ \mathbf{g}_1,\mathbf{g}_2,\dots \right]\quad\mbox{and}\quad
\mathbb{R}\left[ \mathbf{\hat g}_1,\mathbf{\hat g}_2,\dots \right],
\end{equation*}
the natural filtration (\ref{f45}) of $\Gamma$ is determined by setting
\begin{equation*}
\left.
\begin{array}{rcl}
\deg\mathbf{p}_m(\lambda)&=& 2m-1,\\
\deg\mathbf{g}_m(\lambda)&=& 2m-1,\\
\deg\mathbf{\hat g}_m(\lambda)&=& 2m-1,\qquad m\in\mathbb{Z}_{>0},
\end{array}
\right.
\end{equation*}
respectively.
\end{corollary}
\begin{prop}\label{p3.6}
Let $\lambda$ be
an arbitrary shifted Young diagram with Kerov interlacing coordinates
$\left[ X(\lambda);Y(\lambda) \right]$ (see \S\ref{s20.1}).
Then
\begin{equation*}
\Phi(v;\lambda)=
\frac{\prod_{y\in Y(\lambda)}(v-y(y+1))}{\prod_{x\in X'(\lambda)}(v-x(x+1))}=
v\cdot\mathcal{R}^\uparrow(v;\lambda).
\end{equation*}
Here the function $\mathcal{R}^\uparrow$ is defined by (\ref{f9.9}).
Recall that $X'(\lambda)=X(\lambda)\setminus\left\{ 0 \right\}$.
\end{prop}
\begin{proof}
This can be proved exactly as Proposition \ref{p1.10} using Remark \ref{p8.9}.
\end{proof}
Using this proposition, one can express
the functions $\mathbf{p}_m,\mathbf{g}_m,\mathbf{\hat g}_m$, $m\in\mathbb{Z}_{>0}$, in terms of the Kerov coordinates:
\begin{prop}\label{p3.7}
Let $\lambda\in\mathbb{S}$ and $m\in\mathbb{Z}_{>0}$. Then\footnote{Recall that the numbers
$\{ \theta_x^\uparrow(\lambda) \}_{x\in X(\lambda)}$ and
$\{ \theta_y^\downarrow(\lambda) \}_{y\in Y(\lambda)}$ were introduced in \S\ref{s20}.}
\begin{equation*}
\left.
\begin{array}{rcl}
\displaystyle
\mathbf{p}_m(\lambda)&=& \displaystyle
\sum_{x\in X(\lambda)}\left( x(x+1) \right)^{m}-
\sum_{y\in Y(\lambda)}\left( y(y+1) \right)^{m};\\
\mathbf{g}_m(\lambda)&=& \displaystyle
\sum_{x\in X(\lambda)}\theta_x^\uparrow(\lambda)\cdot\left( x(x+1) \right)^{m};\\
\mathbf{\hat g}_m(\lambda)&=& \displaystyle
\sum_{y\in Y(\lambda)}\theta_y^\downarrow(\lambda)\cdot
\left( y(y+1) \right)^{m-1}.
\end{array}
\right.
\end{equation*}
\end{prop}
\begin{proof}
The first claim is a straightforward consequence of Proposition \ref{p3.6}.
Let us prove the second claim.
On the one hand, from the definition of the numbers $\{\theta_x^{\uparrow}\}$
(see \S\ref{s20.2}) we have
\begin{equation*}
\left.
\begin{array}{l}\displaystyle
v\cdot\mathcal{R}^\uparrow(v;\lambda)=v\sum_{x\in X(\lambda)}\frac{\theta_x^\uparrow(\lambda)}{v-x(x+1)}
=\sum_{x\in X(\lambda)}\frac{\theta_x^\uparrow(\lambda)}{1-\frac{x(x+1)}v}\\
\displaystyle\qquad=
\sum_{x\in X(\lambda)}\theta_x^\uparrow(\lambda)\sum_{k=0}^{\infty}
\left( \frac{x(x+1)}v \right)^{k}
\\\displaystyle\qquad=
\sum_{k=0}^{\infty}v^{-k}\sum_{x\in X(\lambda)}\theta_x^\uparrow(\lambda)\cdot\left( x(x+1) \right)^{k}.
\end{array}
\right.
\end{equation*}
On the other hand, it follows from
Proposition \ref{p3.6} that
$v\cdot\mathcal{R}^\uparrow(v;\lambda)=\Phi(v;\lambda)$.
Using the definition of the functions $\mathbf{g}_m$ (Definition \ref{p3.2})
and comparing it to the above formula for $v\cdot\mathcal{R}^\uparrow(v;\lambda)$,
we get the second claim.
The third claim can be verified similarly.
\end{proof}
It follows from Propositions \ref{p1.5}, \ref{p3.4} and \ref{p3.7} that
\begin{equation}\label{f54}
\mathbf{p}_1(\lambda)=\mathbf{g}_1(\lambda)=\mathbf{\hat g}_1(\lambda)=2|\lambda|,\qquad \lambda\in\mathbb{S}.
\end{equation}
\begin{lemma}\label{p3.8}
Let $\lambda$ be an arbitrary
nonempty shifted Young diagram,
$x\in X(\lambda)$ and $y\in Y(\lambda)$.
Then
\begin{equation*}
\begin{array}{rcl}\displaystyle
\frac{\Phi(v;\lambda+\square(x))}{\Phi(v;\lambda)}&=& \displaystyle
\frac{(v-x(x+1))^{2}}{\left( v-x(x+1) \right)^{2}-2(v+x(x+1))}
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{rcl}\displaystyle
\frac{\Phi(v;\lambda-\square(y))}{\Phi(v;\lambda)}&=& \displaystyle
\frac{\left( v-y(y+1) \right)^{2}-2(v+y(y+1))}{(v-y(y+1))^{2}}.
\end{array}
\end{equation*}
\end{lemma}
\begin{proof}
This directly follows from Proposition \ref{p3.6}
and the definitions of the diagrams $\lambda+\square(x)$ and $\lambda-\square(y)$ (\S\ref{s20.1}).
\end{proof}
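Note that the quadratic polynomial in $v$ appearing in both formulas factors as
\begin{equation*}
\left( v-x(x+1) \right)^{2}-2\left( v+x(x+1) \right)=
\left( v-(x-1)x \right)\left( v-(x+1)(x+2) \right),
\end{equation*}
which agrees with the familiar mechanism of interlacing coordinates: adding a box at position $x$ removes the coordinate $x$ and creates the neighbouring coordinates $x-1$ and $x+1$ (cf. \S\ref{s20.1}).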
\section{The up and down operators\\ in differential form}\label{s4}
The aim of this section is to write the operators
$D$ and $U$ in the algebra $\Gamma$
(they were defined in Lemmas
\ref{p2.9} and \ref{p2.10}, respectively)
in differential form.
Here we use the results of \S\ref{s20} and \S\ref{s3}.
Our approach is inspired by the paper \cite{Olshanski2009}, but in our situation significant
modifications are required.
\subsection{Formulation of the theorem}\label{s4.1}
We identify $\Gamma$ with the polynomial algebra
$\mathbb{R}\left[ \mathbf{g}_1,\mathbf{g}_2,\dots \right]$.
Recall that $\Gamma$ is a filtered
algebra, and under this identification the filtration
is determined by setting (see Corollary \ref{p3.5})
$\deg\mathbf{g}_m=2m-1$, $m\in\mathbb{Z}_{>0}$.
\begin{df}\label{p4.1}\rm{}
We say that an operator $R\colon\Gamma\to\Gamma$ has degree $\le r$,
where $r\in\mathbb{Z}$, if
$\deg(Rf)\le\deg f+r$ for any $f\in\Gamma$.
\end{df}
\begin{rmk}\rm{}\label{p40.2}
Observe that any
operator in the algebra of polynomials
(in finitely or countably many variables)
can be written as a differential
operator with polynomial coefficients
--- a formal infinite sum of differential monomials.
This fact is well known and can be readily proved.
We do not need it but it is useful to keep it in mind while
reading the formulation and the proof of Theorem~\ref{p4.2}.
\end{rmk}
\begin{thm}\label{p4.2}
{\rm{}(1)\/}
The operator $D\colon\Gamma\to\Gamma$ defined in Lemma \ref{p2.9} has degree $1$ with respect
to the filtration of $\Gamma$ and has the form
\begin{equation*}
\left.
\begin{array}{l}
\displaystyle
D=\frac12\mathbf{g}_1+
\sum_{r,s\ge1}(2r-1)(2s-1)\mathbf{g}_{r+s-1}\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_s}
\\\displaystyle\qquad-
\sum_{r\ge1}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}+
\sum_{r,s\ge1}(r+s)\mathbf{g}_r\mathbf{g}_s\frac{\partial}{\partial\mathbf{g}_{r+s}}\\\displaystyle\qquad{}+{}
\mbox{\rm{}operators of degree $\le-2$};
\end{array}
\right.
\end{equation*}
{\rm{}(2)\/}
For any fixed $\alpha\in(0,+\infty)$
the operator $U\colon\Gamma\to\Gamma$ defined in Lemma \ref{p2.10} has degree $1$ with respect
to the filtration of $\Gamma$ and has the form
\begin{equation*}
\left.
\begin{array}{l}
\displaystyle
U=\frac12\mathbf{g}_1+\frac12\alpha+\alpha\frac{\partial}{\partial\mathbf{g}_1}+
\sum_{r,s\ge1}(2r-1)(2s-1)\mathbf{g}_{r+s-1}\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_s}
\\\displaystyle\qquad+
\sum_{r\ge1}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}+
\sum_{r,s\ge1}(r+s-1)\mathbf{g}_r\mathbf{g}_s\frac{\partial}{\partial\mathbf{g}_{r+s}}\\\displaystyle\qquad{}+{}
\mbox{\rm{}operators of degree $\le-2$}.
\end{array}
\right.
\end{equation*}
\end{thm}
\par\noindent{\em{}Scheme of proof.\/}
The functions $\mathbf{g}_k\in\Gamma$, $k\in\mathbb{Z}_{>0}$, generate the algebra $\Gamma$.
However, we will
not be dealing with actions of $D$ and $U$ on these generators.
Instead, we consider the products of the form
$\Phi(v_1;\lambda)\Phi(v_2;\lambda)\dots$,
where $v_1,v_2,\dots$
are independent complex variables in a finite number
(here $\Phi(v;\lambda)$ is defined in \S\ref{s3}).
It follows from Definition \ref{p3.2}
that products of the form
$\Phi(v_1;\lambda)\Phi(v_2;\lambda)\dots$
serve as generating series for the
various products of the generators, which in turn
constitute a linear basis for $\Gamma$.
Thus, we know the action of our operators if we
know how they transform such products.
It turns out that the transformation
of the products
$\Phi(v_1;\lambda)\Phi(v_2;\lambda)\dots$
can be written down in a closed form. From
this we can extract all the necessary information.
\subsection{Action of $D$ and $U$ on generating series}\label{s4.2}
It follows from Definition \ref{p3.2}
that for any finite collection of independent complex variables $v_1,v_2,\dots$
(we prefer not to indicate their number explicitly) we have
\begin{equation}\label{f55}
\Phi(v_1;\lambda)\Phi(v_2;\lambda)\ldots=\sum_{\rho}m_\rho(v_1^{-1},v_2^{-1},\dots)\mathbf{g}_\rho(\lambda),
\end{equation}
where the sum is taken over all ordinary partitions $\rho$ such that $\ell(\rho)$
does not exceed the number of $v_i$'s,
$m_\rho$ is the monomial symmetric function
and $\mathbf{g}_\rho=\mathbf{g}_{\rho_1}\dots\mathbf{g}_{\rho_{\ell(\rho)}}$.
It is convenient to set $\mathbf{g}_0:=1$.
\begin{rmk}\rm{}\label{p4.3}
Observe that
$m_\rho(v_1^{-1},v_2^{-1},\dots)$ vanishes if
$\ell(\rho)$ is greater than the number of $v_i$'s.
It follows that here and below
in the sums
similar to (\ref{f55})
we can
let $\rho$ run over all ordinary partitions.
\end{rmk}
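For example, with two variables $v_1,v_2$ the expansion (\ref{f55}) begins as
\begin{equation*}
\Phi(v_1;\lambda)\Phi(v_2;\lambda)=
1+\mathbf{g}_1(\lambda)\left( v_1^{-1}+v_2^{-1} \right)
+\mathbf{g}_2(\lambda)\left( v_1^{-2}+v_2^{-2} \right)
+\mathbf{g}_1^2(\lambda)\,v_1^{-1}v_2^{-1}+\dots,
\end{equation*}
in agreement with $m_{(1)}(v_1^{-1},v_2^{-1})=v_1^{-1}+v_2^{-1}$,
$m_{(2)}(v_1^{-1},v_2^{-1})=v_1^{-2}+v_2^{-2}$ and
$m_{(1,1)}(v_1^{-1},v_2^{-1})=v_1^{-1}v_2^{-1}$.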
We regard the LHS of (\ref{f55}) as a generating series for the
elements $\mathbf{g}_\rho$ that constitute a linear basis for $\Gamma$.
Thus, the action of an operator in $\Gamma$ on the LHS
is determined by its action on the functions $\mathbf{g}_\rho\in\Gamma$ in the
RHS.
Occasionally, it will be convenient to omit the
argument $\lambda$ in $\Phi(v;\lambda)$.
Recall from \S\ref{s2.3}
the notation $(\dots)_n$ for
the restriction of a function from $\Gamma\subset\mathrm{Fun}(\mathbb{S})$
to the subset $\mathbb{S}_n\subset\mathbb{S}$.
We start with the operators $U_{n,n+1}$ and $D_{n+1,n}$ defined by (\ref{f33}).
By the very definition of
$U_{n,n+1}$ we have
\begin{equation*}
\begin{array}{l}
\displaystyle
\left(
U_{n,n+1}
\Big(
\prod_l\Phi(v_l)
\Big)_{n+1}
\right)(\lambda)
\\\displaystyle\qquad
=
\sum_{x\in X(\lambda)}^{\phantom{A}}p_\alpha^\uparrow(\lambda,\lambda+\square(x))
\prod_l\Phi(v_l;\lambda+\square(x)),
\qquad\lambda\in\mathbb{S}_n.
\end{array}
\end{equation*}
Using (\ref{f14}) and Lemma \ref{p3.8}, we get (note that $|\lambda|=n$):
\begin{equation}\label{f56}
\begin{array}{l}
\displaystyle
\left(
(n+\alpha/2)U_{n,n+1}
\Big(
\prod_l\Phi(v_l)
\Big)_{n+1}\right)(\lambda)
=F^\uparrow(v_1,v_2,\dots;\lambda)\cdot
\prod_l\Phi(v_l;\lambda),
\end{array}
\end{equation}
where
\begin{equation}\label{f57}
\begin{array}{l}\displaystyle
F^\uparrow(v_1,v_2,\dots;\lambda)
\\\displaystyle\qquad:=\sum_{x\in X(\lambda)}\frac{x(x+1)+\alpha}2
\prod_l\frac{(v_l-x(x+1))^2}
{(v_l-x(x+1))^2-2(v_l+x(x+1))}
\theta_x^\uparrow(\lambda).
\end{array}
\end{equation}
Likewise, for the operator $D_{n+1,n}$ we have
\begin{equation*}
\begin{array}{l}
\displaystyle
\left(
D_{n+1,n}
\Big(
\prod_l\Phi(v_l)
\Big)_{n}
\right)(\lambda)
\\\displaystyle\qquad=
\sum_{y\in Y(\lambda)}^{\phantom{A}}
p^\downarrow(\lambda,\lambda-\square(y))
\prod_l\Phi(v_l;\lambda-\square(y)),
\qquad\lambda\in\mathbb{S}_{n+1}.
\end{array}
\end{equation*}
Using Proposition \ref{p1.12} and Lemma \ref{p3.8}, we get (note that now $|\lambda|=n+1$):
\begin{equation}\label{f58}
\begin{array}{l}
\displaystyle
\left(
(n+1)D_{n+1,n}
\Big(
\prod_l\Phi(v_l)
\Big)_{n}
\right)(\lambda)
=F^\downarrow(v_1,v_2,\dots;\lambda)\cdot
\prod_l\Phi(v_l;\lambda),
\end{array}
\end{equation}
where
\begin{equation}\label{f59}
\left.
\begin{array}{l}
\displaystyle
F^\downarrow(v_1,v_2,\dots;\lambda):=\sum_{y\in Y(\lambda)}\frac12
\prod_l\frac{(v_l-y(y+1))^2-2(v_l+y(y+1))}
{(v_l-y(y+1))^2}
\theta_y^\downarrow(\lambda).
\end{array}
\right.
\end{equation}
\begin{lemma}\label{p4.4}
As functions of $\lambda$, both $F^\uparrow(v_1,v_2,\dots;\lambda)$
and $F^\downarrow(v_1,v_2,\dots;\lambda)$ are elements of
the algebra $\Gamma$.
More precisely, both expressions can be viewed as
elements of $\Gamma\big[ [v_1^{-1},v_2^{-1},\dots] \big]$.
\end{lemma}
\begin{proof}
Observe that the products over $l$ in (\ref{f57}) and (\ref{f59})
can be viewed as elements of
$\mathbb R[x(x+1)]\big[ [v_1^{-1},v_2^{-1},\dots] \big]$
and
$\mathbb R[y(y+1)]\big[ [v_1^{-1},v_2^{-1},\dots] \big]$,
respectively.\footnote{Here and below $\mathbb{R}\left[ z(z+1) \right]$ denotes the
algebra of polynomials in $z(z+1)$.}
Moreover, if
$f\left( x(x+1) \right)$
is a polynomial in $x(x+1)$,
then the expression
$$
\sum_{x\in X(\lambda)}
f\left( x(x+1) \right)
\theta_x^\uparrow(\lambda)
$$
as a function of $\lambda$ belongs to $\Gamma$ (this follows from Proposition \ref{p3.7}).
Letting $f$ be the corresponding
formal power series in $v_1^{-1}, v_2^{-1},\dots$, we
get the claim about $F^\uparrow(v_1,v_2,\dots;\lambda)$.
The remaining claim about $F^\downarrow(v_1,v_2,\dots;\lambda)$ is verified similarly.
\end{proof}
Now we proceed to the operators $D$ and $U$ in the algebra $\Gamma$
defined in Lemmas \ref{p2.9} and \ref{p2.10}, respectively.
Using these Lemmas, we rewrite (\ref{f56}) and (\ref{f58}) as
\begin{equation}\label{f60}
\begin{array}{rcl}
U(\Phi(v_1)\Phi(v_2)\dots)&=& F^\uparrow(v_1,v_2,\dots)\Phi(v_1)\Phi(v_2)\dots;\\
D(\Phi(v_1)\Phi(v_2)\dots)&=& F^\downarrow(v_1,v_2,\dots)\Phi(v_1)\Phi(v_2)\dots.
\end{array}
\end{equation}
These formulas contain in a compressed form
all the information about the action
of $U$ and $D$ on the basis elements
$\mathbf{g}_\rho$, where $\rho$ runs over all ordinary partitions.
Our next step is to extract from
(\ref{f60}) some explicit expressions
for $U\mathbf{g}_\rho$ and $D\mathbf{g}_\rho$
using (\ref{f55}) and Proposition \ref{p3.7}.
\subsection{Action of $U$ and $D$ in the basis $\left\{ \mathbf{g}_\rho \right\}$}\label{s4.3}
Let us first introduce some extra notation.
Let $v$ and $\xi$ be independent variables.
Consider the following expansions at $v=\infty$ with respect to $v^{-1}$:
\begin{equation}\label{f62}
\begin{array}{lr}
\displaystyle
\frac{(v-\xi)^2}{(v-\xi)^2-2(v+\xi)}=
\sum_{s=0}^\infty a_s(\xi)v^{-s},&
a_s\in\mathbb R[\xi];
\\\displaystyle\qquad\qquad
\frac{(v-\xi)^2-2(v+\xi)}{(v-\xi)^2}=
\sum_{s=0}^\infty b_s(\xi)v^{-s},&
b_s\in\mathbb R[\xi].
\end{array}
\end{equation}
\begin{lemma}\label{p4.5}
We have
\begin{equation*}
a_0(\xi)=b_0(\xi)\equiv1,
\end{equation*}
and $a_s(\xi)$ and $b_s(\xi)$ have degree $s-1$ for $s\ge1$.
More precisely,
\begin{equation*}
b_s(\xi)=-2(2s-1)\xi^{s-1},\qquad s\ge1,
\end{equation*}
and $a_s(\xi)$ has the form
\begin{equation*}
a_s(\xi)=2(2s-1)\xi^{s-1}+\mbox{\rm{}terms of degree $\le (s-2)$ in the variable $\xi$},\qquad s\ge1.
\end{equation*}
\end{lemma}
\begin{proof}
First, we compute $b_s(\xi)$ explicitly:
\begin{equation*}
\begin{array}{l}
\displaystyle
\frac{(v-\xi)^2-2(v+\xi)}{(v-\xi)^2}=
1-2\frac{v+\xi}{(v-\xi)^2}\\
\displaystyle\qquad
=1-\frac{4\xi}{(v-\xi)^2}-\frac2{v-\xi}=
1-4\sum_{s=1}^\infty s\xi^s v^{-s-1}-
2\sum_{s=0}^\infty \xi^s v^{-s-1}\\
\displaystyle\qquad
=1-\frac2v-\sum_{s=1}^\infty(4s+2)\xi^s v^{-s-1}=
1-\sum_{s=1}^\infty2(2s-1)\xi^{s-1}v^{-s}.
\end{array}
\end{equation*}
Next, observe that
$(\sum_{s=0}^\infty a_s(\xi)v^{-s})
(\sum_{s=0}^\infty b_s(\xi)v^{-s})=1$,
therefore, $a_0(\xi)=1$ and for $s\ge1$ the top degree term
of $a_s(\xi)$ is equal to $2(2s-1)\xi^{s-1}$.
\end{proof}
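For instance, comparing the coefficients of $v^{-1}$ and $v^{-2}$ in the identity
$(\sum_{s} a_s(\xi)v^{-s})(\sum_{s} b_s(\xi)v^{-s})=1$ yields
\begin{equation*}
a_1(\xi)=-b_1(\xi)=2,\qquad
a_2(\xi)=-b_2(\xi)-a_1(\xi)b_1(\xi)=6\xi+4,
\end{equation*}
in agreement with the claimed top degree terms $2(2s-1)\xi^{s-1}$.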
For an ordinary partition
$\sigma=(\sigma_1,\sigma_2,\dots,\sigma_{\ell(\sigma)})$ we set
\begin{equation*}
a_\sigma(\xi):=\prod_{i=1}^{\ell(\sigma)}a_{\sigma_i}(\xi),\qquad
b_\sigma(\xi):=\prod_{i=1}^{\ell(\sigma)}b_{\sigma_i}(\xi).
\end{equation*}
Using (\ref{f62}) and the above definition, we get
the following expressions for the products over $l$ in (\ref{f57}) and (\ref{f59}):
\begin{equation}\label{f63}
\begin{array}{lcr}
\displaystyle
\prod_l\frac{(v_l-x(x+1))^2}{(v_l-x(x+1))^2-2(v_l+x(x+1))}=
\sum_\sigma a_\sigma(x(x+1))m_\sigma(v_1^{-1},v_2^{-1},\dots);
\\\displaystyle
\prod_l\frac{(v_l-y(y+1))^2-2(v_l+y(y+1))}{(v_l-y(y+1))^2}=
\sum_\sigma b_\sigma(y(y+1))m_\sigma(v_1^{-1},v_2^{-1},\dots).
\end{array}
\end{equation}
Here the sums on the right-hand sides are taken over all
ordinary partitions.
Next, introduce two linear maps
\begin{equation*}
\left.
\begin{array}{ll}
\mathbb{R}\left[ x(x+1) \right]\to\Gamma,& f\mapsto\langle f\rangle^\uparrow;\\
\mathbb{R}\left[ y(y+1) \right]\to\Gamma,& h\mapsto\langle h\rangle^\downarrow
\end{array}
\right.
\end{equation*}
by setting
\begin{equation}\label{f64}
\big\langle(x(x+1))^m\big\rangle^\uparrow:=\mathbf{g}_m,
\quad
\big\langle(y(y+1))^m\big\rangle^\downarrow:=\mathbf{\hat g}_{m+1},
\qquad
m\in\mathbb{Z}_{\ge0},
\end{equation}
where, by convention, $\mathbf{g}_0=1$.
This definition is inspired by Proposition \ref{p3.7}.
Finally, let $c_{\sigma\tau}^\rho$
be the structure constants of the algebra $\Lambda$
of all symmetric functions
in the basis of monomial symmetric functions:
\begin{equation*}
m_\sigma m_\tau=\sum_\rho c^\rho_{\sigma\tau}m_\rho.
\end{equation*}
Note that $c^\rho_{\sigma\tau}$ can be nonzero only if $|\rho|=|\sigma|+|\tau|$.
Here $\rho$, $\sigma$, and $\tau$ are ordinary partitions.
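For example,
\begin{equation*}
m_{(1)}m_{(1)}=m_{(2)}+2m_{(1,1)},
\end{equation*}
so that $c^{(2)}_{(1)(1)}=1$ and $c^{(1,1)}_{(1)(1)}=2$.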
Now we are in a position to compute $U\mathbf{g}_\rho$ and $D\mathbf{g}_\rho$.
\begin{lemma}\label{p4.6}
With the notation introduced above we have
\begin{equation*}
\begin{array}{rcl}
\displaystyle
U\mathbf{g}_\rho&=&\displaystyle
\sum_{\sigma,\tau\colon
|\sigma|+|\tau|=|\rho|}\frac12
c^\rho_{\sigma\tau}\Big\langle\big(x(x+1)+\alpha\big)\cdot
a_\sigma\big(x(x+1)\big)\Big\rangle^\uparrow
\mathbf{g}_\tau;
\\\displaystyle
D\mathbf{g}_\rho&=& \displaystyle\sum_{\sigma,\tau\colon
|\sigma|+|\tau|=|\rho|}\frac12
c^\rho_{\sigma\tau}
\Big\langle b_\sigma\big(y(y+1)\big)\Big\rangle^\downarrow\mathbf{g}_\tau.
\end{array}
\end{equation*}
\end{lemma}
\begin{proof}
Let us write
\begin{equation*}
\begin{array}{rcll}
\displaystyle
F^\uparrow(v_1,v_2,\dots)&=& \displaystyle\sum_\sigma F_\sigma^\uparrow \cdot
m_\sigma(v_1^{-1},v_2^{-1},\dots),&\quad F_\sigma^\uparrow\in\Gamma;
\\
\displaystyle
F^\downarrow(v_1,v_2,\dots)&=& \displaystyle\sum_\sigma F_\sigma^\downarrow \cdot
m_\sigma(v_1^{-1},v_2^{-1},\dots),&\quad F_\sigma^\downarrow\in\Gamma,
\end{array}
\end{equation*}
where sums are taken over all ordinary partitions $\sigma$.
Using (\ref{f55}), we get
\begin{equation*}
\begin{array}{l}
\displaystyle
\sum_\rho m_\rho(v_1^{-1},v_2^{-1},\dots)
U\mathbf{g}_\rho\\
\displaystyle\qquad
=
\bigg( \sum_\sigma
F_\sigma^\uparrow m_\sigma(v_1^{-1},v_2^{-1},\dots)\bigg)
\bigg(\sum_\tau m_\tau(v_1^{-1},v_2^{-1},\dots)\mathbf{g}_\tau\bigg),
\end{array}
\end{equation*}
which implies
\begin{equation*}
U\mathbf{g}_\rho=
\sum_{\sigma,\tau\colon|\sigma|+|\tau|=|\rho|}c^\rho_{\sigma\tau}F_\sigma^\uparrow\mathbf{g}_\tau.
\end{equation*}
Similarly we obtain
\begin{equation*}
D\mathbf{g}_\rho=
\sum_{\sigma,\tau\colon|\sigma|+|\tau|=|\rho|}c^{\rho}_{\sigma\tau}F_\sigma^\downarrow\mathbf{g}_\tau.
\end{equation*}
The facts that
\begin{equation*}
F_\sigma^\uparrow=\Big\langle\frac12\big(x(x+1)+\alpha\big)\cdot
a_\sigma\big(x(x+1)\big)\Big\rangle^\uparrow;\qquad
F_\sigma^\downarrow=\Big\langle
\frac12b_\sigma\big(y(y+1)\big)\Big\rangle^\downarrow
\end{equation*}
follow directly from (\ref{f57}), (\ref{f59}) and (\ref{f63}).
\end{proof}
\subsection{The operator $D$ in differential form}\label{s4.4}
In this subsection we prove claim {\rm{(1)}\/} of Theorem
\ref{p4.2}, that is, compute the top degree terms of the operator $D\colon\Gamma\to\Gamma$.
By virtue of Lemma \ref{p4.6}, we can write
\begin{equation}\label{f65}
D=\sum_\sigma D_\sigma,\qquad
D_\sigma\mathbf{g}_\rho:=\sum_{\tau\colon|\tau|=|\rho|-|\sigma|}
\frac12
\Big\langle b_\sigma\big(y(y+1)\big)\Big\rangle^\downarrow
c^\rho_{\sigma\tau}\mathbf{g}_\tau.
\end{equation}
\begin{lemma}\label{p4.7}
Let $\sigma$ be a nonempty ordinary partition.
Then
\begin{equation*}
\deg D_\sigma\le
\max_{\rho,\tau}(\ell(\rho)-\ell(\tau)-2\ell(\sigma)+1),
\end{equation*}
where the maximum is taken over all pairs $(\rho,\tau)$
such that $c^\rho_{\sigma\tau}\ne0$.
A more rough but simpler estimate is
\begin{equation*}
\deg D_\sigma\le-\ell(\sigma)+1.
\end{equation*}
\end{lemma}
\begin{proof}
We have
\begin{equation*}
\begin{array}{l}
\displaystyle
\deg D_\sigma\le
\max_{\rho,\tau}\big(\deg
\left\langle b_\sigma\big(y(y+1)\big)\right\rangle^\downarrow
+\deg\mathbf{g}_\tau-\deg\mathbf{g}_\rho\big)\\
\displaystyle\qquad
=\max_{\rho,\tau}\big(\deg
\left\langle b_\sigma\big(y(y+1)\big)\right\rangle^\downarrow
+2|\tau|-\ell(\tau)-2|\rho|+\ell(\rho)\big)\\
\displaystyle\qquad
\le
\max_{\rho,\tau}\big(\deg\left\langle
b_\sigma\big(y(y+1)\big)\right\rangle^\downarrow
-2|\sigma|-\ell(\tau)+\ell(\rho)\big),
\end{array}
\end{equation*}
where maximums are taken over all pairs of ordinary partitions
$(\rho,\tau)$ such that $c^\rho_{\sigma\tau}\ne0$.
The first line follows from the very definition
of $\deg D_\sigma$, the second line holds
because $\deg\mathbf{g}_m=2m-1$, $m=1,2,\dots$
(Corollary \ref{p3.5}), and therefore
\begin{equation*}
\deg\mathbf{g}_\rho=2|\rho|-\ell(\rho),\qquad \deg\mathbf{g}_\tau=2|\tau|-\ell(\tau)
\end{equation*}
for any ordinary partitions $\rho$ and $\tau$, and
the
third line holds because $c^{\rho}_{\sigma\tau}\ne0$
only if $|\rho|=|\sigma|+|\tau|$.
By assumption, $\sigma$ is nonempty, therefore, $\ell(\sigma)\ge1$.
Set $\sigma=(\sigma_1,\dots,\sigma_{\ell(\sigma)})$, where $\sigma_i\ge1$ for $i=1,\dots,\ell(\sigma)$,
and write $\left\langle b_\sigma\big(y(y+1)\big)\right\rangle^\downarrow$ in more detail:
\begin{equation*}
\begin{array}{l}
\displaystyle
\left\langle b_\sigma\big(y(y+1)\big)\right\rangle^\downarrow
\\\displaystyle\qquad=
\left\langle \prod_{i=1}^{\ell(\sigma)}
b_{\sigma_i}\big(y(y+1)\big)\right\rangle^\downarrow
=\left\langle \prod_{i=1}^{\ell(\sigma)}
-2(2\sigma_i-1)\big(y(y+1)\big)^{\sigma_i-1}\right\rangle^\downarrow.
\end{array}
\end{equation*}
We see that the polynomial $b_\sigma\big(y(y+1)\big)$ has degree $|\sigma|-\ell(\sigma)$
in $y(y+1)$, therefore $\left\langle b_\sigma\big(y(y+1)\big)\right\rangle^\downarrow$
is equal, within a nonzero scalar factor, to $\mathbf{\hat g}_{|\sigma|-\ell(\sigma)+1}$
(Proposition \ref{p3.7}).
Note that $\deg\mathbf{\hat g}_{|\sigma|-\ell(\sigma)+1}=2|\sigma|-2\ell(\sigma)+1$,
therefore,
\begin{equation*}
\deg D_\sigma\le\max_{\rho,\tau}\big(2|\sigma|-2\ell(\sigma)+1-2|\sigma|-\ell(\tau)+\ell(\rho)\big),
\end{equation*}
which is the first estimate. To prove the second one,
observe that $c^\rho_{\sigma\tau}\ne0$ implies $\ell(\rho)\le\ell(\sigma)+\ell(\tau)$.
\end{proof}
The second estimate of the above lemma implies the following
\begin{corollary}\label{p4.8}
If $\ell(\sigma)\ge3$, then $\deg D_\sigma\le-2$.
\end{corollary}
By this corollary, it suffices to examine
the operators $D_\sigma$ with $\ell(\sigma)=0$ (that is, $\sigma=\varnothing$),
$\ell(\sigma)=2$, and $\ell(\sigma)=1$.
The next three lemmas
treat these cases in turn.
\begin{lemma}\label{p4.9}
$D_\varnothing=\displaystyle\frac12\mathbf{g}_1$.
\end{lemma}
\begin{proof}
Suppose that $\sigma=\varnothing$ in (\ref{f65}).
Then
$\tau=\rho$ and $c^\rho_{\sigma\tau}=1$.
Moreover, since $b_\sigma=1$, then
$\left\langle b_\sigma\big(y(y+1)\big)\right\rangle^\downarrow=\langle1\rangle^\downarrow=\mathbf{\hat g}_1$.
By virtue of Proposition \ref{p3.4}, $\mathbf{\hat g}_1=\mathbf{g}_1$.
This concludes the proof.
\end{proof}
\begin{lemma}\label{p4.10}
\begin{equation*}
\sum_{\sigma\colon\ell(\sigma)=2}D_\sigma=
\sum_{r,s\ge1}(2r-1)(2s-1)\mathbf{g}_{r+s-1}
\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_s}{}+{}
\mbox{\rm{}operators of degree $\le-2$}.
\end{equation*}
\end{lemma}
\begin{proof}
Suppose that in (\ref{f65}) we have
$\ell(\sigma)=2$, that is,
$\sigma=(\sigma_1,\sigma_2)$, $\sigma_1\ge\sigma_2\ge1$.
Lemma \ref{p4.7} shows that we must have
$\ell(\rho)=\ell(\tau)+2$; otherwise
the corresponding contribution to $D_\sigma$ has degree $\le-2$.
This means that
\begin{equation*}
\sigma_1=\rho_i,\quad \sigma_2=\rho_j,\qquad \tau=
\left( \rho_1,\dots,\rho_{i-1},\rho_{i+1},\dots,\rho_{j-1},\rho_{j+1},\dots,\rho_{\ell(\rho)} \right)
\end{equation*}
for some $1\le i<j\le\ell(\rho)$.
Using Lemma \ref{p4.5}, we get
\begin{equation*}
\big\langle b_\sigma\left( y\big(y+1\big) \right)\big\rangle^\downarrow=
4(2\sigma_1-1)(2\sigma_{2}-1)\mathbf{\hat g}_{\sigma_1+\sigma_2-1}.
\end{equation*}
It follows that
\begin{equation*}
\left( \sum_{\sigma\colon\ell(\sigma)=2}D_\sigma \right)
\mathbf{g}_\rho=
\sum_{1\le i<j\le\ell(\rho)}
2(2\rho_i-1)(2\rho_j-1)
c^{\rho}_{\sigma\tau}
\mathbf{\hat g}_{\rho_i+\rho_j-1}
\mathbf{g}_{\rho\setminus\left\{ \rho_i,\rho_j \right\}}.
\end{equation*}
Note that for such $\rho$, $\sigma$, and $\tau$ as described above we have
\begin{equation}\label{f68}
c^{\rho}_{\sigma\tau}\mathbf{g}_{\rho\setminus\left\{ \rho_i,\rho_j \right\}}
=\left\{
\begin{array}{ll}
\displaystyle
\frac{\partial^2\mathbf{g}_{\rho}}{\partial\mathbf{g}_{\rho_i}\partial\mathbf{g}_{\rho_j}},
&\text{if $\rho_i\ne\rho_j$};\\
\rule{0pt}{24pt}
\displaystyle
\frac12\frac{\partial^2\mathbf{g}_{\rho}}{\partial\mathbf{g}_{\rho_i}^2},&
\text{if $\rho_i=\rho_j$}.
\end{array}
\right.
\end{equation}
It follows that we can write
\begin{equation*}
\begin{array}{l}
\displaystyle
\sum_{\sigma\colon\ell(\sigma)=2}D_\sigma=
\sum_{r_1>r_2\ge1}2(2r_1-1)(2r_2-1)\mathbf{\hat g}_{r_1+r_2-1}
\frac{\partial^2}{\partial\mathbf{g}_{r_1}\partial\mathbf{g}_{r_2}}
\\
\displaystyle\qquad
+\frac12\sum_{r\ge1}2(2r-1)^2\mathbf{\hat g}_{2r-1}\frac{\partial^2}{\partial\mathbf{g}_r^2}.
\end{array}
\end{equation*}
Using Proposition \ref{p3.4}, we can replace each of the $\mathbf{\hat g}_k$'s
above by $\mathbf{g}_k$.
Indeed, since
$\mathbf{\hat g}_k=\mathbf{g}_k+\mbox{terms of degree $\le 2k-2$}$,
$k=1,2,\dots$,
this replacement affects
only negligible terms (that is, summands of degree $\le-2$
in the expression for the operator $D$).
The result of this substitution is the desired expression.
\end{proof}
\begin{lemma}\label{p4.11}
\begin{equation*}
\begin{array}{l}
\displaystyle
\sum_{\sigma\colon\ell(\sigma)=1}D_\sigma=
-\sum_{r\ge1}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}
\\\displaystyle\qquad
+\sum_{r,s\ge1}(r+s)\mathbf{g}_r\mathbf{g}_s\frac\partial{\partial\mathbf{g}_{r+s}}
{}+{}
\mbox{\rm{}operators of degree $\le-2$}.
\end{array}
\end{equation*}
\end{lemma}
\begin{proof}
Suppose that in (\ref{f65}) we have $\ell(\sigma)=1$, that is, $\sigma=(s)$
for some $s\in\mathbb{Z}_{>0}$.
It follows from Lemma \ref{p4.7}
that either $\ell(\rho)=\ell(\tau)+1$, or $\ell(\rho)=\ell(\tau)$.
Let us examine these cases separately.
Assume first that $\ell(\rho)=\ell(\tau)+1$.
This means that
$$
s=\rho_i,\qquad\tau=(\rho_1,\dots,\rho_{i-1},\rho_{i+1},\dots,\rho_{\ell(\rho)})
$$
for some $1\le i\le \ell(\rho)$.
Using Lemma \ref{p4.5}, we get
\begin{equation*}
\left\langle
b_\sigma\big(y(y+1)\big)\right\rangle^\downarrow
=-2(2s-1)\mathbf{\hat g}_s.
\end{equation*}
Therefore, the case $\ell(\rho)=\ell(\tau)+1$ gives rise to the terms
\begin{equation*}
-\sum_{i=1}^{\ell(\rho)}(2\rho_i-1)c^{\rho}_{\sigma\tau}
\mathbf{\hat g}_{\rho_i} \mathbf{g}_{\rho\setminus\left\{ \rho_i \right\}}.
\end{equation*}
Note that in this case we have
$c^{\rho}_{\sigma\tau}\mathbf{g}_{\rho\setminus\left\{ \rho_i \right\}}=
\partial\mathbf{g}_\rho/\partial\mathbf{g}_{\rho_i}$, and it follows that we obtain terms
\begin{equation*}
-\sum_{r\ge1}(2r-1)\mathbf{\hat g}_r\frac{\partial}{\partial\mathbf{g}_r}
\end{equation*}
contributing to $\sum_{\sigma\colon\ell(\sigma)=1}D_\sigma$.
Now assume that $\ell(\rho)=\ell(\tau)$. This means that $\tau$
is obtained from $\rho$ by subtracting $s$ from one of the parts
$\rho_i$ of $\rho$; moreover, this part $\rho_i=r$
should be $\ge s+1$.
This gives rise to the terms
\begin{equation*}
-\sum_{\textstyle\genfrac{}{}{0pt}{}{r\ge2}{1\le s\le r-1}}
(2s-1)\mathbf{\hat g}_s\mathbf{g}_{r-s}\frac{\partial}{\partial\mathbf{g}_r}.
\end{equation*}
Finally, we get
\begin{equation*}
\sum_{\sigma\colon\ell(\sigma)=1}D_\sigma
=-\sum_{r\ge1}(2r-1)\mathbf{\hat g}_r\frac\partial{\partial\mathbf{g}_r}
-\sum_{\textstyle\genfrac{}{}{0pt}{}{r\ge2}{1\le s\le r-1}}
(2s-1)\mathbf{\hat g}_s\mathbf{g}_{r-s}\frac\partial{\partial\mathbf{g}_r}.
\end{equation*}
It remains to express $\mathbf{\hat g}_k$'s above in terms of $\mathbf{g}_i$'s
using Proposition \ref{p3.4}.
In the first sum we should do this as
\begin{equation*}
\mathbf{\hat g}_r\to\mathbf{g}_r-\mathbf{g}_{r-1}\mathbf{g}_1-\dots-\mathbf{g}_1\mathbf{g}_{r-1}{}+{}
\mbox{terms of degree $\le 2r-2$},\qquad r=1,2,\dots,
\end{equation*}
and in the second sum as
\begin{equation*}
\mathbf{\hat g}_s\to\mathbf{g}_s{}+{}\mbox{terms of degree $\le 2s-1$},\qquad s=1,2,\dots.
\end{equation*}
It is clear that this substitution affects only negligible terms
(that is, summands of degree $\le-2$ in the expression for the operator $D$).
To conclude the proof it remains to perform a simple transformation.
\end{proof}
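In more detail, the final transformation in the above proof can be spelled out as follows. After the substitution, the two sums combine into
\begin{equation*}
-\sum_{r\ge1}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}
+\sum_{\textstyle\genfrac{}{}{0pt}{}{r\ge2}{1\le s\le r-1}}
\big( (2r-1)-(2s-1) \big)\mathbf{g}_s\mathbf{g}_{r-s}\frac{\partial}{\partial\mathbf{g}_r}
{}+{}\mbox{operators of degree $\le-2$},
\end{equation*}
and the reindexing $r=a+b$, $s=a$ turns the second sum into
$\sum_{a,b\ge1}2b\,\mathbf{g}_a\mathbf{g}_b\,\partial/\partial\mathbf{g}_{a+b}$,
which, after symmetrization in $a$ and $b$, equals
$\sum_{a,b\ge1}(a+b)\mathbf{g}_a\mathbf{g}_b\,\partial/\partial\mathbf{g}_{a+b}$.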
Theorem \ref{p4.2} (1) now follows from
Lemmas \ref{p4.9}, \ref{p4.10} and \ref{p4.11}.
\subsection{The operator $U$ in differential form}\label{s4.5}
Here we prove claim (2)
of Theorem \ref{p4.2}, that is, compute the top degree terms
of the operator $U\colon\Gamma\to\Gamma$.
The argument here is similar to that of the previous subsection.
However, there are two differences which
require us to go through the proof in full detail:
\begin{itemize}
\item There is a difference between the expressions for $U\mathbf{g}_\rho$ and $D\mathbf{g}_\rho$ (Lemma \ref{p4.6});
\item The behaviour of the degree of $\langle(x(x+1))^{m}\rangle^{\uparrow}=\mathbf{g}_m$ differs from the behaviour
of the degree of $\langle(y(y+1))^{m}\rangle^{\downarrow}=\mathbf{\hat g}_{m+1}$.
Indeed, the formula $\deg\langle(y(y+1))^{m}\rangle^{\downarrow}=2m+1$
is valid for all $m\in\mathbb{Z}_{\ge0}$, while
the formula $\deg\langle(x(x+1))^{m}\rangle^{\uparrow}=2m-1$ is valid only for $m>0$.
\end{itemize}
It is convenient to decompose $U$ using Lemma \ref{p4.6} as follows:
\begin{equation*}
U=\sum_\sigma(U_\sigma^0+U_\sigma^1),
\end{equation*}
where the sum is taken over all ordinary partitions $\sigma$,
\begin{equation}\label{f69}
U_\sigma^0\mathbf{g}_\rho:=
\frac\al2
\sum_{\tau\colon|\tau|=|\rho|-|\sigma|}c^{\rho}_{\sigma\tau}
\big\langle a_\sigma\big(x(x+1)\big)\big\rangle^\uparrow\mathbf{g}_\tau
\end{equation}
and
\begin{equation}\label{f70}
U_\sigma^1\mathbf{g}_\rho:=
\frac12
\sum_{\tau\colon|\tau|=|\rho|-|\sigma|}c^{\rho}_{\sigma\tau}
\big\langle x(x+1)\cdot a_\sigma\big(x(x+1)\big)\big\rangle^\uparrow\mathbf{g}_\tau.
\end{equation}
\begin{rmk}\rm{}\label{p8.10}
This decomposition $U=U^0+U^1$ is similar to the decomposition of the corresponding
operator $U_{\theta,z,z'}=U^0_{\theta,z,z'}+U^1_{\theta,z,z'}+U^2_{\theta,z,z'}$
in the paper \cite{Olshanski2009}.
The operator $U_{\theta,z,z'}$ is constructed using the up
transition function
for the $z$-measures on the Young graph with the Jack edge multiplicities
(here $\theta>0$ is the Jack parameter), see \cite[Thm. 6.1 (ii)]{Olshanski2009}\footnote{Our
operator $U$ is related to the multiplicative measures on the Schur
graph in the same way, see Lemma \ref{p2.10}.}.
Note that our $U$ is expressed as the sum of two operators while
the expression for $U_{\theta,z,z'}$
has three terms.
This is because
the expression \cite[(4.3)]{Olshanski2009} for the up
transition function for the $z$-measures involves
terms of degrees zero, one and two in the (anisotropic) Kerov coordinates
while the expression (\ref{f7}) for
the up transition function for the multiplicative measures
involves only terms of degrees zero and one
in $x(x+1)$.\footnote{See
also \cite[Lemma 5.11]{Olshanski2009} and Proposition \ref{p3.7} in the present
paper, respectively.}
\end{rmk}
\begin{lemma}[Cf. Lemma \ref{p4.7}]\label{p4.12}
If $\sigma$ is a nonempty ordinary partition, then
\begin{equation*}
\deg U_\sigma^1\le
\max_{\rho,\tau}(\ell(\rho)-\ell(\tau)-2\ell(\sigma)+1),
\end{equation*}
where the maximum is taken over all pairs $(\rho,\tau)$
such that $c^{\rho}_{\sigma\tau}\ne0$.
A rougher but simpler estimate is
\begin{equation*}
\deg U_\sigma^1\le-\ell(\sigma)+1.
\end{equation*}
\end{lemma}
The case $U^0_\sigma$ will be investigated
separately, see Lemma \ref{p4.14} below.
\begin{proof}
Arguing as in Lemma \ref{p4.7}, we get the estimate
\begin{equation*}
\deg U^1_\sigma\le
\max_{\rho,\tau}\left( \Big\langle
x(x+1)\cdot a_\sigma\big(x(x+1)\big)\Big\rangle^\uparrow
-2|\sigma|-\ell(\tau)+\ell(\rho)\right).
\end{equation*}
It remains to compute
$\deg\left\langle x(x+1)\cdot a_\sigma\big(x(x+1)\big)\right\rangle^\uparrow$.
Observe that the polynomial $x(x+1)\cdot a_\sigma\big(x(x+1)\big)$
has degree $\ge1$ in $x(x+1)$, therefore,
by Lemma \ref{p4.5} and Proposition \ref{p3.7} we have
\begin{equation*}
\deg\Big\langle
x(x+1)\cdot a_\sigma\big(x(x+1)\big)
\Big\rangle^\uparrow=
\deg\Big\langle
\big(x(x+1)\big)^{|\sigma|-\ell(\sigma)+1}
\Big\rangle^\uparrow
=2|\sigma|-2\ell(\sigma)+1,
\end{equation*}
this gives the first estimate.
The second estimate is obtained
as before,
because if $c^{\rho}_{\sigma\tau}\ne0$, then
$\ell(\rho)\le\ell(\sigma)+\ell(\tau)$.
\end{proof}
The second estimate of the above lemma immediately implies the following
\begin{corollary}[Cf. Corollary \ref{p4.8}]\label{p4.13}
If $\ell(\sigma)\ge3$, then
$\deg U_\sigma^1\le-2$.
\end{corollary}
In the next Lemma we deal with the whole operator $\sum_{\sigma}U_\sigma^0$.
\begin{lemma}\label{p4.14}
\begin{equation*}
\sum_\sigma U_\sigma^0=\frac\al2+\alpha\frac{\partial}{\partial\mathbf{g}_1}{}+{}
\text{\rm{}operators of degree $\le-2$}.
\end{equation*}
\end{lemma}
\begin{proof}
If $\sigma$ is empty, then $\rho=\tau$ and $c^{\rho}_{\sigma\tau}=1$ in (\ref{f69}), and we get
\begin{equation*}
U^0_\sigma\mathbf{g}_\rho=\frac\al2 c^{\rho}_{\sigma\tau}
\left\langle 1\right\rangle^\uparrow\mathbf{g}_\rho=\frac\al2\mathbf{g}_\rho
\end{equation*}
(compare this to Lemma \ref{p4.9}).
Assume now that $\sigma$ is nonempty. Then
$\sigma=(\sigma_1,\dots,\sigma_{\ell(\sigma)})$, and $\ell(\sigma)\ge1$.
Arguing as in Lemma \ref{p4.12}, we get the estimate
\begin{equation*}
\begin{array}{l}
\displaystyle
\deg U^0_\sigma\le\max_{\rho,\tau}
\left( \deg\left\langle a_\sigma\big(x(x+1)\big)\right\rangle^\uparrow
+2|\tau|-\ell(\tau)-2|\rho|+\ell(\rho)\right)
\\
\displaystyle\qquad
\le
\max_{\rho,\tau}\left( \deg\left\langle
a_\sigma\big(x(x+1)\big)\right\rangle^\uparrow
-2|\sigma|+\ell(\sigma)\right),
\end{array}
\end{equation*}
where the maximum is taken over all pairs $(\rho,\tau)$
such that $c^{\rho}_{\sigma\tau}\ne0$.
The second inequality holds because
$\ell(\rho)-\ell(\tau)\le\ell(\sigma)$.
If the polynomial $a_\sigma\big(x(x+1)\big)$
has degree $\ge1$ in $x(x+1)$,
then using Lemma \ref{p4.5} and Proposition \ref{p3.7} we can write
$$
\deg\left\langle a_\sigma\big(x(x+1)\big)\right\rangle^\uparrow
=\deg\left( \mathbf{g}_{|\sigma|-\ell(\sigma)} \right)=
2|\sigma|-2\ell(\sigma)-1,
$$
which implies
\begin{equation*}
\deg U^0_\sigma\le-\ell(\sigma)-1\le-2.
\end{equation*}
Thus, it remains to consider the case when
$\sigma$ is nonempty and the polynomial
$a_\sigma\big(x(x+1)\big)$ has zero degree in $x(x+1)$.
This case occurs if and only if $\sigma_1=\dots=\sigma_{\ell(\sigma)}=1$, and for
$\deg U_\sigma^0$ we have the estimate:
$\deg U^0_\sigma\le-\ell(\sigma)$.
If $\ell(\sigma)\ge2$, this is enough to conclude $\deg U^0_\sigma\le-2$.
Finally, examine the case $\sigma=(1)$. There are two possibilities:
$\ell(\rho)=\ell(\tau)+1$ and $\ell(\rho)=\ell(\tau)$.
In the latter case the estimate of $\deg U^0_\sigma$
can be refined because then $\ell(\rho)-\ell(\tau)=0$
is strictly smaller than $\ell(\sigma)=1$, which again implies $\deg U^0_\sigma\le-2$.
Thus, the only substantial contribution arises when $\sigma=(1)$,
$\rho_{\ell(\rho)}=1$, and
$\tau=(\rho_1,\dots,\rho_{\ell(\rho)-1})$
(that is, $\tau$ is obtained from $\rho$ by deleting a singleton).
This gives rise to the term
$U^0_{\sigma}\mathbf{g}_\rho=
\alpha{\partial\mathbf{g}_\rho}/{\partial\mathbf{g}_1}$.
\end{proof}
Lemmas \ref{p4.15}, \ref{p4.16} and \ref{p4.17} below are similar to Lemmas
\ref{p4.9}, \ref{p4.10} and \ref{p4.11}, respectively, and
deal with the operator $\sum_{\sigma}U^1_\sigma$.\footnote{It follows from Corollary \ref{p4.13} that it suffices
to examine the cases when $\ell(\sigma)=0$ (that is, $\sigma=\varnothing$),
$\ell(\sigma)=2$, and $\ell(\sigma)=1$. We treat these cases in turn.}
\begin{lemma}\label{p4.15}
$U_\varnothing^1=\displaystyle\frac12\mathbf{g}_1$.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{p4.9}.
We should only note that
$$
\left\langle x(x+1)\cdot a_{\varnothing}\big(x(x+1)\big)\right\rangle^\uparrow=
\big\langle x(x+1)\big\rangle^\uparrow=\mathbf{g}_1
$$
in (\ref{f70}), therefore, $U_\varnothing^1$ reduces to multiplication by $\mathbf{g}_1/2$.
\end{proof}
\begin{lemma}\label{p4.16}
\begin{equation*}
\sum_{\sigma\colon\ell(\sigma)=2}U_\sigma^1=
\sum_{r,s\ge1}(2r-1)(2s-1)
\mathbf{g}_{r+s-1}\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_s}
{}+{}\text{\rm{}operators of degree $\le-2$}.
\end{equation*}
\end{lemma}
\begin{proof}
The proof
is similar to the proof of Lemma \ref{p4.10}.
Instead of Lemma \ref{p4.7}, we refer to
its analogue, Lemma \ref{p4.12}.
Let $\ell(\sigma)=2$ in (\ref{f70}),
that is, $\sigma=(\sigma_1,\sigma_2)$ with $\sigma_1\ge\sigma_2\ge1$.
Using Lemma \ref{p4.5} and Proposition \ref{p3.7}, we get
\begin{equation*}
\begin{array}{l}\displaystyle
\Big\langle
x(x+1)\cdot a_\sigma\big(x(x+1)\big)
\Big\rangle^\uparrow
\\\rule{0pt}{16pt}\displaystyle\qquad
=4(2\sigma_1-1)(2\sigma_2-1)\mathbf{g}_{\sigma_1+\sigma_2-1}{}+{}\mbox{terms of degree $\le2|\sigma|-5$}.
\end{array}
\end{equation*}
Next,
\begin{equation*}
\begin{array}{rcl}\displaystyle
\left( \sum_{\sigma\colon\ell(\sigma)=2}U_\sigma^1 \right)\mathbf{g}_\rho&=& \displaystyle
\sum_{1\le i<j\le\ell(\rho)}2(2\rho_i-1)(2\rho_j-1)c^{\rho}_{\sigma\tau}\mathbf{g}_{\rho_i+\rho_j-1}\mathbf{g}_{\rho\setminus\left\{ \rho_i,\rho_j \right\}}
\\&&\qquad\displaystyle{}+{}\mbox{terms of degree $\le2|\rho|-\ell(\rho)-3$},
\end{array}
\end{equation*}
and using (\ref{f68}), we conclude the proof.
\end{proof}
\begin{lemma}\label{p4.17}
\begin{equation*}
\begin{array}{rcl}\displaystyle
\sum_{\sigma\colon\ell(\sigma)=1}U_\sigma^1&=& \displaystyle
\sum_{r\ge1}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}
+\sum_{r,s\ge1}(r+s-1)\mathbf{g}_r\mathbf{g}_s\frac{\partial}{\partial\mathbf{g}_{r+s}}\\&&\displaystyle\qquad
{}+{}\text{\rm{}operators of degree $\le-2$}.\qquad
\end{array}
\end{equation*}
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{p4.11}
(and we again use Lemma \ref{p4.12} instead of Lemma \ref{p4.7}).
Let $\ell(\sigma)=1$ in (\ref{f70}), that is, $\sigma=(s)$ for some $s\in\mathbb{Z}_{>0}$.
Again, two cases are possible: either
$\ell(\rho)=\ell(\tau)+1$, or $\ell(\rho)=\ell(\tau)$.
Assume first that $\ell(\rho)=\ell(\tau)+1$. Using Lemma \ref{p4.5} and Proposition \ref{p3.7},
we get
\begin{equation*}
\Big\langle
x(x+1)\cdot a_\sigma\big(x(x+1)\big)
\Big\rangle^\uparrow=
2(2s-1)\mathbf{g}_s{}+{}\mbox{terms of degree $\le 2s-3$}.
\end{equation*}
This gives rise to the terms
\begin{equation*}
\sum_{r\ge1}(2r-1)\mathbf{g}_r\frac\partial{\partial\mathbf{g}_r}{}+{}\text{operators of degree $\le-2$}.
\end{equation*}
If $\ell(\rho)=\ell(\tau)$, similarly to the proof of Lemma \ref{p4.11},
we get the terms
\begin{equation*}
\sum_{\textstyle\genfrac{}{}{0pt}{}{r\ge2}{1\le s\le r-1}}
(2s-1)\mathbf{g}_s\mathbf{g}_{r-s}\frac\partial{\partial\mathbf{g}_r}{}+{}
\text{operators of degree $\le-2$}.
\end{equation*}
To conclude the proof it remains to perform a simple transformation.
\end{proof}
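The ``simple transformation'' invoked at the end of the proof is the symmetrization of $\sum_{r\ge2,\,1\le s\le r-1}(2s-1)\mathbf{g}_s\mathbf{g}_{r-s}\,\partial/\partial\mathbf{g}_r$ into $\sum_{r,s\ge1}(r+s-1)\mathbf{g}_r\mathbf{g}_s\,\partial/\partial\mathbf{g}_{r+s}$. The following plain-Python sketch (an illustration only, not part of the proof) confirms that the two expressions carry the same coefficient on every monomial $\mathbf{g}_a\mathbf{g}_b\,\partial/\partial\mathbf{g}_{a+b}$ up to a cutoff.

```python
# Sanity check of the "simple transformation": the sums
#   S1 = sum_{r>=2, 1<=s<=r-1} (2s-1) g_s g_{r-s} d/dg_r
#   S2 = sum_{r,s>=1}  (r+s-1) g_r g_s d/dg_{r+s}
# have the same coefficient on every monomial g_a g_b d/dg_{a+b}.
from collections import defaultdict

N = 12  # compare all terms with r + s <= N

lhs = defaultdict(int)
for r in range(2, N + 1):
    for s in range(1, r):
        # term (2s-1) g_s g_{r-s} d/dg_r; key = (unordered pair {s, r-s}, r)
        lhs[(tuple(sorted((s, r - s))), r)] += 2 * s - 1

rhs = defaultdict(int)
for r in range(1, N):
    for s in range(1, N - r + 1):
        # term (r+s-1) g_r g_s d/dg_{r+s}
        rhs[(tuple(sorted((r, s))), r + s)] += r + s - 1

assert lhs == rhs
print("coefficients agree for all r+s <=", N)
```

For instance, the monomial $\mathbf{g}_1\mathbf{g}_2\,\partial/\partial\mathbf{g}_3$ receives $(2\cdot1-1)+(2\cdot2-1)=4$ from the first sum and $2\cdot(1+2-1)=4$ from the second.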
Theorem \ref{p4.2} (2) now follows from
Lemmas \ref{p4.14}, \ref{p4.15}, \ref{p4.16} and \ref{p4.17}.
\section{The operator $T_n$ in differential form}\label{s5}
Let $T_n\colon\mathrm{Fun}(\mathbb{S}_n)\to\mathrm{Fun}(\mathbb{S}_n)$, $n\in\mathbb{Z}_{>0}$,
be the operator from Definition \ref{p10.6}.
In \S\ref{s2} we have obtained a formula for the action of $T_n$ on Schur's $\mathcal{Q}$-functions
(Theorem \ref{p2.7}).
In this section using the results
of \S\ref{s3} and
Theorem \ref{p4.2} we prove another formula for $T_n$:
\begin{thm}\label{p5.1}
There exists a unique operator $\widetilde B\colon\Gamma\to\Gamma$ such that
\begin{equation*}
(T_n-\mathbf{1})f_n=\frac{(\widetilde Bf)_n}{(n+\alpha/2)(n+1)}
\end{equation*}
for all $f\in\Gamma$.
The operator $\widetilde B$ has zero degree. Under the identification of $\Gamma$ with the polynomial
algebra $\mathbb{R}\left[ p_1,p_3,p_5,\dots \right]$, the zero-degree
homogeneous component of $\widetilde B$,
the operator $B\colon\Gamma\to\Gamma$, has the form
\begin{equation}\label{f75}
\left.
\begin{array}{l}
\displaystyle
B=\sum_{i,j=2}^{\infty}(2i-1)(2j-1)
(p_1p_{2i+2j-3}-p_{2i-1}p_{2j-1})\frac{\partial^2}{\partial p_{2i-1}\partial p_{2j-1}}
\\\displaystyle\qquad
+2\sum_{i,j=1}^{\infty}(2i+2j-1)p_1p_{2i-1}p_{2j-1}\frac{\partial}{\partial p_{2i+2j-1}}
\\\displaystyle\qquad
-\sum_{i=2}^{\infty}(2i-1)\left(2i-2+\frac\al2\right)p_{2i-1}\frac{\partial}{\partial p_{2i-1}}.
\end{array}
\right.
\end{equation}
\end{thm}
By the zero-degree homogeneous component of the operator $\widetilde B$
we mean the unique homogeneous operator $B\colon\Gamma\to\Gamma$ of zero degree
such that
$$
\widetilde B=B+\mbox{operators of degree $\le-1$}.
$$
First, we note an important corollary of Theorem \ref{p5.1}:
\begin{corollary}\label{p5.2}
The operator $B\colon\Gamma\to\Gamma$ commutes with the operator of multiplication
by the element $p_1\in\Gamma$.
\end{corollary}
\begin{proof}
This follows from the fact that the expression (\ref{f75}) for $B$
does not contain partial derivatives with respect to $p_1$.
\end{proof}
In the rest of this section we prove Theorem \ref{p5.1}.
First, (\ref{f41}) and (\ref{f54}) imply
\begin{equation*}
(T_n-{\bf1})f_n=\frac{\big( (UD-\frac14(\mathbf{g}_1+\alpha)(\mathbf{g}_1+2))f\big)_n}{(n+\alpha/2)(n+1)}
\end{equation*}
for all $f\in\Gamma$.
Thus, $\widetilde B=UD-\frac14(\mathbf{g}_1+\alpha)(\mathbf{g}_1+2)$, and the uniqueness of $\widetilde B$ follows from the
fact that the algebra $\Gamma\subset\mathrm{Fun}(\mathbb{S})$ separates points.
Now using Theorem \ref{p4.2}
we write the operator $\widetilde B$ as a formal differential operator with respect
to the generators $\mathbf{g}_k$, $k\in\mathbb{Z}_{>0}$, of the algebra $\Gamma$:
\begin{lemma}\label{p5.3}
Under the identification of the algebra $\Gamma$
with the polynomial algebra $\mathbb{R}\left[ \mathbf{g}_1,\mathbf{g}_2,\dots \right]$,
the operator $\widetilde B=UD-\frac14(\mathbf{g}_1+\alpha)(\mathbf{g}_1+2)$ looks as follows:
\begin{equation*}
\left.
\begin{array}{l}\displaystyle
\widetilde B=
\sum_{r,s=2}^{\infty}(2r-1)(2s-1)(\mathbf{g}_1\mathbf{g}_{r+s-1}-\mathbf{g}_r\mathbf{g}_s)\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_s}
\\\displaystyle\qquad
+\sum_{r,s=1}^{\infty}(r+s-1/2)\mathbf{g}_1\mathbf{g}_r\mathbf{g}_s\frac{\partial}{\partial\mathbf{g}_{r+s}}\\\displaystyle\qquad
-
\sum_{r=2}^{\infty}
(2r-1)\left(2r-2+\frac\al2\right)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}
{}+{}\mbox{\rm{}operators of degree $\le-1$}.
\end{array}
\right.
\end{equation*}
\end{lemma}
Note that the operators
$U$ and $D$ both have degree $1$
in the sense of Definition \ref{p4.1}.
However, it turns out that the operator $\widetilde B$
has zero degree instead of degree $2$, because higher degree terms cancel out.
\begin{proof}
We write
\begin{equation*}
U=\frac12\mathbf{g}_1+U_0+U_{-1}+\dots,\qquad
D=\frac12\mathbf{g}_1+D_0+D_{-1}+\dots,
\end{equation*}
where dots stand for operators of degree $\le-2$,
\begin{equation*}
U_0:=\frac12\alpha+\sum_{r=1}^{\infty}(2r-1)\mathbf{g}_r\frac\partial{\partial\mathbf{g}_r},
\qquad
D_0:=-\sum_{r=1}^{\infty}(2r-1)\mathbf{g}_r\frac\partial{\partial\mathbf{g}_r}
\end{equation*}
(these are the zero degree parts),
\begin{equation*}
\begin{array}{l}
\displaystyle
U_{-1}:=\alpha\frac\partial{\partial\mathbf{g}_1}+\sum_{r,s=1}^{\infty}
(2r-1)(2s-1)\mathbf{g}_{r+s-1}\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_s}
\\\displaystyle\qquad
+\sum_{r,s=1}^{\infty}(r+s-1)\mathbf{g}_r\mathbf{g}_s\frac\partial{\partial\mathbf{g}_{r+s}},\\
\displaystyle
D_{-1}:=\sum_{r,s=1}^{\infty}(2r-1)(2s-1)\mathbf{g}_{r+s-1}\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_s}
+\sum_{r,s=1}^{\infty}(r+s)\mathbf{g}_r\mathbf{g}_s\frac\partial{\partial\mathbf{g}_{r+s}}
\end{array}
\end{equation*}
(these are the parts of degree $-1$).
We compute the top degree
terms of the operator $\widetilde B=UD-\frac14(\mathbf{g}_1+\alpha)(\mathbf{g}_1+2)$
successively.
Terms of degree $2$:
\begin{equation*}
\frac14\mathbf{g}_1^2-\frac14\mathbf{g}_1^2=0.
\end{equation*}
Terms of degree $1$ are equal to
\begin{equation*}
\frac12\mathbf{g}_1D_0+\frac12U_0\mathbf{g}_1-\frac14(\alpha+2)\mathbf{g}_1.
\end{equation*}
Because the operator $\sum_{r=2}^{\infty}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}$ commutes
with the multiplication by $\mathbf{g}_1$, we have
\begin{equation*}
\left.
\begin{array}{l}
\displaystyle
\mathbf{g}_1D_0+U_0\mathbf{g}_1=-\mathbf{g}_1
\left( \mathbf{g}_1\frac{\partial}{\partial\mathbf{g}_1}+
\sum_{r=2}^{\infty}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r} \right)
\\\displaystyle\qquad\qquad
+\left( \frac\al2+\mathbf{g}_1\frac{\partial}{\partial\mathbf{g}_1}+
\sum_{r=2}^{\infty}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r} \right)\mathbf{g}_1\\\displaystyle\qquad=
-\mathbf{g}_1^2\frac{\partial}{\partial\mathbf{g}_1}+\frac\al2\mathbf{g}_1+\mathbf{g}_1\frac{\partial}{\partial\mathbf{g}_1}\mathbf{g}_1
=\frac\al2\mathbf{g}_1+\mathbf{g}_1,
\end{array}
\right.
\end{equation*}
and we see that the terms of degree $1$ also cancel out.
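The identity just proved, $\mathbf{g}_1D_0+U_0\mathbf{g}_1=\frac\al2\mathbf{g}_1+\mathbf{g}_1$, admits a mechanical check. The sketch below (an illustration on a test polynomial, not a proof) models polynomials in the truncated set of generators $g_1,\dots,g_K$ with exact rational coefficients; the truncation is harmless because $U_0$ and $D_0$ do not raise indices. Since both sides are affine in $\alpha$, checking two values of $\alpha$ suffices.

```python
from fractions import Fraction as F
from collections import defaultdict

K = 4  # truncate to generators g_1..g_K (U_0, D_0 do not raise indices)

def mono(r):                       # the generator g_r as a polynomial
    e = [0] * K; e[r - 1] = 1
    return {tuple(e): F(1)}

def add(*ps):                      # sum of polynomials (dicts: exponent tuple -> coeff)
    out = defaultdict(F)
    for p in ps:
        for m, c in p.items():
            out[m] += c
    return {m: c for m, c in out.items() if c}

def scale(c, p):
    return {m: c * cm for m, cm in p.items()} if c else {}

def mul(p, q):
    out = defaultdict(F)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return {m: c for m, c in out.items() if c}

def diff(p, r):                    # partial derivative d/dg_r
    out = {}
    for m, c in p.items():
        if m[r - 1]:
            mm = list(m); mm[r - 1] -= 1
            out[tuple(mm)] = c * m[r - 1]
    return out

def U0(f, alpha):
    return add(scale(alpha / 2, f),
               *[scale(F(2*r - 1), mul(mono(r), diff(f, r))) for r in range(1, K + 1)])

def D0(f):
    return add(*[scale(F(1 - 2*r), mul(mono(r), diff(f, r))) for r in range(1, K + 1)])

g1 = mono(1)
f = add(mul(mono(2), mono(3)), mul(g1, mul(g1, g1)))   # test polynomial g_2 g_3 + g_1^3
for alpha in (F(0), F(7, 3)):      # both sides are affine in alpha: two values suffice
    lhs = add(mul(g1, D0(f)), U0(mul(g1, f), alpha))
    rhs = scale(alpha / 2 + 1, mul(g1, f))
    assert lhs == rhs
print("g_1 D_0 + U_0 g_1 = (alpha/2 + 1) g_1 verified on a test polynomial")
```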
It remains to compute terms of degree $0$. They are equal to
\begin{equation*}
\frac12\mathbf{g}_1D_{-1}+U_0D_0+\frac12U_{-1}\mathbf{g}_1-\frac\al2.
\end{equation*}
Observe that
\begin{equation*}
\left.
\begin{array}{l}
\displaystyle
\left[ U_{-1},\mathbf{g}_1 \right]=
\left[ \alpha\frac{\partial}{\partial\mathbf{g}_1},\mathbf{g}_1 \right]+
\left[ \mathbf{g}_1\frac{\partial^2}{\partial\mathbf{g}_1^2},\mathbf{g}_1 \right]
+
2\sum_{r=2}^{\infty}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}
\left[ \frac{\partial}{\partial\mathbf{g}_1},\mathbf{g}_1 \right]\\\displaystyle\qquad=
\alpha+
2\mathbf{g}_1\frac{\partial}{\partial\mathbf{g}_1}
+2\sum_{r=2}^{\infty}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}.
\end{array}
\right.
\end{equation*}
It can be readily verified that
\begin{equation*}
\begin{array}{l}\displaystyle
U_0D_0\\\displaystyle\quad=-\left( \frac\al2+\mathbf{g}_1\frac{\partial}{\partial\mathbf{g}_1}+\sum_{r=2}^{\infty}
(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}\right)
\left( \mathbf{g}_1\frac{\partial}{\partial\mathbf{g}_1}+
\sum_{r=2}^{\infty}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}\right)
\\\displaystyle\quad=
-\frac12\alpha\mathbf{g}_1\frac{\partial}{\partial\mathbf{g}_1}-
\mathbf{g}_1\frac{\partial}{\partial\mathbf{g}_1}-
\mathbf{g}_1^2\frac{\partial^2}{\partial\mathbf{g}_1^2}
-2\sum_{r=2}^{\infty}(2r-1)\mathbf{g}_1\mathbf{g}_r\frac{\partial^2}{\partial\mathbf{g}_1\partial\mathbf{g}_r}
\\\displaystyle\quad\qquad
-\frac\al2\sum_{r=2}^{\infty}
(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}-
\sum_{r,s=2}^{\infty}(2r-1)(2s-1)\mathbf{g}_r\mathbf{g}_s\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_s}\\\displaystyle
\qquad\quad-
\sum_{r=2}^{\infty}(2r-1)^2\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}
\end{array}
\end{equation*}
and
\begin{equation*}
\left.
\begin{array}{l}\displaystyle
\frac12(D_{-1}+U_{-1})=\frac12\alpha\frac{\partial}{\partial\mathbf{g}_1}+
\sum_{r,s=2}^{\infty}(2r-1)(2s-1)\mathbf{g}_{r+s-1}\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_s}
\\\displaystyle\qquad+
\mathbf{g}_1\frac{\partial^2}{\partial\mathbf{g}_1^2}+2\sum_{r=2}^{\infty}(2r-1)\mathbf{g}_r\frac{\partial^2}{\partial\mathbf{g}_r\partial\mathbf{g}_1}
+\sum_{r,s=1}^{\infty}(r+s-1/2)\mathbf{g}_r\mathbf{g}_s\frac{\partial}{\partial\mathbf{g}_{r+s}}.
\end{array}
\right.
\end{equation*}
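The displayed expansion of $U_0D_0$ can likewise be verified mechanically on test polynomials. Below is a self-contained sketch (an illustration only) working with exact rational coefficients in the truncated generators $g_1,\dots,g_K$; the truncation is exact because $U_0$ and $D_0$ do not raise indices, and since $U_0D_0$ is affine in $\alpha$, checking two values of $\alpha$ suffices.

```python
from fractions import Fraction as F
from collections import defaultdict

K = 4  # generators g_1..g_K; monomials are exponent tuples

def mono(r):
    e = [0] * K; e[r - 1] = 1
    return {tuple(e): F(1)}

def add(*ps):
    out = defaultdict(F)
    for p in ps:
        for m, c in p.items():
            out[m] += c
    return {m: c for m, c in out.items() if c}

def scale(c, p):
    return {m: c * cm for m, cm in p.items()} if c else {}

def mul(p, q):
    out = defaultdict(F)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return {m: c for m, c in out.items() if c}

def diff(p, *rs):                  # iterated partial derivative d/dg_{r1} d/dg_{r2} ...
    for r in rs:
        out = {}
        for m, c in p.items():
            if m[r - 1]:
                mm = list(m); mm[r - 1] -= 1
                out[tuple(mm)] = c * m[r - 1]
        p = out
    return p

def U0(f, alpha):
    return add(scale(alpha / 2, f),
               *[scale(F(2*r - 1), mul(mono(r), diff(f, r))) for r in range(1, K + 1)])

def D0(f):
    return add(*[scale(F(1 - 2*r), mul(mono(r), diff(f, r))) for r in range(1, K + 1)])

def U0D0_claimed(f, alpha):        # the expanded form of U_0 D_0 from the text
    g1 = mono(1)
    terms = [scale(-(alpha / 2 + 1), mul(g1, diff(f, 1))),
             scale(F(-1), mul(mul(g1, g1), diff(f, 1, 1)))]
    for r in range(2, K + 1):
        terms.append(scale(F(-2 * (2*r - 1)), mul(mul(g1, mono(r)), diff(f, 1, r))))
        terms.append(scale(-(alpha / 2) * (2*r - 1) - (2*r - 1)**2,
                           mul(mono(r), diff(f, r))))
        for s in range(2, K + 1):
            terms.append(scale(F(-(2*r - 1) * (2*s - 1)),
                               mul(mul(mono(r), mono(s)), diff(f, r, s))))
    return add(*terms)

f = add(mul(mono(2), mono(3)), mul(mono(1), mul(mono(4), mono(4))))  # g_2 g_3 + g_1 g_4^2
for alpha in (F(0), F(7, 3)):
    assert U0(D0(f), alpha) == U0D0_claimed(f, alpha)
print("expansion of U_0 D_0 verified on a test polynomial")
```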
Now we are finally able to compute the terms of degree $0$:
\begin{equation*}
\begin{array}{l}\displaystyle
\frac12\mathbf{g}_1D_{-1}+U_0D_0+\frac12U_{-1}\mathbf{g}_1-\frac\al2
\\\displaystyle\qquad=
\frac12\mathbf{g}_1(D_{-1}+U_{-1})+U_0D_0+
\mathbf{g}_1\frac{\partial}{\partial\mathbf{g}_1}
+\sum_{r=2}^{\infty}(2r-1)\mathbf{g}_r\frac{\partial}{\partial\mathbf{g}_r}.
\end{array}
\end{equation*}
Combining the three formulas above, we get the desired expression.
\end{proof}
To prove Theorem \ref{p5.1}
it remains to replace,
in the expression for $\widetilde B$ given by the previous Lemma,
the inhomogeneous
generators $\mathbf{g}_k$, $k\in\mathbb{Z}_{>0}$,
with the homogeneous generators $p_{2m-1}$, $m\in\mathbb{Z}_{>0}$.
This should be done according to the next Lemma:
\begin{lemma}\label{p5.4}
{\rm{}(1)\/} $\mathbf{g}_k=2p_{2k-1}+{}\mbox{\rm{}terms of degree $\le(2k-3)$}$, \ $k\in\mathbb{Z}_{>0}$;
{\rm{}(2)\/} Let $f\in\Gamma$, then\footnote{Note
that both $\partial f/\partial \mathbf{g}_k$ and $\partial f/\partial p_{2k-1}$ have degree $\big(\deg f-(2k-1)\big)$.}
\begin{equation*}
\frac{\partial f}{\partial \mathbf{g}_k}=\frac12\frac{\partial f}{\partial p_{2k-1}}+{}\mbox{\rm{}terms of degree $\le\big(\deg f-(2k-1)\big)$},\qquad
k\in\mathbb{Z}_{>0}.
\end{equation*}
\end{lemma}
\begin{proof} Claim (1) directly follows from Propositions \ref{p3.3} and \ref{p3.4}.
To prove claim (2) observe that
\begin{equation*}
\left.
\begin{array}{c}
\displaystyle
\frac{\partial f}{\partial \mathbf{g}_k}=
\sum_{l\ge k}\frac{\partial(2p_{2l-1})}{\partial \mathbf{g}_k}\frac{\partial f}{\partial(2p_{2l-1})}=
\frac12\frac{\partial f}{\partial p_{2k-1}}+\sum_{l>k}\frac{\partial(2p_{2l-1}-\mathbf{g}_l)}{\partial \mathbf{g}_k}
\frac{\partial f}{\partial(2p_{2l-1})}.
\end{array}
\right.
\end{equation*}
The $l$th summand in the last sum has degree
\begin{equation*}
\le (2l-3)-(2k-1)+\deg f-(2l-1)=\deg f-(2k-1)-2<\deg f-(2k-1).
\end{equation*}
This concludes the proof.
\end{proof}
Together Lemmas \ref{p5.3} and \ref{p5.4} imply Theorem \ref{p5.1}.
\section{The limit diffusion}\label{s6}
In this section we prove that the Markov chains $T_n$
from Definition \ref{p10.6}
converge (under a certain time scaling) to a continuous-time
Markov process $\X{\alpha}(t)$, $t\ge0$, in the simplex $\Omega_+$.
Using Theorem \ref{p5.1} we prove the expression (\ref{f0.1})
(from the Introduction)
for its pre-generator.
In \S\ref{s70.7}
we study some further properties of $\X{\alpha}(t)$.
In \S\ref{s8.1}
we discuss the embedding of
the simplex $\Omega_+$ into the Thoma simplex~$\Omega$
introduced in \cite{GnedinIntern.Math.ResearchNotices2006Art.ID5196839pp.}
(see also \S\ref{s0.2}). This construction leads to
another proof of one of the results
from this section (namely, the first claim of Proposition \ref{p6.6})
but it is also of separate interest.
\subsection{An operator semigroup approximation theorem}\label{s6.1}
We begin by stating a well-known general result on approximations of continuous contraction semigroups by
discrete ones.
We formulate it in a form (Theorem~\ref{p6.4}) best suited
to our concrete situation.
We refer to the paper \cite{Trotter1958}
and the book \cite{Ethier1986}.
In the book one can also find additional
references.
Let $L$ and $L_n$, $n\in\mathbb{Z}_{>0}$, be real Banach spaces.\footnote{The norms
of $L$ and of each $L_n$ are denoted by the same symbols $\|\cdot\|$.}
Let $\pi_n\colon L\to L_n$, $n\in\mathbb{Z}_{>0}$,
be bounded linear operators such that $\sup_n\|\pi_n\|<\infty$.
\begin{df}\rm{}\label{p6.1}
We say that a sequence of elements
$\left\{ f_n\in L_n \right\}$ {\em{}converges\/} to an element $f\in L$ if
$\lim\limits_{n\to\infty}\|\pi_n f-f_n\|=0$.
We write $f_n\to f$.
\end{df}
In our concrete situation, described below in \S\ref{s6.2}, the additional condition
\begin{equation}\label{f80}
\lim_{n\to\infty}\|\pi_n f\|=\|f\|\qquad\mbox{for all $f\in L$}
\end{equation}
is satisfied. This condition implies that any
sequence $\left\{ f_n\in L_n \right\}$ may have at most one limit in $L$.
\begin{df}\rm{}\label{p6.3}
An operator $D$ in $L$ is called {\em{}dissipative\/}
if $\|(s\mathbf{1}-D)f\|\ge s\|f\|$ for all $s\ge0$ and all $f$ from the domain of $D$,
where ${\bf1}$ denotes the identity operator.
\end{df}
Now, assume that for all $n\in\mathbb{Z}_{>0}$ we are given
a contraction operator $T_n$ in $L_n$.
Suppose that
$\{\varepsilon_n\}$ is a sequence
of positive numbers converging to
zero.
Assume that there exists a dense subspace $\mathcal{F}\subset L$ and an operator $A\colon \mathcal{F}\to L$
such that in the sense of Definition \ref{p6.1}
\begin{equation*}
\varepsilon_n^{-1}(T_n-\mathbf{1})\pi_nf\to Af
\qquad\mbox{for all $f\in\mathcal{F}$}.
\end{equation*}
\begin{thm}\label{p6.4}
If
\begin{itemize}
\item the operator $A\colon\mathcal{F}\to L$ is dissipative;
\item for some $s>0$ the range of $(s\mathbf{1}-A)$ is dense in $L$,
\end{itemize}
then the operator $A$ is closable in $L$; its
closure generates a strongly continuous
contraction semigroup $\left\{ T(t) \right\}_{t\ge0}$ in $L$;
and (again in the sense of Definition \ref{p6.1})
\begin{equation}\label{f81}
T_n^{[\varepsilon_n^{-1}t]}\pi_nf\to T(t)f\qquad\mbox{for all $f\in L$},
\end{equation}
for $t\ge0$ uniformly on bounded intervals.
\end{thm}
\begin{proof}
Let $\hat A\colon L\to L$ be the operator defined as
$f\mapsto \lim\limits_{n\to\infty}\varepsilon_n^{-1}(T_n-\mathbf{1})\pi_nf$
for all $f\in L$ such that this limit exists in the
sense of Definition \ref{p6.1}. The domain of
$\hat A$ consists of such $f\in L$. Clearly, $\hat A\mid_{\mathcal{F}}=A$.
Since each $T_n$ is a contraction, each operator $\varepsilon_n^{-1}(T_n-{\bf1})$ is dissipative.
Hence $\hat A$ is dissipative, too.
The operator $\hat A$ satisfies the conditions of \cite[Thm. 5.3]{Trotter1958} with $M=1$ and $K=0$.
Hence $\hat A$ is closable, its closure $\overline{\hat A}$ generates a semigroup $\left\{ T(t) \right\}_{t\ge0}$
in $L$, and the convergence (\ref{f81}) holds pointwise (with respect to $t$).
The uniform convergence follows from the implication
(b)$\Rightarrow$(a) of \cite[Ch. 1, Thm. 6.5]{Ethier1986}.
Because each operator $T_n$ is a contraction,
the fact that each
$T(t)$ is a contraction follows
from (\ref{f80}) and (\ref{f81}).
By the Hille-Yosida theorem
(see, e.g., \cite[Ch. 1, Thm. 2.6]{Ethier1986}),
the dissipativity of $\hat A$ implies
that the semigroup $T(t)$ is strongly continuous.
Since $\mathcal{F}$ is dense in $L$ and for some $s>0$
the range of $s\mathbf{1}-\overline{\hat A}\mid_\mathcal{F}=s\mathbf{1}-A$
is dense in $L$, the subspace $\mathcal{F}\subset L$
is a {\em{}core\/} for $\overline{\hat A}$ in the sense of \cite[Ch. 1, Sect. 3]{Ethier1986}.
By \cite[Ch. 1, Prop. 3.1]{Ethier1986}, the operator $A$ is closable in $L$
and $\overline{\hat A}=\overline A$. This concludes the proof.
\end{proof}
\subsection{The simplex $\Omega_+$}\label{s6.2}
We return to our concrete situation.
As $L_n$, $n\in\mathbb{Z}_{>0}$, we take the finite-dimensional vector space
$\mathrm{Fun}(\mathbb{S}_n)$ of real-valued
functions on $\mathbb{S}_n$
with the supremum norm.
As $T_n$ we take the Markov transition operators from Definition \ref{p10.6}.
Clearly, each $T_n$ is a contraction.
As the scaling factors we take $\varepsilon_n:=1/n^2$.
To define the space $L$ and the operators $\pi_n\colon L\to L_n$ we
need some extra notation.
Let $\Omega_+$ be the subset of the infinite-dimensional cube $\left[ 0,1 \right]^{\infty}$
defined as
\begin{equation*}
\Omega_+:=\left\{ \mathsf{x}=(\mathsf{x}_1,\mathsf{x}_2,\dots)\in\left[ 0,1 \right]^{\infty}\colon \mathsf{x}_1\ge\mathsf{x}_2\ge\dots\ge0,\ \sum_{i}\mathsf{x}_i\le1 \right\}.
\end{equation*}
We equip the cube $\left[ 0,1 \right]^{\infty}$ with the standard product topology.
The subset~$\Omega_+\subset\left[ 0,1 \right]^{\infty}$ is a compact, metrizable and separable space.
As $L$ we take the Banach space $C(\Omega_+)$ of all real continuous functions on $\Omega_+$ with pointwise operations and the supremum norm.
For $n\in\mathbb{Z}_{>0}$, we define an embedding $\iota_n$
of the set $\mathbb{S}_n$ into the space $\Omega_+$:
\begin{equation}\label{f81.1}
\iota_n\colon\mathbb{S}_n\hookrightarrow\Omega_+,\qquad \lambda=(\lambda_1,\dots,\lambda_{\ell},0,0,\dots)\mapsto
\left( \frac{\lambda_1}n,\dots,\frac{\lambda_{\ell}}n,0,0,\dots \right)\in\Omega_+.
\end{equation}
Using $\iota_n$ we define the operators $\pi_n\colon L\to L_n$, that is, $\pi_n\colon C(\Omega_+)\to\mathrm{Fun}(\mathbb{S}_n)$:
\begin{equation*}
(\pi_nf)(\lambda):=f(\iota_n(\lambda)),\qquad\mbox{where $f\in C(\Omega_+)$ and $\lambda\in\mathbb{S}_n$}.
\end{equation*}
Clearly, $\|\pi_n\|\le1$.
Moreover, in our situation the condition (\ref{f80}) is satisfied
because the space $\Omega_+$ is approximated
by the sets $\iota_n(\mathbb{S}_n)\subset\Omega_+$ in the sense that every open subset
of $\Omega_+$ has a nonempty intersection with $\iota_n(\mathbb{S}_n)$ for all $n$ large enough.
\subsection{Moment coordinates}\label{s6.3}
Here we define the dense subspace $\mathcal{F}\subset L=C(\Omega_+)$.
To every point $\mathsf{x}\in\Omega_+$ we assign a probability measure
\begin{equation*}
\nu_\mathsf{x}:=\sum_{i=1}^{\infty}\mathsf{x}_i\delta_{\mathsf{x}_i}+\gamma\delta_0,\qquad \gamma:=1-\sum_{i=1}^{\infty}\mathsf{x}_i
\end{equation*}
on $\left[ 0,1 \right]$, where by $\delta_s$ we denote the Dirac measure at a point $s$.
Denote by $\mathsf{q}_k=\mathsf{q}_k(\mathsf{x})$ the $k$th moment of the measure $\nu_\mathsf{x}$:
\begin{equation*}
\mathsf{q}_k(\mathsf{x}):=\int_0^1u^k\nu_\mathsf{x}(du)=\sum_{i=1}^{\infty}\mathsf{x}_i^{k+1},\qquad k=1,2,\dots.
\end{equation*}
Following \cite{Borodin2007}, we call $\mathsf{q}_1,\mathsf{q}_2,\dots$ the {\em{}moment coordinates\/}
of the point $\mathsf{x}\in\Omega_+$.
They are continuous functions on $\Omega_+$.\footnote{Observe
that the function $\mathsf{x}\mapsto\sum_{i=1}^{\infty}\mathsf{x}_i$ is not continuous in $\mathsf{x}\in\Omega_+$.}
Note that the functions $\mathsf{q}_1,\mathsf{q}_2,\dots$ are algebraically independent as functions
on $\Omega_+$. Clearly, any subcollection of $\left\{ \mathsf{q}_1,\mathsf{q}_2,\dots \right\}$ is also
algebraically independent.
As $\mathcal{F}$ we take
the subalgebra of the Banach algebra $C(\Omega_+)$ freely generated by the {\em{}even\/} moment coordinates:
\begin{equation*}
\mathcal{F}:=\mathbb{R}\left[ \mathsf{q}_2,\mathsf{q}_4,\mathsf{q}_6,\dots \right]\subset C(\Omega_+).
\end{equation*}
\begin{prop}\label{p6.6}
The functions $\mathsf{q}_2,\mathsf{q}_4,\mathsf{q}_6,\dots$ separate points of $\Omega_+$.
Moreover,
any infinite subcollection
of $\left\{ \mathsf{q}_1,\mathsf{q}_2,\dots \right\}$
also possesses this property.
\end{prop}
\begin{proof}
Let $\left\{ \mathsf{q}_{k_1},\mathsf{q}_{k_2},\dots \right\}$
be any infinite subcollection of $\left\{ \mathsf{q}_1,\mathsf{q}_2,\dots \right\}$.
It suffices to show that a point $\mathsf{x}\in\Omega_+$
is uniquely determined by the sequence
$\left\{ \mathsf{q}_{k_1}(\mathsf{x}),\mathsf{q}_{k_2}(\mathsf{x}),\dots \right\}$.
Observe that for every $m=0,1,2,\dots$ such that $\mathsf{x}_{m+1}>0$ we have
\begin{equation*}
\Big(\mathsf{q}_{k_n}(\mathsf{x})-\sum_{j=1}^{m}\mathsf{x}_j^{k_n+1}\Big)^{1/(k_n+1)}=
\mathsf{x}_{m+1}\cdot\Big(1+\sum_{i=m+2}^{\infty}
(\mathsf{x}_i/\mathsf{x}_{m+1})^{k_n+1}\Big)^{1/(k_n+1)}\to\mathsf{x}_{m+1}
\end{equation*}
as $n\to\infty$ (the convergence is pointwise); if $\mathsf{x}_{m+1}=0$, then the left-hand side also vanishes, so the convergence holds trivially.
Here if $m=0$, then by agreement there is no sum $\sum_{j=1}^{m}$ in the LHS.
Using this convergence, one can reconstruct the
coordinates
$\mathsf{x}_1,\mathsf{x}_2,\mathsf{x}_3,\dots$
one after another using the sequence
$\left\{ \mathsf{q}_{k_1}(\mathsf{x}),\mathsf{q}_{k_2}(\mathsf{x}),\dots \right\}$.
This concludes the proof.
\end{proof}
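The recursive reconstruction used in the proof is easy to illustrate numerically. In the sketch below (an illustration only; the point $\mathsf{x}$ and the moment index $k$ are arbitrary choices), the moments are computed in exact rational arithmetic so that subtracting the dominant terms loses no precision; only the final root extraction is done in floating point.

```python
from fractions import Fraction as F

# Recover the coordinates of a point of Omega_+ from a single high moment
# coordinate q_k(x) = sum_i x_i^{k+1}, following the proof above:
#   x_{m+1} is approximately (q_k(x) - x_1^{k+1} - ... - x_m^{k+1})^{1/(k+1)}.
x = [F(1, 2), F(3, 10), F(1, 10)]    # a sample point (trailing zero coordinates omitted)

def q(k):                            # the k-th moment coordinate of x
    return sum(xi ** (k + 1) for xi in x)

k = 199                              # one large moment index
recovered = []
for m in range(len(x)):
    tail = q(k) - sum(x[j] ** (k + 1) for j in range(m))  # exact rational tail
    recovered.append(float(tail) ** (1.0 / (k + 1)))

assert all(abs(r - float(xi)) < 1e-9 for r, xi in zip(recovered, x))
print(recovered)                     # approximately [0.5, 0.3, 0.1]
```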
Since the subalgebra $\mathcal{F}\subset C(\Omega_+)$ separates points and contains the function $1$, it is dense in $C(\Omega_+)$.
Next, recall the algebra $\Gamma=\mathbb{R}\left[ p_1,p_3,p_5,\dots \right]$ of
doubly symmetric functions
introduced in \S\ref{s2.1}.
Let $I:=(p_1-1)\Gamma$ be the principal ideal in $\Gamma$ generated by $(p_1-1)$. Set $\Gamma^\circ:=\Gamma/I$.
To every element $f\in\Gamma$ corresponds an image in $\Gamma^\circ$ denoted by $f^\circ$.
In particular, $p_1^\circ=1$, and $\Gamma^\circ$ is freely generated
(as a commutative unital algebra) by the elements $p_3^\circ,p_5^\circ,p_7^\circ,\dots$.
Observe also that
\begin{equation}\label{f83}
\Gamma=\mathbb{R}\left[ p_1,p_3,p_5,p_7,\dots \right]=I\oplus\mathbb{R}\left[ p_3,p_5,p_7,\dots \right],
\end{equation}
and hence $\mathcal{F}\cong\mathbb{R}\left[ p_3,p_5,p_7,\dots \right]$. We will use this fact below.
The correspondence
\begin{equation*}
p_{2k+1}^{\circ}\longleftrightarrow \mathsf{q}_{2k},\qquad k=1,2,\dots
\end{equation*}
establishes an isomorphism between the algebras $\Gamma^\circ$ and $\mathcal{F}$.
We will identify elements
$g\in\Gamma^\circ$ and the corresponding continuous functions $g(\mathsf{x})$ on $\Omega_+$.
Moreover, to every element $\phi\in\Gamma$ corresponds a continuous function
on $\Omega_+$. Denote this function by $\phi^\circ(\mathsf{x})$.
Equivalently, the map $\phi\mapsto\phi^\circ(\cdot)$ is determined by setting
\begin{equation*}
p_1^\circ(\mathsf{x}):\equiv 1,\qquad
p_{2k-1}^\circ(\mathsf{x}):=\sum_{i=1}^{\infty}\mathsf{x}_i^{2k-1},\qquad k=2,3,\dots.
\end{equation*}
\subsection{The limit theorem for coherent systems}\label{s6.4}
At this point it is convenient to formulate the following theorem
about coherent systems on $\mathbb{S}$.
\begin{thm}\label{p6.7}
Let $\left\{ M_n \right\}$ be a coherent system on $\mathbb{S}$ (see \S\ref{s1.3} for the definition).
Then the push-forward of the measure $M_n$ under the embedding $\iota_n$ (defined in \S\ref{s6.2})
weakly converges, as $n\to\infty$, to a probability measure $\mathsf{P}$ on $\Omega_+$.
The measure $\mathsf{P}$ is called the {\em{}boundary measure\/} of the system $\left\{ M_n \right\}$.
Conversely, any coherent system on $\mathbb{S}$ can be reconstructed from its boundary measure as follows:
\begin{equation*}
M_n(\lambda)=2^{-|\lambda|}\mathsf{h}(\lambda)\int_{\Omega_+}\mathcal{Q}_\lambda^\circ(\mathsf{x})\mathsf{P}(d\mathsf{x})\qquad\mbox{for all $n\in\mathbb{Z}_{>0}$ and $\lambda\in\mathbb{S}_n$}.
\end{equation*}
Here $\mathsf{h}(\lambda)$ is given by (\ref{f1}) and $\mathcal{Q}_\lambda^\circ$
is the image in $\Gamma^\circ$ of the doubly symmetric $\mathcal{Q}$-Schur function
defined in \S\ref{s2.1}.
\end{thm}
\begin{proof}
This theorem can be proved exactly as Theorem B of the paper
\cite{Kerov1998} with the two following changes:
instead of $\theta$-shifted Jack polynomials one should use
factorial Schur's $\mathcal{Q}$-functions, and
instead of \cite[Theorem 6.1]{Kerov1998}
one should refer to the relation (\ref{f36}) (proved in the paper by V.~Ivanov \cite{IvanovNewYork3517-3530}).
\end{proof}
To the multiplicative coherent system $\left\{ M_n^{\alpha} \right\}$ with parameter $\alpha\in\left( 0,+\infty \right)$
corresponds a measure $\mathsf{P}^{(\alpha)}$ on $\Omega_+$. We may call it the
{\em{}multiplicative boundary measure\/}.
\subsection{Doubling of shifted Young diagrams\\ and the Thoma simplex}\label{s8.1}
In this subsection we discuss the embedding
of $\Omega_+$
into the Thoma simplex~$\Omega$
introduced in \cite{GnedinIntern.Math.ResearchNotices2006Art.ID5196839pp.}
(see also \S\ref{s0.2}).
\subsubsection{Modified Frobenius coordinates and the Thoma simplex}
Here we recall some definitions from \cite[\S3.1 and \S3.3]{Borodin2007}.
Let $\sigma$ be an ordinary partition.\footnote{Ordinary partitions are
identified with ordinary Young diagrams as in \cite[Ch. I, \S1]{Macdonald1995}}
Denote by $a_1,\dots,a_k,b_1,\dots,b_k$ its
{\em{}modified Frobenius coordinates\/}. That is,
$k$ is the number of diagonal boxes in $\sigma$,
$a_i$ equals $\frac12$ plus the number of boxes in the $i$th row
to the right of the diagonal, and $b_j$ equals $\frac12$ plus the
number of boxes in the $j$th column below the diagonal.
We write $\sigma=\left( a_1,\dots,a_k\mid b_1,\dots,b_k \right)$.
Note that $\sum(a_i+b_i)=|\sigma|$, the number of boxes in the diagram $\sigma$.
Note also that each of the sequences $\left\{ a_1,\dots,a_k \right\}$ and
$\left\{ b_1,\dots,b_k \right\}$ is strictly decreasing.
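As a concrete illustration (ours, not part of the original text), the modified Frobenius coordinates are easy to compute; the following Python sketch uses exact half-integers, and the function name is illustrative.

```python
from fractions import Fraction

def modified_frobenius(sigma):
    """Modified Frobenius coordinates (a_1,...,a_k | b_1,...,b_k) of an
    ordinary partition sigma, given as a weakly decreasing tuple of rows."""
    # k = number of diagonal boxes
    k = sum(1 for i, row in enumerate(sigma) if row > i)
    # column lengths of sigma (the conjugate partition)
    cols = [sum(1 for row in sigma if row > j) for j in range(sigma[0])] if sigma else []
    a = [Fraction(2 * (sigma[i] - i) - 1, 2) for i in range(k)]  # arm length + 1/2
    b = [Fraction(2 * (cols[j] - j) - 1, 2) for j in range(k)]   # leg length + 1/2
    return a, b
```

For $\sigma=(4,3,1)$ this gives $\sigma=\left( 7/2,3/2\mid 5/2,1/2 \right)$, and indeed $\sum(a_i+b_i)=8=|\sigma|$, with both coordinate sequences strictly decreasing.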
Recall that by $\mathbb{Y}_n$ we denote the set of all ordinary partitions
of weight $n$.
Let $\Omega$
be the {\em{}Thoma simplex\/}, that is, the space of couples
$(\omega;\omega')\in\left[ 0,1 \right]^{\infty}\times\left[ 0,1 \right]^{\infty}$
satisfying the following conditions:
\begin{equation*}
\omega_1\ge\omega_2\ge\dots\ge0,\qquad
\omega'_1\ge\omega'_2\ge\dots\ge0,\qquad
\sum_{i}\omega_i+
\sum_{j}\omega'_j\le1.
\end{equation*}
Here $\left[ 0,1 \right]^{\infty}$ is equipped with the product topology
and hence the space
$\Omega$ is a compact subset of $\left[ 0,1 \right]^{\infty}\times\left[ 0,1 \right]^{\infty}$.
Consider for all $n\in\mathbb{Z}_{>0}$ embeddings:
\begin{equation*}
\hat\iota_n\colon\mathbb{Y}_n\hookrightarrow\Omega,\qquad \sigma\mapsto
\left( \frac{a_1}n,\dots,\frac{a_k}n,0,\dots;
\frac{b_1}n,\dots,\frac{b_k}n,0,\dots\right),
\end{equation*}
where $\sigma=(a_1,\dots,a_k\mid b_1,\dots,b_k)$ is written in terms of the modified Frobenius
coordinates.
\subsubsection{Doubling of shifted Young diagrams}
Let $\lambda=(\lambda_1,\dots,\lambda_{\ell})$ be a shifted Young diagram.
By $\mathsf{D}\lambda$ denote the {\em{}double\/} of $\lambda$,
that is, the ordinary Young diagram that is
written in the modified Frobenius coordinates as
$$
\mathsf{D}\lambda=\left( \lambda_1+\frac12,\dots,\lambda_\ell+\frac12\mid\lambda_1-\frac12,\dots,\lambda_{\ell}-\frac12 \right),
$$
see \cite[Ch.~I,~\S1, Example~9]{Macdonald1995}.\footnote{By
agreement, $\mathsf{D}\varnothing=\varnothing$.}
In \cite{Hoffman1992} this object is called the
{\em{}shift-symmetric diagram\/} associated to
a strict partition $\lambda$.
The number of boxes in $\mathsf{D}\lambda$ clearly equals
twice the number of boxes in $\lambda$.
In this way we obtain embeddings $\mathbb{S}_n\hookrightarrow \mathbb{Y}_{2n}$ for all $n\in\mathbb{Z}_{\ge0}$
(and hence the whole Schur graph is embedded into the Young graph).
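In code (an illustrative sketch of ours), the double is read off directly from the modified Frobenius coordinates:

```python
from fractions import Fraction

def double(lam):
    """Modified Frobenius coordinates (a | b) of the double D(lam) of a
    shifted Young diagram lam = (lam_1 > lam_2 > ... > lam_l > 0)."""
    half = Fraction(1, 2)
    a = [x + half for x in lam]  # a_i = lam_i + 1/2
    b = [x - half for x in lam]  # b_i = lam_i - 1/2
    return a, b
```

For example, $\mathsf{D}(3,1)=\left( 7/2,3/2\mid 5/2,1/2 \right)$, which is the ordinary diagram $(4,3,1)$ with $2\cdot|(3,1)|=8$ boxes.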
\subsubsection{The embedding $T\colon\Omega_+\hookrightarrow\Omega$}
The sets $\hat\iota_n(\mathbb{Y}_n)$ approximate the Thoma simplex $\Omega$
in the same sense as the sets $\iota_n(\mathbb{S}_n)$ approximate $\Omega_+$ (see \S\ref{s6.2}).
Thus, it is natural to consider the following ``limit''
of the embeddings $\mathbb{S}_n\hookrightarrow\mathbb{Y}_{2n}$
as $n\to\infty$:
\begin{equation*}
T\mathsf{x}=(\omega;\omega')=\left(
\frac{\mathsf{x}_1}2,\frac{\mathsf{x}_2}2,\dots;
\frac{\mathsf{x}_1}2,\frac{\mathsf{x}_2}2,\dots\right),\qquad \mathsf{x}=(\mathsf{x}_1,\mathsf{x}_2,\dots)\in\Omega_+.
\end{equation*}
The image of $\Omega_+$ is the whole diagonal subset
$\left\{ (\omega,\omega')\colon\omega=\omega' \right\}$
of $\Omega$. Moreover, $T$ is a homeomorphism
between $\Omega_+$ and this subset. The points $\mathsf{x}\in\Omega_+$ such that $\sum\mathsf{x}_i=1$ map
to the points $(\omega;\omega)$ such that $\sum(\omega_i+\omega_i)=1$.
This embedding $T$ was introduced
in \cite[\S8.6]{GnedinIntern.Math.ResearchNotices2006Art.ID5196839pp.}.
The property that $T\colon\Omega_+\hookrightarrow\Omega$
is a limit in some sense of the embeddings $\mathbb{S}_n\hookrightarrow\mathbb{Y}_{2n}$
may be expressed as follows:
\begin{prop}
Let $\left\{ \lambda(n) \right\}$, $n=1,2,\dots$, be a sequence of shifted Young diagrams,
$\lambda(n)\in\mathbb{S}_n$, such that, as $n\to\infty$, the points $\iota_n(\lambda(n))$
tend to some point $\mathsf{x}\in\Omega_+$.
Then the points $\hat\iota_{2n}(\mathsf{D}\lambda(n))$ tend to $T\mathsf{x}\in\Omega$.
\end{prop}
\begin{proof}
Clearly, $T\iota_n(\lambda(n))\to T\mathsf{x}$ as $n\to\infty$.
For any $\mu=(\mu_1,\dots,\mu_\ell)\in\mathbb{S}_n$ we have
\begin{equation*}
T\iota_n(\mu)=\left(
\frac{\mu_1}{2n},\dots,\frac{\mu_\ell}{2n},0,\dots;
\frac{\mu_1}{2n},\dots,\frac{\mu_\ell}{2n},0,\dots\right)
\end{equation*}
and
\begin{equation*}
\hat\iota_{2n}(\mathsf{D}\mu)=\left(\frac{\mu_1+1/2}{2n},\dots,\frac{\mu_\ell+1/2}{2n},0,\dots;
\frac{\mu_1-1/2}{2n},\dots,\frac{\mu_\ell-1/2}{2n},0,\dots\right).
\end{equation*}
To conclude the proof observe that
\begin{equation*}
\Big( \underbrace{\frac1{2n},\dots,\frac1{2n}}_n,0,\dots;
\underbrace{\frac1{2n},\dots,\frac1{2n}}_n,0,\dots\Big)\to0,\qquad n\to\infty
\end{equation*}
in the topology of $\Omega$.
\end{proof}
Informally, one can say that the previous Proposition states
\begin{equation*}
T\iota_n(\lambda)\approx \hat\iota_{2n}(\mathsf{D}\lambda),\qquad \lambda\in\mathbb{S}_n.
\end{equation*}
\subsubsection{Symmetric Thoma's measures}
Here we use the above construction to give another proof of the first claim
of Proposition \ref{p6.6}.
Let us recall the definition
of the moment coordinates on the Thoma simplex
\cite[\S3.4]{Borodin2007}.
To every point $(\omega;\omega')\in\Omega$ one can assign the following
probability measure on $\left[ -1,1 \right]$:
\begin{equation*}
\hat\nu_{(\omega;\omega')}:=
\sum_{i=1}^{\infty}\omega_i\delta_{\omega_i}+
\sum_{i=1}^{\infty}\omega_i'\delta_{-\omega_i'}+
\hat\gamma(\omega;\omega')\delta_0,
\end{equation*}
where $\hat\gamma(\omega;\omega')=1-\sum_{i=1}^{\infty}(\omega_i+\omega'_i)$
and $\delta_s$ denotes the Dirac measure at a point~$s$.
This measure is called {\em{}Thoma's measure\/}.
The moments of Thoma's measure $\hat\nu_{(\omega;\omega')}$
are called the moment coordinates of $(\omega;\omega')\in\Omega$:
\begin{equation*}
\hat\mathsf{q}_m(\omega;\omega')=\sum_{i=1}^{\infty}\omega_i^{m+1}+
(-1)^{m}\sum_{j=1}^{\infty}(\omega_j')^{m+1},\qquad m=1,2,\dots
\end{equation*}
(compare these
definitions of $\hat\nu_{(\omega;\omega')}$
and $\hat\mathsf{q}_m(\omega;\omega')$
to the definitions
of $\nu_\mathsf{x}$ and $\mathsf{q}_m(\mathsf{x})$ from~\S\ref{s6.3}).
If $\mathsf{x}\in\Omega_+$, then Thoma's measure $\hat\nu_{T\mathsf{x}}$ is symmetric
with respect to the origin. Hence the odd moments of $\hat\nu_{T\mathsf{x}}$ vanish.
More precisely,
\begin{equation*}
\hat\mathsf{q}_m(T\mathsf{x})=\left\{
\begin{array}{ll}
2^{-m}\mathsf{q}_m(\mathsf{x}),&\mbox{if $m$ is even};\\
0,& \mbox{if $m$ is odd}.
\end{array}
\right.
\end{equation*}
Here $\mathsf{q}_m(\mathsf{x})$ are the moment coordinates on $\Omega_+$.
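These relations are easy to verify numerically for a finitely supported point; the following Python fragment (ours, with illustrative names, taking $\mathsf{q}_m(\mathsf{x})=\sum_i\mathsf{x}_i^{m+1}$) checks them for $\mathsf{x}=(0.6,0.4)$.

```python
def q(m, x):
    # moment coordinate q_m on Omega_+: the m-th moment of nu_x
    return sum(t ** (m + 1) for t in x)

def q_hat(m, omega, omega_p):
    # moment coordinate on the Thoma simplex: the m-th moment of Thoma's measure
    return (sum(t ** (m + 1) for t in omega)
            + (-1) ** m * sum(t ** (m + 1) for t in omega_p))

x = (0.6, 0.4)                # finitely supported point with sum(x) = 1
Tx = tuple(t / 2 for t in x)  # T x = (x/2; x/2)
```

One checks that $\hat{\mathsf{q}}_2(T\mathsf{x})=2^{-2}\mathsf{q}_2(\mathsf{x})$ while $\hat{\mathsf{q}}_1(T\mathsf{x})=\hat{\mathsf{q}}_3(T\mathsf{x})=0$, as stated.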
A probability measure on $\left[ -1,1 \right]$
is uniquely determined by its moments. Hence the functions $\hat\mathsf{q}_1,\hat\mathsf{q}_2,\dots$
separate points of $\Omega$.
It follows that a point $T\mathsf{x}\in\Omega$ (where $\mathsf{x}\in\Omega_+$) is uniquely determined by its
moment coordinates
$\hat\mathsf{q}_2(T\mathsf{x}),\hat\mathsf{q}_4(T\mathsf{x}),\hat\mathsf{q}_6(T\mathsf{x}),\dots$ (it suffices to
take only even coordinates because the
odd coordinates vanish).
This is the same as to say that a point $\mathsf{x}\in\Omega_+$ is uniquely determined
by its even moment coordinates
$\mathsf{q}_2(\mathsf{x}),\mathsf{q}_4(\mathsf{x}),\mathsf{q}_6(\mathsf{x}),\dots$.
Hence
the first claim of Proposition \ref{p6.6}
holds.
\subsection{Convergence of generators}\label{s6.5}
In this subsection
we prove the convergence of the operators
$n^2(T_n-{\bf1})$ to
an operator $A\colon \mathcal{F}\to\mathcal{F}$.
In the next subsection, using this convergence, we apply the abstract Theorem
\ref{p6.4} to our situation and prove the convergence
of Markov chains
corresponding to $n^2(T_n-{\bf1})$ to the Markov process $\X\alpha(t)$ in $\Omega_+$.
\begin{prop}\label{p6.8}
In the sense of Definition \ref{p6.1} we have
\begin{equation*}
n^2(T_n-\mathbf{1})\pi_nf\to Af\qquad\mbox{for all $f\in\mathcal{F}$},
\end{equation*}
where the operator $A\colon\mathcal{F}\to\mathcal{F}$ can be written in one of the two following ways:
{\rm{}(1)} As a formal differential operator\footnote{See also Remark
\ref{p40.2} about formal differential operators in polynomial algebras.}
in the algebra $\mathcal{F}=\mathbb{R}\left[ \mathsf{q}_2,\mathsf{q}_4,\mathsf{q}_6,\dots \right]$:
\begin{equation}\label{f85}
\left.
\begin{array}{l}
\displaystyle
A=\sum_{i,j=1}^{\infty}(2i+1)(2j+1)\left( \mathsf{q}_{2i+2j}-\mathsf{q}_{2i}\mathsf{q}_{2j} \right)
\frac{\partial^2}{\partial\mathsf{q}_{2i}\partial\mathsf{q}_{2j}}\\\displaystyle\qquad+
2\sum_{i,j=0}^{\infty}\left( 2i+2j+3 \right)\mathsf{q}_{2i}\mathsf{q}_{2j}\frac{\partial}{\partial \mathsf{q}_{2i+2j+2}}
-\sum_{i=1}^{\infty}(2i+1)\left( 2i+\frac\al2 \right)
\mathsf{q}_{2i}\frac{\partial}{\partial\mathsf{q}_{2i}},
\end{array}
\right.
\end{equation}
where, by agreement, $\mathsf{q}_0:=1$;
{\rm{}(2)}
As an operator acting on functions $\mathcal{Q}_\mu^\circ\in\mathcal{F}\subset C(\Omega_+)$:\footnote{Observe that
the functions $\mathcal{Q}_\mu^\circ\in\Gamma^\circ\cong\mathcal{F}$, $\mu\in\mathbb{S}$,
are not linearly independent. However, their linear span is $\mathcal{F}$ because
the system $\left\{ \mathcal{Q}_\mu \right\}_{\mu\in\mathbb{S}}$ is a basis for $\Gamma$.
Nevertheless, from the claim (1)
it follows that the formula (\ref{f86})
for
the action of $A$ on $\mathcal{Q}_\mu^\circ$, $\mu\in\mathbb{S}$,
is consistent.}
\begin{equation}\label{f86}
\displaystyle
A\mathcal{Q}_\mu^\circ=-|\mu|(|\mu|+\alpha/2-1)\mathcal{Q}_\mu^\circ+
\sum_{y\in Y(\mu)}\big(y(y+1)+\alpha\big)\mathcal{Q}_{\mu-\square(y)}^\circ.
\end{equation}
\end{prop}
First, we prove two Lemmas.
\begin{lemma}\label{p6.9}
Let $\phi\in\Gamma$ and $\deg\phi\le m-1$ for some $m\in\mathbb{Z}_{>0}$. Then
\begin{equation*}
\frac1{n^m}\phi_n\to0,\qquad n\to\infty
\end{equation*}
in the sense of Definition \ref{p6.1}.\footnote{Recall that
$(\cdots)_n$ denotes the restriction of a function from the algebra
$\Gamma\subset\mathrm{Fun}(\mathbb{S})$
to the subset $\mathbb{S}_n\subset\mathbb{S}$.}
\end{lemma}
\begin{proof}
The convergence to zero means that
\begin{equation*}
\sup_{\lambda\in\mathbb{S}_n}\frac1{n^m}\left|\phi_n(\lambda)\right|\to0,\qquad n\to\infty.
\end{equation*}
Observe that
for all $\lambda\in\mathbb{S}_n$
we have
$\lambda_i\le n$, $i=1,\dots,\ell(\lambda)$.
Hence $\left|\phi_n(\lambda)\right|\le \mathrm{Const}\cdot n^{m-1}$.
\end{proof}
Let $G$ denote the operator $\sum_{i=1}^{\infty}(2i-1)p_{2i-1}\frac{\partial}{\partial p_{2i-1}}$ in the algebra $\Gamma$.
In other words,
on the homogeneous component of degree $m$ of the algebra $\Gamma$, $m\in\mathbb{Z}_{\ge0}$,
the operator $G$ acts as multiplication by $m$.
For all $s>0$, let the operator $s^G\colon\Gamma\to\Gamma$ be the automorphism of $\Gamma$ which acts as
multiplication by $s^m$ on the $m$th homogeneous component of $\Gamma$.
Recall the maps $\pi_n\colon C(\Omega_+)\to\mathrm{Fun}(\mathbb{S}_n)$ defined in \S\ref{s6.2}.
\begin{lemma}\label{p6.10}
Let $g\in\Gamma$ and $f=g^\circ\in\mathcal{F}$. For all $n\in\mathbb{Z}_{>0}$ we have
\begin{equation*}
\pi_nf=\left( n^{-G}g \right)_n.
\end{equation*}
\end{lemma}
\begin{proof}
Fix $\lambda\in\mathbb{S}_n$.
Consider the homomorphism $\Gamma\to\mathbb{R}$
defined on the generators as follows:
\begin{equation*}
p_{2m-1}\to\frac1{n^{2m-1}}{\sum_{i=1}^{\ell(\lambda)}\lambda_i^{2m-1}},\quad m=1,2,3,\dots.
\end{equation*}
On one hand, this homomorphism is a composition of the automorphism $n^{-G}$ of $\Gamma$
and the map $\Gamma\to\mathbb{R}$, $\phi\mapsto \phi_n(\lambda)$, hence
$g\in\Gamma$ maps to $(n^{-G}g)_n(\lambda)$.
On the other hand, since $p_1$ maps to $1$, this homomorphism can be viewed as a composition
of the canonical map $\Gamma\to\Gamma^\circ$ and the map $\Gamma^\circ\to\mathbb{R}$, $\psi\mapsto\psi(\iota_n(\lambda))$
(here $\iota_n$ is defined in \S\ref{s6.2}).
Hence $g\in\Gamma$ maps to $f(\iota_n(\lambda))$.
This concludes the proof.
\end{proof}
\par\noindent{\em{}Proof of Proposition \ref{p6.8}.\/}
Fix arbitrary $f\in\mathcal{F}$.
Let $g\in\Gamma$ be such that $f=g^\circ$.
Theorem \ref{p5.1} and Lemma \ref{p6.10} imply
\begin{equation}\label{f90}
n^2(T_n-{\bf1})\pi_n(f)=
n^2(T_n-{\bf1})\left( n^{-G}g \right)_n=
\frac{n^2}{(n+\alpha/2)(n+1)}\big(\widetilde Bn^{-G}g\big)_n,
\end{equation}
where $\widetilde B$ is a zero
degree operator in the algebra $\Gamma$
with the top degree homogeneous part $B\colon\Gamma\to\Gamma$
given by (\ref{f75}).
Because $B-\widetilde B$ has degree at most $-1$, we can
replace $\widetilde B$ by $B$ in (\ref{f90});
this will affect only negligible terms.\footnote{One can argue as follows.
Without loss of generality, assume that $g$ is homogeneous of degree $m\in\mathbb{Z}_{>0}$.
Thus, $n^{-G}g=n^{-m}g$. Moreover, $\deg(B-\widetilde B)g\le m-1$ and hence
by Lemma \ref{p6.9},
$\big( (B-\widetilde B)n^{-G}g\big)_n=\frac1{n^m}\big( (B-\widetilde B)g\big)_n\to 0$, $n\to\infty$.}
We can also remove the
factor $\frac{n^2}{(n+\alpha/2)(n+1)}$.
Thus, we have
\begin{equation*}
n^2(T_n-{\bf1})\pi_n(f)-(Bn^{-G}g)_n\to0,\qquad n\to\infty.
\end{equation*}
The operator $B\colon \Gamma\to\Gamma$ is homogeneous, therefore, $Bn^{-G}=n^{-G}B$, and
by Lemma \ref{p6.10} we have
\begin{equation*}
\begin{array}{l}\displaystyle
(Bn^{-G}g)_n=(n^{-G}Bg)_n=
\pi_n\big( (Bg)^{\circ} \big)
\end{array}
\end{equation*}
Recall that the operator $B\colon\Gamma\to\Gamma$ commutes with the multiplication by $p_1$
(Corollary \ref{p5.2}), therefore,
it induces an operator $A\colon\mathcal{F}\to\mathcal{F}$,
$f\mapsto (Bg)^\circ$, where $g\in\Gamma$ is such that $f=g^{\circ}$.
Clearly, $Af$ does not depend on the choice of $g$.
Since $\mathcal{F}\cong\mathbb{R}\left[ p_3,p_5,\dots \right]$,
we get (\ref{f85}) from (\ref{f75})
by replacing $p_1$ by $1$ and each $p_{2m+1}$ by $\mathsf{q}_{2m}$, $m\in\mathbb{Z}_{>0}$.
It remains to prove (\ref{f86}).
Fix $\mu\in\mathbb{S}$. Multiply (\ref{f32}) by $n^{-|\mu|}n^2$:
\begin{equation*}
\begin{array}{l}\displaystyle
n^{-|\mu|}n^2(T_n-1)(\mathcal{Q}_\mu^*)_n\\\displaystyle\qquad=
\frac{n^2}{(n+\alpha/2)(n+1)}
\Bigg[-|\mu|(|\mu|+\alpha/2-1)n^{-|\mu|}(\mathcal{Q}_\mu^*)_n\phantom{\Bigg].}\\\displaystyle\qquad\qquad
+\frac{n-|\mu|+1}{n}\sum_{y\in Y(\mu)}\big(y(y+1)+\alpha\big)n^{-|\mu|+1}(\mathcal{Q}_{\mu-\square(y)}^*)_n
\Bigg].
\end{array}
\end{equation*}
Since $\deg(\mathcal{Q}_{\lambda}^*-\mathcal{Q}_\lambda)\le|\lambda|-1$ (see \S\ref{s2.1}), each
function of the form $(\mathcal{Q}_\lambda^*)_n$ in the RHS can be replaced by $(\mathcal{Q}_\lambda)_n$;
this affects only negligible terms. We can also remove the fractions containing $n$. Thus,
\begin{equation*}
\begin{array}{l}\displaystyle
\Big(n^{-|\mu|}n^2(T_n-1)(\mathcal{Q}_\mu)_n+
|\mu|(|\mu|+\alpha/2-1)n^{-|\mu|}(\mathcal{Q}_\mu)_n
\\\qquad\qquad\qquad\displaystyle
-\sum_{y\in Y(\mu)}\big(y(y+1)+\alpha\big)n^{-|\mu|+1}(\mathcal{Q}_{\mu-\square(y)})_n
\Big)\to0,\qquad n\to\infty
\end{array}
\end{equation*}
in the sense of Definition \ref{p6.1}.
To conclude the proof of Proposition \ref{p6.8}
observe that $n^{-|\lambda|}(\mathcal{Q}_\lambda)_n=\left( n^{-G}\mathcal{Q}_\lambda \right)_n=\pi_n(\mathcal{Q}_\lambda^\circ)$.
\qed
\subsection{Convergence of semigroups\\ and the existence of the process}\label{s6.6}
To apply Theorem \ref{p6.4} to our situation
and finally get
the existence of the process $\X\alpha(t)$ in $\Omega_+$
it remains to prove the following:
\begin{lemma}\label{p6.11}
{\rm{}(1)\/}
The operator $A\colon\mathcal{F}\to\mathcal{F}$ from Proposition \ref{p6.8} is dissipative.
{\rm{}(2)\/} For all $s>0$, the range of $s{\bf1}-A$ is dense in $C(\Omega_+)$.
\end{lemma}
\begin{proof} (cf. \cite[Proof of Proposition 1.4]{Borodin2007})
(1)
Consider the filtration of the algebra $\mathcal{F}=\Gamma/(p_1-1)\Gamma$ inherited from
the natural filtration (\ref{f45}) of
$\Gamma$:
\begin{equation*}
\mathcal{F}=\bigcup_{m=0}^{\infty}\mathcal{F}^{m},\qquad \mathcal{F}^0\subset\mathcal{F}^1\subset\mathcal{F}^2\subset\dots\subset\mathcal{F}.
\end{equation*}
It is clear that for fixed $m$
the operator $\pi_n\colon C(\Omega_+)\to\mathrm{Fun}(\mathbb{S}_n)$
is injective on $\mathcal{F}^m$ for all $n$ large enough (this is true because each $\mathcal{F}^m$ is finite-dimensional
and the spaces $\iota_n(\mathbb{S}_n)$ approximate the space $\Omega_+$, see \S\ref{s6.2}).
Thus, we can identify $\mathcal{F}^m$ with $\pi_n(\mathcal{F}^m)$ for such $n$.
It follows from (\ref{f32}) that
the operator $T_n$ does not increase the degree of functions.
Therefore,
we can think of $T_n$ (and, clearly, of $n^2(T_n-{\bf1})$)
as an operator in $\mathcal{F}^m$.
The convergence $n^2(T_n-\mathbf{1})\to A$
established in Proposition~\ref{p6.8} implies that $n^2(T_n-\mathbf{1})$ converges
to $A$ in every finite-dimensional space $\mathcal{F}^m$, $m\in\mathbb{Z}_{\ge0}$.
Fix $m\in\mathbb{Z}_{\ge0}$.
For all $n$ large enough the operator
$n^2(T_n-\mathbf{1})$ (viewed as an operator in $\mathcal{F}^m$)
is dissipative with respect to the norm of $\mathrm{Fun}(\mathbb{S}_n)$
because the operator $T_n$ is a transition operator of a Markov chain.
Since the norms of $\mathrm{Fun}(\mathbb{S}_n)$ converge to the norm of $C(\Omega_+)$
(in the sense of (\ref{f80})),
we conclude that $A$ is dissipative.
(2) For every $m$ and $s>0$ the operator $s{\bf1}-A$ maps $\mathcal{F}^m$ onto itself
(this fact can be derived either from (\ref{f86})
or from the above proof of the claim (1) of the present Lemma).
Thus, $(s{\bf1}-A)\mathcal{F}=\mathcal{F}$ and therefore $s{\bf1}-A$ has a dense range.
\end{proof}
Now, from
Theorem \ref{p6.4}
it follows that the operator $A$ (given by Proposition \ref{p6.8})
is closable in $C(\Omega_+)$
and its closure generates a strongly
continuous contraction semigroup $\left\{ T(t) \right\}_{t\ge0}$.
We also have the convergence of semigroups $\left\{ \mathbf{1},T_n,T_n^2,\dots \right\}$
to $\left\{ T(t) \right\}$ in the sense of (\ref{f81}).
Hence the
semigroup $\left\{ T(t) \right\}$
preserves the cone of nonnegative functions and the
constant
function $1$ because each $T_n$ possesses this property.
From \cite[Chapter 4, Theorem 2.7]{Ethier1986} it follows that the semigroup $\left\{ T(t) \right\}$
gives rise to a strong Markov process $\X{\alpha}(t)$ in $\Omega_+$. This process
has c\`adl\`ag sample paths and can start from any point and any probability distribution on $\Omega_+$.
The operator $A$ is called the {\em{}pre-generator\/} of the process $\X{\alpha}(t)$, $t\ge0$.
\subsection{Some properties of the process $\X{\alpha}(t)$}\label{s70.7}
Here we state some properties of the process $\X{\alpha}(t)$.
They follow from the construction of
$\X{\alpha}(t)$ and from the formulas (\ref{f85}) and (\ref{f86})
for its pre-generator.
We do not give the proofs because they are similar to
those from the paper \cite{Borodin2007}.
\begin{rmk}\rm{}\label{p90.1}
The formula
(\ref{f86})
for the action of the pre-generator
$A$ on functions $\mathcal{Q}_\mu^\circ\in\mathcal{F}\subset C(\Omega_+)$, $\mu\in\mathbb{S}$,
is not formally necessary
for the convergence of the up/down Markov chains from Definition \ref{p10.6}
to a continuous time Markov process in $\Omega_+$,
as well as for the properties
of the limit diffusion $\X\alpha(t)$ that are listed
in this subsection.\footnote{However,
(\ref{f86}) allows one to argue in a more straightforward way at some
points. It can be also interesting to compare this formula
with similar ones
in other models
(namely, \cite[(5.1)]{Borodin2007} and \cite[(14)]{Petrov2007}).}
Indeed, from (\ref{f85}) one can easily obtain that
\begin{equation}\label{f8080}
Af=-m(m-1+\alpha/2)f+g,\qquad\mbox{where $g\in\mathcal{F}^{m-1}$}
\end{equation}
for all $f\in \mathcal{F}^m$, $m\in\mathbb{Z}_{\ge0}$.
In other words, the action of $A$ on $\mathcal{F}^m$
up to lower degree terms (that is, terms from $\mathcal{F}^{m-1}$)
is the multiplication
by $-m(m-1+\alpha/2)$.
It suffices to check (\ref{f8080}) for $f=p_{\sigma_1}^\circ\dots p_{\sigma_\ell}^\circ$,
where $p_i$'s are the Newton power sums and $\sigma=(\sigma_1,\dots,\sigma_\ell)$ is an odd partition
without parts equal to one. Indeed, for each $m\in\mathbb{Z}_{\ge0}$
the functions of this form
with $|\sigma|\le m$
constitute a basis for $\mathcal{F}^m$.
For such $f$
the relation (\ref{f8080})
can be easily checked directly
using (\ref{f85})
(note that $p^{\circ}_{2i+1}=\mathsf{q}_{2i}$, $i\ge1$).
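For instance, take $f=p_3^\circ=\mathsf{q}_2$, so that $m=3$ (the following check is ours, spelled out from (\ref{f85})): the second-order terms of (\ref{f85}) annihilate $\mathsf{q}_2$, the middle sum contributes only its $i=j=0$ term, and the last sum only its $i=1$ term, whence

```latex
\begin{equation*}
A\mathsf{q}_2
=2\cdot3\,\mathsf{q}_0\mathsf{q}_0-3\left( 2+\frac\al2 \right)\mathsf{q}_2
=-3\left( 3-1+\frac\al2 \right)\mathsf{q}_2+6,
\end{equation*}
```

that is, (\ref{f8080}) holds with $m=3$ and $g=6\in\mathcal{F}^{m-1}$ (recall that $\mathsf{q}_0=1$).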
\end{rmk}
\medskip
{\bf{}Continuity of sample paths.\/}
{\em{}The process $\X{\alpha}(t)$ has continuous sample paths.\/}
\smallskip
This is proved exactly as in
\cite[Coroll. 6.4 and Thm. 7.1]{Borodin2007}; the proof uses the expression (\ref{f85}) for $A$.
\medskip
{\bf{}The invariant symmetrizing measure.\/}
{\em{}The multiplicative boundary measure $\mathsf{P}^{(\alpha)}$ defined in Theorem \ref{p6.7}
is an invariant measure for $\X{\alpha}(t)$. The process is reversible with
respect to $\mathsf{P}^{(\alpha)}$.\/}
\smallskip
This follows from the facts that:
\begin{itemize}
\item for all $n\in\mathbb{Z}_{>0}$ the measure $M_n^{\alpha}$ is an
invariant symmetrizing distribution for the Markov chain $T_n$ (see \S\ref{s1});
\item the measures $M_n^\alpha$ approximate the measure $\mathsf{P}^{(\alpha)}$ in the sense of Theorem \ref{p6.7};
\item the chains $T_n$ approximate the process $\X{\alpha}(t)$ (see \S\ref{s6.6}).
\end{itemize}
See also \cite[Prop. 1.6, 1.7, Thm. 7.3 (2)]{Borodin2007}.
\medskip
{\bf{}Convergence of finite-dimensional distributions\/}
(Cf. \cite[Prop. 1.8]{Borodin2007}).
{\em{}Let $\X{\alpha}(t)$ and all the chains $T_n$ be viewed in equilibrium (that is, starting
from the invariant distribution). Then the finite-dimensional distributions for the
$n$th chain converge, as $n\to\infty$, to the corresponding finite-dimensional distributions
of the process $\X{\alpha}(t)$. Here we assume a natural scaling of time: one step of the $n$th
Markov chain $T_n$ corresponds to a small time interval $\Delta t\sim 1/n^2$.\/}
\medskip
{\bf{}The spectrum of the Markov generator in $L^2(\Omega_+,\mathsf{P}^{(\alpha)})$.\/}
{\em{}The space $\mathcal{F}$
viewed as the subspace of the Hilbert space
$L^2(\Omega_+,\mathsf{P}^{(\alpha)})$
is decomposed into the orthogonal
direct sum of eigenspaces of the operator $A\colon\mathcal{F}\to\mathcal{F}$.
The eigenvalues of $A$ are
\begin{equation}\label{f99}
\left\{ 0 \right\}\cup\left\{ -m\left( m-1+\frac\al2 \right) \colon m=2,3,\dots\right\}.
\end{equation}
The eigenvalue $0$ is simple, and the multiplicity of the eigenvalue $-m\left( m-1+\frac\al2 \right)$
is equal to the number of odd
partitions of $m$ without parts equal to one, that is, to the number of solutions of the equation
\begin{equation*}
3n_3+5n_5+7n_7+\ldots=m
\end{equation*}
in nonnegative integers.\/}
\smallskip
The reversibility property of the process $\X{\alpha}(t)$ implies that
the operator
$A$ is symmetric with respect to the inner product inherited from $L^2(\Omega_+,\mathsf{P}^{(\alpha)})$.
Moreover, $A$
preserves the filtration of $\mathcal{F}$ (defined in \S\ref{s6.6}).
Indeed, this follows from the expression (\ref{f86}) for the pre-generator.
The fact that the eigenvalues of $A$ are described by (\ref{f99}) follows from (\ref{f86}). Indeed,
$\mathcal{Q}_\mu^\circ\in\mathcal{F}^{|\mu|}$ for all $\mu\in\mathbb{S}$,
and (\ref{f86}) is rewritten as
$A\mathcal{Q}_\mu^\circ=-|\mu|(|\mu|-1+\alpha/2)\mathcal{Q}_\mu^\circ+g$, where $g\in\mathcal{F}^{|\mu|-1}$.
Thus, the eigenvalue $0$ is simple, and the multiplicity of each $-m(m-1+\alpha/2)$ is
equal to $(\dim\mathcal{F}^m-\dim\mathcal{F}^{m-1})$,
$m\in\mathbb{Z}_{>0}$.
Since $\mathcal{F}\cong\mathbb{R}\left[ p_3,p_5,p_7,\dots \right]$, finite products of the form
$p_3^{r_3}p_5^{r_5}p_7^{r_7}\dots$ constitute a linear basis for $\mathcal{F}$. This basis is compatible
with the filtration $\left\{ \mathcal{F}^m \right\}$.
Hence $(\dim\mathcal{F}^m-\dim\mathcal{F}^{m-1})$ is equal to
the number of basis vectors of degree $m$, $m\in\mathbb{Z}_{>0}$, which is exactly the number
of odd partitions of $m$ without parts equal to one.
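For small $m$ these multiplicities are quickly tabulated; the following Python sketch (ours) counts partitions of $m$ into odd parts $\ge3$ by a standard generating-function recursion.

```python
def multiplicity(m):
    """Number of solutions of 3*n3 + 5*n5 + 7*n7 + ... = m in
    nonnegative integers (= odd partitions of m without parts equal to one)."""
    ways = [1] + [0] * m
    for part in range(3, m + 1, 2):   # odd parts 3, 5, 7, ...
        for s in range(part, m + 1):
            ways[s] += ways[s - part]
    return ways[m]

# m = 2, 3, ..., 10 gives 0, 1, 0, 1, 1, 1, 1, 2, 2
```

In particular, $m=9$ is the smallest $m$ with multiplicity $2$ (the corresponding basis vectors being $p_9$ and $p_3^3$).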
\medskip
{\bf{}The uniqueness of the invariant measure\/}
(Cf. \cite[Thm. 7.3 (1)]{Borodin2007}).
{\em{}The measure $\mathsf{P}^{(\alpha)}$
is a unique invariant measure for the process $\X{\alpha}(t)$.\/}
\medskip
{\bf{}Ergodicity.\/}
{\em{}The process $\X{\alpha}(t)$ is ergodic with respect to the measure $\mathsf{P}^{(\alpha)}$.\/}
\smallskip
This follows from the existence of a spectral gap for the generator of the process; see
the eigenstructure above. See also \cite[Thm. 7.3 (3)]{Borodin2007}.
\end{document}
\section{PROGRAM SUMMARY}
\label{sec:section1}
\bigskip
\noindent
{\bf Title of program:} VNI-3.1
\bigskip
\noindent
{\bf Computer for which the program has been designed and
others on which it has been tested:}
IBM RS-6000, Sun Sparc, Hewlett Packard UX A-9000
\bigskip
\noindent
{\bf Operating systems:}
IBM-AIX, Sun-OS, and any other UNIX operating systems, as well as LINUX.
\bigskip
\noindent
{\bf Programming language used:}
Fortran 77
\bigskip
\noindent
{\bf Memory required to execute with typical data:} 2000 kwords
\bigskip
\noindent
{\bf No. of bits in a word:} 32
\bigskip
\noindent
{\bf No. of lines in distributed program:}
25760 lines of main program, plus 49 and 244 lines for two example programs.
\bigskip
\noindent
{\bf Keywords:}
Monte Carlo simulation, event generator,
QCD kinetic theory, parton cascades, parton coalescence,
hadronic final states.
\bigskip
\noindent
{\bf Nature of physical problem:}
\noindent
In high-energy particle collisions certain phase-space regions
can be populated by a large number of quanta, such that
statistical correlations among them
(e.g., in space, momentum, or color) become of essential importance.
Examples are deep-inelastic lepton-hadron scattering and
hadron-hadron collisions in the region of very small Bjorken-$x$, or,
collisions involving heavy nuclei in the central rapidity region.
In these cases the produced particles evolve in a
complicated non-equilibrium environment created by the presence
of neighboring ones. The `deterministic' quantum evolution
of particle states due to self-interactions (depending only on the
particle itself) receives a new
`statistical' kinetic contribution due to mutual interactions
(depending crucially on the local density).
The theoretical basis for addressing the solution for
the dynamics of such particle systems is a quantum-kinetic
formulation of the QCD equations of motion,
an approximation that combines field-theoretical aspects
associated with the renormalization group (including
well-known resummation techniques) with aspects of
transport theory associated with non-equilibrium multi-particle
dynamics (including important quantum effects beyond the classical level).
\bigskip
\noindent
{\bf Method of solution:}
\noindent
The underlying quantum-kinetic equations
of motion for non-equilibrium multi-particle QCD
are solved by Monte Carlo simulation of collisions, allowing for a variety of
combinations of beam and target particles.
To simulate the real-time evolution of the collision system in
position space and momentum space on the basis of the equations
of motion, the procedure is three-fold:
(i) the construction of the initial state including the
decomposition of the beam and target particles
into their partonic substructure, (ii) the evolution
of parton cascades including multiple scatterings, emission- and
fusion-processes, and (iii) the self-generated
conversion of partons into hadrons using a phenomenological
model for parton-coalescence into pre-hadronic clusters and
subsequent decay into final-state particles.
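Schematically, these three stages form a simple pipeline. The following Python pseudocode is purely illustrative: the names are ours and do not correspond to VNI's actual Fortran routines.

```python
def simulate_event(beam, target):
    # (i) decompose beam and target into their partonic substructure
    partons = init_state(beam, target)
    # (ii) evolve the parton cascade (scatterings, emissions, fusions)
    partons = cascade(partons)
    # (iii) coalesce partons into pre-hadronic clusters and decay them
    return hadronize(partons)

# trivial stand-ins so that the sketch runs end to end
def init_state(beam, target): return [beam, target]
def cascade(partons): return partons + ["gluon"]
def hadronize(partons): return ["hadron"] * len(partons)
```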
\bigskip
\noindent
{\bf Restriction on the complexity of the problem:}
\noindent
For very high collision energy ($\sqrt{s} \gg 10$ TeV in
hadronic collisions, and
$\sqrt{s} \gg 5$ TeV/nucleon in nuclear collisions)
numerical inaccuracies due to repeated Lorentz boosts, etc.,
may accumulate to cause problems.
Although the most affected parts of the program use double precision,
for extreme energies the code would require a conversion in full
to double precision format (which is planned in the near future).
\bigskip
\noindent
{\bf Typical running time:}
\noindent
The CPU time for a typical simulation is strongly dependent on the type of
beam and target, the magnitude of collision energy, as well
as on the time interval $\Delta t$ chosen to follow an event in its real-time
evolution.
Examples are (for $\Delta t = 35$ $fm$):
a) $e^++e^-$ at $\sqrt{s} = 100$ GeV: 10000 events/hour;
b) $p+\bar{p}$ at $\sqrt{s} = 200$ GeV: 5000 events/hour;
c) $p+ Au$ at $\sqrt{s} = 200$ GeV/nucleon: 100 events/hour;
d) $Au+Au$ at $\sqrt{s} = 200$ GeV/nucleon: 1 event/hour.
All of the above quotes are approximate, and
refer to a typical
133 MHz or 166 MHz processor on a modern
Power-Workstation or Power-PC.
\bigskip
\newpage
\section{LONG WRITE-UP}
\label{sec:section2}
\bigskip
\subsection{Introduction}
VNI
\footnote{
The three letters VNI do not mean anything profound. VNI is
pronounced "Vinnie", short for "Vincent Le CuCurullo Con GiGinello",
the little guy who likes to hang out with his pals, the quarks
({\it cucu}rullos) and gluons ({\it gigi}nellos).
That is QCD in Wonderland, and that is the whole, true story.
}
is the Monte Carlo implementation
of a relativistic quantum-kinetic approach \cite{ms39,ms42} to
the dynamics of high-energy particle collisions, inspired by the
QCD parton picture of hadronic interactions \cite{kogut73,dok80,glr83}.
It is a product of several years of development, both in
the improving physics understanding of high-energy multiparticle
dynamics in QCD
and in the technical implementation in the form of a computer simulation
program. The most relevant references
for the following are Refs. \cite{ms0,ms3,msrep,ms37,ms40,ms41},
where details of the
main issues, discussed below, can be found.
The purpose of VNI is to provide a comprehensive
description of particle collisions involving beams of
leptons, hadrons, or nuclei,
in terms of the space-time evolution of parton cascades and
parton-hadron conversion.
The program VNI is conceived as a useful {\it tool} (and nothing more) to study
the causal development of the collision dynamics in real time from
a specified initial state of beam and target particles, all the way to the
final yield of hadrons and other observable particles.
The collision dynamics is traced in detail on the microscopic level
of quark and gluon interactions in the framework of perturbative QCD,
supplemented by a phenomenological treatment of the non-perturbative dynamics,
including soft parton-collisions and parton-hadronization.
The generic structure of the simulation concept is illustrated in Fig. 1.
\medskip
\begin{figure}
\epsfxsize=450pt
\centerline{ \epsfbox{a018.ps} }
\caption{
Generic flow-chart of the simulation concept of VNI:
It starts with
the initial beam target particles $A$ and $B$, decomposing them
(except for leptonic $A$ and/or $B$)
into their hadronic constituents with partonic substructure, then proceeds through
the concurrently evolving stages of parton-cascade, parton-hadron conversion,
and fragmentation of beam/target remnants, and finishes up with
the final particle yield that reflects observables in a detector.
\label{fig:fig1}
}
\end{figure}
The main strength of VNI lies in addressing
the physics of high-density QCD, which is becoming an increasingly popular
object of research, both from the experimental and phenomenological side
and from the theoretical, fundamental point of view.
Presently, and in the near future, the collider facilities HERA
($ep$, $eA$?), Tevatron ($p\bar{p}$, $pA$), RHIC and LHC ($p\bar{p}$,
$AA$) are able to probe new regimes of dense quark-gluon matter
at very small Bjorken-$x$ or/and at large $A$,
with rather different dynamical properties.
The common feature of high-density
QCD matter that can be produced in these experiments, is an expected
novel exhibition
of the interplay between the high-momentum (short-distance) perturbative
regime and the low-momentum (long-wavelength) non-perturbative physics.
For example, with HERA and Tevatron experiments,
one hopes to gain insight into problems concerning
the saturation of the strong rise
of the proton structure functions at small Bjorken-$x$,
possibly due to color-screening effects that
are associated with the overlapping of a large number of small-$x$
partons.
Another example is the anticipated formation of a quark-gluon plasma in
RHIC and LHC heavy ion collisions,
where multiple parton rescattering and cascading generates a
high-density environment, in which the collective
motion of the quanta must be taken into consideration.
In this context the most advantageous and novel feature
of VNI
is the space-time cascade picture that provides a potentially powerful
tool to study high-density particle systems in QCD, where
accounting for the dynamically
evolving space-time structure of the collisions is most important.
The {\it necessity of including space-time variables} in addition to
energy-momentum variables for high-density systems may
be common knowledge in the field of heavy-ion physics, where
the space-time aspect of many-body transport theory is an essential
ingredient for describing nucleus-nucleus collisions, but it is
new to many high-energy particle physicists, who have only recently
begun to acknowledge the necessity of including time and space on top
of the commonly used, pure momentum-space description of, e.g., lepton and
hadron collisions.
\bigskip
\noindent
{\it
Where does VNI fit into the diverse family of modern
event generators for physics simulation of particle collisions
(where `particle' stands for any beam/target particle from an electron
to a heavy nucleus)?
}
\smallskip
\noindent
The Monte Carlo models for high-energy particle collisions that are on the
market may be crudely divided into two classes:
\begin{description}
\item{(i)}
The first class embodies event generators that are restricted to
"clean" reactions involving lepton or proton beams only,
and which aim at a high-precision description of experimental
tests of QCD's first principles.
Popular examples are JETSET/PYTHIA \cite{jetset},
HERWIG \cite {herwig}, ARIADNE \cite{ariadne}, LEPTO \cite{lepto},
and ISAJET \cite{isajet}.
The common feature of these event generators is a combination of
well understood perturbative QCD parton-shower description and
a non-perturbative hadronization prescription to convert
the partonic final state into hadrons.
\item{(ii)}
The second class comprises event generators that aim to describe
"dirty" reactions involving nuclei; these are much less based on
first-principles knowledge and instead rely on phenomenological
models to mimic the unknown details of the underlying physics.
Here the widely used concept is to visualize a nuclear
collision in terms of nucleon-nucleon collisions on the
basis of a constituent valence-quark picture plus
string-excitation and -fragmentation.
Examples for these models are
FRITIOF \cite{fritiof}, DPM \cite{dpm},
VENUS \cite{venus},
RQMD \cite{rqmd}, and HIJET \cite{hijet}.
Distinct from these is HIJING \cite{hijing},
which also incorporates a perturbative QCD approach to multiple minijet
production, but does not provide a space-time description.
\end{description}
With the exception of HERWIG, all of the
above Monte Carlo generators utilize some form of
{\it string fragmentation phenomenology} to model
the non-perturbative hadronization
and final-state particle production.
Most commonly used is the Lund string model \cite{string}.
HERWIG on the other hand is built on a very different
{\it parton-cluster formation/fragmentation approach},
which forms the basis of the hadronization scheme developed in VNI.
\medskip
\noindent
{\it
What are the shortcomings of the above-mentioned Monte Carlo models
with respect to the particle dynamics in finite-density regions
created by high-energy collisions?
}
\smallskip
The high-energy particle physics generators of the first class
lack the inclusion of space-time variables in the dynamical
description of both the
perturbative QCD parton evolution and the non-perturbative
process of parton-hadron transition.
These models therefore cannot account for statistical
interactions that arise from a finite density of particles
close by in space, such as rescattering, absorption, or recombination
processes.
Hence, although these models to a large extent use QCD's fundamental
quark-gluon degrees of freedom, important aspects of
parton dynamics and interactions at finite density are left out,
because the particles are assumed to propagate unscathed in free space.
The event generators of the second class, on the other hand,
mostly do utilize a space-time description, however on the level
of hadronic degrees of freedom (strings, baryons, mesons and resonances)
rather than partonic degrees of freedom.
For ultra-relativistic nucleus-nucleus collisions
the parton approach appears to be more realistic than the
hadronic or string picture, as it
has been realized that short-range parton interactions play
a major role for heavy ion collisions at collider energies of
$\sqrt{s} \, \lower3pt\hbox{$\buildrel >\over\sim$}\,100$ GeV,
at least during the early and most dissipative stage of the first few $fm$.
Here copiously produced quark-gluon mini-jets cannot be considered as
isolated rare events, but must be embedded in complicated multiple
cascade-type processes.
Thus, the short range character of these
interactions implies that perturbative QCD can and must be used, and
that the picture of comparably large distance excitations
of strings or hadronic resonances does not apply in this kinematic regime.
\medskip
In view of this discussion,
the program VNI can be viewed in between the above two classes of
Monte Carlo models:
It provides a kinetic space-time description of parton evolution
by utilizing well-developed techniques for perturbative QCD simulations
at zero density or free space,
as the event generators of the first class. On the other hand,
it applies this concept also to the physics of
finite-density particle systems, e.g., in
collisions involving nuclei, as the event generators of the second class.
Comparing VNI with the above-mentioned Monte Carlo models,
the essential differences and partly new aspects are the following:
\begin{description}
\item{a)}
the aspects of the space-time evolution of the particle distributions
in addition to the evolution in momentum space \cite{msrep},
and the concepts of quantum kinetic theory and statistical physics
\cite{ms39,ms42}.
\item{b)}
the self-consistent interplay of coherent (angular ordered)
perturbative
parton evolution according to the DGLAP \cite{DGLAP} equations, with the
fully dynamical cluster-hadronization according to the
phenomenological model of Ref. \cite{ms37}.
\item{c)}
the microscopic tracing of color degrees of freedom and the effect of
color-correlations by using explicit color-labels for each parton,
which allows one to investigate final-state interactions in the process
of hadron formation \cite{ms40}.
\item{d)}
the diverse advantages of a stochastic simulation technique with
which the various particle interaction processes are determined by
the dynamics itself, through the local density of particles as they
evolve causally in time.
\item{e)}
the statistical many-particle description for general non-equilibrium
systems, which allows one to study the thermodynamic behaviour of the bulk
matter \cite{ms2}, such as the evolution of macroscopic energy density,
pressure, etc., or the dynamical development of the system towards
thermal/chemical equilibrium in heavy ion reactions.
\end{description}
In summary, the improvement to be expected from VNI
for the physics simulation of high-energy particle collisions
lies clearly in the `dirty' high-density parton regime,
where the space-time aspects are most important, and which
currently and in the future is
of central interest
in experiments at, e.g., HERA, RHIC and LHC.
On the other hand, VNI may also be perceived as a valuable alternative
to high-energy event generators for the
study of `clean', zero-density collisions as $e^+e^-$ annihilation
or $p\bar{p}$ collisions,
where the space-time aspects can provide useful additional insight in
the collision dynamics for experiments at, e.g., LEP or the Tevatron.
\bigskip
\bigskip
\subsection{General concept}
The central element in the physics description implemented in VNI
is the use of
QCD transport theory \cite{msrep} and quantum field kinetics \cite{ms39}
to follow the evolution of a generally mixed multi-particle system
of partons and hadrons
in 7-dimensional phase-space $d^3 r d^3 k dE$.
Included are
both the parton-cascade development
\cite{dok80,glr83,jetcalc,bassetto}
which embodies the renormalization-group improved evolution of
multiple parton collisions
including inelastic (radiative) processes,
and the phenomenological
parton-hadron conversion model of Refs. \cite{ms37,ms40,ms41},
in which the hadronization mechanism is described in terms of
dynamical parton-cluster formation as
a local, statistical process that depends on the spatial separation and
color
of nearest-neighbor partons, followed by the decay of clusters into
hadrons.
In contrast to the commonly-used momentum-space description,
the microscopic history of the dynamically-evolving particle system
is traced in space-time {\it and} momentum space, so that
the correlations of partons in space, time, and color can be taken
into account for both the cascade evolution
and the hadronization mechanism.
It is to be emphasized that the interplay
between perturbative and non-perturbative regimes is controlled locally
by the space-time evolution of the mixed parton-hadron system itself
(i.e., the time-dependent local parton density),
rather than by an arbitrary global division
between parton and hadron degrees of freedom
(i.e., a parametric energy/momentum cut-off).
In particular the parallel evolution of the mixed system
of partons, pre-hadronic clusters, and hadrons, with the relative proportions
determined by the dynamics itself, is a novel feature that
is only possible by keeping track of both space-time and
energy-momentum variables.
Probably the greatest strength of this approach lies in its
application to the collision dynamics of
complicated multi-particle systems, as for example in collisions involving
nuclei ($eA$, $pA$ and $AB$),
for which a causal time evolution in position space
and momentum space is essential: Here statistical,
non-deterministic particle interactions are most important,
which can only be accounted for by
following the time-evolution of the particle densities in space {\it and}
energy-momentum.
This approach allows one to
study the time evolution of an initially prepared beam/target collision system
in complete phase-space from the instant
of collisional contact, through the QCD-evolution of
parton distributions, up to the formation of final hadronic states.
It provides a self-consistent scheme to solve the underlying
equations of motion for the particle densities as determined by the
microscopic dynamics.
\smallskip
The model as a whole consists of
three major building-blocks, each of which is illustrated schematically
in Fig. 2:
\begin{description}
\item[1.]
The {\it initial state} associated with the incoming
collision partners (leptons, hadrons, or nuclei). Except
for lepton beams, this involves
the phenomenological construction of hadrons or nuclei in terms of the
partons' phase-space distributions
on the basis of the experimentally measured
hadron (nucleon) structure functions and
elastic form-factors.
\item[2.]
The {\it parton cascade development}
with mutual- and self-interactions of the system of quarks and gluons.
This includes multiple inelastic processes, described
as sequences of elementary $2 \rightarrow 2$ scatterings, $1\rightarrow 2$
emissions and $2 \rightarrow 1$ fusions.
Moreover, correlations are accounted for between primary virtual
partons, emerging as unscathed remainders from the initial state, and
secondary real partons, materialized or produced
through the partonic interactions.
\item[3.]
The {\it hadronization dynamics} of the evolving system
in terms of parton-coalescence to color-neutral clusters
as a local, statistical process that depends on the spatial separation
and color of nearest-neighbor partons.
Each pre-hadronic parton-cluster fragments through isotropic two-body decay
into primary hadrons, according to the density of
states, followed by the decay of the latter into final
stable hadrons.
\end{description}
Such a pragmatic division, which assumes complex interference
between the different physics regimes to be negligible, is possible if
the respective dynamical scales are such that
the short-range (semi)hard parton interactions (scattering, radiation,
fusion) of perturbative nature,
and the non-perturbative
mechanism of hadron formation (parton-coalescence and cluster-decay),
occur on well-separated space-time scales (or momentum scales, by
virtue of the uncertainty principle).
Loosely speaking, the typical momentum scale associated with
parton collisions, radiative emissions, or parton fusion, has to be
larger than
the inverse `confinement length scale' $\sim1\,fm$ which
separates perturbative and non-perturbative domains.
Further discussion of this condition of validity can be found in, e.g.,
\cite{msrep,dokbook}.
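The parton-coalescence of building block 3, a local process driven by the spatial separation of nearest-neighbor partons, can be caricatured by a greedy nearest-neighbor pairing. This is a hypothetical sketch, not the actual VNI algorithm: the function names and the simple Euclidean-distance criterion are illustrative assumptions, and color labels and formation probabilities are omitted.

```python
import math

def dist(a, b):
    # Euclidean separation of two partons at positions a and b [fm]
    return math.sqrt(sum((a[m] - b[m]) ** 2 for m in range(3)))

def nearest_neighbor_pairs(positions):
    """Greedily pair each parton with its nearest unpaired neighbor,
    mimicking local coalescence into color-neutral pre-hadronic
    clusters (an unpaired leftover parton is simply skipped here)."""
    unpaired = list(range(len(positions)))
    pairs = []
    while len(unpaired) >= 2:
        i = unpaired.pop(0)
        j = min(unpaired, key=lambda m: dist(positions[i], positions[m]))
        unpaired.remove(j)
        pairs.append((i, j))
    return pairs
```

In the full model the pairing would additionally be weighted by the color configuration and a statistical formation probability, as described above.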
\begin{figure}
\epsfxsize=500pt
\centerline{ \epsfbox{a022.ps} }
\caption{
The three components of the model:
a) the initial state constructed in terms of the parton distribution of
the incoming nuclei;
b) the time-evolution of parton cascades in 7-dimensional phase-space
c) the formation of color neutral clusters from secondary partons
emerging from cascading, as well of the remnant primary partons from
the initial state, followed by
the fragmentation of the clusters into final hadrons.
\label{fig:fig2}
}
\end{figure}
\bigskip
\subsection{Equations of motion from quantum kinetics for multi-particle
dynamics}
A space-time description of multiparticle systems
in high-energy QCD processes can be derived systematically from
{\it quantum-kinetic theory} on the basis of
QCD's first principles in a stepwise approximation scheme
(see e.g., Refs. \cite{ms39,ms42} and references therein).
Applied to the concept sketched in the preceding subsection,
this framework allows one to
cast the time evolution of the mixed system of
individual partons, composite parton-clusters, and physical hadrons
in terms of a closed set of
integro-differential equations for
the phase-space densities of the different particle excitations.
The definition of these phase-space densities,
denoted by
$F_\alpha$, where $\alpha\equiv p, c, h$
labels the species of partons, pre-hadronic clusters, or hadrons,
respectively, is:
\begin{equation}
F_\alpha(r,k)\;\,\equiv\; \, F_\alpha (t, \vec r; E, \vec k)
\;\,=\;\,
\frac{dN_\alpha (t)}{d^3r d^3k dE}
\;,
\label{F}
\end{equation}
where $k^2 = E^2 -\vec{k}^{\,2}$ can be off-shell or on-shell,
as will be discussed in the following subsections.
The densities (\ref{F}) measure the number of particles
of type $\alpha$ at time $t$ with position in $\vec r + d\vec{r}$,
momentum in $\vec k + d\vec{k}$,
and energy in $E + dE$ (or equivalently invariant mass in $k^2 + dk^2$).
The $F_\alpha$ are the quantum analogues of the
classical phase-space distributions, including both off-shell and on-shell
particles, and hence
contain the essential microscopic
information required for a statistical description
of the time evolution of a many-particle system in
complete 7-dimensional phase-space $d^3rd^3kdE$,
thereby providing the basis for calculating
macroscopic observables.
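As a minimal illustration of how a density of the form (1) is estimated in practice from a simulated ensemble, one can count particles of a given species inside a small 7-dimensional phase-space cell; dividing by the cell volume $d^3r\,d^3k\,dE$ then approximates $F_\alpha$. The record layout and the function below are illustrative assumptions, not VNI's internal representation.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    species: str   # 'p' (parton), 'c' (cluster), or 'h' (hadron)
    r: tuple       # position (x, y, z)  [fm]
    E: float       # energy [GeV]; may be off-shell
    k: tuple       # 3-momentum (kx, ky, kz)  [GeV]

def count_in_cell(particles, species, r0, dr, k0, dk, E0, dE):
    """Number of particles of one species inside the phase-space cell
    centered at (r0, k0, E0); n / (dr**3 * dk**3 * dE) estimates F_alpha."""
    n = 0
    for p in particles:
        if p.species != species:
            continue
        in_r = all(abs(p.r[i] - r0[i]) < dr / 2 for i in range(3))
        in_k = all(abs(p.k[i] - k0[i]) < dk / 2 for i in range(3))
        if in_r and in_k and abs(p.E - E0) < dE / 2:
            n += 1
    return n
```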
The phase-space densities (\ref{F}) are determined by the
self-consistent solutions of
a set of {\it transport equations} (in space-time) coupled with
renormalization-group-type {\it evolution equations} (in momentum space).
Referring to Refs. \cite{ms37,ms39} for details,
these equations can be generically expressed as
convolutions of the densities $F_\alpha$ of particle species $\alpha$,
interacting with specific cross sections $\hat{I}_j$ for the processes $j$.
The resulting coupled equations for the
space-time development of
the densities of partons $F_{p}$, clusters $F_c$, and
hadrons $F_h$ form a self-consistent set in which the change
of the densities $F_\alpha$ is governed by the balance of
the various possible interaction processes among the particles.
Fig. 3 represents these equations pictorially.
For the densities of {\it partons}
the {\it transport equation}
(governing the space-time change with $r^\mu$) and the {\it evolution equation}
(controlling the change with momentum scale $k^\mu$), read, respectively,
\begin{eqnarray}
k_\mu \frac{\partial}{\partial r^\mu}\; F_p(r,k)
&=&
F_{p''} F_{p'''}\circ
\left[\frac{}{}
\hat{I}(p''p'''\rightarrow p p') \;+\;\hat{I}(p''p'''\rightarrow p)
\right]
\;-\;
F_{p} F_{p'}\circ
\left[\frac{}{}
\hat{I}(pp'\rightarrow p'' p''')\;+\; \hat{I}(pp'\rightarrow p'')
\right]
\nonumber \\
& &
-\;\;
F_{p} F_{p'}\circ \left[\frac{}{}
\hat{I}(p'p''\rightarrow p)\;-\; \hat{I}(pp'\rightarrow p'')\right]
\;-\;
F_p\,F_{p'}\circ \hat{I}(p p'\rightarrow c)
\label{e1}
\\
k^2 \frac{\partial}{\partial k^2}\; F_p(r,k)
&=&
F_{p'}\circ \hat{I}(p'\rightarrow p p'')\;-\;
F_{p}\circ \hat{I}(p\rightarrow p' p'')
\label{e2}
\;.
\end{eqnarray}
For the densities of {\it pre-hadronic clusters} and
{\it hadrons}, the evolution equations are homogeneous
to good approximation,
so that one is left with non-trivial transport equations only,
\footnote{
It is worth noting that eq. (\ref{e2}) embodies the momentum space
($k^2$) evolution of partons through
the renormalization of the phase-space densities $F_p$, determined
by their change $k^2 \partial F_p(r,k)/\partial k^2$
with respect to a variation of the mass (virtuality) scale $k^2$
in the usual QCD evolution framework \cite{dok80,jetcalc,bassetto}.
On the other hand,
for pre-hadronic clusters and hadrons, renormalization effects
are comparatively small, so that their
mass fluctuations $\Delta k^2/k^2$ can be ignored to first
approximation,
implying $k^2 \partial F_c(r,k) /\partial k^2
= k^2 \partial F_h(r,k) / \partial k^2 =0 $.
}
\begin{eqnarray}
k_\mu \frac{\partial}{\partial r^\mu}\; F_c(r,k)
&=&
F_p\,F_{p'}\circ \hat{I}(p p'\rightarrow c)
\;-\;
F_c\circ \hat{I}(c\rightarrow h)
\;\;\;,\;\;\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;\;
k^2 \frac{\partial}{\partial k^2}\; F_c(r,k)\;\;=\;\;0
\;\;\;\;\;\;\;\;\frac{}{}
\label{e3}
\\
k_\mu \frac{\partial}{\partial r^\mu}\; F_h(r,k)
&=&
F_c \circ\hat{I}(c\rightarrow h)
\;+\;
\left[\frac{}{}
F_{h'}\circ \hat{I}(h'\rightarrow h)
\;-\;
F_h\circ \hat{I}(h\rightarrow h')
\right]
\;\;,\;\;\;\;\;\;
k^2 \frac{\partial}{\partial k^2}\; F_h(r,k)\;\;=\;\;0
\;\;\;\;\;\;\;\;\frac{}{}
\label{e4}
\;.
\end{eqnarray}
In (\ref{e1})-(\ref{e4}),
each convolution $F \circ\hat{I}$ of
the density of particles $F$ entering a particular vertex
${\hat I}$ includes a sum over contributing
subprocesses, and a phase-space integration
weighted with the associated subprocess probability distribution
of the squared amplitude. Explicit expressions are given in
Refs. \cite{msrep,ms37}.
Each term on the right-hand side
of the transport and evolution equations (\ref{e1})-(\ref{e4})
corresponds to one of the following categories:
\begin{description}
\item[(i)]
parton scattering and parton fusion through 2-body collisions,
\item[(ii)]
parton multiplication through radiative emission processes
on the perturbative level,
\item[(iii)]
colorless cluster formation through parton recombination
depending on the local color and spatial configuration,
\item[(iv)]
hadron formation through decays of the cluster excitations
into final-state hadrons.
\end{description}
As mentioned before, the
equations (\ref{e1})-(\ref{e4}) reflect a {\it probabilistic
interpretation} of the multi-particle evolution in
space-time and momentum space
in terms of sequentially-ordered interaction processes $j$,
in which the rate of change of the particle distributions $F_\alpha$
($\alpha=p,c,h$)
in a phase-space element $d^3rd^4k$
is governed by the balance of gain (+) and loss ($-$) terms.
The left-hand side
describes free propagation of a
quantum of species $\alpha$, whereas
on the right-hand side the interaction kernels $\hat{I}$
are integral operators that incorporate the effects of
the particles' self- and mutual interactions.
This probabilistic character
is essentially an effect of time dilation, because in any frame
where the particles move close to the speed of light, the associated
wave-packets are highly localized to short space-time extent, so that
comparatively
long-distance quantum interference effects are generally very small.
\begin{figure}
\epsfxsize=500pt
\centerline{ \epsfbox{a015.ps} }
\caption{
Graphical representation of the equations (2)-(5) for the
particle phase-space densities $F_p$ of partons, $F_c$ of
pre-hadronic clusters, and $F_h$ of hadrons.
\label{fig:fig3}
}
\end{figure}
\bigskip
\subsection{Scheme of solution in global Lorentz-frame of reference}
In the above kinetic approach to the multi-particle
dynamics,
the probabilistic character of the transport- and evolution equations
(\ref{e1})-(\ref{e4})
allows one to solve for the phase-space densities $F_\alpha(r,k)$ by
simulating
the dynamical development as a Markovian process causally in time.
Because it is an `initial-value problem', one must specify
some physically appropriate initial
condition $F_\alpha(t_0,\vec{r},k)$ at starting time $t_0$, such that
all the dynamics prior to this point is effectively embodied in this
initial form of $F_\alpha$.
The set of equations (\ref{e1})-(\ref{e4}) can
then be solved
in terms of the evolution of the phase-space densities $F_\alpha$
for $t > t_0$ using Monte Carlo methods to
simulate the time development of the mixed system
of partons, clusters, and hadrons
in position and momentum space \cite{ms37,msrep}.
With the initial state specified as discussed below,
the phase-space distribution of particles at $t=t_0 \equiv 0$ can be
constructed and then evolved in small time steps
$\Delta t =O(10^{-3}\;fm)$ forward throughout the concurrently evolving
stages of parton cascade, parton-cluster formation, and cluster-hadron decays,
until stable final-state hadrons and other particles (photons, leptons, etc.)
are left as freely-streaming particles.
The partons propagate along classical trajectories until they interact,
i.e., collide (scattering or fusion process),
decay (emission process) or recombine to pre-hadronic composite
states (cluster formation).
Similarly, the so-formed pre-hadronic parton-clusters
travel along classical paths until they convert into
primary hadrons (cluster decay), followed by the hadronic decays
into stable final state particles.
The corresponding probabilities and time scales of interactions are
sampled stochastically from the relevant probability distributions
in the kernels $\hat{I}$ of eq. (\ref{e1})-(\ref{e4}).
It is important to realize that
the spatial density and the momentum distribution of the
particles are intimately connected: The momentum
distribution continuously changes through the interactions and
determines how the quanta propagate in coordinate space.
In turn, the probability for subsequent interactions depends on the
resulting local particle density. Consequently, the development
of the phase-space densities is a complex
interplay which, at any given time, implicitly contains the
complete preceding history of the system.
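The causal time-stepping described above can be sketched in a few lines. The loop is a toy version of the simulation cycle: free streaming along classical trajectories in steps $\Delta t$, followed by density-dependent interaction attempts. The `try_interact` hook stands in for sampling the kernels $\hat{I}$ and is an assumption of this sketch, not VNI's actual interface.

```python
class Quantum:
    """Minimal particle record: energy E, 3-momentum k, position r."""
    def __init__(self, E, k, r=(0.0, 0.0, 0.0)):
        self.E, self.k, self.r = E, tuple(k), tuple(r)

def evolve(particles, t_max, dt=1e-3, try_interact=None):
    """Markovian evolution in small steps dt: every quantum first
    propagates freely with velocity k/E, then the (user-supplied)
    try_interact hook may scatter, split, or recombine particles
    according to the local density -- a stand-in for the kernels."""
    for _ in range(int(round(t_max / dt))):
        for p in particles:
            # free streaming: r -> r + (k/E) * dt
            p.r = tuple(p.r[i] + (p.k[i] / p.E) * dt for i in range(3))
        if try_interact is not None:
            try_interact(particles, dt)
    return particles
```

Because the interaction hook sees the current positions of all particles, the interplay between spatial density and momentum distribution emphasized above emerges naturally from the loop.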
\medskip
It is clear that
the description of particle evolution is
Lorentz-frame dependent, and a suitable reference frame
(henceforth called {\it global frame})
must be chosen (not necessarily the laboratory frame of an experiment).
When computing Lorentz-invariant quantities, such as
cross sections or final-state hadron spectra, the particular choice is
irrelevant, whereas for non-invariant observables, such as energy
distributions
or space-time-dependent quantities, one must at the end transform
from the arbitrarily-chosen frame of theoretical description to the
actual frame of measurement.
For calculational convenience, it is most suitable
to choose the {\it global center-of-mass ($cm$) frame of the colliding
beam particles}, with the collision axis in the $z$-direction.
In this global $cm$-frame, the incoming particles $A$, $B$
(= lepton, hadron, or nucleus) have four-momenta,
\begin{eqnarray}
& &
\;\;\;\;\;\;\;\;\;\;\;
P^\mu_{A,B} \; = \;\left( E_{A,B}, 0,0,\pm P_{cm}\right)
\nonumber
\\
E_{A,B} &=& \frac{1}{2 \sqrt{s}}\;\left(s \,\pm\,M_A^2\,\mp\,M_B^2\right)
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
s\;=\; E_{cm}^2 = \left(P_A^\mu+P_B^\mu\right)^2
\label{cm}
\\
P_{cm}&=& \frac{1}{2\sqrt{s}}\;
\sqrt{\left[s-(M_A+M_B)^2\right]\; \left[s-(M_A- M_B)^2\right]}
\; ,
\nonumber
\end{eqnarray}
where $M_{A,B}$ are the masses of $A$ and $B$,
so the incoming particles carry well-defined fractions of $P_{cm}$,
having only a non-vanishing longitudinal momentum along
the $z$-axis.
In particular, for a nucleus $A$, the daughter nucleons $N_i$, $i=1,\ldots,A$,
have momenta $\vec P_{N_i}= (0,0,\pm P_{cm}/A)$.
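The kinematics (\ref{cm}) translate directly into code; the helper below is an illustrative sketch (the function name is ours), with all masses and $\sqrt{s}$ in GeV.

```python
import math

def cm_kinematics(sqrt_s, M_A, M_B):
    """Beam energies and common momentum in the global cm-frame,
    eq. (6): E_A + E_B = sqrt(s), and E^2 = P_cm^2 + M^2 per beam."""
    s = sqrt_s ** 2
    E_A = (s + M_A ** 2 - M_B ** 2) / (2.0 * sqrt_s)
    E_B = (s - M_A ** 2 + M_B ** 2) / (2.0 * sqrt_s)
    P_cm = math.sqrt((s - (M_A + M_B) ** 2)
                     * (s - (M_A - M_B) ** 2)) / (2.0 * sqrt_s)
    return E_A, E_B, P_cm
```

Note that for equal masses the energies reduce to $\sqrt{s}/2$ each, as they must.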
\smallskip
In the following, the global $cm$-frame of $A$ and $B$ is assumed to be the
reference frame, with the initial energy-momentum of the collision system
given by (\ref{cm}).
Furthermore, the terms `hadron' and `nucleon' are used to
distinguish initial states $A+B$ in which $A$ and/or $B$ is
a single hadron or a nucleus with $A$ ($B$) nucleons, respectively.
\bigskip
\subsection{Initial state}
If one or both beam particles $A$, $B$ are leptons,
they are considered as point-like objects
which carry the full beam energy, meaning
that any QED or QCD substructure of the leptons,
as well as initial-state photon radiation by the leptons, is neglected.
For {\it lepton-lepton annihilation},
it is assumed that the colliding leptons produce a
time-like $\gamma$ or $Z^0$ boson of invariant mass
$Q^2\equiv +q^2$ at time $t = -Q^{-1}$, so that $t=0$ characterizes the point
when the $\gamma$ ($Z^0$) decays into a quark-antiquark pair.
Similarly, for {\it lepton-hadron (nucleus) collisions}, the lepton is
emitting a space-like virtual $\gamma$
of invariant mass $Q^2\equiv -q^2$ at time $t = -Q^{-1}$, and hence $t=0$
is the point when the $\gamma$ hits the hadron (nucleus).
In the general case of {\it collisions involving hadrons and/or nuclei},
the incoming particles $A$ and $B$ are
decomposed into their parton substructure by
phenomenological construction of the
momentum and spatial distributions of their daughter partons
on the basis of the known hadron (nucleon) structure functions
and elastic hadron (nucleon) form-factors.
In the $cm$-frame, where the two incoming particles
$A, B$ (= hadron, nucleus), are moving close to the speed of light,
the parton picture is applicable and the parton substructure
of the hadrons or nucleons
can be resolved with reference to some {\it initial resolution scale} $Q_0$.
This resolution scale
generally varies with beam-energy and mass number $A$, $B$, in that it depends
on the typical momentum and spatial density of partons as well as on
their interaction probability.
To be specific, it may be identified with
the statistically estimated expectation value for the
interaction scale $Q^2$ of all {\it primary} parton-parton collisions
(i.e., those in which at least one initial-state parton is involved)
\cite{miklos}:
\begin{equation}
Q_0^2 \;\,\equiv\;\,
Q_0^2 (x,P,A) \,=\, A^\alpha \;\left(\frac{1}{\langle p_\perp^2 \rangle_{prim}}\,+\,
\frac{\langle p_\perp^2 \rangle_{prim}\,R_0^2}{2 x P}
\right)^{-1}
\;,
\label{Q0}
\end{equation}
where
$\langle p_\perp^2 \rangle_{prim}$
is the average relative transverse momentum squared generated
by primary parton-parton collisions, and $R_0 = 1\;GeV^{-1}$.
The pre-factor $A^\alpha$ ($0\le\alpha\le 4/3$ a parameter)
accounts for the nuclear dependence for $A,B > 1$,
and
$x$ is the parton's momentum fraction of the mother hadron or nucleon
which carries momentum $P$ ($= P_{cm}$ for single hadron, or $=P_{cm}/A$
for a nucleon in a nucleus $A$).
The scale $Q_0$ defines the initial point in momentum space above which the
system of partons is treated as an ensemble
of incoherent quanta. The dynamics prior to this point
is contained in the initial parton-structure function of the mother hadron
(nucleon).
Clearly, the convention (\ref{Q0}) yields only an average value
dominated by the most probable (semi)hard parton collisions with
relatively small momentum transfers of a few GeV.
However, primary parton collisions with a momentum scale
$Q^2\gg Q_0^2$, which correspond to relatively rare fluctuations,
are taken into account individually
by the $Q^2$-evolution of the hadron (nucleon) structure functions through
space-like and time-like radiation processes, discussed later.
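The resolution scale (\ref{Q0}) is straightforward to evaluate. The sketch below assumes GeV units throughout; the defaults $\alpha = 1$ and $R_0 = 1\,GeV^{-1}$ are chosen for illustration only, and the function name is ours.

```python
def Q0_squared(x, P, A, pt2_prim, alpha=1.0, R0=1.0):
    """Initial resolution scale Q0^2 of eq. (7).  pt2_prim is the
    average squared relative transverse momentum <p_perp^2> of primary
    parton-parton collisions [GeV^2], P the hadron (nucleon) momentum,
    x the parton's momentum fraction, and A**alpha the nuclear prefactor."""
    return A ** alpha / (1.0 / pt2_prim
                         + pt2_prim * R0 ** 2 / (2.0 * x * P))
```

As the formula shows, the scale grows with the typical primary transverse momentum until the $1/(2xP)$ term takes over at small $x$, and it is enhanced for nuclear beams through $A^\alpha$.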
The actual form of the phase-space distribution of partons,
eq. (\ref{F}), is initially specified at the reference point $Q_0$.
It is constructed as the following superposition of the
parton distributions in the individual
hadrons at $Q_0$ and at time $t=t_0\equiv 0$, the point of collision of $A$ and $B$:
\begin{equation}
\left. F_a (r,k)\right|_{r^0 = t_0}\; = \; \sum_{i=1}^{N_{h}}
P_a^{N_i} ( k, \vec P; Q_0^2) \cdot R_a^{N_i} ( k, \vec r,\vec R)
\left.\frac{}{}\right|_{r^0=t_0}
\;\;\;.
\label{Fa0}
\end{equation}
The right hand side is a sum over all $1 \le N_i\le N_{h}$
hadrons or nucleons contained in the collision system $A+B$.
Each term in the sum is a parton phase-space density given by a
convolution of an initial momentum distribution $P_a^{N_i}$ and a spatial
distribution $R_a^{N_i}$.
The subscript $a = g, q_j, \bar q_j$ ($j= 1,\ldots , n_f)$
labels the species of partons, and $N_i$ refers to the type of the $i$-th
hadron (for nuclei this is just proton or neutron). The four-vectors
$k\equiv k^\mu=(E,\vec k)$ and $r \equiv r^\mu = (t, \vec r)$
refer to the partons. The 3-momenta and
initial positions of the hadrons or nucleons are
denoted $\vec P$, respectively $\vec R$. All vectors are understood
with respect to the global $cm$-frame, at time $t = t_0 =0$.
The partons' energies
$E \equiv E_a(\vec{k}^{\,2},q^2) = \sqrt{\vec{k}^{\,2} + m_a^2 + q^2}$
take into account their initial space-like virtualities $q^2<0$
which are distributed around $\langle \vert q^2 \vert \rangle = -Q_0^2/4$,
under the constraint that for each hadron in the initial state,
the total invariant mass of the daughter partons must equal the mother
hadron mass. This mimics the fact that
the initial partons are confined inside their parent hadrons or nucleons
and cannot be treated as free particles, i.e. they have not enough energy to
be on mass shell, but are off mass shell by a
space-like virtuality $q^2$ (eq. (\ref{invmass}) below).
\smallskip
As suggestively illustrated in Fig. 2 and discussed in more detail now,
the initial state $A+B$ involving hadrons and/or nuclei,
appears in the global $cm$-frame as two approaching `tidal waves' of
large-$x$ partons (mainly valence quarks), where each
`tidal wave' has an extended tail
of low$-x$ partons (mostly gluons and sea-quarks).
\medskip
\subsubsection{Initial momentum distribution}
\noindent
For each hadron or nucleon the number of partons, the distribution of
the flavors, their momenta and associated initial space-like virtualities,
are obtained from the function $P_a^{N_i}$ in (\ref{Fa0}).
Denoting $P\equiv P_{cm}/N_{h}$, with $N_h$ the total number
of hadrons (nucleons) in $A+B$, it is represented in the form
\begin{equation}
P_a^{N_i} (k,\vec P; Q_0^2)\;=\;
\left(\frac{x}{\tilde x}\right) \; F_a^{N_i} (x,Q_0^2) \;\, \rho_a^A(x)\;\,
g (\vec k_{\perp})
\;\,\delta\left(P_z \,-\,P\right) \; \delta^2\left(\vec P_\perp\right)
\label{PaN}
\end{equation}
with the momentum and energy fractions
$
x = k_z/P
$,
$
\tilde x= E/P= \sqrt{x^2 + (k_\perp^2 + m_a^2 + q^2)/P^2}
$
and the normalizations
\begin{eqnarray}
&&
\;\;\;\;\;\;\;\;\;\;
\sum_a \int_0^1 dx \; x F_a^{N_i} (x, Q_0^2) \;= \; 1
\;,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\int_0^\infty d^2 k_\perp \;g(\vec k_\perp) \;=\; 1
\label{norm1}
\\
&&
\;\;\;\;\;\;\;\;\;\;
\sum_a \int \, \frac{dk^2\,d^3 k}{(2\pi)^3 2 E}
\; E \; P_a^{N_i} ( k^2,\vec k, \vec P; Q_0^2) \;=\;
n^{N_i} (Q_0^2,\vec P)
\label{norm3}
\;\; .
\end{eqnarray}
\noindent
The physics behind the ansatz (\ref{PaN}) is the following:
\begin{description}
\item[(i)]
The functions $F_a^{N_i} (x,Q_0^2)$ are the scale-dependent measured
hadron (nucleon) structure functions with $x$ being the fraction
of the parent hadron's or nucleon's longitudinal momentum $P$ carried by the parton.
The transverse momentum distribution $g(k_\perp)$ is specified below.
The factor
$x / \tilde x$ in (\ref{PaN}) is included to form the
invariant momentum integral $\int d^3 k / [(2 \pi)^3 2 E]$ out of
the distribution $P_a^{N_i}$ \cite{feyfld}.
In (\ref{norm3}) the quantity $n^{N_i}$ is the total
number of partons in a given nucleon with momentum $P=|\vec P|$ at $Q_0^2$.
\item[(ii)]
The function $\rho_a^A(x)$ in (\ref{PaN}) takes into account
nuclear shadowing effects
affecting mainly soft (small $x$) partons in a nucleus
\footnote{This feature is optional in the simulation, by default it is
switched off.}.
These shadowing effects are evident
in the observations of the European Muon Collaboration \cite{emc} as
a depletion of the nuclear structure functions at small $x$ relative to
those of a free nucleon.
Several mechanisms have been proposed to explain this nuclear shadowing
effect on the basis of the parton model \cite{nucshad4,nucshad2,nucshad3}.
However, here instead the phenomenological approach of Wang and Gyulassy
\cite{hijing2} is adopted
which is based on the following parametrization \cite{nucshad3}
for the $A$ dependence of the shadowing for both quarks and gluons:
\begin{eqnarray}
\rho_a^A(x) &\equiv&
\frac{ {\sl F}_a^{A, N_i} ( x, Q_0^2)}{A \, F_a^{N_i}(x, Q_0^2)}
\;\,=\;\,
1 \,+ \,1.19 \,\left(\frac{}{}\ln A\right)^{1/6} \, \left[
x^3 - 1.5 (x_0 + x_L) x^2 + 3 x_0 x_L x \right]
\nonumber \\
& &
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;
- \left[ \;
\beta_A (R_\perp) -
\frac{1.08 \,( A^{1/3} -1)}{\ln(A + 1)} \, \sqrt{x}\, \right]
\; \exp(-x^2 / x_0^2)
\;\;\;,
\label{rhoA}
\end{eqnarray}
where ${\sl F}_a^{A, N_i}$ and $F_a^{N_i}$ denote the structure functions of
a nucleus $A$ and of a free nucleon, respectively, and
$x_0 = 0.1$, $x_L = 0.7$. The function
$
\beta_A (R_\perp) = 0.1 \,\left(\frac{}{}A^{1/3} -1\right)
\,\frac{4}{3} \,\sqrt{1 - \frac{R_\perp^2}{R_A^2}}
$
takes into account the impact parameter dependence, with $R_\perp$ labeling
the transverse distance of a nucleon from
its nucleus center and $R_A$ the radius of the nucleus.
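For concreteness, the parametrization (\ref{rhoA}) is straightforward to evaluate numerically. The following Python sketch (function names are ours, not part of the simulation code) implements it with $x_0=0.1$, $x_L=0.7$; note that for $A=1$ it reduces to the shadowing-free limit $\rho=1$:

```python
import math

X0, XL = 0.1, 0.7  # parametrization constants quoted in the text

def beta_A(A, R_perp, R_A):
    """Impact-parameter dependent shadowing strength beta_A(R_perp)."""
    return 0.1 * (A ** (1.0 / 3.0) - 1.0) * (4.0 / 3.0) * \
        math.sqrt(max(0.0, 1.0 - (R_perp / R_A) ** 2))

def rho_shadow(x, A, R_perp, R_A):
    """Ratio of nuclear to free-nucleon structure function, eq. (rhoA)."""
    poly = x ** 3 - 1.5 * (X0 + XL) * x ** 2 + 3.0 * X0 * XL * x
    term1 = 1.0 + 1.19 * math.log(A) ** (1.0 / 6.0) * poly
    term2 = beta_A(A, R_perp, R_A) - \
        1.08 * (A ** (1.0 / 3.0) - 1.0) / math.log(A + 1.0) * math.sqrt(x)
    return term1 - term2 * math.exp(-x ** 2 / X0 ** 2)
```

In the free-nucleon limit $A=1$ the ratio is unity, while for heavy nuclei at small $x$ it drops below one, reproducing the observed depletion.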
\item[(iii)]
The distribution $g(k_\perp)$ specifies
the primordial transverse momenta of partons according to
a normalized Gaussian
\begin{equation}
g (\vec k_{\perp})\; = \; \frac{1}{\pi k_0^2} \;
\exp \left[-\frac{\vert \vec k_\perp\vert^2}{k_0^2}\right]
\;,
\label{gpT}
\end{equation}
independent of the type of parton or
hadron (nucleon).
It takes into account the uncertainty of momentum
(Fermi motion) due to the fact that the initial
partons are confined within the nucleons.
This intrinsic
$k_{\perp}$ can be observed in Drell-Yan experiments
where it is found that the distribution is
roughly independent of $s$ and $Q^2$ \cite{pTprim}.
As inferred from these analyses, the
width in (\ref{gpT}) is
$k_0 = 0.42$ GeV, corresponding to a mean
$\langle k_\perp \rangle \approx 0.38$ GeV.
\end{description}
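The sampling of the primordial transverse momentum according to (\ref{gpT}) amounts to drawing $|\vec k_\perp|^2$ from an exponential distribution of mean $k_0^2$ and a uniform azimuthal angle. A minimal Python sketch (function name ours):

```python
import math
import random

K0 = 0.42  # GeV, Gaussian width inferred from Drell-Yan analyses

def sample_kperp(rng):
    """Draw (kx, ky) from the density ~ exp(-|k_perp|^2 / K0^2)."""
    kperp2 = -K0 ** 2 * math.log(1.0 - rng.random())  # |k_perp|^2: exponential, mean K0^2
    phi = 2.0 * math.pi * rng.random()                # azimuth: uniform
    k = math.sqrt(kperp2)
    return k * math.cos(phi), k * math.sin(phi)
```

The mean modulus comes out as $\langle k_\perp\rangle = (\sqrt{\pi}/2)\,k_0 \approx 0.37$ GeV, consistent with the value quoted above.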
The scheme (i)-(iii) to sample the flavor and momentum distribution is carried out
independently for each nucleon subject to the requirement
\begin{equation}
\left( \sum_i E_i \right)^2 - \left( \sum_i k_{x_i}\right)^2 - \left(
\sum_i k_{y_i}\right)^2 - \left( \sum_i k_{z_i}\right)^2\; = \;
M_h^2
\label{invmass}
\end{equation}
where the summation runs over all partons belonging to the same nucleon as determined by
(\ref{norm1}) and (\ref{norm3}),
and $M_h$ is the mother hadron (nucleon) mass.
With the partons' 4-momenta distributed as outlined above,
the constraint (\ref{invmass}) determines the distribution
in the variable
$q^2 = k^2 - m_a^2 = E^2 -\vec k^2 -m_a^2 < 0$,
the partons' initial space-like virtualities.
The resulting distribution in $q^2$
is a strongly peaked Gaussian with a mean value of
$\langle \vert q^2 \vert \rangle \approx Q_0^2/4$.
\medskip
\subsubsection{Initial spatial distribution}
\noindent
The initial spatial distribution of the partons, $R_a^{N_i}$,
appearing in eq. (\ref{Fa0}),
depends on the magnitude of their momenta, the
positions of their parent hadrons (nucleons),
as well as on the spatial substructure
of the latter. It is represented as
\begin{equation}
R_a^{N_i} (\vec p, \vec r, \vec R) \;=\;
\delta^3\left(\vec R \,-\,\vec R_{AB}\right)\;\,
\left[\frac{}{} \, h_a^{N_i} (\vec r) \;
\,H_{N_i} (\vec R) \, \right]_{boosted}
\;\;,
\label{RaN}
\end{equation}
where the momentum dependence is here purely due to
boosting the distributions to the global $cm$-frame.
The components of the ansatz (\ref{RaN}) have the following meaning:
\begin{description}
\item[(i)]
The incoming beam particles (a single hadron, or a nucleus)
are assumed to have initial $cm$-positions
$\vec R_{AB} = (\pm \Delta Z_{AB}/2, \vec B_{AB})$,
corresponding to a chosen impact parameter $\vec B_{AB}=(B_x,B_y)$
and a minimum longitudinal separation $\Delta Z_{AB}$.
In the case where $A$ and/or $B$ is not a single hadron, but
a nucleus, the individual nucleons are
assigned positions around the centers of the parent nucleus in its rest-frame,
according to a Fermi distribution for nuclei with mass number $A \geq 12$ and
a Gaussian distribution for nuclei $A < 12$,
\begin{equation}
H_{N_i} (\vec R)\;=\; \left\{ \,
\begin{array}{lr}
\, \frac{1}{4 \pi}\, \left( 1 + \exp \left[ (R-c)/a \right] \right)^{-1}
& \, ( A \geq 12 ) \\
\, \frac{1}{4 \pi}\, \frac{2}{\sqrt{\pi} b} \; \exp \left[ - R^2 / b^2
\right]
& \, ( A < 12 )
\end{array}
\right.
\;\;.
\end{equation}
The parameters are $c = r_0 \, A^{1/3}$, $r_0 = 1.14$ $fm$, $a = 0.545$ $fm$ and
$b = \sqrt{2/3}\, R_A^{ms}$, where $R_A^{ms}$ is the mean square radius
of the nucleus.
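The nucleon positions for a heavy nucleus can be drawn from the Fermi density by simple rejection sampling against an $r^2$ envelope; the sketch below (helper name and the tail cut-off at $c+10a$ are our choices) illustrates the $A\ge 12$ case:

```python
import math
import random

R0, A_SURF = 1.14, 0.545  # fm: radius parameter r0 and surface thickness a

def sample_nucleon_radius(A, rng):
    """Rejection-sample r (fm) from the radial Fermi density r^2 / (1 + exp((r-c)/a))."""
    c = R0 * A ** (1.0 / 3.0)
    r_max = c + 10.0 * A_SURF          # truncate the exponential tail
    while True:
        r = r_max * rng.random()
        f = r * r / (1.0 + math.exp((r - c) / A_SURF))
        if rng.random() * r_max * r_max < f:   # envelope: r^2 * 1
            return r
```

An isotropically sampled direction then completes the 3-vector $\vec R$ in the nuclear rest frame.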
\item[(ii)]
Next, the partons are distributed around the centers of their
mother hadrons or nucleons, still in the rest frame of the $A$, respectively $B$,
with an exponential distribution
\begin{equation}
h_a^{N_i} (\vec r)\;=\;
\, \frac{1}{4 \pi}\,\frac{\nu^3}{8 \pi} \, \exp \left[ - \nu r \right]
\;\;,
\end{equation}
where $\nu = 0.84$ GeV corresponds to the measured
elastic formfactor of the mother hadron or nucleon, with a mean square radius
of $R_h^{ms} \equiv \sqrt{12}/\nu = 0.81$ $fm$.
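Since the radial density implied by $h_a^{N_i}$ is $\propto r^2 e^{-\nu r}$, i.e. a Gamma distribution of order 3, the parton's distance from its mother's center can be sampled as a sum of three exponential deviates (sketch, names ours; the $\hbar c$ conversion to $fm$ is made explicit):

```python
import math
import random

NU = 0.84      # GeV, from the elastic form factor
HBARC = 0.197  # GeV * fm

def sample_parton_radius_fm(rng):
    """r from the density ~ r^2 exp(-nu r): Gamma(3, 1/nu), as a sum of 3 exponentials."""
    r_gev_inv = sum(-math.log(1.0 - rng.random()) for _ in range(3)) / NU
    return HBARC * r_gev_inv
```

The resulting root-mean-square radius is $\sqrt{12}/\nu \approx 0.81$ $fm$ and the mean radius $3/\nu \approx 0.70$ $fm$.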
\item[(iii)]
Finally, as indicated by the subscript {\it boosted} in eq. (\ref{RaN}),
the positions of the hadrons or nucleons and their associated valence quarks
are boosted into the global $cm$-frame of the colliding beam particles $A$ and $B$.
The valence quarks then occupy the Lorentz contracted region
$(\Delta z)_v \approx 2 R_A \,M_h/P$, whereas
the sea quarks and gluons are smeared out in the longitudinal
direction by an amount $(\Delta z)_{g,s} \approx 1/k_z < 2 R_A$ ($R_A=r_0 A^{1/3}$)
{\it behind} the valence quarks. This is an important
feature of the partons when boosting a hadron or nucleus to high rapidities
\cite{bj76,hwa86,mueller89}.
As a consequence, the parton positions are correlated in longitudinal
direction with their momenta, as required by the uncertainty principle.
This leads to an enhancement of the densities of gluons and sea quarks
with $x< 1/(2R_h M_h)$ proportional to $A^{1/3}$, because such partons from
different nucleons overlap spatially when the nucleons are at the same impact parameter.
\end{description}
\bigskip
\subsection{Parton cascade development}
With the above construction of the initial state,
the incoming beam particles $A$ and $B$ are decomposed in
their associated parton content at the initial
resolution scale $Q_0^2$ at time $t=t_0\equiv0$,
and the subsequent dynamical development of the
system for $t > t_0$
can now be traced according to the kinetic equations
(\ref{e1})-(\ref{e4}).
In the kinetic approach, the space-time evolution of parton densities
may be described in terms of
parton cascades. Fig. 4 illustrates a typical parton cascade sequence.
It is important to realize that, in general,
there can be many such cascade sequences, which are interconnected
and evolve simultaneously (typical for hadron-nucleus or nucleus-nucleus
collisions).
\begin{figure}
\epsfxsize=450pt
\centerline{ \epsfbox{a017.ps} }
\caption{
Schematical illustration of
a typical parton cascade development initiated by a
collision of two partons $a$ and $b$.
The incoming primary parton $a$ evolves through
a space-like cascade from the
initial resolution scale $Q_0^2$ up to
the scale $Q_{a b}^2$ at which it collides with parton $b$.
The outgoing partons $c$ and $d$ both initiate time-like
cascades which are described as a combination of
multiple branchings (or emissions), fusions (or absorptions),
or secondary scatterings (or rescatterings).
A particular branch in a cascade tree terminates
(locally in space-time), when the partons in that branch
recombine with neighboring ones to pre-hadronic clusters and
their subsequent decay into hadrons.
\label{fig:fig4}
}
\end{figure}
Each cascade can be subdivided into elementary
$2 \rightarrow 2$ scatterings, $1 \rightarrow 2$ branchings (emissions), and
$2\rightarrow 1$ fusions (absorptions).
In Fig. 4, a primary parton $a$ that originates from one of the incident
nuclei, collides with another parton $b$ with some momentum transfer
characterized by the scale $Q_{a b}^2$ at the collision vertex.
The parton $a$ has evolved from the initial scale $Q_0^2$, at which it was
resolved in its parent hadron or nucleus,
up to $Q_{a b}^2$ by successive space-like
branchings. From the scattering of $a$ and $b$ the partons $c$ and $d$ emerge,
both of which can initiate sequences of time-like branchings. These newly
produced partons can themselves branch,
rescatter, or undergo fusion with other partons.
In practice the dynamical interplay of $2 \rightarrow 2$, $2 \rightarrow 1$
and $1 \rightarrow 2$ processes can be simulated
as follows:
Since the time evolution of the parton system is described in
small discrete time steps $\Delta t = O(10^{-3} fm)$,
the time scale of an interaction
process is compared with $\Delta t$ to decide whether the
interaction occurs within this time interval.
A virtual parton
$e^\ast$ with momentum $k^\mu$, $k^2\ne 0$,
that is produced via $a + b \rightarrow e^\ast$, has a short
life-time $\tau_{e^\ast}\propto 1/\sqrt{|k^2|}$
if its invariant mass $|k^2|$ is large, and is then likely
to decay within the time step, i.e. $\gamma \tau_{e^\ast} < \Delta t$
(the full formulae are derived in \cite{ms1}).
Therefore, the $2 \rightarrow 2$ process
$a + b \rightarrow e^\ast \rightarrow c + d$ preferably occurs.
On the other hand, if $|k^2|$ is small, corresponding to a long
$\tau_{e^\ast}$, parton $e^\ast$ is likely not to decay within
the time step $\Delta t$
and the fusion process $a + b \rightarrow e^\ast$ rather happens.
In this case, $e^\ast$ propagates as a quasi-stable particle until in
the following time step it has an increased decay probability \cite{ms1}.
This may then result in the
$1 \rightarrow 2$ decay $e^\ast \rightarrow c + d$. Alternatively,
$e^\ast$ might collide with another parton before its decay or
may propagate freely until the following time step, and so on.
In this manner the elementary $2\rightarrow 2$, $2\rightarrow 1$,
and $1\rightarrow 2$ processes are treated on equal footing
and double counting is prevented:
either a $2\rightarrow 2$, or a $2\rightarrow 1$,
and possibly a subsequent $1\rightarrow 2$ process can occur,
but not both. The relative probability is determined by
the uncertainty principle, i.e. by relating the momentum scale of
the process to the time scale as explained.
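The competition between $2\rightarrow 2$ and $2\rightarrow 1$ can thus be decided by comparing the dilated lifetime of the intermediate state with the time step. A schematic Python fragment (the proportionality constant in $\tau_{e^\ast}\propto 1/\sqrt{|k^2|}$ is set to unity here for illustration; the full formulae are given in \cite{ms1}):

```python
import math
import random

HBARC = 0.197  # GeV * fm

def fusion_or_scattering(k2, gamma, dt, rng):
    """Return '2->2' if the intermediate e* (virtuality k2 in GeV^2) decays
    within the time step dt (fm); else '2->1' (the fused state survives the step)."""
    tau = gamma * HBARC / math.sqrt(abs(k2))   # dilated lifetime in fm (constant set to 1)
    p_decay = 1.0 - math.exp(-dt / tau)        # decay probability within dt
    return "2->2" if rng.random() < p_decay else "2->1"
```

Large $|k^2|$ makes the decay within $\Delta t$ almost certain, so the scattering channel dominates; small $|k^2|$ leaves a quasi-stable fused parton.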
For the collisional processes $2\rightarrow 2$ and $2 \rightarrow 1$,
the statistical occurrence is determined by the 2-body cross-section
in Born approximation, with higher-order inelastic
corrections effectively included in the parton evolution
before and after each collision.
This parton evolution is calculated
using the well-known jet calculus \cite{jetcalc,bassetto}
based
on the `modified leading logarithmic approximation' (MLLA)
to the QCD evolution of hard processes \cite{dokbook,dok88}.
Each individual parton collision then factorizes into an elementary
2-body collision, in which the in- and out-going partons undergo
an ordered sequence
of elementary branchings $a\rightarrow b+c$, accounting for
higher-order radiative corrections to the lowest-order Born approximation.
These branchings can be described stochastically as a
Markov cascade in position and momentum space.
One distinguishes initial-state, {\it space-like} branchings
of the two partons entering a collision vertex,
and final-state, {\it time-like} radiation off the
collided partons after the collision.
The specific feature of the present approach is that, in addition to the
definite
virtuality and momentum, each elementary vertex has a certain space and
time
position which is obtained by assuming that the partons in the cascade
propagate on
straight-line trajectories in between their interactions.
In the MLLA framework, the
basic properties of both space-like and time-like showers are
determined by the DGLAP equations \cite{DGLAP}, but with essential
differences in time ordering, kinematics and the treatment of
infrared singularities associated with soft gluon emission.
\medskip
\subsubsection{Elementary parton-parton collisions}
For the elementary parton scatterings $a + b \rightarrow c + d$,
and fusion processes $a + b \rightarrow c^\ast$,
two distinct classes of processes
are considered:
\begin{description}
\item[(i)] truly perturbative QCD {\it hard} parton collisions
with a sufficiently large momentum transfer $p_\perp^2$ or invariant mass
$\hat s$;
\item[(ii)]
phenomenological treatment of {\it soft} parton collisions
\footnote{The inclusion of soft parton collisions is optional in the program,
and by default is switched off.}
with low momentum
transfer $p_\perp^2$ or invariant mass $\hat s$.
\end{description}
The motivation for this differentiation of parton-parton collisions
is to regulate the singular behavior of the collision integrals
in (\ref{e1}) which results from the divergence of the
associated Born-amplitudes squared
for small momentum transfers $Q^2$ (except for $\hat s$ channel
processes such as $q \bar q$ annihilation).
To render the parton-parton cross-sections finite, an invariant
{\it hard-soft division scale} $p_0^2$ is introduced such that
collisions occurring at a momentum scale $Q^2 \geq p_0^2$
are treated as perturbative (semi)hard collisions,
whereas for those with $Q^2 < p_0^2$
a soft, non-perturbative interaction is assumed to occur. That is, the total
parton-parton cross-section for a collision
between two partons $a$ and $b$ is represented as
\begin{equation}
\hat \sigma_{a b} ( \hat s ) \;=\;
\sum_{c, (d)} \, \left\{ \,\frac{}{}
\int_{0}^{p^2_0} d Q^2
\left(
\frac{d \hat \sigma_{a b \rightarrow c (d)}^{soft}}{d Q^2}
\right)
\;+\;
\int_{p^2_0}^{\hat s} d Q^2
\left(
\frac{d \hat \sigma_{a b \rightarrow c (d)}^{hard}}{d Q^2}
\right)
\, \right\}
\;\;\;,
\label{sigma}
\end{equation}
where
\begin{eqnarray}
Q^2 \,&\equiv& \,Q^2(\hat{s},\hat{t},\hat{u}) \;=\; \left\{
\begin{array}{cl}
p_\perp^2 \simeq -\hat{t} (-\hat{u})
& \;\mbox{for} \;\hat{t} (\hat{u}) \; \mbox{channel}
\\
m^2 = \hat{s}
& \;\mbox{for} \;\hat{s} \;\mbox{channel}
\end{array}
\right.
\\
\hat s = (p_a + p_b)^2
& &
,\;\;\;\;\;\;\;\;\;\;
\hat t = (p_a - p_c)^2
\;,\;\;\;\;\;\;\;\;\;\;
\hat u = (p_a - p_d)^2
\end{eqnarray}
That is, the scale $Q^2$ of the collision is set by
$p_\perp^2$ for scattering processes $a+b\rightarrow c+d$ in the $\hat t$
($\hat u$) channel, and by $\hat s$ for annihilation processes
$a+b\rightarrow c+d$ and fusion processes $a+b\rightarrow c$ in the $\hat s$ channel.
The sum over $c$, $(d)$ corresponds to summing over all possible reaction
channels (processes).
The specific value of $p_0$ is generally dependent on the beam energy
$E_{cm}=\sqrt{(P_A+P_B)^2}$ of the colliding hadrons (nuclei) $A+B$,
as well as on the mass numbers $A$ and $B$. It is treated as a parameter
that is determined by existing experimental data. A suitable form is
\begin{equation}
p_0\;\,\equiv\;\,
p_0(E_{cm},A,B)\;=\; \frac{a}{4}\;\left(\frac{2 E_{cm}/\mbox{GeV}}
{A+B}\right)^b
\;\;\;\;\;\;\;\;
(a = 2.0 \;\mbox{GeV} , \;\;\;\;\;b= 0.27)
\;.
\end{equation}
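As a numerical illustration (function name ours), the division scale for a nucleon-nucleon collision at $E_{cm}=200$ GeV comes out at about $2.09$ GeV:

```python
def p0_division_scale(e_cm, A, B, a=2.0, b=0.27):
    """Hard-soft division scale p0(E_cm, A, B) in GeV, with E_cm in GeV."""
    return (a / 4.0) * (2.0 * e_cm / (A + B)) ** b
```

At fixed $E_{cm}$ the scale decreases with growing mass numbers, shifting a larger fraction of the parton collisions into the soft sector.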
The combination of the hard and soft
contributions renders the parton cross-section
finite and well defined for {\it all} $Q^2$.
It must be emphasized that the
effect of soft parton collisions on the global dynamics is not essential
for collisions involving only leptons and/or hadrons,
but plays an important role
in hadron-nucleus or nucleus-nucleus collisions.
The soft parton collisions
naturally involve comparably small momentum transfer, so that their contribution
to transverse energy production is small, but the effect on soft particle
production is significant.
In accord with the two-component structure of the cross-section (\ref{sigma}),
these parton collisions are distinguished depending on the momentum transfer:
\begin{description}
\item[(i)]
{\bf Hard collisions above} {\boldmath $p_{0}$}:
\smallskip
For the {\it perturbative (semi)hard collisions} above
$p_{0}$, the momentum scale $Q^2$ is determined by the
corresponding differential cross-sections
\begin{equation}
\frac{d \hat \sigma_{ab \rightarrow cd}^{hard}}
{d Q^2}
\,=\,
\frac{1}{16 \pi \hat s^2} \,
\left| \overline{M}_{ab \rightarrow cd} \right| ^2
(\hat s, Q^2)
\;\;\propto \frac{\pi \alpha_s^2(Q^2)}{Q^2}
\;,
\label{sigh}
\end{equation}
where $| \overline{M} | ^2$
is the process-dependent spin- and color-averaged squared matrix element
in Born approximation, published in
the literature (see e.g. Refs. \cite{sig1} and for
massive quarks Refs. \cite{sig2}).
\medskip
\item[(ii)]
{\bf Soft collisions below} {\boldmath $p_{0}$}:
\smallskip
In the case of a non-perturbative soft collision between two partons
it is assumed that a very low energy double gluon
exchange occurs. This provides a natural continuation
to the harder collisions above
$p_{0}$ where the dominant
one gluon exchange processes
$g g \rightarrow g g$, $g q \rightarrow g q$ and
$q q \rightarrow q q$ have the same overall structure \cite{gustaf82}.
A simple, and physically plausible, form for the soft cross-section
that continues the hard cross-section for $Q^2$ below $p_0$ down to $Q^2 = 0$
may be modelled by introducing a screening mass term $\mu^2$ in the
denominator of (\ref{sigh}),
\begin{equation}
\frac{d \hat \sigma_{ab \rightarrow cd}^{soft}}
{d Q^2}
\;\propto\;
\frac{2\pi \alpha_s^2(p_0^2)}{Q^2 + \mu^2}
\label{sigs}
\end{equation}
where $\mu$ is a phenomenological parameter that
governs the overall magnitude of the integrated
$\sigma^{soft} \propto \ln[(p_0^2+\mu^2)/\mu^2]$.
The value of $\mu$
is not known precisely, but it can be estimated to be in the
range 0.3 - 1 GeV.
\end{description}
Notice that
both hard and soft scatterings are treated in this approach on completely equal footing.
That is, with the four momenta of the incoming partons known, the momentum
transfer and scattering angle are sampled from the respective
differential cross-sections $d\hat \sigma / d\hat t$
(\ref{sigh}) and (\ref{sigs}), assuming azimuthal symmetry of the
scattering geometry.
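For the soft part, the $Q^2$ sampling can be done by inverse transform, since $d\hat\sigma^{soft}/dQ^2 \propto 1/(Q^2+\mu^2)$ integrates to a logarithm. A sketch (function name ours):

```python
import random

def sample_soft_Q2(p0, mu, rng):
    """Inverse-transform draw of Q^2 from ~ 1/(Q^2 + mu^2) on [0, p0^2] (GeV^2)."""
    u = rng.random()
    return mu * mu * ((1.0 + (p0 / mu) ** 2) ** u - 1.0)
```

At $u=0$ this gives $Q^2=0$ and for $u\rightarrow 1$ it approaches $p_0^2$, so the soft piece joins continuously onto the hard piece at the division scale.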
\medskip
\subsubsection{Space-like parton evolution}
\noindent
Space-like parton cascades are associated with
QCD radiative corrections due to bremsstrahlung emitted
by primary space-like partons, which are
contained in the initial-state hadrons (nuclei)
and which encounter their very
first collision. Thereby such a primary, virtual parton is
struck out of the coherent hadron (nucleus)
wavefunction and materializes as a real excitation.
A typical space-like cascade is illustrated in the left part of Fig. 4,
where the parton $a_0$, originating from the initial hadron (nucleus) state,
undergoes successive space-like branchings $a_0 \rightarrow a_1 a_1'$,
$a_1 \rightarrow a_2 a_2'$, ..., $a_{n-1} \rightarrow a_n a_n'$ to become
the parton $a \equiv a_n$ which then actually collides with another parton $b$.
The branching chain proceeds by increasing the virtualities
$|q_i^2|$ of the partons $a_i$ in
the cascade, starting from $a_0$ with $|q_0^2| \simeq Q_0^2/4$
up to $|q^2| \simeq Q^2$
the space-like virtuality of the scattering parton $a$,
where $Q^2 \equiv Q_{ab}^2$ sets the scale of the collision
between partons $a$ and $b$, and hence for the evolution from $Q_0^2$ to $Q^2$.
The emitted partons $a_i'$ on the side branches with momenta $k_i$
on the other hand have, due to energy-momentum
conservation at the branching vertices, time-like virtualities and
each of them can initiate a time-like cascade, discussed afterwards.
The proper inclusion of both hard collinear and soft coherent branchings is
achieved by describing the space-like cascade in both space-time {\it and}
momentum space by using
a so-called {\it angular-ordered} evolution variable
$\tilde{q}_j^2$ (rather than the virtualities $|q_j^2|$)
\cite{webber88}
\begin{equation}
\tilde{q}_j^2 \;\equiv E_j^2\;\zeta_{j+1}
\;\;,\;\;\;\;\;\;\;\;\;\;\;
\zeta_{j+1} \;=\;\frac{q_0\cdot k_{j+1}}{\omega_0 \;E_{j+1}}
\;\simeq \;
1 - \cos \theta_{0,\;j+1}
\;\;\;\;\;\;\;\;\;(0 \le j \le n)
\;,
\label{tildep}
\end{equation}
where $q_j=(\omega_j,\vec{q}_j)$ and
$k_{j+1}=(E_{j+1},\vec k_{j+1})$ refer to
the $j^{th}$ branching $a_j \rightarrow a_{j+1} a_{j+1}'$ with
momentum assignment
$q_j\rightarrow q_{j+1} k_{j+1}$ (see Fig. 4).
The space-like cascade is then strictly ordered in the variable
$\tilde{q}_{j+1}^2 > \tilde{q}_j^2$, which is equivalent to the
ordering of emission angles,
$\omega_j \theta_{0, \;j+1^\prime} < \omega_{j+1} \theta_{0, \;j+2^\prime}$.
The space-like cascade terminates with parton $a_n \equiv a$ entering
the vertex of collision with parton $b$, that is, $Q_{ab}^2$ in Fig. 4.
The history of parton $a$ is however not known until after
it has collided with parton $b$, because it is this very collision
that causes the cascade evolution of parton $a$.
Therefore one must reconstruct the cascade {\it backwards} in
time starting from the time of the collisions at the vertex $Q_{ab}^2$ and
trace the history of the struck parton $a$ back to
the initial state at time $t_0 =0$ at which it was
originally resolved with $Q_0^2$ in its hadron (nucleon) mother.
The method used here is a space-time generalization of the
`backward evolution scheme' \cite{backevol}.
To sketch the procedure, consider the space-like branching
$q_{n-1} \rightarrow q_n k_n$ which is closest to the
collision vertex $Q_{ab}^2$ in Fig. 4.
The virtualities satisfy \cite{bassetto}
$\vert q_n^2 \vert > \vert q_{n-1}^2 \vert$, and $q_{n}^2, q_{n-1}^2 <
0$
(space-like) but $k_n^2 > 0$ (time-like).
The relative probability for a branching to occur between
$\tilde{q}^2$
and $\tilde{q}^2 + d\tilde{q}^2$ is given by
\begin{eqnarray}
d {\cal P}_{n-1,\,n}^{(S)} ( x_{n-1}, x_{n}, \tilde{q}^2;\,\Delta t)
&=&
\frac{d \tilde{q}^2}{\tilde{q}^2}\, \frac{d z}{z}
\,
\frac{\alpha_s\left((1-z) \tilde{q}^2\right)}{2 \pi} \,
\,
\gamma_{n\mbox{-}1 \rightarrow n n^\prime} (z)
\;\;
\left(
\frac{F(r_{n-1}; x_{n-1}, \tilde{q}^2)}
{F(r_n; x_n, \tilde{q}^2)}
\right)
\;\,{\cal T}^{(S)}(\Delta t)
\;,
\label{PS}
\end{eqnarray}
where $x_j = (q_j)_z / P_z$ ($j=n, n-1$) are the
fractions of longitudinal momentum $P_z$ of the initial mother hadron (nucleon),
with $F(r_j;x_j, \tilde{q}^2)\equiv F(r_j,q_j)$
the corresponding parton distributions
defined by (\ref{F}),
and the variables
\begin{equation}
z \;=\;\frac{E_n}{E_{n-1}} \;\simeq \;\frac{x_n}{x_{n-1}}
\;\;\;,\;\;\;\;\;\;\;\;
1\;-\;z \;=\;\frac{E^\prime_n}{E_{n-1}} \;\simeq \;
\frac{x_{n-1}-x_{n}}{x_{n-1}}
\label{tildez}
\end{equation}
specify the fractional energy or longitudinal
momentum of parton $n$ and $n^\prime$, respectively, taken away from
$n-1$.
The function $\alpha_s/(2\pi)\, \gamma (z)$
is the usual DGLAP branching probability \cite{dok80,dokbook},
with $\gamma (z)$ giving the energy distribution in the variable $z$.
The last factor
${\cal T}^{(S)}(\Delta t)$
in (\ref{PS}) determines the time interval in the global $cm$-frame,
$\Delta t = t_n - t_{n-1}$, that is associated with the branching
process
$a_{n-1}\rightarrow a_n a_n'$.
It accounts for the formation time of $a_n$ from $a_{n-1}$ on
the basis of the uncertainty principle:
$\Delta t = \Delta \omega/|q_n^2|$,
$\Delta \omega \simeq (x_n - x_{n-1})\,P_z$.
A very simple form is taken here,
\begin{equation}
{\cal T}^{(S)}(\Delta t)
\;=\; \delta\left( \frac{x_n - x_{n-1}}{\vert q_n^2\vert}\, P_z
\;-\; \Delta t\right)
\label{delts}
\;.
\end{equation}
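The delta function (\ref{delts}) simply fixes the global-frame time interval of the branching; with $P_z$ and $q_n^2$ in GeV units, a conversion by $\hbar c$ yields $fm$ (sketch, names ours; the absolute value is used so the interval is positive regardless of the sign convention for $x_n - x_{n-1}$):

```python
HBARC = 0.197  # GeV * fm

def formation_time_fm(x_n, x_prev, q2_n, p_z):
    """dt = |x_n - x_{n-1}| P_z / |q_n^2|, cf. eq. (delts), converted to fm."""
    return HBARC * abs(x_n - x_prev) * p_z / abs(q2_n)
```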
The backwards evolution of the space-like branching
$q_{n-1} \rightarrow q_n + k_n$ is expressed in terms of the
probability that parton $a_{n-1}$ did {\it not} branch between the
lower bound $\tilde{q}_0^2$, given by the initial resolution scale
$Q_0^2$, and
$\tilde{q}_n^2 \equiv \tilde{q}^2 \simeq Q_{ab}^2$.
In that case, parton $n$ can {\it not} originate from this branching,
but must have been produced otherwise or already been present in
the initial parton distributions.
This non-branching probability is given by the
{\it Sudakov form-factor for space-like branchings}:
\begin{equation}
S_n ( x_{n}, \tilde{q}^2, \tilde{q}_0^2;\,\Delta t)
\;=\;
\exp
\left\{
\,-\, \sum_{a}
\,
\int_{\tilde{q}_0^2}^{\tilde{q}^2}
\,
\int_{z_- (\tilde{q}^{\prime})}^{z_+ (\tilde{q}^{\prime})}
\,
d {\cal P}_{n,\,n-1}^{(S)} ( x_n, z, \tilde{q}^{\prime 2}; \,\Delta t)
\right\}
\;,
\label{slff2}
\end{equation}
where the sum runs over the possible species $a = g, q, \bar q$
of parton $a_{n-1}$.
The upper limit of the $\tilde{q}^2$-integration is set by
$\tilde{q}^2 \, \lower3pt\hbox{$\buildrel <\over\sim$}\,Q_{ab}^2$,
associated with the collision vertex with parton $b$.
The limits $z_\pm$ are determined by kinematics \cite{webber88}:
$z_-(\tilde{q})= Q_0/\tilde{q}$ and
$z_+(\tilde{q})= 1 - Q_0/\tilde{q}$.
The knowledge of the space-like formfactor
$S_n ( x_{n}, \tilde{q}^2, \tilde{q}_0^2;\,\Delta t)$
is sufficient to trace
the evolution of the branching closest to the hard vertex backwards
from $q_n^2$ at $t=t_n$, the time of collision in the global $cm$-frame, to
$q_{n-1}^2$ at $t_{n-1}= t_n- x_n/|q_n^2|\,P_z$.
The next preceding branchings $q_{n-2}\rightarrow q_{n-1}
k_{n-1}$, etc., are then reconstructed in exactly the same manner
with the replacements $t_n \rightarrow t_{n-1}$,
$x_n \rightarrow x_{n-1}$, $q_n^2 \rightarrow q_{n-1}^2$, and so forth,
until the initial point $q_0^2$ at $t_0 = 0$ is reached.
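The backward step itself amounts to solving $S(\tilde q^2_{max})/S(\tilde q^2)=r$ for the scale of the nearest preceding branching, with $r$ uniform in $(0,1)$. The toy form factor below replaces the full $z$- and flavor-integrated kernel of (\ref{slff2}) by a single effective constant $c_{eff}$ (a deliberate simplification; names ours):

```python
import random

def sample_backward_scale(q2_max, q2_min, c_eff, rng):
    """Scale of the nearest backward branching for a toy Sudakov
    S(q2) = (q2_min / q2)**c_eff; None means no branching above q2_min."""
    r = rng.random()
    q2 = q2_max * r ** (1.0 / c_eff)   # solves S(q2_max)/S(q2) = r
    return q2 if q2 > q2_min else None
```

Repeating the draw with $q^2_{max}\rightarrow q^2$ reconstructs the chain step by step down to the initial scale $Q_0^2$.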
\medskip
\subsubsection{Time-like parton evolution}
\noindent
Time-like parton cascades are initiated by secondary partons
that emerge either from the side-branches of a
preceding space-like or time-like cascade,
or directly from a scattering or fusion process.
Consider the time-like cascade
initiated by the parton $c$ in the right part of Fig. 4,
with momentum $k = k_n$.
This parton has been produced in the collision
$a +b \rightarrow c + d$ with a time-like off-shellness
$k_n^2 \simeq Q_{ab}^2$.
Again an {\it angular-ordered} (rather than virtuality-ordered)
evolution in space-time {\it and} momentum-space
of the cascade is employed to incorporate interference effects of soft
gluons emitted along the time-like cascade tree of Fig. 4,
$c \equiv c_m \rightarrow c_{m-1} c_{m-1}'$, ...,
$c_{1} \rightarrow c_0 c_0'$.
In contrast to (\ref{tildep}), the time-like version of the angular
evolution variable is \cite{webber86}
\begin{equation}
\tilde{k}_j^2 \;\equiv E_j^2\;\xi_{j-1}
\;\;,\;\;\;\;\;\;\;\;\;\;\;
\xi_{j-1} \,=\,
\frac{k_{j-1} \cdot k^\prime_{j-1}}{E_{j-1} E^\prime_{j-1}} \,\simeq \,
1 - \cos \theta_{(j\mbox{-}1), (j\mbox{-}1)^\prime}
\;\;\;\;\;\;\;\;\;(m \ge j \ge 1)
\;,
\label{tildek}
\end{equation}
so that the time-like cascade can be described by
a $\tilde{k}^2$-ordered (rather than $k^2$-ordered) evolution,
which corresponds to an angular ordering with decreasing emission angles
$\theta_{j, j^\prime} > \theta_{(j\mbox{-}1), (j\mbox{-}1)^\prime}$.
Proceeding analogously to the space-like case (c.f. (\ref{PS})),
the probability $d {\cal P}_{m,\,m-1}^{(T)}$ for the
first branching after the collision vertex,
$k_m \rightarrow k_{m-1} k_{m-1}^\prime$
with time-like virtualities $k^2_{m-1}, k^{\prime \,2}_{m-1} > 0$,
is given by the space-time extension \cite{ms39,msrep}
of the usual DGLAP probability distribution \cite{dok80,dokbook},
\begin{equation}
d {\cal P}_{m,\, m-1}^{(T)} (z,\tilde{k}^2;\,\Delta t) \,=\,
\frac{d \tilde{k}^2}{\tilde{k}^2}\, dz
\;
\frac{\alpha_s( \kappa^2 )}{2 \pi} \,
\gamma_{m \rightarrow (m\mbox{-}1),(m\mbox{-}1)^\prime} (z)
\;\,
{\cal T}^{(T)} (\Delta t)\,
\; ,
\label{PT}
\end{equation}
where
${\cal T}^{(T)} (\Delta t)$ is the probability that parton $m$ with
virtuality
$k_m^2$ and corresponding proper lifetime $\tau_m \propto
1/\sqrt{k_m^2}$
decays within a time interval $\Delta t$,
\begin{equation}
{\cal T}^{(T)} (\Delta t)
\;=\; 1\; - \;\exp \left( - \frac{\Delta t}{t_m(k)}\right)
\label{deltt}
\;.
\end{equation}
The actual lifetime of the decaying parton $m$ in the global $cm$-frame
is then $t_m(k) = \gamma\,\tau_m(k)$, where
$t_q(k) \approx 3 E/(2 \alpha_s k^2)$ for quarks and
$t_g(k) \approx E/(2 \alpha_s k^2)$ for gluons \cite{ms3}.
As before,
$\alpha_s/(2\pi)\, \gamma(z)$ is the DGLAP branching kernel
with energy distribution $\gamma(z)$.
The probability (\ref{PT}) is formulated in terms of the energy
fractions carried by the daughter partons,
\begin{equation}
z \;=\; \frac{E_{m-1}}{E_m}
\;\;\;,\;\;\;\;\;\;\;\;
1 - z \; = \; \frac{E_{m-1}^\prime}{E_m}
\;\;\; ,
\end{equation}
with the virtuality $k_m$ of the parton $m$ related to $z$ and $\xi$
through
$k_m^2 = k_{m-1}^2 + k_{m-1}^{\prime \, 2} + 2 E_m^2 z ( 1 - z ) \xi$,
and the argument $\kappa^2$ in the running coupling $\alpha_s$ in
(\ref{PT})
is \cite{webber88}
$\kappa^2= 2 z^2 ( 1 - z )^2 E_m^2 \xi \simeq k_\perp^2$.
The branching probability (\ref{PT}) determines the distribution
of emitted partons in both coordinate and momentum space, because
the knowledge of four-momentum and lifetime (or $\Delta t$ between
successive branchings) give the spatial positions of the
partons, if they are assumed to propagate on straight paths between
the vertices.
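To make the above concrete, the lifetime factor (\ref{deltt}) and the branching kinematics can be transcribed into a few lines of Python (the program itself is Fortran 77; this is only an illustrative sketch, and the fixed value of $\alpha_s$ below is an assumption purely for demonstration):

```python
import math

# Illustrative sketch (not the VNI Fortran code) of the lifetime factor
# T(Delta t) and the branching kinematics; ALPHA_S is an assumed fixed value.
ALPHA_S = 0.3

def lifetime(E, k2, parton="q"):
    """Lorentz-boosted lifetime t(k) in the global cm-frame:
    t_q ~ 3E/(2 alpha_s k^2) for quarks, t_g ~ E/(2 alpha_s k^2) for gluons."""
    if parton == "q":
        return 3.0 * E / (2.0 * ALPHA_S * k2)
    return E / (2.0 * ALPHA_S * k2)

def branch_probability(E, k2, dt, parton="q"):
    """T(dt) = 1 - exp(-dt / t(k)): probability to decay within dt."""
    return 1.0 - math.exp(-dt / lifetime(E, k2, parton))

def parent_virtuality(km1sq, km1psq, E, z, xi):
    """k_m^2 = k_{m-1}^2 + k_{m-1}'^2 + 2 E_m^2 z(1-z) xi."""
    return km1sq + km1psq + 2.0 * E * E * z * (1.0 - z) * xi

def coupling_scale(E, z, xi):
    """kappa^2 = 2 z^2 (1-z)^2 E_m^2 xi, the argument of alpha_s in (PT)."""
    return 2.0 * z * z * (1.0 - z) ** 2 * E * E * xi
```

With the assumed coupling, a quark of $E = 10$ GeV and $k^2 = 4$ GeV$^2$ has a boosted lifetime $t_q = 12.5$ GeV$^{-1}$, roughly three times that of a gluon with the same kinematics.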
The probability that parton $m$ does {\it not} branch between
$\tilde{k}^2$ and
a minimum value $\tilde{k}^2_0 \equiv \mu_0^2$
is given by the exponentiation of (\ref{PT}),
yielding the {\it Sudakov form-factor for time-like branchings}:
\begin{equation}
T_m (\tilde{k}^2,\tilde{k}_0^2; \, \Delta t)
\;=\;
\exp
\left\{
\,-\, \int_{\tilde{k}_0^2}^{\tilde{k}^2}
\, \sum_a
\, \int_{z_{-}(\tilde{k}^\prime)}^{z_{+}(\tilde{k}^\prime)}
\;\,
d {\cal P}_{m,\,m-1}^{(T)} (z, \tilde{k}^{\prime\,2};\,\Delta t)
\right\}
\;,
\label{tlff2}
\end{equation}
which is summed over the species $a = g , q, \bar q$
of parton $m-1$.
The integration limits $\tilde{k}_0^2$ and $z_\pm$
are determined by the requirement that the branching must terminate
when the partons enter the non-perturbative regime and begin
to hadronize.
This condition can be parametrized by
the confinement length scale $L_c = O(1 \,fm)$ with
$\tilde{k}_0^2 \, \lower3pt\hbox{$\buildrel >\over\sim$}\, L_c^{-2}
\equiv \mu_0^2$, and
$
z_{-} ( \tilde{k}_m ) =
1 - z_{+} ( \tilde{k}_m ) =
\mu_0/\sqrt{4 \tilde{k}_m^2}
$,
so that for
$z_{+} ( \tilde{k}_0^2)=
z_{-} ( \tilde{k}_0^2) = 1/2$
the phase space for the branching vanishes.
The time-like form factor
$T_m (\tilde{k}^2,\tilde{k}_0^2; \, \Delta t)$
determines the four-momenta and
positions of the partons of a particular emission vertex
as sketched above for the first branching
from $k_m^2$ at $t=t_m$, the time of production of parton
$c$ in the global $cm$-frame,
to $k_{m-1}^2$ at $t_{m-1}= t_m+ E_m/|k_m^2|$.
Subsequent branchings are described
completely analogously by replacing $t_m\rightarrow t_{m-1}$,
$k_m^2 \rightarrow k_{m-1}^2$, etc.
Hence $T(\tilde{k}^2,\tilde{k}_0^2; \, \Delta t)$
generates a time-like cascade
as sequential branchings starting from $t=0$ at the
hard vertex forward in time, until the partons eventually hadronize
as discussed below.
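The interplay of the infrared cutoff, the phase-space limits $z_\pm$, and the exponentiated no-branching probability can be sketched as follows in Python (a toy version using a constant overestimate of the $z$-integrated splitting kernel, not VNI's actual Fortran implementation; the values of $\mu_0^2$ and $C$ are assumptions for illustration):

```python
import math, random

MU0SQ = 1.0  # infrared cutoff k~_0^2 = mu_0^2 in GeV^2 (assumed value)

def z_limits(kt2):
    """Phase-space limits: z_- = 1 - z_+ = mu_0 / sqrt(4 k~^2);
    both collapse to 1/2 at the cutoff k~^2 = mu_0^2."""
    zmin = math.sqrt(MU0SQ / (4.0 * kt2))
    return zmin, 1.0 - zmin

def next_branching_scale(kt2, C=0.6, rand=random.random):
    """Sample the scale of the next branching from a toy Sudakov factor
    exp(-C ln(k~^2 / k~'^2)), i.e. dP = C dk~'^2/k~'^2 with a constant
    overestimate C of the z-integrated kernel.  Returns None when the
    evolution falls below the cutoff (no further branching)."""
    new = kt2 * rand() ** (1.0 / C)
    return new if new > MU0SQ else None
```

Repeated calls to `next_branching_scale`, each starting from the previously returned scale, generate a $\tilde{k}^2$-ordered cascade that terminates at the cutoff, mirroring the sequential use of (\ref{tlff2}).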
\bigskip
\subsection{Cluster formation and hadronization}
In view of lack of knowledge about the details of confinement dynamics
and the non-perturbative hadronization mechanism, one must
rely at present on model-building.
In the present approach, the cluster-hadronization scheme of Refs.
\cite{ms37,ms40,ms41} is employed.
This phenomenological scheme is inspired by the Marchesini-Webber model
\cite{webber84}, but it works in space-time plus color-space.
It is, on the other hand, very different from
commonly used string-fragmentation models such as the Lund model \cite{string}.
In VNI,
both the cluster-formation from the collection
of quarks and gluons at the end of the perturbative phase
and the subsequent cluster-decay into final hadrons
consist of two components:
\begin{description}
\item[1.]
The recombination of the {\it secondary time-like partons},
their conversion into colorless
{\it parton clusters} and the subsequent
decay into secondary hadrons.
\item[2.]
The recombination of the {\it primary space-like partons} that remained
spectators throughout the collision development into {\it beam clusters}
and the fragmentation of these clusters.
\end{description}
The important assumption here is that the process of hadron formation
depends only on the local space-, time-, and color-structure of the
parton system,
so that the hadronization mechanism can be modelled as the formation of
color-singlet clusters of partons as independent entities
(pre-hadrons),
which subsequently decay into hadrons.
This concept is reminiscent of the `pre-confinement' property
\cite{preconf} of parton evolution,
which is the tendency of the produced partons
to arrange themselves in color-singlet clusters with limited
extension in both position and momentum space, so that it is
suggestive to suppose that these clusters are the basic units out of
which hadrons form.
\medskip
\subsubsection{Cluster formation}
\noindent
{\bf (i) Parton clusters:}
\smallskip
Parton clusters are formed from secondary partons, i.e. those
that have been produced by the hard interaction and the
parton shower development.
The coalescence of these secondary partons to color-neutral clusters
has been discussed in detail in Refs. \cite{ms37,ms40}.
Throughout the dynamically-evolving
parton cascade development, every parton and its nearest
spatial neighbour are considered as potential candidates for
a 2-parton cluster, which, if {\it color neutral}, plays the role
of a `pre-confined' excitation in the process of hadronization.
Within each single time step, the probability for parton-cluster
conversion is determined for each nearest-neighbor pair by
the requirement that the total color charge of the two partons must
give a composite color-singlet state (if necessary by accompanying
gluon emission), and the condition that
their {\it relative spatial distance} $L$ exceeds the critical
{\it confinement length scale} $L_c$.
The scale $L$ is defined as
the Lorentz-invariant distance $L_{ij}$ between parton $i$
and its nearest neighbor $j$:
\begin{equation}
L\;\equiv\; L_{ij} \;= \;
\mbox{min} (\Delta_{i 1}, \ldots , \Delta_{i j}, \ldots , \Delta_{i n})
\;,
\label{L}
\end{equation}
where $\Delta_{ij} = \sqrt{ (r_{ij})_0^2 +
(r_{ij})_x^2 + (r_{ij})_y^2 + (r_{ij})_z^2}$,
with $r_{ij}^\mu = r_i^\mu - r_j^\mu$,
and the probability for the coalescence of the two partons $i$, $j$ to
form
a cluster is modelled by a distribution of the form
\begin{equation}
\Pi_{ij\rightarrow c}\;\propto \; \left(
1\,-\, \exp\left(-\Delta F\;L_{ij}\right)
\right)
\;\,\simeq \;\,
1\;-\;\exp\left(\frac{L_0-L_{ij}}{L_c-L_{ij}} \right)
\;\;\;\;\;\mbox{if $L_0 \;<\;L_{ij}\;\le\;L_c$}
\;,
\label{Pi2}
\end{equation}
and $\Pi_{ij\rightarrow c}= 0\; (1)$ if $L_{ij} < L_0$ ($L_{ij} > L_c$).
Here $\Delta F$ is the local change in the free energy
of the system that is associated with the
conversion of the partons to clusters,
and the second expression on the right side is a parametrization
in terms of $L_0 = 0.6$ $fm$ and $L_c = 0.8$ $fm$
that define the transition regime.
As studied in Ref. \cite{ms40}, the aforementioned color constraint,
that only colorless 2-parton configurations may produce a cluster,
can be incorporated by allowing
coalescence for any pair of color charges, as determined by the
space-time separation $L_{ij}$ and the probability (\ref{Pi2}),
however, accompanied by the additional emission
of a gluon or quark that carries away any unbalanced net color
in the case that the two coalescing partons are not in a colorless
configuration.
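The geometry of cluster formation, Eqs. (\ref{L}) and (\ref{Pi2}), can be sketched as follows in Python (a toy transcription of the formulas in the text, with $L_0$ and $L_c$ as quoted above; this is not the program's Fortran routine):

```python
import math

L0, LC = 0.6, 0.8  # transition-regime scales in fm, as quoted in the text

def distance(ri, rj):
    """4-distance Delta_ij with the (Euclidean) signature used in Eq. (L)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ri, rj)))

def nearest_neighbour(i, positions):
    """L_i = min over j != i of Delta_ij, Eq. (L)."""
    return min(distance(positions[i], r)
               for j, r in enumerate(positions) if j != i)

def coalescence_probability(Lij):
    """Pi_{ij->c}: 0 below L0, 1 above Lc, smooth interpolation in between."""
    if Lij < L0:
        return 0.0
    if Lij >= LC:
        return 1.0
    return 1.0 - math.exp((L0 - Lij) / (LC - Lij))
```

The interpolation vanishes at $L_{ij} = L_0$ and saturates to unity as $L_{ij} \rightarrow L_c$, so coalescence switches on smoothly across the confinement transition regime.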
\medskip
\noindent
{\bf (ii) Beam clusters:}
\smallskip
If one or both of the colliding beam/target particles $A$ and $B$ are
a hadron or a nucleus, one, respectively two, beam clusters
are formed from the spectator partons that represent
the receding beam/target remnants of the original particles $A$ and $B$.
More precisely,
the remaining fraction of the longitudinal momentum and energy that has
not materialized or been redirected
and harnessed during the course of the collision, is
carried by those primary partons of the initial-state hadrons or nuclei, which
remained spectators throughout. In the present approach
these partons maintain their originally assigned momenta
and their space-like virtualities. Representing the beam remnants
of $A$ and/or $B$, they
may be pictured as the coherent relics of the original
hadron (or nucleus) wavefunctions.
Therefore the primary virtual partons must be treated differently
than the secondary partons which are real excitations that contribute
incoherently to the hadron yield.
In the global $cm$-frame, the primary partons are
grouped together to form a massive beam cluster with its four-momentum
given by the sum of the parton momenta and its position given by
the 3-vector mean of the partons' positions.
\bigskip
\subsubsection{Hadronization of clusters}
\noindent
{\bf (i) Parton clusters:}
\smallskip
For the decay of each parton cluster into final-state hadrons,
the scheme presented in detail in Refs. \cite{ms37}
is employed:
If a cluster is too light to
decay into a pair of hadrons, it is taken to represent
the lightest single meson that corresponds to its
partonic constituents. Otherwise, the cluster
decays isotropically in its rest frame into
a pair of hadrons, either mesons or baryons, whose combined
quantum numbers correspond to its partonic constituents.
The corresponding decay probability is chosen to be
\begin{equation}
\Pi_{c\rightarrow h}\;=\;
\;{\cal T}_c(E_c,m_c^2) \;
\;\,{\cal N}
\int_{m_h}^{m_c}
\frac{dm}{m^3}\;\exp\left(-\frac{m}{m_0}\right)
\;,
\label{pi3}
\end{equation}
where ${\cal N}$ is a normalization factor,
and the integrand is a Hagedorn spectrum \cite{hagedorn} that
parametrizes quite well
the density of accessible hadronic states below $m_c$ which are
listed in the particle data tables, and $m_0 = m_{\pi}$.
In analogy to (\ref{deltt}), ${\cal T}_c$ is a
life-time factor giving the probability
that a cluster of squared mass $m_c^2$ decays within
a time interval $\Delta t$ in the global $cm$-frame,
\begin{equation}
{\cal T}_c(E_c,m_c^2)\;=\; 1\;-\;
\exp\left( - \frac{\Delta t}{t_c(E_c,m_c^2)}\right)
\;,
\label{lft2}
\end{equation}
with the Lorentz-boosted life time
$t_c= \gamma_c \tau_c\simeq E_c/m_c^2$.
In this scheme, a particular cluster
decay mode is obtained from (\ref{pi3})
by summing over all possible decay channels,
weighted with the appropriate spin, flavour, and
phase-space factors, and then choosing the actual decay mode
according to the relative probabilities of the channels.
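A minimal numerical sketch of the decay probability (\ref{pi3}) in Python (the trapezoidal integration, the default normalization ${\cal N} = 1$, and the sample arguments are illustrative assumptions, not the program's actual treatment):

```python
import math

M_PI = 0.140  # m_0 = m_pi in GeV

def hagedorn_integral(m_h, m_c, steps=1000):
    """Trapezoidal integral of exp(-m/m_0)/m^3 from m_h to m_c,
    the Hagedorn-type spectrum entering Eq. (pi3)."""
    if m_c <= m_h:
        return 0.0
    h = (m_c - m_h) / steps
    f = lambda m: math.exp(-m / M_PI) / m ** 3
    s = 0.5 * (f(m_h) + f(m_c)) + sum(f(m_h + i * h) for i in range(1, steps))
    return s * h

def cluster_decay_probability(E_c, mc2, dt, m_h, norm=1.0):
    """Pi_{c->h} = T_c * N * integral, with the boosted lifetime
    t_c = gamma_c tau_c ~ E_c / m_c^2 and T_c = 1 - exp(-dt/t_c)."""
    T_c = 1.0 - math.exp(-dt * mc2 / E_c)
    return T_c * norm * hagedorn_integral(m_h, math.sqrt(mc2))
```

The steeply falling $1/m^3$ weight strongly favours decays into the lightest accessible hadron pairs, while the lifetime factor suppresses the decay of highly boosted clusters within a given time step.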
\medskip
\noindent
{\bf (ii) Beam clusters:}
\smallskip
The fragmentation of the beam clusters containing the spectator partons
mimics in the present model what is commonly termed the `soft underlying event',
namely, the emergence of those final-state hadrons that are
associated with the non-perturbative physics which underlies the
perturbatively-accessible dynamics of the hard interaction
with parton shower fragmentation.
In the spirit of the Marchesini-Webber model \cite{webber88},
a version (suitably modified
for the present purposes) of the soft hadron production model
of the UA5 collaboration \cite{UA5} is employed, which is based on a
parametrization of the CERN $p\bar p$ collider data for minimum-bias hadronic
collisions.
The parameters involved in this model are set to give a good
agreement with those data.
Soft hadron production is known to be a universal mechanism
\cite{Mdiff} that is
common to all high-energy collisions that involve beam hadrons in the
initial state,
and that depends essentially on the total energy-momentum of
the fragmenting final-state beam remnant.
Accordingly, one may assume that the fragmentation of the final-state
beam cluster depends solely on its invariant mass $M$, and that it
produces a charged-
particle multiplicity with a negative binomial distribution \cite{UA5},
\begin{equation}
P(n) \;=\; \frac{\Gamma(n+k)}{n! \Gamma(k)} \;
\frac{ (\overline{n}/k)^n}{(1 + \overline{n}/k)^{n+k}}
\label{ndist}
\;,
\end{equation}
where the mean charged multiplicity
$\overline{n}\equiv\overline{n}(M^2)$
and the parameter $k\equiv k(M^2)$
depend on the invariant cluster mass
\footnote{
Notice that in this model $M$ fluctuates statistically, as a result
of fluctuations of the initial-state parton configuration in
the incoming hadrons (or nuclei), as well as
due to the fluctuating number of remnant partons during the space-time
evolution.
Hence the distribution (\ref{ndist}) and the mean multiplicity
(\ref{nk})
vary from event to event. This is in contrast to the original UA5 model,
in which the fixed beam energy $\sqrt{s}/2$ controls the energy
dependence of
soft hadron production.
}
according to the
following particle data parametrization \cite{UA5},
\begin{equation}
\overline{n}(M^2) \;=\; 10.68 \; (M^2)^{0.115} \;-\; 9.5
\;\;\;\;\;\;\;\;\;
k(M^2) \;=\; 0.029 \; \ln(M^2) \;-\; 0.064
\label{nk}
\;.
\end{equation}
Adopting the scheme of Marchesini and Webber \cite{webber88}, the
fragmentation of a beam cluster of mass $M$ proceeds then as
follows:
First, a particle multiplicity $n$ is chosen from (\ref{ndist}), and the
actual charged particle multiplicity is taken to be $n$ plus the
modulus of the beam cluster charge.
Next, the beam cluster is split into
sub-clusters $(q_1 \bar{q}_2), (q_2 \bar{q}_3), \ldots$ ($q_i = u, d$),
which are
subsequently hadronized in the beam cluster rest frame,
in the same way as the parton clusters described above.
To determine the sub-cluster momenta, the following mass distribution
is assumed,
\begin{equation}
P(M) \;=\; c \; (M-1) \; \exp \left[ -a (M-1) \right]
\label{mdist}
\;,
\end{equation}
with $c$ a normalization constant and $a = 2$ GeV$^{-1}$, resulting
in an average value of $\langle M \rangle \approx 1.5$ GeV.
The transverse momenta are taken from the distribution
\begin{equation}
P(p_\perp) \;=\; c^\prime \; p_\perp \;
\exp \left[ -b \sqrt{p_\perp^2 +M^2}\right]
\label{pTdist}
\;,
\end{equation}
with normalization $c^\prime$ and slope parameter
$b = 3$ GeV$^{-1}$, and
the rapidities $y$ are drawn from a simple flat distribution
$P(y) \propto const.$ with an extent of
0.6 units and Gaussian tails with 1 unit standard deviation at
the ends.
Finally, all hadronization products of the sub-clusters are
boosted from the rest frame of the original beam cluster back into the
global $cm$-frame.
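The fragmentation inputs (\ref{ndist})--(\ref{pTdist}) can be sketched in Python as follows; the gamma-mixed-Poisson construction of the negative binomial and the simple rejection sampling are illustrative choices, not the program's actual method:

```python
import math, random

def nbar(M2):
    """Mean charged multiplicity, Eq. (nk)."""
    return 10.68 * M2 ** 0.115 - 9.5

def k_par(M2):
    """Negative binomial parameter k, Eq. (nk); positive only for
    sufficiently large M^2."""
    return 0.029 * math.log(M2) - 0.064

def _poisson(lam, rng):
    """Knuth's method; adequate for the modest multiplicities needed here."""
    if lam <= 0.0:
        return 0
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def sample_multiplicity(M2, rng):
    """Draw n from P(n) of Eq. (ndist): a gamma-mixed Poisson with shape k
    and scale nbar/k is negative binomial with mean nbar and parameter k."""
    n, k = nbar(M2), k_par(M2)
    lam = min(rng.gammavariate(k, n / k), 700.0)  # clamp avoids exp underflow
    return _poisson(lam, rng)

def sample_mass(rng, a=2.0, mmax=6.0):
    """Rejection sampling of P(M) ~ (M-1) exp(-a(M-1)), Eq. (mdist)."""
    fmax = math.exp(-1.0) / a  # maximum of (M-1)exp(-a(M-1)) at M = 1 + 1/a
    while True:
        M = 1.0 + rng.random() * (mmax - 1.0)
        if rng.random() * fmax <= (M - 1.0) * math.exp(-a * (M - 1.0)):
            return M

def sample_pT(M, rng, b=3.0, pmax=4.0):
    """Rejection sampling of P(pT) ~ pT exp(-b sqrt(pT^2+M^2)), Eq. (pTdist),
    under the envelope pT exp(-b pT) <= exp(-1)/b."""
    fmax = math.exp(-1.0) / b
    while True:
        p = rng.random() * pmax
        if rng.random() * fmax <= p * math.exp(-b * math.hypot(p, M)):
            return p
```

Note that, as stressed in the footnote above, $M$ fluctuates event by event, so each beam cluster draws its own $\overline{n}(M^2)$ and $k(M^2)$ before the multiplicity is sampled.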
\bigskip
\bigskip
\section{PROGRAM DESCRIPTION}
\label{sec:section3}
\bigskip
\subsection{The package VNI-3.1}
The program package VNI-3.1
is a completely newly written code, using only fragments of its predecessors
VNI-1.0 (Ref. \cite{ms0}) and VNI-2.0 (Ref. \cite{ms3}).
The current program is a
first release of a long term project that aims at unifying the simulation
of a wide class of high energy QCD processes, ranging from elementary
particle physics processes, e.g., $e^+e^-$-fragmentation, deep inelastic
$ep$-scattering, via hadronic collisions such as $pp$ ($p\bar{p}$),
to reactions involving (heavy) nuclei,
e.g. deep inelastic $eA$ scattering, $pA$, or $AA$ ($AB$) collisions.
The types of collision processes that are available
and are discussed in more detail
in Section 3.4 below, include
collisions involving
$e^\pm,\mu^\pm$ for leptons, $p,n,\pi,\Lambda,\Sigma\ldots$ for hadrons,
and
$\alpha$, ${\rm He}$, ${\rm O}$, $\ldots$,
${\rm Au}$, ${\rm Pb}$, ${\rm U}$ for nuclei.
Some of these collision processes have of course been
investigated before; here
they are revisited within the space-time picture of
parton cascades and cluster dynamics in order to provide an alternative
method for Monte Carlo simulation and also to check the consistency with
the wide literature on both experimental measurements and theoretical
predictions.
\subsection{Overall structure of the program}
The program package VNI is written entirely in Fortran 77, and should
run on any machine with such a compiler. The program VNI adopts the
common block structure, particle identification codes, etc.,
from the Monte Carlos of the Lund family, e.g.,
the JETSET/PYTHIA programs \cite{jetset}.
In addition, various program elements
of the latter are used in the sense of a library,
however modified to incorporate
the additional aspects of the space-time and color label information.
VNI also employs certain (significantly modified) parts of the
HERWIG Monte Carlo \cite{herwig}, used for the
final decays of clusters into hadronic resonances and stable particles.
It is important to stress that the correspondence of
VNI with JETSET/PYTHIA and HERWIG is rather loose and should not
lead to naive identification, in particular since VNI is tailored for a
larger set of degrees of freedom that is necessary to describe the
space-time dynamics and the information on color correlations.
Nevertheless, by truncating the additional information of the particle
record of VNI, a direct interface to the particle records of the Lund
Monte Carlo JETSET/PYTHIA, and to the program HERWIG, is actually
straightforward. Instructions are given below in Section 3.10.
\smallskip
The package VNI-3.1 consists of the following files:
\begin{description}
\item{(i)} the documentation {\bf vni-3.1.ps} (this one)
\item{(ii)} the program VNI-3.1 in 7 parts: {\bf vni1.f -- vni7.f}
\item{(iii)} two include files {\bf vni1.inc} and {\bf vni2.inc}
\item{(iv)} two example programs {\bf vnixpl1.f} and {\bf vnixpl2.f }.
\item{(v)} a simple, portable histogram package {\bf vnibook.f }.
\end{description}
The seven components {\it vni1.f} $-$ {\it vni7.f} of Fortran source code
contain all the subroutines, functions, and block data, as categorized
above. All default settings and particle data are automatically loaded
by including the two include files {\it vni1.inc} and {\it vni2.inc}.
The example programs {\it vnixpl1.f} (a generic one) and {\it vnixpl2.f}
(a more application-oriented one), as well as the histogram package
{\it vnibook.f} are not an integral part of the program, i.e. they are
distributed as supplementary, useful guiding tools.
\medskip
The program source code is organized in 7 parts, in which subroutines
and functions that are related in their performance duties, are grouped
together
(a summary list of all components with a brief description of their
purpose is given in Appendix E):
\begin{description}
\item[vni1.f:]
Main steering routines VNIXRIN (initialization), VNIXRUN
(event generation), and VNIXFIN (finalization), plus other
general-duty subroutines.
VNIXRIN and VNIXFIN are only to be called once, while
VNIXRUN
is to be called anew for each event, in which the
particle system is evolved in discrete time steps and
7-dimensional phase-space.
\item[vni2.f:]
Initialization routines. They include, e.g. the process
dependent event-by-event initialization of the chosen type
of collision process, and various other slave routines
associated with the initial set up of an event.
\item[vni3.f:]
Package for parton distributions (structure functions). A
portable and self-contained collection of routines with
a number of structure function parametrizations, plus a
possible interface to the CERN PDFLIB, plus various slave
routines.
\item[vni4.f:]
Evolution routines. These form the heart of the parton cascade/
cluster-hadronization simulation, and include
both (process-dependent)
particle evolution routines and (process-independent)
universal routines for space-like and time-like shower evolution, for
parton scatterings, parton recombination to clusters, and
cluster decays to hadrons and other final state particles.
\item[vni5.f:]
Diverse utility routines and functions performing the
slave-work for the part of perturbative parton evolution.
\item[vni6.f:]
Diverse utility routines and functions performing the
slave-work for the part of cluster fragmentation to hadrons.
\item[vni7.f:]
Event study and analysis routines for accumulating, analysing
and printing out statistics on observables.
\end{description}
\medskip
\subsection{Special features and machine dependence}
The following special features, which are not strictly common in standard
Fortran 77, are important to note:
\begin{description}
\item[1.]
The program uses a {\it machine-dependent timing routine},
called $VNICLOC$, which measures the elapsed CPU time
\footnote{
The usage of the time measurement is not
necessary for the performance of the program as a whole, although it
is a nice convenience.}.
Two alternatives are provided:
a) the $TIMEX$ routine of the CERN library, in which case the
latter must be linked with the program, or,
b) the $MCLOCK$ routine which is specific to IBM-AIX compilers.
If neither of these options is desired, or neither works in the user's
local environment, the lines involving
$TIMEX$ ($MCLOCK$) calls can easily be commented out
\footnote{
The subroutine $VNICLOC$ is listed as the very first one in {\it vni1.f}.
}.
\item[2.]
All common block variables and dimensions are defined globally within
the {\it include files vni1.inc and vni2.inc}.
This greatly simplifies for instance possible modifications
(e.g., enlarging or decreasing) of global common block arrays.
Consequently, all subroutines contain INCLUDE statements for these include
files, and therefore
when compiling and linking the program, the files
{\it vni1.inc} and {\it vni2.inc} must be properly accessible.
\item[3.]
{\it Single precision} is normally assumed throughout, since most
machines are 32 bit processors. For applications at extremely high
energies, single precision for any real variable starts to become a
problem. The main source of numerical precision-loss arises from
multiple Lorentz boosts of particles throughout the simulation.
Therefore the relevant parts of the code are written in
double precision. All double precision variables have as first
character D. As additional precaution, particle energies are recalculated
from momentum and mass after each time step during the evolution of
an event.
\item[4.]
{\it Implicit variable-type assignment} is assumed, such that variables
or functions beginning with letters I - M are of {\it integer}$\ast$4
type, and variables or functions with first letters A - H and O - Z
are of type {\it real}$\ast$4 (single precision), except for
occasional {\it real}$\ast$8 (double precision) occurrences
as stated in item 3.
\item[5.]
{\it SAVE statements} have been included in accordance with the Fortran
standard. Since most ordinary machines take SAVE for granted, this
part is not particularly well tried out, however. Users on machines
without automatic SAVE are therefore warned to be on the lookout for
any variables which may have been missed.
\end{description}
\medskip
\subsection{The main subroutines}
There is a minimum of three routines that a user must know:
\begin{description}
\item[1.] VNIXRIN for the overall initialization of a sample of
collision events,
\item[2.] VNIXRUN for the subsequent actual generation of each event,
\item[3.] VNIXFIN for finishing up simulation.
\end{description}
These three routines, which are briefly described below,
are to be called in that order by a user's program as is exemplified by the
example programs {\it vnixpl1.f} and {\it vnixpl2.f}
(see also Section 3.10).
\smallskip
\begin{verbatim}
SUBROUTINE VNIXRIN(NEVT,TFIN,FRAME,BEAM,TARGET,WIN)
\end{verbatim}
\noindent
{\it Purpose}: to initialize the overall simulation procedure.
\medskip
\begin{description}
\item[{\boldmath $NEVT$} :]
integer specifying the number of collision events to be
simulated for a selected physics process.
\item[{\boldmath $TFIN$} :]
final time (in $fm$) in the global Lorentz frame up to which each
collision event is followed.
\item[{\boldmath $FRAME$} :]
a character variable used to specify the global Lorentz frame of the
experiment. Uppercase and lowercase letters may be freely mixed.
\\
{\bf = 'CMS' :} colliding beam experiment in CM frame, with beam momentum
in +$z$ direction and target momentum in $-z$ direction.
\\
{\bf = 'FIXT':} fixed target experiment, with beam particle momentum
pointing in $+z$ direction.
\\
{\bf = 'USER' :} full freedom to specify frame by giving beam
momentum in \tt{PSYS(1,1)}, \tt{PSYS(1,2)} and \tt{PSYS(1,3)} and target
momentum in \tt{PSYS(2,1)}, \tt{PSYS(2,2)} and \tt{PSYS(2,3)} in
common block \tt{VNIREC}. Particles are assumed on the mass shell,
and energies are calculated accordingly.
\\
{\bf = 'FOUR' :} as \tt{'USER'}, except also energies should be
specified, in \tt{PSYS(1,4)} and \tt{PSYS(2,4)}, respectively. The
particles need not be on the mass shell; effective masses are
calculated from energy and momentum. (But note that numerical
precision may suffer; if you know the masses the option \tt{'FIVE'}
below is preferable.)
\\
{\bf = 'FIVE' :} as \tt{'USER'}, except also energies and masses
should be specified, i.e the full momentum information in
\tt{PSYS(1,1) - PSYS(1,5)} and \tt{PSYS(2,1) - PSYS(2,5)} should be given for
beam and target, respectively. Particles need not be on the mass
shell. Space-like virtualities should be stored as $-\sqrt{-m^2}$.
Four-momentum and mass information must match.
\item[{\boldmath $BEAM$, }] {\boldmath $TARGET$} :
character variables to specify beam and target particles.
Uppercase and lowercase letters may be freely mixed. An
antiparticle may be denoted either by "$\tilde{}$" (tilde) or
"$\bar{}$" (bar) at the end of
the name. It is also possible to leave out the charge for neutron
and proton.
\\
{\bf = 'e--' :} electron $e^-$.
\\
{\bf = 'e+' :} positron $e^+$.
\\
{\bf = 'mu--' :} muon $\mu^-$.
\\
{\bf = 'mu+' :} antimuon $\mu^+$.
\\
{\bf = 'gamma':} real photon $\gamma$ (not yet implemented).
\\
{\bf = 'pi+' :} positive pion $\pi^+$.
\\
{\bf = 'pi--' :} negative pion $\pi^-$.
\\
{\bf = 'pi0' :} neutral pion $\pi^0$.
\\
{\bf = 'n0' :} neutron $n$.
\\
{\bf = 'n{\boldmath $\tilde{}$}0' :} antineutron $\overline{n}$.
\\
{\bf = 'p+' :} proton $p$.
\\
{\bf = 'p{\boldmath $\tilde{}$}--' :} antiproton $\overline{p}$.
\\
{\bf = 'Lambda0':} $\Lambda$ baryon.
\\
{\bf = 'Sigma+' :} $\Sigma^+$ baryon.
\\
{\bf = 'Sigma--' :} $\Sigma^-$ baryon.
\\
{\bf = 'Sigma0' :} $\Sigma^0$ baryon.
\\
{\bf = 'Xi--' :} $\Xi^-$ baryon.
\\
{\bf = 'Xi0' :} $\Xi^0$ baryon.
\\
{\bf = 'Omega--':} $\Omega^-$ baryon.
\\
{\bf = 'Alpha' :} $^2_{1}\alpha$ particle.
\\
{\bf = 'He' :} $^4_2{\rm He}$ (Helium) nucleus.
\\
{\bf = 'Ox' :} $^{16}_8{\rm O}$ (Oxygen) nucleus.
\\
{\bf = 'Su' :} $^{32}_{16}{\rm S}$ (Sulfur) nucleus.
\\
{\bf = 'Ag' :} $^{108}_{47}{\rm Ag}$ (Silver) nucleus.
\\
{\bf = 'Au' :} $^{197}_{79}{\rm Au}$ (Gold) nucleus.
\\
{\bf = 'Pb' :} $^{208}_{82}{\rm Pb}$ (Lead) nucleus.
\\
{\bf = 'Ur' :} $^{238}_{92}{\rm U}$ (Uranium) nucleus.
\item[{\boldmath $WIN$} :]
related to the energy of the system (in GeV). The exact meaning depends on FRAME,
i.e. for
FRAME='CMS' it is the total energy $\sqrt{s}$ of system, and for
FRAME='FIXT' it is the absolute 3-momentum $P = \sqrt{\vec{P}^2}$ of
beam particle.
\end{description}
\bigskip
\begin{verbatim}
SUBROUTINE VNIXRUN(IEVT,TRUN)
\end{verbatim}
\medskip
\noindent
{\it Purpose}: to generate one event of the type specified by VNIXRIN.
This is the main routine, which administers the overall
run of the event generation and calls a number of other routines
for specific tasks.
\begin{description}
\item[{\boldmath $IEVT$} :]
integer labeling the current event number.
\item[{\boldmath $TRUN$} :]
time (in $fm$) $TRUN\le TFIN$ in the global Lorentz frame up to which
the space time evolution
of the current event $IEVT$ is carried out. If $TRUN < TFIN$, then
the particle record contains the history up to this time,
and repeated calls resume from $TRUN$ of the previous call
(c.f. Section 3.10).
\end{description}
\medskip
\begin{verbatim}
SUBROUTINE VNIXFIN()
\end{verbatim}
\noindent
{\it Purpose}:
to finish up the overall simulation procedure: write data
analysis results to files, evaluate the CPU time, close files, etc.
This routine must be called after the last event has been
finished (c.f. Section 3.10).
\bigskip
\bigskip
For a deeper understanding of the
physics routines and their connecting structure,
I refer to Appendix A for a brief description.
Generally, for
each of the included collision processes, there is an initialization
routine that sets up the initial state, and an evolution
routine that carries out the time evolution of the particle
distributions in phase-space, starting from the initial state.
Within the evolution routines, a number of
universal slave routines (i.e. independent of the process under consideration)
perform the perturbative parton evolution in terms of space-like branchings,
time-like branchings, and parton collisions, as well as
the cluster hadronization.
\bigskip
Finally, the program provides some useful
pre-programmed event-analysis routines which collect, analyze and print out information
on the result of a simulation, and give the user immediate access
to calculated particle data, spectra, and other observables.
A more detailed description of their duties, and how
to call them can be found in Appendix B.
\newpage
\begin{table}[ptb]
\captive{Available collision processes.
\protect\label{t:ipro} } \\
\vspace{1ex}
\begin{center}
\begin{tabular}{|rc|cl|}
\hline\hline
IPRO &&& type of collision process
\\
\hline\hline
&&& \\
1 &&& $l^+l^- \rightarrow \gamma/Z_0 \rightarrow 2-jets \rightarrow hadrons$
\\
2 &&& $l^+l^- \rightarrow Z_0 \rightarrow W^+W^- \rightarrow 4-jets \rightarrow hadrons \;\;\;\;\;\;$
\\
3 &&& $\gamma + h \rightarrow jets \rightarrow hadrons$
\\
4 &&& $\gamma + A \rightarrow jets \rightarrow hadrons$
\\
5 &&& $l + h \rightarrow l + jets + X \rightarrow hadrons$
\\
6 &&& $l + A \rightarrow l + jets + X \rightarrow hadrons$
\\
7 &&& $h + l \rightarrow l + jets + X \rightarrow hadrons$
\\
8 &&& $h + h' \rightarrow jets + X \rightarrow hadrons$
\\
9 &&& $h + A \rightarrow jets + X \rightarrow hadrons$
\\
10 &&& $A + l \rightarrow l + jets + X \rightarrow hadrons$
\\
11 &&& $A + h \rightarrow jets + X \rightarrow hadrons$
\\
12 &&& $A + A' \rightarrow jets + X \rightarrow hadrons$
\\ &&&
\\ \hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ptb]
\captive{Available beam and target particles.
\protect\label{t:bt} } \\
\vspace{1ex}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline\hline
$\;\;\;$ leptons $l$ $\;\;\;$ & $\;\;\;$
hadrons $h$ $\;\;\;$ & $\;\;\;$ nuclei $A$ $\;\;\;$
\\
\hline\hline
&& \\
$ e^\pm $ & $\pi^\pm, \pi^0$ & $^2_{1}\alpha$
\\
$ \mu^\pm $ & $n, \overline{n}$ & $^4_2{\rm He}$
\\
$ $ & $p, \overline{p}$ & $^{16}_8{\rm O}$
\\
$ $ & $\Lambda^0$ & $^{32}_{16}{\rm S}$
\\
$ $ & $\Sigma^\pm, \Sigma^0$ & $^{108}_{47}{\rm Ag}$
\\
$ $ & $\Xi^-, \Xi^0$ & $^{197}_{79}{\rm Au}$
\\
$ $ & $\Omega^-$ & $^{208}_{82}{\rm Pb}$
\\
$ $ & $ $ & $^{238}_{92}{\rm U}$
\\ &&
\\ \hline\hline
\end{tabular}
\end{center}
\end{table}
\subsection{The physics processes}
The program is structured to incorporate different classes of particle
collision processes in a modular manner. The
general classes that are part of the program are summarized in
Table 1.
All of the generic collision processes in Table 1
allow a further specification
of beam and/or target particle, e.g., $l = e, \mu$ for leptons,
$h = p, n, \pi, \ldots$ for hadrons, or $A =\alpha, He, O,
\ldots$ for nuclei. They are summarized in Table 2.
\medskip
As explained before, the
actual evolution of any beam/target-particle collision
is simulated
on the microscopic level of the system of partons, clusters and hadrons, and
their interactions. For the parton interaction processes, a selection
of elementary hard/soft $2\rightarrow 2$ scatterings and $2\rightarrow 1$
fusions are included
in VNI, and future extensions are planned. In addition
there are the standard space-like and time-like $1\rightarrow 2$ branching
processes, which, when combined with the elementary tree processes,
provide the usual parton cascade method of including
higher order, real and virtual, corrections to the
elementary tree processes. The parton-cluster
formation and hadronization scheme includes a number of 2-parton
recombination processes and the decay of the formed clusters into
final-state hadrons.
\begin{table}[ptb]
\captive{Elementary partonic subprocesses.
\protect\label{t:isub} } \\
\vspace{1ex}
\begin{center}
\begin{tabular}{|lc|cc|}
\hline\hline
$ \;\;\;$ Class & ISUB $\;\;\;\;\;$ && type of subprocess $\;\;\;\;$
\\
\hline\hline
&&& \\
a) $2 \rightarrow 2$ :
& 1 && $ q_i q_j \rightarrow q_i q_j$
\\
& 2 && $ q_i \bar{q}_i \rightarrow q_k \bar{q}_k $
\\
& 3 && $ q_i \bar{q}_i \rightarrow g g $
\\
& 4 && $ q_i \bar{q}_i \rightarrow g \gamma $
\\
& 5 && $ q_i \bar{q}_i \rightarrow \gamma \gamma $
\\
& 6 && $ q_i g \rightarrow q_i g $
\\
& 7 && $ q_i g \rightarrow q_i \gamma $
\\
& 8 && $ g g \rightarrow q_k \bar{q}_k $
\\
& 9 && $ g g \rightarrow g g $
\\
& 10 && soft scattering
\\
&&& \\
b) $2 \rightarrow 1$ :
& 11 && $ q_i \bar{q}_i \rightarrow g^\ast$
\\
& 12 && $ q_i g \rightarrow q_i^\ast$
\\
& 13 && $ g g \rightarrow g^\ast$
\\
&&& \\
c) $1 \rightarrow 2$ (space-like) :
& 21 && $ g^\ast \rightarrow q_i \bar{q}_i$
\\
& 22 && $ q_i^\ast \rightarrow q_i g$
\\
& 23 && $ g^\ast \rightarrow g g$
\\
&&& \\
d) $1 \rightarrow 2$ (time-like) :
& 31 && $ g \rightarrow q_i \bar{q}_i$
\\
& 32 && $ q_i \rightarrow q_i g$
\\
& 33 && $ g \rightarrow g g$
\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
It is possible to select a combination of partonic subprocesses to
simulate. For this purpose, all subprocesses are numbered according
to an $ISUB$ code. The list of allowed codes is given below.
In the following $g$ denotes a gluon, $q_i$ represents a quark of
flavour $i=1,\ldots,n_f$, i.e. for $n_f=6$ this translates
to $d, u, s, c, b, t$.
A corresponding antiquark
is denoted $\bar{q}_i$. The notation $\gamma$ is for a real photon, i.e. on shell.
An asterisk $\ast$ denotes an off-shell parton.
\bigskip
\subsection{The particle record}
Each newly generated event is stored in its entirety in the common block
VNIREC, which thus forms the event record. Each particle that
appears at some stage of the time evolution of the system
occupies one line in the arrays. The different components of this line
tell which particle it is, where it originates from, its
present status (fragmented/decayed or not), its momentum, energy and
mass, and the space-time position of its production vertex.
The structure of the particle record VNIREC follows closely that of the
Lund Monte Carlos \cite{jetset,ariadne,lepto},
employing the same overall classification of
particle identification, status codes, etc.,
however with important differences
concerning the color and space-time
degrees of freedom.
Note: when reference is made in the following to the switches
and parameters MSTV or PARV, these are described in Sec. 3.7 below.
\begin{verbatim}
PARAMETER (NV=100000)
COMMON/VNIREC/N,K(NV,5),L(NV,2),P(NV,5),R(NV,5),V(NV,5)
\end{verbatim}
\noindent
{\it Purpose}:
Contains the event record, i.e. the complete list of all
partons and particles in the current event.
\medskip
\begin{description}
\item[{\boldmath $N$} :] number of lines in the $K$, $L$, $P$, $R$, and $V$
arrays occupied by the current
event. $N$ is continuously updated as the definition of the original
configuration and the treatment of parton cascading, cluster fragmentation
and hadron production proceed.
In the following, the individual parton/particle number, running
between 1 and $N$, is called $I$.
The maximum $N$ is limited by the dimension $NV$ of the arrays.
\item[{\boldmath $K(I,$}1) :] status code ($KS$), which gives the current status of the
parton/particle stored in the line. The ground rule is that codes
1--10 correspond to currently existing partons/particles, while
larger codes denote partons/particles which no longer exist.
\\
{\bf = 0 :} empty line.
\\
{\bf = 1 :} an undecayed particle or an unfragmented parton.
\\
{\bf = 2 :} an unfragmented parton, which is followed by more partons in
the same color-singlet parton subsystem.
\\
{\bf = 3 :} an unfragmented parton with special color-flow information
stored in $K(I,4)$ and $K(I,5)$, such that color connected partons
need not follow after each other in the event record.
\\
{\bf = 6 :} a final state hadron resulting from decay of clusters
that formed from materialized (interacted) partons.
\\
{\bf = 7 :} a final state hadron resulting from soft cluster decay of
beam/target remnants formed from left-over (non-interacted)
initial state partons.
\\
{\bf = 11 :} a decayed particle or a fragmented parton, c.f. = 1.
\\
{\bf = 12 :} a fragmented jet, which is followed by more partons in the
same color-singlet parton subsystem, c.f. = 2.
\\
{\bf = 13 :} a parton which has been removed when special color-flow
has been used to rearrange a parton subsystem, cf. = 3.
\\
{\bf = 16 :} an unstable and removed hadron resulting from decay of clusters
that formed from materialized (interacted) partons, c.f. = 6.
\\
{\bf = 17 :} an unstable and removed hadron
resulting from soft cluster decay of
beam/target remnants formed from left-over (non-interacted)
initial state partons, c.f. = 7.
\\
{\bf {\boldmath $<$} 0 :}
Particle entries with negative status codes (e.g. -1, -6, -7) parallel
in their meaning those from above, but refer to particles
which are only virtually present and become active only after
a certain formation time, upon which their status code is set to
the appropriate positive value.
\\
\item[{\boldmath $K(I,$}2) :]
Flavor code ($KF$) for partons, hadrons and electromagnetic
particles included in the current version of VNI. The particle code
is summarized in Tables 3 -- 9. It is based on
the flavor and spin classification of particles, following the 1988
Particle Data Group numbering conventions. Nuclei are attributed
a non-standard code that is internal to VNI, given by
$KF = 1000 \cdot N_{neutrons} + N_{protons}$.
A negative $KF$ code, where existing,
always corresponds to the antiparticle of the one listed in the Tables
3 -- 9.
\item[{\boldmath $K(I,$}3) :]
line number of parent particle or jet, where known, else 0.
Note that the assignment of a particle to a given jet of a jet
system is unphysical, and what is given there is only related to
the way the event was generated.
\item[{\boldmath $K(I,$}4) :]
Special color-flow information (for internal use only) of the
form $K(I,4) =$ MSTV(2)$\cdot ICFR + ICTO$, where $ICFR$ and $ICTO$ give
the line numbers of the partons {\it from}
which the color comes and {\it to} where it goes, respectively.
\item[{\boldmath $K(I,$}5) :]
Special color-flow information (for internal use only) of
the form $K(I,5) =$ MSTV(2)$\cdot JCFR + JCTO$, where
$JCFR$ and $JCTO$ give
the line numbers of the partons {\it from} which the anticolor comes
and {\it to} where it goes, respectively.
\\
\item[{\boldmath $L(I,$}1) :]
color label ($L=1,\ldots, NC$) of a parton, where $NC$ is the number of
colors specified by the parameter MSTV(5). Is = 0 for antiquarks
and all non-colored particles.
\item[{\boldmath $L(I,$}2) :]
anticolor label ($L=1,\ldots, NC$) of a parton. Is = 0 for quarks
and all non-colored particles.
\\
\item[{\boldmath $P(I,$}1) :]
$p_x$, momentum in the $x$ direction in GeV.
\item[{\boldmath $P(I,$}2) :]
$p_y$, momentum in the $y$ direction in GeV.
\item[{\boldmath $P(I,$}3) :]
$p_z$, momentum in the $z$ direction in GeV.
\item[{\boldmath $P(I,$}4) :]
$E$, energy, in GeV.
\item[{\boldmath $P(I,$}5) :]
$m$, mass in GeV, with $m^2 = p^\mu p_\mu$. For partons with space-like virtualities,
i.e. where $Q^2 = - m^2 > 0$, its value is
$P(I,5) = -\sqrt{|m^2|} = -Q$.
\\
\item[{\boldmath $R(I,$}1) :]
current $x$ position in frame of reference in 1/GeV.
\item[{\boldmath $R(I,$}2) :]
current $y$ position in frame of reference in 1/GeV.
\item[{\boldmath $R(I,$}3) :]
current $z$ position in frame of reference in 1/GeV.
\item[{\boldmath $R(I,$}4) :]
time (in 1/GeV) of active presence in the system,
i.e. $t_{pres}= t - t_{prod}$.
Is set equal to PARW(2) at time of production $t_{prod}$ of the
particle and then increased
by PARW(2) in each timestep until the particle fragments or decays.
If $<$ 0, it gives the remaining formation time for virtually produced,
but not yet real, particles.
\item[{\boldmath $R(I,$}5) :]
number and type of interactions of a particle, and is equal to
MSTV(3)$^2\cdot NCO +$ MSTV(3)$\cdot NSB + NTB$,
where -- for partons -- $NCO$ is
the number of 2-body collisions, $NSB$ of space-like branchings, and
$NTB$ of time-like branchings, undergone up to current time. For
clusters, hadrons and unstable particles, $NSB$ is zero and $NTB$
counts the number of 2-body decays.
\\
\item[{\boldmath $V(I,$}1) :]
$x$ position of production vertex, in 1/GeV.
\item[{\boldmath $V(I,$}2) :]
$y$ position of production vertex, in 1/GeV.
\item[{\boldmath $V(I,$}3) :]
$z$ position of production vertex, in 1/GeV.
\item[{\boldmath $V(I,$}4) :]
time $t$ of production, in 1/GeV.
\item[{\boldmath $V(I,$}5) :]
encodes origin of production as
MSTV(4)$^2\cdot IMO + $MSTV(4)$\cdot ITM + OR$,
where $IMO$ is the direct mother, $ITM$ the time of production, and $OR$
the generation of particle $IP$ in a cascade tree, i.e. the genealogical
origin with respect to the production vertex of its original
`Ur'-mother. (This keeps track of the production history even if the
decayed mother has been removed, and allows one to follow the genealogy
of the system.)
\end{description}
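The packed integer fields above (e.g. the color-flow words $K(I,4)$, $K(I,5)$ and the interaction counter $R(I,5)$) can be unpacked by integer division and modulo with the bases MSTV(2) and MSTV(3). The following minimal sketch is written in Python rather than the program's Fortran, purely as an illustration; the function names are not part of VNI, and the default bases MSTV(2)=25000 and MSTV(3)=10000 from Sec. 3.7 are assumed:

```python
# Illustrative decoding of packed integer fields of the VNIREC particle
# record (function names are not part of VNI).  Default bases:
#   MSTV(2) = 25000  -- color-flow words K(I,4), K(I,5)
#   MSTV(3) = 10000  -- interaction counter R(I,5)

MSTV2, MSTV3 = 25000, 10000

def unpack_color_flow(k4, base=MSTV2):
    """K(I,4) = base*ICFR + ICTO  ->  (ICFR, ICTO)."""
    return divmod(k4, base)

def unpack_interactions(r5, base=MSTV3):
    """R(I,5) = base**2*NCO + base*NSB + NTB  ->  (NCO, NSB, NTB)."""
    nco, rest = divmod(int(r5), base * base)
    nsb, ntb = divmod(rest, base)
    return nco, nsb, ntb

# A parton whose color comes from line 7 and goes to line 42:
assert unpack_color_flow(MSTV2 * 7 + 42) == (7, 42)
# A parton with 3 collisions, 1 space-like and 2 time-like branchings:
assert unpack_interactions(MSTV3**2 * 3 + MSTV3 * 1 + 2) == (3, 1, 2)
```

The anticolor word $K(I,5)$ unpacks in exactly the same way into $(JCFR, JCTO)$.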
\medskip
\noindent
{\bf Special cases:}
\smallskip
For the simulation of collisions involving hadrons or nuclei in
the initial state, additional initial-state information on the
status and specific variables of ``primary partons'' of the incoming
hadron (nucleus) is stored as follows. Note, however, that after
the first interaction of a ``primary parton'', this initial-state
information is lost and the assignments for the components of
the arrays $K,R,V$ are the same as above.
\begin{description}
\item[{\boldmath $K(I,$}4) :]
$KF$ flavour code of nucleus from which a primary parton
originates.
\item[{\boldmath $K(I,$}5) :]
$KF$ flavour code of hadron (nucleon), which is mother of
the primary parton. (For hadrons $K(I,4)$ and $K(I,5)$ are equal).
\item[{\boldmath $R(I,$}5) :]
Is equal to $-|x|$, where $x$ is the longitudinal momentum
fraction of the mother hadron carried off by the primary parton.
\item[{\boldmath $V(I,$}4) :]
encodes location $IB$ in the particle record of a primary
sea-quark's brother, i.e. the sea-antiquark belonging to the same
vacuum polarization loop, as
MSTV(4)$^2\cdot IB + $MSTV(4)$\cdot ITM + OR$,
analogous to $V(I,5)$ above. For primary valence quarks and gluons
it is 0.
\end{description}
{\it Additional remark}: The components $K(I,3)$--$K(I,5)$ and the arrays $P, R$,
and $V$ may temporarily take on special meanings other than the above,
for some specific internal use.
\bigskip
In Section 3.10 below, a typical event listing of the
particle record is printed, which
serves as an example of the organization of the
particle record, exhibiting the information for
{\it momentum space}:
it lists the particles
contained in VNIREC at a certain point during the evolution of an event,
where $KS$ and $KF$ give the status and flavor codes, $C$ and $A$ the color and
anticolor indices, and $P$ the energy, momentum and mass in GeV
for each particle that appeared at some point in the event history.
An analogous listing for the {\it position space}
particle record (not printed here) contains the spatial
coordinates and time of active presence for each particle, as well as
the particle's rapidity and transverse momentum
with respect to the $z$-axis (jet-axis, or beam-axis).
Note: both listings
can be obtained at any point by calling the routine VNILIST,
described in Appendix B.
\begin{table}[thb]
\captive{Quark, lepton and gauge boson codes.
\protect\label{t:codeone} } \\
\vspace{1ex}
\begin{center}
\begin{tabular}{|c|c|c||c|c|c||c|c|c|@{\protect\rule{0mm}{\tablinsep}}}
\hline \hline
KF & Name & Printed & KF & Name & Printed & KF & Name & Printed \\
\hline\hline
1 & $\d$ & \ttt{d} & 11 & $\rm{e}^-$ & \ttt{e-} & 21 & $\rm{g}$ & \ttt{g} \\
2 & $\u$ & \ttt{u} & 12 & $\nu_{\rm{e}}$ & \ttt{nu\_e} & 22 & $\gamma$ & \ttt{gamma} \\
3 & $\rm{s}$ & \ttt{s} & 13 & $\mu^-$ & \ttt{mu-} & 23 & $\rm{Z}^0$ & \ttt{Z0} \\
4 & $\c$ & \ttt{c} & 14 & $\nu_{\mu}$ & \ttt{nu\_mu} & 24 & $\rm{W}^+$ & \ttt{W+} \\
5 & $\b$ & \ttt{b} & 15 & $\tau^-$ & \ttt{tau-} & 25 & $\H^0$ & \ttt{H0} \\
6 & $\t$ & \ttt{t} & 16 & $\nu_{\tau}$ & \ttt{nu\_tau} & 26 & & \\
7 & & & 17 & & & 27 & & \\
8 & & & 18 & & & 28 & & \\
9 & & & 19 & & & 29 & &\\
10 & & & 20 & & & 30 & &\\
\hline \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[bhp]
\captive{Special codes.
\protect\label{t:codetwo} } \\
\vspace{1ex}
\begin{center}
\begin{tabular}{|c|c|c|@{\protect\rule{0mm}{\tablinsep}}}
\hline \hline
KF & Printed & Meaning \\
\hline \hline
90 & \ttt{CMsoft} & Center-of-mass of beam/target remnant system before
soft fragmentation \\
91 & \ttt{cluster} & Cluster from coalescence of secondary partons \\
92 & \ttt{Beam-REM} & Cluster from remnant primary partons of beam particle\\
93 & \ttt{Targ-REM} & Cluster from remnant primary partons of target particle\\
94 & \ttt{CMshower} & Four-momentum of time-like showering system \\
95 & \ttt{SPHEaxis} & Event axis found with \ttt{VNISPHE} \\
96 & \ttt{THRUaxis} & Event axis found with \ttt{VNITHRU} \\
97 & \ttt{CLUSjet} & Jet (cluster) found with \ttt{VNICLUS} \\
98 & \ttt{CELLjet} & Jet (cluster) found with \ttt{VNICELL} \\
99 & & \\
100 & & \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
\newpage
{\footnotesize
\begin{table}[thb]
\captive{Meson codes, part 1.
\protect\label{t:codethree} } \\
\vspace{1ex}
\begin{center}
\begin{tabular}{|c|c|c||c|c|c|@{\protect\rule{0mm}{\tablinsep}}}
\hline \hline
KF & Name & Printed & KF & Name & Printed \\
\hline \hline
211 & $\pi^+$ & \ttt{pi+} & 213 & $\rho^+$ & \ttt{rho+} \\
311 & $\rm{K}^0$ & \ttt{K0} & 313 & $\rm{K}^{*0}$ & \ttt{K*0} \\
321 & $\rm{K}^+$ & \ttt{K+} & 323 & $\rm{K}^{*+}$ & \ttt{K*+} \\
411 & $\rm{D}^+$ & \ttt{D+} & 413 & $\rm{D}^{*+}$ & \ttt{D*+} \\
421 & $\rm{D}^0$ & \ttt{D0} & 423 & $\rm{D}^{*0}$ & \ttt{D*0} \\
431 & $\rm{D}_s^+$ & \ttt{D\_s+} &
433 & $\rm{D}_{\rm{s}}^{*+}$ & \ttt{D*\_s+} \\
511 & $\rm{B}^0$ & \ttt{B0} & 513 & $\rm{B}^{*0}$ & \ttt{B*0} \\
521 & $\rm{B}^+$ & \ttt{B+} & 523 & $\rm{B}^{*+}$ & \ttt{B*+} \\
531 & $\rm{B}_s^0$ & \ttt{B\_s0} &
533 & $\rm{B}_{\rm{s}}^{*0}$ & \ttt{B*\_s0} \\
541 & $\rm{B}_c^+$ & \ttt{B\_c+} &
543 & $\rm{B}_{\c}^{*+}$ & \ttt{B*\_c+} \\
111 & $\pi^0$ & \ttt{pi0} & 113 & $\rho^0$ & \ttt{rho0} \\
221 & $\eta$ & \ttt{eta} & 223 & $\omega$ & \ttt{omega} \\
331 & $\eta'$ & \ttt{eta'} & 333 & $\phi$ & \ttt{phi} \\
441 & $\eta_{\c}$ & \ttt{eta\_c} & 443 & $\rm{J}/\psi$ & \ttt{J/psi} \\
551 & $\eta_{\b}$ & \ttt{eta\_b} &
553 & $\Upsilon$ & \ttt{Upsilon} \\
661 & $\eta_{\t}$ & \ttt{eta\_t} & 663 & $\Theta$ & \ttt{Theta} \\
130 & $\rm{K}_{\mrm{L}}^0$ & \ttt{K\_L0} & & & \\
310 & $\rm{K}_{\mrm{S}}^0$ & \ttt{K\_S0} & & & \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
}
\vspace{-0.75cm}
{\footnotesize
\begin{table}[thb]
\captive{Meson codes, part 2.
\protect\label{t:codefour} } \\
\vspace{1ex}
\begin{center}
\begin{tabular}{|c|c|c||c|c|c|@{\protect\rule{0mm}{\tablinsep}}}
\hline \hline
KF & Name & Printed & KF & Name & Printed \\
\hline \hline
10213 & $\b_1$ & \ttt{b\_1+} &
10211 & $\a_0^+$ & \ttt{a\_0+} \\
10313 & $\rm{K}_1^0$ & \ttt{K\_10} &
10311 & $\rm{K}_0^{*0}$ & \ttt{K*\_00} \\
10323 & $\rm{K}_1^+$ & \ttt{K\_1+} &
10321 & $\rm{K}_0^{*+}$ & \ttt{K*\_0+} \\
10413 & $\rm{D}_1^+$ & \ttt{D\_1+} &
10411 & $\rm{D}_0^{*+}$ & \ttt{D*\_0+} \\
10423 & $\rm{D}_1^0$ & \ttt{D\_10} &
10421 & $\rm{D}_0^{*0}$ & \ttt{D*\_00} \\
10433 & $\rm{D}_{1 \rm{s}}^+$ & \ttt{D\_1s+} &
10431 & $\rm{D}_{0 \rm{s}}^{*+}$ & \ttt{D*\_0s+} \\
10113 & $\b_1^0$ & \ttt{b\_10} &
10111 & $\a_0^0$ & \ttt{a\_00} \\
10223 & $\rm{h}_1^0$ & \ttt{h\_10} &
10221 & $\rm{f}_0^0$ & \ttt{f\_00} \\
10333 & $\rm{h}'^0_1$ & \ttt{h'\_10} &
10331 & $\rm{f}'^0_0$ & \ttt{f'\_00} \\
10443 & $\rm{h}_{1 \c}^0$ & \ttt{h\_1c0} &
10441 & $\chi_{0 \c}^0$ & \ttt{chi\_0c0} \\ \hline
20213 & $\a_1^+$ & \ttt{a\_1+} &
215 & $\a_2^+$ & \ttt{a\_2+} \\
20313 & $\rm{K}_1^{*0}$ & \ttt{K*\_10} &
315 & $\rm{K}_2^{*0}$ & \ttt{K*\_20} \\
20323 & $\rm{K}_1^{*+}$ & \ttt{K*\_1+} &
325 & $\rm{K}_2^{*+}$ & \ttt{K*\_2+} \\
20413 & $\rm{D}_1^{*+}$ & \ttt{D*\_1+} &
415 & $\rm{D}_2^{*+}$ & \ttt{D*\_2+} \\
20423 & $\rm{D}_1^{*0}$ & \ttt{D*\_10} &
425 & $\rm{D}_2^{*0}$ & \ttt{D*\_20} \\
20433 & $\rm{D}_{1 \rm{s}}^{*+}$ & \ttt{D*\_1s+} &
435 & $\rm{D}_{2 \rm{s}}^{*+}$ & \ttt{D*\_2s+} \\
20113 & $\a_1^0$ & \ttt{a\_10} &
115 & $\a_2^0$ & \ttt{a\_20} \\
20223 & $\rm{f}_1^0$ & \ttt{f\_10} &
225 & $\rm{f}_2^0$ & \ttt{f\_20} \\
20333 & $\rm{f}'^0_1$ & \ttt{f'\_10} &
335 & $\rm{f}'^0_2$ & \ttt{f'\_20} \\
20443 & $\chi_{1 \c}^0$ & \ttt{chi\_1c0} &
445 & $\chi_{2 \c}^0$ & \ttt{chi\_2c0} \\ \hline
30443 & $\psi'$ & \ttt{psi'} & & & \\
30553 & $\Upsilon'$ & \ttt{Upsilon'} & & & \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
}
\begin{table}[thp]
\captive{Baryon codes.
\protect\label{t:codefive} } \\
\vspace{1ex}
\begin{center}
\begin{tabular}{|c|c|c||c|c|c|@{\protect\rule{0mm}{\tablinsep}}}
\hline \hline
KF & Name & Printed & KF & Name & Printed \\
\hline \hline
& & &
1114 & $\Delta^-$ & \ttt{Delta-} \\
2112 & $\rm{n}$ & \ttt{n0} &
2114 & $\Delta^0$ & \ttt{Delta0} \\
2212 & $\rm{p}$ & \ttt{p+} &
2214 & $\Delta^+$ & \ttt{Delta+} \\
& & &
2224 & $\Delta^{++}$ & \ttt{Delta++} \\
3112 & $\Sigma^-$ & \ttt{Sigma-} &
3114 & $\Sigma^{*-}$ & \ttt{Sigma*-} \\
3122 & $\Lambda^0$ & \ttt{Lambda0} & & & \\
3212 & $\Sigma^0$ & \ttt{Sigma0} &
3214 & $\Sigma^{*0}$ & \ttt{Sigma*0} \\
3222 & $\Sigma^+$ & \ttt{Sigma+} &
3224 & $\Sigma^{*+}$ & \ttt{Sigma*+} \\
3312 & $\Xi^-$ & \ttt{Xi-} &
3314 & $\Xi^{*-}$ & \ttt{Xi*-} \\
3322 & $\Xi^0$ & \ttt{Xi0} &
3324 & $\Xi^{*0}$ & \ttt{Xi*0} \\
& & &
3334 & $\Omega^-$ & \ttt{Omega-} \\
4112 & $\Sigma_{\c}^0$ & \ttt{Sigma\_c0} &
4114 & $\Sigma_{\c}^{*0}$ & \ttt{Sigma*\_c0} \\
4122 & $\Lambda_{\c}^+$ & \ttt{Lambda\_c+} & & & \\
4212 & $\Sigma_{\c}^+$ & \ttt{Sigma\_c+} &
4214 & $\Sigma_{\c}^{*+}$ & \ttt{Sigma*\_c+} \\
4222 & $\Sigma_{\c}^{++}$ & \ttt{Sigma\_c++} &
4224 & $\Sigma_{\c}^{*++}$ & \ttt{Sigma*\_c++} \\
4132 & $\Xi_{\c}^0$ & \ttt{Xi\_c0} & & & \\
4312 & $\Xi'^0_{\c}$ & \ttt{Xi'\_c0} &
4314 & $\Xi_{\c}^{*0}$ & \ttt{Xi*\_c0} \\
4232 & $\Xi_{\c}^+$ & \ttt{Xi\_c+} & & & \\
4322 & $\Xi'^+_{\c}$ & \ttt{Xi'\_c+} &
4324 & $\Xi_{\c}^{*+}$ & \ttt{Xi*\_c+} \\
4332 & $\Omega_{\c}^0$ & \ttt{Omega\_c0} &
4334 & $\Omega_{\c}^{*0}$ & \ttt{Omega*\_c0} \\
5112 & $\Sigma_{\b}^-$ & \ttt{Sigma\_b-} &
5114 & $\Sigma_{\b}^{*-}$ & \ttt{Sigma*\_b-} \\
5122 & $\Lambda_{\b}^0$ & \ttt{Lambda\_b0} & & & \\
5212 & $\Sigma_{\b}^0$ &\ttt{Sigma\_b0} &
5214 & $\Sigma_{\b}^{*0}$ & \ttt{Sigma*\_b0} \\
5222 & $\Sigma_{\b}^+$ & \ttt{Sigma\_b+} &
5224 & $\Sigma_{\b}^{*+}$ & \ttt{Sigma*\_b+} \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
\vspace{-0.5cm}
\begin{table}[bhp]
\captive{Nucleus codes.
\protect\label{t:codesix} } \\
\vspace{1ex}
\begin{center}
\begin{tabular}{|c|c|c||c|c|c|@{\protect\rule{0mm}{\tablinsep}}}
\hline \hline
KF & Name & Printed & KF & Name & Printed \\
\hline \hline
& & & & &\\
1001 & $^2_1\alpha$ & \ttt{alpha} & 61047 & $^{108}_{47}\rm{A}\rm{g}$ & \ttt{ag} \\
& & & & &\\
2002 & $^4_2\H\rm{e}$ & \ttt{he} & 118079 & $^{197}_{79}\rm{A}\u$ & \ttt{au} \\
& & & & &\\
8008 & $^{16}_8{\rm O}$ & \ttt{ox} & 126082 & $^{208}_{82}{\rm P}\b$ & \ttt{pb} \\
& & & & &\\
16016 & $^{32}_{16}{\rm S}$ & \ttt{su} & 146092 & $^{238}_{92}{\rm U}$ & \ttt{ur} \\
& & & & &\\
\hline \hline
\end{tabular}
\end{center}
\end{table}
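The internal nucleus codes in this table follow the convention $KF = 1000 \cdot N_{neutrons} + N_{protons}$ given under $K(I,2)$ above. A minimal sketch in Python (purely illustrative; the helper names are not part of VNI) shows the encoding and decoding:

```python
# Illustrative helpers for VNI's internal (non-PDG) nucleus code,
# KF = 1000*N_neutrons + N_protons (helper names are not part of VNI).

def nucleus_kf(n_neutrons, n_protons):
    """Encode a nucleus into VNI's internal KF code."""
    return 1000 * n_neutrons + n_protons

def decode_nucleus_kf(kf):
    """Decode KF into (N_neutrons, N_protons, mass number A)."""
    n_neutrons, n_protons = divmod(kf, 1000)
    return n_neutrons, n_protons, n_neutrons + n_protons

# Entries from the nucleus-code table: 208-Pb, 238-U, 32-S
assert nucleus_kf(126, 82) == 126082
assert decode_nucleus_kf(126082) == (126, 82, 208)
assert decode_nucleus_kf(146092) == (146, 92, 238)
assert decode_nucleus_kf(16016) == (16, 16, 32)
```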
\newpage
\subsection{The general input and control parameters}
\label{sec:section5}
\bigskip
The VNIPAR common block contains the status code and parameters
that regulate the performance of the program. All of them are
provided with sensible default values, so that a novice user can
neglect them, and only gradually explore the full range of
possibilities.
\begin{verbatim}
COMMON/VNIPAR/MSTV(200),PARV(200),MSTW(200),PARW(200)
\end{verbatim}
\noindent
{\it Purpose:} to give access to status code and parameters that
regulate the performance of the program. If the default values,
denoted below by (D=\ldots), are not satisfactory, they must in
general be changed before the VNIXRIN call. Exceptions, i.e.
variables that can be changed for each new event, are denoted by
(C).
\bigskip
\bigskip
\begin{center}
{\bf MSTV(200), PARV(200): control switches and physics parameters}
\end{center}
\bigskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent{\bf 1. General:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTV(1)] : (D=100000) number of lines available in the common block
VNIREC. Should always be changed if the dimensions of the
$K,P,R,V$ arrays are changed.
\item[ MSTV(2)] : (D=25000) is used to trace internal color flow information
in the array $K$.
\item[ MSTV(3)] : (D=10000) is used to store number and type of interactions
of a particle in the array $R$ of VNIREC.
\item[ MSTV(4)] : (D=1000) is used to trace origin of a particle's production
in the array $V$ of VNIREC.
\item[ MSTV(5)] : (D=3) number of colors $N_c$ included in the simulation.
\item[ MSTV(6)] : (D=6) number of quark flavors $N_f$ included.
\item[ MSTV(7)] : (D=2) energy and momentum conservation options. Since
both the initial generation of initial hadronic or nuclear parton
system and the final cluster-hadron decay scheme require occasional
reshuffling of 4-momentum, an energy and/or momentum imbalance can
occur. This can be corrected, if desired, by renormalizing particle
energies and momenta.\\
{\bf = 0 :} no explicit conservation of any kind.\\
{\bf = 1 :} particles share energy imbalance compensation according
to their energy (roughly equivalent to boosting event to CM
frame).\\
{\bf = 2 :} as =1, plus particles share 3-momentum imbalance
compensation according to their longitudinal mass with respect to
the imbalance direction.
\item[ MSTV(8)] : (D=0) type of particles to be included in energy
and momentum conservation options under MSTV(7).\\
{\bf = 0 :} all living particles are included.\\
{\bf = 1 :} only partons.\\
{\bf = 2 :} only hadrons.
\item[ MSTV(9)] : - not used -
\item[ MSTV(10)] : (D=111111) initial seed for random number generation.
\item[ MSTV(11)] : (D=250000) maximum number of collision events $A+B$,
after which program is forcibly terminated.
\item[ MSTV(12)] : (D=2500) maximum number of time steps per collision
event, after which a new collision event is generated. When it is
increased, the dimensions of the arrays in the common block VNITIM
need to be altered accordingly.
\item[ MSTV(13)] : (D=0) choice of time grid $TIME(I)$ and time increase
$TINC(I)$. Generically, $t(i)=t(i-1)+[f(i)-f(i-1)]$, with
$i=0,\ldots,MSTV(12)$, and $t(0)=t_0=$PARV(12), $t_f=$PARV(13). \\
{\bf = 0 :} power law increase $f(i)=a\cdot(i)^b$, where $a=$PARV(14) and
$b=$PARV(15). \\
{\bf = 1 :} logarithmic increase $f(i)=a\cdot \ln(i)^b$, where $a=$PARV(14),
$b=$PARV(15).
\item[ MSTV(14)] : (D=0) Option to select or switch on/off certain physics
processes $ISUB$ in the simulation, by specifying the array $MSUB$
(c.f. Sec. 3.8 below), where $MSUB(ISUB) = 0\; (1)$ switches a
subprocess off (on).\\
{\bf = 0 :} the relevant processes are initialized automatically.\\
{\bf = 1 :} user selection of included processes is required.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARV(12)] : (D=0. fm) choice of initial point of time $t_i =$ PARV(12)
when evolution of collision event begins.
\item[ PARV(13)] : (D=0. fm) choice of final point of time $t_f =$ PARV(13).
If = 0., then it is set automatically to $TIME$(MSTW(2)), where MSTW(2)
is the selected number of time steps. Otherwise, if $> 0.$, it specifies
$t_f$ such that the minimum $\min(t_f , t_{max})$ is taken as the point when the
evolution of a collision event stops, where $t_{max} = TIME$(MSTV(12)).
\item[ PARV(14)] : (D=0.05) prefactor $a$ of the time-increment function $f$, see
MSTV(13).
\item[ PARV(15)] : (D=1.) exponent $b$ of the time-increment function $f$, see
MSTV(13). For longer collision time scales, as in hadron-nucleus
or nucleus-nucleus collisions, it is automatically scaled from
PARV(15) to 1.5 $\times$ PARV(15) to extend the time range.
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
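The time grid defined by MSTV(13) and PARV(12)--PARV(15) can be written out explicitly: since the increments telescope, $t(i) = t_0 + f(i) - f(0)$. The following minimal sketch (illustrative Python, not VNI code; $f(0)$ for the logarithmic option is taken as 0 by assumption) constructs the grid for both increment options:

```python
# Illustrative construction of the time grid TIME(I) (not VNI code):
#   t(i) = t(i-1) + [f(i) - f(i-1)],  t(0) = t0 = PARV(12),
#   MSTV(13) = 0: f(i) = a*i**b      (power-law increase)
#   MSTV(13) = 1: f(i) = a*ln(i)**b  (logarithmic increase; f(0) taken
#                                     as 0 here by assumption)
# Defaults a = PARV(14) = 0.05, b = PARV(15) = 1 give uniform 0.05 steps.
import math

def time_grid(n_steps, t0=0.0, a=0.05, b=1.0, log_increase=False):
    if log_increase:
        f = lambda i: a * math.log(i) ** b if i > 0 else 0.0
    else:
        f = lambda i: a * i ** b
    times = [t0]
    for i in range(1, n_steps + 1):
        times.append(times[-1] + f(i) - f(i - 1))
    return times

grid = time_grid(4)  # default power law: uniform steps of 0.05
assert all(abs(t - 0.05 * i) < 1e-12 for i, t in enumerate(grid))
```

With the defaults this reproduces a linear grid; larger PARV(15) makes the steps grow with time, which is what the automatic rescaling for hadron-nucleus and nucleus-nucleus collisions exploits.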
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 2. Initial state of collision system:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTV(20)] : (D=0) choice of model for initial parton distributions. \\
{\bf = 0 :} standard structure functions are used.\\
{\bf = 1 :} other parametrization, e.g. \`a la McLerran {\it et al.}
(not implemented yet).
\item[ MSTV(21)] : (D=1) choice of nucleon structure-function
set (c.f. MSTV(22)). \\
{\bf = 1 :} GRV94LO (Lowest order fit).\\
{\bf = 2 :} GRV94HO (Higher order: MSbar fit).\\
{\bf = 3 :} GRV94DI (Higher order: DIS fit).\\
{\bf = 4 :} - not used - \\
{\bf = 5 :} CTEQ2M (best higher order MSbar fit).\\
{\bf = 6 :} CTEQ2MS (singular at small $x$).\\
{\bf = 7 :} CTEQ2MF (flat at small $x$).\\
{\bf = 8 :} CTEQ2ML (large $\Lambda$).\\
{\bf = 9 :} CTEQ2L (best lowest order fit).\\
{\bf = 10 :} CTEQ2D (best higher DIS order fit).
\item[ MSTV(22)] : (D=1) choice of nucleon structure-function library. \\
{\bf = 1 :} the internal one with parton distributions chosen according
to MSTV(21).\\
{\bf = 2 :} the PDFLIB one with MSTV(21)=$1000\cdot NGROUP+NSET$
(requires PDFLIB to be linked).
\item[ MSTV(23)] : (D=1) choice of pion structure-function set (c.f. MSTV(24)).\\
{\bf = 1 :} Owens set 1.\\
{\bf = 2 :} Owens set 2.\\
\item[ MSTV(24)] : (D=1) choice of pion structure-function library. \\
{\bf = 1 :} the internal one with parton distributions chosen according
to MSTV(23).\\
{\bf = 2 :} the PDFLIB one with $MSTV(23)=1000\cdot NGROUP+NSET$
(requires PDFLIB to be linked).
\item[ MSTV(25)] : (D=2) choice of minimum $Q$-value $Q_{0}$ used
in parton distributions of initial hadronic or nuclear state.\\
{\bf = 0 :} fixed at $Q_{0}=$PARV(23).\\
{\bf = 1 :} determined by the average momentum transfer of primary parton
collisions $Q_0=\langle Q_{prim}\rangle$,
with initial value $Q_{0}=$PARV(23) before the first event.\\
{\bf = 2 :} as =1, but in addition varying with total energy $\sqrt{s}$ of
the overall collision system, with
$\widetilde{Q}_{0} = \max[Q_0, \frac{a}{4} \cdot \left(E_h/GeV\right)^b]$,
where $Q_0=$PARV(23), $a=$PARV(24), $b=$PARV(25), and
$E_h=\sqrt{s}/\mbox{hadron}$ the average energy per hadron (nucleon).
\item[ MSTV(26)] : (D=1) option to choose $x$-dependent $Q_{in}$
in parton densities. \\
{\bf = 0 :} switched off, $Q_{0}$ according to selection of MSTV(25)
is used.\\
{\bf = 1 :} varying with Bjorken-$x$ of struck initial state partons, as
well as with total energy $\sqrt{s}$ of collision system, with
$Q_{0}=Q(x)$ where $1/Q^2(x) = 1/Q_{0}^2+Q_{0}^2/(x E_{h})^2$, and
$E_h=\sqrt{s}/\mbox{hadron}$ the average energy per hadron (nucleon).
\item[ MSTV(31)] : (D=0) initial separation along the collision ($z$-)axis of
beam and target particle in the overall CM frame.\\
{\bf = 0 :} fixed at $\Delta z=$PARV(31).\\
{\bf$>$ 0 :} shifted towards minimal separation, such that beam and
target particles almost touch with $\Delta z =$MSTV(31)$\cdot$PARV(31).
\item[ MSTV(32)] : (D=2) selection of impact parameter $b$ transverse to the
collision ($z$-) axis of beam and target particle.\\
{\bf = 0 :} fixed at $b =$ Max[PARV(32),PARV(33)].\\
{\bf = 1 :} randomly sampled in between $b_{min}\le b\le b_{max}$, with
$b_{min} = $PARV(32) and $b_{max} = $PARV(33).\\
{\bf = 2 :} as 1, but with $b_{min} =$ PARV(32)$\cdot\sqrt{\sigma_{nd}/\pi}$,
and $b_{max} =$ PARV(33)$\cdot\sqrt{\sigma_{nd}/\pi}$,
where $\sigma_{nd}$ is the non-diffractive cross section for the given
beam/target and beam energy.
\item[ MSTV(33)] : (D=2) Choice of nuclear matter distribution for collisions
involving one or two nuclei, i.e. of nucleons' positions within nucleus.\\
{\bf = 0 :} uniform distribution with `sharp edge'.\\
{\bf = 1 :} Gaussian distribution for light nuclei $A\le 12$,
and `smeared edge' distribution for intermediate and heavy nuclei
$A > 12$ (c.f. PARV(27)).\\
{\bf = 2 :} Gaussian distribution for $A\le 12$, and Fermi distribution for
$A > 12$ (c.f. PARV(28)).
\item[ MSTV(34)] : (D=1) spatial distribution of initial partons within their
parent hadrons of beam and target particles, in the hadron restframe
within a sphere of the hadron radius.\\
{\bf = 0 :} uniform distribution.\\
{\bf = 1 :} exponential distribution with width given by PARV(34).\\
{\bf = 2 :} Gaussian distribution with width given by PARV(35).
\item[ MSTV(35)] : (D=2) Lorentz-boosted spatial distribution of initial partons.\\
{\bf = 0 :} spatial positions of all partons (valence, sea, glue) inside
parent hadron are Lorentz-boosted with the parent hadron's (nucleon's)
$\gamma$-factor.\\
{\bf = 1 :} only valence quarks are boosted with the parent hadron's
(nucleon's)
$\gamma$-factor; sea quarks and gluons are smeared out symmetrically(!)
in hadron (nucleon) direction of motion around the valence quark disc. \\
{\bf = 2 :} as = 1, but now sea quarks and gluons are
distributed all behind(!) the valence quark disc.
\item[ MSTV(36)] : (D=1) choice of primordial $k_\perp$-distribution of
initial partons.\\
{\bf = 0 :} no primordial $k_\perp$.\\
{\bf = 1 :} exponential $k_\perp$-distribution with width
specified by PARV(37).\\
{\bf = 2 :} Gaussian $k_\perp$-distribution with width specified by PARV(38).
\item[ MSTV(37)] : (D=1) choice of masses of initial partons.\\
{\bf = 0 :} on mass shell with quark current masses.\\
{\bf = 1 :} off mass shell with space-like virtuality as determined by
the excess of the partons' momentum over their energies.
\item[ MSTV(38)] : (D=0) switch for including effects of parton shadowing in
nuclear parton structure functions by multiplication with effective
shadowing factor $R_A(x)=f_A(x)/f_N(x)$ as a simple parametrization
of EMC data, where $f_A$ and $f_N$ are the measured nuclear and nucleon
structure functions, respectively.\\
{\bf = 0 :} parton shadowing switched off.\\
{\bf = 1 :} switched on.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARV(21)] : (D=1. GeV$^{-1}$) the minimum resolvable partonic
Bjorken-$x$, denoted $x_c$,
is set by the `confinement radius' $R_c = $PARV(21), which defines the
maximum possible longitudinal spread of a confined parton within
$\Delta z = 1/(xP) < 1/(x_c P) \equiv R_c$, where $P$ is the longitudinal
momentum of the mother hadron.
\item[ PARV(22)] : (D=0.95) the minimum
summed $x$-value $\left(\sum_i x_i\right)_{min}$,
when `filling' a hadron with partons of energy(momentum) fractions
$x_i$ of the hadron. The value depends slightly on the value of $Q_0$
(c.f. PARV(23)-PARV(25)), or if cuts on $x$ are made. It should be
adjusted such that when adding one more parton, the average
$\langle \sum_i x_i \rangle$
should be = 1. If the sum is smaller than PARV(22), another parton
is added to the hadron, i.e. PARV(22) = $1 - \langle \sum_i x_i \rangle$.
\item[ PARV(23)] : (D=1. GeV) minimum value $Q_0$ that is used in initial parton
distributions (structure functions or alternative distributions).
I.e., if $Q < Q_0$, then the value PARV(23) is used.
\item[ PARV(24)], {\bf PARV(25)} : (D=2.50,0.25)
possibility of using effective, energy-dependent
$Q_{in}$
value in structure functions if MSTV(25)=2, according to the function
$\widetilde{Q}_{0} = \max[Q_0,\frac{a}{4}\cdot \left(E_h/GeV\right)^b]$,
where $Q_0 = $PARV(23), $a = $PARV(24),
$b= $ PARV(25), and
$E_h=\sqrt{s}/\mbox{hadron}$ the average energy per hadron (nucleon).
\item[ PARV(26)] : (D=1.18 fm) nuclear radius parameter $r_0$, with
$R(A) = r_0 A^{1/3}$.
\item[ PARV(27)] : (D=3.0 fm) parameter $R_d$ in `smeared-edge' distribution of
nucleons' positions in nucleus with $A > 12$ (for MSTV(33) = 1).
\item[ PARV(28)] : (D=0.545 fm) parameter $R_a$ in Fermi distribution of
nucleons' positions in nucleus with $A > 12$ (for MSTV(33) = 2).
\item[ PARV(31)] : (D=0.25 fm) initial separation along the collision ($z$-)axis
of beam and target particle in the overall CM frame for MSTV(31)$\ge$ 1.
\item[ PARV(32)] : (D=0. fm) determines
value of {\it minimum} impact parameter $b_{min}$,
transverse to the collision ($z$-)axis, such that
$b_{min} = $PARV(32) for MSTV(32) = 0 or 1,
or $b_{min} = $PARV(32)$\cdot\sqrt{\sigma_{nd}/\pi}$ for MSTV(32)=2.
\item[ PARV(33)] : (D=1. fm) determines
value of {\it maximum} impact parameter $b_{max}$,
transverse to the collision ($z$-)axis, such that
$b_{max} = $PARV(33) for MSTV(32) = 0 or 1,
or $b_{max} = $PARV(33)$\cdot\sqrt{\sigma_{nd}/\pi}$ for MSTV(32)=2.
\item[ PARV(34)] : (D=1.19 GeV$^{-1}$) width $w$ in exponential distribution
$\exp[-r/w]$ of partons' positions around the parent-hadron center-of-mass
(c.f. MSTV(33)).
(Note: $w=1/\nu$, where $\nu=0.84$ GeV is the measured nucleon form factor.)
\item[ PARV(35)] : (D=2.38 GeV$^{-1}$) sigma $s$ in Gaussian distribution
$\exp[-r^2/(2\,s^2)]$ of
partons' positions around the parent-hadron center-of-mass (c.f. MSTV(33)).
(Note: $s=2/\nu$, where $\nu=0.84$ GeV is the measured nucleon form factor.)
\item[ PARV(36)] : (D=2.) cut-off for sampling initial partons' spatial
distributions
from exponential or Gaussian above, such that $r < a\,R_h$, where
$a=$PARV(36) and $R_h$ is the radius of the mother hadron.
\item[ PARV(37)] : (D=0.44 GeV) width $w$ in exponential distribution
$\exp[-k_\perp/w]$ of primordial $k_\perp$ distribution of initial partons
(c.f. MSTV(36)).
\item[ PARV(38)] : (D=0.44 GeV) sigma $s$ in Gaussian distribution
$\exp[-k_\perp^2/(2\,s^2)]$ of primordial $k_\perp$ distribution of
initial partons (c.f. MSTV(36)).
\item[ PARV(39)] : (D=2. GeV) upper cut-off for sampling primordial
$k_\perp$ from above exponential or Gaussian distribution.
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
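The interplay of the primordial-$k_\perp$ settings MSTV(36) and PARV(37)-PARV(39) can be made concrete with a short Python sketch. This is illustrative only, not part of the VNI Fortran code; the function and argument names are ours, and the defaults are the values quoted above. It draws $k_\perp$ from either the exponential or the Gaussian distribution and applies the upper cut-off by rejection:

```python
import math
import random

def sample_kt(mode, w=0.44, s=0.44, k_max=2.0):
    """Return one primordial k_perp in GeV; mode 1 = exponential (PARV(37)),
    mode 2 = Gaussian (PARV(38)); k_max is the cut-off PARV(39)."""
    while True:
        if mode == 1:
            # inverse CDF of the normalized exponential exp(-k/w)/w
            k = -w * math.log(1.0 - random.random())
        else:
            # |N(0,s)| has density proportional to exp(-k^2/(2 s^2)) for k >= 0
            k = abs(random.gauss(0.0, s))
        if k < k_max:  # reject values above the upper cut-off
            return k
```

With the default widths, the average sampled $k_\perp$ is of order 0.4 GeV, i.e. of the size of the intrinsic transverse-momentum smearing usually attributed to confined partons.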
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 3. Parton scatterings:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTV(40)] : (D=3) master switch for parton collisions. \\
{\bf = 0 :} generally switched off.
\\
{\bf = 1 :} only primary-primary collisions, i.e. those involving
2 partons that both have not interacted before.
\\
{\bf = 2 :} as = 1 plus primary-secondary collisions, i.e. those involving
2 partons, of which one has had already at least one previous interaction.
\\
{\bf = 3 :} as = 2 plus secondary-secondary interactions, i.e.
all multiple collisions included.
\item[ MSTV(41)] : (D=1) choice of (semi)hard QCD collision cross-sections.
\\
{\bf = 1 :} standard $O(\alpha_s^2)$ perturbative cross-sections for
$2 \rightarrow 2$ and $2 \rightarrow 1$ processes.
\\
{\bf = 2 :} alternative of $2 \rightarrow n$ cross-sections
(not implemented yet!).
\item[ MSTV(42)] : (D=1) choice of $p_{0}$
to separate {\it hard} and {\it soft} parton collisions, and to act
as a regulator for the parton cross-sections $d\hat{\sigma}/dQ^2$,
that are divergent
for vanishing momentum transfer (or invariant mass) $Q$.
\\
{\bf = 0 :} fixed at
$p_0 = \max \left[ PARV(42), \max (VKIN(3),VKIN(5)) \right]$.
\\
{\bf = 1 :} energy and mass dependent, with $p_0 = p_0(s,A)$,
according to the parametrization given by PARV(42), PARV(43).
\\
{\bf = 2 :} initially, before the first event, set to
$p_0(s,A)$, c.f. =1,
but in subsequent events set equal to the average momentum transfer
of primary parton-parton collisions
$p_0 = \langle Q_{prim}\rangle$ accumulated by statistics.
\item[ MSTV(43)] : (D=2) $Q^2$-definition in $2 \rightarrow 2$
parton collision processes;
for $\hat s$-channel or fusion processes,
$Q^2$ is always chosen to be $m^2=\hat s$.
Note that the parton-parton collisions invariants are
denoted as $\hat s = (p_1+p_2)^2$,
$\hat t = (p_1-p_1')^2$, $\hat u = (p_1-p_2')^2$.
\\
{\bf = 1 :} $Q^2 = 2 \hat{s} \hat{t} \hat{u}/(\hat{s}^2 +
\hat{t}^2 + \hat{u}^2)$.
\\
{\bf = 2 :} $Q^2 = 0.5 \,(m_{\perp\,1}^2 + m_{\perp\,2}^2)$.
\\
{\bf = 3 :} $Q^2 = \min (-\hat{t}, - \hat{u})$.
\\
{\bf = 4 :} $Q^2 = \hat{s}$.
\item[ MSTV(44)] : (D=2) choice of scheme for parton collision-time estimate.
\\
{\bf = 0 :} zero collision-time, i.e. instantaneous process.
\\
{\bf = 1 :} finite collision-time sampled from an exponential distribution
$\exp(-x)$, where $x=t/\tau$, $t$ the time in the lab, and $\tau$
the mean life-time of the parton in the lab, see PARV(44).
\\
{\bf = 2 :} finite collision-time sampled from Gyulassy-Wang distribution
$(2/\tau) \;x/(1+x^2)^2$, where $x=t/\tau$, $t$ the time in the lab,
and $\tau$ the mean life-time of the parton in the lab, see PARV(44).
\item[ MSTV(45)] : (D=1) selection of maximum allowed distance between two partons
in their $cm$-frame in order to be able to collide (used for ruling
out widely separated pairs to speed up simulation).
\\
{\bf = 0 :} fixed at a value given by $r_{12\;max} =$ PARV(45).
\\
{\bf = 1 :} variable at a value
$r_{12\;max} =$ PARV(45)/$(\sqrt{\hat{s}}/GeV)$.
\item[ MSTV(46)] : (D=0) choice of probability distribution W(x) from which
collision probability of 2 partons is sampled, where
$x = r_{12}^{(sep)}/r_{12}^{(\hat{\sigma})}$, with $r_{12}^{(sep)}$
the transverse 2-parton separation
at closest approach and $r_{12}^{(\hat{\sigma})}$
the transverse radius of interaction as given by the parton-parton cross-section $\hat{\sigma}$.
\\
{\bf = 0 :} flat distribution, $W(x) = \theta(a-x)$, with $a =$ PARV(46).
\\
{\bf = 1 :} exponential distribution, $W(x) = \exp(-x/a)$, $a =$ PARV(46).
\item[ MSTV(47)] : (D=2) special treatment of `flavor excitation' by parton
collisions involving one or two primary hadronic or nuclear
(anti)quarks by requiring a minimum momentum transfer of the
resolving parton with the heavy quark.
\\
{\bf = 0 :} no special treatment of heavy quarks over light quarks.\\
{\bf = 1 :} requirement of minimum momentum transfer for liberation of
heavy quark out of initial parton distribution by scattering, where
the minimum required momentum transfer
is given by PARV(47) times the mass of the heavy quark.
\\
{\bf = 2 :} as = 1, but now both the struck heavy quark as well as its
initial antiquark sibling are liberated (i.e. get on mass shell).
\item[ MSTV(48)] : (D=0) switch for including soft parton collisions according
to a phenomenological cross-section
$d\hat{\sigma}/dQ^2 \propto \alpha_s^2(p_0^2)/(Q^2 + \mu^2)$
for soft parton collisions (c.f. eq. (\ref{sigs})),
where $p_0 =$ PARV(42), $\mu =$ PARV(48), and $Q^2 = p_\perp^2$
for $\hat t, \hat u$ channel and
$Q^2 = \hat s$ for $\hat s$ channel.
\\
{\bf = 0 :} switched off.
\\
{\bf = 1 :} soft collisions are treated complementary to hard collisions,
i.e. occur only if $Q < p_{0}$.
\\
{\bf = 2 :} soft collisions are treated supplementary to hard collisions,
i.e. compete with the latter, whichever cross-section dominates.
\item[ MSTV(49)] : (D=1) switch to allow for $2 \rightarrow 1$
parton fusion processes
$a+b\rightarrow c^\ast$ in competition with $2\rightarrow 2$
scattering processes $a+b\rightarrow c^\ast\rightarrow d+e$,
such that for a given pair a,b either of those processes is selected
according to the relative probability given by the respective
cross-sections and depending on the `life-time' of the particle $c^\ast$.
\\
{\bf = 0 :} fusion processes switched off.
\\
{\bf = 1 :} switched on, with relative weight determined by PARV(49).
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARV(41)] : (D=0.25 GeV) nominal $\Lambda$-value used in
running $\alpha_s$ for parton-parton collisions.
\item[ PARV(42)] : (D=2.0 GeV) cut-off for perturbative QCD cross-sections
that are divergent when the momentum transfer $Q\rightarrow 0$. Here the
2-body collision scale $Q$ is taken equal to $p_\perp$, the
transverse momentum
exchanged in a parton scattering, or equal to $\hat s$ for
annihilation or fusion processes. For MSTV(42)=0, the value
$p_{0} = a$ with $a =$ PARV(42) defines {\it hard} ($Q>p_{0}$) and
{\it soft} ($Q<p_{0})$ collisions. For MSTV(42)=1, see PARV(43).
\item[ PARV(43)] : (D=0.27) parameter for effective $p_{0}$ value,
if MSTV(42)=1,
instead of $p_{0} = const$ for MSTV(42)=0. The effective $p_{0}$
assumes a dependence on the total collision energy $\sqrt{s}$ and
the mass of the (nuclear) collision system. It is parametrized as
$p_{0}(\sqrt{s},A,B) = \frac{a}{4} \cdot \left(E_h/GeV\right)^b$,
where $a =$ PARV(42), $b =$ PARV(43), and $E_h= 2 \sqrt{s}/(A+B)$
with $A$ ($B$) the mass number of beam (target) particle, if
it is a hadron or nucleus.
\item[ PARV(44)] : (D=0.1) if MSTV(44)=1 or MSTV(44)=2, the
proportionality between
mean collision time $\tau$ of a 2-parton collision process, and the
associated transverse momentum generated (for $\hat{s}$-channel processes, the
invariant mass of the pair). It is parametrized as $\tau=\,c\,(1/Q)$,
with $c =$ PARV(44) and $Q = p_\perp$ or $\hat{s}$.
\item[ PARV(45)] : (D=5. GeV$^{-1}$) maximum separation of two partons to be
able to
interact through a 2-body collision. For MSTV(45)=0, it is fixed at
$r_{12} =$ PARV(45), whereas for MSTV(45)=1, it is taken $\hat{s}$-dependent as
$r_{12} =$ PARV(45)$/(\sqrt{\hat{s}}/GeV)$.
\item[ PARV(46)] : (D=1.0) parameter in the collision-probability distribution
$W(x)=\theta(a-x)$ (for MSTV(46)=0) and $W(x)=\exp(-x/a)$ for MSTV(46)=1,
with $x = r_{12}^{(sep)}/r_{12}^{(\hat{\sigma})}$, $r_{12}^{(sep)}$
the relative transverse parton
separation of closest approach and $r_{12}^{(\hat{\sigma})}$
the transverse radius of interaction as given by the
parton-parton cross-section $\hat{\sigma}$.
\item[ PARV(47)] : (D=1.0) for `flavor excitation' processes where a primary
hadronic or nuclear (anti)quark is struck out by a collision, an
effective minimum resolution scale is required, depending on
the flavor, such that the momentum transfer
$Q^2 > a \cdot m_f$, with $a=$ PARV(47) and $m_f$ the flavor-dependent
quark mass. This parameter acts in addition to the
constraint set by PARV(42) above.
\item[ PARV(48)] : (D=0.5 GeV) parameter in the phenomenological
cross-section
$d\hat{\sigma}/dQ^2 \propto \alpha_s^2(p_0^2)/(Q^2 + \mu^2)$
for soft parton collisions,
where $\mu =$ PARV(48), $p_0$ is given by PARV(42), and $Q^2 = p_\perp^2$
for $\hat t, \hat u$ channel and
$Q^2 = \hat s$ for $\hat s$ channel.
\item[ PARV(49)] : (D=10.) parametric strength of $2 \rightarrow 1$
parton fusion processes $a+b\rightarrow c$ competing with
$2\rightarrow 2$ scattering processes $a+b\rightarrow c+d$. It is
given by the ratio of the two processes which is proportional
to $1/(R^2 \hat{s})$, where $1/R^2 = $PARV(49).
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
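The energy- and mass-dependent cut-off used for MSTV(42)=1 and parametrized via PARV(42) and PARV(43) can be sketched in a few lines of Python (illustrative only, not VNI code; the function name and defaults are ours, taken from the entries above):

```python
def p0_effective(sqrt_s, a_beam=1, b_target=1, a=2.0, b=0.27):
    """Effective cut-off p_0(sqrt(s), A, B) in GeV, per PARV(42)-PARV(43):
    p_0 = (a/4) * (E_h/GeV)**b with E_h = 2*sqrt(s)/(A+B).
    sqrt_s in GeV; a_beam, b_target are mass numbers (1 for a hadron)."""
    e_h = 2.0 * sqrt_s / (a_beam + b_target)  # average energy per hadron
    return (a / 4.0) * e_h ** b
```

For a $p\bar p$ collision at $\sqrt{s}=200$ GeV this gives $p_0 \approx 2.1$ GeV, and the cut-off grows slowly with energy but decreases for heavy collision systems, since the available energy per nucleon is smaller.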
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 4. Space-like parton branchings:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTV(50)] : (D=1) master switch for space-like branchings.
\\
{\bf = 0 :} switched off
\\
{\bf = 1 :} on.
\item[ MSTV(51)] : (D=3) level of coherence imposed on the space-like parton
shower evolution.
\\
{\bf = 1 :} none, i.e. neither $k^2$ values nor angles need be ordered.
\\
{\bf = 2 :} $k^2$ values at branches are strictly ordered, increasing
towards the hard interaction.
\\
{\bf = 3 :} $k^2$ values and opening angles of emitted (on-mass-shell or
time-like) partons are both strictly ordered, increasing
towards the hard interaction.
\item[ MSTV(52)] : (D=2) structure of associated time-like parton evolution,
i.e. showers initiated by emission off the incoming space-like partons.
\\
{\bf = 0 :} no associated showers are allowed, i.e. emitted partons are
put on mass-shell.
\\
{\bf = 1 :} a shower may evolve, with maximum allowed time-like virtuality
set by the phase space only.
\\
{\bf = 2 :} a shower may evolve, with maximum allowed time-like virtuality
set by phase space or by the scale $Q^2$, the virtuality of the
space-like parton created at the same vertex, whichever is the
stronger constraint.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARV(51)] : (D=0.25 GeV) $\Lambda$-value used in
$\alpha_s$ and in structure functions for space-like parton evolution.
\item[ PARV(52)] : (D=1.0 GeV) invariant mass cutoff $q_0$ of
space-like parton
showers, below which parton branchings are not evolved. For consistency
it is taken as $q_{0} = \max(Q_0$,PARV(52)) where $Q_0 = $PARV(23) is
the initial resolution scale in the hadron structure functions.
\item[ PARV(53)] : (D=4.) $Q^2$-scale of the hard scattering is multiplied by
PARV(53) to define the maximum parton virtuality allowed in
space-like branchings associated with the hard interaction. This does not
apply to $\hat{s}$-channel resonances, where the maximum virtuality is set
by $m^2=\hat s$.
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
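The virtuality window for the space-like evolution implied by PARV(52) and PARV(53) can be summarized in a minimal sketch (illustrative Python, not VNI code; names are ours, defaults from the entries above):

```python
def spacelike_window(q2_hard, q0_struc=1.0, parv52=1.0, parv53=4.0):
    """Return (q0, q2_max) for space-like branchings:
    the infrared cutoff q0 = max(Q_0, PARV(52)) in GeV, with
    Q_0 = PARV(23) the structure-function scale, and the maximum
    allowed virtuality q2_max = PARV(53) * Q^2 in GeV^2 (except for
    s-channel resonances, where the maximum is m^2 = s_hat)."""
    q0 = max(q0_struc, parv52)   # lower end of the evolution
    q2_max = parv53 * q2_hard    # upper end, tied to the hard scale
    return q0, q2_max
```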
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 5. Time-like parton branchings:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTV(60)] : (D=2) master switch for time-like branchings.
\\
{\bf = 0 :} no branchings at all, i.e. switched off.
\\
{\bf = 1 :} QCD type branchings of quarks and gluons.
\\
{\bf = 2 :} also emission of photons off quarks; the photons are assumed
on mass-shell.
\item[ MSTV(61)] : (D=2) branching mode for time-like parton evolution.
\\
{\bf = 1 :} conventional branching, i.e. without angular ordering.
\\
{\bf = 2 :} coherent branching, i.e. with angular ordering.
\item[ MSTV(62)] : not used.
\item[ MSTV(63)] : (D=1) choice of formation-time scheme for parton emission.
\\
{\bf = 0 :} no formation time, i.e. instantaneous emission.
\\
{\bf = 1 :} finite formation time sampled from an exponential distribution
$\exp(-x)$, where $x=t/\tau$, $t$
the time in the lab, and $\tau$ the mean life-time
of the parton in the lab, see PARV(63).
\\
{\bf = 2 :} finite formation time sampled from Gyulassy-Wang distribution
$ (2/\tau) \;x/(1+x^2)^2$, where $x=t/\tau$, $t$ the time in the lab,
and $\tau$ the mean life-time of the parton in the lab, see PARV(63).
\item[ MSTV(64)] : not used.
\item[ MSTV(65)] : (D=0) selection of kinematics reconstruction for branchings
initiated by a single off-shell parton.
\\
{\bf = 0 :} conservation of energy and jet direction, but longitudinal
momentum is not conserved.
\\
{\bf = 1 :} with full energy-momentum conservation at each branching, but
the jet direction is not conserved due to recoil.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARV(61)] : (D=0.29 GeV) $\Lambda$-value used in running $\alpha_s$ for
time-like parton showers.
\item[ PARV(62)] : (D=1.0 GeV) invariant mass cutoff $\mu_0$ of time-like
parton evolution,
below which partons are not assumed to radiate. Exception: for
the case MSTV(84)=3, for reasons of consistency,
the value is automatically
scaled to 1.5 $\times$ PARV(62).
\item[ PARV(63)] : (D=0.01) if MSTV(63)=1 or MSTV(63)=2, the proportionality
between mean-life time $\tau$ of an off-shell parton and its virtuality
$k^2$, is parametrized as $\tau = c\,(1/\sqrt{k^2})$, with $c =$ PARV(63).
(Is inversely related to PARV(65)).
\item[ PARV(64)] : (D=4.) $Q^2$-scale of the hard scattering is multiplied by
PARV(64) to define the maximum parton virtuality allowed in
time-like branchings associated with the hard interaction. This does not
apply to $\hat{s}$-channel resonances, where the maximum virtuality is set
by $m^2=\hat s$.
\item[ PARV(65)] : (D=4.) invariant virtuality $k^2$
of off-shell partons is multiplied by
PARV(65) to define the maximum parton virtuality allowed in time-like
branchings of single parton decays. (Is inversely related to
PARV(63)).
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
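The formation-time schemes of MSTV(63) (and, analogously, the collision-time schemes of MSTV(44)) amount to drawing $t=\tau\,x$ with mean life-time $\tau = c/\sqrt{k^2}$ and $x$ sampled from either $\exp(-x)$ or the Gyulassy-Wang distribution. A minimal Python sketch using inverse-transform sampling (illustrative only, not VNI code; names are ours) reads:

```python
import math
import random

def formation_time(k2, scheme=2, c=0.01):
    """Formation time in GeV^-1 of an off-shell parton with virtuality
    k2 (GeV^2); c is the proportionality PARV(63)."""
    if scheme == 0:
        return 0.0                    # instantaneous emission
    tau = c / math.sqrt(k2)           # mean life-time in the lab
    u = random.random()
    if scheme == 1:
        x = -math.log(1.0 - u)        # inverse CDF of exp(-x)
    else:
        # Gyulassy-Wang density 2x/(1+x^2)^2 in x = t/tau has the CDF
        # F(x) = 1 - 1/(1+x^2), hence x = sqrt(u/(1-u))
        x = math.sqrt(u / (1.0 - u))
    return tau * x
```

Note that the Gyulassy-Wang distribution peaks at finite $x$ and has median $x=1$, so typical formation times are of order $\tau$ itself, while the exponential scheme produces a substantial fraction of very short times.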
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 6. Parton-cluster formation and cluster-hadron decay: }
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTV(80)] : (D=1) master switch for hadronization of parton clusters.
\\
{\bf = 0 :} off.
\\
{\bf = 1 :} on.
\item[ MSTV(81)] : (D=1) choice of definition of 2-parton spatial separation
$r_{12}= r_1-r_2$ between two partons 1, 2, for measuring
probability of pre-hadronic cluster formation of the two partons.
\\
{\bf = 0 :} distance $r_{12}$ measured in the global frame of the collision
event.
\\
{\bf = 1 :} distance $r_{12}$ measured in the center-of-mass frame of 1 and 2.
\item[ MSTV(82)] : (D=4) choice of definition of 2-parton momentum separation
$p_{12}$ of two partons 1, 2, for pre-hadronic cluster search.
\\
{\bf = 1 :} three-momentum, i.e.
$p_{12}=\sqrt{p_{x\;12}^2+p_{y\;12}^2+p_{z\;12}^2}$.
\\
{\bf = 2 :} transverse momentum, i.e. $p_{12}=\sqrt{p_{x\;12}^2+p_{y\;12}^2}$.
\\
{\bf = 3 :} `pseudo-mass', i.e. $p_{12}=\sqrt{2\,E_1E_2\,(1-p_1p_2)/(m_1m_2)}$.
\\
{\bf = 4 :} `invariant mass', i.e.
$p_{12}=\sqrt{E_{12}^2-p_{x\;12}^2-p_{y\;12}^2-p_{z\;12}^2}$.
\item[ MSTV(83)] : (D=1) choice of probability distribution from which cluster
formation of 2 partons is sampled. Simulates the conversion process as
a `tunneling' of the partons through a potential barrier set by PARV(81)
and PARV(82), separating perturbative and non-perturbative regimes.
\\
{\bf = 0 :} flat distribution, i.e. $\theta$-function.
\\
{\bf = 1 :} exponential distribution.
\item[ MSTV(84)] : (D=3) level of including color correlations among pairs of
partons in the process of cluster formation.
\\
{\bf = 0 :} none, except that clustering of two partons originating from the
same mother is vetoed.
\\
{\bf = 1 :} probability of two cluster candidates being in a color singlet
state is sampled uniformly with factors
$C_{gg}=1/9, \;C_{q\bar q}=1/3, \;C_{gq}=1/3$.
\\
{\bf = 2 :} exact matching of colors and anticolors of two partons
is required to give color singlet(s).
\\
{\bf = 3 :} general case, where arbitrary color configuration of two partons
is allowed, and color singlet formation is balanced by additional gluon
emission(s) to conserve color locally at each vertex.
\item[ MSTV(85)] : (D=0) option allowing for diquark clustering, i.e. sequential
clustering of two (anti)quarks plus a third (anti)quark (not yet working
properly!).
\\
{\bf = 0 :} off.
\\
{\bf = 1 :} on.
\item[ MSTV(87)] : (D=0) choice of effective invariant masses
of partons to ensure that
kinematical thresholds for cluster-hadron decay can be satisfied.
\\
{\bf = 0 :} `current' masses are used, i.e. $m_f =$ (0.05, 0.01, 0.01, 0.2, 1.5,
5.0, 150) GeV for $f = g, d, u, s, c, b, t$, respectively.
\\
{\bf = 1 :} `constituent' masses are used, i.e. $m_f =$ (0.5, 0.32, 0.32, 0.5,
1.8, 5.2, 150) GeV for $f= g, d, u, s, c, b, t$, respectively.
\item[ MSTV(88)] : (D=0) choice of measure for rough distinction between `slow'
and `fast' clusters in statistics accumulation of exogamous cluster
production with numerical value of boundary given by PARV(88).
\\
{\bf = 0 :} energy $E_{cl}$.
\\
{\bf = 1 :} 3-momentum $P_{cl}$
\\
{\bf = 2 :} cluster velocity $\beta = P_{cl}/E_{cl}$.
\\
{\bf = 3 :} Lorentz factor $\gamma = E_{cl}/M_{cl}$.
\item[ MSTV(89)] : (D=1) smearing of cluster rapidities within
an interval $| y | \le y_c$, where $y_c$ is determined by PARV(89).
\item[ MSTV(90)] : (D=1) master switch for hadronization of beam/target remnants.
\\
{\bf = 0 :} off.
\\
{\bf = 1 :} on.
\item[ MSTV(91)] : (D=1) switch for the decay of unstable hadrons, emerging either
directly from cluster-decays, or from previous unstable hadron decays,
\\
{\bf = 0 :} off.
\\
{\bf {\boldmath $\ge 1$} :}
the decay chain is iterated MSTV(91) times, but at most 5 times.
\item[ MSTV(92)] : (D=1) switch for simulating baryon stopping effect in
nucleus-nucleus collisions of protons and neutrons (and associated $\Delta$'s)
by redistributing them in transverse momentum $p_\perp$ according to
a $1/(1+b \,p_\perp^2)^4$ distribution (approximately an exponential for
$p_\perp \le 2 $ GeV), and in longitudinal momentum $p_z$ (or rapidity)
according to a Gaussian distribution $\exp[-P_{beam}^2/(2 c^2)]$.
The parameters $b$ and $c$ are given by PARV(97) and PARV(98),
respectively.
\\
{\bf = 0 :} off.
\\
{\bf = 1 :} on.
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARV(81)] : (D=3.6 GeV$^{-1}$) minimum separation of 2 partons, below which
a cluster cannot be formed (denoted $R_\chi$ in Ref. \cite{ms37}).
\item[ PARV(82)] : (D=4. GeV$^{-1}$) critical separation of 2 partons that sets the
space-time scale for cluster formation (denoted $R_{crt}$
in Ref. \cite{ms37}).
Is related to the average cluster size and typically about 10 \%
above $R_\chi$.
\item[ PARV(83)] : (D=1.0) `slope' of the exponential probability distribution
$W(R)=\theta(R-R_\chi)\exp(-$PARV(83)$R/R_{crt})$ for MSTV(83)$>$ 0,
describing
the probability of 2 partons to form a pre-hadronic
cluster by tunneling through
the `potential barrier' marked by $R_\chi$ and $R_{crt}$, with average
cluster size $R_{cl} =2 R_{crt}/$PARV(83).
\item[ PARV(84)] : (D=0.3 GeV) minimum invariant mass of 2 partons required to
form a cluster being necessarily above threshold for hadron decay.
\item[ PARV(85)] : (D=1000. GeV) maximum allowed invariant mass of 2 partons to form
a cluster with mass $M_{cl}$, in order to provide the option to suppress
production of heavy clusters, in addition to the requirements of PARV(81), PARV(82).
Note: default corresponds to no suppression.
\item[ PARV(86)] : (D=5. GeV$^{-1}$) maximum allowed separation of
2 partons that are
unambiguous cluster candidates. Cluster formation is enforced above
this value, in order to avoid unphysical separation. In this case
the requirement $M_{cl}$ $<$ PARV(85) is overridden.
\item[ PARV(87)] : (D=1.) `slope' of exponential Hagedorn-type distribution for
sampling the masses of clusters in their 2-body decays, given by
$\exp(- a \cdot M_{cl}/M_0)$, where $a =$ PARV(87),
$M_{cl}$ is the cluster mass and
$M_0 = 0.2$ GeV is the `Hagedorn temperature' (see Ref. \cite{ms37}).
\item[ PARV(88)] : (D=1. GeV) boundary for distinguishing `slow' and `fast'
clusters in statistics accumulation of exogamous cluster production
(c.f. MSTV(88)).
\item[ PARV(89)] : (D=0.2) determines value of $y_c$ = PARV(89) $y_{beam}$
for smearing of cluster
rapidities within an interval $| y | \le y_c$, if MSTV(89) $>$ 0.
\item[ PARV(91)] : (D=0.6) Parametric factor for charged multiplicity $N_{ch}$
from soft
fragmentation of beam/target remnants with respect to $p\bar{p}$
collisions at CM energy $\sqrt{s}$, such that $N_{ch}(\sqrt{s}) =
N_{ch}^{(p\bar p)}(a\cdot \sqrt{s})$, where $a=$ PARV(91)
(if MSTV(90) = 1).
\item[ PARV(92)] : (D=1.0) Nuclear dependence in $eA$ or $\gamma A$ collisions of
charged multiplicity $N_{ch}$, resulting from soft fragmentation of nuclear
remnant of the original nucleus with mass number $A$. It is parametrized
as: $N_{ch}^{(eA)}(\sqrt{s},A) = A^\alpha \; N_{ch}^{(p\bar p)}(\sqrt{s})$,
where $\alpha =$ PARV(92).
\item[ PARV(93)] : (D=1.0) Nuclear dependence in $hA$ or $AB$ collisions of the
charged multiplicity $N_{ch}$, resulting from soft fragmentation of nuclear
beam or/and target remnants of the original nuclei with mass number
$A\; (B)$ (case of hadron $h$ corresponds to $A=1$ or $B=1$).
It is parametrized as:
$N_{ch}^{(AB)}(\sqrt{s},A,B) = \left[(A+B)/2\right]^\beta \,
N_{ch}^{(p\bar p)}(\sqrt{s})$, where $\beta =$ PARV(93).
\item[ PARV(94)] : (D=25. GeV$^{-1}$) Time in the global collision frame when soft
fragmentation starts to become gradually active (the default corresponds to 5 fm).
\item[ PARV(95)] : (D=250. GeV$^{-1}$) The positions of particles produced
in fragmentation
of both parton clusters and beam/target clusters are assigned such that
they become active only after an effective formation time
$\Delta t = \min($PARV(96)$\times E/M^2,$ PARV(95)$)$ (the default value
corresponds to $\Delta t \le 50$ fm).
\item[ PARV(96)] : (D=10.) proportionality factor in the uncertainty relation
$\Delta t = PARV(96) \times E/M^2$, which determines the mean formation
time of particles produced in soft beam/target fragmentation (c.f. PARV(95)).
\item[ PARV(97)] : (D=0.35 GeV)
parameter $b$ in $1/(1+b \,p_\perp^2)^4$ distribution for simulating
baryon stopping effect in nucleus-nucleus collisions of protons and neutrons
(and associated $\Delta$'s)
by redistributing them in transverse momentum $p_\perp$ (c.f. MSTV(92)).
\item[ PARV(98)] : (D=0.14 GeV)
parameter $c$ in Gaussian distribution $\exp[-P_{beam}^2/(2 c^2)]$ for
simulating baryon stopping effect in nucleus-nucleus collisions
of protons and neutrons (and associated $\Delta$'s)
by redistributing them in longitudinal momentum $p_z$ (c.f. MSTV(92)).
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
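The `tunneling' probability of MSTV(83)=1, set by PARV(81)-PARV(83), can be sketched as follows (illustrative Python, not VNI code; names are ours, defaults are the values quoted above):

```python
import math
import random

def cluster_prob(r, r_chi=3.6, r_crt=4.0, a=1.0):
    """Probability W(r) = theta(r - R_chi) * exp(-a*r/R_crt) for two
    partons at separation r (GeV^-1) to form a pre-hadronic cluster;
    R_chi = PARV(81), R_crt = PARV(82), slope a = PARV(83)."""
    if r < r_chi:
        return 0.0                  # below R_chi no cluster can be formed
    return math.exp(-a * r / r_crt)

def forms_cluster(r, **kw):
    """Monte-Carlo decision whether a given pair coalesces into a cluster."""
    return random.random() < cluster_prob(r, **kw)
```

In an event loop, `forms_cluster` would be evaluated for each eligible parton pair; in the full program the decision is of course also subject to the color, mass and separation constraints of MSTV(84) and PARV(84)-PARV(86).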
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 7. Other settings:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[$\bullet$]
{\it Handling of errors and warnings:}
\end{description}
\begin{description}
\item[ MSTV(100)] : (D=2) check on possible errors during program execution.
\\
{\bf = 0 :} errors do not cause any immediate action.
\\
{\bf = 1 :} possible errors are checked and, in case of a problem, the
subprogram is exited, but the simulation is continued
from there on. For the first MSTV(101) errors a message is
printed; after that no messages appear.
\\
{\bf = 2 :} possible errors are checked and, in case of a problem,
the simulation is forcibly terminated. For the first MSTV(101)
errors a message is printed; after that no messages appear.
\item[ MSTV(101)] : (D=10) max number of errors that are printed.
\item[ MSTV(102)] : (D=1) printing of warning messages.
\\
{\bf = 0 :} no warnings are written.
\\
{\bf = 1 :} first MSTV(103) warnings are printed, thereafter no warnings
appear.
\item[ MSTV(103)] : (D=10) max number of warnings that are printed.
\end{description}
\begin{description}
\item[$\bullet$]
{\it Handling of input/output:}
\end{description}
\begin{description}
\item[ MSTV(110)] : (D=1) direction of program output.
\\
{\bf = 0 :} all output is directed to standard output unit 6.
\\
{\bf = 1 :} output is directed to units MSTV(111)-MSTV(113).
\item[ MSTV(111)] : (D=10) unit number to which general output is directed.
\item[ MSTV(112)] : (D=6) unit number for writing `on-line' information.
\item[ MSTV(113)] : (D=20) unit number for listing warnings and error messages.
\item[ MSTV(114)] : (D=2) switch for general information on unit MSTV(111),
which is directed automatically to the file VNIRUN.DAT. Allows one to
select the amount of output provided on the global performance of a
simulation consisting of a sample of collision events:
\\
{\bf = 0 :} only minimal output concerning selected process and
main parameters is written out.
\\
{\bf = 1 :} as =0, plus a listing of the 1st event in momentum and
position space.
\\
{\bf = 2 :} as =1, plus listing of number and properties of elementary
subprocesses that occurred during the simulation.
\item[ MSTV(115)] : (D=2) switch for `on-line' information on unit MSTV(112),
i.e. writing of initialization and termination info, and initial
and final entries for each event as the simulation goes on.
\\
{\bf = 0 :} no `online output'.
\\
{\bf = 1 :} only initialization and finalization info is printed.
\\
{\bf = 2 :} continuous listing of minimum event information to keep
control of the program performance.
\end{description}
\begin{description}
\item[$\bullet$]
{\it Miscellaneous control switches for subroutines:}
\end{description}
\begin{description}
\item[ MSTV(120)] : (D=0) flag to indicate initialization of certain HERWIG
common blocks and default values that are necessary for using
parts of the HERWIG program for the hadronization of clusters.
Is set equal to 1 after the first initialization call.
\item[ MSTV(121)] : (D=0) if set to 1 before a VNIROBO call, the V vectors
(in the particle range to be rotated/boosted) are reset to
0 before the rotation/boost. MSTV(121) is inactive during a VNIROBO
call and is set back to 0 upon return.
\item[ MSTV(122)] : (D=0) specifies in a VNIROBO call the type of rotation of the
P, R, and V vectors. The rotation is clockwise (active rotation) for
MSTV(122)=0 and anticlockwise (passive rotation) otherwise. The
value of MSTV(122) is set back to 0 upon return.
\item[ MSTV(123)] : (D=0) specifies in a VNIROBO call for performing a boost,
whether the vectors R are boosted (MSTV(123)=0) or not (MSTV(123)=1).
The vectors P are always boosted. The value of MSTV(123) is set back
to 0 upon return.
\item[ MSTV(124)] : (D=1) pointer to lowest entry in particle record VNIREC to be
included in data analysis routines VNIANA1-VNIANA5.
\item[ MSTV(125)] : (D=100000) pointer to highest entry in particle record VNIREC
to be included in data analysis routines VNIANA1-VNIANA5.
\item[ MSTV(126)] : (D=20) number of lines in the beginning of the particle
record that are reserved for internal event history information. The
value should not be reduced, but can be increased if necessary.
\item[ MSTV(127)] : (D=1) in the listing of the current state of the particle record
by calling the subroutine VNILIST, one can choose between listing
either the color/anticolor $C$, $A$ and the space-time origin of a particle $I$
according to the information contained in $V(I,5)$ explained above
( = 0 ), or, alternatively, the mother $KMO$ and the color-flow information
contained in $K(I,3)-K(I,5)$ ( = 1 ). All other listed quantities are
the same for either option.
\item[ MSTV(128)] : (D=0) specifies in calls to VNIEDIT and VNIROBO the classes
of particles to be included.
The default = 0 {\it ex}cludes all inactive or decayed particles with $K(I,1) \le 0$
or $K(I,1) > 10$, whereas the setting = 1 {\it in}cludes any entry
listed in the particle record. MSTV(128) is set back to 0 upon return.
\end{description}
\begin{description}
\item[$\bullet$]
{\it Version, date of last change:}
\end{description}
\begin{description}
\item[ MSTV(140)] : (D=1) Print-out of VNI logo on first occasion; MSTV(140)
is reset to 0 afterwards.
\item[ MSTV(141)] : VNI version number.
\item[ MSTV(142)] : VNI subversion number.
\item[ MSTV(143)] : year of last change of VNI version.
\item[ MSTV(144)] : month of last change of VNI version.
\item[ MSTV(145)] : day of last change of VNI version.
\end{description}
\medskip
\newpage
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 8. Statistics, event study and data analysis:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[$\bullet$]
{\it Subroutine} VNILIST: {\it Full listing of first event.}
\end{description}
\begin{description}
\item[ MSTV(150)] : (D=0) Gives an example listing of the evolution
history of the first successful event in momentum and
position space. Note: if the general output switch MSTV(114)
is set $\ge$ 1, then MSTV(150) is automatically set to 1.
\end{description}
\begin{description}
\item[$\bullet$]
{\it Subroutine} VNIANA1: {\it General event statistics.}
\end{description}
\begin{description}
\item[ MSTV(151)] : (D=0) Statistics on initial parton state.
\item[ MSTV(152)] : (D=0) Number and momenta of produced particles.
\item[ MSTV(153)] : (D=0) Factorial moments.
\item[ MSTV(154)] : (D=0) Energy-energy correlations.
\item[ MSTV(155)] : (D=0) Decay channels.
\end{description}
\begin{description}
\item[$\bullet$]
{\it Subroutine} VNIANA2: {\it Statistics on hadronic observables.}
\end{description}
\begin{description}
\item[ MSTV(161)] : (D=0) Distributions of particles in $y=\ln(1/x)$.
\item[ MSTV(162)] : (D=0) Bose-Einstein correlation analysis.
\end{description}
\begin{description}
\item[$\bullet$]
{\it Subroutine} VNIANA3: {\it Statistics on pre-hadronic clusters.}
\end{description}
\begin{description}
\item[ MSTV(171)] : (D=0) Rapidity spectra of clusters $dN/dy$.
\item[ MSTV(172)] : (D=0) Longitudinal space-time spectra $dN/dz$.
\item[ MSTV(173)] : (D=0) Transverse momentum spectra $(1/p_\perp)\;dN/dp_\perp$.
\item[ MSTV(174)] : (D=0) Transverse space-time spectra $(1/r_\perp)\;dN/dr_\perp$.
\item[ MSTV(175)] : (D=0) Distributions of cluster sizes and masses.
\item[ MSTV(176)] : (D=0) Polarization profile of cluster density.
\end{description}
\begin{description}
\item[$\bullet$]
{\it Subroutine} VNIANA4: {\it Statistics on partons and produced hadrons.}
\end{description}
\begin{description}
\item[ MSTV(181)] : (D=0) Rapidity spectra of partons and hadrons $dN/dy$.
\item[ MSTV(182)] : (D=0) Longitudinal space-time spectra $dN/dz$.
\item[ MSTV(183)] : (D=0) Transverse momentum spectra $(1/p_\perp)\;dN/dp_\perp$.
\item[ MSTV(184)] : (D=0) Transverse space-time spectra $(1/r_\perp)\;dN/dr_\perp$.
\end{description}
\newpage
\begin{description}
\item[$\bullet$]
{\it Subroutine} VNIANA5: {\it Global thermodynamic multiparticle properties.}
\end{description}
\begin{description}
\item[ MSTV(191)] : (D=0) Flavor, energy and momentum composition.
\item[ MSTV(192)] : (D=0) Flow velocity profiles.
\item[ MSTV(193)] : (D=0) Particle densities, energy densities, pressures.
Note: in order to use this option, the {\it flow-velocity
profiles must(!) be obtained beforehand}
in a separate run, because they are assumed to be
read in as input (see MSTV(192)).
\end{description}
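The two-run procedure required by MSTV(193) can be summarized as follows (a
sketch of the relevant switch settings only; the MSTV array is assumed to be
accessible through the program's parameter common block):
\begin{verbatim}
C Run 1: generate the flow-velocity profiles (c.f. MSTV(192)).
      MSTV(192) = 1
C Run 2 (a separate run): the profiles obtained in run 1 are read
C in as input, and the particle densities, energy densities and
C pressures are computed.
      MSTV(193) = 1
\end{verbatim}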
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\bigskip
\bigskip
\begin{center}
{\bf MSTW(200), PARW(200): generated quantities and statistics}
\end{center}
\bigskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent{\bf 1. General:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTW(1)] : total number of collision events to be generated.
\item[ MSTW(2)] : total number of time steps per collision event.
\item[ MSTW(3)] : physics process $A+B$ that is simulated.
The currently available
beam ($A$) and target ($B$) particles and the collision processes are the
ones listed before in Table 1, with MSTW(3)=IPRO.
\item[ MSTW(4)] : Overall Lorentz frame of reference for event simulation.
\\
= 1 : center-of-momentum frame (CMS) of beam and target particle.
\\
= 2 : fixed-target frame (FIXT) with target particle at rest.
\\
= 3 : user-defined frame (USER) with given {\it 3-momentum} of beam
and target. Particles are assumed on the mass shell.
\\
= 4 : user-defined frame (FOUR) with given {\it 4-momentum}
( i.e., 3-momentum and energy) of beam
and target particle. The particles need not be on the mass shell.
\\
= 5 : user-defined frame (FIVE) with given {\it 5-momentum}
( i.e., 3-momentum, energy and mass) of beam
and target particle. The particles need not be on the mass shell,
but 4-momentum and mass information must(!) match.
\item[ MSTW(5)] : KF flavour code for beam particle $A$.
\item[ MSTW(6)] : KF flavour code for target particle $B$.
\item[ MSTW(7)] : type of incoming beam particle $A$:
1 for lepton, 2 for hadron, and 3 for nucleus.
\item[ MSTW(8)] : type of incoming target particle $B$:
1 for lepton, 2 for hadron, and 3 for nucleus.
\item[ MSTW(9)] : combination of incoming beam and target particles.
\\
{\bf = 1 :} lepton on lepton
\\
{\bf = 2 :} lepton on hadron
\\
{\bf = 3 :} lepton on nucleus
\\
{\bf = 4 :} hadron on lepton
\\
{\bf = 5 :} hadron on hadron
\\
{\bf = 6 :} hadron on nucleus
\\
{\bf = 7 :} nucleus on lepton
\\
{\bf = 8 :} nucleus on hadron
\\
{\bf = 9 :} nucleus on nucleus
\item[ MSTW(10)] : performance flag for current event. Is =0 at beginning, and
set =1 if during the evolution the event turns out to be rejectable,
either due to unphysical kinematics, particle combinations, etc.,
or due to numerical errors.
\item[ MSTW(11)] : current collision event.
\item[ MSTW(12)] : current time step in this collision event.
\item[ MSTW(13)] : current number of successful collision events,
i.e. those that completed gracefully with MSTW(10)=0.
\item[ MSTW(14)] : counter for ``non-diffractive'' collision events,
i.e. those that involved at least one parton
collision (in hadron or nucleus collisions).
\item[ MSTW(15)] :
Status of Lorentz transformation between different global Lorentz frames
in which the collision event is simulated (which may be different from
the initially specified frame, c.f. MSTW(4)). The value of MSTW(15)
saves the last performed transformation:
\\
{\bf = 1 :} from global $cm$-frame to fixed-target or
user-specified frame;
\\
{\bf = 2 :} from global $cm$-frame to hadronic center-of-mass in
DIS (photon-hadron $cm$-frame);
\\
{\bf = -1 :} from fixed-target or user-specified frame
to global $cm$-frame;
\\
{\bf = -2 :} from hadronic center-of-mass in
DIS (photon-hadron $cm$-frame) to global $cm$-frame.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARW(1)] : time increment $TINC(I)$ in current time step $I$.
\item[ PARW(2)] : time $TIME(I)$ in current time step $I$.
\item[ PARW(3)] : local machine time (in CPU) passed when simulation started.
\item[ PARW(4)] : local machine time (in CPU) when simulation ended.
\item[ PARW(5)] : conversion factor CPU $\rightarrow$ seconds
(machine-dependent: for most processors it is equal to 0.01, i.e.
1 sec = 100 CPU).
\item[ PARW(11)] : Total invariant $\sqrt{s} = E_{CM}$, i.e. the total CM energy
of the collision system.
\item[ PARW(12)] : Invariant $s=E_{CM}^2$ mass-square of complete system.
\item[ PARW(13)] : CM energy per nucleon $E_{CM}/(A+B)$ (where
$A$ ($B$) is the mass number of beam (target), and $A+B$ is the total
number of nucleons in the system) in collisions involving nuclei.
Is equal to $E_{CM}$ for elementary particle- and hadronic collisions.
\item[ PARW(14)] : Invariant $s_{eff} =E_{CM}^2/(A+B)$ mass-square per nucleon
in collisions involving nuclei.
Is equal to $s$ for elementary particle- and hadronic collisions.
\item[ PARW(15)] : mass $M_A$ of beam particle.
\item[ PARW(16)] : mass $M_B$ of target particle.
\item[ PARW(17)] : longitudinal momentum $P_{z\,A}$
of beam particle in specified global frame.
\item[ PARW(18)] : longitudinal momentum $P_{z\,B}$
of target particle in specified global frame.
\item[ PARW(19)] : angle $\theta$ of rotation from CM frame to user-defined
frame (also, in $e^+e^-$ via $W^+W^-$, the rotation of the
$W$-pair along the $z$-axis).
\item[ PARW(20)] : azimuthal angle $\phi$ of rotation from CM frame to
user-defined frame, corresponding to PARW(19).
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 2. Initial state of collision system:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTW(21)] : number of neutrons of beam particle $A$.
\item[ MSTW(22)] : number of neutrons of target particle $B$.
\item[ MSTW(23)] : number of protons of beam particle $A$.
\item[ MSTW(24)] : number of protons of target particle $B$.
\item[ MSTW(25)] : number of $d$-valence quarks of beam particle $A$.
\item[ MSTW(26)] : number of $d$-valence quarks of target particle $B$.
\item[ MSTW(27)] : number of $u$-valence quarks of beam particle $A$.
\item[ MSTW(28)] : number of $u$-valence quarks of target particle $B$.
\item[ MSTW(29)] : number of $s$-valence quarks of beam particle $A$.
\item[ MSTW(30)] : number of $s$-valence quarks of target particle $B$.
\item[ MSTW(31)] : total number of initial state partons in collisions
involving one or more hadron or nucleus.
\item[ MSTW(32)] : number of initial gluons.
\item[ MSTW(33)] : $10^5\times$ number of $d$ quarks + number of $\bar{d}$ antiquarks.
\item[ MSTW(34)] : $10^5\times$ number of $u$ quarks + number of $\bar{u}$ antiquarks.
\item[ MSTW(35)] : $10^5\times$ number of $s$ quarks + number of $\bar{s}$ antiquarks.
\item[ MSTW(36)] : $10^5\times$ number of $c$ quarks + number of $\bar{c}$ antiquarks.
\item[ MSTW(37)] : $10^5\times$ number of $b$ quarks + number of $\bar{b}$ antiquarks.
\item[ MSTW(38)] : $10^5\times$ number of $t$ quarks + number of $\bar{t}$ antiquarks.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARW(21)] : absolute value of velocity $\vec{\beta}_A$ of beam particle $A$.
\item[ PARW(22)] : absolute value of velocity $\vec{\beta}_B$ of target particle $B$.
\item[ PARW(23)] : Lorentz factor $\gamma_A$ of beam particle $A$.
\item[ PARW(24)] : Lorentz factor $\gamma_B$ of target particle $B$.
\item[ PARW(25)] : rapidity $Y_A$ of beam particle $A$.
\item[ PARW(26)] : rapidity $Y_B$ of target particle $B$.
\item[ PARW(27)] : radius $R_A$ (in 1/GeV) of beam particle in its restframe.
\item[ PARW(28)] : radius $R_B$ (in 1/GeV) of target particle in its restframe.
\item[ PARW(29)] : radius $R_n$ (in 1/GeV) of a neutron within a nucleus.
\item[ PARW(30)] : radius $R_p$ (in 1/GeV) of a proton within a nucleus.
\item[ PARW(31)] : accumulated average $Q_0$ of initial hadron (nucleus) parton
distributions (equals the average momentum scale of primary parton
collisions $\langle Q_{prim} \rangle$).
\item[ PARW(32)] : accumulated average momentum fraction $x_A$ of beam-side
initial hadron (nucleus) parton distributions.
\item[ PARW(33)] : accumulated average momentum fraction $x_B$ of target-side
initial hadron (nucleus) parton distributions.
\item[ PARW(34)] : 3 times the total charge of beam-side particle.
\item[ PARW(35)] : 3 times the total charge of target-side particle.
\item[ PARW(36)] : ratio $b/b_{max}$ of actual impact parameter
$b_{min} \le b \le b_{max}$ to maximum allowed impact parameter
of beam and target particles, if variable impact parameter is
chosen (c.f. MSTV(32) and PARV(32), PARV(33)).
\item[ PARW(37)] : azimuthal angle of colliding beam and target particles
around the beam axis, if variable impact parameter is chosen.
\item[ PARW(38)] : sign of $r_x$-coordinate of beam-particle $A$ in global frame
for variable impact parameter selection (= $-$ sign of target-particle $B$).
\item[ PARW(39)] : sign of $r_y$-coordinate of beam-particle $A$ in global frame
for variable impact parameter selection (= $-$ sign of target-particle $B$).
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 3. Parton scatterings:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTW(40)] : Total number of 2-parton collisions $a+b \rightarrow N$
($N = 1,2$).
\item[ MSTW(41)] : Number of hard $2 \rightarrow 2$ collisions of $q+q, q+\bar{q},
\bar{q}+\bar{q}$.
\item[ MSTW(42)] : Number of hard $2 \rightarrow 2$ collisions of $q+g, \bar{q}+g$.
\item[ MSTW(43)] : Number of hard $2 \rightarrow 2$ collisions of $g+g$.
\item[ MSTW(44)] : Number of soft $2 \rightarrow 2$ collisions of $q+q, q+\bar{q},
\bar{q}+\bar{q}$.
\item[ MSTW(45)] : Number of soft $2 \rightarrow 2$ collisions of $q+g, \bar{q}+g$.
\item[ MSTW(46)] : Number of soft $2 \rightarrow 2$ collisions of $g+g$.
\item[ MSTW(47)] : Number of $2 \rightarrow 1$ fusions of $q+\bar{q}$.
\item[ MSTW(48)] : Number of $2 \rightarrow 1$ fusions of $q+g, \bar{q}+g$.
\item[ MSTW(49)] : Number of $2 \rightarrow 1$ fusions of $g+g$.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARW(40)] : total number of `primary' (i.e., first)
collisions among primary partons.
\item[ PARW(41)] : accumulated average $Q^2$ of the `primary' collisions.
\item[ PARW(42)] : accumulated average $\sqrt{\hat{s}}$ of hard $2 \rightarrow 2$
collisions.
\item[ PARW(43)] : accumulated average rapidity $y^\ast=y_1^\ast-y_2^\ast$ of hard
$2\rightarrow 2$ collisions.
\item[ PARW(44)] : accumulated average $p_\perp$ of hard
$2\rightarrow 2$ collisions.
\item[ PARW(45)] : accumulated average $\sqrt{\hat{s}}$ of soft
$2\rightarrow 2$ collisions.
\item[ PARW(46)] : accumulated average rapidity $y^\ast=y_1^\ast - y_2^\ast$ of soft
$2\rightarrow 2$ collisions.
\item[ PARW(47)] : accumulated average $p_\perp$ of soft
$2\rightarrow 2$ collisions.
\item[ PARW(48)] : accumulated average $\sqrt{\hat{s}}$ of hard
$2\rightarrow 1$ collisions.
\item[ PARW(49)] : accumulated average rapidity $y^\ast=y_1^\ast - y_2^\ast$ of hard
$2\rightarrow 1$ collisions.
\item[ PARW(50)] : integrated 2-parton cross-section $\hat{\sigma}(\hat{s})
=\int dp_\perp^2 (d\hat{\sigma}(\hat{s},p_\perp^2)/dp_\perp^2)$.
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 4. Space-like parton branchings:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTW(50)] : Total number of space-like branchings $a \rightarrow b+c$.
\item[ MSTW(51)] : Number of space-like processes $CMshower \rightarrow b+c$.
\item[ MSTW(52)] : Number of space-like processes
$q (\bar{q}) \rightarrow q (\bar{q})+g$.
\item[ MSTW(53)] : Number of space-like processes $ g \rightarrow g+g$.
\item[ MSTW(54)] : Number of space-like processes $ g \rightarrow q+\bar{q}$.
\item[ MSTW(58)] : Number of space-like processes
$q (\bar{q}) \rightarrow q (\bar{q}) + \gamma$.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARW(50)] : average $\sqrt{Q^2}$ of starting scale for
space-like branching evolution.
\item[ PARW(51)] : average $\sqrt{-q^2}$ of virtuality of branching particle.
\item[ PARW(52)] : average value of $x$ in $x \rightarrow x'x''$ branching.
\item[ PARW(53)] : average fraction $z = x'/x$ of space-like branchings.
\item[ PARW(54)] : average relative transverse momentum $q_\perp$ of
space-like branchings.
\item[ PARW(55)] : average longitudinal momentum $q_z$ of branching particle.
\item[ PARW(56)] : average energy $q^0$ of the branching particle.
\item[ PARW(57)] : average invariant mass $\sqrt{|q^2|}$ of branching particle.
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
\medskip
\newpage
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 5. Time-like parton branchings:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTW(60)] : Total number of time-like branchings $a \rightarrow b+c$.
\item[ MSTW(61)] : Number of time-like processes $CMshower \rightarrow b+c$.
\item[ MSTW(62)] : Number of time-like processes
$q (\bar{q}) \rightarrow q (\bar{q})+g$.
\item[ MSTW(63)] : Number of time-like processes $g \rightarrow g+g$.
\item[ MSTW(64)] : Number of time-like processes $g \rightarrow q+\bar{q}$.
\item[ MSTW(68)] : Number of time-like processes
$q (\bar{q}) \rightarrow q (\bar{q}) + \gamma$.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARW(60)] : average $\sqrt{Q^2}$ of starting scale for
time-like branching evolution.
\item[ PARW(61)] : average $\sqrt{k^2}$ of virtuality of branching particle.
\item[ PARW(62)] : average value of $x$ in $x \rightarrow x'x''$ branching.
\item[ PARW(63)] : average energy fraction $z = x'/x$ of time-like branchings.
\item[ PARW(64)] : average relative transverse momentum $k_\perp$
of time-like branchings.
\item[ PARW(65)] : average longitudinal momentum $k_z$ of branching particle.
\item[ PARW(66)] : average energy of branching particle.
\item[ PARW(67)] : average invariant mass $\sqrt{m^2}$ of branching particle.
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
\medskip
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 6. Parton-cluster formation and cluster-hadron decay: }
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTW(80)] : Total number of 2-parton cluster formations.
\item[ MSTW(81)] : Total number of $gg$ clusterings, $gg \rightarrow CC + X$,
\ where $X$ is either `nothing', or $g$, or $gg$.
\item[ MSTW(82)] : Number of processes, $gg \rightarrow CC + g$.
\item[ MSTW(83)] : Number of processes, $gg \rightarrow CC + gg$.
\item[ MSTW(84)] : Total number of $q\bar{q}$ clusterings,
$q\bar{q} \rightarrow CC + X$, where $X$ is either `nothing',
or $g$, or $gg$.
\item[ MSTW(85)] : Number of processes, $q\bar{q} \rightarrow CC + g$.
\item[ MSTW(86)] : Number of processes, $q\bar{q} \rightarrow CC + gg$.
\item[ MSTW(87)] : Total number of $gq\;(g\bar{q})$ clusterings,
$gq \rightarrow C + X$, where $X$ is either $q$ or $gq$.
\item[ MSTW(88)] : Number of processes, $gq \rightarrow CC + gq$.
\item[ MSTW(89)] : Total number of $(qq)q \rightarrow CC$,
$(\bar{q}\bar{q})\bar{q} \rightarrow CC$ clusterings.
\item[ MSTW(90)]{\bf - MSTW(99)} : Same as MSTW(80) - MSTW(89), but now for the
corresponding numbers of `exogamously' produced clusters, with
`exogamy index' $e_{cl} = (e_i+e_j)/2$ unequal to 0 or 1
(see Ref. \cite{ms40}).
\item[ MSTW(100)] : Total number of `fast' clusters, classified according to
the choices of MSTV(88) and PARV(88).
\item[ MSTW(101)] : Total number of `slow' clusters, the fraction complementary
to the `fast' one.
\item[ MSTW(110)] : Total number of primary ``neutral'' particles per event,
produced directly by cluster-hadron decays.
\item[ MSTW(111)] : Number of primary leptons and gauge bosons.
\item[ MSTW(112)] : Number of primary light mesons.
\item[ MSTW(113)] : Number of primary strange mesons.
\item[ MSTW(114)] : Number of primary charm and bottom mesons.
\item[ MSTW(115)] : Number of primary tensor mesons.
\item[ MSTW(116)] : Number of primary light baryons.
\item[ MSTW(117)] : Number of primary strange baryons.
\item[ MSTW(118)] : Number of primary charm and bottom baryons.
\item[ MSTW(119)] : Number of other particles.
\item[ MSTW(120)]{\bf - MSTW(129)} : Same as MSTW(110) - MSTW(119), but now for
charged particles only.
\item[ MSTW(130)]{\bf - MSTW(139)} : Same as MSTW(110) - MSTW(119), but now for
the numbers of secondary neutral particles per event (i.e.
those produced by decays of unstable particles).
\item[ MSTW(140)]{\bf - MSTW(149)} : Same as MSTW(130) - MSTW(139), but now for
charged particles only.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARW(81)] : accumulated average space-time separation of clustered
partons.
\item[ PARW(82)] : accumulated average energy-momentum separation of clustered
partons.
\item[ PARW(83)] : minimum encountered separation of any two clustered
partons with respect to chosen space-time measure (c.f. MSTV(81)).
\item[ PARW(84)] : maximum encountered separation of any two clustered
partons with respect to chosen space-time measure (c.f. MSTV(81)).
\item[ PARW(85)] : minimum encountered separation of any two clustered
partons with respect to chosen energy-momentum measure (c.f. MSTV(82)).
\item[ PARW(86)] : maximum encountered separation of any two clustered
partons with respect to chosen energy-momentum measure (c.f. MSTV(82)).
\item[ PARW(91)]{\bf - PARW(99)} : Gives the `exo(endo)-gamy distribution'
(see Ref. \cite{ms40}) of clusters
formed from partons with `exogamy' indices $e_i, e_j$, such that
$e_{cl} = (e_i+e_j)/2$. Initial beam/target particles have $e = 0\;(1)$, so that
$e_{cl}$ lies in the interval [0,1], which is binned as
$0, 0.05, 0.2, 0.3, 0.45, 0.55, 0.7, 0.8, 0.95, 1$, with
PARW(91)-PARW(99) giving the numbers of clusters
in the bins. Note: only those clusters are counted that are classified as
`fast' (c.f. MSTV(88)).
\item[ PARW(101)]{\bf - PARW(109)} : Same as PARW(91)-PARW(99), but for `slow' clusters
only. The sums PARW(91)+PARW(101), etc., hence give the total numbers.
\item[ PARW(110)]{\bf - PARW(149)} : not used.
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
\medskip
\newpage
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\noindent {\bf 8. Statistics, event study and data analysis:}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\begin{description}
\item[ MSTW(151)]{\bf - MSTW(160)} :
(D=1,21,41,61,91,121,151,181,221,301) Sequential
numbers of time steps, at which a snapshot analysis during the
space-time evolution of the collision system may be performed
in routines VNIANA4 - VNIANA5.
\item[ MSTW(161)] : (D=40) Number of bins for the energy distribution in
$\ln(1/x)$ (where $x$ is the energy fraction) in routine VNIANA2.
\item[ MSTW(162)] : not used.
\item[ MSTW(163)] : (D=40) Number of bins for pair mass distribution of
Bose-Einstein correlations in routine VNIANA2.
\item[ MSTW(164)-MSTW(170)] : not used.
\item[ MSTW(171)] : (D=20) Number of bins for cluster $y$-distribution in
routine VNIANA3.
\item[ MSTW(172)] : not used.
\item[ MSTW(173)] : (D=20) Number of bins for cluster $r_z$-distribution in
routine VNIANA3.
\item[ MSTW(174)] : not used.
\item[ MSTW(175)] : (D=20) Number of bins for cluster $p_\perp$-distribution in
routine VNIANA3.
\item[ MSTW(176)] : not used.
\item[ MSTW(177)] : (D=20) Number of bins for cluster $r_\perp$-distribution in
routine VNIANA3.
\item[ MSTW(178)]{\bf -MSTW(180)} : not used.
\item[ MSTW(181)] : (D=20) Number of bins for parton/hadron $y$-distribution
in routine VNIANA4.
\item[ MSTW(182)] : not used.
\item[ MSTW(183)] : (D=20) Number of bins for parton/hadron $r_z$-distribution
in routine VNIANA4.
\item[ MSTW(184)] : not used.
\item[ MSTW(185)] : (D=20) Number of bins for parton/hadron $p_\perp$-distribution
in routine VNIANA4.
\item[ MSTW(186)] : not used.
\item[ MSTW(187)] : (D=20) Number of bins for parton/hadron $r_\perp$-distribution
in routine VNIANA4.
\item[ MSTW(188)]{\bf - MSTW(200)} : not used.
\medskip
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\item[ PARW(150)]{\bf - PARW(160)} : not used.
\item[ PARW(161)],{\bf PARW(162)} :
(D=0.,6.) Lower and upper bound for energy distribution in $\ln(1/x)$
(where $x$ is the energy fraction of particles), in routine VNIANA2.
\item[ PARW(163)],{\bf PARW(164)} : (D=0.,3. GeV) Lower and upper bound
for pair mass
distribution of Bose-Einstein correlations in routine VNIANA2.
\item[ PARW(165)]{\bf - PARW(170)} : not used.
\item[ PARW(171)],{\bf PARW(172)} : (D=-8.,8.) Lower and upper bound for cluster
$y$-distribution in routine VNIANA3.
\item[ PARW(173)],{\bf PARW(174)} : (D=-40.,40. GeV$^{-1}$)
Lower and upper bound for cluster
$r_z$-distribution in routine VNIANA3.
\item[ PARW(175)],{\bf PARW(176)} : (D=0.,10. GeV)
Lower and upper bound for cluster
$p_\perp$-distribution in routine VNIANA3.
\item[ PARW(177)],{\bf PARW(178)} : (D=0.,40. GeV$^{-1}$)
Lower and upper bound for cluster
$r_\perp$-distribution in routine VNIANA3.
\item[ PARW(179)]{\bf - PARW(180)} : not used.
\item[ PARW(181)],{\bf PARW(182)} : (D=-8.,8.)
Lower and upper bound for parton/hadron
$y$-distribution in routine VNIANA4.
\item[ PARW(183)],{\bf PARW(184)} : (D=-100.,100. GeV$^{-1}$)
Lower and upper bound for
parton/hadron $r_z$-distribution in routine VNIANA4.
\item[ PARW(185)],{\bf PARW(186)} : (D=0.,10. GeV)
Lower and upper bound for parton/hadron
$p_\perp$-distribution in routine VNIANA4.
\item[ PARW(187)],{\bf PARW(188)} : (D=0.,100. GeV$^{-1}$)
Lower and upper bound for parton/hadron $r_\perp$-distribution
in routine VNIANA4.
\item[ PARW(189)]{\bf - PARW(190)} : not used.
\item[ PARW(191)],{\bf PARW(192)} : (D=-20.,20. GeV$^{-1}$)
Lower and upper bound for
$r_z$ in ($r_z\times r_\perp$) grid for density profiles in routine VNIANA5.
\item[ PARW(193)],{\bf PARW(194)} : (D=0.,10. GeV$^{-1}$)
Lower and upper bound for
$r_\perp$ in ($r_z\times r_\perp$) grid for density profiles in
routine VNIANA5.
\item[ PARW(195)]{\bf - PARW(200)} : not used.
\noindent
------------------------------------------------------------------------------------------------------------------------------------
\end{description}
\noindent
--------------------------------------------------------------------------------------------------------------------------------------------
\bigskip
\bigskip
\subsection{Kinematics cuts and selection of subprocesses}
\bigskip
The commonblock VNIKIN contains the arrays VKIN and MSUB, which may be used
to impose specific kinematics cuts and to switch on/off certain
subprocesses, respectively. The notation used in the following
is that 'hat' denotes internal variables in a partonic interaction,
while '*' is for variables in the global CM frame of the system as a whole.
All dimensionful quantities are in units of GeV or GeV$^{-1}$.
Kinematics cuts can be set by the user before the VNIXRUN call. Most of
the cuts are directly related to the kinematical variables used in the
evolution of the system in each event, and in the event data analysis.
Except for the entries VKIN(7) - VKIN(12), the assignments follow the
convention of JETSET/PYTHIA, in order to allow an easy interfacing.
Note that in the current version of VNI, only the values
VKIN(7)-VKIN(12) are actively used. All other quantities are related to
parton collision processes involving one or more partons, which are only
of relevance in reactions with hadrons (nuclei) in the initial state.
As stated before, the latter are presently being worked out, but not yet
included in the package.
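As an illustration of the actively used entries VKIN(7)-VKIN(12), the
following sketch restricts particle rapidities and transverse momenta before
the event generation is started (the numerical values are arbitrary examples,
not recommendations):
\begin{verbatim}
      COMMON/VNIKIN/VKIN(200),MSUB(200),ISET(200)
C Restrict particle rapidities to -2 < y* < 2
C (c.f. VKIN(7), VKIN(8)):
      VKIN(7)  = -2.
      VKIN(8)  =  2.
C Restrict particle transverse momenta to k_perp* < 10 GeV
C (c.f. VKIN(9), VKIN(10)):
      VKIN(9)  =  0.
      VKIN(10) = 10.
\end{verbatim}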
\newpage
\begin{verbatim}
COMMON/VNIKIN/VKIN(200),MSUB(200),ISET(200)
\end{verbatim}
\noindent
{\it Purpose}: To impose kinematics cuts on the particles' interactions
and their space-time evolution. Also, to allow the user to
run the program with a desired subset of processes.
\begin{description}
\item[ VKIN(1),]{\bf VKIN(2) :} (D=2.,-1.) range of allowed
$\hat{m} = \sqrt{\hat{s}}$ values.
If VKIN(2) $<$ 0, the upper limit is inactive.
\item[ VKIN(3),]{\bf VKIN(4) :} (D=0.,-1.) range of allowed $\hat{p}_\perp$
values for parton collisions, where the transverse momentum
$\hat{p}_\perp$ is defined
in the rest-frame of the 2-body interaction. If VKIN(4) $<$ 0,
the upper limit is inactive. For processes which are singular
in the limit $\hat{p}_\perp\rightarrow 0$ (see VKIN(6)), VKIN(5) provides
an additional constraint. The VKIN(3) and VKIN(4) limits can
also be used in $2 \rightarrow 1 \rightarrow 2$ processes.
Here, however, the product
masses are not known and hence assumed vanishing in the event
selection. The actual $\hat{p}_\perp$ range for massive products is thus
shifted downwards with respect to the nominal one.
\item[ VKIN(5) :]
(D=1.) lower cutoff on $\hat{p}_\perp$ values, in addition to the
VKIN(3) cut above, for processes which are singular in the
limit $\hat{p}_\perp \rightarrow 0$ (see VKIN(6)).
\item[ VKIN(6) :]
(D=1.) 2-body parton collision processes which do not proceed solely
via an intermediate resonance (i.e. are not $2 \rightarrow 1 \rightarrow 2$
processes)
are classified as singular in the limit $\hat{p}_\perp\rightarrow 0$,
if either
or both of the two final state products has a mass $m$ $<$ VKIN(6).
\\
\item[ VKIN(7),]{\bf VKIN(8) :}
(D=-100.,100.) range of allowed particle rapidities
$y^\ast$ in the CM frame of the overall system.
\item[ VKIN(9),]{\bf VKIN(10) :}
(D=0.,1000.) range (in GeV) of allowed particle
transverse momenta $k_\perp^\ast$ perpendicular to the
$z$-axis in the global
CM frame of the event.
\item[ VKIN(11),]{\bf VKIN(12) :}
(D=-1000.,1000.) range (in GeV$^{-1}$) of allowed
longitudinal positions $z^\ast$ of particles from the CM of the global
collision system.
\item[ VKIN(13),]{\bf VKIN(14) :}
(D=0.,1000.) range (in GeV$^{-1}$) of allowed
transverse distances $r_\perp^\ast$ perpendicular to the $z$-axis in the CM
frame of the global collision system.
\\
\item[ VKIN(21),]{\bf VKIN(22) :}
(D=0.,1.) range of allowed $x_A$ (Bjorken-$x$) values for the parton
of beam-side $A$ that enters a parton collision.
\item[ VKIN(23),]{\bf VKIN(24) :}
(D=0.,1.) range of allowed $x_B$ (Bjorken-$x$) values for the parton
of target-side $B$ that enters a parton collision.
\item[ VKIN(25),]{\bf VKIN(26) :}
(D=-1.,1.) range of allowed Feynman $x_F$ values,
where $x_F = x_A-x_B$.
\item[ VKIN(27),]{\bf VKIN(28) :}
(D=-1.,1.) range of allowed $\cos(\hat{\theta})$ values
in a $2\rightarrow 2$ parton collision, where $\hat{\theta}$
is the scattering
angle in the rest-frame of the 2-body collision.
\\
\item[ VKIN(31),]{\bf VKIN(32) :}
(D=2.,-1.) range of allowed $\hat{m}'$ values, where
$\hat{m}'$ is the mass of the complete three- or four-body final state
in $2 \rightarrow 3$ or $2 \rightarrow 4$ processes
(while $\hat{m}$ (without a prime), constrained in VKIN(1)
and VKIN(2), here corresponds to the one- or two-body central
system). If VKIN(32) $<$ 0, the upper limit is inactive.
\item[ VKIN(35),]{\bf VKIN(36) :}
(D=0.,-1.) range of allowed $\mbox{abs}(\hat{t}) = -\hat{t}$
values in $2 \rightarrow 2$ processes.
Note that for deep inelastic scattering
this is nothing but the $Q^2$ scale of the photon vertex,
in the limit that initial and
final state radiation is neglected. If VKIN(36) $<$ 0, the upper
limit is inactive.
\item[ VKIN(37),]{\bf VKIN(38) :}
(D=0.,-1.) range of allowed $\mbox{abs}(\hat{u}) = - \hat{u}$
values in $2 \rightarrow 2$ processes. If VKIN(38) $<$ 0,
the upper limit is inactive.
\\
\item[ VKIN(39) -]{\bf VKIN(100) :}
currently not used.
\\
\item[ VKIN(101) -]{\bf VKIN(200) :}
reserved for internal use of storing kinematical
variables of an event.
\\
\\
\item[ MSUB(ISUB) :] array to be filled
when MSTV(14) = 1 (for MSTV(14) = 0, the
array MSUB is initialized in VNIXRIN automatically) to choose which
subset of subprocesses to include in the generation. The ordering
follows the ISUB code given before in Table 3.
\\
{\bf = 0 :} the subprocess ISUB is {\it excluded}.
\\
{\bf = 1 :} the subprocess ISUB is {\it included}.
\\
\item[ ISET(ISUB) :] gives the type of kinematical variable selection scheme
used for subprocess ISUB.
\\
{\bf = 1 :} 2 $\rightarrow$ 1 processes (parton fusion processes).
\\
{\bf = 2 :} 2 $\rightarrow$ 2 processes (hard and soft parton scattering processes).
\\
{\bf = 3 :} 1 $\rightarrow$ 2 processes (parton branching processes).
\\
{\bf = -1 :} process is not yet implemented, but space is reserved.
\\
{\bf = -2 :} process is not defined.
\end{description}
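Several of the cut variables above are tied together by standard leading-order parton kinematics, which helps in translating a desired physics region into consistent VKIN limits. The following Python sketch is purely illustrative (it is not part of VNI, and the function name is ours); it collects the familiar massless relations $\hat{s}=x_Ax_Bs$, $x_F=x_A-x_B$, $\hat{t}=-\frac{\hat{s}}{2}(1-\cos\hat{\theta})$, $\hat{u}=-\frac{\hat{s}}{2}(1+\cos\hat{\theta})$, and $\hat{p}_\perp^2=\hat{t}\hat{u}/\hat{s}$:

```python
import math

def parton_kinematics(xa, xb, s, costh):
    """Illustrative sketch (not part of VNI): massless leading-order
    relations between the parton-level cut variables in VKIN."""
    shat = xa * xb * s                  # subsystem mass squared, cf. VKIN(1)-(2)
    xf = xa - xb                        # Feynman x, cf. VKIN(25)-(26)
    y = 0.5 * math.log(xa / xb)         # rapidity of the parton pair
    that = -0.5 * shat * (1.0 - costh)  # cf. VKIN(35)-(36) and VKIN(27)-(28)
    uhat = -0.5 * shat * (1.0 + costh)  # cf. VKIN(37)-(38)
    pt = math.sqrt(that * uhat / shat)  # transverse momentum, cf. VKIN(3)-(5)
    return {"shat": shat, "xf": xf, "y": y,
            "that": that, "uhat": uhat, "pt": pt}
```

For instance, at $\sqrt{s}=546$ GeV with $x_A=0.3$ and $x_B=0.1$ one finds $x_F=0.2$; product masses are neglected throughout, in the same spirit as the remark under VKIN(3) and VKIN(4) above.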
\bigskip
\bigskip
\subsection{Instructions on how to use the program}
Two example programs are included in the package VNI-3.1:
{\tt vnixple1.f} and {\tt vnixple2.f}.
The first one, {\tt vnixple1.f}, is designed
as a generic skeleton program that calls the steering routines VNIXRIN,
VNIXRUN and VNIXFIN (c.f. Sec. 3.4) on a black-box level.
The second example, {\tt vnixple2.f}, illustrates more specifically how
to extract data from a simulation by using histograms.
Below the simple example {\tt vnixple1.f} is printed.
This example
generates 1000 $p\bar{p}$ events at $\sqrt{s} = 546$ GeV.
Each event is evolved for a time range
of 0 - 30 fm in the $p\bar{p}$ center-of-mass frame (the global CM frame).
\begin{verbatim}
PROGRAM VNIXPL1
C...Purpose: example for p+p~ collisions at CERN collider with 546 GeV.
INCLUDE 'vni1.inc'
CHARACTER*(8) FRAME,BEAM,TARGET
C...Required user input:
C.....number of events and final time (in fm).
NEVT = 1000
TFIN = 30.
C.....colliding particles, global Lorentz frame, and total energy.
BEAM = 'p+ '
TARGET= 'p- '
FRAME = 'cms '
WIN = 546.
C.....Specify your change of default parameters here:
C ..........
C...Initialize simulation procedure.
CALL VNIXRIN(NEVT,TFIN,FRAME,BEAM,TARGET,WIN)
C...Begin loop over of events.
DO 500 IEVT=1,NEVT
C.....Generate collision event.
CALL VNIXRUN(IEVT,TFIN)
C.....Analysis of event as a whole.
C ..........
C...End loop over events.
500 CONTINUE
C...Finish up simulation procedure.
CALL VNIXFIN()
END
\end{verbatim}
This example is rather self-explanatory, and will become
clearer in the following subsections. The "$\ldots$"
indicate where one can interface with the program
as a whole on a black-box level,
without needing to dig deeper into its subroutine structure. In particular, one
can insert histogramming here (either using the included
portable package "vnibook.f", or any other package, such as
HBOOK or PAW from the CERN library).
Note also the pre-programmed histogram options
discussed in Sec. 3.7 (switches MSTV(150)-MSTV(200)) and Appendix B (subroutines VNIANA1-VNIANA5).
In the above example only the final state at the end of each event
is collected. If the user is interested in the actual
space-time development of the collision events, he/she can access it by
inserting the following extension into the loop over events:
\begin{verbatim}
C...Generate collision event; analyze evolution every 5 fm.
TSTEP=5.
NTIM=20
C.....Begin loop over time intervals.
DO 400 ITIM=1,NTIM
TRUN=ITIM*TSTEP
IF(TRUN.GT.TFIN.OR.ITIM.EQ.NTIM) TRUN=TFIN
CALL VNIXRUN(IEVT,TRUN)
C.....Analysis of time development.
C ..........
C.....End loop over time intervals.
400 IF(TRUN.EQ.TFIN) GOTO 450
450 CONTINUE
\end{verbatim}
\noindent
{\bf Remarks:}
\begin{description}
\item[(i)]
In case one desires to interface with the JETSET/PYTHIA programs,
one should write a little program with common block
\begin{verbatim}
COMMON/LUJETS/N,K(4000,5),P(4000,5),V(4000,5)
\end{verbatim}
and which copies the variables $N$, $K$, $P$, and $V$ of the particle
record VNIREC directly. Be aware, however, that VNIREC is dimensioned
for 100000 entries, whereas LUJETS has space for only 4000
entries.
\\
\item[(ii)]
In case one desires to interface with the HERWIG program, one
should write a little program with common block
\begin{verbatim}
PARAMETER (NMXHEP=2000)
COMMON/HEPEVT/NEVHEP,NHEP,ISTHEP(NMXHEP),IDHEP(NMXHEP),
&JMOHEP(2,NMXHEP),JDAHEP(2,NMXHEP),PHEP(5,NMXHEP),VHEP(4,NMXHEP)
\end{verbatim}
and which calls the routine HWHEPC($MCONV,IP,NP$) with $MCONV = 1$,
$IP = 1$, and $NP = N$. Again, watch the dimensions: 100000 versus 2000.
\end{description}
\bigskip
\bigskip
\bigskip
\subsection{Example for a typical collision event}
As a result of running the above example program {\tt vnixple1.f},
a number of files are created in the working directory, all of which are called
VNI???.DAT, where "???" stands for either 3 letters or 3 digits.
The latter are data files that contain event statistics and other information
on the performance of a run
(see Sec. 3.7, switches MSTV(150)-MSTV(200), and Appendix B for further options).
By default only a few files are created:
\begin{description}
\item
VNIRUN.DAT containing
the main summary of a simulation with print-out of results,
\item
VNICOL.DAT containing a detailed summary of all parton collisions that occurred,
\item
VNITIM.DAT containing the generated time-step grid and its real-time
translation,
\item
VNIRLU.DAT containing the status of the random number generator,
\item
VNIERR.DAT containing error messages in case problems occur.
\end{description}
Most important is the file VNIRUN.DAT, which
gives a summary of what
occurred during a simulation on the parton cascade level, the cluster
formation level, and the hadron decay level.
One can obtain a listing of an event by simply calling VNILIST(1)
at the end of, or during, an event,
which gives a first impression
(for a complete listing see Appendix D):
\begin{description}
\item[(i)]
The event record begins with the beam/target particles, here $p$ and $\bar{p}$,
which are then transformed to the center-of-mass frame (here it coincides
with the global frame of reference).
\item[(ii)]
Then follow the
three valence quarks of $p$, and after that its intrinsic gluons and
seaquark-pairs. After that the initial-state valence quarks, gluons and
seaquarks of $\bar{p}$ are listed.
\item[(iii)]
Then comes the history of the cascade evolution, including
hard scatterings with shower (bremsstrahlung), subsequent
cluster-formation of the materialized partons
and hadron decay of these parton clusters.
\item[(iv)]
Finally, all non-materialized (non-interacted) partons
are recombined into a beam cluster and a target cluster that
undergo a soft fragmentation, again via cluster-hadron decay.
\item[(v)]
At the very end the sum of charge, three-momentum, energy, and
the total invariant mass of the collision system is printed.
\end{description}
\bigskip
\bigskip
\newpage
\begin{center}
Example of the particle record for a $p\bar{p}$ collision event
at $\sqrt{s} = 546$ GeV.
\end{center}
\begin{verbatim}
Particle listing (summary: momentum space P in GeV)
event no. 1 time step 200 at time 34.75 fm
I particle KS KF C A P_x P_y P_z E M
1 (p+) 15 2212 0 0 .000 .000 272.998 273.000 .938
2 (p~-) 15 -2212 0 0 .000 .000 -272.998 273.000 .938
==============================================================================
3 (p+) 15 2212 0 0 .000 .000 272.998 273.000 .938
==============================================================================
4 (p~-) 15 -2212 0 0 .000 .000 -272.998 273.000 .938
==============================================================================
5 (u) 13 2 1 0 .650 .048 1.786 2.319 .006
6 (d) 13 1 1 0 -.593 -.165 2.092 2.587 .010
7 (d~) 13 -1 0 3 -.135 -.452 6.724 6.886 -.490
8 (d) 13 1 3 0 .413 -.211 .408 .935 -.484
9 (g) 14 21 1 1 1.751 .556 2.232 3.398 -.682
.
.
==============================================================================
48 (d~) 13 -1 0 2 .526 -.150 -16.659 15.157 .010
49 (u~) 13 -2 0 3 -.121 -.114 -4.941 4.106 -.174
50 (u) 13 2 3 0 -.252 -.044 -1.576 .935 -.249
51 (g) 13 21 3 2 -.110 .229 -12.733 11.449 -.234
.
.
==============================================================================
.
.
83 (g) 14 21 2 1 2.052 2.858 1.999 1.045 -3.909
84 (d) 14 1 1 0 .000 .000 -8.008 8.008 .000
85 (d) 14 1 2 0 .000 .000 8.008 8.008 .000
86 (CMshower) 11 94 0 0 2.661 3.728 -3.710 8.110 5.571
87 (d) 14 1 2 0 1.128 .073 -1.364 3.504 3.023
88 (cluster) 11 91 0 0 1.116 1.615 -2.687 3.412 .750
89 (cluster) 11 91 0 0 -.450 -.605 -.228 1.213 .922
90 (cluster) 11 91 0 0 -1.528 -.662 1.381 2.181 .280
91 (cluster) 11 91 0 0 1.252 .689 -1.272 3.136 2.484
92 (cluster) 11 91 0 0 .110 -.650 5.219 5.796 2.433
93 pi0 6 111 0 0 .660 .523 -.880 1.171 .135
94 pi0 6 111 0 0 .461 1.090 -2.094 2.377 .135
95 rho- 6 -213 0 0 -.334 -.562 -.332 1.068 .769
96 pi0 6 111 0 0 -.110 -.046 -.182 .186 .135
97 pi- 6 -211 0 0 -.678 -.312 .548 1.066 .140
98 pi+ 6 211 0 0 -.845 -.352 .547 1.203 .140
.
.
==============================================================================
109 (Beam-REM) 11 92 0 0 -.315 .737 274.428 281.871 70.164
110 (Targ-REM) 11 93 0 0 -.907 -1.587 -276.508 247.044 -116.181
111 (CMF) 15 100 0 0 -1.222 -.850 -2.080 528.915 528.909
==============================================================================
112 (cluster) 11 91 0 0 -.315 .737 274.428 281.871 70.164
113 (cluster) 11 91 0 0 -.907 -1.587 -276.508 247.044 -116.181
114 (Beam-REM) 11 92 0 0 .411 .648 127.485 127.543 3.762
115 (a_1-) 17 -20213 0 0 .493 .228 19.200 19.250 1.275
116 (Delta+) 17 2214 0 0 -.081 .420 108.286 108.293 1.232
117 pi0 7 111 0 0 .490 .124 4.473 4.831 .135
118 (rho-) 17 -213 0 0 .005 .102 14.584 14.605 .769
119 pi0 7 111 0 0 -.160 .015 35.475 37.042 .135
120 p+ 7 2212 0 0 .085 .402 72.525 75.579 .938
121 pi0 7 111 0 0 -.122 -.268 8.519 9.014 .135
122 pi- 7 -211 0 0 .133 .367 5.779 6.174 .140
123 (Targ-REM) 11 93 0 0 -1.252 -.510 -63.766 63.825 2.384
124 (Delta~-) 17 -2214 0 0 -.779 -.303 -21.201 21.253 1.232
125 (eta) 17 221 0 0 -.473 -.208 -42.565 42.572 .549
126 pi0 7 111 0 0 -.072 -.254 -3.205 3.199 .135
127 p~- 7 -2212 0 0 -.701 -.052 -18.281 18.903 .938
128 pi0 7 111 0 0 -.179 .069 -12.277 12.622 .135
129 pi0 7 111 0 0 -.060 -.060 -12.901 13.269 .135
130 pi0 7 111 0 0 -.225 -.220 -17.816 18.383 .135
.
.
.
336 (rho+) 17 213 0 0 -.263 .404 -5.474 5.549 .769
337 (omega) 17 223 0 0 .067 -.237 -31.485 31.496 .783
338 pi0 7 111 0 0 -.342 .522 -5.018 5.113 .135
339 pi+ 7 211 0 0 .085 -.120 -.742 .657 .140
340 pi0 7 111 0 0 -.309 .029 -10.513 10.790 .135
==============================================================================
sum: .00 .000 .000 .000 546.000 546.000
\end{verbatim}
More information on
pre-programmed
print-out of results concerning particle distributions, time evolution,
observables, etc., can be switched on with the switches MSTV(150)-MSTV(200), as
explained in the previous Sec. 3.7.
\bigskip
\bigskip
\bigskip
\bigskip
\noindent {\bf ACKNOWLEDGEMENTS}
\medskip
The present program is, with respect to both its
physics and computational aspects, the product of several
years of gradual experience and development,
during which I profited from discussions with many colleagues, most of all
John Ellis, Miklos Gyulassy, and Berndt M\"uller.
Thank you!
This work was supported in part by the D.O.E. under contract no.
DE-AC02-76H00016.
\bigskip
\newpage
\newpage
\section{Introduction}
\label{sec:intro}
The basic object in string theory is a one-dimensional extended object, the string. The harmonics of the vibrating string correspond to elementary particles with different masses and quantum numbers. A string can support infinitely many harmonics, and hence string theory contains an infinite number of elementary particles. We can therefore consider string theory as a framework for describing the interactions of infinitely many elementary particles. String field theory describes the dynamics of this system of infinitely many elementary particles using the language of quantum field theory \cite{Witten:1985cc,Thorn:1988hm,Zwiebach:1992ie}. Compared to the conventional formulation of string perturbation theory \cite{Friedan:1985ge}, string field theory has the advantage that it provides us with the standard tools of quantum field theory for computing S-matrix elements that are free from infrared divergences \cite{Pius:2013sca,Pius:2014iaa,Sen:2014pia,Pius:2014gza,Sen:2014dqa,Sen:2015uoa,Sen:2015uaa,Sen:2015hha,deLacroix:2017lif}. Furthermore, the S-matrix computed using string field theory is unitary \cite{Pius:2016jsl,Pius:2018crk,Sen:2016ubf,Sen:2016bwe,Sen:2016uzq}. Since string field theory is based on a Lagrangian, it also has the potential to open the door towards the non-perturbative regime of string theory \cite{Schnabl:2005gv}, even though no one has yet succeeded in studying the non-perturbative behaviour of closed strings using closed string field theory \cite{Sen:2016qap,Yang:2005rx}. Moreover, string field theory can be used for a first-principles construction of the effective actions describing the low-energy dynamics of strings \cite{Sen:2014dqa,Sen:2015hha,Sen:2016qap}. \par
The gauge transformations of closed string field theory form a complicated infinite-dimensional gauge group. Consequently, the quantization of closed string field theory requires the sophisticated machinery of the Batalin-Vilkovisky (BV) formalism \cite{Batalin:1981jr,Batalin:1984jr,Barnich:1994db,Barnich:1994mt,Henneaux:1989jq,Henneaux:1992ig,Gomis:1994he}. The quantum BV master action for closed string field theory can be obtained by solving the quantum BV master equation. The perturbative solution of the quantum BV master action for the closed bosonic string field theory has already been constructed \cite{Zwiebach:1992ie}. The striking feature of closed string field theory is that, although the quantum BV master action contains a kinetic term and an infinite number of interaction terms, the theory has only one independent parameter, the closed string coupling. The interaction strengths (coupling constants) of the elementary interactions in closed string field theory are expressed as integrals over the distinct two-dimensional world-sheets describing the elementary interactions of the closed strings.\par
The collection of world-sheets describing the elementary interactions of the closed strings is called the string vertex. A consistent set of string vertices provides a cell decomposition of the moduli space of Riemann surfaces \cite{Zwiebach:1992ie}. The main challenge in constructing string field theory is to find a consistent set of string vertices that gives rise to a suitable cell decomposition of the moduli spaces of Riemann surfaces. In principle, all the string vertices that provide such a cell decomposition of the moduli space can be constructed using Riemann surfaces endowed with the metric solving the generalized minimal area problem \cite{Zwiebach:1992ie}. Unfortunately, our current understanding of minimal area metrics is insufficient to obtain a calculable formulation of closed string field theory \footnote{Recently, the cubic vertex of heterotic string field theory has been constructed by using $SL(2,\mathbb{C})$ local coordinate maps, which in turn have been used to construct the one-loop tadpole string vertex in heterotic string field theory \cite{Erler:2017pgf}. The cubic string vertex defined this way differs from the cubic string vertex defined by the minimal area metric.}. However, there exists an alternative construction of the string vertices using Riemann surfaces endowed with metrics of constant curvature $-1$ \cite{Moosavian:2017fta,Moosavian:2017qsp}. These can be characterized using the Fenchel-Nielsen coordinates for the Teichm\"uller space and the local coordinates around the punctures on the world-sheets in terms of the hyperbolic metric. They can be used to construct a closed string field theory with approximate gauge invariance. \par
The interaction strengths in closed string field theory are obtained by integrating the off-shell string measure over the region in the moduli space that corresponds to the distinct two-dimensional world-sheets describing the elementary interactions of the closed strings. The explicit evaluation of the interaction strengths requires:
\begin{enumerate}
\item A convenient choice of parametrization of the Teichm\"uller space and the conditions on them that specify the region of the moduli space inside the Teichm\"uller space.
\item An explicit description of the region inside the moduli space that corresponds to the string vertex, and a consistent choice of local coordinates around the punctures on the Riemann surfaces belonging to the string vertex.
\item An explicit procedure for constructing the off-shell string measure in terms of the chosen coordinates of the moduli space.
\item Finally, an explicit procedure for integrating the off-shell string measure over the region inside the moduli space that corresponds to the string vertex.
\end{enumerate}
In this paper, we provide detailed descriptions for each of them. \\
\noindent{\underline{\bf Summary of the results}}: The main results of this paper are as follows:
\begin{itemize}
\item We explicitly construct the off-shell string measure in terms of the Fenchel-Nielsen coordinates for the Teichm\"uller space using a specific choice of local coordinates that is encoded in the definition of the string vertices.
\item The interaction strengths in closed string field theory are obtained by integrating the off-shell string measure, which is an MCG-invariant object, over the region in the moduli space that corresponds to the Riemann surfaces describing the elementary interactions of the closed strings. The moduli space is the quotient of the Teichm\"uller space by the action of the mapping class group (MCG). However, in the generic case, an explicit fundamental region for the action of the MCG inside the Teichm\"uller space is not known. Therefore, integrating an MCG-invariant function over a region in the moduli space of Riemann surfaces is not a straightforward operation to perform. In this paper, we discuss a way to bypass this difficulty and obtain an effective expression for the integral, using the prescription introduced by M.~Mirzakhani \cite{Mirzakhani:2006fta} for performing integration over the moduli space of hyperbolic Riemann surfaces parametrized using the Fenchel-Nielsen coordinates.
\item We show that this integration method has an important property when we restrict the integration to a thin region around the boundary of the moduli space. Using this property, we find an effective expression for the integral of the off-shell string measure over the region inside the moduli space that corresponds to the string vertex.
\end{itemize}
In short, we describe a systematic method for evaluating the quantum BV master action for closed bosonic string field theory. \\
\noindent{\underline{\bf Organization of the paper}}: This paper is organized as follows. In section \ref{QBVMA}, we briefly review the general construction of the quantum BV master action for closed bosonic string field theory and explain what we mean by the explicit evaluation of the quantum action. In section \ref{vertices}, we discuss the construction of string vertices using hyperbolic Riemann surfaces described in \cite{Moosavian:2017qsp}. In section \ref{OSM}, we describe the explicit construction of the off-shell string measure in terms of the Fenchel-Nielsen coordinates of the Teichm\"uller space. In section \ref{IOMS}, we discuss the concept of effective string vertices and the practical procedure for evaluating the corrected interaction vertices. In section \ref{disc}, we provide a brief summary of the paper and mention some future directions. In appendix \ref{hyperbolic}, we review the theory of hyperbolic Riemann surfaces. In appendix \ref{MMidentity} and appendix \ref{LuoTan}, we discuss two classes of non-trivial identities satisfied by the lengths of simple closed geodesics on a hyperbolic Riemann surface.
\section{The quantum BV master action}\label{QBVMA}
The quantum BV master action for closed string field theory is a functional of the fields and the antifields in the theory. The {\it fields} and {\it antifields} are specified by splitting the string field $|\Psi\rangle$, which is an arbitrary element in the Hilbert space of the worldsheet CFT \cite{Thorn:1988hm}, as
\begin{equation}\label{stringfieldanti}
|\Psi\rangle=|\Psi_-\rangle+|\Psi_+\rangle.
\end{equation}
Both $|\Psi_-\rangle$ and $|\Psi_+\rangle$ are annihilated by $b_0^-$ and $L_0^-$. The string field $|\Psi_-\rangle$ contains all the fields and the string field $|\Psi_+\rangle$ contains all the antifields. They can be decomposed as follows
\begin{align}\label{psipmdecomp}
|\Psi_-\rangle&=\sideset{}{'}\sum_{G(\Phi_s)\leq 2}|\Phi_s\rangle \psi^s,\nonumber\\
|\Psi_+\rangle&=\sideset{}{'}\sum_{G(\Phi_s)\leq 2}|\tilde \Phi_s\rangle \psi^*_s,
\end{align}
where $|\tilde \Phi_s\rangle=b_0^-|\Phi^c_s\rangle$, such that $\langle\Phi_r^c|\Phi_s\rangle=\delta_{rs}$. The state $\langle\Phi_r^c|$ is the conjugate state of $|\Phi_r\rangle$. The sum in (\ref{psipmdecomp}) extends over the basis states $|\Phi_s\rangle$ with ghost number less than or equal to two. The prime over the summation sign reminds us that the sum is only over those states that are annihilated by $L_0^-$. The target space field $\psi^*_s$ is the antifield that corresponds to the target space field $\psi^s$. The target space ghost number of the fields $g^t(\psi^s)$ takes all possible non-negative values and that of antifields $g^t(\psi^*_s)$ takes all possible negative values. They are related via the following relation
\begin{equation}\label{ghostnumberaf}
g^t(\psi_s^*)+g^t(\psi^s)=-1.
\end{equation}
Therefore, the statistics of the antifield is opposite to that of the field. Moreover, it is possible to argue that corresponding to each target space field $\psi^s$ there is a unique antifield $\psi^*_s$ \cite{Thorn:1988hm}.\par
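As a concrete instance of the ghost number relation (\ref{ghostnumberaf}) (a standard consequence which we spell out here): a field with target space ghost number zero is paired with an antifield of ghost number $-1$, a field with ghost number one with an antifield of ghost number $-2$, and so on,
\begin{equation*}
g^t(\psi^s)=0 \;\Longrightarrow\; g^t(\psi^*_s)=-1, \qquad
g^t(\psi^s)=1 \;\Longrightarrow\; g^t(\psi^*_s)=-2 .
\end{equation*}
In particular, a field and its antifield always differ by an odd ghost number, consistent with their opposite statistics.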
The quantum BV master action must be a solution of the following quantum BV master equation for the closed bosonic string theory
\begin{equation}\label{mastereqbcsft}
\frac{\partial _r S}{\partial \psi^s}\frac{\partial _l S}{\partial \psi^*_s}+\hbar\frac{\partial _r }{\partial \psi^s}\frac{\partial _l S}{\partial \psi^*_s}=0,
\end{equation}
where the target space field $\psi^*_s$ is the antifield corresponding to the field $\psi^s$ and $\partial_r, \partial_l$ denote the right and left derivatives respectively. The perturbative solution of this equation in the closed string coupling $g_s$ is given by \cite{Zwiebach:1992ie}:
\begin{equation}\label{cbstringfieldaction}
S(\Psi)=g_s^{-2}\left[\frac{1}{2}\langle\Psi|c_0^-Q_B|\Psi \rangle+\sum_{g\geq 0}(\hbar g_s^2)^g\sum_{n\geq 1}\frac{g_s^n}{n!}\{\Psi^n\}_g\right],
\end{equation}
where $\Psi$ denotes the string field (\ref{stringfieldanti}) having arbitrary ghost number that is built using target space fields and antifields carrying arbitrary ghost numbers. $\{\Psi^n\}_g$ denotes the $g$-loop elementary interaction vertex $\{\Psi_1,\cdots,\Psi_n\}_{g}$ for $n$ closed string fields with $\Psi_i=\Psi$ for $i=1,\cdots,n$. The $g$-loop elementary interaction vertex $\{\Psi_1,\cdots,\Psi_n\}_{g}$ for $n$ closed string fields can be defined as the integral of the off-shell string measure $ \Omega^{(g,n)}_{6g-6+2n}\left(|\Psi_1\rangle,\cdots,|\Psi_n\rangle\right)$ over the string vertex $\mathcal{V}_{g,n}$:
\begin{equation}\label{bstringvertex}
\{\Psi_1,\cdots,\Psi_n\}_{g}\equiv\int_{\mathcal{V}_{g,n}} \Omega^{(g,n)}_{6g-6+2n}\left(|\Psi_1\rangle,\cdots,|\Psi_n\rangle\right),
\end{equation}
where $\Psi_1,\cdots,\Psi_n$ denote the off-shell closed string states $|\Psi_1\rangle,\cdots,|\Psi_n\rangle$. The definition of the string vertices and the construction of the off-shell measure are discussed below.
\subsection{The string vertex $ \mathcal{V}_{g,n}$}
The string vertex $ \mathcal{V}_{g,n}$ for the closed strings can be understood as a collection of genus $g$ Riemann surfaces with $n$ punctures that belong to a specific region inside the compactified moduli space $\overline{\mathcal{M}}_{g,n}$. We can define the string vertices by stating the properties that they must satisfy \cite{Zwiebach:1992ie}:
\begin{itemize}
\item The string vertices must not contain Riemann surfaces that are arbitrarily close to the degeneration.
\item The Riemann surfaces that belong to the string vertices must be equipped with a specific choice of local coordinates around each of their punctures. The coordinates are defined only up to a constant phase, and they vary continuously over the set $\mathcal{V}_{g,n}$.
\item The local coordinates around the punctures on the Riemann surfaces that belong to the string vertices must be independent of the labeling of the punctures. Moreover, if a Riemann surface $\mathcal{R}$ with labeled punctures is in $ \mathcal{V}_{g,n}$ then copies of $\mathcal{R}$ with all other inequivalent labelings of the punctures also must be included in $ \mathcal{V}_{g,n}$.
\item If a Riemann surface belongs to the string vertex, then its complex conjugate also must be included in the string vertex. A complex conjugate Riemann surface of a Riemann surface $\mathcal{R}$ with coordinate $z$ can be obtained by using the anti-conformal map $z\to -\overline{z}$.
\end{itemize}
The string vertices with the above-mentioned properties must also satisfy the following geometric identity, which can be understood as the geometric realization of the quantum BV master equation (\ref{mastereqbcsft}):
\begin{equation}\label{bvmastercond}
\partial \mathcal{V}_{g,n}=-\frac{1}{2}\mathop{\sum_{g_1,g_2}}_{g_1+g_2=g}\mathop{\sum_{n_1,n_2}}_{n_1+n_2=n}\mathbf{ S}[\{ \mathcal{V}_{g_1,n_1}, \mathcal{V}_{g_2,n_2}\}]-\Delta\mathcal{V}_{g-1,n+2},
\end{equation}
where $\partial \mathcal{V}_{g,n}$ denotes the boundary of the string vertex $\mathcal{V}_{g,n}$ and $\mathbf{ S}$ represents the operation of summing over all inequivalent permutations of the external punctures. $\{ \mathcal{V}_{g_1,n_1},\mathcal{V}_{g_2,n_2}\}$ denotes the set of Riemann surfaces obtained by taking a Riemann surface from the string vertex $\mathcal{V}_{g_1,n_1}$ and a Riemann surface from the string vertex $\mathcal{V}_{g_2,n_2}$ and gluing them by identifying the regions around one of the punctures from each via the special plumbing fixture relation:
\begin{equation}\label{specialplumbing}
zw=e^{i\theta},\qquad\qquad0\leq\theta\leq2\pi,
\end{equation}
where $z$ and $w$ denote the local coordinates around the punctures that are being glued. The special plumbing fixture corresponds to the locus $|t|=1$ of the plumbing fixture relation
\begin{equation}\label{plumbing}
zw=t,\qquad\qquad t\in \mathbb{C},\qquad\qquad 0\leq |t|\leq 1.
\end{equation}
The resulting surface has genus $g=g_1+g_2$ and $n=n_1+n_2-2$ punctures. $\Delta$ denotes the operation of taking a pair of punctures on a Riemann surface that belongs to the string vertex $ \mathcal{V}_{g-1,n+2}$ and gluing them via the special plumbing fixture relation (\ref{specialplumbing}). Therefore, the first term of (\ref{bvmastercond}) represents the gluing of two distinct surfaces via the special plumbing fixture and the second term represents the special plumbing fixture applied to a single surface. \par
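To see why the limit $|t|\to 0$ in the plumbing fixture relation corresponds to degeneration (a standard observation, recalled here for orientation), one can map the plumbing region to a flat cylinder: parametrizing the annulus $|t|\leq |z|\leq 1$ by $z=e^{-\tau+\mathrm{i}\sigma}$, the region becomes a cylinder of circumference $2\pi$ and height
\begin{equation*}
\tau_{\rm max}=-\ln |t|,
\end{equation*}
which grows without bound as $|t|\to 0$. The special plumbing fixture (\ref{specialplumbing}), with $|t|=1$, therefore glues surfaces with a cylinder of zero height, i.e. as far from degeneration as the plumbing family allows. \par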
{\it The geometric condition (\ref{bvmastercond}) demands that the set of Riemann surfaces that belong to the boundary of a string vertex of dimension, say $d$, must agree with the union of the sets of surfaces of dimension $d$ obtained by applying the special plumbing fixture construction (\ref{specialplumbing}) only once to the surfaces belonging to the lower-dimensional string vertices, both in their moduli parameters and in their local coordinates around the punctures}.
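As a quick consistency check of this condition (our own bookkeeping, not part of the original construction), one can count dimensions. The string vertex $\mathcal{V}_{g,n}$ has real dimension $6g-6+2n$, so its boundary has dimension $6g-7+2n$. Gluing surfaces from $\mathcal{V}_{g_1,n_1}$ and $\mathcal{V}_{g_2,n_2}$ via (\ref{specialplumbing}) adds one real parameter, the twist angle $\theta$:
\begin{equation*}
(6g_1-6+2n_1)+(6g_2-6+2n_2)+1=6g-7+2n,\qquad g=g_1+g_2,\quad n=n_1+n_2-2.
\end{equation*}
Similarly, for the $\Delta$ term, $\big(6(g-1)-6+2(n+2)\big)+1=6g-7+2n$. Both sides of (\ref{bvmastercond}) thus describe sets of the same dimension.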
\subsection{The off-shell string measure $ \Omega^{(g,n)}_{6g-6+2n}$}\label{offmeasure}
The off-shell string measure $ \Omega^{(g,n)}_{6g-6+2n}\left(|\Psi_1\rangle,\cdots,|\Psi_n\rangle\right)$ is constructed using $n$ vertex operators with arbitrary conformal dimensions. Consequently, the off-shell string measure depends on the choice of local coordinates around the punctures on the Riemann surface. Therefore, the integration measure of an off-shell amplitude is not a genuine differential form on the moduli space $\mathcal{M}_{g,n}$, because the moduli space does not know about the various choices of local coordinates around the punctures. Instead, we need to consider it as a differential form defined on a section of a larger space $\widehat{\mathcal{P}}_{g,n}$. This space is defined as a fiber bundle over $\mathcal{M}_{g,n}$. The fiber direction of the fiber bundle $\pi: \widehat{\mathcal{P}}_{g,n}\to \mathcal{M}_{g,n}$ contains the information about the different choices of local coordinates around each of the $n$ punctures that differ only by a phase factor. The section of interest corresponds to the choice of a specific set of local coordinates around the punctures for each point $\mathcal{R}_{g,n}\in \mathcal{M}_{g,n}$. Therefore, in order to construct a differential form on such a section, we only need to consider the tangent vectors of $\widehat{\mathcal{P}}_{g,n}$ that are the tangent vectors of the moduli space of Riemann surfaces equipped with the choice of local coordinates that defines the section. These are given by the Beltrami differentials spanning the tangent space of the moduli space of Riemann surfaces \cite{Yoichi}. \par
Let us denote the coordinates of $\mathcal{M}_{g,n}$ by $\left(t_{1},\cdots,t_{6g-6+2n} \right)$. Consider $B_p$, an operator-valued $p$-form defined on the section of the space $\widehat{\mathcal{P}}_{g,n}$. The contraction of $B_p$ with $p$ tangent vectors $\left\{V_1,\cdots,V_p\right\}$ of the section is given by
\begin{equation}\label{opevormbos}
B_p[V_1,\cdots,V_p]=b(V_1)\cdots b(V_p),
\end{equation}
where
\begin{equation}\label{bvgen2}
b(V_{k})=\int d^2z\Big(b_{zz}\mu_{k\bar z}^z+b_{\bar z\bar z}\mu_{kz}^{\bar z}\Big),
\end{equation}
Here $\mu_k$ denotes the Beltrami differential associated with the modulus $t_{k}$ of the Riemann surfaces belonging to the section of the fiber space $\widehat{\mathcal{P}}_{g,n}$ in which we are interested. The $p$-form on the section can be obtained by taking the expectation value of the operator-valued $p$-form $B_p$ between the surface state $\langle \mathcal{R}|$ and the state $|\Phi\rangle$:
\begin{equation}\label{pformpgnbos}
\Omega_p^{(g,n)}(|\Phi\rangle)=(2\pi \mathrm{i})^{-(3g-3+n)}\langle\mathcal{R}|B_p|\Phi\rangle.
\end{equation}
The state $|\Phi\rangle$ is the tensor product of external off-shell states $|\Psi_i\rangle,~i=1,\cdots,n$ inserted at the punctures and the state $\langle \mathcal{R}|$ is the surface state associated with the surface $\mathcal{R}_{g,n}$. It describes the state that is created on the boundaries of the discs $D_i,~i=1,\cdots,n$ by performing a functional integral over the fields of CFT on $\mathcal{R}-\sum_iD_i$. The inner product between $\langle\mathcal{R}|$ and a state $|\Psi_1\rangle\otimes\cdots\otimes |\Psi_n\rangle\in\mathcal{H}^{\otimes n}$
\begin{equation}\label{innerprcft}
\langle\mathcal{R}|(|\Psi_1\rangle\otimes\cdots\otimes |\Psi_n\rangle),
\end{equation}
can be understood as the $n$-point correlation function on $\mathcal{R}$ with the vertex operator for $|\Psi_i\rangle$ inserted at the $i^{th}$ puncture using the local coordinate around that puncture. \par
The path integral representation of $ \Omega^{(g,n)}_{6g-6+2n}\left(|\Psi_1\rangle,\cdots,|\Psi_n\rangle\right)$ is given by
\begin{align}\label{pathintrep}
& \Omega^{(g,n)}_{6g-6+2n}\left(|\Psi_1\rangle,\cdots,|\Psi_n\rangle\right)\nonumber\\
&=\frac{dt_1\cdots dt_{6g-6+2n}}{(2\pi \mathrm{i})^{(3g-3+n)}}\int \mathcal{D}x^{\mu}\int\mathcal{D}c~\mathcal{D}\overline{c}~\mathcal{D}b~\mathcal{D}\overline{b}~e^{-I_m(x)-I_{gh}(b,c)}\prod_{j=1}^{6g-6+2n} b(V_j)\prod_{i=1}^n\left[c\overline{c}~V_{i}(k_{i})\right]_{w_i},
\end{align}
where $\left[c\overline{c}~V_{i}(k_{i})\right]_{w_i}$ denotes the vertex operator corresponding to the state $|\Psi_i\rangle$, inserted using the local coordinate $w_i$. $I_m(x)$ is the action for the matter fields, $I_{gh}(b,c)$ is the action for the ghost fields, and $z$ is the global coordinate on $\mathcal{R}$.
\subsection{The explicit evaluation of the quantum master action}\label{EEQMA}
In this subsection, we explain what we mean by the explicit evaluation of the quantum BV master action for the closed string field theory. Let us denote the vertex operator corresponding to the basis state $|\Phi_s\rangle$ by $\mathcal{A}(\Phi_s)$. Then the string field entering the quantum BV master action can be expressed as
\begin{equation}\label{stringfieldanti1}
|\Psi\rangle=\sideset{}{'}\sum_{G(\Phi_s)\leq 2}\sum_p\psi^s(p)\mathcal{A}(\Phi_s) |\mathbf{1},p\rangle +\sideset{}{'}\sum_{G(\Phi_s)\leq 2}\sum_p\psi^*_s(p)\mathcal{A}(\widetilde{\Phi}_s) |\mathbf{1},p\rangle,
\end{equation}
where $|\mathbf{1},p\rangle $ denotes the $SL(2,\mathbb{C})$-invariant family of vacua of the worldsheet CFT of the closed bosonic string theory, parameterized by $p$. The expression for the quantum BV master action in terms of the target space fields and antifields is obtained by substituting this expansion of the string field $\Psi$ into the quantum BV master action (\ref{cbstringfieldaction}):
\begin{align}\label{cbstringfieldaction1}
S(\Psi)&=\frac{1}{2g_s^{2}}\mathop{\sideset{}{'}\sum_{G(\Phi_{s_j})\leq 2}}_{j=1,2}\mathop{\sum_{\phi^{s_i}\in \mathcal{S}_i}}_{i=1,2}\sum_{p_1,p_2}\phi^{s_1}(p_1)\mathbf{P}_{s_1s_2}\left(p_1,p_2\right)\phi^{s_2}(p_2)\nonumber\\
&+\mathop{\sum_{g\geq 0}}_{n\geq 1}\frac{\hbar^g g_s^{2g+n-2}}{n!}\mathop{\sideset{}{'}\sum_{G(\Phi_{s_j})\leq 2}}_{j=1,\cdots,n}\mathop{\sum_{\phi^{s_i}\in \mathcal{S}_i}}_{i=1,\cdots,n}\sum_{p_1,\cdots,p_n}\mathbf{V}_{s_1\cdots s_n}^{g,n}\left(p_1,\cdots,p_n \right)\phi^{s_1}(p_1)\cdots\phi^{s_n}(p_n),
\end{align}
where $\mathcal{S}_i=\left\{ \psi^{s_i}, \psi^*_{s_i}\right\}$ is the set of all fields and antifields of the closed bosonic string field theory spectrum. $\mathbf{P}_{s_1s_2}\left(p_1,p_2\right)$, the inverse of the propagator, is given by
\begin{equation}\label{propagator}
\mathbf{P}_{s_1s_2}\left(p_1,p_2\right)=\left\langle\mathcal{Y}_{s_1},p_1\left|c_0^-Q_B\right|\mathcal{Y}_{s_2},p_2 \right\rangle,
\end{equation}
and $\mathbf{V}_{s_1\cdots s_n}^{g,n}\left(p_1,\cdots,p_n \right)$, the $g$-loop interaction vertex of the $n$ target spacetime fields/antifields $\left\{\phi^{s_1}(p_1),\cdots,\phi^{s_n}(p_n)\right\}$, is given by
\begin{equation}\label{interaction}
\mathbf{V}_{s_1\cdots s_n}^{g,n}\left(p_1,\cdots,p_n \right)=\int_{\mathcal{V}_{g,n}} \Omega^{(g,n)}_{6g-6+2n}\left(|\mathcal{Y}_{s_1},p_1\rangle,\cdots,|\mathcal{Y}_{s_n},p_n\rangle\right).
\end{equation}
Here, $|\mathcal{Y}_{s_i},p_i\rangle$ is the state associated with the string field/antifield $\phi^{s_i}(p_i)$; it is annihilated by both $b^-_0$ and $L^-_0$. By the explicit evaluation of the quantum master action, we mean the explicit evaluation of $\mathbf{V}_{s_1\cdots s_n}^{g,n}\left(p_1,\cdots,p_n \right)$. This requires:
\begin{enumerate}
\item A convenient choice of parametrization of the Teichm\"uller space and the conditions on them that specify the region of the moduli space inside the Teichm\"uller space.
\item An explicit procedure for constructing the off-shell string measure in terms of the chosen coordinates of the moduli space.
\item An explicit description of the region inside the moduli space that corresponds to the string vertex, together with a consistent choice of local coordinates around the punctures on the Riemann surfaces belonging to the string vertex.
\item Finally, an explicit procedure for integrating the off-shell string measure over the region inside the moduli space that corresponds to the string vertex.
\end{enumerate}
In the remaining sections of this paper, we provide a detailed description for each of these steps.
\section{The string vertices using hyperbolic metric}\label{vertices}
The main challenge in constructing string field theory is to find a suitable cell decomposition of the moduli spaces of closed Riemann surfaces. In principle, string vertices satisfying the conditions listed in section (\ref{QBVMA}), which provide such a cell decomposition of the moduli space, can be constructed using Riemann surfaces endowed with the metric solving the generalized minimal area problem \cite{Zwiebach:1992ie}. Unfortunately, the current understanding of minimal area metrics is not sufficient to obtain a calculable formulation of closed string field theory. In our previous paper \cite{Moosavian:2017qsp}, we described an alternate construction of the string vertices using Riemann surfaces endowed with a metric of constant curvature $-1$. We briefly review this construction below. For a brief review of the theory of hyperbolic Riemann surfaces, see appendix (\ref{hyperbolic}). \par
A hyperbolic Riemann surface can be represented as the quotient of the upper half-plane $\mathbb{H} $ by a Fuchsian group. A puncture on a hyperbolic Riemann surface corresponds to the fixed point of a parabolic element (an element with trace $\pm 2$) of the Fuchsian group acting on $\mathbb{H} $. Around a puncture $p$ on a hyperbolic Riemann surface, there is a natural local conformal coordinate $w$ with $w(p) = 0$, induced from the hyperbolic metric. The local expression for the hyperbolic metric around the puncture is given by
\begin{equation}\label{metriclocal}
ds^2=\left(\frac{|dw|}{|w|\ln|w|}\right)^2.
\end{equation}
We can naively define the string vertices using Riemann surfaces endowed with the hyperbolic metric as follows.\\
\noindent{\bf The naive string vertex $\mathcal{V}^0_{g,n}$}: {\it Consider a genus-$g$ hyperbolic Riemann surface $\mathcal{R}$ with $n$ punctures and no simple closed geodesic of length $l\leq c_*$, where $c_*$ is a positive real number with $c_*\ll 1$. The local coordinates around the punctures on $\mathcal{R}$ are chosen to be $e^{\frac{\pi^2}{c_*}}w$, where $w$ is the natural local coordinate induced from the hyperbolic metric on $\mathcal{R}$. The set of all such inequivalent hyperbolic Riemann surfaces forms the string vertex $\mathcal{V}^0_{g,n}$}. \\
It was shown in \cite{Moosavian:2017qsp} that the string vertices $\mathcal{V}^0_{g,n}$ fail to provide a single cover of the moduli space for any non-vanishing value of $c_*$. The argument goes as follows. The string vertex $\mathcal{V}^0_{g,n}$, together with the Feynman diagrams, provides a cell decomposition of the moduli space only if the Fenchel-Nielsen length parameters and the local coordinates around the punctures on the surfaces at the boundary of the string vertex region match exactly those on the surfaces obtained by the special plumbing fixture construction
\begin{equation}\label{specialplumbing1}
\widetilde z\cdot \widetilde w=e^{i\theta},\qquad 0\leq\theta\leq2\pi,
\end{equation}
where $\widetilde z$ and $\widetilde w$ denote the local coordinates around the punctures that are being glued. However, the metric on the surface obtained by the plumbing fixture of a set of hyperbolic Riemann surfaces fails to be exactly hyperbolic over the whole surface \cite{Wolpert2,Wolpert3,Melrose}.
Consider the Riemann surface $\mathcal{R}_t$, for $t=(t_1,\cdots,t_m)$, obtained via plumbing fixture around the $m$ nodes of a hyperbolic surface $\mathcal{R}_{t=0}\equiv\mathcal{R}_0$ with $m$ nodes. We denote the surface obtained by removing the nodes from $\mathcal{R}_0$ by $\hat{\mathcal{R}}_0$, i.e., $\hat{\mathcal{R}}_0=\mathcal{R}_0-\{\mathrm{nodes}\}$. The surface $\hat{\mathcal{R}}_0$ has a pair of punctures $(a_j,b_j)$ in place of the $j$\textsuperscript{th} node of $\mathcal{R}_0,~j=1,\cdots,m$. Assume that $w^{(1)}_j$ and $w^{(2)}_j$ are the local coordinates around the punctures $a_j$ and $b_j$, with $w^{(1)}_j(a_j)=0$ and $w^{(2)}_j(b_j)=0$. Let us choose the local coordinates $w^{(1)}_j$ and $w^{(2)}_j$ such that, in terms of these local coordinates, the hyperbolic metric around the punctures of $\widehat{\mathcal{R}}_0$ has the local expression
\begin{equation}
ds^2=\left (\frac{|d\zeta|}{|\zeta|\mathrm{ln}~|\zeta|}\right)^2,\qquad\qquad\zeta=w^{(1)}_j~\mathrm{or}~w^{(2)}_j.
\end{equation}
Let us call the metric on the glued surface $\mathcal{R}_t$ the {\it grafted metric} $ds_{\text{graft}}^2$. The grafted metric has curvature $-1$ except at the collar boundaries, where the interpolation leads to a deviation of magnitude $(\mathrm{ln}|t|)^{-2}$ \cite{Wolpert2}. This deviation makes the resulting surface hyperbolic everywhere except near the boundaries of the plumbing collars. \par
However, we can compute the hyperbolic metric on $\mathcal{R}_t$ by solving the {\it curvature correction equation} \cite{Wolpert2,Wolpert3}. To describe this equation, consider a compact Riemann surface with metric $ds^2$ of Gauss curvature \footnote{In two dimensions, the Gauss curvature is half of the Ricci scalar curvature of the surface.} $\mathbf{ C}$. Then, another metric $e^{2f}ds^2$ on this surface has constant curvature $-1$ provided
\begin{equation}\label{constantcurvatureq}
Df-e^{2f}=\mathbf{ C},
\end{equation}
where $D$ is the Laplace-Beltrami operator of the metric $ds^2$. Therefore, in order to obtain the hyperbolic metric on $\mathcal{R}_t$, we solve this curvature correction equation perturbatively around the grafted metric by adding a Weyl factor. We can then invert the resulting expression for the hyperbolic metric on $\mathcal{R}_t$ to obtain the grafted metric in terms of the hyperbolic one. \par
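As a quick consistency check (a standard fact about conformal rescalings in two dimensions, stated here with the curvature conventions of (\ref{constantcurvatureq})), the curvature correction equation follows in one step from the transformation law of the Gauss curvature:

```latex
% Under a Weyl rescaling ds^2 -> e^{2f} ds^2 in two dimensions,
% the Gauss curvature transforms as
\begin{equation*}
\mathbf{C}\!\left[e^{2f}ds^2\right]
  = e^{-2f}\left(\mathbf{C}\!\left[ds^2\right]-Df\right).
\end{equation*}
% Demanding that the rescaled metric be hyperbolic,
% C[e^{2f} ds^2] = -1, gives
\begin{equation*}
-e^{2f}=\mathbf{C}\!\left[ds^2\right]-Df
\qquad\Longleftrightarrow\qquad
Df-e^{2f}=\mathbf{C}\!\left[ds^2\right],
\end{equation*}
% which is precisely the curvature correction equation.
```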
To second order in $c_*$, the hyperbolic metric on $\mathcal{R}_t$, the Riemann surface at the boundary of the string vertex $\mathcal{V}_{g,n}^0$ obtained by the special plumbing fixture (\ref{specialplumbing1}) of hyperbolic Riemann surfaces, is related to the grafted metric as follows
\begin{equation}
ds^2_{\text{hyp}}=ds^2_{\text{graft}}\left(1+\sum_{i=1}^m\frac{c_*^{2}}{3}\left(E^{\dagger}_{i,1}+E^{\dagger}_{i,2}\right)+\mathcal{O}\left(c_*^3\right)\right). \label{graftedhyp2c}
\end{equation}
The functions $E^{\dagger}_{i,1}$ and $E^{\dagger}_{i,2}$ are the melding of the Eisenstein series $E(\cdot;2)$ associated with the pair of cusps plumbed to form the $i$\textsuperscript{th} collar. For the definition of these functions, see \cite{Moosavian:2017qsp}; their details are not important for our discussion. \par
Using this relation, we modify the definition of the string vertices by changing the choice of local coordinates on the surfaces belonging to the boundary region of the string vertices \cite{Moosavian:2017qsp}. The boundary of the string vertex with $m$ plumbing collars is defined as the locus in the moduli space of hyperbolic Riemann surfaces with $m$ disjoint, mutually non-homotopic, nontrivial simple closed curves whose length equals the length of the simple geodesic on a plumbing collar of a Riemann surface obtained by gluing $m$ pairs of punctures on a set of hyperbolic Riemann surfaces via the special plumbing fixture relation (\ref{specialplumbing1}). To second order in $c_*$, there is no correction to the hyperbolic length of the geodesics on the plumbing collars. Therefore, to this order, we do not have to correct the definition of the region of the moduli space corresponding to the string vertex when the hyperbolic Riemann surfaces are parametrized using the Fenchel-Nielsen coordinates. However, the choice of local coordinates around the punctures must be modified to make it gluing compatible to second order in $c_*$. In order to modify the assignment of local coordinates in the string vertex $\mathcal{V}^0_{g,n}$, we divide it into subregions. Let us denote by $\mathbf{W}^{(m)}_{g,n}$ the subregion of $\mathcal{V}^0_{g,n}$ consisting of surfaces with $m$ simple closed geodesics (none of them related to another by the action of any element of the MCG) of length between $c_*$ and $(1+\delta)c_*$, where $\delta$ is an infinitesimal real number. We then modify the local coordinates as follows:
\begin{itemize}
\item For surfaces belonging to the subregion $\mathbf{W}^{(0)}_{g,n}$, we choose the local coordinate around the $j^{th}$ puncture to be $e^{\frac{\pi^2}{c_*}}w_j$. In terms of $w_j$, the hyperbolic metric in the neighbourhood of the puncture takes the following form
\begin{equation}
\left(\frac{|dw_j|}{|w_j|\ln|w_j|}\right)^2, \qquad j=1,\cdots,n.
\end{equation}
\item For surfaces belonging to the subregion $\mathbf{W}^{(m)}_{g,n}$ with $m\ne 0$, we choose the local coordinate around the $j^{th}$ puncture to be $e^{\frac{\pi^2}{c_*}}\widetilde{w}_{j,m}$, where $\widetilde{w}_{j,m}$, up to a phase ambiguity, is given by
\begin{equation}
\widetilde{w}_{j,m}=e^{\frac{c_*^2}{6}\sum_{i=1}^mf(l_i)Y_{ij}}w_{j}.
\end{equation}
\end{itemize}
We found $\widetilde w_{j,m}$ by solving the following equation
\begin{equation}
\left(\frac{|d\widetilde{w}_{j,m}|}{|\widetilde{w}_{j,m}|\mathrm{ln}|\widetilde{w}_{j,m}|}\right)^2=\left(\frac{|dw_j|}{|w_j|\text{ln}|w_j|}\right)^2\left\{1-\frac{c_*^2}{3~\text{ln}|w_j|}\sum_{i=1}^mf(l_i)Y_{ij}\right\},
\end{equation}
where $l_i$ denotes the length of the $i^{th}$ degenerating simple closed geodesic, and $f(l_i)$ is an arbitrary smooth real function of the geodesic length $l_i$ defined on the interval $\left(c_*,(1+\delta)c_*\right)$, such that $f(c_*)=1$ and $f((1+\delta)c_*)=0$. The coefficient $Y_{ij}$ is given by
\begin{equation}\label{yij}
Y_{ij}=\sum_{q=1}^{2}\;\sum_{\substack{c_i^q>0,\;\; d_i^q\,\mathrm{mod}\,c_i^q\\ \left(\begin{smallmatrix}* & * \\ c_i^q & d_i^q\end{smallmatrix}\right)\in\,(\sigma_i^q)^{-1}\Gamma_{i}^{q}\sigma_j}}\frac{\pi^{2}\,\epsilon(j,q)}{|c_i^q|^4}.
\end{equation}
Here, $\Gamma_i^q$ denotes the Fuchsian group of the component Riemann surface containing the cusp labeled by $q$ that is glued via plumbing fixture to obtain the $i^{th}$ collar. The transformation $\sigma_j^{-1}$ maps the $j^{th}$ cusp to $\infty$, and $(\sigma_i^q)^{-1}$ maps the cusp labeled by $q$ entering the $i^{th}$ collar to $\infty$. The factor $\epsilon(j,q)$ is one if the $j^{th}$ cusp and the cusp labeled by $q$ entering the $i^{th}$ collar belong to the same component surface, and zero otherwise.
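To see why this choice of $\widetilde{w}_{j,m}$ works, note the following one-line check (with $\epsilon_j\equiv\frac{c_*^2}{6}\sum_{i=1}^m f(l_i)Y_{ij}$ treated as a small real constant): the ansatz $\widetilde{w}_{j,m}=e^{\epsilon_j}w_j$ reproduces the defining equation for $\widetilde{w}_{j,m}$ to order $c_*^2$,

```latex
% With w~ = e^{eps} w for a real constant eps, one has
% |d w~| = e^{eps} |dw|,  |w~| = e^{eps} |w|,  ln|w~| = ln|w| + eps,  so
\begin{equation*}
\left(\frac{|d\widetilde{w}_{j,m}|}{|\widetilde{w}_{j,m}|\ln|\widetilde{w}_{j,m}|}\right)^2
=\left(\frac{|dw_j|}{|w_j|\ln|w_j|}\right)^2
\left(1+\frac{\epsilon_j}{\ln|w_j|}\right)^{-2}
=\left(\frac{|dw_j|}{|w_j|\ln|w_j|}\right)^2
\left(1-\frac{2\epsilon_j}{\ln|w_j|}+\mathcal{O}(\epsilon_j^2)\right),
\end{equation*}
% and 2 eps_j = (c_*^2/3) sum_i f(l_i) Y_{ij}, matching the stated equation.
```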
The string vertices corrected in this way are denoted by $\mathcal{V}^{2}_{g,n}$. They provide an improved approximate cell decomposition of the moduli space, with no mismatch up to order $c_*^2$.
\section{The off-shell string measure and Fenchel-Nielsen parameters}\label{OSM}
In this section, we describe the explicit construction of the off-shell string measure in terms of the Fenchel-Nielsen coordinates of the Teichm\"uller space. As explained in subsection (\ref{offmeasure}), the off-shell string measure can be defined using a specific choice of local coordinates, that is encoded in the definition of the string vertices, and the Beltrami differentials associated with the moduli parameters. \par
A flow on $\mathcal{T}_{g,n}$, the Teichm\"uller space of hyperbolic Riemann surfaces $\mathcal{R}$ with $g$ handles and $n$ borders, can be generated by a twist field defined with respect to a simple closed curve on the Riemann surface \cite{Kerckhoff,Wolpert5,Wolpert6,Wolpert7}. The twist field $t_{\alpha}$, where $\alpha$ is a simple closed geodesic, generates a flow on $\mathcal{T}_{g,n}$ that can be understood as the {\it Fenchel-Nielsen deformation} of $\mathcal{R}$ with respect to $\alpha$: the operation of cutting the hyperbolic surface along $\alpha$ and reattaching the boundaries after rotating one boundary relative to the other by some amount $\delta$. The magnitude $\delta$ parametrizes the flow on $\mathcal{T}_{g,n}$. \par
Assume that $\mathcal{R}$ is uniformized as $\mathbb{H}/\Gamma$. Suppose that the element of $\Gamma$ that corresponds to a simple closed geodesic $\alpha$ is the matrix
$$A=\left(\begin{array}{cc}a & b \\c & d\end{array}\right).$$
Then the Beltrami differential corresponding to the twist vector field $t_{\alpha}$ is given by \cite{Wolpert5}
\begin{equation}\label{beltramitwist}
\mathbf{t }_{\alpha}=\frac{\mathrm{i}}{\pi}(\mathrm{Im}z)^2\overline{ \Theta}_{\alpha}.
\end{equation}
$\Theta_{\alpha}$ is the following relative Poincar\'e series
\begin{equation}\label{poicares}
\Theta_{\alpha}=\sum_{B\in \langle A\rangle \backslash \Gamma}\omega_{B^{-1}AB},
\end{equation}
where $\langle A\rangle$ denote the infinite cyclic group generated by the element $A$, and $\omega_A$ is given by
\begin{equation}
\omega_A=\frac{(a+d)^2-4}{\left(cz^2+(d-a)z-b\right)^2}.
\end{equation}
Consider the Fenchel-Nielsen coordinates $\left(\tau_i,\ell_i\right),~ i=1,\cdots,3g-3+n$, of the Teichm\"uller space, defined with respect to the pants decomposition $ \mathcal{P}=\left\{C_1,\cdots,C_{3g-3+n}\right\}$, where $C_i$ denotes a simple closed geodesic on $\mathcal{R}$. By definition, for $i\neq j$, the curves $C_i$ and $C_j$ are disjoint and non-homotopic to each other. The tangent space at a point of the Teichm\"uller space is spanned by the Fenchel-Nielsen coordinate vector fields $ \left\{\frac{\partial}{\partial \tau_i},\frac{\partial }{\partial \ell_i}\right\},~ i=1,\cdots,3g-3+n$. The vector field $\frac{\partial}{\partial \tau_i}$ can be identified with the twist vector field $t_{C_i}$ defined with respect to the curve $C_i$. Hence, the {\it Beltrami differential corresponding to the Fenchel-Nielsen coordinate vector field $\frac{\partial}{\partial \tau_i}$ is $\mathbf{ t}_{C_i}$}. The Beltrami differential for the vector field $\frac{\partial}{\partial \ell_i}$ can also be constructed, by noting that, with respect to the WP symplectic form, $\frac{\partial}{\partial \ell_i}$ is dual to the twist vector field $\frac{\partial}{\partial \tau_i}$ \cite{Wolpert6}. We denote the Beltrami differential for $\frac{\partial}{\partial \ell_i}$ by $\mathbf{ l}_{C_i}$.\par
Putting these together, the off-shell bosonic-string measure can be written as
\begin{align}\label{eq:the bosonic-string measure}
&\Omega_{6g-6+2n}^{(g,n)}(|\Psi_1\rangle\otimes\cdots\otimes|\Psi_{n}\rangle)\nonumber\\
&=\frac{\prod_{i=1}^{3g-3+n}d\ell_id\tau_i}{(2\pi \mathrm{i})^{(3g-3+n)}}\int \mathcal{D}x^{\mu}\int\mathcal{D}c~\mathcal{D}\overline{c}~\mathcal{D}b~\mathcal{D}\overline{b}~e^{-I_m(x)-I_{gh}(b,c)}\prod_{j=1}^{3g-3+n} b(\mathbf{t}_{C_j})b(\mathbf{l}_{ C_j})\prod_{i=1}^n\left[c\overline{c}~V_{i}(k_{i})\right]_{w_i},
\end{align}
where $[c\overline{c}V_i(k_i)]_{w_i}$ denotes the vertex operator for the state $|\Psi_i\rangle$, inserted at the $i^{\text{th}}$ puncture using the local coordinate $w_i$, and
\begin{alignat}{1}\label{eq:the expressions for b(v)}
b(\mathbf{t}_{C_i})&=\int_{\mathcal{F}} d^2z\left(b_{zz}\mathbf{t}_{C_i}+b_{\bar z\bar z}\overline{\mathbf{t}}_{C_i}\right),\nonumber
\\
b(\mathbf{l}_{C_i})&=\int_{\mathcal{F}} d^2z\left(b_{zz}\mathbf{l}_{C_i}+b_{\bar z\bar z}\overline{\mathbf{l}}_{C_i}\right).
\end{alignat}
Here $\mathcal{F}$ denotes the fundamental domain in the upper half-plane for the action of the Fuchsian group $\Gamma$ corresponding to $\mathcal{R}$, and we assumed that $\mathcal{R}$ belongs to the string vertex $\mathcal{V}_{g,n}$. Recall that the Riemann surfaces belonging to the string vertices carry a specific choice of local coordinates around their punctures, consistent with the geometrical identity (\ref{bvmastercond}). \par
Assume that each vertex operator $V_i(k_i)$, for $i=1,\cdots,n$, has conformal dimension $h_i$ and contains no ghost fields. Then we have
\begin{align}\label{eq:the bosonic-string measure1}
&\Omega_{6g-6+2n}^{(g,n)}(|\Psi_1\rangle\otimes\cdots\otimes|\Psi_{n}\rangle)\nonumber\\
&=\frac{\prod_{i=1}^{3g-3+n}d\ell_id\tau_i}{(2\pi \mathrm{i})^{(3g-3+n)}}\prod_{i=1}^{n}\left|\frac{\partial z}{\partial w_i}\right|^{2h_i-2}\sqrt{\text{det}' P_1^{\dagger}P_1}\left(\frac{2\pi^2}{\int d^2z~\sqrt{g}}\text{det}'\Delta \right)^{-13}\int \mathcal{D}x^{\mu}~e^{-I_m(x)}\prod_{i=1}^nV_{i}(k_{i}),
\end{align}
where $\Delta$ is the Laplacian acting on scalars defined on $\mathcal{R}$, a genus-$g$ hyperbolic Riemann surface with $n$ punctures. The prime indicates that the zero-mode contributions are omitted in computing the determinant of $\Delta$. The operators are $P_1=\nabla^1_z\oplus\nabla^z_{-1}$ and $P_1^{\dagger}=-\left(\nabla^2_z\oplus\nabla^z_{-2}\right)$, where $\nabla^n_z$ and $\nabla^z_n$ are defined by their action on $T(dz)^n$:
\begin{align}
\nabla^n_z \left(T(dz)^n\right)&=(g_{z\overline{z}})^n\frac{\partial}{\partial z}\left((g^{z\overline{z}})^nT\right)(dz)^{n+1},\nonumber\\
\nabla^z_n \left(T(dz)^n\right)&=g^{z\overline{z}}\frac{\partial}{\partial\overline{z}}T(dz)^{n-1}.
\end{align}
Interestingly, the determinants $\text{det}' P_1^{\dagger}P_1$ and $\text{det}'\Delta$ can be evaluated on any hyperbolic Riemann surface in terms of Selberg zeta functions \cite{Sarnak, Dhokerdet, Bolte, Hejhal, DHoker:1985een}. For instance, $\text{det}'\Delta$ on a genus-$g$ hyperbolic Riemann surface with $n$ punctures can be expressed as follows \cite{LPT}
\begin{align}\label{selbergdet}
\text{det}'\Delta=2^{\frac{n}{2}+\frac{1}{2}\text{tr}\Phi(\frac{1}{2})}(2\pi)^{g-1+\frac{n}{2}}e^{(2g-2+n)\left(2\zeta'(-1)-\frac{1}{4} \right)}\frac{d}{ds}Z(s)\Big|_{s=1}
\end{align}
where $\zeta(s)$ is the Riemann zeta function and $\Phi(s)=\left(\phi_{ij}(s)\right)_{1\leq i, j \leq n}$. The elements $\phi_{ij}$ can be found by expanding the Eisenstein series defined with respect to the $i^{\text{th}}$ puncture around the $j^{\text{th}}$ puncture. The expansion is obtained by taking the limit $(y=\text{Im}(z))\to \infty$:
\begin{equation}\label{phiij}
E_i(\sigma_jz,s)=\delta_{ij}y^s+\phi_{ij}(s)y^{1-s}+\cdots,
\end{equation}
where $\sigma_i^{-1}\kappa_i\sigma_i=\left(\begin{array}{cc}1 & 1 \\0 & 1\end{array}\right)$, with $\kappa_i$ the parabolic generator associated with the $i^{\text{th}}$ puncture. Finally, $Z(s)$ is the Selberg zeta function given by
\begin{equation}\label{selbergzeta}
Z(s)=\prod_{\gamma\in\text{S}}\prod_{k=0}^{\infty}\left[1-e^{-(s+k)\ell_{\gamma}} \right],
\end{equation}
where the product runs over the set $S$ of primitive closed geodesics $\gamma$ on $\mathcal{R}$. A primitive closed geodesic on $\mathcal{R}$ corresponds to a primitive hyperbolic element of the Fuchsian group $\Gamma$. A hyperbolic element of $\Gamma$ is said to be primitive if it cannot be written as a power of any other hyperbolic element of $\Gamma$; however, a primitive element can be the inverse of another primitive element of $\Gamma$. If $g\in \Gamma$ represents the closed geodesic $\gamma$, then the length of $\gamma$ is given by
\begin{equation}\label{lgamma}
\ell_{\gamma}=2\,\text{cosh}^{-1}\left(\frac{1}{2}\left|\text{tr} ~g\right|\right).
\end{equation}
Therefore, the Selberg zeta function can be expressed as a product over the primitive elements of $\Gamma$. The determinant $\text{det}' P_1^{\dagger}P_1$ on $\mathcal{R}$ can be similarly expressed in terms of Selberg zeta functions.\par
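As a numerical illustration (a toy sketch, not tied to any particular Fuchsian group: the matrices, the length list, and the truncation level below are arbitrary choices made for the example), the trace-length relation $|\text{tr}\,g|=2\cosh(\ell_\gamma/2)$ and a truncated version of the product defining $Z(s)$ can be checked as follows:

```python
import numpy as np

def geodesic_length(g):
    """Length of the closed geodesic represented by a hyperbolic
    element g of SL(2, R), using |tr g| = 2 cosh(l/2)."""
    return 2.0 * np.arccosh(abs(g[0, 0] + g[1, 1]) / 2.0)

def selberg_zeta_truncated(s, lengths, kmax=200):
    """Truncation of Z(s) = prod_gamma prod_{k>=0} (1 - e^{-(s+k) l_gamma})
    over a finite (toy) list of primitive geodesic lengths."""
    Z = 1.0
    for l in lengths:
        for k in range(kmax + 1):
            Z *= 1.0 - np.exp(-(s + k) * l)
    return Z

l = 2.0
g = np.diag([np.exp(l / 2.0), np.exp(-l / 2.0)])   # hyperbolic element of translation length l
B = np.array([[1.0, 3.0], [0.0, 1.0]])             # conjugation preserves the trace,
g_conj = B @ g @ np.linalg.inv(B)                  # hence the geodesic length

print(geodesic_length(g))        # recovers l = 2.0
print(geodesic_length(g_conj))   # same value: length is a conjugacy invariant
print(selberg_zeta_truncated(1.0, [2.0, 3.5]))     # a number in (0, 1)
```

Since every factor $1-e^{-(s+k)\ell_\gamma}$ lies in $(0,1)$ for $s>0$, the truncated product converges rapidly once $(s+k)\ell_\gamma$ is large.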
The matter sector path integral can be expressed in terms of the Green's function $G$ of the Laplacian acting on scalars on $\mathcal{R}$. To demonstrate this, let us consider the case where all the external states are tachyons, i.e., $V_i(k_i)=e^{\text{i} k_i\cdot X_i}$. Then we have
\begin{align}\label{eq:the bosonic-string measure2}
&\Omega_{6g-6+2n}^{(g,n)}(|T_1\rangle\otimes\cdots\otimes|T_{n}\rangle)\nonumber\\
&=\frac{\prod_{i=1}^{3g-3+n}d\ell_id\tau_i}{(2\pi \mathrm{i})^{(3g-3+n)}}\prod_{i=1}^{n}\left|\frac{\partial z}{\partial w_i}\right|^{2h_i-2}\sqrt{\text{det}' P_1^{\dagger}P_1}\left(\frac{2\pi^2}{\int d^2z~\sqrt{g}}\text{det}'\Delta \right)^{-13}e^{\frac{1}{2}\sum_{i,j}k_i\cdot k_jG(x_i,x_j)}(2\pi)^{26}\delta(k_1+\cdots+k_n),
\end{align}
where $x_i$ denotes the fixed point corresponding to the $i^{\text{th}}$ puncture. The Green's function on $\mathcal{R}$ can be constructed by first constructing the Green's function on $\mathbb{H}$ and then summing over all the elements of $\Gamma$, which is just the method of images \cite{Hejhal}. \par
Assume that the hyperbolic Riemann surface $\mathcal{R}$ corresponds to a point in the Teichm\"uller space with coordinates $(\ell_1,\tau_1,\cdots,\ell_{3g-3+n},\tau_{3g-3+n})$. Following the general algorithm described in \cite{Maskit}, it is possible to express the matrix elements of the generators of $\Gamma$ as functions of $(\ell_1,\tau_1,\cdots,\ell_{3g-3+n},\tau_{3g-3+n})$. Using these generators, it is in principle possible to construct all the primitive elements of $\Gamma$. Therefore, we can express the determinants of the Laplacians and the Green's functions on $\mathcal{R}$ as functions of the Fenchel-Nielsen coordinates, and we finally obtain an expression for the off-shell string measure in terms of these coordinates. \par
\section{The effective string vertices} \label{IOMS}
The interaction vertices of closed string field theory are obtained by integrating the off-shell bosonic string measure constructed in the previous section over the region $\mathcal{W}_{g,n}$ in the compactified moduli space $\overline{\mathcal{M}}_{g,n}$ that corresponds to the string vertex $\mathcal{V}_{g,n}$. The modification of the local coordinates requires dividing $\mathcal{W}_{g,n}$ into different subregions. The moduli space $\mathcal{M}_{g,n}$ can be understood as the quotient of the Teichm\"uller space $\mathcal{T}_{g,n}$ by the action of the mapping class group (MCG). Unfortunately, in generic cases, an explicit fundamental region for the action of the MCG is not known in terms of the Fenchel-Nielsen coordinates. This is due to the fact that the form of the action of the MCG on the Fenchel-Nielsen coordinates is not yet known \cite{Thurs1,Hatch1}. Therefore, modifying the naive string vertex to make it consistent to $\mathcal{O}(c_*^2)$ appears to be impractical. In this section, we discuss a way to overcome this difficulty by following the prescription for performing integrations over the moduli space introduced by M. Mirzakhani \cite{Mirzakhani:2006fta}. \par
\subsection{The effective calculations}
Consider a space $\mathcal{M}$ with a covering space $\mathcal{N}$. The covering map is given by
$$\pi: \mathcal{N}\to \mathcal{M}.$$
If $dv_{\mathcal{M}}$ is a volume form on $\mathcal{M}$, then the pullback
$$ dv_{\mathcal{N}}\equiv\pi^{*}(dv_{\mathcal{M}})$$
defines a volume form on the covering space $\mathcal{N}$. Assume that $h$ is a smooth function defined on $\mathcal{N}$. Then the pushforward of $h$ at a point $x$ of $\mathcal{M}$, denoted by $\pi_*h(x)$, is obtained by summing the values of $h$ over all points in the fiber of $x$ in $\mathcal{N}$:
\begin{equation}
(\pi_*h)(x)\equiv\sum_{y\in \pi^{-1}\{x\}}h(y). \label{coveri}
\end{equation}
This relation defines a smooth function on the space $\mathcal{M}$. As a result, the integral of this pushforward function over the space $\mathcal{M}$ can be lifted to the covering space $\mathcal{N}$ as follows:
\begin{equation}
\int_{\mathcal{M}}dv_{\mathcal{M}}~(\pi_*h)~=\int_{\mathcal{N}}dv_{\mathcal{N}}~h. \label{covint}
\end{equation}
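The identity (\ref{covint}) is easy to verify in a finite toy model (purely illustrative: the spaces, the covering map, and the "volume form" below are hypothetical stand-ins for $\mathcal{N}$, $\mathcal{M}$, $\pi$ and $dv$, with integrals replaced by weighted sums):

```python
# A 3-fold covering of a 4-point "base space" M by a 12-point space N:
# pi sends n to n mod 4, so the fiber over x in M is {x, x + 4, x + 8}.
N = range(12)
M = range(4)
pi = lambda n: n % 4

h = lambda n: (n + 1) ** 2     # an arbitrary function on N
vol = lambda p: 0.5            # a constant "volume form", the same lifted to N

# Pushforward pi_* h: sum h over the fiber of each point of M.
push_h = {x: sum(h(n) for n in N if pi(n) == x) for x in M}

lhs = sum(vol(x) * push_h[x] for x in M)   # "integral" over M of pi_* h
rhs = sum(vol(n) * h(n) for n in N)        # "integral" over N of h
print(lhs, rhs)                            # the two sides agree
```

The two sums agree because the fibers of $\pi$ partition $N$, which is exactly the content of (\ref{covint}) in this discrete setting.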
\noindent{{\bf Integration over $\mathbb{S}^1$ as an integration over $\mathbb{R}$:}} To elucidate the basic logic behind this integration method, let us discuss a simple and explicit example. Consider the real line $\mathbb{R}=(-\infty,\infty)$ as the covering space of the circle $\mathbb{S}^1=[0,1)$. We denote the covering map by $$\pi: \mathbb{R}\to \mathbb{S}^1.$$
Assume that $f(x)$ is a function defined on $\mathbb{S}^1$, i.e., $f(x+k)=f(x),~k\in \mathbb{Z}$. Then we can convert the integration over $\mathbb{S}^1$ into an integration over $\mathbb{R}$ with the help of the identity
\begin{equation}\label{identity}
1=\sum_{k=-\infty}^{\infty}\frac{\text{sin}^2\left(\pi [x-k] \right)}{\pi^2\left( x-k\right)^2},
\end{equation}
as follows:
\begin{align}\label{covercircle}
\int_{0}^1dx~f(x)&= \int_0^1 dx~ \left(\sum_{k=-\infty}^{\infty}\frac{\text{sin}^2\left(\pi [x-k] \right)}{\pi^2\left( x-k\right)^2}\right)f(x)\nonumber\\
&= \int_{0}^{1} dx~ \sum_{k=-\infty}^{\infty}\left(\frac{\text{sin}^2\left(\pi [x-k] \right)}{\pi^2\left( x-k\right)^2} f(x-k)\right)\nonumber\\
&= \sum_{k=-\infty}^{\infty} \int_{0}^{1} dx~\frac{\text{sin}^2\left(\pi [x-k] \right)}{\pi^2\left( x-k\right)^2}f(x-k)\nonumber\\
&= \int_{-\infty}^{\infty} dx~ \frac{\text{sin}^2\left(\pi x \right)}{ \pi^2x^2}f(x).
\end{align}
In the last step, we absorbed the summation over $k$ into the integration domain, converting the integration over $\mathbb{S}^1$ into an integration over $\mathbb{R}$. For instance, choosing $f(x)$ to be the constant function $1$ gives the following well-known result $$1= \int_{-\infty}^{\infty} dx~ \frac{\text{sin}^2\left(\pi x \right)}{ \pi^2x^2}.$$ \par
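Both the partition-of-unity identity (\ref{identity}) and the final integral can be checked numerically (a rough quadrature sketch: the truncation levels and grid are arbitrary and only give a few digits of accuracy):

```python
import numpy as np

# Check sum_k sin^2(pi(x-k)) / (pi(x-k))^2 = 1 at a sample point x0.
# np.sinc(x) = sin(pi x)/(pi x), so each summand is sinc(x0 - k)^2.
x0 = 0.3
k = np.arange(-20000, 20001)
partition = np.sum(np.sinc(x0 - k) ** 2)
print(partition)           # close to 1 (tail of the sum falls off like 1/k^2)

# Check int_{-inf}^{inf} sinc(x)^2 dx = 1 by a Riemann sum on a large
# truncated domain; the tail error is of order 1/(pi^2 * 1000).
x = np.linspace(-1000.0, 1000.0, 2_000_001)
dx = x[1] - x[0]
integral = np.sum(np.sinc(x) ** 2) * dx
print(integral)            # close to 1
```

Because the integrand decays only like $1/x^2$, the truncated-domain error dominates; a larger window improves the estimate linearly in its width.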
\subsection{Effective regions in the Teichm\"uller spaces}
The discussion in the previous subsection suggests that, if we have a region in the Teichm\"uller space that can be identified as a covering space of a region in the moduli space, then the integration of a differential form defined on the moduli space can be performed by expressing the form as a pushforward, under the covering map, of a differential form on the Teichm\"uller space. In the remaining part of this section, we shall explain that it is indeed possible to find such a covering map and express the off-shell string measure as a pushforward of a differential form defined on the Teichm\"uller space. \\
\begin{figure}
\begin{center}
\usetikzlibrary{backgrounds}
\begin{tikzpicture}[scale=.6]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (-15, -2.5) {};
\node [style=none] (1) at (-11.75, -1.5) {};
\node [style=none] (2) at (-9.75, -1.5) {};
\node [style=none] (3) at (-6.5, -2.5) {};
\node [style=none] (4) at (-9.75, -3.5) {};
\node [style=none] (5) at (-11.75, -3.5) {};
\node [style=none] (6) at (-11.75, -2.5) {};
\node [style=none] (7) at (-9.75, -2.5) {};
\node [style=none] (8) at (-13.75, -2.25) {};
\node [style=none] (9) at (-13.75, -2.75) {};
\node [style=none] (10) at (-7.75, -2.75) {};
\node [style=none] (11) at (-7.75, -2.25) {};
\node [style=none] (12) at (-10.75, -1.25) {};
\node [style=none] (13) at (-8.5, -2) {};
\node [style=none] (14) at (-10.75, -2.25) {};
\node [style=none] (15) at (-13, -2) {};
\node [style=none] (16) at (-8.25, -0.75) {};
\node [style=none] (17) at (-3.25, 1.25) {};
\node [style=none] (18) at (-1.25, 0.25) {};
\node [style=none] (19) at (4.5, 0.25) {};
\node [style=none] (20) at (1.5, 3) {};
\node [style=none] (21) at (1.5, 7) {};
\node [style=none] (22) at (2.75, 5) {};
\node [style=none] (23) at (0.25, 5) {};
\node [style=none] (24) at (1.5, 6) {};
\node [style=none] (25) at (1.5, 4.75) {};
\node [style=none] (26) at (1.25, 3.75) {};
\node [style=none] (27) at (2, 3.75) {};
\node [style=none] (28) at (1.25, 2.5) {};
\node [style=none] (29) at (2, 2.5) {};
\node [style=none] (30) at (0, 1) {};
\node [style=none] (31) at (0.25, 0.5) {};
\node [style=none] (32) at (3.25, 1) {};
\node [style=none] (33) at (3, 0.5) {};
\node [style=none] (35) at (-10.75, -0.5) {$\gamma_1$};
\node [style=none] (37) at (-8.25, -1.5) {$\gamma_2$};
\node [style=none] (38) at (1.5, 7.5) {$\gamma_1$};
\node [style=none] (40) at (1, 3) {$\gamma_2$};
\node [style=none] (41) at (1.75, 3) {};
\node [style=none] (42) at (-14.75, -3) {};
\node [style=none] (43) at (-14.75, -3) {$\widetilde{w}_1$};
\node [style=none] (44) at (-6.75, -3) {$\widetilde{w}_2$};
\node [style=none] (45) at (-1, 1) {$\widehat{w}_1$};
\node [style=none] (47) at (4.25, 1) {$\widehat{w}_2$};
\node [style=none] (48) at (-11.25, -3) {};
\node [style=none] (49) at (-9.5, -2.75) {};
\node [style=none] (50) at (-10.75, -2.75) {};
\node [style=none] (51) at (-10.75, -3.75) {};
\node [style=none] (52) at (-10.75, -2.25) {};
\node [style=none] (53) at (-10.75, -4.25) {$\gamma_3$};
\node [style=none] (54) at (1.5, 4.75) {};
\node [style=none] (55) at (1.75, 0.5) {};
\node [style=none] (56) at (1.5, 5.25) {};
\node [style=none] (57) at (1.75, 0) {$\gamma_3$};
\node [style=none] (58) at (-2.5, -2.5) {};
\node [style=none] (59) at (0.75, -1.5) {};
\node [style=none] (60) at (2.75, -1.5) {};
\node [style=none] (61) at (6, -2.5) {};
\node [style=none] (62) at (2.75, -3.5) {};
\node [style=none] (63) at (0.75, -3.5) {};
\node [style=none] (64) at (1, -2.75) {};
\node [style=none] (65) at (2.5, -2.75) {};
\node [style=none] (66) at (-1.25, -2.25) {};
\node [style=none] (67) at (-1.25, -2.75) {};
\node [style=none] (68) at (4.75, -2.75) {};
\node [style=none] (69) at (4.75, -2.25) {};
\node [style=none] (70) at (1.75, -1.25) {};
\node [style=none] (71) at (4, -2) {};
\node [style=none] (72) at (1.75, -2.25) {};
\node [style=none] (73) at (-0.5, -2) {};
\node [style=none] (74) at (-5.75, -2.5) {};
\node [style=none] (75) at (-3.5, -2.5) {};
\node [style=none] (76) at (1.75, -0.75) {$\gamma_1$};
\node [style=none] (77) at (4.25, -1.5) {$\gamma_2$};
\node [style=none] (78) at (-2.25, -3) {};
\node [style=none] (79) at (-2.25, -3) {$\widetilde{w}_1$};
\node [style=none] (80) at (5.75, -3) {$\widetilde{w}_2$};
\node [style=none] (81) at (1.25, -3.5) {};
\node [style=none] (82) at (3.25, -3) {};
\node [style=none] (83) at (1.75, -3.5) {};
\node [style=none] (84) at (1.75, -3.75) {};
\node [style=none] (85) at (1.75, -2.25) {};
\node [style=none] (86) at (1.75, -4.25) {$\gamma_3$};
\node [style=none] (87) at (1.25, -3.5) {};
\node [style=none] (105) at (-8.25, -3.75) {};
\node [style=none] (106) at (-3.25, -6.25) {};
\node [style=none] (107) at (-2.5, -7) {};
\node [style=none] (108) at (0.75, -6) {};
\node [style=none] (109) at (2.75, -6) {};
\node [style=none] (110) at (6, -7) {};
\node [style=none] (111) at (2.75, -8) {};
\node [style=none] (112) at (0.75, -8) {};
\node [style=none] (113) at (0.75, -7) {};
\node [style=none] (114) at (2.75, -7) {};
\node [style=none] (115) at (-1.25, -6.75) {};
\node [style=none] (116) at (-1.25, -7.25) {};
\node [style=none] (117) at (4.75, -7.25) {};
\node [style=none] (118) at (4.75, -6.75) {};
\node [style=none] (119) at (1.75, -5.75) {};
\node [style=none] (120) at (4, -6.5) {};
\node [style=none] (121) at (1.75, -6) {};
\node [style=none] (122) at (-0.5, -6.5) {};
\node [style=none] (124) at (1.75, -5.25) {$\gamma_1$};
\node [style=none] (125) at (4.25, -6) {$\gamma_2$};
\node [style=none] (125) at (1.75, -8.8) {$\gamma_3$};
\node [style=none] (126) at (-2.25, -7.5) {};
\node [style=none] (127) at (-2.25, -7.5) {$\widetilde{w}_1$};
\node [style=none] (128) at (5.75, -7.5) {$\widetilde{w}_2$};
\node [style=none] (129) at (1.25, -7.5) {};
\node [style=none] (130) at (3, -7.25) {};
\node [style=none] (131) at (1.75, -7.25) {};
\node [style=none] (132) at (1.75, -8.25) {};
\node [style=none] (134) at (4.25, -8.25) {};
\node [style=none] (135) at (-6.75, 0.75) {$\ell_{\gamma_2}\to c_*$};
\node [style=none] (136) at (-5, -2) {$\ell_{\gamma_3}\to c_*$};
\node [style=none] (137) at (-6.75, -5.5) {$\ell_{\gamma_1}\to c_*$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [thick, in=-165, out=0] (0.center) to (1.center);
\draw [thick, in=165, out=15, looseness=1.25] (1.center) to (2.center);
\draw [thick, in=-150, out=-15, looseness=0.25] (2.center) to (3.center);
\draw [thick, in=15, out=-180] (3.center) to (4.center);
\draw [thick, in=-15, out=-165, looseness=1.25] (4.center) to (5.center);
\draw [thick, in=0, out=165, looseness=0.75] (5.center) to (0.center);
\draw [thick, bend right=90, looseness=0.50] (6.center) to (7.center);
\draw [thick, bend left=285, looseness=0.50] (7.center) to (6.center);
\draw [thick, bend right=60] (8.center) to (9.center);
\draw [thick, bend right=60] (11.center) to (10.center);
\draw [thick, style=dashed, bend left=60, looseness=1.25] (11.center) to (10.center);
\draw [thick, color=red, bend right=75, looseness=0.50] (12.center) to (14.center);
\draw [thick, color=red, style=dashed, bend left=90, looseness=0.50] (12.center) to (14.center);
\draw [thick, color=red, style=dashed, in=-105, out=-75] (13.center) to (15.center);
\draw [very thick, ->] (16.center) to (17.center);
\draw [thick, in=255, out=15, looseness=0.75] (18.center) to (20.center);
\draw [thick, in=165, out=15, looseness=0.50] (18.center) to (19.center);
\draw [thick, in=0, out=75] (22.center) to (21.center);
\draw [thick, in=-255, out=-180] (21.center) to (23.center);
\draw [thick, in=90, out=-75] (23.center) to (20.center);
\draw [thick, bend left=90] (24.center) to (25.center);
\draw [thick, bend right=90] (24.center) to (25.center);
\draw [thick, style=dashed, bend left, looseness=0.75] (28.center) to (29.center);
\draw [thick, bend right=135, looseness=1.50] (28.center) to (29.center);
\draw [thick, bend right=60] (26.center) to (27.center);
\draw [thick, style=dashed, bend left=90] (26.center) to (27.center);
\draw [thick, bend right=90, looseness=0.75] (30.center) to (31.center);
\draw [thick, style=dashed, bend left=75, looseness=0.75] (30.center) to (31.center);
\draw [thick, style=dashed, bend left=60, looseness=0.75] (32.center) to (33.center);
\draw [thick, bend left=240, looseness=1.25] (32.center) to (33.center);
\draw [thick, color=red, style=dashed, bend left=75, looseness=0.75] (21.center) to (24.center);
\draw [thick, color=red, bend left=255, looseness=0.75] (21.center) to (24.center);
\draw [thick, style=dashed, bend left=90] (8.center) to (9.center);
\draw [thick, in=-105, out=105, looseness=0.50] (41.center) to (22.center);
\draw [thick, in=150, out=-75] (41.center) to (19.center);
\draw [thick, color=red, bend right=60, looseness=0.75] (20.center) to (41.center);
\draw [thick, color=red, in=180, out=45] (15.center) to (48.center);
\draw [thick, color=red, in=-150, out=-15, looseness=0.75] (48.center) to (49.center);
\draw [thick, color=red, in=150, out=30] (49.center) to (13.center);
\draw [thick, color=red, style=dashed, bend left=90, looseness=0.75] (20.center) to (41.center);
\draw [thick, color=blue, bend right=75, looseness=0.50] (50.center) to (51.center);
\draw [thick, color=blue, style=dashed, bend left=90, looseness=0.50] (50.center) to (51.center);
\draw [thick, color=blue, in=-120, out=75, looseness=0.25] (54.center) to (55.center);
\draw [thick, color=blue, style=dashed, in=75, out=60, looseness=0.25] (54.center) to (55.center);
\draw [thick, in=-165, out=0] (58.center) to (59.center);
\draw [thick, in=165, out=15, looseness=1.25] (59.center) to (60.center);
\draw [thick, in=-150, out=-15, looseness=0.25] (60.center) to (61.center);
\draw [thick, in=15, out=-180] (61.center) to (62.center);
\draw [thick, in=-15, out=-165, looseness=1.25] (62.center) to (63.center);
\draw [thick, in=0, out=165, looseness=0.75] (63.center) to (58.center);
\draw [thick, bend right=90, looseness=1.75] (64.center) to (65.center);
\draw [thick, bend left=285] (65.center) to (64.center);
\draw [thick, bend right=60] (66.center) to (67.center);
\draw [thick, bend right=60] (69.center) to (68.center);
\draw [thick, style=dashed, bend left=60, looseness=1.25] (69.center) to (68.center);
\draw [thick, color=red, bend right=75, looseness=0.50] (70.center) to (72.center);
\draw [thick, color=red, style=dashed, bend left=90, looseness=0.50] (70.center) to (72.center);
\draw [thick, color=red, style=dashed, in=-75, out=-105, looseness=1.25] (71.center) to (73.center);
\draw [very thick, ->] (74.center) to (75.center);
\draw [thick, style=dashed, bend left=90] (66.center) to (67.center);
\draw [thick, color=red, in=165, out=45] (73.center) to (81.center);
\draw [thick, color=red, bend right] (81.center) to (82.center);
\draw [thick, color=red, in=150, out=30] (82.center) to (71.center);
\draw [thick, color=blue, bend right=75, looseness=0.50] (83.center) to (84.center);
\draw [thick, color=blue, style=dashed, bend left=90, looseness=0.50] (83.center) to (84.center);
\draw [very thick, ->] (105.center) to (106.center);
\draw [thick, in=-165, out=0] (107.center) to (108.center);
\draw [thick, in=165, out=15, looseness=1.25] (108.center) to (109.center);
\draw [thick, in=-150, out=-15, looseness=0.25] (109.center) to (110.center);
\draw [thick, in=15, out=-180] (110.center) to (111.center);
\draw [thick, in=-15, out=-165, looseness=1.25] (111.center) to (112.center);
\draw [thick, in=0, out=165, looseness=0.75] (112.center) to (107.center);
\draw [thick, bend right=75, looseness=0.50] (113.center) to (114.center);
\draw [thick, bend right=120, looseness=2.00] (114.center) to (113.center);
\draw [thick, bend right=60] (115.center) to (116.center);
\draw [thick, bend right=60] (118.center) to (117.center);
\draw [thick, style=dashed, bend left=60, looseness=1.25] (118.center) to (117.center);
\draw [thick, color=red, bend right=75, looseness=0.50] (119.center) to (121.center);
\draw [thick, color=red, style=dashed, bend left=90, looseness=0.50] (119.center) to (121.center);
\draw [thick, color=red, style=dashed, in=-105, out=-75] (120.center) to (122.center);
\draw [thick, style=dashed, bend left=90] (115.center) to (116.center);
\draw [thick, color=red, in=180, out=45] (122.center) to (129.center);
\draw [thick, color=red, in=-150, out=-15, looseness=0.75] (129.center) to (130.center);
\draw [thick, color=red, in=150, out=30] (130.center) to (120.center);
\draw [thick, color=blue, bend right=75, looseness=0.50] (131.center) to (132.center);
\draw [thick, color=blue, style=dashed, bend left=90, looseness=0.50] (131.center) to (132.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{center}
\caption{The curves $\gamma_1,\gamma_2,\gamma_3$ are distinct non-self-intersecting closed geodesics on the twice-punctured torus. By shrinking these curves we can reach the boundaries of the string vertex $\mathcal{V}_{1,2}$.}
\label{twiceptorus}
\end{figure}
\noindent{\underline{\bf Naive interaction vertex $\mathbf{S}_{1,2}$}}: Let us start by constructing the naive one-loop interaction vertex $\mathbf{S}_{1,2}$ with two external states represented by the unintegrated vertex operators $V_1$ and $V_2$. It is given by
\begin{equation}\label{eq:the bosonic-string amplitude}
\mathbf{S}_{1,2} =(2\pi \mathrm{i})^{-2} \int_{\mathcal{W}_{1,2}} d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_2}d\tau_{\gamma_2} ~\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{2}})b(\mathbf{l}_{ \gamma_{2}}) |V_1\rangle_{w_1}\otimes|V_2\rangle_{w_2},
\end{equation}
where $|\mathcal{R}_{1,2}\rangle$ is the surface state associated with the twice-punctured torus, and $|V_i\rangle_{w_i}$ denotes the state inserted at the $i^{\text{th}}$ puncture of the torus using the coordinate $e^{\frac{\pi^2}{c_*}}w_i$ induced from the hyperbolic metric on $\mathcal{R}_{1,2}$. The parameters $(\tau_{\gamma_j},\ell_{\gamma_j}),~j=1,2$ denote the Fenchel-Nielsen coordinates for the Teichm\"uller space $\mathcal{T}_{1,2}$ of twice-punctured tori defined with respect to the curves $\gamma_1$ and $\gamma_2$, see figure \ref{twiceptorus}. The antighost insertions are given by
\begin{alignat}{1}\label{eq:the expressions for b(v)}
b(\mathbf{t}_{\gamma_i})&=\int_{\mathcal{F}} d^2z\left(b_{zz}\mathbf{t}_{\gamma_i}+b_{\bar z\bar z}\overline{\mathbf{t}}_{\gamma_i}\right),\nonumber
\\
b(\mathbf{l}_{\gamma_i})&=\int_{\mathcal{F}} d^2z\left(b_{zz}\mathbf{l}_{\gamma_i}+b_{\bar z\bar z}\overline{\mathbf{l}}_{\gamma_i}\right),
\end{alignat}
where $\mathcal{F}$ denotes the fundamental domain of the action of $\Gamma_{1,2}$, the Fuchsian group associated with $\mathcal{R}_{1,2}$, in $\mathbb{H}$. $\mathbf{t}_{\gamma_i}$ and $\mathbf{l}_{\gamma_i}$ are the Beltrami differentials associated with the Fenchel-Nielsen coordinates $(\tau_{\gamma_i},\ell_{\gamma_i})$. Finally, $\mathcal{W}_{1,2}$ is the region covered by the naive string vertex $\mathcal{V}_{1,2}^0$ in the moduli space. Although a copy of $\mathcal{W}_{1,2}$ is a subspace in $\mathcal{T}_{1,2}$, it has no simple description in terms of the Fenchel-Nielsen coordinates. \par
In order to evaluate $\mathbf{S}_{1,2}$ we must specify $\mathcal{W}_{1,2}$ in terms of the Fenchel-Nielsen coordinates. This seems impossible, since there is no simple description of $\mathcal{W}_{1,2}$, or even $\mathcal{M}_{1,2}$, in terms of $(\tau_{\gamma_1},\ell_{\gamma_1},\tau_{\gamma_2},\ell_{\gamma_2})$. However, there is an interesting resolution of this issue. The lengths of the non-self-intersecting closed geodesics on $\mathcal{R}_{1,2}$ satisfy the following curious identity \cite{McShane1}:
\begin{equation}\label{gmidentityp}
\sum_{g_1\in \text{MCG}(\mathcal{R}_{1,2},\gamma_1+\gamma_3)}\frac{2}{1+e^{\frac{\ell_{g_1\cdot\gamma_1}+\ell_{g_1\cdot\gamma_3}}{2}}}+\sum_{g_2\in \text{MCG}(\mathcal{R}_{1,2},\gamma_2)}\frac{2}{1+e^{\frac{\ell_{g_2\cdot\gamma_2}}{2}}}=1,
\end{equation}
where $\gamma_1,\gamma_2$ and $\gamma_3$ are the non-self intersecting closed geodesics on $\mathcal{R}_{1,2}$ as shown in figure \ref{twiceptorus}, and $\ell_{\gamma_i}$ denotes the hyperbolic length of $\gamma_i$. $\text{MCG}(\mathcal{R}_{1,2},\gamma_1+\gamma_3)$ denotes the subgroup of mapping class group (MCG) of $\mathcal{R}_{1,2}$ that acts non-trivially only on the curve $\gamma_1+\gamma_3$. Similarly, $\text{MCG}(\mathcal{R}_{1,2},\gamma_2)$ denotes the subgroup of MCG of $\mathcal{R}_{1,2}$ that acts non-trivially only on the curve $\gamma_2$. \par
The mapping class group $\text{MCG}(\mathcal{R}_{1,2})$ can be factorized in different ways as follows:
\begin{align}\label{MCGfactR12}
\text{MCG}(\mathcal{R}_{1,2})&=\text{MCG}(\mathcal{R}_{1,2},\gamma_1+\gamma_3)\times \text{Dehn}(\gamma_1)\times \text{Dehn}(\gamma_3),\nonumber\\
\text{MCG}(\mathcal{R}_{1,2})&=\text{MCG}(\mathcal{R}_{1,2},\gamma_2)\times \text{Dehn}^*(\gamma_2)\times \text{MCG}(\mathcal{R}_{1,1}(\ell_{\gamma_2})),
\end{align}
where $ \text{MCG}(\mathcal{R}_{1,1}(\ell_{\gamma_2}))$ denotes the MCG of the torus $\mathcal{R}_{1,1}(\ell_{\gamma_2})$ with a border having length $\ell_{\gamma_2}$. $\text{Dehn}(\gamma_i)$ denotes the group generated by the Dehn twist $\tau_{\gamma_i}\to \tau_{\gamma_i}+\ell_{\gamma_i}$ and $\text{Dehn}^*(\gamma_i)$ denotes the group generated by the half Dehn twist $\tau_{\gamma_i}\to \tau_{\gamma_i}+\frac{1}{2}\ell_{\gamma_i}$. Interestingly, the lengths of the non-self intersecting closed geodesics on $\mathcal{R}_{1,1}(\ell_{\gamma_2})$ also satisfy an identity of the kind (\ref{gmidentityp}) \cite{Mirzakhani:2006fta}:
\begin{equation}\label{torusMidentity}
\sum_{g\in \text{MCG}(\mathcal{R}_{1,1}(\ell_{\gamma_2}))}\left[1-\frac{1}{\ell_{\gamma_2}}\mathrm{ln}\left(\frac{\mathrm{cosh}(\frac{\ell_{g\cdot\gamma_1}}{2})+\mathrm{cosh}(\frac{\ell_{\gamma_2}+\ell_{g\cdot\gamma_1}}{2})}{\mathrm{cosh}(\frac{\ell_{g\cdot\gamma_1}}{2})+\text{cosh}(\frac{\ell_{\gamma_2}-\ell_{g\cdot\gamma_1}}{2})}\right)\right]=1.
\end{equation}
We also have an identity that involves the sum over all images of the elements in the group $\text{Dehn}(\gamma_i)$, and is given by
\begin{equation}\label{Dehn}
\sum_{g\in \text{Dehn}(\gamma_i)}\text{sinc}^2\left(\frac{\tau_{g\cdot\gamma_i}}{\ell_{g\cdot\gamma_i}}\right)= \sum_{g\in \text{Dehn}^*(\gamma_i)}\text{sinc}^2\left(\frac{2\tau_{g\cdot\gamma_i}}{\ell_{g\cdot\gamma_i}}\right)=1,
\end{equation}
where $\text{sinc}(x)=\frac{\text{sin}\left(\pi x\right)}{\pi x}$. The identity (\ref{Dehn}) can be verified using the following well-known identity
\begin{equation}\label{identity}
\sum_{k=-\infty}^{\infty}\text{sinc}^2\left(x-k \right)=1,\qquad x\in \mathbb{R}.
\end{equation}
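Identity (\ref{identity}), and through it (\ref{Dehn}), is easy to confirm numerically; in the sketch below the truncation $N$ is an illustrative choice that controls the $O(1/N)$ tail of the sum:

```python
import numpy as np

def sinc2(u):
    return np.sinc(u) ** 2  # np.sinc(u) = sin(pi u)/(pi u)

def partition_sum(x, N=200000):
    # truncated version of sum_k sinc^2(x - k); the neglected tail is O(1/N)
    k = np.arange(-N, N + 1)
    return float(np.sum(sinc2(x - k)))

def dehn_sum(tau, ell, N=200000):
    # truncated Dehn-twist sum sum_n sinc^2((tau + n*ell)/ell), which reduces
    # to the partition sum above under the substitution x = tau/ell
    n = np.arange(-N, N + 1)
    return float(np.sum(sinc2((tau + n * ell) / ell)))
```

Both sums return $1$ (up to the truncation tail) for any real $x$, respectively any twist $\tau$ and length $\ell>0$.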
Combining the identities (\ref{gmidentityp}), (\ref{torusMidentity}) and (\ref{Dehn}) gives the following identity
\begin{equation}\label{gmidentityp1}
\sum_{g\in \text{MCG}(\mathcal{R}_{1,2})}G_1(\ell_{g\cdot\gamma_1},\tau_{g\cdot\gamma_1},\ell_{g\cdot\gamma_3},\tau_{g\cdot\gamma_3})+\sum_{g\in \text{MCG}(\mathcal{R}_{1,2})}G_2(\ell_{g\cdot\gamma_1},\tau_{g\cdot\gamma_1},\ell_{g\cdot\gamma_2},\tau_{g\cdot\gamma_2})=1,
\end{equation}
where $G_1$ and $G_2$ are given by
\begin{align}\label{gmidentityp2}
G_1(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_3},\tau_{\gamma_3})&=\frac{2~\text{sinc}^2\left(\frac{\tau_{\gamma_1}}{\ell_{\gamma_1}}\right)\text{sinc}^2\left(\frac{\tau_{\gamma_3}}{\ell_{\gamma_3}}\right)}{1+e^{\frac{\ell_{\gamma_1}+\ell_{\gamma_3}}{2}}}, \nonumber\\
G_2(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_2},\tau_{\gamma_2})&=\frac{2~\text{sinc}^2\left(\frac{2\tau_{\gamma_2}}{\ell_{\gamma_2}}\right)\left[1-\frac{1}{\ell_{\gamma_2}}\mathrm{ln}\left(\frac{\mathrm{cosh}(\frac{\ell_{\gamma_1}}{2})+\mathrm{cosh}(\frac{\ell_{\gamma_2}+\ell_{\gamma_1}}{2})}{\mathrm{cosh}(\frac{\ell_{\gamma_1}}{2})+\text{cosh}(\frac{\ell_{\gamma_2}-\ell_{\gamma_1}}{2})}\right)\right]}{1+e^{\frac{\ell_{\gamma_2}}{2}}}.
\end{align}
Notice that the functions $G_1$ and $G_2$ have the following decay property
\begin{equation}\label{decay}
\lim_{\ell_{\gamma_3}\to \frac{1}{c_*}}G_1(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_3},\tau_{\gamma_3})=\mathcal{O}(e^{-1/c_*}),\qquad \lim_{\ell_{\gamma_2}\to \frac{1}{c_*}}G_2(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_2},\tau_{\gamma_2})=\mathcal{O}(e^{-1/c_*}).
\end{equation}
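The source of (\ref{decay}) is the factor $2/(1+e^{(\ell_{\gamma_1}+\ell_{\gamma_3})/2})$ in $G_1$ (and its analogue in $G_2$), which is exponentially suppressed at large geodesic length. A quick numerical illustration, with arbitrary sample lengths:

```python
import math

def g1_prefactor(l1, l3):
    # the length-dependent factor of G_1 that controls the decay
    return 2.0 / (1.0 + math.exp((l1 + l3) / 2.0))

# the prefactor is strictly bounded by 2*exp(-(l1 + l3)/2)
bound_ok = g1_prefactor(1.0, 8.0) < 2.0 * math.exp(-4.5)

# increasing the length by 2 suppresses the factor by approximately e^{-1}
ratio = g1_prefactor(1.0, 18.0) / g1_prefactor(1.0, 16.0)
```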
Using the identity (\ref{gmidentityp1}) we can express the amplitude $\mathbf{S}_{1,2}$ as an integral over the Teichm\"uller space $\mathcal{T}_{1,2}$ of twice-punctured tori as follows:
\begin{align}\label{eq:the bosonic-string amplitude1}
\mathbf{S}_{1,2}&=(2\pi \mathrm{i})^{-2} \int_{\mathcal{TW}^{\mathbf{P}_1}_{1,2}} d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_2}d\tau_{\gamma_2} ~G_1(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_2},\tau_{\gamma_2})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{2}})b(\mathbf{l}_{ \gamma_{2}}) |{V}_1\rangle_{w_1}\otimes|{V}_2\rangle_{w_2}\nonumber\\
&+(2\pi \mathrm{i})^{-2} \int_{\mathcal{TW}^{\mathbf{P}_2}_{1,2}} d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_3}d\tau_{\gamma_3} ~G_2(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_3},\tau_{\gamma_3})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{3}})b(\mathbf{l}_{ \gamma_{3}}) |{V}_1\rangle_{w_1}\otimes|{V}_2\rangle_{w_2},
\end{align}
where $\mathcal{TW}^{\mathbf{P}_1}_{1,2}$ is the image of $\mathcal{W}_{1,2}$ in the Teichm\"uller space defined with respect to the pair of pants decomposition $\mathbf{P}_1$ given by the curves $\gamma_1$ and $\gamma_2$, and $\mathcal{TW}^{\mathbf{P}_2}_{1,2}$ is the union of all the images of $\mathcal{W}_{1,2}$ in the Teichm\"uller space defined with respect to the pair of pants decomposition $\mathbf{P}_2$ given by the curves $\gamma_1$ and $\gamma_3$. Although $\mathcal{TW}^{\mathbf{P}_1}_{1,2}$ and $\mathcal{TW}^{\mathbf{P}_2}_{1,2}$ do not have a nice description, the decay behaviour (\ref{decay}) of the functions $G_1$ and $G_2$ allows us to replace them with the effective regions $E\mathcal{W}^{\mathbf{P}_1}_{1,2}$ and $E\mathcal{W}^{\mathbf{P}_2}_{1,2}$ without changing the value of $\mathbf{S}_{1,2}$. The string vertex region $\mathcal{W}_{1,2}$ has the property that it does not contain any hyperbolic Riemann surface having a simple closed geodesic of length less than $c_*$. Consequently, $\mathbf{S}_{1,2}$ computed by integrating the off-shell string measure over $\mathcal{W}_{1,2}$ receives no contribution from hyperbolic Riemann surfaces having a simple closed geodesic of length less than $c_*$. Therefore, $\mathbf{S}_{1,2}$ computed by integrating the differential form in $\mathcal{T}_{1,2}$ over $E\mathcal{W}^{\mathbf{P}_1}_{1,2}$ must also not receive any finite contribution from such surfaces. This is true if we identify $E\mathcal{W}^{\mathbf{P}_1}_{1,2}$ with the following region in $\mathcal{T}_{1,2}$
\begin{equation}
E\mathcal{W}^{\mathbf{P}_1}_{1,2}:\qquad \ell_{\gamma_1}\in [c_*,\infty), \qquad \ell_{\gamma_2}\in [c_*,\infty), \qquad \tau_{\gamma_1}\in(-\infty,\infty), \qquad \tau_{\gamma_2}\in (-\infty,\infty),
\end{equation}
and $E\mathcal{W}^{\mathbf{P}_2}_{1,2}$ with the following region
\begin{equation}
E\mathcal{W}^{\mathbf{P}_2}_{1,2}:\qquad \ell_{\gamma_1}\in [c_*,\infty), \qquad \ell_{\gamma_3}\in [c_*,\infty), \qquad \tau_{\gamma_1}\in(-\infty,\infty), \qquad \tau_{\gamma_3}\in (-\infty,\infty).
\end{equation}
Notice that the region $E\mathcal{W}^{\mathbf{P}_1}_{1,2}$ includes hyperbolic Riemann surfaces on which the simple closed geodesic $\gamma_3$ has length less than $c_*$. Interestingly, when $\ell_{\gamma_3}\to c_*$ the length of $\gamma_2$ grows very fast and the function $G_2$ decays exponentially. As a result, the integration over the region $E\mathcal{W}^{\mathbf{P}_1}_{1,2}$ does not include any finite contribution from hyperbolic Riemann surfaces with the simple closed geodesic $\gamma_3$ having length less than $c_*$. A similar statement holds for the integration over $E\mathcal{W}^{\mathbf{P}_2}_{1,2}$. Then we can write $\mathbf{S}_{1,2}$ as
\begin{align}\label{eq:the bosonic-string amplitude1eff}
\mathbf{S}_{1,2}&=(2\pi \mathrm{i})^{-2} \int_{E\mathcal{W}^{\mathbf{P}_1}_{1,2}} d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_2}d\tau_{\gamma_2} ~G_1(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_2},\tau_{\gamma_2})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{2}})b(\mathbf{l}_{ \gamma_{2}}) |{V}_1\rangle_{w_1}\otimes|{V}_2\rangle_{w_2}\nonumber\\
&+(2\pi \mathrm{i})^{-2} \int_{E\mathcal{W}^{\mathbf{P}_2}_{1,2}} d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_3}d\tau_{\gamma_3} ~G_2(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_3},\tau_{\gamma_3})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{3}})b(\mathbf{l}_{ \gamma_{3}}) |{V}_1\rangle_{w_1}\otimes|{V}_2\rangle_{w_2},
\end{align}
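Although the twist integrals above run over the whole real line, they converge: the sinc$^2$ weight integrates to one Dehn-twist period, $\int_{-\infty}^{\infty}\text{sinc}^2(\tau/\ell)\,d\tau=\ell$, so each unfolded $\tau$-integral is effectively the original fundamental-domain integral. A rough numerical check, where the cutoff $R$ and step $h$ are illustrative:

```python
import numpy as np

def twist_weight_integral(ell, R=30000.0, h=0.05):
    # trapezoidal approximation of int_{-R}^{R} sinc^2(tau/ell) dtau;
    # the neglected tail is O(ell^2 / R)
    tau = np.arange(-R, R + h, h)
    y = np.sinc(tau / ell) ** 2  # np.sinc(u) = sin(pi u)/(pi u)
    return float(np.sum(y[1:] + y[:-1]) * h / 2.0)
```

For the half-twist weight $\text{sinc}^2(2\tau/\ell)$ the same computation gives $\ell/2$, consistent with the half Dehn twist period.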
\noindent{\underline{\bf Corrected interaction vertex $\widetilde{\mathbf{S}}_{1,2}$}}: The naive interaction vertex $\mathbf{S}_{1,2}$ must be modified to make it suitable for constructing a string field theory with approximate gauge invariance. The modification can be implemented once we specify the subregions $\mathbf{W}^{(0)}_{1,2}, \mathbf{W}^{(1)}_{1,2}$ and $\mathbf{W}^{(2)}_{1,2}$ inside $\mathcal{W}_{1,2}$. \par
The subregion $\mathbf{W}^{(0)}_{1,2}$ has the property that it does not include any hyperbolic Riemann surface with one or more simple closed geodesics having length less than $c_*(1+\delta)$. Let us denote the union of all the images of $\mathbf{W}^{(0)}_{1,2}$ in $\mathcal{T}_{1,2}$ defined with respect to the pants decomposition $\mathbf{P}_1$ as $\mathcal{T}\mathbf{W}^{\mathbf{P_1},(0)}_{1,2}$. For $\mathcal{T}_{1,2}$ defined with respect to the pants decomposition $\mathbf{P}_2$, the union of all images of $\mathbf{W}^{(0)}_{1,2}$ is denoted as $\mathcal{T}\mathbf{W}^{\mathbf{P_2},(0)}_{1,2}$. Then by repeating the arguments in the previous paragraph we can identify the effective region $E\mathbf{W}^{\mathbf{P}^1,(0)}_{1,2}$ in $\mathcal{T}_{1,2}$ that corresponds to $\mathcal{T}\mathbf{W}^{\mathbf{P}^1,(0)}_{1,2}$ with the following region
\begin{equation}
E\mathbf{W}^{\mathbf{P}^1,(0)}_{1,2}: \quad \ell_{\gamma_1}\in [c_*(1+\delta),\infty), \quad \ell_{\gamma_2}\in [c_*(1+\delta),\infty), \quad \tau_{\gamma_1}\in (-\infty,\infty), \quad \tau_{\gamma_2}\in(-\infty,\infty).
\end{equation}
Similarly, we can identify the effective region $E\mathbf{W}^{\mathbf{P}^2,(0)}_{1,2}$ that corresponds to $\mathcal{T}\mathbf{W}^{\mathbf{P}^2,(0)}_{1,2}$ with the following region
\begin{equation}
E\mathbf{W}^{\mathbf{P}^2,(0)}_{1,2}: \quad \ell_{\gamma_1}\in [c_*(1+\delta),\infty),\quad \ell_{\gamma_3}\in [c_*(1+\delta),\infty), \quad \tau_{\gamma_1}\in (-\infty,\infty), \quad \tau_{\gamma_3}\in(-\infty,\infty).
\end{equation}
Now let us analyze the subregion $\mathbf{W}^{(1)}_{1,2}$. It has the property that any hyperbolic Riemann surface in this region has only one simple closed geodesic having length between $c_*$ and $c_*(1+\delta)$. Let us denote the union of all the images of $\mathbf{W}^{(1)}_{1,2}$ in $\mathcal{T}_{1,2}$ defined with respect to the pants decomposition $\mathbf{P}_1$ as $\mathcal{T}\mathbf{W}^{\mathbf{P_1},(1)}_{1,2}$ and that defined with respect to the pants decomposition $\mathbf{P}_2$ as $\mathcal{T}\mathbf{W}^{\mathbf{P_2},(1)}_{1,2}$. We can identify the effective regions corresponding to
$\mathcal{T}\mathbf{W}^{\mathbf{P_1},(1)}_{1,2}$ and $\mathcal{T}\mathbf{W}^{\mathbf{P_2},(1)}_{1,2}$ as follows:
\begin{align}
E\mathbf{W}^{\mathbf{P}^1,(1)}_{1,2}&= E\mathbf{W}^{\mathbf{P}^1,\gamma_1}_{1,2}\cup E\mathbf{W}^{\mathbf{P}^1,\gamma_2}_{1,2},\nonumber\\
E\mathbf{W}^{\mathbf{P}^2,(1)}_{1,2} &= E\mathbf{W}^{\mathbf{P}^2,\gamma_1}_{1,2}\cup E\mathbf{W}^{\mathbf{P}^2,\gamma_3}_{1,2},
\end{align}
where
\begin{align}
E\mathbf{W}^{\mathbf{P}^1,\gamma_1}_{1,2}&:\quad \ell_{\gamma_1}\in [c_*,c_*(1+\delta)),\quad \ell_{\gamma_2}\in [c_*(1+\delta),\infty), \quad \tau_{\gamma_1}\in (-\infty,\infty), \quad \tau_{\gamma_2}\in(-\infty,\infty), \nonumber\\
E\mathbf{W}^{\mathbf{P}^1,\gamma_2}_{1,2}&:\quad \ell_{\gamma_1}\in [c_*(1+\delta),\infty),\quad \ell_{\gamma_2}\in [c_*,c_*(1+\delta)), \quad \tau_{\gamma_1}\in (-\infty,\infty), \quad \tau_{\gamma_2}\in(-\infty,\infty), \nonumber\\
E\mathbf{W}^{\mathbf{P}^2,\gamma_1}_{1,2}&:\quad \ell_{\gamma_1}\in [c_*,c_*(1+\delta)),\quad \ell_{\gamma_3}\in [c_*(1+\delta),\infty), \quad \tau_{\gamma_1}\in (-\infty,\infty), \quad \tau_{\gamma_3}\in(-\infty,\infty), \nonumber\\
E\mathbf{W}^{\mathbf{P}^2,\gamma_3}_{1,2}&:\quad \ell_{\gamma_1}\in [c_*(1+\delta),\infty),\quad \ell_{\gamma_3}\in [c_*,c_*(1+\delta)), \quad \tau_{\gamma_1}\in (-\infty,\infty), \quad \tau_{\gamma_3}\in(-\infty,\infty).
\end{align}
Finally, the effective regions $E\mathbf{W}^{\mathbf{P}^1, \gamma_1,\gamma_2}_{1,2}$ and $E\mathbf{W}^{\mathbf{P}^2, \gamma_1,\gamma_3}_{1,2}$ for the subregion $\mathbf{W}^{(2)}_{1,2}$ in $\mathcal{T}_{1,2}$, defined with respect to the pants decompositions $\mathbf{P}^1$ and $\mathbf{P}^2$ respectively, are given by
\begin{align}
E\mathbf{W}^{\mathbf{P}^1,\gamma_1,\gamma_2}_{1,2}&:\quad \ell_{\gamma_1}\in [c_*,c_*(1+\delta)),\quad \ell_{\gamma_2}\in [c_*,c_*(1+\delta)), \quad \tau_{\gamma_1}\in (-\infty,\infty), \quad \tau_{\gamma_2}\in(-\infty,\infty), \nonumber\\
E\mathbf{W}^{\mathbf{P}^2,\gamma_1,\gamma_3}_{1,2}&:\quad \ell_{\gamma_1}\in [c_*,c_*(1+\delta)),\quad \ell_{\gamma_3}\in [c_*,c_*(1+\delta)), \quad \tau_{\gamma_1}\in (-\infty,\infty), \quad \tau_{\gamma_3}\in(-\infty,\infty).
\end{align}
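The bookkeeping behind these subregions, namely counting how many of the two core geodesics of a pants decomposition lie in the collar band $[c_*, c_*(1+\delta))$, can be summarized in a small helper; the numerical values in the usage note below are illustrative:

```python
def subregion(l_a, l_b, c_star, delta):
    """Classify a point of the effective region for a pants decomposition
    with core geodesic lengths (l_a, l_b): returns 0, 1 or 2 according to
    how many lengths lie in the band [c_star, c_star*(1 + delta)), i.e.
    whether the point contributes through E W^(0), E W^(1) or E W^(2)."""
    if l_a < c_star or l_b < c_star:
        raise ValueError("outside the string vertex region")
    in_band = lambda l: c_star <= l < c_star * (1.0 + delta)
    return int(in_band(l_a)) + int(in_band(l_b))
```

For instance, with $c_*=0.5$ and $\delta=0.2$, the point $(\ell_a,\ell_b)=(0.55,1.0)$ has exactly one core geodesic in the band and therefore contributes through the $E\mathbf{W}^{(1)}$ terms.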
Given that we have identified the effective regions for the subregions in $\mathcal{W}_{1,2}$, let us construct the corrected interaction vertex $\widetilde{\mathbf{S}}_{1,2}$. It is given by
\begin{align}\label{eq:the bosonic-string amplitude2}
\widetilde{\mathbf{S}}_{1,2}&= \int_{ E\mathbf{W}^{\mathbf{P}^1,(0)}_{1,2}} \frac{d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_2}d\tau_{\gamma_2}}{(2\pi \mathrm{i})^{2}} ~G_1(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_2},\tau_{\gamma_2})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{2}})b(\mathbf{l}_{ \gamma_{2}}) |{V}_1\rangle_{\widetilde{w}_1}\otimes|{V}_2\rangle_{\widetilde{w}_2}\nonumber\\
&+ \int_{ E\mathbf{W}^{\mathbf{P}^2,(0)}_{1,2}} \frac{d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_3}d\tau_{\gamma_3}}{(2\pi \mathrm{i})^{2}} ~G_2(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_3},\tau_{\gamma_3})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{3}})b(\mathbf{l}_{ \gamma_{3}}) |{V}_1\rangle_{{\widetilde{w}}_1}\otimes|{V}_2\rangle_{{\widetilde{w}}_2}\nonumber\\
&+ \int_{ E\mathbf{W}^{\mathbf{P}^1,\gamma_1}_{1,2}} \frac{d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_2}d\tau_{\gamma_2}}{(2\pi \mathrm{i})^{2}} ~G_1(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_2},\tau_{\gamma_2})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{2}})b(\mathbf{l}_{ \gamma_{2}}) |{V}_1\rangle_{{\widetilde{w}}^{\gamma_1}_1}\otimes|{V}_2\rangle_{{\widetilde{w}}^{\gamma_1}_2}\nonumber\\
&+ \int_{ E\mathbf{W}^{\mathbf{P}^1,\gamma_2}_{1,2}} \frac{d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_2}d\tau_{\gamma_2}}{(2\pi \mathrm{i})^{2}} ~G_1(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_2},\tau_{\gamma_2})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{2}})b(\mathbf{l}_{ \gamma_{2}}) |{V}_1\rangle_{{\widetilde{w}}^{\gamma_2}_1}\otimes|{V}_2\rangle_{{\widetilde{w}}^{\gamma_2}_2}\nonumber\\
&+ \int_{ E\mathbf{W}^{\mathbf{P}^2,\gamma_1}_{1,2}} \frac{d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_3}d\tau_{\gamma_3}}{(2\pi \mathrm{i})^{2}} ~G_2(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_3},\tau_{\gamma_3})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{3}})b(\mathbf{l}_{ \gamma_{3}}) |{V}_1\rangle_{{\widetilde{w}}^{\gamma_1}_1}\otimes|{V}_2\rangle_{{\widetilde{w}}^{\gamma_1}_2}\nonumber\\
&+\int_{ E\mathbf{W}^{\mathbf{P}^2,\gamma_3}_{1,2}} \frac{d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_3}d\tau_{\gamma_3}}{(2\pi \mathrm{i})^{2}} ~G_2(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_3},\tau_{\gamma_3})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{3}})b(\mathbf{l}_{ \gamma_{3}}) |{V}_1\rangle_{{\widetilde{w}}^{\gamma_3}_1}\otimes|{V}_2\rangle_{{\widetilde{w}}^{\gamma_3}_2}\nonumber\\
&+ \int_{ E\mathbf{W}^{\mathbf{P}^1,\gamma_1,\gamma_2}_{1,2}} \frac{d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_2}d\tau_{\gamma_2}}{(2\pi \mathrm{i})^{2}} ~G_1(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_2},\tau_{\gamma_2})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{2}})b(\mathbf{l}_{ \gamma_{2}}) |{V}_1\rangle_{{\widetilde{w}}^{\gamma_1\gamma_2}_1}\otimes|{V}_2\rangle_{{\widetilde{w}}^{\gamma_1\gamma_2}_2}\nonumber\\
&+ \int_{ E\mathbf{W}^{\mathbf{P}^2,\gamma_1,\gamma_3}_{1,2}} \frac{d\ell_{\gamma_1}d\tau_{\gamma_1}d\ell_{\gamma_3}d\tau_{\gamma_3}}{(2\pi \mathrm{i})^{2}} ~G_2(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_3},\tau_{\gamma_3})\langle\mathcal{R}_{1,2}|b(\mathbf{t}_{\gamma_1})b(\mathbf{l}_{ \gamma_1}) b(\mathbf{t}_{\gamma_{3}})b(\mathbf{l}_{ \gamma_{3}}) |{V}_1\rangle_{{\widetilde{w}}^{\gamma_1\gamma_3}_1}\otimes|{V}_2\rangle_{{\widetilde{w}}^{\gamma_1\gamma_3}_2}.
\end{align}
The local coordinates are as follows
\begin{align}
\widetilde{w}_j&=e^{\frac{\pi^2}{c_*}}w_j,\nonumber\\
\widetilde{w}^{\gamma_1}_j&=e^{\frac{c_*^2}{6}f(\ell_{\gamma_1})Y^{(1)}_{1j}}e^{\frac{\pi^2}{c_*}}w_{j},\nonumber\\
\widetilde{w}^{\gamma_2}_j&=e^{\frac{c_*^2}{6}f(\ell_{\gamma_2})Y^{(1)}_{2j}}e^{\frac{\pi^2}{c_*}}w_{j},\nonumber\\
\widetilde{w}^{\gamma_3}_j&=e^{\frac{c_*^2}{6}f(\ell_{\gamma_3})Y^{(1)}_{3j}}e^{\frac{\pi^2}{c_*}}w_{j},\nonumber\\
\widetilde{w}^{\gamma_1\gamma_2}_j&=e^{\frac{c_*^2}{6}\left[f(\ell_{\gamma_1})Y_{1j}^{(2)}+f(\ell_{\gamma_2})Y^{(2)}_{2j}\right]}e^{\frac{\pi^2}{c_*}}w_{j},\nonumber\\
\widetilde{w}^{\gamma_1\gamma_3}_j&=e^{\frac{c_*^2}{6}\left[f(\ell_{\gamma_1})Y_{1j}^{(2)}+f(\ell_{\gamma_3})Y^{(2)}_{3j}\right]}e^{\frac{\pi^2}{c_*}}w_{j},\nonumber
\end{align}
where $f$ is an arbitrary smooth real function of the geodesic length defined in the interval $\left(c_*,c_*+\delta c_*\right)$, such that $f(c_*)=1$ and $f(c_*+\delta c_*)=0$. The coefficient $Y^{(1)}_{1j}$ is given by
\begin{align}
Y^{(1)}_{1j}&=\sum_{q=1}^2\sum_{c_i^q,d_i^q}\frac{\pi^2}{|c_i^q|^4}\nonumber\\
c_i^q>0 \qquad &d_i^q~\text{mod}~c_i^q \qquad\left(\begin{array}{cc}* & * \\c_i^q & d_i^q\end{array}\right)\in \quad (\sigma_{\gamma_1}^q)^{-1}\Gamma_{0,4}\sigma_j.
\end{align}
where the transformation $\sigma_j^{-1}$ maps the cusp corresponding to the $j^{th}$ puncture to $\infty$ and $(\sigma_{\gamma_1}^q)^{-1}$ maps the cusp corresponding to one of the two punctures, marked as $q$, obtained by degenerating the curve $\gamma_1$, to $\infty$. $\Gamma_{0,4}$ is the Fuchsian group of a four-punctured hyperbolic Riemann surface with Fenchel-Nielsen parameters $(\ell_{\gamma_1},\tau_{\gamma_1},\ell_{\gamma_2},\tau_{\gamma_2})$. $Y^{(1)}_{2j}$ is given by
\begin{align}
Y^{(1)}_{2j}&=\sum_{q=1}^2\sum_{c_i^q,d_i^q}\frac{\pi^2}{|c_i^q|^4}\nonumber\\
c_i^q>0 \qquad &d_i^q~\text{mod}~c_i^q \qquad\left(\begin{array}{cc}* & * \\c_i^q & d_i^q\end{array}\right)\in \quad (\sigma_{\gamma_2}^q)^{-1}\Gamma_{0,4}\sigma_j.
\end{align}
where $(\sigma_{\gamma_2}^q)^{-1}$ maps the cusp corresponding to one of the two punctures, marked as $q$, obtained by degenerating the curve $\gamma_2$, to $\infty$. $Y^{(1)}_{3j}$ is given by
\begin{align}
Y^{(1)}_{3j}&=\sum_{q=1}^2\sum_{c_i^q,d_i^q}\frac{\pi^2}{|c_i^q|^4}\nonumber\\
c_i^q>0 \qquad &d_i^q~\text{mod}~c_i^q \qquad\left(\begin{array}{cc}* & * \\c_i^q & d_i^q\end{array}\right)\in \quad (\sigma_{\gamma_3}^q)^{-1}\Gamma_{0,4}\sigma_j,
\end{align}
where $(\sigma_{\gamma_3}^q)^{-1}$ maps the cusp corresponding to one of the two punctures, marked as $q$, obtained by degenerating the curve $\gamma_3$, to $\infty$. $Y^{(2)}_{1j}$ is given by
\begin{align}
Y^{(2)}_{1j}&=\sum_{q=1}^2\sum_{c_i^q,d_i^q}\pi^{2}\frac{\epsilon(j,q)}{|c_i^q|^4}\nonumber\\
c_i^q>0 \qquad &d_i^q~\text{mod}~c_i^q \qquad\left(\begin{array}{cc}* & * \\c_i^q & d_i^q\end{array}\right)\in \quad (\sigma_{\gamma_1}^q)^{-1}\Gamma_{0,3}\sigma_j,
\end{align}
where $\Gamma_{0,3}$ is the Fuchsian group of a thrice-punctured hyperbolic sphere. The factor $\epsilon(j,q)$ is one if both the $j^{th}$ puncture and the puncture denoted by the index $q$ obtained by degenerating the curve $\gamma_1$ belong to the same thrice-punctured sphere; otherwise $\epsilon(j,q)$ is zero. $Y^{(2)}_{2j}$ is given by
\begin{align}
Y^{(2)}_{2j}&=\sum_{q=1}^2\sum_{c_i^q,d_i^q}\pi^{2}\frac{\epsilon(j,q)}{|c_i^q|^4}\nonumber\\
c_i^q>0 \qquad &d_i^q~\text{mod}~c_i^q \qquad\left(\begin{array}{cc}* & * \\c_i^q & d_i^q\end{array}\right)\in \quad (\sigma_{\gamma_2}^q)^{-1}\Gamma_{0,3}\sigma_j,
\end{align}
and $Y^{(2)}_{3j}$ is given by
\begin{align}
Y^{(2)}_{3j}&=\sum_{q=1}^2\sum_{c_i^q,d_i^q}\pi^{2}\frac{\epsilon(j,q)}{|c_i^q|^4}\nonumber\\
c_i^q>0 \qquad &d_i^q~\text{mod}~c_i^q \qquad\left(\begin{array}{cc}* & * \\c_i^q & d_i^q\end{array}\right)\in \quad (\sigma_{\gamma_3}^q)^{-1}\Gamma_{0,3}\sigma_j.
\end{align}
The last ingredient that one needs for computing the corrected interaction vertex $\widetilde{S}_{1,2}$ is the explicit form of the generators of the Fuchsian groups $\Gamma_{0,3}$, associated with the thrice-punctured hyperbolic sphere, and $\Gamma_{0,4}$, associated with the four-punctured hyperbolic Riemann surface with specific Fenchel-Nielsen parameters. Interestingly, following the algorithm given in \cite{Maskit}, it is possible to construct the Fuchsian group of any hyperbolic Riemann surface having specific Fenchel-Nielsen parameters. For example, the group $\Gamma_{0,3}$ is generated by the transformations $$z\to \frac{z}{2z+1}\qquad \qquad z\to z+2.$$ The Fuchsian group $\Gamma_{0,4}(\ell,\tau)$ that produces a four-punctured sphere with Fenchel-Nielsen parameters $(\ell,\tau)$ can be generated using the following three elements
\begin{align}\label{Gamma04lt}
a_1&=\left(\begin{array}{cc}1+\beta & -\beta \\ \beta & 1-\beta \end{array}\right) \nonumber\\
a_2&=\left(\begin{array}{cc}\left(1-\beta\right)& -\beta e^{2\tau}\\ \beta e^{-2\tau} & \left(1+\beta\right)\end{array}\right) \nonumber\\
a_3&=-\left(\begin{array}{cc}(1+\beta )e^{\ell}& \beta e^{-\ell+2\tau}\\ -\beta e^{\ell-2\tau}& (1 -\beta) e^{-\ell}\end{array}\right),
\end{align}
where $\beta=-\frac{\text{cosh}\ell+1}{\text{sinh}\ell}$. \par
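As a quick consistency check, all of these generators should be elements of $SL(2,\mathbb{R})$, and since each of them winds around a puncture, each should be parabolic, i.e.\ have trace of magnitude $2$. The following Python sketch verifies this numerically (the function name and the sample values of $\ell$ and $\tau$ are ours, chosen for illustration):

```python
import numpy as np

def gamma04_generators(ell, tau):
    """Generators a1, a2, a3 of Gamma_{0,4}(ell, tau) from eq. (Gamma04lt),
    with beta = -(cosh(ell) + 1)/sinh(ell)."""
    b = -(np.cosh(ell) + 1.0) / np.sinh(ell)
    a1 = np.array([[1 + b, -b], [b, 1 - b]])
    a2 = np.array([[1 - b, -b * np.exp(2 * tau)],
                   [b * np.exp(-2 * tau), 1 + b]])
    a3 = -np.array([[(1 + b) * np.exp(ell), b * np.exp(-ell + 2 * tau)],
                    [-b * np.exp(ell - 2 * tau), (1 - b) * np.exp(-ell)]])
    return a1, a2, a3

# Gamma_{0,3}: z -> z/(2z+1) and z -> z+2 as SL(2,R) matrices
g1 = np.array([[1.0, 0.0], [2.0, 1.0]])
g2 = np.array([[1.0, 2.0], [0.0, 1.0]])

for m in (g1, g2, *gamma04_generators(1.3, 0.7)):
    assert np.isclose(np.linalg.det(m), 1.0)   # element of SL(2,R)
    assert np.isclose(abs(np.trace(m)), 2.0)   # parabolic: fixes a cusp
```

In particular, the trace of $a_3$ equals $2$ identically, since $(1+\beta)e^{\ell}+(1-\beta)e^{-\ell}=2\cosh\ell+2\beta\sinh\ell=-2$.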
\noindent{\underline{\bf Arbitrary interaction vertex}}: It is straightforward to generalize this discussion to the case of a general interaction vertex in closed string field theory, because the lengths of simple closed geodesics on a general hyperbolic Riemann surface with borders also satisfy identities of the kind (\ref{gmidentityp}). Assume that $\mathcal{R}(L_1,\cdots,L_n)$ is a Riemann surface with $g$ handles and $n$ boundaries $\beta_1,\cdots,\beta_n$ having hyperbolic lengths $L_1,\cdots,L_n$. In the limit $L_i\to 0,~i=1,\cdots,n$, the bordered surface $\mathcal{R}(L_1,\cdots,L_n)$ becomes $\mathcal{R}$, a genus $g$ Riemann surface with $n$ punctures. The lengths of the non-self-intersecting closed geodesics on $\mathcal{R}(L_1,\cdots,L_n)$ satisfy the following identity \cite{Mirzakhani:2006fta}:
\begin{equation}\label{gmidentitya}
\sum_{i=1}^n\sum_k\sum_{g\in \text{MCG}(\mathcal{R}(L_1,\cdots,L_n),\mathcal{C}^k_i)} \mathcal{Q}_i(L_1,L_i,\ell_{g\cdot\mathcal{C}^k_i})=1,
\end{equation}
where
\begin{align}\label{rinterval}
\mathcal{Q}_i(L_1,L_i,\ell_{\mathcal{C}^k_i}) &= \delta_{1i}~\mathcal{D}(L_1,\ell_{\alpha^k_1},\ell_{\alpha^k_2})+(1-\delta_{i1})~\mathcal{E}(L_1,L_i,\ell_{\mathcal{C}^k_i}),\nonumber\\
\mathcal{D}(x_1,x_2,x_3)&=\frac{2}{x_1}\mathrm{ln}\left(\frac{e^{\frac{x_1}{2}}+e^{\frac{x_2+x_3}{2}}}{e^{-\frac{x_1}{2}}+e^{\frac{x_2+x_3}{2}}}\right),\nonumber\\
\mathcal{E}(x_1,x_2,x_3)&=1-\frac{1}{x_1}\mathrm{ln}\left(\frac{\mathrm{cosh}(\frac{x_2}{2})+\mathrm{cosh}(\frac{x_1+x_3}{2})}{\mathrm{cosh}(\frac{x_2}{2})+\mathrm{cosh}(\frac{x_1-x_3}{2})}\right).\nonumber
\end{align}
$\mathcal{C}^k_1$ is the multi-curve $\alpha^k_1+\alpha^k_2$, where the simple closed geodesics $\alpha^k_1$ and $\alpha^k_2$ together with $\beta_1$ bound a pair of pants, see figure \ref{cutting1}. $\mathcal{C}^k_i,~i\in\{2,\cdots,n\}$ is a simple closed geodesic $\gamma^k_i$ which together with $\beta_1$ and $\beta_i$ bounds a pair of pants. The index $k$ distinguishes curves that are not related to each other via the action of elements in $\text{MCG}(\mathcal{R}(L_1,\cdots,L_n))$, the mapping class group of $\mathcal{R}(L_1,\cdots,L_n)$. The summation over $k$ adds contributions from all such distinct classes of curves. By $\ell_{\mathcal{C}_1^k}$ we mean the pair $(\ell_{\alpha^k_1},\ell_{\alpha^k_2})$. $ \text{MCG}(\mathcal{R}(L_1,\cdots,L_n),\mathcal{C}^k_i)$ is the subgroup of $\text{MCG}(\mathcal{R}(L_1,\cdots,L_n))$ that acts non-trivially only on the curve $\mathcal{C}^k_i$. Remember that a Dehn twist performed with respect to $\mathcal{C}^k_i$ is not an element of $ \text{MCG}(\mathcal{R}(L_1,\cdots,L_n),\mathcal{C}^k_i)$. We also have an identity for the group of Dehn twists, which is given by
\begin{equation}\label{dehnidentity}
\sum_{g\in \text{Dehn}(\mathcal{C}^k_i)} \mathcal{Y}_i(\ell_{g\cdot\mathcal{C}^k_i},\tau_{g\cdot\mathcal{C}^k_i})=1,
\end{equation}
where $\text{Dehn}(\mathcal{C}^k_1)$ denotes the product group $\text{Dehn}(\alpha^k_1)\times\text{Dehn}(\alpha^k_2)$, and
\begin{align}\label{dehnidentity1}
\mathcal{Y}_i(\ell_{\mathcal{C}^k_i},\tau_{\mathcal{C}^k_i})&= \delta_{i1}~\text{sinc}^2\left(\frac{\tau_{\alpha_1^k}}{\ell_{\alpha_1^k}}\right)\text{sinc}^2\left(\frac{\tau_{\alpha_2^k}}{\ell_{\alpha_2^k}}\right)+(1-\delta_{i1})~\text{sinc}^2\left(\frac{\tau_{\gamma_i^k}}{\ell_{\gamma_i^k}}\right).
\end{align}
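The Dehn-twist identity is easy to verify numerically: $\text{Dehn}(\gamma)$ acts by $\tau_\gamma\to\tau_\gamma+n\ell_\gamma$, and assuming the normalized convention $\text{sinc}(x)=\sin(\pi x)/(\pi x)$, the orbit sum reduces to the classical identity $\sum_{n\in\mathbb{Z}}\text{sinc}^2(x+n)=1$. A truncated Python check (the sample values of $(\ell,\tau)$ are ours):

```python
import numpy as np

def dehn_twist_sum(ell, tau, nmax=200_000):
    """Truncated sum over the Dehn-twist orbit tau -> tau + n*ell of the
    weight sinc^2(tau/ell); np.sinc is the normalized sinc sin(pi x)/(pi x)."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.sinc((tau + n * ell) / ell) ** 2)

# the weights sum to 1, eq. (dehnidentity), independently of (ell, tau);
# the truncation error scales like 1/nmax
for ell, tau in [(0.3, 0.11), (2.0, -0.7), (5.5, 1.9)]:
    assert abs(dehn_twist_sum(ell, tau) - 1.0) < 1e-4
```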
Combining the identity (\ref{gmidentitya}) with the identity (\ref{dehnidentity}) gives us the following identity
\begin{equation}\label{gmidentityc}
\sum_{i=1}^n\sum_k\sum_{g\in \text{MCG}(\mathcal{R}(L_1,\cdots,L_n),\mathcal{C}^k_i)\times \text{Dehn}(\mathcal{C}^k_i)} \mathcal{Z}_i(L_1,L_i,\ell_{g\cdot\mathcal{C}^k_i},\tau_{g\cdot\mathcal{C}^k_i})=1,
\end{equation}
where
\begin{equation}\label{gmidentityc1}
\mathcal{Z}_i(L_1,L_i,\ell_{\mathcal{C}^k_i},\tau_{\mathcal{C}^k_i})=\mathcal{Q}_i(L_1,L_i,\ell_{\mathcal{C}^k_i}) \mathcal{Y}_i(\ell_{\mathcal{C}^k_i},\tau_{\mathcal{C}^k_i}).
\end{equation}
\begin{figure}
\begin{center}
\usetikzlibrary{backgrounds}
\begin{tikzpicture}[scale=.75]
\draw[line width=1pt] (1,1) .. controls (1.75,1) and (2.25,.75) ..(3,0);
\draw[line width=1pt] (1,-2) .. controls(1.75,-2) and (2.25,-1.75) ..(3,-1);
\draw[line width=1pt] (1,.4) .. controls(2.75,.2) and (2.75,-1.2) ..(1,-1.4);
\draw[blue, line width=1pt] (1,.7) ellipse (.115 and .315);
\draw[blue, line width=1pt] (1,-1.7) ellipse (.115 and .315);
\draw[line width=1pt] (3,0) .. controls(3.3,-.25) and (3.75,-.25) ..(4,0);
\draw[line width=1pt] (3,-1) .. controls(3.3,-.75) and (3.75,-.75) ..(4,-1);
\draw[line width=1pt] (4,0) .. controls(5,1) and (6,1) ..(7,0);
\draw[line width=1pt] (4,-1) .. controls(5,-2) and (6,-2) ..(7,-1);
\draw[line width=1pt] (7,0) .. controls(7.2,-.15) ..(8,-.15);
\draw[line width=1pt] (7,-1) .. controls(7.2,-.85) ..(8,-.85);
\draw[blue, line width=1pt] (8,-.5) ellipse (.15 and .35);
\draw[line width=1pt] (4.75,-.75) .. controls(5.25,-1.15) and (5.75,-1.15) ..(6.25,-.75);
\draw[line width=1pt] (4.75,-.35) .. controls(5.25,0.1) and (5.75,0.1) ..(6.25,-.35);
\draw[line width=1pt] (4.75,-.35) .. controls(4.65,-.475) and (4.65,-.625) ..(4.75,-.75);
\draw[line width=1pt] (6.25,-.35) .. controls(6.35,-.475) and (6.35,-.625) ..(6.25,-.75);
\draw[line width =1pt, color=violet] (3.5,-.5) ellipse (1.2 and .15);
\draw[line width =1pt, color=violet] (5.5,-1.39) ellipse (.15 and .35);
\draw (0.25,.5) node[above right] {$\beta_2$} (0.25,-2) node[above right] {$\beta_1$} (8.2,-.2)node [below right ] {$\beta_3$} (3.5,-.25) node [below right ] {$\alpha^k_1$} (5.55,-1.25) node [below right ] {$\alpha^k_2$};
\draw[->,line width =1pt] (9,-.5)--(9.8,-.5);
%
\draw[line width=1pt] (11,1) .. controls (11.75,1) and (12.25,.75) ..(13,0);
\draw[line width=1pt] (11,.4) .. controls(12,.2) and (12.4,-.5) ..(12.3,-.5);
\draw[line width=1pt] (9.95,-2.2) .. controls(10.5,-1.4) and (10.4,-1.2) ..(10.4,-1.2);
\draw[line width=1pt] (12.4,-1.2) .. controls(12.3,-1.4) and (12.8,-1.9) ..(13.1,-2.2);
\draw[line width=1pt] (10,-2.85) .. controls(11,-1.75) and (12,-1.75) ..(13,-2.85);
\draw[blue, line width=1pt] (11,.7) ellipse (.115 and .315);
\draw[blue, line width=1pt] (10,-2.5) ellipse (.115 and .315);
\draw[line width=1pt] (13,0) .. controls(13.3,-.25) and (13.75,-.25) ..(14,0);
\draw[line width=1pt] (14,0) .. controls(15,1) and (16,1) ..(17,0);
\draw[line width=1pt] (15.5,-1.75) .. controls(16.2,-1.6) ..(17,-1);
\draw[line width=1pt] (17,0) .. controls(17.2,-.15) ..(18,-.15);
\draw[line width=1pt] (17,-1) .. controls(17.2,-.85) ..(18,-.85);
\draw[blue, line width=1pt] (18,-.5) ellipse (.15 and .35);
\draw[line width=1pt] (15.5,-1.02) .. controls(16.1,-.9) ..(16.25,-.75);
\draw[line width=1pt] (14.7,-.45) .. controls(15.25,0.1) and (15.75,0.1) ..(16.25,-.35);
\draw[line width=1pt] (16.25,-.35) .. controls(16.35,-.475) and (16.35,-.625) ..(16.25,-.75);
\draw[line width =1pt, color=violet] (13.5,-.5) ellipse (1.2 and .15);
\draw[line width =1pt, color=violet] (11.4,-1.2) ellipse (1 and .15);
\draw[line width =1pt, color=violet] (15.5,-1.39) ellipse (.15 and .35);
\draw[line width =1pt, color=violet] (13,-2.475) ellipse (.15 and .35);
\draw (10.25,.5) node[above right] {$\beta_2$} (9,-2.8) node[above right] {$\beta_1$} (18.2,-.2)node [below right ] {$\beta_3$} (13.5,-.5) node [below right ] {$\alpha^k_1$} (14.65,-1.2) node [below right ] {$\alpha^k_2$} (11,-.3) node [below right ] {$\alpha^k_1$} (13.2,-2.2) node [below right ] {$\alpha^k_2$};
\end{tikzpicture}
\end{center}
\caption{Cutting a surface along a curve $\mathcal{C}_1^k=\alpha^k_1+\alpha^k_2$ produces a surface with borders.}
\label{cutting1}
\end{figure}
Now consider cutting $\mathcal{R}(L_1,\cdots,L_n)$ along $\mathcal{C}_i^k$. Let us denote the surface obtained as a result of this cutting by $\mathcal{R}(L_1,\cdots,L_n; \ell_{\mathcal{C}^k_i})$. Notice that the group $\text{MCG}(\mathcal{R}(L_1,\cdots,L_n); \ell_{\mathcal{C}^k_i})\times \text{Dehn}(\mathcal{C}^k_i)$ has no non-trivial action on $\mathcal{R}(L_1,\cdots,L_n;\ell_{\mathcal{C}^k_i})$. Therefore, we can repeat the whole exercise by considering the identity (\ref{gmidentitya}) on $\mathcal{R}(L_1,\cdots,L_n; \ell_{\mathcal{C}^k_i})$. At the end we obtain an identity of the following kind
\begin{equation}\label{MCGIIdentity}
\sum_{g\in\text{MCG}(\mcal{R}(L_1,\cdots,L_n))}\sum_s\mathcal{G}_s(\ell_{g\cdot\gamma_s},\tau_{g\cdot\gamma_s})=1,
\end{equation}
where the $\mathcal{G}_s$ are functions of the Fenchel-Nielsen coordinates of $\mathcal{R}(L_1,\cdots,L_n)$ defined with respect to multi-curves $\gamma_s=\sum_{i=1}^{3g-3+n}\gamma_s^i$. The collection of curves $\left\{\gamma^1_s,\cdots,\gamma_s^{3g-3+n} \right\}$ forms a system of non-self-intersecting geodesics that defines a pair-of-pants decomposition $\mathbf{P}^s$ of $\mathcal{R}(L_1,\cdots,L_n)$. The sum over $s$ runs over pair-of-pants decompositions that are not related to each other via any MCG transformation. \par
The function $\mathcal{G}_s$ has an important property. To demonstrate it, consider a non-self-intersecting closed geodesic $\gamma$ on $\mcal{R}(L_1,\cdots,L_n)$ that cannot be mapped to any element in the set $\left\{\gamma^1_s,\cdots,\gamma_s^{3g-3+n} \right\}$ by the action of any element in $\text{MCG}(\mcal{R}(L_1,\cdots,L_n))$. The hyperbolic metric on $\mcal{R}(L_1,\cdots,L_n)$ has the property that if $\ell_{\gamma}\to c_*$, then at least one of the curves in the set $\left\{\gamma^1_s,\cdots,\gamma_s^{3g-3+n} \right\}$ will have length of the order $e^{\frac{1}{c_*}}$. Moreover, the function $\mathcal{G}_s$ depends on all the curves in the set $\left\{\gamma^1_s,\cdots,\gamma_s^{3g-3+n} \right\}$ and is constructed by multiplying the functions $\mathcal{D}(x,y,z)$ and $\mathcal{E}(x,y,z)$. Note that the function $\mathcal{D}(x,y,z)$ appearing in the Mirzakhani-McShane identity (\ref{gmidentity}), given by
\begin{equation}\label{DR1}
\mathcal{D}(x,y,z)=2~\ln\left( \frac{e^{\frac{x}{2}}+e^{\frac{y+z}{2}}}{e^{\frac{-x}{2}}+e^{\frac{y+z}{2}}}\right),
\end{equation}
vanishes in the limit $y\to \infty$ keeping $x,z$ fixed and in the limit $z\to \infty$ keeping $x,y$ fixed:
\begin{equation}\label{Dlimit}
\lim_{y,z\to\infty} \mathcal{D}(x,y,z)= \lim_{y,z\to\infty} \mathcal{O}\left( {\text{e}}^{-\frac{y+z}{2}}\right).
\end{equation}
The function $\mathcal{E}(x,y,z)$, given by
\begin{equation} \label{DR2}
\mathcal{E}(x,y,z)=x-\ln\left( \frac{\cosh\left(\frac{y}{2}\right)+\cosh\left(\frac{x+z}{2}\right)}{\cosh\left(\frac{y}{2}\right)+\cosh\left(\frac{x-z}{2}\right)}\right),
\end{equation}
vanishes in the limit $z\to \infty$ keeping $x,y$ fixed:
\begin{equation}\label{Elimit}
\lim_{z\to\infty} \mathcal{E}(x,y,z)= \lim_{z\to\infty} \mathcal{O}\left( {\text{e}}^{-\frac{z}{2}}\right).
\end{equation}
This can be easily verified by using the following relation
\begin{equation}\label{DE}
\mathcal{E}(x,y,z)=\frac{\mathcal{D}(x,y,z)+\mathcal{D}(x,-y,z)}{2}.
\end{equation}
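Both this relation and the decay properties (\ref{Dlimit}), (\ref{Elimit}) can be confirmed numerically with a few lines of Python implementing (\ref{DR1}) and (\ref{DR2}) directly (the sample arguments are ours):

```python
import numpy as np

def D(x, y, z):
    # eq. (DR1)
    return 2.0 * np.log((np.exp(x / 2) + np.exp((y + z) / 2))
                        / (np.exp(-x / 2) + np.exp((y + z) / 2)))

def E(x, y, z):
    # eq. (DR2)
    return x - np.log((np.cosh(y / 2) + np.cosh((x + z) / 2))
                      / (np.cosh(y / 2) + np.cosh((x - z) / 2)))

x, y, z = 1.7, 0.9, 2.3
# eq. (DE): E is the even part of D in its second argument
assert np.isclose(E(x, y, z), 0.5 * (D(x, y, z) + D(x, -y, z)))
# exponential decay for large third argument, eqs. (Dlimit) and (Elimit)
assert D(x, y, 60.0) < 1e-10 and E(x, y, 60.0) < 1e-10
```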
Combining these observations suggests that the function $\mathcal{G}_s$ has the following property
\begin{equation}\label{decay1}
\lim_{\ell_{\gamma}\to c_*} \mathcal{G}_s=\mathcal{O}(e^{-1/c_*}).
\end{equation}
\noindent{\underline{\bf Naive interaction vertex $\mathbf{S}_{g,n}$}}: The naive $g$-loop $n$-point interaction vertex $\mathbf{S}_{g,n}$ for $n$ external off-shell states $|{V}_1\rangle,\cdots,|{V}_n\rangle$ represented by the vertex operators ${V}_1,\cdots,{V}_n$ constructed using the naive string vertex $\mathcal{V}^0_{g,n}$ is given by
\begin{equation}\label{eq:the bosonic-string amplitude2}
\mathbf{S}_{g,n} =\int_{\mathcal{W}_{g,n}} \frac{d\ell_{\gamma^s_1}d\tau_{\gamma^s_1}\cdots d\ell_{\gamma_{Q}^s}d\tau_{\gamma_{Q}^s}}{ (2\pi \mathrm{i})^{Q}}~\langle\mathcal{R}|b(\mathbf{t}_{\gamma^s_1})b(\mathbf{l}_{ \gamma^s_1})\cdots b(\mathbf{t}_{\gamma^s_{Q}})b(\mathbf{l}_{ \gamma^s_{Q}})|{V}_1\rangle_{w_1}\otimes\cdots\otimes|{V}_n\rangle_{w_n},
\end{equation}
where $Q=3g-3+n$ and the states $|{V}_1\rangle,\cdots,|{V}_n\rangle$ are inserted on the Riemann surface $\mathcal{R}$ using the set of local coordinates $(e^{\frac{\pi^2}{c_*}}w_1,\cdots,e^{\frac{\pi^2}{c_*}} w_n)$ induced from the hyperbolic metric. $\langle\mathcal{R}|$ is the surface state associated with the Riemann surface $\mathcal{R}$. $(\tau_{\gamma_j^s},\ell_{\gamma^s_j}),~1\leq j\leq Q$ denote the Fenchel-Nielsen coordinates for the Teichm\"uller space $\mathcal{T}_{g,n}$ defined with respect to the pants decomposition $\mathbf{P}^s$ of $\mathcal{R}$. Using the identity (\ref{MCGIIdentity}), we can decompose (\ref{eq:the bosonic-string amplitude2}) into a sum over all possible distinct pants decompositions of $\mathcal{R}$, with each term expressed as an integral over $\mathcal{T}_{g,n}$:
\begin{equation}\label{eq:the bosonic-string amplitude3}
\mathbf{S}_{g,n} = \sum_s\int_{\mathcal{TW}_{g,n}} \frac{d\ell_{\gamma^s_1}d\tau_{\gamma^s_1}\cdots d\ell_{\gamma_{Q}^s}d\tau_{\gamma_{Q}^s}}{(2\pi \mathrm{i})^{Q}}~\mathcal{G}_s\langle\mathcal{R}|b(\mathbf{t}_{\gamma^s_1})b(\mathbf{l}_{ \gamma^s_1})\cdots b(\mathbf{t}_{\gamma^s_{Q}})b(\mathbf{l}_{ \gamma^s_{Q}})|{V}_1\rangle_{w_1}\otimes\cdots\otimes|{V}_n\rangle_{w_n},
\end{equation}
where $\mathcal{TW}_{g,n}$ is the union of all images of $\mathcal{W}_{g,n}$ in $\mathcal{T}_{g,n}$. Since the set of local coordinates induced from the hyperbolic metric does not satisfy the geometrical identity induced from the quantum BV master equation, the closed string field theory action constructed using the naive interaction vertex $\mathbf{S}_{g,n}$ is not gauge invariant.\\
\noindent{\underline{\bf Corrected interaction vertex $\widetilde{\mathbf{S}}_{g,n}$}}: In order to obtain the corrected interaction vertex $\widetilde{\mathbf{S}}_{g,n}$, the set of local coordinates on the world-sheets in $\mathcal{W}_{g,n}$ induced from the hyperbolic metric, used to construct $\mathbf{S}_{g,n}$, must be modified. The local coordinates have to be modified if $\mathcal{R}$ belongs to the regions $\mathbf{W}^{(m)}_{g,n}$ for $m\neq 0$. Although the regions inside $\mathcal{M}_{g,n}$ where we need to modify the local coordinates have a simple description in terms of the lengths of simple closed geodesics, it is impossible to specify them as explicit regions inside $\mathcal{T}_{g,n}$ using the Fenchel-Nielsen coordinates, because there are infinitely many simple closed geodesics on a Riemann surface. \par
Interestingly, the effective expression (\ref{eq:the bosonic-string amplitude3}) has a noteworthy feature due to the decay property (\ref{decay1}) of the weight factors $\mathcal{G}_s$. Assume that the length of a simple closed geodesic $\alpha$, which does not belong to the set of curves $\left\{\gamma_s^1,\cdots,\gamma_s^{3g-3+n} \right\}$ associated with the pants decomposition $\mathbf{P}^s$, becomes $c_*$. Then the weight factor $\mathcal{G}_s$ decays to $\mathcal{O}(e^{-1/c_*})$. As a result, by modifying the local coordinates within each term in the effective expression (\ref{eq:the bosonic-string amplitude3}) independently, it is possible to correct the interaction vertex so that it approximately satisfies the quantum BV master equation. \par
Consider the $s^{\text{th}}$ term in the effective expression. The effective region $E\mathcal{W}^{\mathbf{P}^s}_{g,n}$ for $\mathcal{TW}_{g,n}$ is given by
\begin{equation}
E\mathcal{W}^{\mathbf{P}^s}_{g,n}:~\ell_{\gamma_s^1}\in [c_*,\infty)\quad \cdots \ell_{\gamma_s^Q}\in [c_*,\infty)\quad \tau_1\in (-\infty,\infty)\cdots \tau_Q\in (-\infty,\infty).
\end{equation}
In order to modify the local coordinates we must divide $E\mathcal{W}^{\mathbf{P}^s}_{g,n}$ into subregions $E\mathbf{W}^{\mathbf{P}^s,(m)}_{g,n},~m=0,\cdots, Q$. Divide the subregion $E\mathbf{W}^{\mathbf{P}^s,(m)}_{g,n}$ further into $\frac{Q!}{m!(Q-m)!}$ regions $E\mathbf{W}^{\mathbf{P}^s,\gamma_s^{i_1}\cdots \gamma_s^{i_m}}_{g,n}$, where $i_1,\cdots, i_m\in \{1,\cdots,Q\}$. The number $\frac{Q!}{m!(Q-m)!}$ counts the inequivalent ways of choosing $m$ curves from the set $\left\{\gamma_s^1,\cdots,\gamma_s^{Q} \right\}$. For surfaces that belong to the region $E\mathbf{W}^{\mathbf{P}^s,\gamma_s^{i_1}\cdots\gamma_s^{i_m}}_{g,n}$ with $m\ne 0$, the local coordinates around the $j^{th}$ puncture are chosen, up to a phase ambiguity, as
\begin{equation}
\widetilde{w}_{j}^{\gamma_{s}^{i_1}\cdots\gamma_s^{i_m}}=e^{\frac{c_*^2}{6}\sum_{k=1}^mf(\ell_{\gamma_{i_k}})Y^{\gamma_{i_1}\cdots\gamma_{i_m}}_{i_kj}}e^{\frac{\pi^2}{c_*}} w_{j},
\end{equation}
where
\begin{align}
Y^{\gamma_{i_1}\cdots\gamma_{i_m}}_{i_kj}&=\sum_{q=1}^2\sum_{c_j^q,d_j^q}\pi^{2}\frac{\epsilon(j,q)}{|c_j^q|^4}\nonumber\\ c_j^q>0 \qquad &d_j^q~\text{mod}~c_j^q \qquad\left(\begin{array}{cc}* & * \\c_j^q & d_j^q\end{array}\right)\in \quad (\sigma_i^q)^{-1}\Gamma_{{\gamma_{i_1}\cdots\gamma_{i_m}}}^{jq}\sigma_j
\end{align}
Here, $\Gamma_{{\gamma_{i_1}\cdots\gamma_{i_m}}}^{jq}$ denotes the Fuchsian group for the component Riemann surface, obtained from $\mathcal{R}$ by degenerating the curves $\gamma_{i_1},\cdots,\gamma_{i_m}$, carrying the $j^{\text{th}}$ puncture and the puncture denoted by the index $q$ which is obtained by degenerating the curve $\gamma_{i_k}$. The transformation $\sigma_j^{-1}$ maps the cusp corresponding to the $j^{th}$ puncture to $\infty$ and $(\sigma_j^q)^{-1}$ maps the puncture denoted by the index $q$ obtained by degenerating the curve $\gamma_{i_k}$ to $\infty$. The factor $\epsilon(j,q)$ is one if both the $j^{th}$ puncture and the puncture denoted by the index $q$ belong to the same component surface; otherwise $\epsilon(j,q)$ is zero. \par
Then the corrected interaction vertex $\widetilde{\mathbf{S}}_{g,n}$ is given by
\begin{align}
\widetilde{\mathbf{S}}_{g,n}&=\sum_s\sum_{m=0}^{Q}\sum_{\left\{i_1,\cdots,i_m\right\}}\int_{E\mathbf{W}^{\mathbf{P}^s,\gamma_s^{i_1}\cdots\gamma_s^{i_m}}_{g,n}} \frac{\prod_{j=1}^Qd\ell_{\gamma^s_j}d\tau_{\gamma^s_j}}{(2\pi \mathrm{i})^{Q}}\mathcal{G}_s\nonumber\\
&\times \langle\mathcal{R}|b(\mathbf{t}_{\gamma^s_1})b(\mathbf{l}_{ \gamma^s_1})\cdots b(\mathbf{t}_{\gamma^s_{Q}})b(\mathbf{l}_{ \gamma^s_{Q}})|{V}_1\rangle_{\widetilde{w}_{1}^{\gamma_{s}^{i_1}\cdots\gamma_s^{i_m}}}\otimes\cdots\otimes|{V}_n\rangle_{\widetilde{w}_{n}^{\gamma_{s}^{i_1}\cdots\gamma_s^{i_m}}},
\end{align}
where the sum over the sets $\{i_1,\cdots,i_m\}$ runs over the $\frac{Q!}{m!(Q-m)!}$ inequivalent ways of choosing $m$ curves from the set $\left\{\gamma_s^1,\cdots,\gamma_s^{Q} \right\}$. The expression for the corrected interaction vertex $\widetilde{\mathbf{S}}_{g,n}$ holds for any values of $g$ and $n$ such that $3g-3+n\geq 0$, and the closed string field theory master action constructed using these corrected interaction vertices has approximate gauge invariance.
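The bookkeeping of subregions can be made concrete with a short illustration: there is one region per subset of the $Q$ curves of a pants decomposition, hence $\sum_{m=0}^{Q}\binom{Q}{m}=2^Q$ regions per decomposition. A minimal Python sketch (the helper name is ours):

```python
from itertools import combinations
from math import comb

def subregions(Q):
    """One region EW^{P^s, gamma^{i_1}...gamma^{i_m}} per subset
    {i_1, ..., i_m} of the Q = 3g - 3 + n curves; m runs from 0 to Q."""
    return [s for m in range(Q + 1) for s in combinations(range(1, Q + 1), m)]

Q = 3 * 2 - 3 + 1                     # e.g. g = 2, n = 1
regions = subregions(Q)
assert len(regions) == 2 ** Q                          # sum_m binom(Q, m) = 2^Q
assert sum(1 for s in regions if len(s) == 2) == comb(Q, 2)
```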
\section{Discussion}\label{disc}
In this paper we completed the construction of quantum closed string field theory with approximate gauge invariance by exploring the hyperbolic geometry of Riemann surfaces, a program initiated in \cite{Moosavian:2017qsp}. In \cite{Moosavian:2017qsp} it was shown that although the string vertices constructed using Riemann surfaces with local coordinates induced from the hyperbolic metric of constant curvature $-1$ fail to provide a gauge-invariant quantum closed string field theory, the corrected string vertices, obtained by modifying these local coordinates on Riemann surfaces belonging to the boundary region of the string vertices, give rise to a quantum closed string field theory with approximate gauge invariance. Unfortunately, due to the complicated action of the mapping class group on the Fenchel-Nielsen coordinates, implementing the suggested modification seemed to be impractical. However, in this paper we argued that by using the non-trivial identities satisfied by the lengths of simple closed geodesics on hyperbolic Riemann surfaces it is indeed possible to implement the modifications in a very convenient fashion. The identities that we explored in this paper are due to McShane and Mirzakhani \cite{McShane1,Mirzakhani:2006fta}. Although they are very convenient to use, they have an important drawback: they are applicable only to hyperbolic Riemann surfaces with at least one border or puncture. For instance, we cannot use them for computing the contributions from vacuum graphs to the string field theory action. Interestingly, there exists another class of such identities, due to Luo and Tan \cite{LT01,HT01}, that is applicable to all kinds of hyperbolic Riemann surfaces with no elliptic fixed points; for a quick introduction see appendix \ref{LuoTan}. But they have one disadvantage: the functions involved in these identities are significantly more complicated than the functions appearing in the identities due to McShane and Mirzakhani. \par
There are many interesting directions that deserve further study. It would be very useful to check whether it is possible to construct string vertices in closed superstring field theory that avoid the occurrence of any unphysical singularities due to the picture-changing operators by exploring hyperbolic geometry. It might be worth exploring the hyperbolic geometry of super Riemann surfaces to construct closed superstring field theory using the supergeometric formulation of superstring theory. This is particularly interesting due to the fact that there exists a generalization of the McShane-Mirzakhani identities for the case of super Riemann surfaces \cite{Stanford:2019vob}. Another interesting direction is to use the formalism discussed in this paper to systematically compute the field theory limit of string amplitudes. We hope to report on this in the near future.
\bigskip
{\bf Acknowledgement:} It is our pleasure to thank Davide Gaiotto and Ashoke Sen for important comments and detailed discussions. We thank Thiago Fleury, Greg McShane, Scott Wolpert and Barton Zwiebach for helpful discussions. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research \& Innovation.
NLO predictions of multi-jet production at hadron colliders have a long
history. They are important processes for the LHC both as precision tests of
QCD and direct probes of the strong coupling and also as background in many new
physics searches. The LHC experiments have been able to measure jet rates
for up to 6 hard jets which are now being used in new physics
searches \cite{Aad:2011tqa,Chatrchyan:2013gia,Chatrchyan:2013iqa}. This
presents a serious challenge for precise theoretical predictions since high
multiplicity computations in QCD are notoriously difficult. Di-jet
production has been known at NLO for more than 20 years~\cite{Giele:1993dj} and has recently
seen improvements via NLO plus parton shower (NLO+PS) description~\cite{Alioli:2010xa,Hoeche:2012fm}
and steady progress towards NNLO QCD results~\cite{Ridder:2013mf}.
The full three-jet computation was completed and implemented in a public code {\sc
NLOJET++} 10 years ago~\cite{Nagy:2001fj}. Recently predictions for four-jet
production have been presented by two independent
groups~\cite{Bern:2011ep,Badger:2012pf}.
The advances in methods of evaluation of multi-leg virtual
amplitudes~\cite{Bern:1994zx,Bern:1994cg,%
Cascioli:2011va,Becker:2011vg,Actis:2012qn,Mastrolia:2010nb,Britto:2004nc,Ellis:2007br,Forde:2007mi,Giele:2008ve,%
Badger:2008cm,Ossola:2006us,Mastrolia:2012an,Mastrolia:2012bu,vanHameren:2009vq}
have inspired many efforts to automate NLO computations~%
\cite{Badger:2010nx,Hirschi:2011pa,Berger:2008sj,Bevilacqua:2011xh,Cullen:2011ac,Badger:2012pg}.
Processes with four final states, previously out of reach, can now be
routinely used for phenomenological
predictions~\cite{Bevilacqua:2012em,Greiner:2012im,Bern:2013gka,Cullen:2013saa,vanDeurzen:2013xla,Gehrmann:2013bga,Campanario:2013fsa}.
We refer the reader to other contributions to these proceedings for further details on the current
state-of-the art~\cite{Ossola:2013jea,Cullen:2013cka,Bern:2013pya}.
Five partons in the final state still constitute a considerable challenge, though steady progress in that direction
gives hope for the same level of automation in the near future.
Recent state-of-the-art calculations with five QCD partons in the final state include
the NLO QCD corrections to $pp\to W+5j$~\cite{Bern:2013gka} by the {\sc BlackHat} collaboration
and NLO QCD corrections to $pp\to 5j$~\cite{Badger:2013yda}.
\section{5-Jet production at the LHC at 7 and 8~TeV \label{sec:5j}}
The different parts of the calculation contributing to the NLO cross section can be schematically written as
\begin{gather}
\delta\sigma^{\mbox{\scriptsize NLO}} = \int\limits_n
\big(d\sigma_n^{\rm V}
+ \int\limits_1 d\sigma_{n+1}^{\rm S}\big)
+ \int\limits_n d\sigma_n^{\rm Fac}
+ \int\limits_{n+1} \big(d\sigma_{n+1}^{\rm R} - d\sigma_{n+1}^{\rm S}\big).
\end{gather}
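The last bracket is finite by construction: the subtraction term $d\sigma^{\rm S}_{n+1}$ reproduces the singular limits of the real correction $d\sigma^{\rm R}_{n+1}$ point by point in phase space, so the difference can be integrated with standard Monte-Carlo methods. A one-dimensional toy illustration in Python (the integrands are invented for the purpose of the sketch and bear no relation to the actual matrix elements):

```python
import numpy as np

rng = np.random.default_rng(1)

def real(x):
    """Toy stand-in for the real-emission integrand, singular like 1/x."""
    return (1.0 + x**2) / x

def subtraction(x):
    """Toy subtraction term with the same x -> 0 singular behaviour."""
    return 1.0 / x

# int_0^1 [real - subtraction] dx = int_0^1 x dx = 1/2: the point-by-point
# cancellation of the 1/x poles makes plain Monte-Carlo integration possible
x = rng.random(200_000)
x = x[x > 0.0]                     # guard against an (improbable) exact zero
est = np.mean(real(x) - subtraction(x))
assert abs(est - 0.5) < 5e-3
```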
We used the Sherpa Monte-Carlo event generator \cite{Gleisberg:2008ta} to handle phase-space integration and
generation of tree-level amplitudes and Catani-Seymour dipole subtraction terms as implemented in Comix~\cite{Gleisberg:2008fv,Gleisberg:2007md}.
The one-loop matrix elements for the virtual corrections $d\sigma_n^{\rm V}$ are
evaluated with the publicly available {\sc NJet}\xspace\footnote{To download {\sc NJet}\xspace visit the project home page
at\\ \url{https://bitbucket.org/njet/njet/}.} package~\cite{Badger:2012pg}
interfaced to Sherpa via the Binoth Les Houches Accord \cite{Binoth:2010xt,Alioli:2013nda}.
{\sc NJet}\xspace is based on the {\sc NGluon}\xspace library~\cite{Badger:2010nx} %
and uses an on-shell generalized unitarity framework \cite{Britto:2004nc,Ellis:2007br,Forde:2007mi,Giele:2008ve,Badger:2008cm,Ossola:2006us}
to compute multi-parton one-loop primitive amplitudes from tree-level building blocks~\cite{Berends:1987me}.
The scalar loop integrals are obtained via the {\sc QCDLoop/FF} package~\cite{vanOldenborgh:1990yc,Ellis:2007qk}.
{\sc NJet}\xspace implements full-colour expressions for up-to five outgoing QCD partons.
The complexity of high-multiplicity virtual corrections motivates us to explore
ways to speed up the computation. One of the optimizations implemented in {\sc NJet}\xspace is the usage
of de-symmetrized colour sums for multi-gluon final states, which allows us to obtain the full-colour result
at a small fraction of the computational cost by exploiting the Bose symmetry of the phase space~\cite{Ellis:2009zw,Badger:2012pg}.
Another possibility is to separate leading and sub-leading contributions,
which enables the Monte-Carlo integrator to sample the dominant but simpler terms more often
and to reach the same statistical error with fewer evaluations of the expensive sub-leading part.
In our leading terms we include all multi-quark processes in the
large $N_c$ limit and processes with two or more gluons in the final state
using the de-symmetrized colour sums.
In Figure~\ref{fig:colour} we
compare the leading and full virtual contributions to the hardest-jet transverse momentum in $pp\to 5j$.
The correction from the sub-leading part is around $10\%$ at low $p_T$ and shows a tendency to
grow with increasing hardness of the jet.
Considering that $d\sigma_n^{\rm V}$ contributes $\sim 50\%$ of the total NLO cross section for this process,
this translates into a $5{-}10$ percent effect depending on the kinematic region.
\begin{figure}[h]
\centering
\includegraphics[width=0.43\textwidth]{{5j-colour}.pdf}
\caption{Full colour and leading approximation (as explained in the text)
for the virtual corrections to the transverse momentum of the 1st jet in $pp\to 5j$.}
\label{fig:colour}
\end{figure}
The calculation is done in QCD with five massless quark flavours including the bottom-quark in the
initial state. We neglect contributions from top quark loops. We set the renormalization scale equal
to the factorization scale ($\ensuremath{\mu_r}=\ensuremath{\mu_f}=\mu$) and
use a dynamical scale based on the total transverse momentum $\ensuremath{{\widehat{H}_T}}$ of the final state partons:
\begin{equation}
\ensuremath{{\widehat{H}_T}} = \sum_{i=1}^{N_{\mbox{\scriptsize parton}}} p_{T,i}^{\mbox{\scriptsize parton}}.
\end{equation}
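As a brief illustrative aside (not part of the original calculation), the dynamical scale choice can be sketched directly from the equation above; the parton transverse momenta used here are invented example values:

```python
# Sketch: dynamical scale based on the total transverse momentum H_T_hat,
# the scalar sum of final-state parton pT's, with central choice mu = H_T_hat/2.
# The parton pT values below (GeV) are invented for illustration only.

def ht_hat(parton_pts):
    """Scalar sum of parton transverse momenta (H_T_hat in the text)."""
    return sum(parton_pts)

def central_scale(parton_pts):
    """Central scale choice mu = H_T_hat / 2 used in the text."""
    return 0.5 * ht_hat(parton_pts)

pts = [120.0, 95.0, 80.0, 64.0, 61.0]  # example 5-parton final state
print(ht_hat(pts))          # 420.0
print(central_scale(pts))   # 210.0
```

A scale variation such as $\mu\in[\widehat{H}_T/4,\widehat{H}_T]$ then corresponds to multiplying `ht_hat` by 0.25 and 1.0, respectively.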
For the definition of physical observables we use the anti-kt jet clustering algorithm as implemented
in {\sc FastJet} \cite{Cacciari:2011ma,Cacciari:2008gp}. We apply asymmetric cuts on the jets ordered
in transverse momenta, $p_T$, to match the ATLAS multi-jet measurements \cite{Aad:2011tqa}:
\begin{align}
p_T^{j_1} &> 80 \text{ GeV} & p_T^{j_{\geq 2}} &> 60 \text{ GeV} & R &= 0.4
\label{eq:cuts}
\end{align}
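As a hedged sketch (the jet $p_T$ lists are invented; real jets come from anti-kt clustering in {\sc FastJet}), the asymmetric cuts of Eq.~(\ref{eq:cuts}) amount to a simple selection on the $p_T$-ordered jets:

```python
# Sketch: asymmetric ATLAS-style jet selection for an n-jet analysis:
# leading jet pT > 80 GeV, all further jets pT > 60 GeV.

def passes_cuts(jet_pts, lead_cut=80.0, sub_cut=60.0, n_min=5):
    """jet_pts: jet transverse momenta (GeV) sorted in decreasing order."""
    selected = [pt for pt in jet_pts if pt > sub_cut]
    # require at least n_min jets above the sub-leading threshold,
    # with the hardest one also above the leading-jet threshold
    return len(selected) >= n_min and selected[0] > lead_cut

print(passes_cuts([130.0, 95.0, 77.0, 66.0, 62.0]))  # True
print(passes_cuts([75.0, 70.0, 66.0, 64.0, 61.0]))   # False: leading jet too soft
```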
The PDFs are obtained through the LHAPDF interface \cite{Whalley:2005nh} with all central values using
NNPDF2.1~\cite{Ball:2011uy} for LO ($\alpha_s(M_Z) = 0.119$) and NNPDF2.3~\cite{Ball:2012cxX} for NLO
($\alpha_s(M_Z) = 0.118$) if not mentioned otherwise.
Generated events are stored in ROOT Ntuple format~\cite{Binoth:2010ra} which allows for flexible
analysis. Renormalization and factorization scales can be changed at the analysis level as
well as the PDF set. This technique makes it possible to do extended analysis of
PDF uncertainties and scale dependence, which would otherwise be prohibitively expensive for such
high multiplicity processes.
\subsection{Numerical results \label{sec:results}}
Using the above setup we obtain for the 5-jet cross section at 7~TeV
\begin{align}
\sigma_5^{\text{7TeV-LO}}(\mu=\ensuremath{{\widehat{H}_T}}/2) &= 0.699 ( 0.004 )^{+ 0.530 }_{- 0.280 }\: {\rm nb}, \\
\sigma_5^{\text{7TeV-NLO}}(\mu=\ensuremath{{\widehat{H}_T}}/2) &= 0.544 ( 0.016 )^{+ 0.0 }_{- 0.177 }\: {\rm nb}.
\label{eq:5jXS7TeV}
\end{align}
In parentheses we quote the uncertainty due to the numerical integration.
The theoretical uncertainty has been estimated from scale variations over the range
$\mu\in[\ensuremath{{\widehat{H}_T}}/4,\ensuremath{{\widehat{H}_T}}]$ and is indicated by the sub- and superscripts.
As seen in \Fig{fig:5j_scalevar_all} the total cross section at the scale $\mu = \ensuremath{{\widehat{H}_T}}$ is lower than
the central value which is the origin of the zero value of the upper error bound. The total cross
section at this scale is $\sigma_5^{\text{7TeV-NLO}}(\mu=\ensuremath{{\widehat{H}_T}}) = 0.544 (0.016)\: {\rm nb}$.
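As an illustrative aside (not taken from the paper), the asymmetric scale uncertainty quoted above can be reproduced as an envelope over the scale variations; the cross section at $\widehat{H}_T/4$ below is inferred from the quoted downward shift and is therefore an assumption:

```python
# Sketch: asymmetric (up, down) scale uncertainty as an envelope over
# cross sections evaluated at the varied scales, relative to the central value.

def scale_envelope(central, variations):
    """Return (up, down) shifts from the max/min over scale choices."""
    up = max(max(variations) - central, 0.0)
    down = max(central - min(variations), 0.0)
    return up, down

central = 0.544                  # NLO sigma_5 at mu = H_T_hat/2 (nb), as quoted
variations = [0.367, 0.544]      # H_T_hat/4 (inferred from -0.177) and H_T_hat (quoted)
up, down = scale_envelope(central, variations)
print(f"+{up:.3f} -{down:.3f}")  # +0.000 -0.177
```

The vanishing upward shift simply reflects that neither varied scale gives a cross section above the central one.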
For a centre-of-mass energy of 8~TeV the results read:
\begin{align}
\sigma_5^{\text{8TeV-LO}}(\mu=\ensuremath{{\widehat{H}_T}}/2) &= 1.044 ( 0.006 )^{+ 0.770 }_{- 0.413 }\: {\rm nb}, \\
\sigma_5^{\text{8TeV-NLO}}(\mu=\ensuremath{{\widehat{H}_T}}/2) &= 0.790 ( 0.021 )^{+ 0.0 }_{- 0.313 }\: {\rm nb},
\label{eq:5jXS8TeV}
\end{align}
where we have found $\sigma_5^{\text{8TeV-NLO}}(\mu=\ensuremath{{\widehat{H}_T}}) = 0.723 ( 0.011 )\: {\rm nb}$.
As usual for a next-to-leading order correction, a significant reduction of the scale uncertainty can
be observed.
\begin{figure}[htbp]
\centering
\leavevmode
\subfloat[]{\label{fig:5j_scalevar}%
\includegraphics[width=0.43\columnwidth]{5j_scalevar}}
\subfloat[]{\label{fig:5j_scalevar_nlopdfs}%
\includegraphics[width=0.43\columnwidth]{5j_scalevar_nlopdfs}}
\caption{Residual scale dependence of the 5-jet cross section in
leading and next-to-leading order using LO~(a) and NLO~(b) PDFs for LO prediction.}
\label{fig:5j_scalevar_all}
\end{figure}
In \Fig{fig:5j_scalevar_all} the scale dependence of the LO and NLO
cross section is illustrated. The dashed black line indicates the central scale $\mu=\ensuremath{{\widehat{H}_T}}/2$. The horizontal bands show
the cross section uncertainty estimated by a scale variation within $\mu\in[\ensuremath{{\widehat{H}_T}}/4,\ensuremath{{\widehat{H}_T}}]$.
By comparing Figs.~\ref{fig:5j_scalevar} and \ref{fig:5j_scalevar_nlopdfs} we observe that a significant part of the NLO corrections comes
from using NLO PDFs with the corresponding $\ensuremath{\alpha_s}$. Similar to what has been found in Ref.~\cite{Badger:2012pf} we
conclude that using the NLO PDFs in the LO predictions gives a
better approximation to the full result compared to using LO PDFs.
In \Tab{tab:xs} we show for completeness the cross sections
for two, three and four-jet production as calculated with {\sc NJet}\xspace using
the same setup as in the five jet case.
\begin{table}[h]
\centering
\setlength{\tabcolsep}{12pt}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{lcc}
\hline
$n$ & $\sigma_n^{\text{7TeV-LO}} \: [{\rm nb}]$ & $\sigma_n^{\text{7TeV-NLO}} \: [{\rm nb}]$ \\
\hline
$2$ & $768.0 ( 0.9 )^{+ 203.0}_{- 151.3 }$ & $1175 ( 3 )^{+ 120 }_{- 129 }$ \\
\hline
$3$ & $71.1 ( 0.1 )^{+ 31.5 }_{- 20.0 }$ & $52.5 ( 0.3 )^{+ 1.9 }_{- 19.3 }$ \\
\hline
$4$ & $7.23 ( 0.02 )^{+ 4.37 }_{- 2.50 }$ & $5.65 ( 0.07 )^{+ 0 }_{- 1.93 }$ \\
\hline
\end{tabular}
\caption{Cross sections for 2, 3 and 4 jets at 7~TeV.}
\label{tab:xs}
\end{table}
The jet rates have been measured
recently by ATLAS using the 7~TeV data set~\cite{Aad:2011tqa}.
\begin{figure}[htbp]
\centering
\leavevmode
\subfloat[]{\label{fig:MJincl}%
\includegraphics[width=0.43\columnwidth]{MultiJetInclusive}}
\subfloat[]{\label{fig:jetratios}%
\includegraphics[width=0.43\columnwidth]{MultiJetRatioPDF}}
\caption{(a) LO and NLO cross sections for jet production calculated with {\sc NJet}\xspace as well
as results from ATLAS measurements \cite{Aad:2011tqa}.
(b) NLO {\sc NJet}\xspace predictions with different PDF sets for the jet ratios ${\cal{R}}_n$ compared
with recent ATLAS measurements \cite{Aad:2011tqa}.
}
\end{figure}
In \Fig{fig:MJincl} we show the data together with the theoretical predictions in leading and
next-to-leading order. In the case of the six-jet rate, only LO results are shown. In the lower plot the
ratio of the theoretical predictions with respect to the data is given. With the exception of the two-jet cross
section, the inclusion of the NLO results significantly improves the agreement with the data.
In addition to inclusive cross
sections it is useful to consider their ratios since many theoretical
and experimental uncertainties may cancel between numerator
and denominator. In particular we consider
\begin{equation}
\R{n} = {\sigma_{(n+1)\text{-jet}}\over\sigma_{n\text{-jet}}}.
\end{equation}
This quantity is, at leading order, proportional to the QCD coupling $\ensuremath{\alpha_s}$ and can be used to
determine the value of $\ensuremath{\alpha_s}$ from jet rates.
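As a quick illustrative cross-check (not part of the original analysis; statistical and scale uncertainties are ignored), these ratios can be recomputed from the central NLO cross sections quoted in this contribution:

```python
# Sketch: jet ratios R_n = sigma_{(n+1)-jet} / sigma_{n-jet}, evaluated on the
# central 7 TeV NLO cross sections (nb) quoted in the text and in Table 1.

sigma_nlo = {2: 1175.0, 3: 52.5, 4: 5.65, 5: 0.544}

def jet_ratio(n, xs):
    """Ratio of the (n+1)-jet to the n-jet inclusive cross section."""
    return xs[n + 1] / xs[n]

for n in (2, 3, 4):
    print(n, round(jet_ratio(n, sigma_nlo), 4))
```

The resulting values reproduce the NLO column of the jet-ratio table to the quoted precision.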
In \Fig{fig:jetratios} we show QCD predictions in NLO using different PDF sets together with the
results from ATLAS. The results obtained from NNPDF2.3 are also collected in \Tab{tab:jetratios}
where, in addition, the ratios at leading order (using the LO setup with NNPDF2.1) are shown.
\begin{table}[htbp]
\setlength{\tabcolsep}{12pt}
\renewcommand{\arraystretch}{1.6}
\centering
\begin{tabular}{cccc}
\hline
\R{n} & ATLAS~\cite{Aad:2011tqa} & LO & NLO \\ \hline
2 & $0.070^{+ 0.007 }_{- 0.005 }$ & $0.0925(0.0002)$ & $0.0447(0.0003)$\\ \hline
3 & $0.098^{+ 0.006 }_{- 0.007 }$ & $0.102(0.000)$ & $0.108(0.002)$\\ \hline
4 & $0.101^{+ 0.012 }_{- 0.011 }$ & $0.097(0.001)$ & $0.096(0.003)$\\ \hline
5 & $0.123^{+ 0.028 }_{- 0.027 }$ & $0.102(0.001)$ & $--$\\ \hline
\end{tabular}
\caption{Results for the jet ratios $\R{n}$ for the central scale of $\ensuremath{{\widehat{H}_T}}/2$ and NNPDF2.3 PDF
set.}
\label{tab:jetratios}
\end{table}
In the case of \R{3}\ and \R{4}, perturbation theory seems to provide stable results. The leading order
and next-to-leading order values differ by less than 10\%. In addition, NNPDF~\cite{Ball:2012cxX},
CT10~\cite{Lai:2010vv} and MSTW08~\cite{Martin:2009iq} give compatible predictions.
ABM11~\cite{Alekhin:2012ig} gives slightly smaller results for \R{3}\ and \R{4}.
Within uncertainties the predictions also agree with the ATLAS measurements.
The poor description of \R{2}\ can be attributed to the inclusive two-jet cross section
which seems to be inadequately described by a fixed order NLO calculation.
As a function of the leading jet $p_T$, all PDF
sets agree well with the 3/2 ratio ATLAS data at large $p_T$ as shown in \Fig{fig:32ratiodata}.
\begin{figure}[h]
\centering
\subfloat[]{\label{fig:32ratiodata}%
\includegraphics[width=0.43\columnwidth]{{{plot_32ratio_data_pT_1}}}}
\subfloat[]{\label{fig:jetratio_pT}%
\includegraphics[width=0.43\columnwidth]{{{plot_jetratio_pT_1}}}}
\caption{(a) The 3/2 jet ratio as a function of the $p_T$ of the leading jet compared with ATLAS
data~\cite{Aad:2011tqa} ($R=0.6$).
(b) The $\R{n}$ ratios as functions of the $p_T$ of the leading jet ($R=0.4$).}
\end{figure}
In \Fig{fig:jetratio_pT} we compare LO and NLO predictions for \R{n}\ as a function of the leading jet
$p_T$. While for \R{3}\ and \R{4}\ the corrections are moderate for all values of $p_T$, we observe
large negative corrections, independent of $p_T$, in the case of \R{2}.
\begin{figure}[htbp]
\centering
\leavevmode
\subfloat[]{%
\includegraphics[width=0.43\columnwidth]{{{plot_q16.4_l8_sm0.1_NNPDF23_7_nlopdf_jet_pT_1}}}}
\subfloat[]{%
\includegraphics[width=0.43\columnwidth]{{{plot_q20.4_l10_sm0.1_NNPDF23_7_nlopdf_jet_eta_1}}}}
\caption{The $p_T$ and rapidity distributions of the leading jet. Both LO and NLO use the NNPDF2.3 PDF set
with $\alpha_s(M_Z) = 0.118$.}
\label{fig:7TeVpt1dist}
\end{figure}
In \Fig{fig:7TeVpt1dist} we show the transverse momentum and rapidity distributions
of the leading jet for five-jet production. Similarly to the total cross section,
we observe a significant reduction of the scale uncertainty when going from LO to NLO.
Using again the NLO setup to
calculate the LO predictions, the NLO calculation gives very small
corrections. Over a wide range the LO predictions are modified by less
than 10\%. A remarkable feature observed already in the 4-jet
calculation \cite{Bern:2011ep,Badger:2012pf} is the almost constant
K-factor.
\section{Conclusions}
In this contribution we have presented first results for five-jet production at NLO accuracy in QCD.
We find moderate corrections of the order of 10\% at NLO with respect to a leading order computation using NLO PDFs.
We have compared theoretical predictions for inclusive jet cross sections and jet
rates with data from ATLAS. With the exception of quantities affected by the two-jet rate we find
good agreement between theory and data.
\section*{Acknowledgements}
This work has been supported by the Helmholtz Gemeinschaft under contract HA-101 (Alliance Physics at the
Terascale), by the German Research Foundation (DFG) through the transregional collaborative research
centre ``Computational Particle Physics'' (SFB-TR9), by the European Commission through contract
PITN-GA-2010-264564 (LHCPhenoNet) and by the Alexander von Humboldt Foundation,
in the framework of the Sofja Kovalevskaja Award 2010, endowed by the German Federal Ministry of Education and Research.
\small
\bibliographystyle{iopart-num}
\section{Introduction}
Face recognition has advanced significantly in recent years with the development of deep neural networks~\cite{parkhi2015deep,schroff2015facenet,liu2017sphereface,wang2018cosface,deng2019arcface}. In our daily lives, face recognition has been used in a wide range of applications due to the excellent performance currently available. These applications include security-sensitive applications such as mobile phone unlocking and payment, door locks, airport and railway station check-in, financial industry authentication, and other similar applications. Unfortunately, many studies~\cite{goodfellow2014explaining,madry2018towards,carlini2017towards,dong2018boosting} have found that deep neural networks are vulnerable to adversarial examples. Unsurprisingly, face recognition based on deep neural networks is also vulnerable to adversarial examples~\cite{dong2019efficient,zhong2020towards,yang2021attacks}.
Dong~\emph{et al.}~\cite{dong2019efficient}, Zhong~\emph{et al.}~\cite{zhong2020towards}, and Yang~\emph{et al.}~\cite{yang2021attacks} generate adversarial examples with strong perturbations via a decision-based attack, a gradient-based attack, and a generative adversarial network (GAN)~\cite{goodfellow2014generative}, respectively, and can effectively perform digital adversarial attacks on face recognition. However, implementing these global perturbations in the physical world is unrealistic, so they cannot be employed in actual attacks; in most cases, these methods are used to evaluate the security of face recognition rather than to carry out attacks. Recently, many methods for physical attacks on face recognition have also been proposed~\cite{sharif2016accessorize,sharif2019general,nguyen2020adversarial,komkov2021advhat,yin2021adv}. The methods of Sharif~\emph{et al.}~\cite{sharif2016accessorize,sharif2019general} and Komkov~\emph{et al.}~\cite{komkov2021advhat} work well for white-box physical attacks, but they can hardly perform black-box attacks on face recognition models or systems; yet realistic face recognition environments are commonly black-box. Nguyen~\emph{et al.}~\cite{nguyen2020adversarial} used a projector to perform adversarial light-projection physical attacks, but such attacks are not convenient in practice. Yin~\emph{et al.}~\cite{yin2021adv} can implement transferable physical attacks on face recognition, but its physical attacks on commercial face recognition systems are not effective; it also performs poorly when confronted with low-quality face images.
We summarize that an effective adversarial attack method on face recognition should be capable of conducting black-box attacks, physical attacks, impersonation attacks, convenient attacks, attacks on low-quality target images, and attacks against commercial systems. Previous studies have shown that it is not easy to effectively implement black-box physical impersonation attacks on face recognition. To address these challenges simultaneously, we propose an effective black-box impersonation attack method on face recognition, called RSTAM, which implements a physical attack employing an adversarial mask printed by a mobile and compact printer. To begin, we design an initial binary mask, as shown in Figure~\ref{fig:RSTAM}. Secondly, we propose a random similarity transformation strategy, which can enhance the diversity of the inputs and thus the transferability of the adversarial masks. Following that, we propose a random meta-optimization strategy for ensembling several pre-trained face models to generate more transferable adversarial masks. Finally, we perform experiments on two high-resolution face datasets CelebA-HQ~\cite{karras2018progressive}, Makeup Transfer (MT)~\cite{li2018beautygan}, and two low-quality face datasets LFW~\cite{huang2008labeled}, CASIA-FaceV5~\cite{casiafacev5}. Meanwhile, we perform evaluations on five state-of-the-art commercial face recognition systems: Face++~\cite{faceplusplus}, Baidu~\cite{baidu}, Aliyun~\cite{aliyun}, Tencent~\cite{tencent} and Microsoft~\cite{microsoft}. Our experiments show that our proposed method RSTAM can effectively perform black-box impersonation attacks on commercial face recognition systems and low-pixel target images. Moreover, RSTAM can implement convenient physical attacks through the use of a Canon SELPHY CP1300~\cite{cp1300}, a mobile and compact printer. The main contributions of our work are summarized as follows.
\begin{itemize}[leftmargin=*]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item We design an initial binary mask for the adversarial masks.
\item We propose a random similarity transformation strategy that can improve the transferability of the adversarial masks by increasing the diversity of the inputs. Moreover, we use only one hyperparameter to control the random similarity transformation with four degrees of freedom (4DoF).
\item We propose a random meta-optimization strategy to perform an ensemble attack using several pre-trained face models. This strategy enables us to extract more common gradient features from the face models, thereby increasing the transferability of the adversarial masks.
\item Experiments demonstrate that RSTAM is an effective attack method on face recognition capable of performing black-box attacks, physical attacks, convenience attacks, impersonation attacks, attacks on low-quality images, and attacks against state-of-the-art commercial face recognition systems.
\end{itemize}
\section{Related Works}
\subsection{Adversarial Attacks}
Adversarial attacks fall into two broad categories: white-box attacks and black-box attacks. White-box attack methods have full access to the target model, and the majority of them are gradient-based attacks, such as the Fast Gradient Sign Method (FGSM)~\cite{goodfellow2014explaining}, Projected Gradient Descent (PGD)~\cite{madry2018towards}, and Carlini \& Wagner's method (C\&W)~\cite{carlini2017towards}. FGSM is a single-step gradient-based attack method that shows that linear features of deep neural networks in high-dimensional space are sufficient to generate adversarial examples. PGD is a multi-step extension of the FGSM attack that generates more powerful adversarial examples for white-box attacks. C\&W is an optimization-based attack method that also happens to be gradient-based. However, in practice, black-box scenarios are far more common. White-box attack methods tend to lack transferability and fail to attack target models with unknown parameters and gradients. Therefore, more researchers have focused on black-box attack methods.
Black-box attack methods can be classified into three categories: transfer-based, score-based, and decision-based. Transfer-based attacks generate adversarial examples with a source model and then transfer them to the target model to complete the attack, without needing to know any parameters or gradients of the target model. MI-FGSM~\cite{dong2018boosting} suggests incorporating momentum into the attack process to stabilize the update direction and increase the transferability of the generated adversarial examples. DI$^2$-FGSM~\cite{xie2019improving} first proposed improving the transferability of adversarial examples by increasing the diversity of the inputs. The translation-invariant attack method (TI-FGSM)~\cite{dong2019evading} and the affine-invariant attack method (AI-FGSM)~\cite{xiang2021improving} further improve the transferability and robustness of adversarial examples. Score-based attacks only have access to the output scores of the target model and estimate its gradient by querying these scores~\cite{ilyas2018black,cheng2019improving,li2019nattack}. Decision-based attacks assume a more challenging setting in which only the output labels of the classifier are known. Boundary Attack~\cite{brendel2018decision} and Evolutionary Attack~\cite{dong2019efficient} are effective methods for this attack setting.
\subsection{Adversarial Attacks on Face Recognition}
Adversarial attacks on face recognition come in two common forms: dodging attacks and impersonation attacks. The purpose of dodging attacks is to reduce the similarity confidence of a same-identity pair in order to evade face recognition. Impersonation attacks attempt to fool face recognition by using one identity to mimic another; they are both more challenging and more practical than dodging attacks. As a result, we concentrate on impersonation attack methods. Dong~\emph{et al.}~\cite{dong2019efficient} proposed a decision-based adversarial attack on face recognition. Zhong~\emph{et al.}~\cite{zhong2020towards} increased the diversity of surrogate face recognition models by using the dropout~\cite{srivastava2014dropout} technique to improve the transferability of the adversarial examples. Yang~\emph{et al.}~\cite{yang2021attacks} introduced a GAN~\cite{goodfellow2014generative} to generate adversarial examples for impersonation attacks on face recognition. These three methods~\cite{dong2019efficient,zhong2020towards,yang2021attacks} are digital-based attacks on face recognition, which makes them hard to implement in the real world. Many researchers have therefore proposed methods for physical-based attacks on face recognition. Sharif~\emph{et al.}~\cite{sharif2016accessorize,sharif2019general} proposed a way to perform real-world physical attacks on face recognition by printing out a pair of eyeglass frames. Komkov~\emph{et al.}~\cite{komkov2021advhat} proposed a physical attack method that prints an adversarial sticker with a color printer and places it on a hat to complete the attack. Nguyen~\emph{et al.}~\cite{nguyen2020adversarial} proposed an adversarial light-projection attack method that uses a projector for the physical attack.
Yin~\emph{et al.}~\cite{yin2021adv} generated eye makeup patches with a GAN, printed them out, and stuck them around the eyes to perform physical attacks. The methods of \cite{sharif2016accessorize,sharif2019general,komkov2021advhat} work well for white-box physical attacks, but realistic environments are often black-box, and these methods can hardly attack black-box face recognition models or systems. Although the method of \cite{yin2021adv} can perform a transferable black-box attack, it is ineffective for low-pixel face images and for commercial face recognition systems.
Compared with the previous methods, we propose an adversarial attack method on face recognition, RSTAM, which can effectively accomplish the black-box impersonation attack, both on low-pixel face pictures and on commercial face recognition systems. Furthermore, RSTAM can carry out a physical attack with the help of a mobile and compact printer.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/raap_e}
\caption{An example of RSTAM. The raw image of the target identity is from a social network. The initial binary mask is designed by us. We can observe that the confidence scores between the attack and the target are significantly greater than those between the source and the target.}
\label{fig:RSTAM}
\Description{An example of RSTAM attack.}
\end{figure*}
\section{Methodology}
\subsection{Overview}
Figure~\ref{fig:RSTAM} shows an example of the RSTAM. The raw image of the target identity is from a social network. Firstly, we apply the facial landmark detection method~\cite{JLS21} to generate corresponding facial landmarks of the raw image. Then we obtain the aligned target image according to the facial landmarks. Similarly, we can use the facial landmarks in the source image to obtain the eye region of the attacker. After that, we use our proposed method RSTAM to generate an adversarial mask and print out the adversarial mask using the Canon SELPHY CP1300~\cite{cp1300}, which is a mobile and compact printer. Finally, the attacker with the printed adversarial mask is obtained. We can see from Figure~\ref{fig:RSTAM} that the similarity confidence between the attack and the target is high on all five commercial face recognition systems, with Face++~\cite{faceplusplus} 84.89\%, Baidu~\cite{baidu} 80.24\%, Aliyun~\cite{aliyun} 75.23\%, Tencent~\cite{tencent} 63.83\%, and Microsoft~\cite{microsoft} 70.94\%. Figure~\ref{fig:RSTAM} further shows that we can effectively perform physical impersonation attacks on the commercial face recognition systems using a photo of the target identity on a social network.
\subsection{Adversarial Mask}
In this section, we will give a detailed description for the adversarial mask. Let $\mathbf{x}^{t}$ denote a face image of the target identity, $\mathbf{x}^{s}$ denote a source image of the attacker, $\mathbf{x}^{adv}$ denote an attack image of the attacker with an adversarial mask and $f(\mathbf{x}) : \mathbf{X} \rightarrow \mathbb{R}^d$ denote a face recognition model that extracts a normalized feature representation vector for an input image $\mathbf{x} \in \mathbf{X} \subset \mathbb{R}^n$. Our goal for the aversarial mask attack is to solve the following constrained optimization problem,
\begin{equation}
\begin{aligned}
&\mathop{argmin} \limits_{\mathbf{x}^{adv}} \mathcal{L}(f(\mathbf{x}^{adv}),f(\mathbf{x}^{t})), \\
&s.t. \ \mathbf{x}^{adv} \odot (1-\mathbf{M}) = \mathbf{x}^s \odot (1-\mathbf{M}),
\end{aligned}
\label{eq:1}
\end{equation}
where $\mathcal{L}$ is a cosine similarity loss function,
\begin{equation}
\mathcal{L}(\mathbf{v}^s,\mathbf{v}^t) = 1-cos(\mathbf{v}^s,\mathbf{v}^t).
\end{equation}
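As a minimal sketch (assuming plain Python lists as the normalized feature vectors; the example vectors are invented), the cosine similarity loss above can be implemented as:

```python
# Sketch of the cosine similarity loss L(v_s, v_t) = 1 - cos(v_s, v_t),
# which is minimized to pull the attacker's feature vector toward the target's.
import math

def cosine_loss(vs, vt):
    dot = sum(a * b for a, b in zip(vs, vt))
    ns = math.sqrt(sum(a * a for a in vs))
    nt = math.sqrt(sum(b * b for b in vt))
    return 1.0 - dot / (ns * nt)

print(cosine_loss([1.0, 0.0], [1.0, 0.0]))  # 0.0 (identical direction)
print(cosine_loss([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal)
```

The loss is 0 for perfectly aligned feature vectors and grows to 2 for anti-parallel ones.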
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\linewidth]{figures/mask_with_eye}
\caption{An example of generating a binary mask $\mathbf{M}$ using the initial binary mask $\mathbf{M}_0$ and the facial landmarks of the source.}
\label{fig:mask}
\Description{An example of mask.}
\end{figure}
$\odot$ is the element-wise product, and $\mathbf{M}$ is a binary mask. The binary mask $\mathbf{M}$ constrains the perturbation so that a pixel can be modified only if its corresponding entry in $\mathbf{M}$ is 1. We design an initial binary mask $\mathbf{M}_0$ and then use $\mathbf{M}_0$ together with the facial landmarks of the source image to generate the corresponding binary mask $\mathbf{M}$. Figure~\ref{fig:mask} shows an example of the binary mask $\mathbf{M}$ generated from the initial binary mask $\mathbf{M}_0$ and the facial landmarks of the source image.
Let $\mathbf{A}$ denote an adversarial mask. We can generate an adversarial mask $\mathbf{A}$ with the $\ell_\infty$-norm perturbations~\cite{goodfellow2014explaining,madry2018towards} by performing a multi-step update as
\begin{equation}
\begin{aligned}
&\mathbf{x}^{adv}_0 = \mathbf{x}^s \odot (1-\mathbf{M}) + \mathbf{x}^t \odot \mathbf{M}, \\
&{\mathbf{x}}_{n+1}^{adv} = {Clip}_{\mathbf{x}^{adv}_0,\epsilon}({\mathbf{x}}_{n}^{adv} - \alpha \cdot \mathbf{sign}(\nabla_{{\mathbf{x}}^{adv}_{n}} \mathcal{L}(f({\mathbf{x}}^{adv}_{n}), f({\mathbf{x}}^{t})))\odot \mathbf{M}), \\
& \mathbf{A} = {\mathbf{x}}_{n+1}^{adv} \odot \mathbf{M},
\end{aligned}
\end{equation}
where $\alpha$ is a perturbation step size, $\mathbf{sign}(\cdot)$ is the sign function, ${Clip}_{\mathbf{x}^{adv}_0,\epsilon}$ denotes element-wise clipping, aiming to restrict $\mathbf{x}^{adv}$ with in the $\ell_\infty$-bound of $\mathbf{x}^{adv}_0$. $\epsilon$ is a perturbation bound, $ \Vert \mathbf{x}^{adv}-\mathbf{x}_{0}^{adv} \Vert_\infty \leq \epsilon$ .
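As a rough illustration (not the paper's implementation), one $\ell_\infty$-bounded masked update step from the equations above can be sketched in plain Python; the pixel values, gradient, and mask below are toy examples, and a real attack would obtain the gradient by backpropagating through a face recognition model:

```python
# Sketch of one masked l_inf update:
# x <- Clip_{x0,eps}( x - alpha * sign(grad) * M ), then clip to valid pixels.

def sign(v):
    return (v > 0) - (v < 0)

def masked_linf_step(x, x0, grad, mask, alpha, eps):
    out = []
    for xi, x0i, gi, mi in zip(x, x0, grad, mask):
        xn = xi - alpha * sign(gi) * mi              # perturb only inside the mask
        xn = min(max(xn, x0i - eps), x0i + eps)      # stay in the l_inf ball around x0
        out.append(min(max(xn, 0.0), 1.0))           # keep a valid pixel range
    return out

x0 = [0.5, 0.5, 0.5]
grad = [1.0, -2.0, 0.5]
mask = [1.0, 1.0, 0.0]  # third pixel lies outside the adversarial mask
x1 = masked_linf_step(x0, x0, grad, mask, alpha=0.1, eps=0.05)
print(x1)  # [0.45, 0.55, 0.5]: perturbed only inside the mask, within eps
```

Iterating this step and keeping the masked region $\mathbf{x}^{adv}\odot\mathbf{M}$ yields the adversarial mask $\mathbf{A}$.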
\subsection{Random Similarity Transformation}
The similarity transformation, which has four degrees of freedom (4DoF), consists of translational, rotational, and scaling transformations. It is commonly used for face alignment~\cite{tadmor2016learning,liu2017sphereface}. Many previous studies~\cite{xie2019improving,gao2020patch,dong2019evading,xiang2021improving} have demonstrated that the transferability of adversarial examples can be greatly improved by increasing the diversity of the inputs. In order to improve the transferability of the adversarial masks, we propose a random similarity transformation strategy to increase the diversity of the inputs. Moreover, our strategy requires only one hyperparameter to control the random similarity transformation with 4DoF. Let $U(a,b)$ denote the uniformly distributed random sampling function from $a$ to $b$ and $\beta$ denote a hyperparameter. At each iteration we can obtain a random similarity transformation matrix $\mathbf{T}$ by the following strategy,
\begin{equation}
\begin{aligned}
t_x & = U(-\beta W,\beta W),\\
t_y & = U(-\beta H,\beta H),\\
\theta & = U(-\beta \pi/2,\beta \pi/2),\\
s & = U(1-\beta,1+\beta),\\
\mathbf{T} &=\left[ \begin{array}{ccc}
1 & 0 & t_x \\
0 & 1 & t_y \\
0 & 0 & 1
\end{array} \right] \left[ \begin{array}{ccc}
cos(\theta) & sin(\theta) & 0 \\
-sin(\theta) & cos(\theta) & 0 \\
0 & 0 & 1
\end{array} \right]\left[ \begin{array}{ccc}
s & 0 & 0 \\
0 & s & 0 \\
0 & 0 & 1
\end{array} \right]\\
&=\left[ \begin{array}{ccc}
s \cdot cos(\theta) & s \cdot sin(\theta) & t_x\\
-s \cdot sin(\theta) & s \cdot cos(\theta) & t_y\\
0 & 0 & 1
\end{array} \right].
\end{aligned}
\end{equation}
where $W$ and $H$ are the width and height of the input image. Let $(p_x,p_y)$ denote one coordinate of the input image. We can use the similarity transformation matrix $\mathbf{T}$ to obtain the corresponding transformed coordinates $(p_x^t, p_y^t)$,
\begin{equation}
\begin{aligned}
\left[ \begin{array}{c}
p_x^t \\
p_y^t \\
1
\end{array} \right]&=\mathbf{T}\left[ \begin{array}{c}
p_x \\
p_y \\
1
\end{array} \right].\\
\end{aligned}
\end{equation}
Finally, we generate the transformed input image by bilinear interpolation.
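As a hedged stdlib-only sketch of the sampling strategy above (the image size and $\beta$ value are arbitrary; bilinear resampling of the image is omitted), the 4-DoF similarity matrix controlled by the single hyperparameter $\beta$ can be drawn as:

```python
# Sketch: random 4-DoF similarity transformation matrix T, with translation,
# rotation, and scale all drawn uniformly under one hyperparameter beta.
import math
import random

def random_similarity_matrix(W, H, beta, rng=random.random):
    u = lambda a, b: a + (b - a) * rng()       # uniform sample U(a, b)
    tx = u(-beta * W, beta * W)                # translation in x
    ty = u(-beta * H, beta * H)                # translation in y
    theta = u(-beta * math.pi / 2, beta * math.pi / 2)  # rotation angle
    s = u(1 - beta, 1 + beta)                  # isotropic scale
    c, si = math.cos(theta), math.sin(theta)
    # Composite T = Translation @ Rotation @ Scale, as in the derivation above
    return [[ s * c,  s * si, tx],
            [-s * si, s * c,  ty],
            [0.0, 0.0, 1.0]]

T = random_similarity_matrix(112, 112, beta=0.1)
```

With $\beta=0$ the matrix reduces to the identity, i.e. the input image is left unchanged.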
\subsection{Adversarial Mask with Random Similarity Transformation (RSTAM)}
Xie~\emph{et al.}~\cite{xie2019improving} first found that the transferability of adversarial examples could be further improved by increasing the diversity of inputs. Methods~\cite{dong2019evading,gao2020patch,xiang2021improving} also demonstrate this finding. From the description in Section 3.2, we can consider that the adversarial masks are a type of the adversarial examples, so we can also use this finding as well. For the adversarial mask attack on face recognition, we propose a random similarity transformation strategy to enhance the diversity of the input face images, which is described in Section 3.3. Using this strategy we propose the adversarial mask attack method with random similarity transformation, RSTAM. Algorithm block~\ref{alg:RSTAM} describes the detailed algorithm for the $\ell_\infty$-bound RSTAM, RSTAM$_\infty$. Similarly, the RSTAM attack can also use the $\ell_2$-norm perturbations, RSTAM$_2$. In RSTAM$_2$, we get $\bar{\mathbf{x}}^{adv}_{n+1}$ by
\begin{equation}
\begin{aligned}
\bar{\mathbf{x}}^{adv}_{n+1} = {Clip}_{[\mathbf{x}^{adv}_0-\epsilon, \mathbf{x}^{adv}_0+\epsilon]}(\mathbf{x}^{adv}_{n} - \alpha \cdot \frac{\mathbf{g}_n}{\Vert \mathbf{g}_n \Vert_2} \odot \mathbf{M}).
\end{aligned}
\end{equation}
\begin{algorithm}[tb]
\normalsize
\caption{RSTAM$_\infty$ algorithm}
\label{alg:RSTAM}
\begin{algorithmic}[1]
\REQUIRE{A face image $\mathbf{x}^t$ of the target identity; A source image $\mathbf{x}^s$ of the attacker; the initial binary mask $\mathbf{M}_0$ of our design; the facial landmarks $lms^{s}$ of the source image; A target face model $f$. }
\REQUIRE{Iterations $N$; the perturbation bound $\epsilon$; the perturbation step size $\alpha$; a hyperparameter $\beta$ for the random similarity transformation.}
\REQUIRE{The binary mask generation function $\boldsymbol{GenM}$; the random similarity transformation function $\boldsymbol{RST}$; the sign function $\boldsymbol{sign}$.}
\REQUIRE{The cosine similarity loss function $\mathcal{L}$.}
\STATE $\mathbf{M} = \boldsymbol{GenM}(\mathbf{M}_0,lms^{s})$
\STATE $\mathbf{x}^{adv}_0 = \mathbf{x}^s \odot (1-\mathbf{M}) + \mathbf{x}^t \odot \mathbf{M}$
\FOR{$n=0$ to $N-1$}
\STATE $\mathbf{g}_n = \nabla_{\mathbf{x}^{adv}_n}{\mathcal{L}(f(\boldsymbol{RST}(\mathbf{x}^{adv}_n,\beta)),f(\mathbf{x}^t))}$
\STATE $\bar{\mathbf{x}}^{adv}_{n+1} = \mathop{Clip}_{[\mathbf{x}^{adv}_0-\epsilon, \mathbf{x}^{adv}_0+\epsilon]}(\mathbf{x}^{adv}_{n} - \alpha \cdot \boldsymbol{sign}(\mathbf{g}_n) \odot \mathbf{M})$
\STATE $\mathbf{x}^{adv}_{n+1} = \mathop{Clip}_{[\mathbf{0},\mathbf{1}]}(\bar{\mathbf{x}}^{adv}_{n+1})$
\ENDFOR
\STATE $\mathbf{A}_N = \mathbf{x}^{adv}_{N} \odot \mathbf{M}$
\RETURN $\mathbf{x}^{adv}_{N}, \mathbf{A}_N$
\end{algorithmic}
\end{algorithm}
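The loop of the RSTAM$_\infty$ algorithm above can be sketched as follows, with a user-supplied gradient oracle standing in for the face model, the random similarity transformation, and the cosine similarity loss (a hedged sketch, not the reference implementation):

```python
import numpy as np

def rstam_inf(x_s, x_t, mask, grad_fn, alpha=0.003, eps=0.3, n_iter=100):
    """Signed-gradient attack restricted to the mask region (cf. RSTAM_inf)."""
    x0 = x_s * (1 - mask) + x_t * mask         # paste the target into the mask
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_fn(x)                         # stands in for grad of L(f(RST(x)), f(x_t))
        x = x - alpha * np.sign(g) * mask      # signed step, mask region only
        x = np.clip(x, x0 - eps, x0 + eps)     # l_inf perturbation bound
        x = np.clip(x, 0.0, 1.0)               # valid image range
    return x, x * mask                         # adversarial image and mask A_N
```

In the sketch, `grad_fn` is a stand-in gradient oracle; in the algorithm it is the gradient of the cosine similarity loss through the randomly transformed input.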
When multiple pre-trained face models are available, it is natural to consider ensembling them to obtain more transferable adversarial examples. Ensemble methods are often used in research and competitions to improve performance and robustness~\cite{dietterich2000ensemble,seni2010ensemble}. Dong~\emph{et al.}~\cite{dong2018boosting} demonstrated that the transferability of adversarial examples can be effectively improved by applying ensemble methods. However, the number of available pre-trained models is limited, so Dong's hard ensemble method is still prone to ``overfitting'' the pre-trained face models. Meta-learning has been proposed as a framework to address the challenging few-shot learning setting~\cite{finn2017model,sun2019meta,jamal2019task}. Inspired by meta-learning, we propose a random meta-optimization strategy for ensembling several pre-trained face models to generate adversarial masks. Unlike meta-learning, which updates the parameters of a neural network model, the random meta-optimization strategy treats the network models as data and updates the adversarial masks directly. Algorithm block~\ref{alg:RSTAM_meta} describes in detail the RSTAM$_{\infty}^{meta}$ algorithm for performing ensemble attacks with the random meta-optimization strategy.
\begin{algorithm}[tb]
\normalsize
\caption{RSTAM$_{\infty}^{meta}$ algorithm}
\label{alg:RSTAM_meta}
\begin{algorithmic}[1]
\REQUIRE{A face image $\mathbf{x}^t$ of the target identity; A source image $\mathbf{x}^s$ of the attacker; the initial binary mask $\mathbf{M}_0$ of our design; the facial landmarks $lms^{s}$ of the source image; target face models $F=[f_1,f_2,...,f_m]$.}
\REQUIRE{Iterations $N$; the perturbation bound $\epsilon$; a perturbation step size $\alpha$; a hyperparameter $\beta$ for the random similarity transformation.}
\REQUIRE{The binary mask generation function $\boldsymbol{GenM}$; the random similarity transformation function $\boldsymbol{RST}$; the sign function $\boldsymbol{sign}$.}
\REQUIRE{The cosine similarity loss function $\mathcal{L}$.}
\STATE $\mathbf{M} = \boldsymbol{GenM}(\mathbf{M}_0,lms^{s})$
\STATE $\mathbf{x}^{adv}_0 = \mathbf{x}^s \odot (1-\mathbf{M}) + \mathbf{x}^t \odot \mathbf{M}$
\FOR{$n=0$ to $N-1$}
\STATE $f_{que}^{meta} = Random.choice(F)$, a model is randomly selected from F as the meta-query model.
\STATE $F_{sup}^{meta} = F.remove(f_{que}^{meta})$, the remaining models in F are used as meta-support models.
\STATE $\mathbf{g}_{que}^{meta}=\mathbf{0}$
\STATE $\mathbf{g}_{sup}^{meta}=\mathbf{0}$
\FOR{$f_{sup}^{meta}$ in $F_{sup}^{meta}$}
\STATE $\mathbf{g}_{sup} = \nabla_{\mathbf{x}^{adv}_n}{\mathcal{L}(f_{sup}^{meta}(\boldsymbol{RST}(\mathbf{x}^{adv}_n,\beta)),f_{sup}^{meta}(\mathbf{x}^t))}$
\STATE $\bar{\mathbf{x}}^{meta} = \mathop{Clip}_{[\mathbf{x}^{adv}_0-\epsilon, \mathbf{x}^{adv}_0+\epsilon]}(\mathbf{x}^{adv}_{n} - \alpha \cdot \boldsymbol{sign}(\mathbf{g}_{sup}) \odot \mathbf{M})$
\STATE $\mathbf{x}^{meta} = \mathop{Clip}_{[\mathbf{0},\mathbf{1}]}(\bar{\mathbf{x}}^{meta})$
\STATE $\mathbf{g}_{que} = \nabla_{\mathbf{x}^{meta}}{\mathcal{L}(f_{que}^{meta}(\boldsymbol{RST}(\mathbf{x}^{meta},\beta)),f_{que}^{meta}(\mathbf{x}^t))}$
\STATE $\mathbf{g}_{que}^{meta}= \mathbf{g}_{que}^{meta} + \mathbf{g}_{que}$
\STATE $\mathbf{g}_{sup}^{meta} = \mathbf{g}_{sup}^{meta} + \mathbf{g}_{sup}$
\ENDFOR
\STATE $\mathbf{g}_n = \frac{1}{m-1}(\mathbf{g}_{sup}^{meta}+\mathbf{g}_{que}^{meta})$
\STATE $\bar{\mathbf{x}}^{adv}_{n+1} = \mathop{Clip}_{[\mathbf{x}^{adv}_0-\epsilon, \mathbf{x}^{adv}_0+\epsilon]}(\mathbf{x}^{adv}_{n} - \alpha \cdot \boldsymbol{sign}(\mathbf{g}_n) \odot \mathbf{M})$
\STATE $\mathbf{x}^{adv}_{n+1} = \mathop{Clip}_{[\mathbf{0},\mathbf{1}]}(\bar{\mathbf{x}}^{adv}_{n+1})$
\ENDFOR
\STATE $\mathbf{A}_N = \mathbf{x}^{adv}_{N} \odot \mathbf{M}$
\RETURN $\mathbf{x}^{adv}_{N}, \mathbf{A}_N$
\end{algorithmic}
\end{algorithm}
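A minimal sketch of one random meta-optimization step, again with gradient oracles standing in for the surrogate face models (the split into a meta-query model and meta-support models follows the algorithm above; everything else is an assumption of the sketch):

```python
import numpy as np

def meta_step(x, x0, mask, grad_fns, alpha=0.003, eps=0.3, rng=None):
    """One update of the random meta-optimization ensemble (cf. RSTAM_meta)."""
    rng = rng or np.random.default_rng()
    names = list(grad_fns)
    que = names[rng.integers(len(names))]      # randomly chosen meta-query model
    sups = [n for n in names if n != que]      # the rest are meta-support models
    g_sup_sum = np.zeros_like(x)
    g_que_sum = np.zeros_like(x)
    for s in sups:
        g_sup = grad_fns[s](x)                 # support gradient at x
        x_meta = np.clip(x - alpha * np.sign(g_sup) * mask, x0 - eps, x0 + eps)
        x_meta = np.clip(x_meta, 0.0, 1.0)     # one virtual (meta) step
        g_que_sum += grad_fns[que](x_meta)     # query gradient at the meta point
        g_sup_sum += g_sup
    g = (g_sup_sum + g_que_sum) / len(sups)    # average over the m-1 supports
    x = np.clip(x - alpha * np.sign(g) * mask, x0 - eps, x0 + eps)
    return np.clip(x, 0.0, 1.0)
```

The query gradient is evaluated at the virtually updated point, which is what distinguishes this from simply averaging the gradients of all models at $\mathbf{x}^{adv}_n$.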
\section{Experiments}
\subsection{Experimental Setup}
\noindent\textbf{Datasets.} In the experiments, we use four public face datasets: two high-resolution face datasets, CelebA-HQ~\cite{karras2018progressive} and Makeup Transfer (MT)~\cite{li2018beautygan}, and two low-quality face datasets, LFW~\cite{huang2008labeled} and CASIA-FaceV5~\cite{casiafacev5}.
\begin{itemize}[leftmargin=*]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item CelebA-HQ is a high-quality version of CelebA~\cite{liu2015faceattributes} that consists of 30,000 images at $1024\times1024$ resolution.
\item LFW is made up of 13,233 low-quality facial images gathered from the internet. There are 5,749 identities in this collection, with 1,680 people having two or more photos.
\item MT is a facial makeup dataset that includes 3834 female face images, with 2719 makeup images and 1115 non-makeup images.
\item CASIA-FaceV5 contains 2,500 facial images of 500 subjects from Asia, and all face images are captured using a Logitech USB camera.
\end{itemize}
In order to evaluate the performance of the attacks, we randomly select 500 different identity pairs from the CelebA-HQ and LFW datasets, respectively. Furthermore, we randomly select 1000 different identity makeup images from the MT dataset to make up 500 different identity pairs, and 500 subjects of CASIA-FaceV5 are randomly paired into 250 different identity pairs for the experiment.
~\\[4pt]
\noindent\textbf{Face Recognition Models and Face Recognition Systems.} In our experiments, we use five face recognition models, FaceNet~\cite{schroff2015facenet}, MobileFace, IRSE50, IRSE101, and IR151~\cite{deng2019arcface}, and five commercial face recognition systems, Face++~\cite{faceplusplus}, Baidu~\cite{baidu}, Aliyun~\cite{aliyun}, Tencent~\cite{tencent} and Microsoft~\cite{microsoft}. Because the API of Aliyun's face recognition system is not available to individual users, we only use the web application of Aliyun's face recognition system for our experiments. In Microsoft's face recognition system, we use the ``recognition\_04'' face recognition model and the ``detection\_03'' face detection model, which are the latest versions of Microsoft's face recognition system. All other face recognition systems use the default version.
~\\[4pt]
\noindent\textbf{Evaluate Metrics.}
For impersonation attacks on face recognition models, the attack success rate ($ASR$)~\cite{deb2020advfaces,zhong2020towards,yin2021adv} is reported as an evaluation metric,
\begin{equation}
\begin{aligned}
ASR=\frac{\sum_{i=1}^N 1_{\tau} (\cos[f(\mathbf{x}^t_i), f(\mathbf{x}^{adv}_i)] > \tau)}{N} \times 100\%,
\end{aligned}
\end{equation}
where $N$ is the number of pairs in the face dataset, $1_{\tau}$ denotes the indicator function, and $\tau$ is a pre-determined threshold. For each victim face recognition model, $\tau$ is determined at a 0.1\% False Acceptance Rate ($FAR$) on all possible image pairs in LFW, i.e., FaceNet 0.409, MobileFace 0.302, IRSE50 0.241, and IR151 0.167.
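In code, the metric reduces to a thresholded mean over the pair-wise cosine similarities (a sketch; the similarity values here are made up for illustration):

```python
import numpy as np

def attack_success_rate(cos_sims, tau):
    """ASR: percentage of attack/target pairs whose cosine similarity exceeds tau."""
    return float((np.asarray(cos_sims) > tau).mean() * 100.0)
```

For example, with MobileFace's threshold $\tau=0.302$, `attack_success_rate([0.5, 0.3, 0.1, 0.35], 0.302)` yields 50.0, since two of the four pairs clear the threshold.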
For the evaluation of attacks on face recognition systems, we report the mean confidence scores ($MCS$) on each dataset as an evaluation metric,
\begin{equation}
MCS = \frac{\sum^N_{i=1} \mathit{conf}_i}{N} \times 100\%,
\end{equation}
where \emph{conf} is a confidence score between the target and the attack returned from the face recognition system API, $N$ is the number of pairs in the face dataset.
~\\[4pt]
\noindent\textbf{Implementation Details.} We first resize all input images from all datasets to $512 \times 512$ and normalize them to $[0,1]$. The following are the settings of the main comparison methods in our experiments.
\begin{itemize}[leftmargin=*]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item PASTE is the standard baseline, in which the region of the target image under the binary mask $\mathbf{M}$ is pasted directly onto the attacker's source image before performing the impersonation attack.
\item AM$_{\infty}$ is the adversarial mask attack method with the $\ell_\infty$-bound. The perturbation step size $\alpha$ is set to 0.003, the number of iterations $N$ is set to 2000, and the perturbation bound $\epsilon$ is set to 0.3. The target face model $f$ is IRSE101, and the remaining models, FaceNet, MobileFace, IRSE50, and IR151, are used as victim models for the black-box attack.
\item RSTAM$_{\infty}$ is the $\ell_\infty$-bound RSTAM. The hyperparameter $\beta$ is set to 0.2. Other settings are the same as on AM$_{\infty}$.
\item RSTAM$_{\infty}^{all}$ uses all five face recognition models as target face models without the random meta-optimization ensemble strategy. Assuming the target face models are $F=[f_1,f_2,f_3,f_4,f_5]$, the $n$-th update gradient is
\begin{equation}
\mathbf{g}_n = \frac{1}{5}\sum^{5}_{i=1} \nabla_{\mathbf{x}^{adv}_n}{\mathcal{L}(f_i(\boldsymbol{RST}(\mathbf{x}^{adv}_n,\beta)),f_i(\mathbf{x}^t))}.
\end{equation}
Other settings are the same as on RSTAM$_{\infty}$.
\item RSTAM$_{\infty}^{meta}$ is the $\ell_\infty$-bound RSTAM with using the random meta-optimization ensemble strategy. The target models $F$ use all five face recognition models. Other settings are the same as on RSTAM$_{\infty}$.
\item RSTAM$_{2}$ is the $\ell_2$-bound RSTAM. The perturbation step size $\alpha$ is set to 2. Other settings are the same as on RSTAM$_{\infty}$.
\item RSTAM$_{2}^{meta}$ is the $\ell_2$-bound RSTAM with using the random meta-optimization ensemble strategy. The perturbation step size $\alpha$ is set to 2. Other settings are the same as on RSTAM$_{\infty}^{meta}$.
\end{itemize}
Additionally, we implement our codes based on the open source deep learning platform PyTorch~\cite{paszke2019pytorch}.
\subsection{Digital-Environment Experiments}
In this section, we present the outcomes of black-box impersonation attacks in the digital world. We first report quantitative and qualitative results on the CelebA-HQ, LFW, MT, and CASIA-FaceV5 datasets, and then report the results of the hyperparameter $\beta$ sensitivity experiments.
~\\[4pt]
\noindent\textbf{Quantitative Results.} Tables~\ref{tab:celeba}--\ref{tab:casia} show the quantitative results on digital images. Table~\ref{tab:celeba} shows the results of the black-box impersonation attack on the CelebA-HQ dataset, a high-definition multi-attribute face dataset annotated with 40 attributes per image. The images in this collection span a wide range of pose variations as well as background clutter. We can see from Table~\ref{tab:celeba} that the $ASR$ of RSTAM$_\infty$ on the face models is much higher than that of the PASTE baseline and AM$_\infty$: 64.20\% (RSTAM$_\infty$) vs. 27.40\% (PASTE) vs. 31.60\% (AM$_\infty$) on FaceNet, 63.20\% vs. 48.00\% vs. 43.80\% on MobileFace, 91.00\% vs. 54.80\% vs. 59.60\% on IRSE50, and 91.80\% vs. 41.20\% vs. 51.20\% on IR151. For the face systems in Table~\ref{tab:celeba}, the hard ensemble attack RSTAM$^{all}_\infty$ increases the $MCS$ on Face++ (74.24\% vs. 71.94\%) and Baidu (72.54\% vs. 70.80\%) but decreases it on Tencent (49.50\% vs. 50.63\%) and Microsoft (49.12\% vs. 53.97\%). Thus, the hard ensemble is not the best ensemble attack method, and RSTAM$^{meta}_{\infty}$, based on our proposed random meta-optimization strategy, further improves ensemble attack performance: 74.76\% (RSTAM$^{meta}_{\infty}$) vs. 74.24\% (RSTAM$^{all}_\infty$) on Face++, 72.83\% vs. 72.54\% on Baidu, 50.88\% vs. 49.50\% on Tencent, and 50.58\% vs. 49.12\% on Microsoft. Lastly, Table~\ref{tab:celeba} also shows that the $\ell_2$-bound RSTAM attains better attack performance than the $\ell_\infty$-bound variant in black-box attacks on the commercial face recognition systems.
Table~\ref{tab:lfw} provides the results of the black-box impersonation attack on the low-quality face dataset LFW. From Table~\ref{tab:lfw}, we can observe that the RSTAM attack is still effective on low-quality face images. The $MCS$ of RSTAM$^{meta}_2$ on LFW reaches 70.29\% on Face++, 70.08\% on Baidu, 51.45\% on Tencent, and 50.13\% on Microsoft.
Table~\ref{tab:mt} presents the results of the black-box impersonation attack on the female makeup face images of the MT dataset, and Table~\ref{tab:casia} presents the results on the Asian face dataset CASIA-FaceV5. Compared with the multi-attribute datasets CelebA-HQ and LFW, the face models show lower robustness on the relatively single-attribute face datasets MT and CASIA-FaceV5, so even PASTE achieves a higher $ASR$. In contrast, the commercial face recognition systems show similar robustness on single-attribute and multi-attribute face datasets. These results demonstrate that RSTAM, our proposed black-box impersonation attack method, works effectively with single-attribute or multi-attribute datasets, high-quality or low-quality images, face recognition models, and commercial face recognition systems.
\begin{table}[htbp]
\centering
\caption{The results of digital black-box impersonation attacks on the CelebA-HQ dataset. The attack evaluation metric for face models uses $ASR$ (\%), while the attack evaluation metric for face systems uses $MCS$ (\%). The highlighted values represent the best in each column.}
\label{tab:celeba}
\scalebox{0.65}{
\begin{tabular}{l|cccc|cccc}
\toprule
&\multicolumn{4}{c|}{Face Models} & \multicolumn{4}{c}{Face Systems}\\
& FaceNet &MobileFace & IRSE50 & IR151 & Face++ & Baidu & Tencent & Microsoft \\ \midrule
PASTE & 27.40 & 48.00 & 54.80 & 41.20 & 66.21 & 61.98 & 30.37 & 29.60 \\ \midrule
AM$_{\infty}$& 31.60 & 43.80 & 59.60 & 51.20 & 65.66 & 61.44 & 33.42 & 32.87 \\
RSTAM$_{\infty}$ & \textbf{64.20} & \textbf{63.20} & \textbf{91.00}& \textbf{91.80} & 71.94 & 70.80 & 50.63 & 53.97 \\
RSTAM$_{\infty}^{all}$ & - & - & - & - & 74.24 & 72.54 & 49.50 & 49.12 \\
RSTAM$_{\infty}^{meta}$ & - & - & - & - & 74.76 & 72.83 & 50.88 & 50.58 \\ \midrule
RSTAM$_{2}$ & 60.80 & 62.40 & 89.40 & \textbf{91.80} & 71.18 & 70.75 & \textbf{52.60} &\textbf{55.67} \\
RSTAM$_{2}^{meta}$ & - & - & - & - & \textbf{74.80} & \textbf{72.90} & 51.94 & 52.19 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{The results of digital black-box impersonation attacks on the LFW dataset. The attack evaluation metric for face models uses $ASR$ (\%), while the attack evaluation metric for face systems uses $MCS$ (\%). The highlighted values represent the best in each column.}
\label{tab:lfw}
\scalebox{0.65}{
\begin{tabular}{l|cccc|cccc}
\toprule
&\multicolumn{4}{c|}{Face Models} & \multicolumn{4}{c}{Face Systems}\\
& FaceNet &MobileFace & IRSE50 & IR151 & Face++ & Baidu & Tencent & Microsoft \\ \midrule
PASTE & 24.00 & 36.40 & 44.20 & 21.20 & 58.90 & 51.86 & 27.70 & 22.13 \\ \midrule
AM$_{\infty}$& 28.00 & 36.60 & 49.40 & 33.60 & 59.65 & 52.85 & 31.04 & 26.37 \\
RSTAM$_{\infty}$ & \textbf{59.20} & 50.80 & \textbf{85.40} & \textbf{89.00} & 66.11 & 66.94 & 47.34 & 48.88 \\
RSTAM$_{\infty}^{all}$ & - & - & - & - & 69.59 & 69.45 & 48.20 & 45.78 \\
RSTAM$_{\infty}^{meta}$ & - & - & - & - & 70.00 & 69.82 & 49.45 & 47.71 \\ \midrule
RSTAM$_{2}$ & 57.60 & \textbf{52.00} & 84.80 & \textbf{89.00} & 65.56 & 66.93 & 51.16 & \textbf{53.18} \\
RSTAM$_{2}^{meta}$ & - & - & - & - & \textbf{70.29} & \textbf{70.08} & \textbf{51.45} & 50.13 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{The results of digital black-box impersonation attacks on the MT dataset. The attack evaluation metric for face models uses $ASR$ (\%), while the attack evaluation metric for face systems uses $MCS$ (\%). The highlighted values represent the best in each column.}
\label{tab:mt}
\scalebox{0.65}{
\begin{tabular}{l|cccc|cccc}
\toprule
&\multicolumn{4}{c|}{Face Models} & \multicolumn{4}{c}{Face Systems}\\
& FaceNet &MobileFace & IRSE50 & IR151 & Face++ & Baidu & Tencent & Microsoft \\ \midrule
PASTE & 86.40 & 61.80 & 71.30 & 42.00 & 63.87 & 55.98 & 28.15 & 28.16 \\ \midrule
AM$_{\infty}$& 87.00 & 61.20 & 77.40 & 54.80 & 63.62 & 57.07 & 31.26 & 31.41 \\
RSTAM$_{\infty}$ & \textbf{94.40} & 79.00 & \textbf{95.60} & \textbf{92.20} & 71.26 & 65.81 & 46.43 & 49.06 \\
RSTAM$_{\infty}^{all}$ & - & - & - & - & 72.59 & 67.39 & 45.37 & 47.74 \\
RSTAM$_{\infty}^{meta}$& - & - & - & - & 73.06 & 67.98 & 46.08 & 48.90 \\ \midrule
RSTAM$_{2}$ & 93.80 & \textbf{79.40} & 94.20 & \textbf{92.20} & 70.73 & 66.17 & \textbf{48.23} & \textbf{51.84} \\
RSTAM$_{2}^{meta}$ & - & - & - & - & \textbf{73.12} & \textbf{68.18} & 47.44 & 50.65 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{The results of digital black-box impersonation attacks on the CASIA-FaceV5 dataset. The attack evaluation metric for face models uses $ASR$ (\%), while the attack evaluation metric for face systems uses $MCS$ (\%). The highlighted values represent the best in each column.}
\label{tab:casia}
\scalebox{0.65}{
\begin{tabular}{l|cccc|cccc}
\toprule
&\multicolumn{4}{c|}{Face Models} & \multicolumn{4}{c}{Face Systems}\\
& FaceNet &MobileFace & IRSE50 & IR151 & Face++ & Baidu & Tencent & Microsoft \\ \midrule
PASTE & 92.40 & 80.80 & 89.20 & 66.40 & 59.53 & 52.02 & 27.99 & 42.82 \\ \midrule
AM$_{\infty}$& 92.00 & 79.20 & 90.80 & 72.40 & 63.44 & 56.52 & 34.05 & 46.01 \\
RSTAM$_{\infty}$ & \textbf{97.20} & 87.60 & 98.40 & 97.60 & 71.70 & 68.27 & 42.59 & 57.34 \\
RSTAM$_{\infty}^{all}$ & - & - & - & - & 72.84 & 69.91 & 44.39 & 57.77 \\
RSTAM$_{\infty}^{meta}$ & - & - & - & - & 73.31 & 70.26 & 44.96 & 58.88 \\ \midrule
RSTAM$_{2}$ &\textbf{97.20} & \textbf{90.00} & \textbf{98.80} & \textbf{98.00} & 71.44 & 69.41 & \textbf{47.29} & \textbf{62.62} \\
RSTAM$_{2}^{meta}$ & - & - & - & - & \textbf{73.82} & \textbf{70.76} & 46.71 & 60.87 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[tbh]
\centering
\includegraphics[width=1.0\linewidth]{figures/digital-att}
\caption{The visualization results of digital black-box impersonation attacks on five commercial face recognition systems. The confidence scores are pasted to the right of each attack, (\textcolor{red}{F:Face++}, \textcolor{blue}{B:Baidu}, \textcolor{teal}{A:Aliyun}, \textcolor{violet}{T:Tencent}, \textcolor{black}{M:Microsoft}).}
\label{fig:digital-attack}
\Description{digital-attack}
\end{figure*}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{figures/Beta}
\caption{The results of the hyperparameter $\beta$ sensitivity experiments using the RSTAM$_\infty$ attack on the CelebA-HQ dataset.}
\label{fig:beta}
\Description{fig:beta}
\end{figure}
\noindent\textbf{Qualitative Results.} The qualitative results are shown in Figure~\ref{fig:digital-attack}. As illustrated in Figure~\ref{fig:digital-attack}, the confidence scores between the targets and the attacks generated by RSTAM are much higher than those between the targets and the sources. The confidence scores of the RSTAM attacks are mostly greater than 70\%. In particular, the confidence score between the attack and the target reaches 97.39\% on LFW using RSTAM$^{meta}_2$ against the Tencent face recognition system.
~\\[4pt]
\noindent\textbf{Sensitivity of the Hyperparameter $\beta$.}
The hyperparameter $\beta$ controls the random similarity transformation with 4DoF and plays an important role in RSTAM. We perform sensitivity experiments for $\beta$ using RSTAM$_\infty$ on the CelebA-HQ dataset; the results are shown in Figure~\ref{fig:beta}. Based on these results, we suggest setting $\beta$ between 0.15 and 0.25. In all experiments except this sensitivity study, we set $\beta$ to 0.2.
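For illustration, a 4-DoF random similarity transformation controlled by $\beta$ could be sampled as below. The exact ranges and parameterization are assumptions of this sketch; the paper only states that $\beta$ controls the randomness of the four degrees of freedom (scale, rotation, and 2D translation):

```python
import numpy as np

def sample_rst(beta, rng=None):
    """Sample a random 4-DoF similarity transform as a 2x3 affine matrix."""
    rng = rng or np.random.default_rng()
    s = 1.0 + rng.uniform(-beta, beta)         # isotropic scale around 1
    theta = rng.uniform(-beta, beta) * np.pi   # rotation angle
    tx, ty = rng.uniform(-beta, beta, size=2)  # translation (normalized units)
    c, si = np.cos(theta), np.sin(theta)
    return np.array([[s * c, -s * si, tx],
                     [s * si,  s * c, ty]])
```

With $\beta=0$ the sampled transform degenerates to the identity, and larger $\beta$ widens the range of each degree of freedom.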
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\linewidth]{figures/1234-2}
\caption{Realistic environmental settings for physical attack experiments.}
\label{fig:env-setting}
\Description{env-setting}
\end{figure}
\begin{figure*}[tbph]
\centering
\includegraphics[width=0.8\linewidth]{figures/phy-att-v.pdf}
\caption{The visualization results of physical black-box impersonation attacks on five commercial face recognition systems. The target and source are the same as in Figure~\ref{fig:RSTAM}. The confidence scores are pasted to the right of each attack, (\textcolor{red}{F:Face++}, \textcolor{blue}{B:Baidu}, \textcolor{teal}{A:Aliyun}, \textcolor{violet}{T:Tencent}, \textcolor{black}{M:Microsoft}). }
\label{fig:phy-att}
\Description{phy-att}
\end{figure*}
\subsection{Physical-Realizability Experiments}
The successful completion of an attack in the digital world does not guarantee that it can be applied in the physical world. Moreover, compared to digital attacks, physical attacks have more practical value in the real world. We therefore use a mobile and compact printer, a Canon SELPHY CP1300, to print the adversarial masks for the physical attack experiments. The setup of the physical environment is shown in Figure~\ref{fig:env-setting}. For shooting, we use the iPhone 11 Pro Max's 12 MP front-facing camera together with a Bluetooth remote control.
Figure~\ref{fig:phy-att} shows the visualization results of our physical black-box impersonation attacks against the five state-of-the-art commercial face recognition systems. The target and source are the same as in Figure~\ref{fig:RSTAM}. Although the printed adversarial masks exhibit distortion in comparison to the digital adversarial masks, the confidence scores of the physical attacks at position \ding{172} are not much reduced, and even increase with RSTAM$^{meta}_{\infty}$ and RSTAM$^{meta}_{2}$ against Face++. This shows that RSTAM can effectively mount physical attacks in a realistic environment using a mobile and compact printer. Except for the attacks on the Tencent face system, RSTAM maintains high confidence scores against commercial face recognition systems at various positions. Moreover, RSTAM performs well at the long-range position \ding{175}, where the captured face is of low quality. Similarly to the digital setting, the RSTAM$^{meta}_{\infty}$ and RSTAM$^{meta}_{2}$ ensemble attacks based on the random meta-optimization strategy show superior attack performance in the physical world.
\section{Conclusions}
In this paper, we propose a black-box impersonation attack method on face recognition, RSTAM. In order to improve the transferability of the adversarial masks, we propose a random similarity transformation strategy for increasing input diversity and a random meta-optimization strategy for ensembling several pre-trained face models to generate more general adversarial masks. Finally, we perform experimental validation on four public face datasets, CelebA-HQ, LFW, MT, and CASIA-FaceV5, and five commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft. The experiments demonstrate that RSTAM is an effective attack on face recognition. Furthermore, RSTAM can be easily implemented as a physical black-box impersonation attack using a mobile and compact printer. We also find that current commercial face recognition systems are not very secure: real face images of target identities can easily be collected from social networks, after which impersonation attacks can be completed with RSTAM. Therefore, our future work will focus on how to effectively defend against RSTAM and achieve a more robust and secure face recognition model.
\balance
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Quantizing a transformer model is not a simple matter due to numerous channel-dependent outliers in activations \cite{DBLP:conf/emnlp/BondarenkoNB21}. They lead to a large quantization error \cite{DBLP:conf/icml/ZhaoHDSZ19}, and we observe that the problem is worse in the decoder-only transformers like GPT-2. One solution to the difficulty is quantization-aware training (QAT), an approach that fine-tunes the model parameters in response to the numerical error arising from quantization. Post-training quantization (PTQ) -- a counterpart of QAT that performs quantization without modifying model parameters -- is not powerful enough to cope with the outliers.
\begin{figure}[!ht]
\centering
\includegraphics[width=1.\linewidth]{imgs/intro.png}
\vskip -0.1in
\caption{
Average perplexity (PPL) of
the full-precision (FP) model and the models quantized with PTQ and QAT on
5 datasets (left).
We use the PTB dataset as the fine-tuning data (F-ID) for QAT.
The FP model and the QAT model are evaluated on the F-ID and the other 4 datasets (F-OOD) (right).
}
\label{motivation}
\end{figure}
While QAT is effective, it requires the dataset and the training pipeline, and the problem is that they are often inaccessible when dealing with the original pretrained model without any downstream task. One then has no choice but to use arbitrary fine-tuning data for QAT.
However, the fine-tuning returns worse accuracies for distributions unseen during training (out-of-distribution with regard to fine-tuning; F-OOD) despite improving for the training distribution (in-distribution with regard to fine-tuning; F-ID) \cite{DBLP:journals/corr/abs-2202-10054}.
This is consistent with our observation that QAT overfits the model to the fine-tuning data as in Figure~\ref{motivation}.
The resulting quantized model therefore has its generality impaired. This violates the premise of a general-purpose language model, which must operate well across various texts of the target language.
\begin{figure*}
\centering
\includegraphics[width=6in]{imgs/main_fig.png}
\vskip -0.05in
\caption{
Quadapter performs a linear scaling and its inversion before and after $\textit{Q}$, the quantizer for the target activation (left).
In the transformer block of GPT-2, Quadapters can be installed in two different locations (right).
}
\label{fig:quadapter}
\end{figure*}
Our hypothesis is that QAT incurs the overfitting because it changes all the parameters of the model. This difficulty closely resembles the research topic of continual learning, where a model must not forget its past capability when transferring to a new task \cite{DBLP:journals/corr/abs-2111-00667}. An adapter is a strategy for adapting to a new distribution by training only a small number of parameters, and it is a popular means of lessening catastrophic forgetting. We borrow this concept to propose Quadapter, a lightweight module that adapts to the quantization error on behalf of the intact original model.
The contribution of this work is that we successfully quantize GPT-2, overcoming the large inter-channel variance and the QAT overfitting issue with Quadapter. To the best of our knowledge, this is the first work to quantize both weights and activations of GPT-2
without the complete training pipeline.
\section{Related Works}
\noindent{\textbf{Adapters}}
Extensive research has been conducted on how to steer a large pretrained model with few adapter parameters.
The concept of adapter has proven its usefulness in language models for
transfer learning \cite{pmlr-v97-houlsby19a},
multi-task learning \cite{DBLP:journals/corr/abs-1902-02671},
and domain adaptation \cite{DBLP:journals/corr/abs-2111-00667}.
Several works apply adapters to the visual domain as well
\cite{DBLP:conf/eccv/LiH16, DBLP:conf/aaai/PerezSVDC18}.
\noindent{\textbf{Transformer Quantization}}
In comparison to GPT-2,
BERT is easier to quantize. It can be quantized
with PTQ under a limited performance drop \cite{DBLP:conf/aaai/ShenDYMYGMK20}.
QAT on BERT for a given downstream task
recovers full-precision (FP) performance even with ultra-low precision \cite{DBLP:conf/nips/ZafrirBIW19, DBLP:conf/emnlp/BondarenkoNB21}, or with
integer-only operations for non-linear layers \cite{DBLP:conf/icml/KimGYMK21}.
On the other hand,
quantization studies on autoregressive transformers are relatively limited in their scope, using weight-only quantization
\cite{DBLP:journals/corr/abs-2009-07453} or requiring full-fledged training \cite{DBLP:journals/corr/abs-1910-10485,DBLP:conf/acl/taoacl2022}. Note that these works focus on quantizing a GPT-2 fine-tuned on a downstream task, whereas ours quantizes the original pretrained GPT-2.
\noindent{\textbf{Quantization techniques}}
Directly relevant to our work are cross-layer-equalization (CLE) \cite{9008784} and adaptive rounding (AdaRound) \cite{DBLP:conf/icml/NagelABLB20}. Similarly to CLE, Quadapter rescales associated model weights to lessen the quantization burden. AdaRound and our proposed method are alike in training foldable helper parameters to minimize the block-wise quantization error.
In addition, learned step size (LSQ) \cite{DBLP:conf/iclr/EsserMBAM20} and its extension (LSQ+) \cite{9151058} train the quantization-related parameters during QAT, to which Quadapter bears similarity.
\begin{table*}
\centering
\resizebox{1.0\textwidth}{!}{
\begin{tabular}{llccccc|ccccc} \toprule
& & \multicolumn{5}{c}{GPT-2} & \multicolumn{5}{c}{DistilGPT-2}\\
Data & Method & Wikitext2 & PTB & LAMBADA & CBT\_CN & CBT\_NE & Wikitext2 & PTB & LAMBADA & CBT\_CN & CBT\_NE \\
\hline\hline
- & FP32 & 29.27 & 41.31 & 48.39 & 27.29 & 30.53 & 44.36 & 59.73 & 74.94 & 42.54 & 47.09 \\
\hline
\multirow{4}{*}{Calib. data}
& PTQ & 915.58 & 751.23 & 827.06 & 655.31 & 759.83 & 87.52 & 114.42 & 205.35 & 93.16 & 104.94 \\
& AdaRound & 507.07 & 478.29 & 685.98 & 319.74 & 309.11 & 84.94 & 104.94 & 164.98 & 107.89 & 92.59 \\
& CLE & 40.28 & 59.33 & 74.61 & 38.92 & 43.69 & 69.81 & 86.66 & 144.06 & 68.78 & 76.80 \\
& Quadapter BC & \textbf{34.53} & \textbf{50.65} & \textbf{63.51} & \textbf{32.47} & \textbf{36.46} & \textbf{52.79} & \textbf{70.43} & \textbf{102.75} & \textbf{51.97} & \textbf{57.81} \\
\hline\hline
\multirow{3}{*}{Wikitext2}
& QAT & \underline{32.51} & 100.75 & 125.40 & 54.94 & 63.94 & \underline{35.04} & 109.40 & 129.19 & 67.03 & 76.55 \\
& Quadapter BC+QAT & \underline{\textbf{21.61}} & 57.06 & 63.65 & 33.80 & 38.40 & \underline{\textbf{28.50}} & 80.52 & 86.57 & 50.64 & 57.05 \\
& Quadapter (ours) & \underline{29.34} & \textbf{47.30} & \textbf{57.28} & \textbf{30.37} & \textbf{34.05} & \underline{43.05} & \textbf{66.28} & \textbf{85.42} & \textbf{47.66} & \textbf{52.49} \\
\hline
\multirow{3}{*}{PTB}
& QAT & 331.61 & \underline{33.94} & 330.10 & 212.12 & 252.03 & 347.25 & \underline{37.44} & 308.22 & 214.14 & 257.44 \\
& Quadapter BC+QAT & 79.74 & \underline{\textbf{24.10}} & 106.32 & 59.90 & 69.79 & 121.62 & \underline{\textbf{29.65}} & 146.48 & 91.73 & 106.31 \\
& Quadapter (ours) & \textbf{33.69} & \underline{39.46} & \textbf{55.68} & \textbf{31.45} & \textbf{35.16} & \textbf{50.73} & \underline{56.63} & \textbf{87.02} & \textbf{49.43} & \textbf{54.35} \\
\bottomrule
\end{tabular}
}
\vskip -0.1in
\caption{
Performance evaluation of the quantized GPT-2 and DistilGPT-2 on various datasets. The metric is PPL (lower is better).
In the case of Quadapter BC+QAT, QAT initiates after the block-wise calibration of Quadapter. For Quadapter (ours), both the training phases are completed. \underline{Underline} indicates the results on F-ID.
}
\label{tab:main}
\end{table*}
\section{Methods}
Quadapter is simply a set of learnable parameters.
The Quadapter block, by contrast, denotes the actual working mechanism of Quadapter, involving two consecutive layers of linear relations, their quantizers, and their associated Quadapter instance. The effectiveness of Quadapter comes from the interaction among these components, and from the two-phase training procedure.
\subsection{Quadapter Design}
Quadapter linearly scales the input channels and reverts the scaling after quantization. This ensures the identity relation if not for the quantizers, making it possible to keep the model parameters intact (Figure~\ref{fig:quadapter} left).
The scaling and the inverse-scaling of an activation are, in practice, folded to the weight and the bias of the preceding layer and to the weight of the following layer. For example, given a forward pass of two linear layers:
\begin{align}
\mathbf{y} & = \mathbf{W}_2(\mathbf{W}_1{\mathbf{x}} + \mathbf{b}_1)+\mathbf{b}_2,
\label{eq:fpforward}
\end{align}
the Quadapter block output $\hat\mathbf{y}$ is as follows:
\begin{align}
\hat\mathbf{y} = \textit{Q}_{{\boldsymbol \theta}_2}(\mathbf{W}_2{\mathbf{A}}^{-1}) \textit{Q}_{{\boldsymbol \theta}_a}(\textit{Q}_{{\boldsymbol \theta}_1}( {\mathbf{A}}\mathbf{W}_1){\mathbf{x}} \nonumber \\
+ {\mathbf{A}}\mathbf{b}_1) + \mathbf{b}_2 \label{eq:main_1eq} \\
= \textit{Q}_{{\boldsymbol \theta}_2}(\mathbf{W}_2')\textit{Q}_{{\boldsymbol \theta}_a}(\textit{Q}_{{\boldsymbol \theta}_1}(\mathbf{W}_1'){\mathbf{x}} + \mathbf{b}_1') + \mathbf{b}_2.
\label{eq:main_2eq}
\end{align}
Here, ${\mathbf{A}} = \mathrm{diag}(\mathbf{\alpha})$ is a diagonal matrix with ${\mathbf{A}}_{ii}=\mathbf{\alpha}_i$,
where $\alpha \in \mathbb{R}^d$ is the learnable Quadapter parameter with the intermediate activation dimension $d$.
$\textit{Q}_{{\boldsymbol \theta}_1}$ and $\textit{Q}_{{\boldsymbol \theta}_2}$ are the weight quantizers,
and $\textit{Q}_{{\boldsymbol \theta}_a}$ is the activation quantizer.
Each quantizer $\textit{Q}_{\boldsymbol \theta}$ quantizes its input values based on the quantization parameter
${\boldsymbol \theta} = (\theta_{{\rm min}}, \theta_{{\rm max}})$ \cite{DBLP:journals/corr/abs-1806-08342}.
The Quadapter parameter $\mathbf{\alpha}$ is learned during training and fused into the adjacent weights at inference time (Equation~\ref{eq:main_2eq}).
As Equation~\ref{eq:main_1eq} shows, the forward scaling and the inverse scaling must match across three nested quantizers, which are strongly nonlinear operations. Therefore $\mathbf{\alpha}$ should be learned rather than set analytically as in \cite{9008784}; a single analytical solution is not sufficient to balance the quantization burden between the two layers.
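To illustrate the folding in Equation~\ref{eq:main_2eq}, here is a minimal sketch (our own illustration, not the released implementation; the helper names \texttt{forward} and \texttt{fold\_quadapter} and the toy matrices are ours). Scaling the rows of $\mathbf{W}_1$ and $\mathbf{b}_1$ by $\mathbf{\alpha}$ and the columns of $\mathbf{W}_2$ by $1/\mathbf{\alpha}$ leaves the full-precision output unchanged, so all of Quadapter's effect comes from how the quantizers see the rescaled tensors:

```python
# Sketch of Eq. (3): folding Quadapter's per-channel scales into the
# surrounding weights. All names here are ours, for illustration only.
def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def forward(W1, b1, W2, b2, x):
    # y = W2 (W1 x + b1) + b2, without any quantizers
    h = [v + b for v, b in zip(matvec(W1, x), b1)]
    return [v + b for v, b in zip(matvec(W2, h), b2)]

def fold_quadapter(W1, b1, W2, alpha):
    # A = diag(alpha): scale the rows of W1 and b1 by alpha and the
    # columns of W2 by 1/alpha, so W2'(W1' x + b1') = W2 (W1 x + b1).
    W1p = [[a * w for w in row] for a, row in zip(alpha, W1)]
    b1p = [a * b for a, b in zip(alpha, b1)]
    W2p = [[w / a for w, a in zip(row, alpha)] for row in W2]
    return W1p, b1p, W2p
```

With the quantizers removed, the folded and unfolded passes agree exactly; wrapping $\textit{Q}_{\boldsymbol \theta}$ around $\mathbf{W}_1'$, $\mathbf{W}_2'$, and the intermediate activation recovers Equation~\ref{eq:main_2eq}.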
\subsection{Quadapter Training}
The learning of Quadapter is comprised of two phases: the block-wise calibration and the end-to-end fine-tuning.
\noindent{\textbf{Phase 1: Block-wise Calibration }}
Each of the Quadapter instances is initialized to $\vec{\mathbf{1}}$ and trained with the calibration data, independently per Quadapter block.
The local objective for each block is L2 loss:
\begin{align}
\operatornamewithlimits{\arg \min}_{\mathbf{\alpha}} ||\mathbf{y} - \hat\mathbf{y}||_2^2,
\end{align}
which \cite{DBLP:conf/icml/NagelABLB20} shows to be effectively complementary to the task loss.
$\hat\mathbf{y}$ is computed in the dynamic quantization mode \cite{DBLP:conf/nips/ZafrirBIW19}, where the statistics are obtained per batch.
Quadapter resulting from the calibration phase is a PTQ method that is independent of the fine-tuning process. We therefore denote such Quadapter by \textit{Quadapter BC}.
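As a toy sketch of this phase (ours, not the paper's implementation), the per-block loop is plain gradient descent on the reconstruction loss, initialized at $\vec{\mathbf{1}}$. For simplicity the sketch uses a scalar $\alpha$, a generic \texttt{block\_fn}, and central finite differences; the real blocks contain quantizers, whose gradients are typically handled with straight-through-style estimators instead:

```python
# Toy sketch of Phase 1: fit a scalar Quadapter alpha by gradient descent
# on the block reconstruction loss ||y - y_hat||^2. Central finite
# differences stand in for the straight-through gradients used in practice.
def l2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def calibrate(block_fn, y_ref, steps=200, lr=0.05, eps=1e-3):
    alpha = 1.0  # initialized to 1, as in Phase 1
    for _ in range(steps):
        grad = (l2(block_fn(alpha + eps), y_ref)
                - l2(block_fn(alpha - eps), y_ref)) / (2 * eps)
        alpha -= lr * grad
    return alpha
```

For a smooth stand-in block such as \texttt{block\_fn = lambda a: [2.0 * a]} with target \texttt{[3.0]}, the loop converges to the loss minimizer $\alpha = 1.5$.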
\noindent{\textbf{Phase 2: End-to-end Fine-tuning}}
The subsequent fine-tuning starts with more accommodating quantization parameters (i.e. the min/max statistics) since they have moved to moderate values from extreme outliers during the first phase. The fine-tuning therefore converges much more quickly.
In the second phase, the statistics for quantization are computed in the fashion of static quantization \cite{DBLP:conf/nips/ZafrirBIW19}, based on the same calibration data as in the first phase.
Quadapter is then trained to minimize the end-to-end task loss. During the course, the quantization parameters are jointly learned as in \cite{9151058} while the model parameters stay fixed.
Algorithm \ref{algo1} details the full flow of the Quadapter training.
\SetKwComment{Comment}{/* }{ */}
\RestyleAlgo{ruled}
\begin{algorithm}[hb!]
\caption{Quadapter training}\label{alg:two}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\small
\Input{pretrained model $M$, Quadapter blocks, calibration data $D_1$, fine-tuning data $D_2$, learning rates $\eta_1$, $\eta_2$.}
\Output{
Learned Quadapter parameters $\{\mathbf{\alpha}_1, \mathbf{\alpha}_2, ...\}$
and quantization parameters ${\boldsymbol \theta}^* = \{{\boldsymbol \theta}^1, {\boldsymbol \theta}^2, ...\}$.
}
\Comment{Phase 1}
\ForEach{$i$-th Quadapter block}{
Initialize $\mathbf{\alpha}_i = \vec{\mathbf{1}}$ \\
From $M$ and $D_1$, gather block input ${\mathbf{x}}_i$ and output $\mathbf{y}_i$ \\
\While{not converged}{
$\hat\mathbf{y}_i \gets \text{Eq.~\ref{eq:main_1eq}} $ \\
$\mathbf{\alpha}_i \gets \mathbf{\alpha}_i - \eta_1 \nabla_{\mathbf{\alpha}_i} ||\hat\mathbf{y}_i - \mathbf{y}_i||^2_2$ \\
}
}
\Comment{Phase 2}
Apply learned Quadapters to $M$ \\
Initialize ${\boldsymbol \theta}^*$ with $D_1$ to make quantized model $M_Q$ \\
\While{not converged}{
\For{${\mathbf{x}}, \mathbf{y} \in D_2$}{
compute $L_{\rm{task}}(M_Q({\mathbf{x}}), \mathbf{y})$ \\
\ForEach{$i$-th Quadapter block}{
$\mathbf{\alpha}_i \gets \mathbf{\alpha}_i - \eta_2 \nabla_{\mathbf{\alpha}_i} L_{\rm{task}}$ \\
}
update $\boldsymbol\theta^*$ with LSQ+ \\
}
}
\label{algo1}
\end{algorithm}
\section{Experiments}
\noindent{\textbf{Models}} We quantize GPT-2 \cite{radford2019language} and DistilGPT-2 \cite{sanh2019distilbert} based on their huggingface pretrained models\footnote{huggingface.co/gpt2, huggingface.co/distilgpt2}.
Our quantization configuration follows \cite{DBLP:journals/corr/abs-2201-08442}, doing uniform asymmetric 8-bit quantization for both activations and weights. All the weights and activations are quantized, except for biases, non-linear operations, and additions \cite{DBLP:conf/nips/ZafrirBIW19, DBLP:conf/icml/KimGYMK21}.
For every transformer block, the Quadapter instances are installed in between the first layer norm and the linear projection for key/query/value as well as between the second layer norm and the first feed-forward network (Figure \ref{fig:quadapter} right). One additional instance is applied between the final layer norm and the logit projection.
\noindent{\textbf{Baseline methods}} Our implementation of LSQ+ follows the original proposition \cite{9151058}, except for updating the min/max parameters for stability of training \cite{DBLP:journals/corr/abs-2201-08442}. It is applied for all the QAT experiments. We use AI Model Efficiency Toolkit\footnote{https://github.com/quic/aimet} to obtain AdaRound performance. The CLE metrics are computed with an untrained Quadapter, initialized analytically as in \cite{9008784}.
\noindent{\textbf{Datasets}} We employ WikiText-2 \cite{merity2016pointer}, the English Penn Treebank (PTB) corpus \cite{DBLP:journals/coling/MarcusSM94}, the LAMBADA dataset \cite{DBLP:conf/acl/PapernoKLPBPBBF16}, and the named-entity subset (CBT\_NE) as well as the common-noun subset (CBT\_CN) of Children’s Book Test \cite{hill2016goldilocks}. We follow the datasets' default divisions as to training/validation/test splits.
\noindent{\textbf{Experiment design}} To test the overfitting resiliency, GPT-2 and DistilGPT-2 are quantized with various PTQ and QAT methods on one of the five datasets. The resulting quantized model is evaluated on its F-ID and on the other four datasets (F-OOD). In addition, we expose the models to varying amounts of fine-tuning data during quantization to compare the changing behaviors of QAT and Quadapter.
\begin{figure}[t]
\includegraphics[width=\linewidth]{imgs/fewshot_fig.png}
\vskip -0.05in
\caption{
GPT-2 quantization performance when fine-tuned on F-ID of varying sizes. Both axes are logarithmic.
}
\label{fig:num_line}
\end{figure}
\noindent{\textbf{Results}} In Table~\ref{tab:main}, Quadapter outperforms the baseline methods on the F-OOD in both GPT-2 and DistilGPT-2. This observation evinces the general capability of Quadapter to reduce overfitting across different models. The comparison between Quadapter (ours) and Quadapter BC+QAT is the ablation of the end-to-end fine-tuning, and the result proves its importance.
Noteworthy is that Quadapter is a powerful stand-alone PTQ technique. Even without QAT fine-tuning, the F-OOD metrics are better than those of the QAT baselines. In addition, the effectiveness of the calibration phase is shown by the comparison between CLE and Quadapter BC.
Another advantage of Quadapter is that it is a viable quantization option in data-scarce situations.
As shown in Figure~\ref{fig:num_line}, Quadapter outperforms QAT throughout different amounts of fine-tuning data, and the gap is most evident when only a small amount of data is available.
Aside from the convincing metrics reported above, we further explore whether Quadapter does the intended job of transforming an activation into a more uniform distribution. Figure~\ref{fig:stat} describes the per-channel statistics before and after the Quadapter training. Values in most activation dimensions except for a few have small magnitudes around 0, and such dimensions lose precision when quantized because of the large magnitudes of the total min/max before applying Quadapter. The illustration verifies that the effect of Quadapter indeed aligns with our expectation, reducing the ranges of outlier-ridden channels while enlarging the ranges of the others.
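The inter-channel variance problem behind Figure~\ref{fig:stat} can be reproduced in a toy computation (our illustration; the per-value $\alpha$ below is a degenerate, hand-picked choice for a single vector, whereas Quadapter learns one scale per channel shared across all tokens). With one outlier channel, per-tensor min/max quantization collapses the small channels onto the grid, while scaling channels to a common range before quantizing and inverting afterwards preserves them:

```python
# Toy illustration of inter-channel variance: one outlier channel ruins
# per-tensor min/max quantization; per-channel scaling (the Quadapter idea)
# equalizes the ranges first. The alpha below is hand-picked for a single
# vector, purely for illustration.
def quantize(xs, bits=8):
    # crude asymmetric uniform quantizer based on min/max statistics
    lo, hi = min(xs), max(xs)
    step = (hi - lo) / (2 ** bits - 1) or 1.0  # degenerate-range guard
    return [lo + round((x - lo) / step) * step for x in xs]

h = [512.0, 0.3, -0.2, 0.7]          # channel values; 512.0 is the outlier
err_plain = sum(abs(a - b) for a, b in zip(h, quantize(h)))

alpha = [1.0 / max(abs(v), 1e-8) for v in h]   # equalize channel ranges
scaled = [a * v for a, v in zip(alpha, h)]
recovered = [q / a for q, a in zip(quantize(scaled), alpha)]
err_quad = sum(abs(a - b) for a, b in zip(h, recovered))
```

Here \texttt{err\_quad} is orders of magnitude below \texttt{err\_plain}, mirroring the range equalization visible in Figure~\ref{fig:stat}.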
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{imgs/stats.png}
\vskip -0.1in
\caption{Visualization of the per-channel (x-axis) min/max (y-axis) values of the final layer norm output activation in GPT-2.
The solid/dotted lines represent per-channel/total min and max.
}
\label{fig:stat}
\end{figure}
\section{Limitations}
One limitation of Quadapter is that it requires two consecutive layers of linear relations. In other words, it can be a mediator only for convolution layers, linear layers, or normalization layers (when followed by a linear or convolution layer), but not if residual connections or nonlinear activation functions intervene.
\section{Conclusions}
We identify two challenges in quantizing autoregressive transformer language models: the overfitting issue of QAT and the inter-channel variance in activations. Through experiments, we demonstrate that Quadapter not only mitigates the two problems but also serves as an effective PTQ technique.
\bibliographystyle{acl_natbib}
|
1,108,101,564,705 | arxiv | \section{Introduction}
The aim of the present paper is to describe the following general phenomenon: under appropriate topological conditions,
increasing transfinite sequences of topologies interpolating between
two given topologies $\sigma\subseteq \tau$ stabilize at $\tau$ and, under appropriate additional descriptive set theoretic conditions, the stabilization
occurs at a countable stage of the interpolation. Increasing sequences of topologies play an important role in certain descriptive set theoretic considerations; see, for example,
\cite[Section 1]{Lo}, \cite[Sections 5.1--5.2]{BK}, \cite[Section 2]{Be},
\cite[Section 2]{So1}, \cite[Chapter 6]{Hj1}, \cite[Section 3]{FS}, \cite[Sections 2--4]{So2}, \cite{Hj2}, \cite{Dr}, and, implicitly,
\cite[Sections 3--5]{BDNT}.
In this context, such sequences of topologies are often used to approximate an equivalence relation
by coarser, but more manageable, ones. We relate our theorems on increasing interpolations between two topologies to this theme. Section~\ref{Su:res} contains
a more detailed summary of our results.
The results of this paper are expected to have applications to a Scott-like analysis of quite general Borel equivalence relations but, since
they concern a self-contained and, in a way, distinct topic, we decided to publish them separately.
\subsection{Basic notions and notation}
{\em Unless otherwise stated, all topologies are assumed to be defined on a fixed set $X$.}
We write
\[
{\rm cl}_\tau\;\hbox{ and }\;{\rm int}_\tau
\]
for the operations of closure and interior with respect to a topology $\tau$.
If $\tau$ is a topology and $x\in X$, by a {\bf neighborhood of $x$} we understand a subset of $X$ that contains $x$ in its $\tau$-interior.
A {\bf neighborhood basis of $\tau$} is a family $\mathcal A$ of subsets of $X$ such that for each $x\in X$ and each neighborhood $B$ of $x$, there
exists $A\in {\mathcal A}$ that is a neighborhood of $x$ and $A\subseteq B$. So a neighborhood basis need not consist of open sets.
A topology is called {\bf Baire} if a countable union of nowhere dense sets has dense complement.
Given a family of topologies $T$, we write
\[
\bigvee T
\]
for the topology whose
basis consists of sets of the form $U_0\cap \cdots \cap U_n$, where each $U_i$, $i\leq n$, is $\tau$-open for some $\tau\in T$. This is the smallest
topology containing each topology in $T$. If $\tau_i$, for $i\in I$, are topologies, we write
\[
\bigvee_{i\in I} \tau_i
\]
for $\bigvee T$, where $T=\{ \tau_i\mid i\in I\}$.
It is convenient to have the following piece of notation. For an ordinal $\alpha$, let
\begin{equation}\label{E:oplu}
\alpha\oplus 1 =
\begin{cases}
\alpha+1,&\text{ if $\alpha$ is a successor ordinal;}\\
\alpha, &\text{ if $\alpha$ is equal to $0$ or is a limit ordinal.}
\end{cases}
\end{equation}
More uniformly, one can write, for all ordinals $\alpha$,
\[
\alpha\oplus 1 = \sup \{ \xi+2\mid \xi<\alpha\}.
\]
\subsection{Filtrations}
The notion of filtration defined below is the main new notion of the paper.
Let $\sigma\subseteq \tau$ be topologies and let $\rho$ be an ordinal. A transfinite sequence $(\tau_\xi)_{\xi<\rho}$
of topologies is called a {\bf filtration from $\sigma$ to $\tau$} if
\begin{equation}\label{E:cot}
\sigma= \tau_0\subseteq \tau_1\subseteq \cdots \subseteq \tau_\xi \subseteq \cdots \subseteq \tau
\end{equation}
and, for each $\alpha<\rho$, if $F$ is $\tau_\xi$-closed for some $\xi<\alpha$, then
\begin{equation}\label{E:intap2}
{\rm int}_{\tau_\alpha}(F) = {\rm int}_{\tau}(F).
\end{equation}
We will write $(\tau_\xi)_{\xi\leq\rho}$ for $(\tau_\xi)_{\xi<\rho+1}$.
Each filtration from $\sigma$ to $\tau$ as above can be extended to all ordinals by
setting $\tau_\xi=\tau$ for all $\xi\geq \rho$. For this reason, it will be harmless to assume that a filtration
is defined on all ordinals, which we sometimes do to make our notation lighter. On the other hand, a truncation of a filtration
from $\sigma$ to $\tau$ is also a filtration from $\sigma$ to $\tau$, that is, if $(\tau_\xi)_{\xi<\rho}$ is such a filtration and $\rho'\leq \rho$,
then so is $(\tau_\xi)_{\xi<\rho'}$.
A filtration $(\tau_\xi)_{\xi<\rho}$ from $\sigma$ to $\tau$ is also a filtration from $\sigma$ to $\bigvee_{\xi<\rho}\tau_\xi$.
In fact, if $\tau$ is not relevant to the consideration at hand,
we call a transfinite sequence $(\tau_\xi)_{\xi<\rho}$ of topologies a {\bf filtration from $\sigma$} if it is a filtration from
$\sigma$ to $\bigvee_{\xi<\rho}\tau_\xi$. It is easy to see that $(\tau_\xi)_{\xi<\rho}$ is a filtration from $\sigma$ precisely when,
for each $\alpha<\rho$, $(\tau_\xi)_{\xi\leq\alpha}$ is a filtration from $\sigma$ to $\tau_\alpha$.
Note that if $F\subseteq X$ is an arbitrary set and $(\tau_\xi)_\xi$ is a transfinite sequence of topologies fulfilling \eqref{E:cot}, then for each $\alpha$
\[
{\rm int}_{\tau_\alpha}(F)\subseteq {\rm int}_\tau(F).
\]
So condition \eqref{E:intap2}
says that if $F$ is simple from the point of view of $\tau_\alpha$, that is, if $F$ is $\tau_\xi$-closed for some
$\xi< \alpha$, then ${\rm int}_{\tau_\alpha}(F)$ is as large as possible, in fact, equal to ${\rm int}_\tau(F)$.
One might say that if $F$ is $\tau_\xi$-closed for some $\xi< \alpha$, then $\tau_\alpha$ computes the interior of $F$ correctly, that is, as intended by $\tau$.
In some results below, we will find it useful to consider a weakening of \eqref{E:intap2} to \eqref{E:intap}.
\subsection{Results}\label{Su:res}
Let $\sigma\subseteq \tau$ be two topologies.
The first question is to determine whether a given filtration $(\tau_\xi)_\xi$ from $\sigma$ to $\tau$ reaches $\tau$, that is, whether there exists an ordinal
$\xi$ with $\tau_\xi=\tau$.
Since all the topologies $\tau_\xi$ are defined on the same set, there exists an ordinal $\xi_0$ such that $\tau_\xi= \tau_{\xi_0}$ for all $\xi\geq \xi_0$;
the question is whether $\tau_{\xi_0}=\tau$.
If the answer happens to be positive, we aim to obtain information on
the smallest ordinal $\xi$ for which $\tau_\xi=\tau$. We will achieve these goals in Sections~\ref{S:stbd} and \ref{S:stdes}
(Corollary~\ref{C:tst}, Theorem~\ref{T:stab2}, and Corollary~\ref{C:stom})
assuming that $\tau$ is regular and Baire and that it has a neighborhood basis consisting
of sets that are appropriately definable with respect to $\sigma$. So, informally speaking, termination at $\tau$ of a filtration from $\sigma$ to $\tau$
has to do with the attraction exerted by $\tau$,
which is expressed by $\tau$ being Baire, and with the distance from $\sigma$ to $\tau$, which is expressed by the complexity, with
respect to $\sigma$, of a neighborhood basis of $\tau$.
Given an equivalence relation $E$ on a set $X$, with $X$ equipped with a topology $\tau$, we can define a canonical equivalence relation
that approximates $E$ from above: make $x,y\in X$ equivalent
when the $\tau$-closures of the $E$ equivalence classes of $x$ and $y$ are equal. Given a filtration, this procedure gives rise
to a transfinite sequence of upper approximations of $E$. In Section~\ref{S:eqr}, we consider the question
of these approximations stabilizing to $E$. We answer it in Theorem~\ref{T:eqte} and
Corollary~\ref{C:ceq}.
We also present and study a canonical, slowest filtration from $\sigma$ to $\tau$; see Section~\ref{S:slf}.
\section{The slowest filtration}\label{S:slf}
We introduce an operation on pairs of topologies, which will let us define filtrations. Let $\sigma$ and $\tau$ be topologies. Let
\begin{equation}\label{E:sit}
(\sigma,\tau)
\end{equation}
be the family of all unions of sets of the form
\[
U\cap {\rm int}_\tau(F),
\]
where $U$ is $\sigma$-open and $F$ is $\sigma$-closed. Since
\[
{\rm int}_\tau(F_1\cap F_2) = {\rm int}_\tau(F_1)\cap {\rm int}_\tau(F_2),
\]
it follows that $(\sigma, \tau)$ is a topology.
We record the following obvious lemma.
\begin{lemma}\label{L:opb}
Let $\sigma\subseteq \tau$ be topologies.
\begin{enumerate}
\item[(i)] We have $\sigma\subseteq (\sigma,\tau)\subseteq \tau$.
\item[(ii)] If $(\tau_\xi)_\xi$ is a filtration from $\sigma$ to $\tau$, then $\tau_\xi\subseteq (\tau_\xi,\tau)\subseteq \tau_{\xi+1}$, for each $\xi$.
\end{enumerate}
\end{lemma}
Let $\sigma$ and $\tau$ be two topologies with $\sigma\subseteq\tau$. Lemma~\ref{L:opb} suggests defining a filtration from $\sigma$ to $\tau$ that would be
the slowest such filtration; see Proposition~\ref{P:slo} below.
This goal will be achieved by extending operation \eqref{E:sit} to a transfinite sequence of topologies.
So we define by transfinite recursion topologies $(\sigma,\tau)_\xi$, where $\xi$ is an ordinal. (We will have $(\sigma,\tau)_1=(\sigma,\tau)$.)
Let
\[
(\sigma, \tau)_0=\sigma.
\]
If $(\sigma,\tau)_\xi$ has been defined, let
\[
(\sigma, \tau)_{\xi+1} = ((\sigma,\tau)_\xi, \tau).
\]
If $\lambda$ is a limit ordinal and $(\sigma,\tau)_\xi$ have been defined for all $\xi<\lambda$, then
\[
(\sigma, \tau)_\lambda=\bigvee_{\xi<\lambda}(\sigma, \tau)_\xi.
\]
Note that the definition above can be phrased as follows. Given an ordinal $\xi$,
if $(\sigma, \tau)_\gamma$ are defined for all $\gamma<\xi$, then $(\sigma, \tau)_\xi$ is the family of all unions of sets of the form
\[
U\cap {\rm int}_\tau(F)
\]
where, for some $\gamma<\xi$, $U$ is $(\sigma, \tau)_\gamma$-open and $F$ is $(\sigma, \tau)_\gamma$-closed.
Proposition~\ref{P:slo} justifies regarding $((\sigma,\tau)_\xi)_\xi$ as the slowest filtration from $\sigma$ to $\tau$. On the opposite end,
the transfinite sequence $(\tau_\xi)_\xi$
with $\tau_0=\sigma$ and $\tau_\xi = \tau$ for $\xi>0$ is trivially the fastest such filtration: for $\alpha\geq 1$ we have $\tau_\alpha=\tau$, so \eqref{E:intap2} holds for every set $F$, while for $\alpha=0$ there is nothing to check.
\begin{proposition}\label{P:slo}
Let $\sigma\subseteq \tau$ be topologies.
\begin{enumerate}
\item[(i)] The transfinite sequence $((\sigma,\tau)_\xi)_\xi$ is a filtration from $\sigma$ to $\tau$.
\item[(ii)] If $(\tau_\xi)_\xi$ is a filtration from $\sigma$ to $\tau$, then $(\sigma,\tau)_\xi\subseteq \tau_\xi$, for each ordinal $\xi$.
\end{enumerate}
\end{proposition}
\begin{proof} Immediately from Lemma~\ref{L:opb}(i), we get
\[
\sigma= (\sigma,\tau)_0\subseteq (\sigma,\tau)_1\subseteq\cdots \subseteq (\sigma,\tau)_{\xi}\subseteq\cdots \subseteq \tau.
\]
It is also clear from the very definition that, for each $\alpha$, if $F$ is $(\sigma,\tau)_\xi$-closed for some $\xi<\alpha$, then
\[
{\rm int}_{(\sigma,\tau)_\alpha}(F) = {\rm int}_\tau(F),
\]
that is, we have point (i).
Point (ii) is obtained by transfinite induction. Clearly, we have $(\sigma,\tau)_0 = \sigma= \tau_0$. Assuming inductively that
$(\sigma,\tau)_\xi\subseteq \tau_\xi$ and using Lemma~\ref{L:opb}(ii), we get
\[
(\sigma, \tau)_{\xi+1} = ((\sigma,\tau)_\xi, \tau)\subseteq (\tau_\xi, \tau)\subseteq \tau_{\xi+1},
\]
as required. If $\lambda$ is a limit ordinal and if, inductively, $(\sigma,\tau)_\xi\subseteq \tau_\xi$
for all $\xi<\lambda$, then $\bigcup_{\xi<\lambda}(\sigma, \tau)_\xi \subseteq \tau_\lambda$ and, therefore, $(\sigma, \tau)_\lambda\subseteq \tau_\lambda$.
The conclusion follows.
\end{proof}
\section{Stabilization at $\tau$}\label{S:stbd}
Theorem~\ref{T:sts} should be seen in the context of Lemma~\ref{L:opb}(i).
\begin{theorem}\label{T:sts}
Let $\sigma\subseteq \tau$ be topologies.
Assume that $\tau$ is regular, Baire, and has a neighborhood basis consisting of sets with the Baire property with respect to $\sigma$.
If $\sigma=(\sigma,\tau)$, then $\sigma= \tau$.
\end{theorem}
We start with a general lemma that will be used here and later on to check equality of two topologies.
\begin{lemma}\label{L:toe}
Let $Z$ be a regular topological space, and let $Y$ be a Baire space.
Let $f\colon Z\to Y$ be a continuous bijection. Assume that, for each $z\in Z$ and each non-empty open set $U\subseteq Z$ with $z\in U$,
$f(U)$ is comeager in a neighborhood of $f(z)$. Then $f$ is a homeomorphism.
\end{lemma}
\begin{proof} We write ${\rm cl}_Z$ for closure in $Z$.
We show that, for each $z\in U\subseteq Z$, with $U$ open,
$f({\rm cl}_Z(U))$ contains $f(z)$ in its interior. If not, then, by surjectivity of $f$,
$f(Z\setminus {\rm cl}_Z(U))$ has $f(z)$ in its closure. Since $Z\setminus {\rm cl}_Z(U)$ is open, we have that $f(Z\setminus {\rm cl}_Z(U))$ is non-meager in each
neighborhood of each of its points. Since each neighborhood of $z$ contains a point in $f(Z\setminus {\rm cl}_Z(U))$, it follows that $f(Z\setminus {\rm cl}_Z(U))$
is non-meager in each neighborhood of $f(z)$. By injectivity of $f$ and $Y$ being Baire, this statement contradicts $f(U)$ being comeager in a neighborhood of $f(z)$.
Now we finish the proof by noticing that, by regularity of $Z$, for each $U\subseteq Z$ open we have
\[
U = \bigcup_{z\in U} {\rm cl}_Z(U_z)
\]
for some open sets $U_z$ with $z\in U_z$. Thus,
\[
f(U) = \bigcup_{z\in U} f({\rm cl}_Z(U_z))
\]
and, by what was proved above, $f({\rm cl}_Z(U_z))$ contains $f(z)$ in its interior. Thus, $f(U)$ is open, and the lemma follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T:sts}]
First, we claim that each non-empty $\tau$-open set is non-meager with respect to $\sigma$.
Let $V$ be non-empty and $\tau$-open, and, towards a contradiction, assume that there are sets
$F_n$, $n\in {\mathbb N}$, closed and nowhere dense with respect to $\sigma$, such that $\bigcup_nF_n\supseteq V$.
Then ${\rm int}_\tau(\bigcup_n F_n)\not= \emptyset$. Since $\tau$ is Baire and each $F_n$ is also $\tau$-closed,
it follows that ${\rm int}_\tau(F_{n_0})\not= \emptyset$, for some $n_0$.
Since $F_{n_0}$ is $\sigma$-closed, we have that ${\rm int}_\tau(F_{n_0})$
is $(\sigma,\tau)$-open, so
since $(\sigma,\tau) =\sigma$, it is $\sigma$-open. Thus, ${\rm int}_\sigma(F_{n_0})\not= \emptyset$ contradicting the assumption on
the sequence $(F_n)$.
Our second claim is that for each $x\in X$, each $\tau$-neighborhood of $x$ is $\sigma$-dense in a $\sigma$-neighborhood of $x$.
Indeed, let $V$ be a $\tau$-open set containing $x$. Then ${\rm cl}_\sigma(V)$ is $\sigma$-closed and, therefore, ${\rm int}_\tau({\rm cl}_\sigma(V))$ is
$(\sigma,\tau)$-open and so $\sigma$-open since $(\sigma,\tau)= \sigma$. We clearly have
\[
x\in V\subseteq {\rm int}_\tau({\rm cl}_\sigma(V))
\]
and $V$ is $\sigma$-dense in ${\rm int}_\tau({\rm cl}_\sigma(V))$. It follows that ${\rm int}_\tau({\rm cl}_\sigma(V))$ is a $\sigma$-neighborhood of $x$, in which
$V$ is $\sigma$-dense.
Thirdly, we observe that, by assumption, each $x\in X$ has a $\tau$-neighborhood basis consisting of sets that have the Baire property with respect to $\sigma$.
It follows immediately, from the three claims above, that for each $x\in X$, each $\tau$-neighborhood of $x$ is $\sigma$-comeager in a $\sigma$-neighborhood
of $x$. The first claim also implies that the topology $\sigma$ is Baire.
The above observation implies the conclusion of the theorem by Lemma~\ref{L:toe} applied to ${\rm id}_X\colon (X, \tau)\to (X, \sigma)$.
\end{proof}
If $(\tau_\xi)_\xi$ is a filtration from $\sigma$ to $\tau$, an intuition behind condition \eqref{E:intap2} is that it
tries to ensure that $\tau_{\xi+1}$ is substantially closer to $\tau$ than $\tau_\xi$, unless $\tau_\xi$ is already equal to $\tau$.
Corollary~\ref{C:tst}(ii) below resonates with this intuition. Proposition~\ref{P:slo}(ii) suggests regarding the smallest $\xi$ as in
the conclusion of Corollary~\ref{C:tst}(ii) as an ordinal valued ``distance" from $\sigma$ to $\tau$.
Recall that the family of {\bf C-sets} with respect to a topology is the smallest $\sigma$-algebra of sets closed under the Souslin operation and containing all
open sets with respect to this topology; see \cite[Section~29D]{Ke}.
The main point for us is that C-sets have the Baire property even if the given topology is strengthened; see \cite[Corollary~29.14]{Ke}.
\begin{corollary}\label{C:tst}
Let $\sigma\subseteq \tau$ be topologies.
Assume that $\tau$ is regular, Baire, and has a neighborhood basis consisting of sets that are C-sets with respect to $\sigma$.
\begin{enumerate}
\item[(i)] Let $(\tau_\xi)_\xi$ be a filtration from $\sigma$ to $\tau$. If $\tau_{\xi_0} =\tau_{\xi_0+1}$, then $\tau_{\xi_0}= \tau$.
\item[(ii)] There exists an ordinal $\xi$ such that $(\sigma,\tau)_\xi=\tau$.
\end{enumerate}
\end{corollary}
\begin{proof} (i) Let $\xi$ be such that $\tau_\xi=\tau_{\xi+1}$. This equality and Lemma~\ref{L:opb}(ii) give $\tau_\xi=(\tau_\xi, \tau)$.
Now the conclusion follows from Theorem~\ref{T:sts} if we only notice that C-sets with respect to $\sigma$ are also C-sets
with respect to $\tau_\xi$ since $\sigma=\tau_0\subseteq\tau_\xi$ and, therefore, they have the Baire property with respect to $\tau_\xi$.
(ii) Since the topologies $(\sigma,\tau)_\xi$ are defined on the same set $X$ for all ordinals $\xi$, there exists an ordinal $\xi$ such that
$(\sigma,\tau)_\xi = (\sigma,\tau)_{\xi+1}$, and (ii) follows from (i).
\end{proof}
\section{Stabilization at $\tau$ and descriptive set theoretic complexity}\label{S:stdes}
We prove here a more refined version of stabilization. Theorem~\ref{T:stab2} makes a connection with descriptive set theoretic complexity of neighborhood bases.
Note that the assumptions of Theorem~\ref{T:stab2} ensure that Corollary~\ref{C:tst}(i) applies,
but the conclusion of Theorem~\ref{T:stab2} gives an upper estimate on the smallest $\xi_0$ with $\tau_{\xi_0}=\tau$, which we do not get from Corollary~\ref{C:tst}(i).
\begin{theorem}\label{T:stab2}
Let $\sigma\subseteq \tau$ be topologies,
with $\tau$ being regular and Baire. For an ordinal $\alpha\leq \omega_1$,
let $(\tau_\xi)_{\xi\leq\alpha}$ be a filtration from $\sigma$ to $\tau$, with $\tau_\xi$ metrizable, for $\xi<\alpha$, and $\tau_\alpha$ Baire.
If $\tau$ has a neighborhood basis consisting of sets in $\bigcup_{\xi<\alpha}{\mathbf \Pi}^0_{1+\xi}$ with respect to $\sigma$, then
$\tau_\alpha=\tau$.
\end{theorem}
\begin{remark}
{\bf 1.} We emphasize that in Theorem~\ref{T:stab2} we do not make any separability assumptions.
{\bf 2.} One can relax the assumption of metrizability but with no apparent gain in applicability;
it suffices to assume that $\tau_\xi$ are paracompact and that sets that are $\tau_\xi$-closed are
intersections of countably many sets that are $\tau_\xi$-open, for all $\xi<\alpha$.
{\bf 3.} When $\alpha=\omega_1$, then, of course, $\bigcup_{\xi<\alpha}{\mathbf \Pi}^0_{1+\xi}$
is the family of all Borel sets with respect to $\sigma$.
\end{remark}
Fix $(\tau_\xi)_{\xi<\rho}$, a transfinite sequence of topologies fulfilling \eqref{E:cot}.
Let $\alpha<\rho$.
Define {\bf $\alpha$-tame} sets to be the smallest family of subsets of $X$ containing $\tau_\xi$-closed sets for each $\xi<\alpha$
and closed under the following operation. Let $\mathcal U$ be a $\tau_\xi$-discrete family of $\tau_\xi$-open sets, for some $\xi<\alpha$.
Let $F^U$ be an $\alpha$-tame set, for $U\in {\mathcal U}$. Then
\[
\bigcup_{U\in {\mathcal U}} \left(F^U\cap U\right)
\]
is $\alpha$-tame.
The class of $\alpha$-tame sets is needed in the proof of Theorem~\ref{T:stab2} to handle the case $\alpha=\omega_1$.
If $\alpha<\omega_1$, the simpler family of sets
that are $\tau_\xi$-closed for $\xi<\alpha$ suffices. This is reflected in Lemma~\ref{L:fsi}(ii) below.
\begin{lemma}\label{L:fsi}
Let $(\tau_\xi)_{\xi<\rho}$ be a transfinite sequence fulfilling \eqref{E:cot}, and let $\alpha<\rho$.
\begin{enumerate}
\item[(i)] If $\tau_\xi$ is metrizable, for each $\xi<\alpha$, then each $\alpha$-tame set is a countable union of $\tau_\alpha$-closed sets.
\item[(ii)] If $\alpha<\omega_1$ and $\tau_\xi$ is metrizable, for each $\xi<\alpha$, then each $\alpha$-tame set is a countable union of $\tau_\xi$-closed sets with
$\xi<\alpha$. That is, for each $\alpha$-tame set $F$, there exist sets $F_n$, $n\in {\mathbb N}$, such that $F_n$ is $\tau_{\xi_n}$-closed, for some $\xi_n<\alpha$,
and $F= \bigcup_n F_n$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove point (i). It suffices to show that the family of countable unions of $\tau_\alpha$-closed sets is closed under the operation
in the definition of $\alpha$-tame sets. Let $\mathcal U$ be a $\tau_\xi$-discrete family of $\tau_\xi$-open sets, for some
$\xi<\alpha$, and let $F_k^U$, for $U\in {\mathcal U}$, $k\in {\mathbb N}$,
be $\tau_\alpha$-closed sets. We need to see that
\begin{equation}\label{E:unon}
\bigcup_{U\in {\mathcal U}} \left(\left(\bigcup_k F_k^U\right)\cap U\right)
\end{equation}
is a countable union of $\tau_\alpha$-closed sets. Recall that each $\tau_\xi$-open set is a countable union of $\tau_\xi$-closed sets. So, for each
$U\in {\mathcal U}$, we can fix $\tau_\xi$-closed sets $H^U_n$, $n\in {\mathbb N}$, such that $U=\bigcup_n H^U_n$. Thus, the set in \eqref{E:unon}
can be represented as
\begin{equation}\label{E:unond}
\bigcup_k\bigcup_n \left( \bigcup_{U\in {\mathcal U}} \left(F^U_k\cap H^U_n\right)\right)
\end{equation}
and $\tau_\xi$-discreteness of $\mathcal U$, which implies $\tau_\alpha$-discreteness of $\mathcal U$, ensures that the sets
$\bigcup_{U\in {\mathcal U}} \left(F^U_k\cap H^U_n\right)$ are $\tau_\alpha$-closed.
The proof of (ii) is similar to the proof of (i). With the notation (${\mathcal U},\, \xi,\, F^U_k$) as above, we assume that for each $U\in {\mathcal U}$ and $k\in {\mathbb N}$,
the set $F_k^U$ is $\tau_{\gamma^U_k}$-closed for some $\gamma^U_k<\alpha$. Working with formula \eqref{E:unond}, we see that
\begin{equation}\label{E:remi}
\bigcup_{U\in {\mathcal U}} \left(F^U_k\cap H^U_n\right) = \bigcup_{\gamma<\alpha} \bigcup \{ F^U_k\cap H^U_n\mid \gamma^U_k=\gamma\}.
\end{equation}
Now, the first union on the right hand side of \eqref{E:remi} is countable and, by $\tau_\xi$-discreteness of $\mathcal U$, we see that
\[
\bigcup \{ F^U_k\cap H^U_n\mid \gamma^U_k=\gamma\}
\]
is $\tau_{\max(\xi, \gamma)}$-closed. Since $\max(\xi, \gamma)<\alpha$, point (ii) and the lemma follow.
\end{proof}
We say that $A\subseteq X$ is {\bf $\alpha$-solid} if for each countable family $\mathcal F$ of $\alpha$-tame sets with
$\bigcup {\mathcal F}$ containing a non-empty relatively $\tau_\alpha$-open subset of $A$, we have
${\rm int}_{\tau_\alpha}(F)\not= \emptyset$ for some $F\in {\mathcal F}$.
We call a set $A\subseteq X$ {\bf $\alpha$-slight} if there exists a countable family $\mathcal F$ with $A\subseteq \bigcup {\mathcal F}$ and such that
each $F\in {\mathcal F}$ is $\alpha$-tame and ${\rm int}_{\tau_\alpha}(F) =\emptyset$.
We register the following lemma that follows directly from the definitions.
\begin{lemma}\label{L:obv}
Let $(\tau_\xi)_{\xi<\rho}$ be a transfinite sequence fulfilling (1), and let $\alpha<\rho$.
A set is $\alpha$-solid if and only if no non-empty relatively $\tau_\alpha$-open subset of it is $\alpha$-slight.
\end{lemma}
Lemma~\ref{L:onsl} below contains basic properties of $\alpha$-slight sets.
\begin{lemma}\label{L:onsl}
Let $(\tau_\xi)_{\xi<\rho}$ be a transfinite sequence fulfilling (1), and let $\alpha<\rho$.
\begin{enumerate}
\item[(i)] The empty set is $\alpha$-slight.
\item[(ii)] If $\tau_\xi$ is metrizable, for each $\xi<\alpha$, then $\alpha$-slight sets are $\tau_\alpha$-meager.
\item[(iii)] Countable unions of $\alpha$-slight sets are $\alpha$-slight.
\item[(iv)] Assume $\tau_\xi$ is metrizable, for each $\xi<\alpha$.
Let $A\subseteq X$. Assume that for some $\xi<\alpha$, there is a family $\mathcal U$ of $\tau_\xi$-open sets such that $A\subseteq \bigcup {\mathcal U}$ and
$A\cap U$ is $\alpha$-slight for each $U\in {\mathcal U}$. Then $A$ is $\alpha$-slight.
\item[(v)] Assume that $\alpha<\omega_1$ and $\tau_\xi$ is metrizable, for each $\xi<\alpha$.
Let $A\subseteq X$. Then $A$ is $\alpha$-slight if and only if there is a countable family $\mathcal F$ such that $A\subseteq \bigcup {\mathcal F}$ and
each $F\in {\mathcal F}$ is $\tau_\xi$-closed, for some $\xi<\alpha$ depending on $F$, and ${\rm int}_{\tau_\alpha}(F)=\emptyset$.
\end{enumerate}
\end{lemma}
\begin{proof} Points (i) and (iii) are obvious, with point (i) for $\alpha=0$ being true due to the set theoretic convention that
the union of an empty family of sets is the empty set. Point (ii) is immediate from Lemma~\ref{L:fsi}(i).
We show (iv).
By Stone's Theorem \cite[Theorem~4.4.1]{En}
each family of $\tau_\xi$-open sets has a refinement that is a $\sigma$-discrete,
with respect to $\tau_\xi$, family of $\tau_\xi$-open sets.
Therefore, there are $\tau_\xi$-discrete families ${\mathcal U}_n$, $n\in {\mathbb N}$, of $\tau_\xi$-open sets such that, given $n$,
the set $A\cap U$ is $\alpha$-slight for each $U\in {\mathcal U}_n$ and
the family $\bigcup_n {\mathcal U}_n$ covers $A$. Thus,
by (iii), it suffices to show the following statement: if $\mathcal U$ is a $\tau_\xi$-discrete family of $\tau_\xi$-open sets
covering $A$ such that $A\cap U$ is $\alpha$-slight, for each $U\in {\mathcal U}$, then $A$ is $\alpha$-slight.
Now, for $U\in {\mathcal U}$, we can find a sequence $F_k^U$, $k\in {\mathbb N}$, of $\alpha$-tame sets
such that $A\cap U \subseteq \bigcup_k F^U_k$ and ${\rm int}_{\tau_\alpha}( F^U_k)=\emptyset$ for each $k$.
By $\tau_\xi$-discreteness of ${\mathcal U}$, for each $k\in {\mathbb N}$, the set
\[
E_k= \bigcup \{ F^U_k\cap U \mid U\in {\mathcal U} \}
\]
is $\alpha$-tame. Again, using $\tau_\xi$-discreteness of $\mathcal U$, we see that, for each $k$, $E_k$
has empty interior with respect to $\tau_\alpha$. Since $A \subseteq \bigcup_k E_k$, we see that
$A$ is $\alpha$-slight.
To see (v), note first that the direction $\Leftarrow$ is obvious since for each $\xi<\alpha$, each $\tau_\xi$-closed set is $\alpha$-tame. The
direction $\Rightarrow$ follows from Lemma~\ref{L:fsi}(ii).
\end{proof}
We record a reformulation of Lemma~\ref{L:onsl}(iv) that we will need later in the paper.
\begin{lemma}\label{L:onso}
Let $(\tau_\xi)_{\xi<\rho}$ be a transfinite sequence fulfilling (1), and let $\alpha<\rho$.
Assume that $\alpha<\omega_1$ and $\tau_\xi$ is metrizable, for each $\xi<\alpha$.
Then $A\subseteq X$ is $\alpha$-solid if and only if
for each countable family $\mathcal F$ of sets such that each $F\in {\mathcal F}$ is $\tau_\xi$-closed, for some $\xi<\alpha$ depending on $F$, and
$\bigcup {\mathcal F}$ contains a non-empty relatively $\tau_\alpha$-open subset of $A$, we have
${\rm int}_{\tau_\alpha}(F)\not= \emptyset$ for some $F\in {\mathcal F}$.
\end{lemma}
\begin{proof} The lemma follows from Lemmas~\ref{L:onsl}(iv) and \ref{L:obv}.
\end{proof}
We say that $(\tau_\xi)_{\xi<\rho}$ is a {\bf weak filtration from $\sigma$ to $\tau$} provided (1) holds and, for each $\alpha<\rho$,
if $F$ is $\tau_\xi$-closed for some $\xi<\alpha$, then
\begin{equation}\label{E:intap}
{\rm int}_{\tau_\alpha}(F) \hbox{ is $\tau$-dense in } {\rm int}_{\tau}(F).
\end{equation}
It is clear that each filtration is a weak filtration.
\begin{lemma}\label{L:frth}
Let $(\tau_\xi)_{\xi<\rho}$ be a weak filtration from $\sigma$ to $\tau$.
\begin{enumerate}
\item[(i)] If $\alpha<\rho$ and $F$ is $\alpha$-tame, then
${\rm int}_{\tau_\alpha}(F) \hbox{ is $\tau$-dense in } {\rm int}_{\tau}(F)$.
\item[(ii)] If $\alpha\leq\beta<\rho$, then $\beta$-solid sets are $\alpha$-solid.
\end{enumerate}
\end{lemma}
\begin{proof} (i) We consider the family
\[
{\mathcal F} = \{ F\mid F\subseteq X,\, {\rm int}_{\tau_\alpha}(F)\hbox{ is $\tau$-dense in }{\rm int}_{\tau}(F)\}.
\]
We need to show that $\alpha$-tame sets are included in $\mathcal F$. Since $(\tau_\xi)_{\xi<\rho}$ is a weak filtration, $\mathcal F$
contains all $\tau_\xi$-closed sets for all $\xi<\alpha$. It remains to see that $\mathcal F$ is closed under the operation in the definition
of $\alpha$-tame sets.
Let $\mathcal U$ be a $\tau_\xi$-discrete family of $\tau_\xi$-open sets, for some $\xi<\alpha$. Let $F^U$ be sets in $\mathcal F$, for $U\in {\mathcal U}$.
We need to see that
\[
\bigcup_{U\in {\mathcal U}} \left(F^U\cap U\right)\in {\mathcal F}.
\]
Since the family $\mathcal U$ is $\tau_\xi$-discrete, so $\tau$-discrete, and sets in $\mathcal U$ are $\tau_\xi$-open, so $\tau$-open, we have
\[
{\rm int}_\tau\left( \bigcup_{U\in {\mathcal U}} \left(F^U\cap U\right)\right) = \bigcup_{U\in {\mathcal U}} {\rm int}_\tau\left(F^U\cap U\right)=
\bigcup_{U\in {\mathcal U}} \left( {\rm int}_\tau\left(F^U\right)\cap U\right).
\]
Since each $U\in {\mathcal U}$ is also $\tau_\alpha$-open, we have
\[
{\rm int}_{\tau_\alpha} \left(F^U\cap U\right) = {\rm int}_{\tau_\alpha} \left(F^U\right) \cap U.
\]
It follows that it is enough to show that ${\rm int}_{\tau_\alpha} \left(F^U\right) \cap U$ is $\tau$-dense in ${\rm int}_\tau \left(F^U\right) \cap U$,
for each $U\in {\mathcal U}$. But this is clear since, by assumption, ${\rm int}_{\tau_\alpha} \left(F^U\right)$ is $\tau$-dense in ${\rm int}_\tau \left(F^U\right)$ and $U$, being $\tau_\xi$-open, is $\tau$-open.
(ii) We make two observations. First, clearly, $\alpha$-tame sets are $\beta$-tame.
Second, for $F\subseteq X$, ${\rm int}_{\tau_\beta}(F)\not=\emptyset$ trivially implies that
${\rm int}_{\tau}(F)\not= \emptyset$. If now $F$ is $\alpha$-tame, then
${\rm int}_{\tau}(F)\not= \emptyset$ implies ${\rm int}_{\tau_\alpha}(F) \not=\emptyset$, by (i).
So for $\alpha$-tame sets, ${\rm int}_{\tau_\beta}(F)\not=\emptyset$ implies ${\rm int}_{\tau_\alpha}(F) \not=\emptyset$.
These two observations give (ii).
\end{proof}
The statement of the following technical result is more precise than what is needed in this section, but this more refined
version will be used in Section~\ref{S:eqr}. Its proof extends the arguments in \cite[Lemma~4.1]{So2}. There are also analogies with \cite[Lemmas 8 and 9]{Lo}.
\begin{lemma}\label{L:stab}
Let $\alpha\leq\omega_1$. Assume that $(\tau_\xi)_{\xi\leq\alpha}$ is a weak filtration from $\sigma$, with $\tau_\xi$ metrizable for $\xi<\alpha$.
If $A\subseteq X$ is ${\mathbf \Pi}^0_{1+\xi}$ with respect to $\sigma$, for some $\xi\leq\alpha$, $\xi<\omega_1$, and $B\subseteq A$ is $\alpha$-solid,
then ${\rm cl}_{\tau_\xi}(B)\setminus A$ is $\tau_\alpha$-meager.
\end{lemma}
Until the end of the proof of Lemma~\ref{L:stab}, which will be completed after the auxiliary Lemma~\ref{L:slal}, we fix $\alpha$ and $(\tau_\xi)_{\xi\leq\alpha}$ as in the statement.
For $\xi\leq\alpha$, put
\begin{equation}\label{E:shclin}
{\rm cl}_\xi = {\rm cl}_{\tau_\xi} \;\hbox{ and }\;{\rm int}_\xi = {\rm int}_{\tau_\xi}.
\end{equation}
Since $\alpha$ is fixed, we write {\bf slight} for $\alpha$-slight.
\begin{lemma}\label{L:slal}
If $A\subseteq X$ is ${\mathbf \Pi}^0_{1+\xi}$ with respect to $\sigma$, for $\xi\leq \alpha$, $\xi<\omega_1$, then there exists a $\tau_\xi$-closed set $F$ such that
\begin{enumerate}
\item[(i)] if $\xi<\alpha$, then $(A\setminus F)\cup (F\setminus A)$ is slight;
\item[(ii)] if $\xi=\alpha$, then $F\setminus A$ is $\tau_\alpha$-meager and $A\setminus F$ is the union of $\tau_\alpha$-open sets $U$ such that $U\cap A$
is slight.
\end{enumerate}
\end{lemma}
\begin{proof} First we make the following technical observation.
\noindent {\em Let $\gamma<\xi\leq\alpha$, and let $A, F_1, F_2\subseteq X$ be such that $F_1$ is $\tau_\gamma$-closed,
$F_2$ is $\tau_\xi$-closed, $A\cap F_1$ is slight, and $A\cap V$ is not slight for each $\tau_\xi$-open set $V$ with $V\cap F_2\not=\emptyset$. Then
\[
{\rm int}_\alpha(F_1\cap F_2)=\emptyset.
\]}
To prove this observation,
set $U={\rm int}_\alpha(F_1\cap F_2)$ and assume that $U$ is not empty.
Note that $U$ is $\tau_\alpha$-open and $U\subseteq F_1$. Since $\gamma<\xi\leq \alpha$ and $F_1$ is
$\tau_\gamma$-closed, by assumption \eqref{E:intap}, there exists a $\tau_\xi$-open set $V$ with
\begin{equation}\label{E:uv}
V\subseteq F_1 \;\hbox{ and }\; V\cap U\not=\emptyset.
\end{equation}
Since $A\cap F_1$ is assumed to be slight, by the first part of \eqref{E:uv},
$A\cap V$ is slight, which implies by our assumption on $F_2$ that $V\cap F_2=\emptyset$.
Now, by the second part of \eqref{E:uv}, we get $U\not\subseteq F_2$, which leads to a contradiction with the definition
of $U$.
For $A\subseteq X$ and $\xi\leq \alpha$, let
\[
c_\xi(A) = X\setminus \bigcup \{ U\mid A\cap U\hbox{ is slight and } U \hbox{ is $\tau_\xi$-open}\}.
\]
We show that $F= c_\xi(A)$ fulfills the conclusion of the lemma.
Obviously $c_\xi(A)$ is $\tau_\xi$-closed. It is clear that $A\setminus c_\alpha(A)$ fulfills the second part of point (ii). By Lemma~\ref{L:onsl}(iv),
if $\xi<\alpha$, then $A\setminus c_\xi(A)$ is slight.
It remains to see that if $A$ is ${\mathbf \Pi}^0_{1+\xi}$ with respect to $\tau_0$,
then $c_\xi(A)\setminus A$ is slight, if $\xi<\alpha$, and $c_\xi(A)\setminus A$ is $\tau_\alpha$-meager,
if $\xi=\alpha$.
This is done by induction on $\xi$. For $\xi=0$, $A$ is ${\mathbf \Pi}^0_1$ with respect to $\tau_0$, so $c_0(A)\subseteq A$ by Lemma~\ref{L:onsl}(i).
Now $c_0(A)\setminus A$ being empty is $\tau_\alpha$-meager if $\alpha=0$ and is slight if $\alpha>0$ by Lemma~\ref{L:onsl}(i).
Assume we have the conclusion for all $\gamma<\xi$. Let $A$ be in ${\mathbf \Pi}^0_{1+\xi}$ with $\xi>0$. There exists a sequence
$B_n$, $n\in {\mathbb N}$, with $B_n\in {\mathbf \Pi}^0_{\gamma_n}$,
for some $\gamma_n<\xi$,
with $X\setminus A = \bigcup_n B_n$. We have
\[
c_\xi(A)\setminus A = c_\xi(A)\cap \bigcup_n B_n \subseteq \bigcup_n \bigl(c_\xi(A) \cap c_{\gamma_n}(B_n)\bigr) \cup \bigl( B_n\setminus c_{\gamma_n}(B_n)\bigr).
\]
By what we proved above, the set $B_n\setminus c_{\gamma_n}(B_n)$ is slight for each $n$, so also $\tau_\alpha$-meager, by Lemma~\ref{L:onsl}(ii);
thus, to prove the conclusion of the lemma, by Lemma~\ref{L:onsl}(iii),
it suffices to show that, for each $n$, $c_\xi(A) \cap c_{\gamma_n}(B_n)$ is slight, if $\xi<\alpha$, and is $\tau_\alpha$-meager, if $\xi=\alpha$.
Both these goals will be achieved
if we prove that
\[
{\rm int}_\alpha (c_\xi(A) \cap c_{\gamma_n}(B_n)) =\emptyset.
\]
This equality will follow from the observation at the beginning of the proof if we show that $A\cap c_{\gamma_n}(B_n)$ is slight and $A\cap V$
is not slight for any $\tau_\xi$-open set $V$
with $V\cap c_\xi(A)\not=\emptyset$. The second condition holds by the definition of $c_\xi(A)$. To see that $A\cap c_{\gamma_n}(B_n)$ is slight, note that
\[
A\cap c_{\gamma_n}(B_n)\subseteq c_{\gamma_n}(B_n)\setminus B_n,
\]
and by our inductive assumption $c_{\gamma_n}(B_n)\setminus B_n$ is slight.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{L:stab}]
Let $A$ be ${\mathbf \Pi}^0_{1+\xi}$, $\xi\leq\alpha$, and let $B\subseteq A$ be $\alpha$-solid. By Lemmas~\ref{L:slal} and \ref{L:onsl}(ii),
independently of whether
$\xi<\alpha$ or $\xi=\alpha$, there exists a $\tau_\xi$-closed
set $F$ such that $F\setminus A$ is $\tau_\alpha$-meager and
$A\setminus F$ is covered by $\tau_\alpha$-open sets $U\subseteq X\setminus F$ with $A\cap U$ slight.
Note that this last statement together with the assumption that $B$ is $\alpha$-solid
immediately imply that $B\setminus F$ is empty.
Thus, we have $B\subseteq F$. Since $F$ is $\tau_\xi$-closed, it follows that ${\rm cl}_\xi(B)\subseteq F$, which gives
\[
{\rm cl}_\xi(B)\setminus A\subseteq F\setminus A.
\]
Since $F\setminus A$ is $\tau_\alpha$-meager, we have that ${\rm cl}_\xi(B)\setminus A$ is $\tau_\alpha$-meager, as required.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T:stab2}]
We continue with our convention \eqref{E:shclin}.
First, we note that each non-empty $\tau$-open set $B$ is $\alpha$-solid. Indeed, let each $F_n$, $n\in {\mathbb N}$, be $\alpha$-tame.
Assume that $\bigcup_n F_n$
contains a non-empty relatively $\tau_\alpha$-open subset of $B$. So $\bigcup_nF_n$ contains a non-empty $\tau$-open set.
Since, by Lemma~\ref{L:fsi}(i), each $F_n$ is a countable union of $\tau$-closed sets
and $\tau$ is Baire, we have ${\rm int}_{\tau}(F_n)\not= \emptyset$ for some $n$. By Lemma~\ref{L:frth}(i),
we have ${\rm int}_{\alpha}(F_n)\not= \emptyset$. Thus, $B$ is $\alpha$-solid.
Now we show that if $A\subseteq X$ is a $\tau$-neighborhood of $x$,
then $A$ is $\tau_\alpha$-comeager in a $\tau_\alpha$-neighborhood of $x$. We can assume that $A$ is ${\mathbf \Pi}^0_{1+\xi}$ for some $\xi<\alpha$.
Note that $B={\rm int}_{\tau}(A)$ is $\tau$-open and $x\in B$. Since, by what was proved above,
$B$ is $\alpha$-solid, by Lemma~\ref{L:stab}, we have that ${\rm cl}_\xi(B)\setminus A$ is $\tau_\alpha$-meager. Put $F= {\rm cl}_\xi(B)$ and note that $F$
is $\tau_\xi$-closed. By assumption \eqref{E:intap2},
we get
\[
{\rm int}_\alpha(F) = {\rm int}_{\tau}(F)\supseteq B\ni x.
\]
Clearly we also have
\[
{\rm int}_\alpha(F) \setminus A\subseteq F\setminus A,
\]
and this last set is $\tau_\alpha$-meager. Thus, ${\rm int}_\alpha(F)$ is the desired $\tau_\alpha$-neighborhood of $x$.
The above observation implies the conclusion of the theorem by Lemma~\ref{L:toe}
applied to ${\rm id}_X\colon (X, \tau)\to (X, \tau_{\alpha})$.
\end{proof}
Recall the notation $\alpha\oplus 1$ from \eqref{E:oplu}.
\begin{corollary}\label{C:stom}
Let $\sigma\subseteq \tau$ be topologies,
with $\tau$ being regular and Baire. For an ordinal $\alpha\leq\omega_1$, let
$(\tau_\xi)_{\xi<\alpha\oplus 1}$ be a filtration from $\sigma$ to $\tau$, with $\tau_\xi$ completely metrizable for each $\xi<\alpha\oplus 1$.
If $\tau$ has a neighborhood basis consisting of sets that are in
$\bigcup_{\xi<\alpha}{\mathbf \Pi}^0_{1+\xi}$ with respect to $\sigma$, then $\tau=\bigvee_{\xi<\alpha}\tau_\xi$.
\end{corollary}
\begin{proof} If $\alpha$ is a successor ordinal, then $\alpha\oplus 1 =\alpha+1$, and the conclusion is immediate from Theorem~\ref{T:stab2}.
Assume now that $\alpha$ is limit. (The corollary is tautologically true for $\alpha=0$.) We then have $\alpha\oplus 1 =\alpha$. Let
\[
\tau_{\alpha}=\bigvee_{\xi<\alpha}\tau_\xi.
\]
Note that $(\tau_\xi)_{\xi\leq\alpha}$ is a filtration from $\sigma$ to $\tau$.
If we show that $\tau_{\alpha}$ is Baire, Theorem~\ref{T:stab2} will imply that $\tau=\tau_{\alpha}$ as required.
Therefore, it suffices to check the following claim.
\begin{claim}
Let $T$ be a set of completely metrizable topologies linearly ordered by inclusion. Then $\bigvee T$ is a Baire topology.
\end{claim}
\noindent {\em Proof of Claim.} Let $F_i$, $i\in {\mathbb N}$, be a sequence of sets that are nowhere dense with respect to $\bigvee T$, and let $U$ be a non-empty
set that is open with respect to $\bigvee T$. Since $T$ is linearly ordered by inclusion, we can assume that $U\in t$ for some $t\in T$.
We inductively construct
topologies $t_i\in T$, $i\in {\mathbb N}$, with a complete metric $d_i\leq 1$ inducing $t_i$. We also construct non-empty sets $U_i\in t_i$. All this is arranged so that
$U_i\cap F_i=\emptyset$, $d_i{\rm -diam}(U_j)\leq \frac{1}{j+1}$, $t\subseteq t_i\subseteq t_j$, ${\rm cl}_{t_i}(U_j)\subseteq U_i\subseteq U$ for all natural numbers $i<j$.
The construction of these objects is easy using the fact that $T$ is linearly ordered by inclusion.
Consider now the topology $t_\infty=\bigvee_{i\in {\mathbb N}} t_i$, which is completely metrizable as witnessed by the metric $d_\infty = \sum_i 2^{-i}d_i$. Note that the
sets ${\rm cl}_{t_\infty}(U_i)$ are non-empty, $t_\infty$-closed, decreasing, and their $d_\infty$-diameters tend to $0$. It follows that
their intersection consists of precisely one point $x_\infty$. For each $i$ we have
\[
x_\infty\in {\rm cl}_{t_\infty}(U_{i+1})\subseteq {\rm cl}_{t_i}(U_{i+1})\subseteq U_i.
\]
Thus, $x_\infty\in U\setminus \bigcup_{i\in {\mathbb N}} F_i$.
We just showed that the complement of $\bigcup_{i \in {\mathbb N}}F_i$ is dense with respect to $\bigvee T$, and the claim follows.
\end{proof}
\section{Upper approximations of equivalence relations}\label{S:eqr}
Fix $(\tau_\xi)_{\xi<\rho}$, a transfinite sequence of topologies as in \eqref{E:cot}.
Let $E$ be an equivalence relation on $X$. There exists a natural way of producing
a transfinite sequence of upper approximations of $E$ using $(\tau_\xi)_{\xi<\rho}$. For each $\xi<\rho$ define the equivalence relation $E_\xi$ on
$X$ by letting
\[
xE_\xi y\;\hbox{ if and only if }\; {\rm cl}_{\tau_\xi}([x]_E)= {\rm cl}_{\tau_\xi}([y]_E).
\]
Note that, for $\xi\leq\eta<\rho$, each $\tau_\xi$-closed set is also $\tau_\eta$-closed, whence ${\rm cl}_{\tau_\xi}(A)= {\rm cl}_{\tau_\xi}\bigl({\rm cl}_{\tau_\eta}(A)\bigr)$ for every $A\subseteq X$; this gives $E_\eta\subseteq E_\xi$, while $E\subseteq E_\xi$ is immediate. Thus,
\begin{equation}\label{E:down}
E_0\supseteq E_1\supseteq\cdots \supseteq E_\xi\supseteq \cdots \supseteq E.
\end{equation}
The main question is when the transfinite sequence of equivalence relations in \eqref{E:down} stabilizes at $E$. Theorem~\ref{T:eqte} gives an answer.
Before we state it we need a definition. Let $(\tau_\xi)_{\xi<\rho}$ be a transfinite sequence of topologies with \eqref{E:cot}.
Recall the definition of $\alpha$-solid for $\alpha<\rho$ from Section~\ref{S:stdes}. We call a set {\bf solid} if it is $\alpha$-solid for each $\alpha<\rho$.
(For more on this notion, see Remark 2 following the statement of Theorem~\ref{T:eqte}.)
Recall notation $\alpha\oplus 1$ from \eqref{E:oplu}.
\begin{theorem}\label{T:eqte}
Let $(\tau_\xi)_{\xi<\alpha\oplus 1}$, $\alpha \leq\omega_1$, be a filtration from $\sigma$.
Assume $\tau_\xi$ is completely metrizable for each $\xi<\alpha$.
Let $E$ be an equivalence relation on $X$ whose equivalence classes are solid.
If all equivalence classes of $E$ are in $\bigcup_{\xi<\alpha} {\bf \Pi}^0_{1+\xi}$ with respect to $\sigma$,
then $E=\bigcap_{\xi<\alpha}E_\xi$.
\end{theorem}
\begin{remark}
We keep the notation and assumptions as in Theorem~\ref{T:eqte}.
{\bf 1.} If $\alpha$ is a successor ordinal, say $\alpha=\beta+1$, then the conclusion of Theorem~\ref{T:eqte} reads: if all equivalence classes of $E$ are in
${\bf \Pi}^0_{1+\beta}$ with respect to $\sigma$,
then $E=E_\beta$.
{\bf 2.} We point out here that being solid, under the assumptions of Theorem~\ref{T:eqte}, can be phrased so that it involves
only sets that are $\tau_\xi$-closed for appropriate $\xi$ rather than the more complicated $\alpha$-tame sets.
If $\alpha$ is a successor ordinal, then $A$ being solid means $\alpha$-solid. So, under the assumption that $\tau_\xi$ is metrizable
for each $\xi<\alpha$, by Lemma~\ref{L:onso}, $A$ being solid
is equivalent to the following condition: for each countable family $\mathcal F$ with every $F\in {\mathcal F}$ being
$\tau_\xi$-closed, where $\xi<\alpha$ depends on $F$, and
with $\bigcup {\mathcal F}$ containing a non-empty relatively
$\tau_\alpha$-open subset of $A$, we have ${\rm int}_{\tau_\alpha}(F)\not= \emptyset$ for some $F\in {\mathcal F}$.
In the same spirit, if $\alpha$ is limit, then $A$ being solid means $\alpha'$-solid for each $\alpha'<\alpha$. So, again, under the assumption that $\tau_\xi$ is metrizable
for each $\xi<\alpha$, by Lemma~\ref{L:onso}, $A$ being solid
is equivalent to the following condition: for each $\alpha'<\alpha$, for each countable family $\mathcal F$ with every $F\in {\mathcal F}$ being
$\tau_\xi$-closed, where $\xi<\alpha'$ depends on $F$, and
with $\bigcup {\mathcal F}$ containing a non-empty relatively
$\tau_{\alpha'}$-open subset of $A$, we have ${\rm int}_{\tau_{\alpha'}}(F)\not= \emptyset$ for some $F\in {\mathcal F}$.
\end{remark}
We will need a refinement of a special case of Lemma~\ref{L:stab}. Our gain consists of getting ${\rm cl}_{\tau_\alpha}(A)\setminus A$ to be relatively $\tau_\alpha$-meager
in ${\rm cl}_{\tau_\alpha}(A)$ rather than just $\tau_\alpha$-meager. In exchange, we have to make a stronger assumption
that $A$ be $(\alpha+1)$-solid rather than $\alpha$-solid
(see Lemma~\ref{L:frth}(ii)). Recall the definition of weak filtration from Section~\ref{S:stdes}.
The proof of the lemma below is the place where we need to use weak filtrations
instead of filtrations.
\begin{lemma}\label{L:stabre}
Let $\alpha<\omega_1$.
Let $(\tau_\xi)_{\xi\leq\alpha+1}$ be a filtration from $\sigma$, with $\tau_\xi$ metrizable for each $\xi\leq \alpha$.
If $A$ is $(\alpha+1)$-solid and ${\bf \Pi}^0_{1+\alpha}$ with respect to $\sigma$,
then ${\rm cl}_{\tau_{\alpha}}(A)\setminus A$ is relatively $\tau_\alpha$-meager in ${\rm cl}_{\tau_\alpha}(A)$.
\end{lemma}
\begin{proof}
The conclusion will follow from Lemma~\ref{L:stab}. Put $X' = {\rm cl}_{\tau_\alpha}(A)$, and let $\tau_\xi'$ be $\tau_\xi$ restricted to $X'$.
Note that $(\tau_\xi')_{\xi\leq\alpha}$ is a transfinite sequence of topologies on $X'$ fulfilling \eqref{E:cot}
with $\tau_\xi'$ metrizable for $\xi\leq\alpha$.
First, we check that $A$ being $(\alpha+1)$-solid with respect to $(\tau_\xi)_{\xi\leq \alpha+1}$ implies that it is $\alpha$-solid with respect to $(\tau'_\xi)_{\xi\leq\alpha}$.
By Lemma~\ref{L:onso},
it suffices to check that for every sequence $(F_n')$ such that $F_n'$ is $\tau_{\xi_n}'$-closed, for some $\xi_n<\alpha$, and $\bigcup_n F_n'$ contains
a non-empty relatively $\tau_\alpha'$-open subset of $A$, there is $n$ such that ${\rm int}_{\tau_\alpha'}(F_n')\not=\emptyset$. Let $F_n$ be $\tau_{\xi_n}$-closed with
$F_n'=F_n\cap X'$. Our assumption on $(F_n')$ implies that
$\bigcup_n \left( F_n\cap X'\right)$ contains a non-empty relatively $\tau_\alpha$-open subset of $A$ since $A$
is a subset of $X'$. Now consider the countable family $\{ F_n\cap X'\mid n\in {\mathbb N}\}$.
Since $X'$ is $\tau_\alpha$-closed, the sets $F_n\cap X'$ are $\tau_\alpha$-closed. Since
$A$ is $(\alpha+1)$-solid with respect to $(\tau_\xi)_{\xi\leq\alpha+1}$, there is $n$ such that
\begin{equation}\label{E:lat}
{\rm int}_{\tau_{\alpha+1}}(F_n\cap X')\not=\emptyset.
\end{equation}
Since $(\tau_\xi)_{\xi\leq\alpha+1}$ is a filtration, equation \eqref{E:lat} gives
${\rm int}_{\tau_{\alpha}}(F_n)\cap X'\not=\emptyset$, that is,
\[
{\rm int}_{\tau_{\alpha}'}(F_n\cap X')\not=\emptyset,
\]
as required.
Thus, to reach the desired conclusion by using Lemma~\ref{L:stab} (applied to $A=B$ and $X'$ with $(\tau_\xi')_{\xi\leq\alpha}$),
it suffices to check condition \eqref{E:intap} for $(\tau_\xi')_{\xi\leq\alpha}$, which we now do.
Let $\xi<\beta<\alpha$, and let $F$
be $\tau_\xi$-closed. We need to check that ${\rm int}_{\tau'_\beta}(F\cap X')$ is $\tau_\alpha'$-dense in ${\rm int}_{\tau'_\alpha}(F\cap X')$.
Let
\[
N = {\rm int}_{\tau_{\alpha+1}}(X').
\]
Since $A$ is $(\alpha+1)$-solid, $N$ is $\tau_\alpha$-dense in $X'$. It will therefore suffice to check that
\begin{equation}\label{E:nee}
{\rm int}_{\tau'_\alpha}(F\cap X')\cap N \subseteq {\rm int}_{\tau'_\beta}(F\cap X').
\end{equation}
Observe that
\begin{equation}\label{E:pri}
{\rm int}_{\tau'_\alpha}(F\cap X')\cap N \subseteq {\rm int}_{\tau'_{\alpha+1}}(F\cap X')\cap N,
\end{equation}
and, further, since $N$ is $\tau_{\alpha+1}$-open and included in $X'$, we have
\begin{equation}\label{E:plus}
{\rm int}_{\tau'_{\alpha+1}}(F\cap X')\cap N \subseteq {\rm int}_{\tau_{\alpha +1}}(F\cap N) = {\rm int}_{\tau_{\alpha +1}}(F) \cap N.
\end{equation}
Note that we use $\tau_{\alpha+1}$-openness of $N$ in our verification of the inclusion and the equality in \eqref{E:plus}.
On the other hand, we have
\begin{equation}\label{E:ir}
{\rm int}_{\tau_\beta}(F)\cap X'\subseteq {\rm int}_{\tau'_\beta}(F\cap X').
\end{equation}
By combining \eqref{E:pri}, \eqref{E:plus}, and \eqref{E:ir}, we see that to prove \eqref{E:nee},
it is enough to show
\[
{\rm int}_{\tau_{\alpha +1}}(F)\cap N\subseteq {\rm int}_{\tau_\beta}(F)\cap X'.
\]
Since $N\subseteq X'$, this inclusion immediately follows from ${\rm int}_{\tau_{\alpha +1}}(F)\subseteq {\rm int}_{\tau_\beta}(F)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T:eqte}]
Let $x,y\in X$ be such that $[x]_E$ and $[y]_E$ are ${\bf \Pi}^0_{1+\xi}$ for some $\xi<\alpha$. If
$xE_\xi y$, then ${\rm cl}_\xi([x]_E) = {\rm cl}_\xi([y]_E)$.
Note that $\xi+1<\alpha\oplus 1$. Using this assumption, metrizability of $\tau_\gamma$ for
$\gamma\leq\xi$, and $[x]_E$ and $[y]_E$ being $(\xi+1)$-solid, by Lemma~\ref{L:stabre}, we see that
$[x]_E$ and $[y]_E$ are both $\tau_\xi$-comeager in
${\rm cl}_\xi([x]_E) = {\rm cl}_\xi([y]_E)$. This last set is $\tau_\xi$-closed and $\tau_\xi$ is completely metrizable, so $[x]_E$ and $[y]_E$ intersect; thus, $xEy$.
It follows from the argument above that if each $E$ equivalence class is in the family $\bigcup_{\xi<\alpha}{\bf \Pi}^0_{1+\xi}$ with respect to $\sigma$, then
$\bigcap_{\xi<\alpha}E_\xi \subseteq E$, so $E=\bigcap_{\xi<\alpha}E_\xi$, as required.
\end{proof}
The following corollary is a consequence of Theorem~\ref{T:eqte}, in which the assumption on equivalence classes is phrased in terms of $\tau$. We
emphasize that no separability assumptions are needed.
\begin{corollary}\label{C:ceq}
Let $\sigma\subseteq \tau$ be topologies,
with $\tau$ being Baire. Let $\alpha\leq\omega_1$, and let
$(\tau_\xi)_{\xi<\alpha}$ be a filtration from $\sigma$ to $\tau$, with $\tau_\xi$ completely metrizable for each $\xi<\alpha$.
Assume $E$ is an equivalence relation whose equivalence classes are $\tau$-open.
If all $E$ equivalence classes are in $\bigcup_{\xi<\alpha} {\bf \Pi}^0_{1+\xi}$ with respect to $\sigma$, then $E=\bigcap_{\xi<\alpha}E_\xi$.
\end{corollary}
\begin{remark}
{\bf 1.} Each $E$ equivalence class being $\tau$-open, as in the corollary above, is equivalent to saying that $E$ is a $(\tau\times\tau)$-open subset of $X\times X$.
{\bf 2.} As in Theorem~\ref{T:eqte}, in Corollary~\ref{C:ceq}, if $\alpha<\omega_1$ is a successor,
say $\alpha=\beta+1$, then the conclusion reads: if all equivalence classes of $E$ are in
${\bf \Pi}^0_{1+\beta}$ with respect to $\sigma$,
then $E=E_\beta$.
\end{remark}
\begin{proof}[Proof of Corollary~\ref{C:ceq}] Extend the given filtration to a filtration $(\tau_\xi)_{\xi< \alpha+1}$ by setting $\tau_\alpha=\tau$.
It follows from Theorem~\ref{T:eqte}, that it suffices to check that $E$ equivalence classes are solid. Since $(\tau_\xi)_{\xi< \alpha+1}$ is
a filtration from $\sigma$ to $\tau$, by Lemma~\ref{L:frth}(ii), it suffices to check that $E$ equivalence classes are $\alpha$-solid. This is immediate,
by Lemma~\ref{L:onsl}(ii),
from $\tau$ being Baire and each $E$-class being $\tau$-open.
\end{proof}
\smallskip
\noindent {\bf Acknowledgments.} I would like to thank Assaf Shani for pointing out paper \cite{Dr} to me.
\label{sec1}
The quantity
$$
\zeta(4)=\sum_{k=1}^\infty\frac1{k^4}=\frac{\pi^4}{90}
$$
is a somewhat typical representative of even zeta values\,---\,the values of Riemann's zeta function at positive even integers.
It is shadowed by the far more famous $\zeta(2)=\pi^2/6$, which was a main subject of Euler's resolution of the Basel problem,
and $\zeta(3)$\,---\,an \emph{objet d'\'etude} of Ap\'ery's iconic proof of the irrationality of the latter (and also of $\zeta(2)$) \cite{Ap79,vdP79}.
Though known to be irrational (and transcendental!), $\zeta(4)$ serves as a natural guinea pig for extending Ap\'ery's machinery to other zeta values.
Ap\'ery-type approximations to the number were discovered and rediscovered on several occasions \cite{Co81,So02,Zu03}; however, they were not good enough to draw conclusions about its irrationality.
An unexpected difficulty to control the `true' arithmetic of those rational approximations to $\zeta(4)$ generated further research \cite{KR07,Zu09}, which eventually led to producing sufficient approximations and establishing a new world record for the irrationality measure of $\pi^4$ \cite{MZ20}.
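As a purely numerical aside (playing no role in what follows), the evaluation $\zeta(4)=\pi^4/90$ displayed above is easy to confirm with a few lines of Python; the truncation point $N$ is an arbitrary choice, and the neglected tail is below $\int_N^\infty x^{-4}\,\d x=1/(3N^3)$.

```python
import math

# Partial sum of zeta(4) = sum_{k >= 1} 1/k^4, truncated at N.
# The tail beyond N is below 1/(3*N^3), so N = 10**4 already
# gives about 12 correct digits.
N = 10**4
partial = sum(1.0 / k**4 for k in range(1, N + 1))

print(partial, math.pi**4 / 90)
assert abs(partial - math.pi**4 / 90) < 1e-10
```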
In this note we turn our attention to a rational side of the coin and prove the following two-parametric identity.
\begin{theorem}
\label{th:main}
For integers $n\ge m\ge0$, define two rational functions
\begin{align}
R(t)=R_{n,m}(t)
&=(-1)^m\Bigl(t+\frac n2\Bigr)\frac{(t-n)_m}{m!}\,\frac{(t-2n+m)_{2n-m}}{(2n-m)!}\,\nonumber
\\ &\qquad\times
\frac{(t+n+1)_n}{(t)_{n+1}}\,\frac{(t+n+1)_{2n-m}}{(t)_{2n-m+1}}\,\biggl(\frac{n!}{(t)_{n+1}}\biggr)^2\nonumber
\\ \intertext{and}
\tilde R(t)=\tilde R_{n,m}(t)
&=\frac{n!\,(t-n)_{2n-m}}{(t)_{n+1}(t)_{2n-m+1}}
\sum_{j=0}^n{\binom nj}^2\binom{2n-m+j}n\frac{(t-j)_n}{n!}\,.\label{Equ:RTilde}
\end{align}
Then
\begin{equation}
-\frac13\sum_{\nu=n-m+1}^\infty\frac{\d R(t)}{\d t}\bigg|_{t=\nu}
=\frac16\sum_{\nu=1}^\infty\frac{\d^2\tilde R(t)}{\d t^2}\bigg|_{t=\nu}.
\label{hyp1}
\end{equation}
\end{theorem}
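Before any proof is attempted, the identity \eqref{hyp1} can be probed numerically. The Python/SymPy sketch below is our own illustration and is not part of the argument: it transcribes $R_{n,m}$ and $\tilde R_{n,m}$ using the rising factorial $(x)_k=x(x+1)\cdots(x+k-1)$, differentiates them symbolically, and compares truncations of the two sides; the cutoff $N$ and the tolerance are ad hoc choices justified by the $O(\nu^{-4})$ decay of both summands.

```python
import sympy as sp

t = sp.symbols('t')

def rf(x, k):
    # rising factorial (x)_k = x(x+1)...(x+k-1), k a non-negative integer
    r = sp.Integer(1)
    for i in range(k):
        r *= x + i
    return r

def R(n, m):
    # the rational function R_{n,m}(t), transcribed from the theorem
    return ((-1)**m * (t + sp.Rational(n, 2))
            * rf(t - n, m) / sp.factorial(m)
            * rf(t - 2*n + m, 2*n - m) / sp.factorial(2*n - m)
            * rf(t + n + 1, n) / rf(t, n + 1)
            * rf(t + n + 1, 2*n - m) / rf(t, 2*n - m + 1)
            * (sp.factorial(n) / rf(t, n + 1))**2)

def Rtilde(n, m):
    # the rational function \tilde{R}_{n,m}(t), transcribed from the theorem
    s = sum(sp.binomial(n, j)**2 * sp.binomial(2*n - m + j, n)
            * rf(t - j, n) / sp.factorial(n) for j in range(n + 1))
    return (sp.factorial(n) * rf(t - n, 2*n - m)
            / (rf(t, n + 1) * rf(t, 2*n - m + 1)) * s)

def both_sides(n, m, N=10**5):
    # Truncate both series at nu = N; the summands decay like nu**(-4),
    # so the neglected tails are of order N**(-3).
    dR = sp.lambdify(t, sp.diff(R(n, m), t))
    ddRt = sp.lambdify(t, sp.diff(Rtilde(n, m), t, 2))
    lhs = -sum(dR(v) for v in range(n - m + 1, N + 1)) / 3
    rhs = sum(ddRt(v) for v in range(1, N + 1)) / 6
    return lhs, rhs

lhs, rhs = both_sides(1, 0)
assert abs(lhs - rhs) < 1e-9
```

For instance, for $n=m=0$ one has $R_{0,0}(t)=1/t^3$ and $\tilde R_{0,0}(t)=1/t^2$, and both sides of \eqref{hyp1} reduce to $\zeta(4)$.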
The $m=n$ instance of \eqref{hyp1} was stated as Problem~1 in \cite{Zu09}.
The fact that both sides of \eqref{hyp1} are linear forms in $1$ and $\zeta(4)$ with rational coefficients is verifiable by standard techniques \cite{KR07,Zu03,Zu09} which employ the partial-fraction decomposition of the rational functions.
A remarkable outcome of this identity is the \emph{coincidence} of two different-looking rational approximations to the zeta value.
Such coincidences are often a source of deep algorithmic and analytical developments\,---\,check \cite{EZZ20} for another exploration of this theme (see also \cite{BCS20}).
The main difficulty in establishing equality \eqref{hyp1} (in contrast to tackling, for example, Ap\'ery's sums in~\cite{AperyCA} for $\zeta(3)$) is that both of its sides are not hypergeometric functions but rather \emph{derivatives} of hypergeometric functions. Another issue is that the summation range on the left-hand side is somewhat unnatural.
\section{Symbolic summation}
\label{sec2}
Denote by $Z_l(n,m)$ and $Z_r(n,m)$ the left- and right-hand sides of~\eqref{hyp1}, respectively.
In order to prove the identity~\eqref{hyp1} we proceed as follows.
\smallskip
\noindent
\textbf{(A)} We compute the linear recurrence
\begin{equation}\label{Equ:ZRec}
a_0(n,m) Z(n,m)+a_1(n,m) Z(n,m+1)+a_2(n,m) Z(n,m+2) = 0
\end{equation}
with
\begin{equation}\label{Equ:ZRecCoeff}
\begin{split}
a_0(n,m)&=(2n-m)^5,
\\
a_1(n,m)&=-(4n-2m-1) (6n^4-24n^3m+22n^2m^2-8nm^3+m^4-24n^3
\\ &\qquad
+30n^2m-14nm^2+2m^3+8n^2-10nm+2m^2-4n+m),
\\
a_2(n,m)&=-(2n-m-1)^3 (4n-m) (m+2),
\end{split}
\end{equation}
which holds simultaneously for $Z(n,m)=Z_l(n,m)$ and $Z(n,m)=Z_r(n,m)$ for all $n,m\in\ZZ_{\geq0}$ with $n-2\ge m\ge0$. In addition, we observe that $a_2(n,m)\neq0$ for all $n,m\in\ZZ_{\geq0}$ with $0\leq m<n$.
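Since the recurrence \eqref{Equ:ZRec} is used to propagate the two initial values in $m$, the non-vanishing of $a_2(n,m)$ matters. It follows from the factorization (for $0\leq m<n$ each factor of $-a_2$ is positive, because $2n-m-1\geq n\geq1$), and the short Python scan below (an illustration only; the bound $200$ is an arbitrary choice) confirms it mechanically.

```python
def a0(n, m):
    return (2*n - m)**5

def a1(n, m):
    return -(4*n - 2*m - 1) * (6*n**4 - 24*n**3*m + 22*n**2*m**2
            - 8*n*m**3 + m**4 - 24*n**3 + 30*n**2*m - 14*n*m**2
            + 2*m**3 + 8*n**2 - 10*n*m + 2*m**2 - 4*n + m)

def a2(n, m):
    return -(2*n - m - 1)**3 * (4*n - m) * (m + 2)

# For 0 <= m < n each factor of -a2 is positive (2n - m - 1 >= n >= 1,
# 4n - m > 0, m + 2 > 0), so a2(n, m) < 0 there; a brute-force check:
assert all(a2(n, m) < 0 for n in range(1, 201) for m in range(0, n))
```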
\smallskip
\noindent
\textbf{(B)} We show that the following initial values hold:
\begin{align}
Z_l(n,0)&=Z_r(n,0) \quad\text{for all}\; n\geq0,
\label{Equ:Initial0}\\
Z_l(n,1)&=Z_r(n,1) \quad\text{for all}\; n\geq1.
\label{Equ:Initial1}
\end{align}
Combined with \textbf{(A)} this proves that
$Z_l(n,m)=Z_r(n,m)$
holds true for all $n\ge m\ge0$.
\smallskip
In order to carry out the steps \textbf{(A)} and \textbf{(B)}, advanced symbolic summation techniques in the setting of difference rings are utilized. Among them, the following three summation paradigms, all available within the summation package~\texttt{Sigma}~\cite{Schneider:07a}, play a decisive role.
\noindent
\textbf{(i) Creative telescoping.}
Given a sum $F(m)=\sum_{\nu=a}^bf(m,\nu)$ and $\delta\in\ZZ_{\geq0}$, one searches for polynomials $c_0(m),\dots,c_{\delta}(m)$, free of $\nu$, and $g(m,\nu)$ such that
\begin{equation}\label{Equ:SummandRecurrence}
g(m,\nu+1)-g(m,\nu)=c_0(m)f(m,\nu)+c_1(m)f(m+1,\nu)+\dots+c_{\delta}(m)f(m+\delta,\nu)
\end{equation}
holds for all $a\leq\nu\leq b$.
Thus summing~\eqref{Equ:SummandRecurrence} over $\nu$ one obtains the recurrence
\begin{equation}\label{Equ:SumRecurrence}
g(m,b+1)-g(m,a)=c_0(m)F(m)+c_1(m)F(m+1)+\dots+c_{\delta}(m)F(m+\delta).
\end{equation}
By specializing $a,b$ further\,---\,e.g., to $a=0$ and $b=m$, or sending $b$ to $\infty$ if the limit exists\,---\,one obtains recurrence relations for more specific sums. The computed creative telescoping solution $(c_0(m),\dots,c_{\delta}(m),g(m,\nu))$ is also called a proof certificate for the recurrence~\eqref{Equ:SumRecurrence} found: usually it allows one to verify that $F(m)$ is a solution of~\eqref{Equ:SumRecurrence} by simple polynomial arithmetic, without analyzing the usually complicated computation steps of the underlying summation algorithm. The algorithmic version of creative telescoping has been introduced in~\cite{Zeilberger:91,AequalB} for hypergeometric sums. In order to prove~\eqref{hyp1}, we will employ a generalized machinery for creative telescoping~\cite{Schneider:15} where the summand can be composed not only of hypergeometric products, but also of indefinite nested sums defined over hypergeometric products. We emphasize that all recurrences produced below (using the \texttt{Sigma}-command \texttt{GenerateRecurrence}) are accompanied by such proof certificates which guarantee the correctness of all the calculations. Since the output is rather large and can be easily reproduced with \texttt{Sigma}, any explicit printout of the proof certificates is skipped.
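A toy illustration of such a certificate check (this is \emph{not} one of the certificates for \eqref{hyp1}, which are far larger): for $f(m,\nu)=\binom{m}{\nu}$ one can take $c_0=-2$, $c_1=1$ and $g(m,\nu)=-\binom{m}{\nu-1}$, and verifying \eqref{Equ:SummandRecurrence} amounts to Pascal's rule:

```python
from math import comb

def f(m, nu):
    return comb(m, nu)

def g(m, nu):
    # proof certificate; binom(m, -1) is taken to be 0
    return -comb(m, nu - 1) if nu >= 1 else 0

# the telescoping identity g(m,nu+1) - g(m,nu) = c0*f(m,nu) + c1*f(m+1,nu)
assert all(g(m, nu + 1) - g(m, nu) == -2*f(m, nu) + f(m + 1, nu)
           for m in range(10) for nu in range(m + 3))

# summed over nu it certifies F(m+1) = 2*F(m) for F(m) = sum_nu binom(m,nu)
F = lambda m: sum(f(m, nu) for nu in range(m + 1))
assert all(F(m + 1) == 2*F(m) for m in range(10))
```

The certificates produced by \texttt{GenerateRecurrence} are checked in exactly this spirit, only with vastly larger rational-function data.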
\noindent
\textbf{(ii) Recurrence solving.}
Given a linear recurrence of the form~\eqref{Equ:SumRecurrence}, one can search for solutions that are expressible within certain classes of function spaces. Using the \texttt{Sigma}-command \texttt{SolveRecurrence} one can search for hypergeometric solutions~\cite{Petkov:92,AequalB} and, more generally, for all solutions that are expressible in terms of indefinite nested sums defined over hypergeometric products. Such solutions are also called d'Alembertian solutions~\cite{Abramov:94,Schneider:01}, a subclass of Liouvillian solutions~\cite{Singer:99}.
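As a toy analogue in plain Python (our own illustrative examples, not output of \texttt{Sigma}): the recurrence $(n+1)y(n+1)=2(2n+1)y(n)$ has the hypergeometric solution $y(n)=\binom{2n}{n}$, while an indefinite sum over such a product\,---\,the d'Alembertian pattern\,---\,solves a telescoping equation:

```python
from math import comb

# hypergeometric solution: y(n) = binom(2n, n) solves (n+1)*y(n+1) = 2*(2n+1)*y(n)
y = lambda n: comb(2*n, n)
assert all((n + 1)*y(n + 1) == 2*(2*n + 1)*y(n) for n in range(30))

# d'Alembertian flavour: the indefinite sum z(n) = sum_{k<n} binom(2k, k)
# over the hypergeometric product solves z(n+1) - z(n) = y(n)
z = lambda n: sum(comb(2*k, k) for k in range(n))
assert all(z(n + 1) - z(n) == y(n) for n in range(30))
```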
\noindent
\textbf{(iii) Simplification of expressions.}
Within \texttt{Sigma} the expressions in terms of indefinite nested sums defined over hypergeometric products are represented in the setting of difference rings and fields~\cite{Karr:81,Schneider:08c,DR1}.
Utilizing this difference ring machinery~\cite{Schneider:10c,DR3} (compare also~\cite{Singer:08}) one can apply, e.g.,
the \texttt{Sigma}-command \texttt{SigmaReduce} to an expression in terms of indefinite nested sums. Then the output is a simplified expression where the arising sums and products (except products such as $(-1)^m$) are independent of each other as functions of their external parameter. In particular, the input expression evaluates to zero (from a certain point on) if and only if \texttt{Sigma} reduces the expression to the zero-expression.
\smallskip
These summation paradigms can be used to transform a definite (multi-)sum to an expression in terms of indefinite nested sums by deriving a linear recurrence, solving the recurrence found in terms of indefinite nested sums, and, in case sufficiently many solutions are found, combining them to an expression that evaluates to the same sequence as the input sum. Recently this machinery has been used for large scale problems coming from particle physics (see, e.g.,~\cite{CALadder:16} and references therein). In this regard, also the package \texttt{EvaluateMultiSums}~\cite{Schneider:13a}, which automates this summation mechanism, has been utilized non-trivially in the sections below.
In the following sections we present the main steps of our proof for Theorem~\ref{th:main} that is based on the above summation algorithms. All the necessary calculation steps are collected in a Mathematica notebook that can be accessed via
\footnote{In case that the reader does not have access to Mathematica, we supplement the pdf file \href{https://www.risc.jku.at/people/cschneid/data/SchneiderZudilinMMA.pdf}{\texttt{SchneiderZudilinMMA.pdf}} (same www-path!) that contains all the calculations in printed form.}
\begin{center}
\url{https://www.risc.jku.at/people/cschneid/data/SchneiderZudilinMMA.nb}\,.
\end{center}
\section{A linear recurrence in $m$ for the left-hand side}
\label{Subsec:Z_l}
In order to activate the summation package \texttt{Sigma}, the sums arising in~\eqref{hyp1} have to be tailored to an appropriate input format. As it turns out below, one can carry out the differentiation by introducing additionally the harmonic numbers
$$
S_a(n)=\sum_{k=1}^n\frac1{k^a}
$$
of order $a\in\ZZ_{\ge0}$.
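In exact rational arithmetic these are one-liners (a minimal sketch):

```python
from fractions import Fraction

def S(a, n):
    # harmonic number of order a: S_a(n) = sum_{k=1}^{n} 1/k^a
    return sum(Fraction(1, k**a) for k in range(1, n + 1))

assert S(1, 4) == Fraction(25, 12)
assert S(2, 3) == Fraction(49, 36)
```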
Though we see no natural way to obtain such a representation for the full summation range $\nu$ with $n-m+1\leq\nu$, splitting it into the ranges over $\nu$ with $n-m+1\leq\nu\leq 2n-m$ and $2n-m+1\leq\nu$ does the job well.
More precisely, we split the left-hand side of~\eqref{hyp1} into the two subsums
\begin{align*}
W_1(n,m)&=\sum_{\nu=2n-m+1}^{\infty}\frac{\d R_{n,m}(t)}{\d t}\bigg|_{t=\nu}=\sum_{\nu=1}^{\infty}\frac{\d R_{n,m}(t+2n-m)}{\d t}\bigg|_{t=\nu}
\\ \intertext{and}
W_2(n,m)&=\sum_{\nu=n-m+1}^{2n-m}\frac{\d R_{n,m}(t)}{\d t}\bigg|_{t=\nu}=\sum_{\nu=1}^{n}\frac{\d R_{n,m}(t+n-m)}{\d t}\bigg|_{t=\nu},
\end{align*}
so that
\begin{equation}\label{ZW1Ws}
Z_l(n,m)=-\frac1{3}\big(W_1(n,m)+W_2(n,m)\big).
\end{equation}
Observe that
\begin{multline*}
R_{n,m}(t+2n-m)=
(-1)^m\Bigl(t+2n-m+\frac n2\Bigr)\frac{(t+n-m)_m}{m!}\,\frac{(t)_{2n-m}}{(2n-m)!}
\\
\times
\frac{(t+3n-m+1)_n}{(t+2n-m)_{n+1}}\,\frac{(t+3n-m+1)_{2n-m}}{(t)_{2n-m+1}}\,\biggl(\frac{n!}{(t+2n-m)_{n+1}}\biggr)^2
\end{multline*}
and
\begin{multline*}
R_{n,m}(t+n-m)=(-1)^m\Bigl(t+n-m+\frac n2\Bigr)\frac{(t-m)_m}{m!}\,\frac{(t-n)_{2n-m}}{(2n-m)!}
\\ \qquad\times
\frac{(t+2n-m+1)_n}{(t+n-m)_{n+1}}\,\frac{(t+2n-m+1)_{2n-m}}{(t+n-m)_{2n-m+1}}\,\biggl(\frac{n!}{(t+n-m)_{n+1}}\biggr)^2.
\end{multline*}
By definition all Pochhammer symbols in the former expression are of the form $(t+x)_k$ for some $x\in\ZZ_{>0}$ and $k\geq0$. Thus, we can apply the formula
\begin{equation}\label{Equ:DPochhammer}
\frac{\d}{\d t}(x+t)_k\big|_{t=\nu}=(x+\nu)_k\big(S_1(\nu+x+k-1)-S_1(\nu+x-1)\big)
\end{equation}
for $\nu\in\ZZ$ with $x+\nu\in\ZZ_{>0}$, which follows from the product rule of differentiation.
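Formula \eqref{Equ:DPochhammer} is easy to spot-check numerically (an independent sanity check, not part of the proof; helper names are ours):

```python
from fractions import Fraction

def poch(x, k):
    # Pochhammer symbol (x)_k = x*(x+1)*...*(x+k-1)
    prod = 1
    for j in range(k):
        prod *= x + j
    return prod

def poch_deriv(x, k, nu):
    # d/dt (x+t)_k at t = nu: product rule over the k linear factors
    total = 0
    for i in range(k):
        prod = 1
        for j in range(k):
            if j != i:
                prod *= x + nu + j
        total += prod
    return total

def S1(k):
    return sum(Fraction(1, j) for j in range(1, k + 1))

for x in range(1, 4):
    for k in range(0, 5):
        for nu in range(1, 6):
            assert poch_deriv(x, k, nu) == \
                   poch(x + nu, k) * (S1(nu + x + k - 1) - S1(nu + x - 1))
```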
Employing this formula we get for all $\nu=1,2,\dots$ the following representation:
\begin{align*}
&
F_1(n,m,\nu)
=\frac{\d}{\d t}R_{n,m}(t+2n-m)\bigg|_{t=\nu}
\\ &\quad
=\frac{(-1)^m n!^2 (1+\nu )_{-1 -m+2 n} (-m+n +\nu)_m (1-m+3 n+\nu)_n (1-m+3 n+\nu)_{-m+2 n}}
{2\,m! (-m+2 n)!(-m+2 n+\nu )_{1+n}^3 (-m+2 n+\nu)_{1 -m+2 n}}
\\ &\quad\;\times
\bigg(-6 \nu
+\nu (-2 m+5 n+2 \nu)
\big(
-S_1({\nu })
-S_1({-m+n+\nu })
+5 S_1({-m+2 n+\nu })
\\ &\quad\;\quad
-5 S_1({-m+3 n+\nu })
-S_1({-2 m+4 n+\nu })
+S_1({n+\nu })
+S_1({-m+4 n+\nu })
\\ &\quad\;\quad
+S_1({-2 m+5 n+\nu })
\big)
+\frac{5 n (m-2 n)}{m-2 n-\nu}
+\frac{n (-2 m+3 n)}{n+\nu}
+\frac{3 n (m-n)}{-m+n+\nu}
\bigg).
\end{align*}
Further, we prepare the summand of $W_2(n,m)$. Notice that the rule~\eqref{Equ:DPochhammer} cannot be applied to the arising factor $(t-n)_{2n-m}$. However, we can easily overcome this issue by using the following elementary identity:
For $\nu\in\ZZ_{>0}$ with $1\leq \nu\leq n$ and any differentiable function $f(t)$, we have
\begin{equation}\label{Equ:SpecialD1}
\frac{\d}{\d t}\big((t-n)_{2n-m}f(t)\big)\bigg|_{t=\nu}
=(-1)^{n-\nu}f(\nu)(\nu+n-m-1)!(n-\nu)!\,.
\end{equation}
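Identity \eqref{Equ:SpecialD1} reflects that $(t-n)_{2n-m}$ vanishes at $t=\nu$ for $1\leq\nu\leq n$, so only the term differentiating that factor survives in the product rule. A quick numerical spot check (our own helper names; $f$ is an arbitrary test function):

```python
from math import factorial

n, m = 4, 1
f = lambda t: t*t + 3                     # arbitrary differentiable test function

def p_deriv(nu):
    # d/dt (t-n)_{2n-m} at t = nu, by the product rule over the linear factors
    k = 2*n - m
    total = 0
    for i in range(k):
        prod = 1
        for j in range(k):
            if j != i:
                prod *= nu - n + j
        total += prod
    return total

for nu in range(1, n + 1):
    lhs = f(nu) * p_deriv(nu)             # p(nu) = 0 kills the p(nu)*f'(nu) term
    rhs = (-1)**(n - nu) * f(nu) * factorial(nu + n - m - 1) * factorial(n - nu)
    assert lhs == rhs
```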
Therefore, for all $\nu\in\ZZ_{>0}$ with $1\leq \nu\leq n$ we get
\begin{align*}
F_2(n,m,\nu)&=\frac{\d R_{n,m}(t+n-m)}{\d t}\bigg|_{t=\nu}\\
&=(-1)^m\Bigl(\nu+n-m+\frac n2\Bigr)\frac{(\nu-m)_m}{m!}\,\frac{(-1)^{n-\nu}(\nu+n-m-1)!(n-\nu)!}{(2n-m)!}\\
&\quad\times
\frac{(\nu+2n-m+1)_n}{(\nu+n-m)_{n+1}}\,\frac{(\nu+2n-m+1)_{2n-m}}{(\nu+n-m)_{2n-m+1}}\,\biggl(\frac{n!}{(\nu+n-m)_{n+1}}\biggr)^2.
\end{align*}
Because of the factor $(\nu-m)_m$, we have $F_2(\nu)=0$ for all $\nu\in\ZZ_{>0}$ with $1\leq \nu\leq m$.
Consequently, $W_1(n,m)$ and $W_2(n,m)$ can be written as
\begin{equation*}
W_1(n,m)=\sum_{\nu=1}^{\infty}F_1(\nu)
\quad\text{and}\quad
W_2(n,m)=\sum_{\nu=m+1}^{n}F_2(\nu)=\sum_{\nu=1}^{n-m}F_2(\nu+m),
\end{equation*}
where the summands $F_1(\nu)$ and $F_2(\nu)$ are given in terms of hypergeometric products and linear combinations of harmonic numbers.
Since these sums fit the input class of \texttt{Sigma}, we can apply the command \texttt{GenerateRecurrence} to both sums and compute for $0\leq m\leq n$ the recurrences
\begin{equation*}
a_0(n,m) W_s(n,m) + a_1(n,m) W_s(n,m+1)+a_2(n,m) W_s(n,m+2) = r_s(n,m)
\quad\text{for}\; s=1,2,
\end{equation*}
where the coefficients are given in~\eqref{Equ:ZRecCoeff} and where $r_1(n,m)=-r_2(n,m)$ is too large to be reproduced here (verification of the latter equality required an extra simplification step with \texttt{Sigma}).
To compute the recurrence for the \emph{hypergeometric} sum $W_2(n,m)$ one can alternatively use the Mathematica package~\texttt{fastZeil} \cite{PauleSchorn:95} based on~\cite{Zeilberger:91}.
Thus, $Z_l(n,m)$ given in~\eqref{ZW1Ws} is a solution of the recurrence~\eqref{Equ:ZRec}.
For this part we needed 15~minutes to compute both recurrences and to combine them to~\eqref{Equ:ZRec}.
\section{A linear recurrence in $m$ for the right-hand side}
\label{Subsec:Z_r}
In order to calculate a linear recurrence for $Z_r(n,m)$ we follow the same strategy as for $Z_l(n,m)$ in Section~\ref{Subsec:Z_l} by utilizing more advanced summation tools of \texttt{Sigma}.
Collecting all products in~\eqref{Equ:RTilde} into
$$
G_{n,m,j}(t)=\frac{n!\,(t-n)_{2n-m}}{(t)_{n+1}(t)_{2n-m+1}}{\binom nj}^2\binom{2n-m+j}n\frac{(t-j)_n}{n!},
$$
the right-hand side of~\eqref{hyp1} can be rewritten as
\begin{align*}
Z_r(n,m)
&:=\frac16\sum_{\nu=1}^{\infty}\sum_{j=0}^{n}\frac{\d^2}{\d t^2}G_{n,m,j}(t)\bigg|_{t=\nu}.
\\
\intertext{Similarly to the previous section, we split the sum further into subsums (see~\eqref{Equ:ZrSumExpr} for the final split) such that the differential operator acting on the summands can be replaced by modified summands in terms of harmonic numbers. On the first step, we write}
Z_r(n,m)
&=\frac16\big(C_1(n,m)+C_2(n,m)\big)
\end{align*}
with
\begin{align*}
C_1(n,m)=\sum_{\nu=1}^{\infty}\sum_{j=0}^{n}\frac{\d^2}{\d t^2}G_{n,m,j}(t+n)\bigg|_{t=\nu}
\quad\text{and}\quad
C_2(n,m)=\sum_{\nu=1}^{n}\sum_{j=0}^{n}\frac{\d^2}{\d t^2}G_{n,m,j}(t)\bigg|_{t=\nu}
\end{align*}
and apply, as before, formula~\eqref{Equ:DPochhammer} and its relatives to get a monster summand of $C_1(n,m)$ (that fills two pages) in terms of the harmonic numbers of order $1$ and~$2$. For illustration we print out only a few lines:
\begin{align*}
G_1(n,m,j,\nu)
=&\frac{\d^2}{\d t^2}G_{n,m,j}(t+n)\bigg|_{t=\nu}
\\
=&\quad
\frac{\binom{n}{j}^2 \binom{j-m+2 n}{n} (\nu )_{-m+2 n} (-j+n+\nu )_n}{(n+\nu )_{1+n} (n+\nu )_{1-m+2 n}}\bigg(\cdots\\
&+S_1({-j+n+\nu })^2
+S_1({-j+2 n+\nu })^2
+S_1({-m+2 n+\nu })^2\\
&+S_1({n+\nu })
\frac{4(-j^2 m n+2 j^2 n^2+\dots+m \nu ^3-7 n \nu ^3-2 \nu ^4)}{\nu (n+\nu ) (-j+n+\nu ) (-j+2 n+\nu ) (-m+2 n+\nu )}\\
&+\cdots\bigg).
\end{align*}
In order to tackle the summand of $C_2(n,m)$, we have to differentiate $G_{n,m,j}(t)$ twice.
With $p(t)=(t-n)_{2n-m}$ and
\begin{equation}\label{Equ:qDef}
q(t)=\frac{G_{n,m,j}(t)}{p(t)}=\frac{n!}{(t)_{n+1}(t)_{2n-m+1}}{\binom nj}^2\binom{2n-m+j}n\frac{(t-j)_n}{n!}
\end{equation}
we conclude that for all $1\leq\nu\leq n$ we have
\begin{align*}
\tilde{G}(\nu)
&=\frac{\d^2}{\d t^2}G_{n,m,j}(t)\bigg|_{t=\nu}
=q(t)\,\frac{\d^2p(t)}{\d t^2}+2\frac{\d p(t)}{\d t}\,\frac{\d q(t)}{\d t}+p(t)\,\frac{\d^2q(t)}{\d t^2}\bigg|_{t=\nu}
\\
&=q(t)\frac{\d^2p(t)}{\d t^2}+2\frac{\d p(t)}{\d t}\frac{\d q(t)}{\d t}\bigg|_{t=\nu};
\end{align*}
the last equality follows since $p(t)|_{t=\nu}=0$ for all $1\leq\nu\leq n$.
Similarly to~\eqref{Equ:SpecialD1}, we can use in addition the following calculation:
For $\nu\in\ZZ_{>0}$ and $1\leq\nu\leq n$, we have
$$
\frac{\d}{\d t}(t-n)_{2n-m}\bigg|_{t=\nu}
=h(t)\bigg|_{t=\nu}
\quad\text{and}\quad
\frac12\,\frac{\d^2}{\d t^2}(t-n)_{2n-m}\bigg|_{t=\nu}
=\frac{\d}{\d t}h(t)\bigg|_{t=\nu}
$$
with
$$h(t)=\frac{(-1)^{n-\nu}\Gamma(t+n-m)(\nu-t+1)_{n-\nu}}{\Gamma(t-\nu+1)}.$$
In particular, if $\nu>j$, we can apply the rule~\eqref{Equ:DPochhammer} to all Pochhammer symbols in~\eqref{Equ:qDef}: \begin{align*}
&
G_2(n,m,j,\nu)=\tilde{G}(\nu)
\\ &\quad
= 2q(t)\frac{\d}{\d t}h(t)+2h(t)\frac{\d}{\d t} q(t)\Big|_{t=\nu}
\\ &\quad
= \frac{2(-1)^{n+\nu } \binom{n}{j}^2 \binom{j-m+2 n}{n} (1)_{n-\nu} (2)_{-1-m+n+\nu} (1-j+\nu)_{-1+n}}
{\nu ^3 (-m+n+\nu)^2(1+\nu )_n (1+\nu )_{-m+2 n}}
\\ &\quad\;\times
\bigg(
\nu (-j+\nu ) (-m+n+\nu ) \Big(\frac{1}{j-n-\nu}-S_1({-j+\nu })+S_1({-j+n+\nu })\Big)
\\ &\quad\quad
+\nu (-j+\nu ) (-m+n+\nu ) \big(-S_1({-m+2 n+\nu })+S_1({\nu })\big)
\\ &\quad\quad
+\nu (-j+\nu ) (-m+n+\nu ) \big(S_1({\nu })-S_1({n+\nu })\big)
\\ &\quad\quad
-\nu (-j+\nu )+\nu (-m+n+\nu )
\\ &\quad\quad
+2 (j-\nu ) (-m+n+\nu )
+\nu (-j+\nu ) (-m+n+\nu )
\\ &\quad\quad
+\nu (-j+\nu ) (-m+n+\nu ) \big(-1+S_1({-m+n+\nu })\big)
\\ &\quad\quad
-\nu (-j+\nu ) (-m+n+\nu ) S_1({n-\nu })
\bigg).
\end{align*}
For $1\leq\nu\leq j$, we use $q(\nu)=0$ and apply the rule
$$
\frac{\d}{\d t}\bigl((t-j)_{n}f(t)\bigr)\bigg|_{t=\nu}
=f(\nu)(\nu-j)_{j-\nu} (n+\nu-j-1)!
$$
(compare with \eqref{Equ:SpecialD1})
valid for any differentiable function $f(t)$, in place of \eqref{Equ:DPochhammer}, to~\eqref{Equ:qDef}.
It follows that
\begin{align*}
G_3(n,m,j,\nu)=&\tilde{G}(\nu)=2{\binom nj}^2\binom{2n-m+j}n\\
&\times\frac{(-1)^{n+\nu }(n+\nu-m-1)! (n-\nu )! (n+\nu-j-1)! (\nu-j)_{j-\nu }}{(\nu )_{1+n} (\nu )_{2 n-m+1}}.
\end{align*}
Therefore,
\begin{align*}
C_2(n,m)
&=\sum_{\nu=1}^{n}\sum_{j=0}^{n}\frac{\d^2}{\d t^2}G_{n,m,j}(t)\bigg|_{t=\nu}
\\
&=\sum_{j=0}^{n-1}\sum_{\nu=j+1}^{n}G_2(n,m,j,\nu)
+\sum_{j=1}^{n}\sum_{\nu=1}^{j}G_3(n,m,j,\nu),
\end{align*}
hence
\begin{align}
Z_r(n,m)
&=\frac16\big(C_1(n,m)+C_2(n,m)\big)
\nonumber\\
&=\frac16\bigg(\sum_{j=0}^n\sum_{\nu=1}^{\infty}G_1(n,m,j,\nu)
+\sum_{j=0}^{n-1}\sum_{\nu=j+1}^{n}G_2(n,m,j,\nu)
\nonumber\\ &\quad
+\sum_{j=1}^{n}\sum_{\nu=1}^{j}G_3(n,m,j,\nu)\bigg).
\label{Equ:ZrSumExpr}
\end{align}
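The reshuffling into the parts with $\nu>j$ and $\nu\leq j$ is just a disjoint cover of the index rectangle; a quick sanity check of the ranges:

```python
n = 7
full = {(nu, j) for nu in range(1, n + 1) for j in range(0, n + 1)}
part_gt = {(nu, j) for j in range(0, n) for nu in range(j + 1, n + 1)}   # nu > j
part_le = {(nu, j) for j in range(1, n + 1) for nu in range(1, j + 1)}   # nu <= j
assert part_gt | part_le == full and not (part_gt & part_le)
```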
Denote by $A_1(n,m)$, $A_2(n,m)$ and $A_3(n,m)$ the three resulting sums in~\eqref{Equ:ZrSumExpr}
and use \texttt{Sigma} to compute three linear recurrences of $A_{s}(n,m)$ with $s=1,2,3$.
A routine calculation demonstrates that each of the recurrences found can be brought to the form
\begin{equation}\label{Equ:ARecs}
a_0(n,m) A_s(n,m) + a_1(n,m) A_s(n,m+1)+ a_2(n,m) A_s(n,m+2) = u_s(n,m),
\end{equation}
where the coefficients are given in~\eqref{Equ:ZRecCoeff} and where only the right-hand sides $u_s(n,m)$ for $s=1,2,3$ differ.
As an illustration, we provide details on how we treat
$$
A_1(n,m)=C_1(n,m)=\sum_{j=0}^n\sum_{\nu=1}^{\infty}G_1(n,m,j,\nu).
$$
In the first step, \texttt{Sigma} is used to compute a linear recurrence of the inner sum
\begin{equation}\label{Equ:cDef}
c(n,m,j)=\sum_{\nu=1}^{\infty}G_1(n,m,j,\nu)
\end{equation}
in~$j$,
\begin{align}
&
(j-n)^2 (j-n+1)^2 (j-m+2n+1) (j-m+2n+2)c(n,m,j)
\nonumber\\ &\;
-(j-n+1)^2 (j-m+2n+2)
\big(2 j^3-2 j^2 m+2 j m n-3 j n^2+m n^2-2 n^3
\nonumber\\ &\; \qquad\qquad\qquad
+8 j^2-5 j m-2 j n+4 m n-7 n^2
+11 j-3 m-4 n
+5
\big)c(n,m,j+1)
\nonumber\\ &\;
+(j+2)^3(j-2n+1) (j-m+n+2)^2c(n,m,j+2)=r(n,m,j),
\label{Equ:RecPureInJ}
\end{align}
and one additional recurrence with one shift in $m$ and one shift in~$j$,
\begin{align}
&
(j-n)^2 (j-m+2n+1)
\big(
j^3+j m^2-j^2 m-m^3-2 j m n+4 m^2 n-4 m n^2
\nonumber\\ &\; \qquad\qquad\qquad
+2 j^2-j m-2 j n+2 m n-4 n^2
+j-2 n
\big)c(n,m,j)
\nonumber\\ &\;
-(j+1)^3(j-2 n) (j-m+n+1)^2c(n,m,j+1)
\nonumber\\ &\;
-(j-n)^2(j-m+2n) (j-m+2n+1) (m+1)(m-2n) c(n,m+1,j)
=s(n,m,j);
\label{Equ:RecmPureInJ}
\end{align}
here the right-hand sides $r(n,m,j)$ and $s(n,m,j)$ are large expressions in terms of hypergeometric products and the harmonic numbers
$S_ 1({n})$, $S_ 1({2 n})$, $S_ 1(n-j)$, $S_ 1({2 n-j})$, $S_ 1({2 n-m})$, $S_1({3 n-m})$.
Finally, we use new algorithms that are described in~\cite{BRS:18} and that are built on ideas from~\cite{Schneider:05d,APS:05}.
Activating these new features of \texttt{Sigma} we can compute the linear recurrence~\eqref{Equ:ARecs} with $s=1$ where the right-hand side $u_1(n,m)$ is an expression in terms of the harmonic numbers $S_1({n}),S_1({2 n}),S_1({2 n-m}),S_1({3 n-m})$, the infinite sums
\begin{equation}\label{Equ:InfiniteSums}
c(n,m,0), \; c(n,m,1), \; c(n,m,n+1)
\end{equation}
and the definite sums
\begin{equation}\label{Equ:FiniteSums}
\begin{aligned}
&\sum_{i=0}^n \binom{n}{i}^2 \binom{2n-m+i}{n} \frac{(n-i+1)_{n}}{(2n-i)^k}&&\quad\text{for}\; k=0,1,2,
\\
&\sum_{i=0}^n \binom{n}{i}^2 \binom{2n-m+i}{n} \frac{(n-i+1)_{n}}{(2n-i)^k}S_1(n-i)&&\quad\text{for}\; k=0,1,
\\
&\sum_{i=0}^n \binom{n}{i}^2 \binom{2n-m+i}{n} \frac{(n-i+1)_{n}}{(2n-i)^k}S_1(2n-i)&&\quad\text{for}\; k=0,1.
\end{aligned}
\end{equation}
Note that all these definite sums in~\eqref{Equ:FiniteSums} are \emph{not} expressible in terms of hypergeometric products and indefinite nested sums defined over such products.
For example, the linear recurrence for the last sum in~\eqref{Equ:FiniteSums} with $k=0$ computed with \texttt{Sigma} has order $5$ and does not even have a hypergeometric product solution.
We further remark that the above approach is connected to the classical holonomic summation approach~\cite{Zeilberger:90a} and its improvements given in~\cite{Chyzak:00,Koutschan:13}.
In all these traditional versions one needs systems composed of homogeneous recurrences. However, the transformation of~\eqref{Equ:RecPureInJ} and~\eqref{Equ:RecmPureInJ} to such a form would lead to gigantic recurrence systems and the computation of the desired linear recurrence~\eqref{Equ:ZRec} would be out of scope.
Using this refined holonomic summation approach with \texttt{Sigma}, we needed in total 10 minutes to derive the recurrence for $A_1(n,m)$ which holds for all $0\leq m\leq n$. Similarly, one can compute for the other two double sums $A_2(n,m)$ and $A_3(n,m)$ the recurrence~\eqref{Equ:ARecs} in 15 and 2 minutes, respectively, which hold for all $0\leq m\leq n-2$.
Here the right-hand sides $u_2(n,m)$, $u_3(n,m)$ consist of similar definite sums as given in~\eqref{Equ:FiniteSums}.
Adding up the recurrences \eqref{Equ:ARecs} for $s=1,2,3$
results in a linear recurrence for $Z_r(n,m)$ with~\eqref{Equ:ZRec} on the left-hand side and
$$
u(n,m)=\frac16\big(u_1(n,m)+u_2(n,m)+u_3(n,m)\big)
$$
on the right-hand side which holds for all $0\leq m\leq n-2$.
It remains to show that the inhomogeneous part evaluates to zero,
$u(n,m)=0$ for $0\leq m\leq n-2$.
As indicated earlier, the expression $u(n,m)$ is composed of
\begin{itemize}
\item the infinite sums~\eqref{Equ:InfiniteSums} with~\eqref{Equ:cDef};
\item finite definite sums like those given in~\eqref{Equ:FiniteSums}.
\end{itemize}
A verification for all $n-2\geq m\geq0$ looks rather challenging. However, using the toolbox of~\texttt{Sigma}, this task can be accomplished automatically in 16~minutes of calculation time.
First, we treat the infinite sums by merging them to one big infinite sum and then compute a linear recurrence for it, which happens to be completely solvable in terms of indefinite nested sums.
This reduces all the infinite sums to indefinite nested sums.
The finite definite sums are a tougher nut to crack.
Internally, all sums (including~\eqref{Equ:FiniteSums}) are first considered as indefinite nested versions (with a common upper bound, say $a$). Then a finite subset of the sums arising is calculated with the command \texttt{SigmaReduce} such that there are no dependences among them and such that all the remaining sums can be represented in terms of these independent sums.
It turns out that all sums (with $a$ now replaced by the `synchronized' upper bound $n-3$) cancel and only one definite sum remains. Activating the package \texttt{EvaluateMultiSums}~\cite{Schneider:13a} (which automatically combines the available summation tools of \texttt{Sigma}), we simplify this remaining sum to
\begin{align*}
&
\sum_{i=1}^{n-3}\frac{(-1)^i}i\binom{n}{i}\binom{2n-m+i}{n}
\\ &\quad
=\frac{(-1)^n \binom{3n-m-2}{n-2}}{2 (n-2) (n-1)^2 n^2}
\big(
-4 m
-4 m^2
+12 n+30 m n+6 m^2 n
-54 n^2
-43 m n^2-7 m^2 n^2
\\ &\quad\qquad
+70 n^3
+40 m n^3+4 m^2 n^3
-54 n^4
-19 m n^4
-m^2 n^4
+22 n^5
+4 m n^5
-4 n^6
\big)
\\ &\quad\;
-\binom{2 n-m}{n}\big(S_1(2n-m)+S_1(n)-S_1(n-m)\big).
\end{align*}
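This evaluation can be spot-checked in exact arithmetic for small $n$ and $m$ (an independent numerical check, not part of the \texttt{Sigma} derivation; helper names are ours):

```python
from fractions import Fraction
from math import comb

def S1(k):
    return sum(Fraction(1, j) for j in range(1, k + 1))

def lhs(n, m):
    return sum(Fraction((-1)**i, i) * comb(n, i) * comb(2*n - m + i, n)
               for i in range(1, n - 2))

def rhs(n, m):
    poly = (-4*m - 4*m**2 + 12*n + 30*m*n + 6*m**2*n - 54*n**2 - 43*m*n**2
            - 7*m**2*n**2 + 70*n**3 + 40*m*n**3 + 4*m**2*n**3 - 54*n**4
            - 19*m*n**4 - m**2*n**4 + 22*n**5 + 4*m*n**5 - 4*n**6)
    head = Fraction((-1)**n * comb(3*n - m - 2, n - 2) * poly,
                    2 * (n - 2) * (n - 1)**2 * n**2)
    return head - comb(2*n - m, n) * (S1(2*n - m) + S1(n) - S1(n - m))

assert all(lhs(n, m) == rhs(n, m) for n in range(3, 9) for m in range(n - 1))
```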
In a nutshell, $u(n,m)$ can be reduced to an expression given purely in terms of indefinite nested sums, which after further simplifications collapses to zero. This shows that not only the left-hand side but also the right-hand side of~\eqref{hyp1} satisfies the same recurrence~\eqref{Equ:ZRec}. The verification of this fact took in total 43 minutes.
\section{Dealing with the initial values}
\label{sec5}
In order to verify~\eqref{hyp1}, it remains to show~\eqref{Equ:Initial0} and~\eqref{Equ:Initial1}. For~\eqref{Equ:Initial0} we proceed as follows. First, we compute for $Z_l(n,0)$ the recurrence
\begin{multline}\label{Equ:RecIni0LHS}
-16 (2 n+1)^4 Z_l(n,0)
-(n+1)^4 Z_l(n+1,0)
\\
=-\frac{(-1)^n n!^8 (1+2 n)_{2 n}(1+4 n) \big(
831+5265 n+12601 n^2+13499 n^3+5460 n^4\big)}{48 (2 n+1)!^5}.
\end{multline}
Internally, we follow the strategy in Section~\ref{Subsec:Z_l}: we use the representation from~\eqref{ZW1Ws} to get
$$
Z_l(n,0)=-\frac1{3}\big(W_1(n,0)+W_2(n,0)\big)
$$
and, for $W_1(n,0)$ and $W_2(n,0)$, compute two recurrences, where both have the \emph{same} homogeneous part.
Thus adding the inhomogeneous parts and simplifying the result further leads to~\eqref{Equ:RecIni0LHS}.
Solving this recurrence leads, for any $n\ge0$, to the closed form
\begin{align}
Z_l(n,0)
&=\frac{(-1)^n}{30720}
\big(
105 U_9(n)
+955 U_8(n)
+3095 U_7(n)
+2045 U_6(n)
\nonumber\\ &\;\quad
-12140 U_5(n)
-27300 U_4(n)
+12288 \zeta(2)^2
\big) \binom{2 n}{n}^4
\nonumber\\ &\;
+\frac{(-1)^n(4n+1)(5460 n^4+13499 n^3+12601 n^2+5265 n+831) \binom{4 n}{2 n}}{768(2n+1)^9\binom{2 n}{n}^4}
\label{Equ:Zln=0ClosedForm}
\end{align}
in terms of indefinite nested sums
\begin{equation}\label{eq:U}
U_k(n)=\sum_{i=0}^n\frac{\binom{4i}{2i}}{(2i+1)^k\binom{2i}{i}^8}
\quad\text{with}\; k=1,2,\dotsc.
\end{equation}
Similarly to Section~\ref{Subsec:Z_r}, we use the sum representation in~\eqref{Equ:ZrSumExpr} with $m=0$ encoded by $A_1(n,0)+A_2(n,0)+A_3(n,0)$ to compute the recurrence
\begin{equation}\label{Equ:Zrm0}
\begin{split}
&
16 (n+1)^3 (2 n+1)^4 (4 n+3) (4 n+5) (5460 n^4+35339 n^3+85858 n^2+92804 n+37656) Z_r(n,0)
\\ &\;
+(357913920 n^{13}
+5716680688 n^{12}
+41762423804 n^{11}
+184637211081 n^{10}
\\ &\;\quad
+550778114541 n^9
+1169740743051 n^8
+1818232366245 n^7
+2092705983417 n^6
\\ &\;\quad
+1782121652067 n^5
+1108272850929 n^4
+488951050619 n^3
\\ &\;\quad
+144869028586 n^2
+25833166356 n
+2094206184) Z_r(n+1,0)
\\ &\;
+8 (n+2)^4 (2 n+3)^5 (5460 n^4+13499 n^3+12601 n^2+5265 n+831) Z_r(n+2,0)
=0
\end{split}
\end{equation}
which holds true for all $n\geq0$.
Furthermore, we verify that $Z_l(n,0)$ is also a solution of this recurrence by plugging its representation~\eqref{Equ:Zln=0ClosedForm} into the recurrence and checking that the expression simplifies to zero.
Finally, we verify that the first two initial values of $Z_l(n,0)$ and $Z_r(n,0)$ agree:
\begin{equation*}
Z_l(0,0)=Z_r(0,0)=\frac25\zeta(2)^2,
\quad
Z_l(1,0)=Z_r(1,0)=\frac{277}{16} - \frac{32}{5}\zeta(2)^2;
\end{equation*}
to determine these evaluations, again \texttt{Sigma} has been utilized. Together with the fact that the leading coefficient in~\eqref{Equ:Zrm0} is nonzero for all $n\geq0$, this implies that~\eqref{Equ:Initial0} holds.
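The closed form \eqref{Equ:Zln=0ClosedForm} and these two evaluations are consistent, as one can spot-check in exact arithmetic by splitting $Z_l(n,0)$ into its rational part and the coefficient of $\zeta(2)^2$ (an independent check; helper names are ours):

```python
from fractions import Fraction
from math import comb

def U(k, n):  # the sums U_k(n) from (eq:U)
    return sum(Fraction(comb(4*i, 2*i), (2*i + 1)**k * comb(2*i, i)**8)
               for i in range(n + 1))

def Zl_n0(n):
    """Return (r, c) with Z_l(n,0) = r + c*zeta(2)^2, following the closed form."""
    b = comb(2*n, n)
    T = (105*U(9, n) + 955*U(8, n) + 3095*U(7, n) + 2045*U(6, n)
         - 12140*U(5, n) - 27300*U(4, n))
    s = (-1)**n
    r = (Fraction(s, 30720) * T * b**4
         + Fraction(s * (4*n + 1) * (5460*n**4 + 13499*n**3 + 12601*n**2
                                     + 5265*n + 831) * comb(4*n, 2*n),
                    768 * (2*n + 1)**9 * b**4))
    c = Fraction(s * 12288 * b**4, 30720)
    return r, c

# matches Z_l(0,0) = (2/5) zeta(2)^2 and Z_l(1,0) = 277/16 - (32/5) zeta(2)^2
assert Zl_n0(0) == (Fraction(0), Fraction(2, 5))
assert Zl_n0(1) == (Fraction(277, 16), Fraction(-32, 5))
```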
To verify~\eqref{Equ:Initial1}, we repeat the same game for $Z_l(n,1)$ and $Z_r(n,1)$: namely, we find the closed form representation
\begin{align}
Z_l(n,1)
&=
\frac{3 n(-1)^n}{40960}\big(
105 U_9(n)
+955 U_8(n)
+3095 U_7(n)
+2045 U_6(n)
\nonumber\\ &\;\quad
-12140 U_5(n)
-27300 U_4(n)
+12288 \zeta(2)^2
\big) \binom{2 n}{n}^4
\nonumber\\ &\;
-\frac{(-1)^n\binom{4 n}{2 n}}{1024 n^3 (2 n+1)^9\binom{2 n}{n}^4}\,
(16 n^9 + 116544 n^8 + 398115 n^7 + 587145 n^6
\nonumber\\ &\;\quad
+ 490329 n^5 + 255555 n^4 + 86016 n^3 + 18432 n^2 + 2304 n + 128)
\label{Equ:Zln=1ClosedForm}
\end{align}
valid for all $n\geq1$. In addition, we compute a recurrence of order 2 for $Z_r(n,1)$
and, as above, verify that $Z_l(n,1)$ is also its solution (by plugging in the representation~\eqref{Equ:Zln=1ClosedForm}).
Together with the initial values
\begin{equation*}
Z_l(1,1)=Z_r(1,1)=-13 +\frac{24}{5}\zeta(2)^2,
\quad
Z_l(2,1)=Z_r(2,1)=\frac{4090247}{1944} - \frac{3888}{5}\zeta(2)^2
\end{equation*}
this implies that \eqref{Equ:Initial1} holds as well and completes the proof of~\eqref{hyp1}.
We note that the verification of each initial value problem, \eqref{Equ:Initial0} and~\eqref{Equ:Initial1}, took about 25 minutes.
\section{Summary}
\label{sec6}
Summarizing, the full proof of~\eqref{hyp1} took in total around 2 hours (excluding all the human trials and errors to find the tailored paths described above, and days to physically write this paper).
The initial values~\eqref{Equ:Zln=0ClosedForm} and~\eqref{Equ:Zln=1ClosedForm} are given through $2\zeta(2)^2/5=\zeta(4)$, hypergeometric products and the indefinite nested sums \eqref{eq:U} with $k=4,5,6,7,8,9$.
Thus, feeding the recurrence~\eqref{Equ:ZRec} with all this stuff we get the following corollary.
\begin{theorem}
For any $n\ge m\ge0$, both sides of $Z_l(n,m)=Z_r(n,m)$ can be expressed \textup(and computed in linear time\textup) in terms of $\zeta(4)$ and $U_4(n),\dots,U_9(n)$ in~\eqref{eq:U}.
\end{theorem}
The project \cite{MZ20} implicitly suggests that there can be further\,---\,more general(!)\,---\,forms of \eqref{hyp1}, with more than two independent parameters.
We have tried (unsuccessfully) to find some but cannot even figure out how to adapt \eqref{hyp1} to the case $m>n$.
\medskip
\noindent
\textbf{Acknowledgements.}
This project commenced during the joint visit of the authors to the Max Planck Institute for Mathematics (Bonn) in 2007
and went on during the second author's visit to the Research Institute for Symbolic Computation (Linz) in February 2020.
We thank the staff of these institutes for providing such excellent conditions for research.
\label{sec:introduction}
Over the years, a variety of experimental methods have been developed allowing in-depth investigations of biomolecular systems.\cite{serdyuk2017methods} However, a successful application of these tools to gain new insight is often not possible due to system complexity and limitations of the adopted tool itself.\cite{renaud2016biophysics} On the other hand, advances in computational hardware and efficient algorithm design have enabled implementing physical laws in a simulated world that allows the examination of these systems from a different perspective. Molecular dynamics (MD) is one such well-established computational technique that generates a trajectory describing the time evolution of a system at all-atom, femtosecond resolution.\cite{karplus2002molecular} These trajectories can be analyzed to gain new thermodynamic and mechanistic insights. These insights can arguably be best encapsulated through the reaction coordinate (RC), which is the most informative mechanistic degree of freedom describing a system. The RC can differentiate between relevant metastable states and the pathways for moving between them. However, interpreting the huge amount of data produced by MD to identify the RC is not straightforward, and many methods have been proposed for this purpose.\cite{best2005reaction,ma2005automatic,peters2006obtaining,nadler2006diffusion,rohrdanz2011determination,mardt2018vampnets}
Additionally, complex and practically relevant problems can involve rare events and take place on time-scales that are still unreachable in MD. To overcome this limitation, a variety of enhanced sampling algorithms such as umbrella sampling, replica exchange MD, metadynamics, weighted ensemble and several others have been developed.\cite{torrie1977nonphysical,barducci2008well,sugita1999replica,zuckerman2017weighted,votapka2017seekr} A successful application of most of these methods can depend on the knowledge of the system's RC.\cite{bussi2015free} However, for practical problems it can be difficult to know the RC \textit{a priori}, and a deduction of the RC from MD simulations requires good sampling. This challenge clearly illustrates the need for developing novel algorithms to interpret MD results to learn the RC and tackle the rare event problem.
State Predictive Information Bottleneck (SPIB) \cite{wang2021state} is one such recent method that enables learning the RC in systems with an arbitrary and \textit{a priori} unknown number of metastable states. It belongs to the Reweighted Autoencoded Variational Bayes (RAVE) \cite{ribeiro2018reweighted} family of methods. SPIB employs the concept of information bottleneck from information theory to approximate the RC of a system and is built on the principle of using minimal information from the past to reliably predict the state of the system at a future time. As shown in Ref. \onlinecite{wang2021state} such a state predictive information bottleneck approximates the perfect RC as given by the committor.\cite{bolhuis2002transition}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{Images/system.jpg}
\caption{\textbf{Schematic representation of the systems studied in this work}: (a) Chiral transitions in (Aib)$_{9}$, (b) permeation of benzoic acid (BA) through a symmetric phospholipid bilayer. For (Aib)$_{9}$ we considered different dihedrals as order parameters (OPs). For the BA permeation through lipid bilayer, our OPs were built from two key physical quantities highlighted here: (i) membrane-BA Z distance, $d1_Z$ and (ii) angle between BA and Z axis ($\theta_Z$). Further details of OPs are given in Sec. \ref{sec:System_OP_details}.}
\label{fig:system}
\end{figure*}
In this work, we demonstrate how the RC learnt through SPIB from short, under-sampled trajectories can be used as a biasing coordinate in enhanced sampling, allowing significant and nearly automated acceleration of protein conformational dynamics and small molecule permeation through biological membranes.\cite{cardenas2012unassisted,shinoda2016permeability} Here we use SPIB with well-tempered metadynamics, but the protocol should be more generally applicable to other enhanced sampling methods.\cite{barducci2008well,tiwary2016review,sugita1999replica,zuckerman2017weighted,faradjian2004computing,defever2019contour,valsson2016enhancing,tiwary2013metadynamics,votapka2017seekr} Additionally, we show how SPIB can be performed on the biased trajectories to interpret them.
We demonstrate this SPIB-metadynamics protocol on two test-piece systems. First, we apply SPIB on $\alpha$-aminoisobutyric acid$_{9}$ (Aib)$_{9}$, which is a 9-residue synthetic peptide with $\alpha$-helical secondary structure (Fig. \ref{fig:system}(a)). On long time-scales the system is achiral, but on shorter time-scales the system undergoes left- to right-handed chiral transitions and vice-versa.\cite{sittel2017principal,biswas2018metadynamics} Due to the time-scale limitations of unbiased MD, calculating the relative stability of fully left-handed and fully right-handed conformations of this system through this approach represents a computationally difficult task. Secondly, we study the permeation of a small asymmetric compound through a synthetic, symmetric phospholipid bilayer constructed from pure DMPC (1,2-dimyristoyl-sn-glycero-3-phosphocholine) lipids.\cite{lee2016simulation} The protonated benzoic acid (BA) molecule has a polar region that interacts with the hydrophilic head groups of DMPCs (Fig.~\ref{fig:system}(b)). For both systems we show how SPIB clearly helps in accelerating and making sense of the molecular dynamics. Additionally, we demonstrate different strategies involving the utilization of multiple independent trajectories as well as the initialization of the protocol (Sec. \ref{sec:Results}) for implementing SPIB. We thus believe this work represents a step forward in practical and automated use of AI-augmented enhanced sampling simulations for studying complex biomolecular problems.
\section{Methods}
\label{sec:Methods}
\subsection{State predictive information bottleneck (SPIB)}
\label{sec:spib_overview}
SPIB uses an information-bottleneck-based protocol\cite{alemi2016deep} to learn the RC, which then can be used to iteratively guide an appropriate enhanced sampling scheme. Consider a biophysical system characterized by a set of order parameters (OPs) $\bm{X}$ and some metastable states $\bm{y}$. $\bm{X}$ can be amino acid dihedral angles for (Aib)$_{9}$, or specific distances and angular coordinates for BA-DMPC, as discussed in Sec. \ref{sec:System_OP_details}. The state label $\bm{y}$ could be composed of the fully left-handed state and the fully right-handed state for (Aib)$_{9}$, and the bound state and unbound state for BA-DMPC. We describe these in detail in Sec. \ref{sec:Results}. As the number and location of such states are usually unavailable \textit{a priori}, SPIB only requires an initial assignment of state labels to launch its training process, and then it will automatically refine the assignments in an iterative manner. In SPIB, the RC is defined as the predictive information bottleneck $\bm{z}$, which carries the maximal predictive power for the future state of the system. In practice, a non-linear ANN encoder first converts the high dimensional input data into a low dimensional RC representation. Then, the ANN decoder classifies this RC space into different metastable states. In contrast to a RAVE \cite{ribeiro2018reweighted} decoder predicting the entire input space, an SPIB decoder predicts the metastable state of the system at a specific future time defined as the time-delay $\Delta t$. In this regard, SPIB can be thought of as a `fast mode filter' where the hyperparameter $\Delta t$ can be used to tune the coarse-graining of the identified slow modes, as demonstrated in its original proof-of-principle publication.\cite{wang2021state}
Thus, for a given unbiased trajectory $\{\bm{X}^1,\cdots,\bm{X}^{M+s}\}$ and its corresponding state labels $\{\bm{y}^1,\cdots,\bm{y}^{M+s}\}$ with large enough $M$, we can employ the deep variational information bottleneck framework \cite{wang2021state,alemi2016deep} and construct an artificial neural network (ANN) that is trained to maximize the following objective function:
\begin{equation}
\begin{aligned}
\label{eq:SPIB_obj}
\mathcal{L}=\frac{1}{M}\sum_{n=1}^M &\Bigl[\log q_{\theta}(\bm{y}^{n+s}|\bm{z}^{n})-\beta \log \frac{p_{\theta}(\bm{z}^{n}|\bm{X}^n)}{r_{\theta}(\bm{z}^{n})} \Bigr]
\end{aligned}
\end{equation}
where $\bm{z}^{n}$ is sampled from $p_{\theta}(\bm{z}|\bm{X}^n)$ and the time interval between $\bm{X}^n$ and $\bm{X}^{n+s}$ is the time delay $\Delta t$, or how far into the future SPIB should predict. The first term $\log q_{\theta}(\bm{y}^{n+s}|\bm{z}^{n})$ measures the ability of our representation to predict the desired target, while the second term $\log \frac{p_{\theta}(\bm{z}^{n}|\bm{X}^n)}{r_{\theta}(\bm{z}^{n})}$ can be interpreted as the complexity penalty that acts as a regularizer. Such a trade-off between the prediction capacity and model complexity is then controlled by a hyper-parameter $\beta\in[0,\infty)$. All three probability distributions $\left\{ p_{\theta}(\bm{z}|\bm{X}), q_{\theta}(\bm{y}|\bm{z}), r_{\theta}(\bm{z}) \right\}$ are implemented through deep neural networks with model parameters $\theta$. Further implementation details are provided in supplementary materials (SM).
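As a concrete illustration, the objective above can be evaluated on a toy problem with a Gaussian encoder, a softmax decoder, and a standard-normal prior. The following is a minimal NumPy sketch, not the actual SPIB implementation (which trains deep networks); the toy encoder, decoder, and all shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def spib_objective(X, y_future, encode, decode, prior_logpdf, beta):
    """Monte-Carlo estimate of the SPIB objective for one batch.

    encode(X)       -> (mu, sigma) of the Gaussian encoder p_theta(z|X)
    decode(z)       -> class probabilities of the decoder q_theta(y|z)
    prior_logpdf(z) -> log of the variational prior r_theta(z)
    """
    mu, sigma = encode(X)
    z = mu + sigma * rng.standard_normal(mu.shape)   # reparameterized sample
    # prediction term: log q_theta(y^{n+s} | z^n)
    probs = decode(z)
    log_pred = np.log(probs[np.arange(len(y_future)), y_future])
    # complexity penalty: log p_theta(z|X) - log r_theta(z)
    log_post = -0.5 * np.sum(((z - mu) / sigma) ** 2
                             + np.log(2.0 * np.pi * sigma ** 2), axis=1)
    penalty = log_post - prior_logpdf(z)
    return np.mean(log_pred - beta * penalty)

# toy demo: 2-D inputs, 1-D bottleneck, 2 future states (shapes illustrative)
X = rng.standard_normal((100, 2))
y = (X[:, 0] > 0).astype(int)                        # "future state" labels
encode = lambda X: (X[:, :1], np.full((len(X), 1), 0.5))
def decode(z):
    logits = np.hstack([-z, z])
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
prior_logpdf = lambda z: -0.5 * np.sum(z ** 2 + np.log(2.0 * np.pi), axis=1)
L = spib_objective(X, y, encode, decode, prior_logpdf, beta=0.01)
```

With small $\beta$ the objective is dominated by the (negative) log-likelihood of the state prediction, which is the behavior the trade-off parameter is meant to control.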
However, for biased data generated from metadynamics, we need to reweight out the effect of the bias. Thus, along with the time series $\{\bm{X}^1,\cdots,\bm{X}^{M+s}\}$, we will also have the corresponding time-series for the bias $V$ applied to the system $\{V^1,\cdots,V^{M+s}\}$. We can then use the principle of importance sampling similar to Ref. \onlinecite{ribeiro2018reweighted} and write our objective function $\mathcal{L}'$ as follows:
\begin{equation}
\begin{aligned}
\label{eq:biased_SPIB_obj}
\mathcal{L}'=\Bigl[\sum_{n=1}^M e^{\beta V^n}\Bigr]^{-1}\sum_{n=1}^M e^{\beta V^n}&\Bigl[\log q_{\theta}(\bm{y}^{n+s}|\bm{z}^{n})\\
&-\beta \log \frac{p_{\theta}(\bm{z}^{n}|\bm{X}^n)}{r_{\theta}(\bm{z}^{n})} \Bigr]
\end{aligned}
\end{equation}
where the $\beta$ multiplying $V^n$ is the inverse temperature $1/k_BT$ (not to be confused with the trade-off hyperparameter of the SPIB objective). The above equation does not correct the kinetics for biased simulations, but as shown in Sec. \ref{sec:Results}, we found that in practice it can still robustly identify physically meaningful metastable states and high-quality RCs.
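The reweighting factor in the equation above is just a normalized per-frame importance weight. A minimal sketch follows, with an illustrative bias series; the log-sum-exp shift for numerical stability is an implementation detail assumed here, not stated in the text:

```python
import numpy as np

def metad_weights(V, beta):
    """Normalized importance weights e^{beta V_n} / sum_m e^{beta V_m},
    computed with a max-shift (log-sum-exp trick) to avoid overflow."""
    a = beta * np.asarray(V, dtype=float)
    a -= a.max()                 # shift does not change the normalized weights
    w = np.exp(a)
    return w / w.sum()

# frames that accumulated more bias count more in the reweighted objective
V = np.array([0.0, 2.0, 4.0])            # bias in kJ/mol (illustrative)
w = metad_weights(V, beta=1.0 / 2.494)   # 1/kBT at ~300 K in kJ/mol
```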
\subsection{System setup and OP description}
\label{sec:System_OP_details}
The CHARMM36m\cite{huang2017charmm36m} all-atom force field is used to parametrize the (Aib) and DMPC residues, while CGenFF \cite{vanommeslaeghe2012automation} is used to parametrize BA. The (Aib)$_{9}$ system contains 4,749 atoms in total and is solvated using 1,539 TIP3P \cite{mackerell1998all,jorgensen1983comparison} water molecules. For the BA-DMPC system, the biological membrane is constructed as a phospholipid bilayer from 80 pure DMPC residues, with 40 DMPC residues forming the upper and lower membrane leaflets, respectively. The system contains a total of 222,785 atoms and is solvated using 71,102 TIP3P water molecules.
Since (Aib)$_{9}$ is a 9-residue peptide, a natural choice of OPs for this system, which is easily generalizable to other peptides, involves the nine $\phi$ and $\psi$ dihedral angles.\cite{biswas2018metadynamics} We consider sines and cosines of all the dihedrals to remove angular periodicity which amounts to a total of 36 OPs.
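The featurization described above can be sketched as follows (the helper function and the ordering of the 36 OPs, all sines followed by all cosines, are illustrative choices):

```python
import numpy as np

def dihedral_features(phi, psi):
    """Map the 9 phi and 9 psi dihedrals (radians) of (Aib)9 to 36
    periodicity-free OPs: the sines of all 18 dihedrals followed by
    their cosines."""
    ang = np.concatenate([np.atleast_2d(phi), np.atleast_2d(psi)], axis=1)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)

X = dihedral_features(np.zeros((5, 9)), np.zeros((5, 9)))  # 5 example frames
```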
To construct an OP space for the BA-DMPC system, we first calculate four virtual position coordinates: the center of mass (COM) of the entire membrane, the membrane upper leaflet COM, the membrane lower leaflet COM, and the BA benzene ring COM. Based on these quantities, five key vectors are defined: from the membrane COM to the BA benzene ring COM ($\vec{d1}$), from the membrane COM to the oxygen atom of the BA ($-$OH) group ($\vec{d2}$), from the membrane COM to the oxygen atom of the BA (=O) group ($\vec{d3}$), from the BA benzene ring COM to the oxygen atom of the BA ($-$OH) group ($\vec{d4}$), and from the lower leaflet COM to the upper leaflet COM ($\vec{d5}$). The $X, Y, Z$ components of $\vec{d1}, \vec{d2}, \vec{d3}$ constitute 9 OPs. Each of the six angles $\theta_{X}, \theta_{Y}, \theta_{Z}, \omega_{X}, \omega_{Y},\omega_{Z}$ that $\vec{d4}$ and $\vec{d5}$ make with the $X, Y, Z$ axes of the simulation box is converted into its sine and cosine, giving 12 additional OPs. In total, a 21-dimensional input space for SPIB is constructed for BA-DMPC.
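A sketch of this OP construction (the function name and input conventions are illustrative assumptions; inputs are the 3-vectors named in the text):

```python
import numpy as np

def ba_dmpc_ops(mem_com, up_com, low_com, ring_com, o_oh, o_co):
    """Assemble the 21 BA-DMPC OPs: the X, Y, Z components of d1, d2, d3
    plus sines/cosines of the angles that d4 and d5 make with the box
    axes. All inputs are 3-vectors (COMs and atom positions)."""
    mem_com, up_com, low_com, ring_com, o_oh, o_co = map(
        np.asarray, (mem_com, up_com, low_com, ring_com, o_oh, o_co))
    d1, d2, d3 = ring_com - mem_com, o_oh - mem_com, o_co - mem_com
    d4, d5 = o_oh - ring_com, up_com - low_com
    ops = [d1, d2, d3]                                  # 9 Cartesian OPs
    for v in (d4, d5):
        cos = v / np.linalg.norm(v)                     # cos of angle with X, Y, Z
        sin = np.sqrt(np.clip(1.0 - cos ** 2, 0.0, None))
        ops += [sin, cos]                               # 12 angular OPs
    return np.concatenate(ops)

ops = ba_dmpc_ops([0, 0, 0], [0, 0, 1], [0, 0, -1],
                  [1, 0, 0], [1, 1, 0], [1, -1, 0])
```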
\subsection{SPIB augmented MD}
\label{sec:MD_details}
A complete workflow for applying SPIB to accelerate and interpret MD simulations is shown in Fig. \ref{fig:protocol}. Its key aspects are exemplified below in the context of the two systems studied here: (1) conformational transitions in (Aib)$_{9}$ peptide and (2) BA permeation through phospholipid bilayer.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Images/protocol.jpg}
\caption{\textbf{Flowchart illustrating our protocol for SPIB-based enhanced sampling.} Starting with short unbiased MD trajectories, SPIB is employed to learn an optimal RC, which is then used to enhance the sampling of rare events. Well-tempered metadynamics \cite{valsson2016enhancing} was the enhanced sampling method employed in this work.}
\label{fig:protocol}
\hfill
\end{figure}
The starting point of this protocol is a relatively short unbiased MD simulation that provides time-series of the features or OPs. Training an ANN model with SPIB involves feeding in these OPs together with the initial state assignments, with minimal use of human intuition, to construct a low-dimensional RC of the system.
The initial assignment of states can be carried out in at least two different robust manners with their respective strengths and weaknesses (see Sec. \ref{sec:Results} and SM). If one has \textit{a priori} structural information about the system, then this can be directly used in what we call a ``structural'' scheme. However, if no such information is known at all, one can simply partition the time-series into discrete labels \cite{hyvarinen2016unsupervised} which then serve as a ``temporal'' scheme for initial state assignment.
The RC is then utilized to perform well-tempered metadynamics and achieve enhanced sampling. Ideally, the length of the unbiased simulation used to learn system RC, should be long enough so that the rare event of interest has been seen at least once. However, for truly rare event systems such as ligand dissociation \cite{shekhar2021protein,PANT2020} this might be impossible, and in this case the protocol might need more rounds to converge to an optimized RC as shown in Fig. \ref{fig:protocol}. The metadynamics trajectory, appropriately reweighted, can be used to learn an improved RC and corresponding metastable states. In principle, this trajectory RC can also be used to perform further metadynamics to obtain even better sampling \cite{ribeiro2018reweighted,wang2019past,wang2021interrogating} as we do for the (Aib)$_{9}$ system (Sec. \ref{sec:Results}). Final metadynamics trajectories for both systems, with reweighting factors accounting for the bias \cite{tiwary2015time} are again analyzed through SPIB to identify the final RC and relevant metastable states which we report in Sec. \ref{sec:Results}. All MD simulations are performed by employing Nose-Hoover thermostats and Parrinello-Rahman barostats \cite{parrinello1980crystal, hoover1985canonical} using GROMACS 2020.2 \cite{van2005gromacs}, patched with PLUMED\cite{tribello2014plumed} 2.6.2. Further simulation details can be found in the SM.
\section{Results}
\label{sec:Results}
\subsection{Chiral transitions in (Aib)$_{9}$}
\label{sec:aib_result}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{Images/fig_3_4.jpg}
\caption{\textbf{Chiral transitions in (Aib)$_{9}$.} $\zeta$ time series for (a) unbiased MD, and (b) SPIB augmented metadynamics. Multiple back-and-forth transitions were captured in SPIB metadynamics compared to unbiased MD. (c) Free energy along the OP $\zeta$ highlighting the R, rlrrl, llrlr, and L metastable states, showing similar free energies for L and R, (d) free energy along the 2-d RC for time delay $\Delta t = 0.3$~ns, (e) converged state labels, and (f) the previously used\cite{biswas2018metadynamics} expertise-based RC $\zeta$ projected on the 2-d RC generated by SPIB.}
\label{fig:time_series}
\end{figure*}
We stay consistent with Ref. \onlinecite{biswas2018metadynamics} and define the fully left (L) handed and fully right (R) handed configurations of (Aib)$_{9}$ based only on the inner 5 amino acid residues. The system transitions from L to R and vice versa as the chirality of each of the inner 5 residues flips from one to the other. In subsequent discussions we implement the ``structural'' initial state assignment scheme for SPIB by discretizing along the dihedral angle $\phi$ of the inner 5 residues, leading to $2^5=32$ initial state labels. We also explore an alternative ``temporal'' initial state assignment scheme for (Aib)$_9$; relevant results for this approach are provided in the SM.
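The ``structural'' assignment can be sketched as a frame-wise binary encoding of the inner-residue chiralities. The particular bit convention below, mapping the fully right-handed conformation to label 0 and the fully left-handed one to 31, is an assumption consistent with the SPIB state indices quoted later in this section:

```python
import numpy as np

def structural_labels(phi_inner5):
    """One of 2^5 = 32 initial state labels per frame from the signs of
    the dihedral phi of the inner 5 residues (phi < 0, right-handed ->
    bit 0; phi > 0, left-handed -> bit 1)."""
    bits = (np.asarray(phi_inner5) > 0).astype(int)   # (n_frames, 5)
    return bits @ (2 ** np.arange(5))                 # binary code in 0..31

lab_R = structural_labels([[-1, -1, -1, -1, -1]])     # fully right-handed
lab_L = structural_labels([[1, 1, 1, 1, 1]])          # fully left-handed
```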
At first, we conducted a short $500$~ns unbiased MD simulation of (Aib)$_9$ in the NPT ensemble at 400~K and 1~atm. However, this is not long enough to observe R-L transitions, and the system remained trapped in the R helical conformation during the entire trajectory, as demonstrated in Fig. \ref{fig:time_series}(a) by the hybrid OP $\zeta \equiv \sum_{n=3}^{7}\phi_n$, which we refer to as the expertise-based RC.\cite{biswas2018metadynamics} As $\phi_n\approx-1$ for a right-handed residue and $\phi_n\approx1$ for a left-handed residue, $\zeta\approx-5$ and $\zeta\approx5$ correspond to the fully right (R) handed and fully left (L) handed states respectively.
From such an under-sampled unbiased trajectory, a 2-dimensional RC is learned by SPIB and used as the biasing variable to perform a $200$~ns metadynamics simulation. Even in this first round, we observe many new metastable states and a full back-and-forth transition between the two most stable states (L and R) (see SM). This biased trajectory is reweighted and used to run a second round of SPIB to determine a more informative RC. Subsequently, this improved RC is used as the biasing variable of metadynamics to perform a $700$~ns MD simulation. This trajectory contains multiple back-and-forth transitions between the two most stable states (L and R) as shown in Fig. \ref{fig:time_series}(b). We thus achieve a roughly 40-fold acceleration, measured as the number of back-and-forth transitions per unit time, after 2 rounds of SPIB-metadynamics. Fig. \ref{fig:time_series}(b) highlights the first $500$~ns of metadynamics; the complete $700$~ns simulation result is provided in the SM.
Fig. \ref{fig:time_series}(c) shows the free energy along $\zeta$, taking the reweighted metadynamics into account,\cite{tiwary2015time} highlighting two intermediate metastable state regions, x and y, between R and L. The rlrrl and llrlr (Aib)$_9$ conformations belonging to these intermediate metastable states are highlighted. This figure clearly shows the limitations of such an expertise-based RC: there is a huge degeneracy of conformations along $\zeta$, and it fails to provide a clear picture of the (Aib)$_9$ chiral transitions.
To interpret this $700$~ns MD trajectory, a third and final round of SPIB models was trained to learn the system RC and gain biophysical insights. While training SPIB models, the SPIB hyperparameter $\Delta t$ was used to neglect the fast modes. A range of different $\Delta t$ was chosen while training different models, and the number of converged states detected by SPIB decreased with $\Delta t$ (see SM). Fig. \ref{fig:time_series}(d) shows the free energy along the 2-d RC space for a particular $\Delta t = 0.3$~ns that detects a reasonable number of metastable states. The trained SPIB model detected 9 converged state labels for $\Delta t = 0.3$~ns. In Fig. \ref{fig:time_series}(e), the fully right (R) handed and fully left (L) handed states are classified as SPIB states 0 and 31 respectively, while the intermediate states between R and L are represented by the other 7 converged states. The remaining 23 SPIB states were found to have no significant population after training convergence. The projection of $\zeta$ on this 2-d RC space clearly demonstrates the strong correspondence between the RC learned by SPIB and the expertise-based RC ($\zeta$), as shown in Fig. \ref{fig:time_series}(f). Here, $RC_0$ is similar to $\zeta$ as it differentiates between the L and R metastable states, while $RC_1$ distinguishes between the metastable states that are degenerate in $\zeta$.
\subsection{BA permeation through phospholipid bilayer}
\label{sec:BA_result}
For this system we are guided by two simple pieces of intuition. First, a successful permeation event can occur when BA explores the direction of the membrane surface normal. Second, the polar region of BA likely plays a role in the permeation process by interacting with the membrane. Based on this intuition, initial states for BA-DMPC were assigned using ${d1_Z}$ and $\theta_Z$. To demonstrate that SPIB is capable of utilizing independent and discontinuous MD trajectories, we learnt a 1-d RC by combining two $25$~ns unbiased trajectories launched by initially placing BA on the two opposite sides of the membrane along the Z direction. In both simulations, BA gets trapped near the membrane surface region after entering the membrane (see SM).
A $500$~ns metadynamics run based on this 1-d RC achieves 3 complete permeation events, compared to none in unbiased MD, as shown in Fig. \ref{fig:BA-DMPC}(a). The free energy along ($d1_Z,\theta_Z$) highlights the tendency of BA to stay trapped near the membrane surface region upon entry (Fig. \ref{fig:BA-DMPC}(b)). The position of the key barrier acting against the permeation process was identified by the SPIB-learnt 1-d RC, as shown in Fig.~\ref{fig:BA-DMPC}(c). A relatively large time delay, $\Delta t=20$~ps, was chosen to recognize two metastable states and thus identify the key barrier.
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{Images/fig_5.jpg}
\caption{Time series data for (a) $500$~ns unbiased MD started from a configuration with BA placed outside the membrane, and SPIB-based metadynamics, (b) free energy along ($d1_Z, \theta_Z$) for the biased simulation, and (c) two converged SPIB states depicting the key permeation barrier. Permeation mechanisms are highlighted in (d, e, f), where in the respective cases BA has found an entry point into the membrane, is stuck at the surface along the +Z direction, and is stuck at the surface along the -Z direction from the membrane COM.}
\label{fig:BA-DMPC}
\end{figure*}
Additionally, SPIB augmented metadynamics enabled the study of interesting entry/exit mechanisms of BA, shown in Fig. \ref{fig:BA-DMPC}(d,e,f). Prior to entering the membrane, the polar region of BA begins interacting with the membrane surface and acts as an anchor. This enables the BA benzene ring to flip inside the membrane and enter. Once inside, the BA polar region keeps interacting with the headgroup region of the membrane, and the hydrophobic BA benzene ring positions itself close to the membrane center, where it interacts with the lipid tails. Finally, after overcoming the key barrier, which corresponds to its flipping within the membrane, BA reaches the other leaflet of the membrane and the BA polar region starts interacting with the membrane headgroup. Thus, ligand permeation here is a composite of entropic and enthalpic barriers, corresponding respectively to diffusion to the membrane and subsequent escape beyond the membrane surface. Notice that metadynamics helped here by significantly speeding up the movement across the energetic barrier (Fig. \ref{fig:BA-DMPC}(a)). The entropic process could be further improved by using SPIB in a method such as weighted ensemble, milestoning or forward flux sampling.\cite{zuckerman2017weighted, faradjian2004computing,defever2019contour}
It can be argued that traditional metadynamics with $d1_Z$ as the biasing variable can also generate membrane permeation events. However, the SPIB-learnt 1-d RC captures the underlying physics by taking contributions from all 21 OPs into account, including $d1_Z$. For example, Fig. \ref{fig:BA-DMPC}(c) shows that SPIB considered $\theta_Z$ contributions in addition to $d1_Z$ when identifying the key barrier. In this way, the resultant trajectory will reflect biophysically relevant events and avoid the brute-force entry that would likely be observed when using only $d1_Z$ as the biasing variable.\cite{lee2016simulation} This demonstrates a key strength of SPIB augmented metadynamics.
\section{Conclusion}
In this work, we have applied the recently developed SPIB framework\cite{wang2021state} of the RAVE class of methods\cite{ribeiro2018reweighted,wang2019past} to accelerate and understand two prototypical biophysical problems plagued by rare events that are inaccessible in microsecond-long unbiased MD. The two problems considered here are protein conformational dynamics in the 9-residue peptide (Aib)$_9$\cite{sittel2017principal, biswas2018metadynamics} and permeation of the small molecule benzoic acid through a phospholipid bilayer. For both systems, metadynamics based on the SPIB reaction coordinate was able to successfully and significantly accelerate the simulations. In addition, the SPIB augmented metadynamics helped us gain physical insights into the respective problems. For (Aib)$_{9}$, SPIB identifies the complex energetic landscape behind the chiral transition process, and for BA-DMPC, it discovers the position of the key permeation barrier. In the future, we aim to augment SPIB with other enhanced sampling schemes to effectively tackle biophysical problems across spatio-temporal scales. \newline
\textbf{Supplementary material\newline }
See supplementary material for system information, neural network architecture and other details. \newline
\textbf{Acknowledgements\newline }
Research reported in this publication was supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R35GM142719 (P.T.).
The content is solely the responsibility of the authors
and does not necessarily represent the official views
of the National Institutes of Health. The authors thank Deepthought2, MARCC, and
XSEDE (projects CHE180007P and CHE180027P) for
providing computational resources used in this work. The authors thank Dr. Eric Beyerle, Yihang Wang and Zachary Smith for helpful discussions and valuable insights. \newline
\textbf{Data availability statement\newline }
The data and codes that support the findings of this study will be made available through GitHub and PLUMED-NEST. \newline
\textbf{Conflict of Interest\newline}
The authors declare the following competing financial
interest(s): P.T. is a consultant to Schrodinger, Inc. and S.P. is currently an employee of Loxo Oncology {@} Lilly and is a shareholder of stock in Eli Lilly and Co. \newline \newline
\textbf{References}
\section{Approach}
Visual localization provides information about the motion of the camera
relative to structures in the surrounding environment through direct
observation of the changes in their projected position in the images.
This prevents accumulation of position and orientation errors as long as
the same global features can be kept visible. The position is extracted
through processing of the image information, and the pose (position and
orientation) change is calculated at frame-rates in the range of
30--120~Hz. This frequency limits the maximal possible dynamic motion of
the system to changes occurring with a frequency smaller than half of
that frame-rate.
In contrast, sensors like inertial measurement units (IMUs) rely solely
on physical effects within the sensor as a response to applied
velocities and accelerations. They provide a significantly higher
measurement rate, which for an inertial unit can reach values around
800--1000~Hz. Small errors in the estimate due to noise or external
disturbance cannot be compensated here through a global reference. They
cause drifts that accumulate during the integration of the consecutive
measurements. A visual system exhibits similar errors, but at a
significantly lower frequency, namely whenever the reference used to
measure the current position needs to be changed to a new landmark
(hand-off problem)~\cite{darius_loc_chapter}.
As Fig.~\ref{teaser::fig} shows, the reliable track area usable for
navigation does not provide unique matches for the tracking system that
estimates the motion of the train. Local matching strategies that work
reliably for flying systems and in the automotive domain cannot be
applied in railroad environments due to this strong self-similarity of
the local features.
To solve this problem, we propose the system architecture depicted in
Fig.~\ref{system:fig}. The navigation unit fuses the information from a
point-based structure-from-motion unit (SfM unit, Section~\ref{nav:sec})
with a unit correlating large areas of the tracks (correlation unit) to
robustly estimate the metric motion of the train. The dynamic motion
state of the train is currently estimated only from the fusion of the
optical unit with the Kalman Filter prediction. We plan to extend it
with the information provided by an additional inertial unit (IMU) to
allow capturing higher dynamic motions of lighter train setups.
\begin{figure}[ht]
\centering
\includegraphics[width=9cm]{./pics/system.pdf}
\caption{\label{system:fig} System architecture of the planned
navigation system. (SfM: Structure from Motion)}
\end{figure}
Our system calculates the pose changes from a monocular image
sequence. This sequence is passed to a correlation unit that estimates
the metric translational motion of the train from the motion of an image
template in the track area between the images of the sequence. The
details of this processing are presented in Section~\ref{mouse:sec}.
The rotational parameters and the direction of motion are calculated by
a modified SfM module, which additionally estimates the accuracy
of the current navigation result. This accuracy estimate is important
for correct fusion in the Fusion Unit and for the planned certification
of the system. This processing is presented in Section~\ref{nav:sec}.
The presented system cannot avoid long term drifts, because the
correlation unit and the SfM module rely only on local features that can
be used as reference only in limited space. Our system uses an
additional long focal length camera that identifies
April-Tags~\cite{aprilurl} placed instead of the usual identifiers along
the track. These tags are used to compensate possible drifts in the
navigation unit. They provide geo-tagged information about the position
of the train in the world.
The navigation unit~(Fig.~\ref{system:fig}) can further optimize the
calculation of the distance by freezing the reference frame~${\cal{I'}}_t$
(key-frame) for a number of following frames if the estimated velocity
is slow. Since the traveled distance is the integral of the responses
from the optical correlation, small detection errors usually integrate
to increasing drifts in the distance. Switching to the key-frame
processing lets the detection errors appear as noise overlaid on the
true distance instead of as accumulated drift (Section~\ref{mouse_result}).
\subsection{Robust Estimation of Metric Motion Parameters}
\label{mouse:sec}
Conventional visual SLAM approaches use the information from a sparse
point matching system in the camera images. The points are {\em tracked}
between the image pairs from the sequence or {\em matched} based on the
local information in the neighborhood of the points. The difference is
that while {\em tracking} assumes a local search around the expected
position, in which a local image patch is searched, {\em matching}
allows larger changes in the image position, because each point is
described by a more or less complex descriptor (SIFT~\cite{lowe},
AGAST~\cite{agast}).
While this processing works in most flying and automotive environments,
we need to be able to match the information in the area of the tracks,
whose very strong self-similarity leads to many mismatches between the
frames. We increase the uniqueness of the local environment by growing
the local region to the large area shown in Fig.~\ref{mousereg:fig}. We
match this template in the consecutive image using a
Sum-of-Squared-Differences (SSD) method from OpenCV. Because of its
similarity to an optical computer mouse, we refer to this module as the
``Train Mouse''.
\begin{figure}[ht]
\centering
\vspace{2ex}
\includegraphics[width=0.8\columnwidth]{./pics/mouse_template.png}
\caption{\label{mousereg:fig} The rectangular region shown in the left
image is rectified to the ``top-view'' image shown on the right. A
template in this image is searched in the consecutive image rectified in
the same way.}
\end{figure}
A homography matrix~$\tilde{H}$ that is used to calculate the rectified
image~$\cal{I'}$ in Fig.~\ref{mousereg:fig} right has the generic
structure~(\ref{hom:eq}):
\begin{equation}
\label{hom:eq}
\tilde{H}=\left(\tilde{R}+\frac{\vec{T}\vec{n}^T}{d}\right)
\quad \rightarrow \quad{\cal{I'}}=\tilde{H}\cdot {\cal{I}}
\end{equation}
The rotation matrix~$\tilde{R}$ describes the rotation between the
current orientation of the physical camera and the top-view orientation
of the rectified view. The vector~$\vec{T}$ describes the translation
between the images, which is zero in our case. Therefore, plane normal
vector~$\vec{n}$ of the tracks and the distance of the camera to the
tracks~$d$ become irrelevant here. We use the homography to rotate the
camera image~$\cal{I}$ to the top-view~$\cal{I'}$ orientation.
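For a pure rotation the homography of~(\ref{hom:eq}) reduces to the rotation matrix itself, which the following NumPy sketch makes explicit (the pitch angle between the physical camera and the virtual top-view camera is illustrative):

```python
import numpy as np

def topview_homography(R_cam_to_top):
    """Homography of Eq. (hom:eq) for a pure rotation: with T = 0 the
    track-plane normal n and the camera height d drop out, and H reduces
    to the rotation between the physical and the top-view camera."""
    T = np.zeros(3)
    n = np.array([0.0, 0.0, 1.0])   # track-plane normal (irrelevant for T = 0)
    d = 1.0                         # camera height (irrelevant for T = 0)
    return R_cam_to_top + np.outer(T, n) / d

theta = np.deg2rad(30.0)            # illustrative pitch angle
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
H = topview_homography(R)
ray = H @ np.array([0.1, 0.2, 1.0])  # map a pixel ray into the top view
pix = ray[:2] / ray[2]               # back to inhomogeneous image coordinates
```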
We search for a rectangular template with the size~$(x',y')$ from
the~${\cal{I'}}_t$ region of the first image in the
corresponding region~${\cal{I'}}_{t+1}$ using the SSD~template matching
method that searches for the maximum of the
function~(\ref{pixelssd:eq}):
\begin{equation}
\label{pixelssd:eq}
f(x_p,y_p)=\sum_{x',y'}({\cal{I'}}_t(x',y')-{\cal{I'}}_{t+1}(x_p+x',y_p+y'))^2
\end{equation}
The displacement~$(x_p,y_p)_t$ at the minimum of~$f(x_p,y_p)$
gives the horizontal and vertical image motion of the template between
the images with pixel accuracy. The search for the current~$(x_p,y_p)_t$
can be accelerated by using a prediction of these values. In the generic
case, the system needs to check the entire possible range
of~$\{x_p,y_p\}$ that covers the entire possible velocity profile, which
is a computationally intensive operation. Due to the high inertia of the
train, these values change only slightly between consecutive frames.
We can therefore reduce the search for the correct placement of the
template to a small band around the previous~$(x_p,y_p)_{t-1}$ values.
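A minimal NumPy stand-in for this banded SSD search (Eq.~(\ref{pixelssd:eq}) is minimized over a small band around the predicted displacement; template position, sizes, and band width below are illustrative):

```python
import numpy as np

def ssd_match(prev, curr, x0, y0, w, h, band=3, cx=0, cy=0):
    """Find the displacement (x_p, y_p) minimizing the SSD of
    Eq. (pixelssd:eq). The search is restricted to +/- `band` pixels
    around the prediction (cx, cy), exploiting the train's inertia."""
    tmpl = prev[y0:y0 + h, x0:x0 + w]
    best, best_xy = np.inf, (0, 0)
    for yp in range(cy - band, cy + band + 1):
        for xp in range(cx - band, cx + band + 1):
            cand = curr[y0 + yp:y0 + yp + h, x0 + xp:x0 + xp + w]
            if cand.shape != tmpl.shape:
                continue                      # candidate left the image
            ssd = np.sum((tmpl - cand) ** 2)
            if ssd < best:
                best, best_xy = ssd, (xp, yp)
    return best_xy

# synthetic check: the second frame is the first one shifted down by 2 px
rng = np.random.default_rng(1)
I0 = rng.random((40, 40))
I1 = np.roll(I0, 2, axis=0)
shift = ssd_match(I0, I1, x0=10, y0=10, w=15, h=15, band=3)
```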
We can calculate a more accurate displacement of the template between
the images by applying a sub-pixel alignment of the templates. If the
remaining change between both images is below one pixel, then we can use
a first-order Taylor series expansion to approximate the brightness
change at a specific pixel~${\cal{I'}}(x,y)$:
\begin{gather}
{\cal{I'}}_t(x+\delta x,y+\delta y)\approx\\ \nonumber
{\cal{I'}}_t(x,y)+
\frac{\partial {\cal{I'}}_t(x,y)}{\partial x}\delta x+
\frac{\partial {\cal{I'}}_t(x,y)}{\partial y}\delta y
\end{gather}
If we assume that the new image~${\cal{I'}}_{t+1}$ is a result of a
sub-pixel motion~$(\delta x,\delta y)$ then we can estimate from the
equation:
\begin{gather}
\nonumber{\cal{I'}}_{t+1}(x,y)-{\cal{I'}}_t(x,y)\approx\\
\label{partial:eq}
\frac{\partial {\cal{I'}}_t(x,y)}{\partial x}\delta x+
\frac{\partial {\cal{I'}}_t(x,y)}{\partial y}\delta
y=\vec{\cal{G}}^T\cdot\delta \vec p=
||\vec{\cal{G}}||\cdot||\delta\vec p||\\
\mbox{with} \quad\vec{\cal{G}}=\left(\frac{\partial
{\cal{I'}}_t(x,y)}{\partial x}, \frac{\partial
{\cal{I'}}_t(x,y)}{\partial y}\right)^T\nonumber
\end{gather}
We see that once we have calculated the gradient vector~$\vec{\cal{G}}$ from
the previous image, we can calculate the sub-pixel update of the motion
in the horizontal and vertical directions~$(\delta x,\delta y)$ by
decomposing the motion~$||\delta\vec p||$, measured along the gradient
direction, according to the horizontal and vertical ratios of~$\vec{\cal{G}}$.
We calculate the resulting shift as an average of responses within the
template. It is obvious from~(\ref{partial:eq}) that only pixels with
a difference in brightness between the images contribute to the motion
estimation. We reduce the sensitivity to noise by using only pixels with
the gradient above a threshold
$||\vec{\cal{G}}||>\epsilon_G$, which is tuned depending on the expected
camera noise.
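The per-pixel decomposition and averaging described above can be sketched as follows (a NumPy toy of Eq.~(\ref{partial:eq}), verified on a linear brightness ramp; the threshold value is illustrative):

```python
import numpy as np

def subpixel_shift(prev, curr, eps_g=1e-3):
    """Sub-pixel displacement following Eq. (partial:eq): the temporal
    brightness change at every pixel is decomposed along the local image
    gradient, and the per-pixel estimates are averaged over all pixels
    with gradient magnitude above eps_g."""
    gy, gx = np.gradient(prev)               # image gradients (axis 0 = y)
    gn2 = gx ** 2 + gy ** 2
    mask = np.sqrt(gn2) > eps_g              # suppress noise-dominated pixels
    dI = curr - prev
    dx = dI[mask] * gx[mask] / gn2[mask]     # horizontal gradient ratio
    dy = dI[mask] * gy[mask] / gn2[mask]     # vertical gradient ratio
    return dx.mean(), dy.mean()

# synthetic check: a linear ramp observed after a 0.3-pixel motion along x
yy, xx = np.mgrid[0:20, 0:20]
prev = 0.5 * xx
curr = 0.5 * (xx + 0.3)                      # I_{t+1}(x, y) = I_t(x + 0.3, y)
dx_est, dy_est = subpixel_shift(prev, curr)
```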
The resulting average image motion~$(\Delta x,\Delta y)$ can be linearly
scaled to the forward and side-wards metric velocities with knowledge of
the mounting height~$L$ above the ground.
The metric values of the forward velocity~$v_l$ and the side-wards
motion~$v_s$ (due to curves in the route) follow from the {\em similar
triangles} relation between the camera projection on the image plane and
the height~$L$ of the rectified camera providing the image~${\cal{I'}}$:
\begin{gather}
\Delta x_i=x_p+\delta x, \quad \Delta y_i=y_p+\delta y\nonumber\\
\label{img_mot:eq}
v_l=\frac{L\cdot p_y}{f\cdot t_f}\Delta y_i, \quad
v_s=\frac{L\cdot p_x}{f\cdot
t_f}\Delta x_i
\end{gather}
The image displacement is scaled with the focal length~$f$, the metric
pixel size~$(p_x,p_y)$, and the time interval between two frames~$t_f$,
as shown in~(\ref{img_mot:eq}). The improvement achieved with the
extension to sub-pixel accuracy is shown in Section~\ref{mouse_result}.
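Equation~(\ref{img_mot:eq}) amounts to a simple linear scaling, sketched below (all camera parameters are illustrative placeholders, not the values of our setup):

```python
import numpy as np

def metric_velocity(dx_img, dy_img, L=0.6, f=0.008,
                    px=4.65e-6, py=4.65e-6, tf=1.0 / 30.0):
    """Scale the total image displacement (Eq. img_mot:eq) to metric
    velocities via similar triangles. Illustrative parameters:
    L mounting height [m], f focal length [m], (px, py) pixel pitch [m],
    tf frame interval [s]."""
    v_l = L * py / (f * tf) * dy_img     # forward velocity [m/s]
    v_s = L * px / (f * tf) * dx_img     # side-wards velocity [m/s]
    return v_l, v_s

v_l, v_s = metric_velocity(dx_img=0.0, dy_img=10.0)  # 10 px forward per frame
```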
A possible error in the estimate of the traveled global
distance~$(x_g,y_g)$ can occur due to the noise in the
brightness information~${\cal{I'}}_t(x,y)+\nu_i$. Since the global shift
is an integral (sum) of the consecutive steps~$(\Delta x,\Delta y)$, the
error accumulates fast in each step. The resulting shift in
each step is estimated as an average response of all significant
brightness changes within the templates. The statistical distribution of
the error helps to reduce the error in the final estimates. This can be
pushed even further by tracking a template not only between consecutive
images but over a longer period of time. The reference template from the
original image, which we will refer to as {\em keyframe} in the
following text, is used to estimate the shift in multiple following
frames. This is done until the template moves out of the area warped in
the convolution step above. This processing introduces the brightness
noise-related error only once in the navigation process, instead of being
added multiple times with each new delta step. The length of the
sequence, in which a keyframe can be used, depends directly on the
current speed of the train. We will see in Section~\ref{nav:sec} that
this processing has an additional advantage on the motion estimation
process.
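The benefit of keyframes for error accumulation can be illustrated with a one-dimensional toy model (all numbers hypothetical; in the real system the keyframe switch is triggered by the template leaving the warped area, not by a fixed count):

```python
def estimate_keyframed(n, step, noise, K):
    """Toy model of the keyframe processing: each frame's displacement is
    measured relative to a keyframe that is replaced every K frames. With
    K=1 this degenerates to frame-to-frame chaining, where measurement
    noise enters the integral at every step instead of once per keyframe."""
    g, k, est = 0.0, 0, 0.0   # global estimate at keyframe, keyframe index
    for i in range(1, n + 1):
        meas = (i - k) * step + noise[i - 1]  # noisy shift w.r.t. keyframe
        est = g + meas
        if i - k == K:                        # switch to a new keyframe
            g, k = est, i
    return est
```

With a constant per-measurement bias, chaining every frame accumulates four times the drift of a keyframe sequence length of 4, matching the intuition above.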
A significant advantage of adding a separate estimation of the forward
motion is the possibility of estimation of the typically unobservable
motion error~$\sigma_z$ along the optical axis~Z. We are able to estimate this
error from the~$\Delta y_i$ responses of all~N contributing pixels in the
template with the property~$||\vec{\cal{G}}||>\epsilon_G$ as:
\begin{gather}
\label{sigma:eq}
\sigma_z^2=\frac{1}{N}\sum_{i\in ||\vec{\cal{G}}||>\epsilon_G} (\Delta
y_i-\overline{\Delta y})^2 ,\; \overline{\Delta y}=\frac{1}{N}\sum_{i\in ||\vec{\cal{G}}||>\epsilon_G} \Delta y_i
\end{gather}
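In code, (\ref{sigma:eq}) is simply the biased sample variance of the per-pixel responses (a sketch):

```python
def sigma_z_sq(dy_responses):
    """Biased sample variance of the per-pixel forward responses dy_i
    (sigma:eq); its spread estimates the otherwise unobservable error
    along the optical axis Z."""
    m = sum(dy_responses) / len(dy_responses)
    return sum((d - m) ** 2 for d in dy_responses) / len(dy_responses)
```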
\subsection{Robust Key-frame-based Monocular Motion Estimation}
\label{nav:sec}
The {\em keyframe processing} introduced in the previous section
also improves the accuracy of the estimation of the direction of motion
in the monocular Essential matrix decomposition~\cite{Zisserman}. Our
problem with the matching of features between images of the sequence is
the significant self-similarity of the observed features
(Fig.~\ref{similarity:fig}). Typical matching algorithms like SURF,
BRISK, and KAZE find multiple matching candidates for a tracked point.
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{./pics/gleis_detect.png}
\caption{\label{similarity:fig} The strongest matching candidates are often
not the correct correspondences for feature points in the track area.}
\end{figure}
The selection of the correct matching candidate can be greatly
simplified. Since the previous optical correlation step found the planar
direction of motion, which represents the first~$T_x$ and the last~$T_z$
parameters of the motion vector, we can estimate the horizontal position
of the epipole in the image. The direction of the motion
vector~$\vec{T}$ defines the position of the intersection point of all
optical flow lines, which are segments of the corresponding epipolar
lines (see Fig.~\ref{epipole:fig}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{./pics/epipole_direct.png}
\caption{\label{epipole:fig} While the camera is moving along the
vector~$\vec{T}$, the tracked points move along the dashed epipolar lines
in the second image frame.}
\end{figure}
Our system uses the predicted value for the rotation matrix~R that is
calculated in the fusion framework of the {\em Navigation Unit}
(Fig.~\ref{system:fig}). We rotate all matched points~$\vec{p_i}=(u_i,v_i,f)^T$
by this matrix to a rotation-compensated version~$\vec{p}'_i$:
\begin{equation}
\vec{l}=\tilde{R}^T\cdot\vec{p_i}, \quad
\underline{\vec{p}'_i=\frac{f}{l_z}\cdot \vec{l}}
\end{equation}
The resulting optical flow has just the translational component, which
intersects in the expected epipole. Since the rotation is just a
prediction, we allow the optical flow lines to deviate from this epipole
by a small pixel value. An example of the compensated optical flow
field can be found in Fig.~\ref{teaser::fig}. We choose from the
matching pool those matches that point towards the expected epipole.
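A sketch of the rotation compensation and of a possible directional filter (the angular-tolerance criterion below is an assumed stand-in for the pixel-distance test described in the text):

```python
import numpy as np

def rotation_compensate(p, R, f):
    """Remove the predicted rotation R from an image point p = (u, v, f)
    and re-project onto the image plane: l = R^T p,  p' = (f / l_z) l."""
    l = R.T @ p
    return f / l[2] * l

def points_to_epipole(p_start, p_comp, epipole, tol_deg=5.0):
    """Keep a match if its rotation-compensated flow vector is aligned with
    the direction to the expected epipole within a small angular tolerance
    (hypothetical criterion illustrating the filtering idea)."""
    v = p_comp[:2] - p_start[:2]
    e = epipole - p_start[:2]
    c = abs(v @ e) / (np.linalg.norm(v) * np.linalg.norm(e) + 1e-12)
    return bool(c > np.cos(np.radians(tol_deg)))
```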
Once the correct matches between features in both images are found, we
estimate the new, corrected~$\tilde{R}$ and the direction of the motion
vector~$\vec{T}$ using processing similar to the standard {\em calc\_pose()} method from
OpenCV, but without the RANSAC part. The filtering was already done in a
deterministic way. The proposed novelty is the additional filtering of
wrong correspondences based on the expected epipole, as described above.
Without this processing, the solution becomes ambiguous, especially in
strongly limited visible space.
An important final step in the processing is the calculation of the
covariance of the result. We estimated the metric~$\sigma_z$ component
already in~(\ref{sigma:eq}). We estimate the remaining two components from
the distances by which the lines associated with the flow segments
miss the epipole. For the $i$-th flow vector with start and
end-point~$(\vec{p}_{si},\vec{p}_{ei})$, we can estimate the epipole
point~$\vec{x}_e$ as:
\begin{gather}
\vec{k}_i=\left(\begin{array}{c} k_u\\
k_v\end{array}\right)=\vec{p}_{ei}-\vec{p}_{si}\nonumber, \quad
\vec{n}_i=\left(\begin{array}{c} -k_v\\ k_u\end{array}\right)\\
\tilde{A}=\left(\begin{array}{c} \vec{n}_1^T\\ \vdots\\
\vec{n}_k^T\end{array}\right),\quad
\vec{b}^T=\tilde{A}\cdot\left(\vec{p}_{s1},
\dots,\vec{p}_{sk}\right),\quad
\underline{\tilde{A}\cdot\vec{x}_e=\vec{b}}
\label{pseudo:eq}
\end{gather}
The epipole position~$\vec{x}_e$ can be estimated using a pseudo-inverse
of the non-square matrix~$\tilde{A}$ in~(\ref{pseudo:eq}).
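The least-squares construction of (\ref{pseudo:eq}) can be transcribed as follows; here $b$ is formed line-by-line as $b_i=\vec{n}_i\cdot\vec{p}_{si}$, which is the intended reading of the compact matrix notation above:

```python
import numpy as np

def epipole_from_flow(starts, ends):
    """Least-squares intersection of flow lines: each line through p_si
    with direction k_i = p_ei - p_si contributes its normal n_i^T as a
    row of A and b_i = n_i . p_si; the pseudo-inverse solves A x_e = b."""
    starts = np.asarray(starts, float)
    ends = np.asarray(ends, float)
    k = ends - starts
    A = np.stack([-k[:, 1], k[:, 0]], axis=1)   # rows are the normals n_i^T
    b = np.einsum('ij,ij->i', A, starts)        # b_i = n_i . p_si
    x_e, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x_e
```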
Essential information for the fusion in the {\em Navigation Unit} is the
covariance of the estimated value. It helps to assess the current
uncertainty in the measurement.
We calculate the
closest-distance vector~$\vec{\delta p}_i$ for each optical flow line~$(\vec{p}_{si},\vec{p}_{ei})$
from~$\vec{x}_e$, using the~$\Delta x$ result from~(\ref{img_mot:eq}) for
scaling from pixel values to meters, as:
\begin{equation}
\vec{\delta_p}_i=\frac{\Delta x\cdot p_x}{f}\left[\vec{n}_i^T\cdot\left(\vec{x}_{si}-\vec{x}_e\right)\right]\cdot\vec{n}_i
\end{equation}
The resulting covariance matrix~$\tilde{P}$ in the xy-plane from~k optical flow lines is
constructed as:
\begin{equation}
\tilde{P}=\frac{1}{k}\sum_{i=1}^k \vec{\delta_p}_i\cdot \vec{\delta_p}_i^T
\end{equation}
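A possible transcription of the covariance computation (the normals are unit-normalized here, an assumption the equations leave implicit; {\tt scale} stands for $\Delta x\cdot p_x/f$):

```python
import numpy as np

def flow_covariance(starts, ends, x_e, scale):
    """Planar covariance of the epipole fit: for every flow line, the
    closest-approach (miss) vector delta_p_i from the epipole, scaled
    from pixels to meters, and P = (1/k) * sum delta_p_i delta_p_i^T."""
    starts = np.asarray(starts, float)
    ends = np.asarray(ends, float)
    k = ends - starts
    n = np.stack([-k[:, 1], k[:, 0]], axis=1)
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # unit normals
    d = scale * np.einsum('ij,ij->i', n, starts - np.asarray(x_e, float))
    miss = d[:, None] * n                           # vectors delta_p_i
    return miss.T @ miss / len(starts)
```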
The {\em keyframe processing} helps, similarly to the previous section, to
reduce the error when switching to new references by significantly
reducing the number of switches. Additionally, the flow line-segments
in the images become longer. If we assume a constant detection error for
the flow endpoints in the images, then longer lines are less sensitive to
orientation changes due to the detection error.
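This sensitivity claim can be quantified: the worst-case orientation error of a flow line is $\arctan(\epsilon/\ell)$ for a perpendicular endpoint error $\epsilon$ and line length $\ell$, so it shrinks roughly inversely with the line length. A small sketch:

```python
import math

def orientation_error_deg(length, endpoint_err):
    """Worst-case orientation error (degrees) of a flow line of the given
    pixel length whose endpoint is displaced perpendicularly by
    endpoint_err pixels."""
    return math.degrees(math.atan2(endpoint_err, length))
```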
\subsection{Drift Compensation from Global Landmarks}
\label{drif:sec}
Since (visual) odometry or IMUs only provide a relative localization, global landmarks are required for obtaining world coordinates. These relative algorithms also suffer from an accumulating offset and an unknown initial condition, which leads to a drift from the real position. To compensate for that drift, other sensors need to be included in the sensor system. Using GNSS is the most promising approach; however, in areas without GNSS coverage (e.g. in tunnels or valleys) another approach is needed. As such an alternative, vision-based solutions such as April tags (or similar tags or signs) mounted on the poles of the catenary, whose positions are known in world coordinates with an accuracy within 10 cm, can be successfully used. The global pose of the camera relative to the tags can be easily derived. Further vision-based approaches could use other global and fixed landmarks of the environment (e.g. points).
\section{Conclusions and Future Work}
\label{conclusion:sec}
We presented a system that represents an approach to dealing with the specific
requirements of unstructured dynamic railroad scenarios. The system
shown in Fig.~\ref{system:fig} has a modular structure with modules
that provide the dynamic motion updates with varying update rates and
drift properties. In our current application, the train dynamics is slow
enough due to the stiff suspension of the trains to observe the dynamics
with the 60Hz update rate of our monocular camera system. We plan to
extend it to more agile train suspensions that will require a faster
dynamic update and the addition of an IMU unit shown in
Fig.~\ref{system:fig}.
Our main contribution is three-fold:
\paragraph{Low-Level Matching under Strong Self-Similarity} - we adapted
the low-level vision unit to cope with the ambiguous world of the
railroad environment with very strong self-similarity between the local
objects (stones, screws, etc.). We extended the local descriptor to the
entire track area under the planarity assumption for the track bed. This
makes the matching system more robust. The vanishing points from the
motion estimate also allow robust filtering of correct landmarks for the
SfM module without any random selections, as is the case for RANSAC
systems. This is an essential prerequisite for making the
system verifiable for the SIL~4 requirements.
\paragraph{Calculation
of Error Covariance} - our system calculates not only the current pose
change but also the confidence of the result as a covariance matrix.
This allows, on the one hand, better monitoring of the QoS (Quality of
Service), and at the same time it essentially improves the convergence
properties of the fusion network in the Navigation Unit
(Fig.~\ref{system:fig}). The processing also allows weighting of the used
optical flow vectors depending on their reliability (length in the
image). This is our next step to improve the accuracy of the SfM module.
\paragraph{Key-frame Processing} - instead of a bundle-adjustment step
common for most of the SLAM approaches, we apply the key-frame
processing idea that reduces the number of reference changes during
operation of the unit. The specific problem of railroad environments is
the strong occlusion of distant features, which requires focusing on the
track-bed itself as the navigation area. We currently compensate for drifts
with artificial landmarks, e.g., April tags along the way, but we plan
to use a system that will try to re-identify distant objects (once they
come into view again after a train passes) that will also allow us to
compensate for the drift more efficiently.
With the above shown ``optical navigation'' the following properties were achieved for the visual sensors:
\begin{enumerate}
\item skid-free odometer (visual odometry)
\item ``visual'' balise (detecting April tags)
\item incremental motion
\item track selectivity
\item global pose reference in six dimensions (combined with a map) (visual localization)
\item real-time capability of the image processing @ 60 Hz
\end{enumerate}
The ``optical navigation'' presented here has the advantage that it is deterministic and does not require machine learning algorithms (for example neural networks or deep learning). No SIL4 application based on machine learning has yet been approved by a relevant certification body.
This ``optical navigation'' opens up the opportunity to provide evidence of safe and secure image processing for the localization. Since the image processing can provide incremental motion as well as a global pose reference, it is a potential sensor for a safe sensor fusion, but to guarantee diversity it must be combined with other sensors (for example an IMU).
Interoperability within the European railways and international standardization have to be achieved after the sensor fusion is proven and the first approval through a relevant certification body has been accomplished.
The data acquisition took place in a vision-friendly environment with good weather conditions. Dealing with worse weather conditions or other environments is therefore beyond the scope of this paper and will be the focus of the next steps.
\section{Experimental Results}
\label{result:sec}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{./pics/measurement_train.png}
\caption{\label{mtrain:fig} SBB measurement train and camera setup in locomotive (Re 420) and control car}
\end{figure}
The setup for the trial was a specific train run organized by SBB,
where cameras were installed on the windscreens of the locomotive and
the control car (see Fig.~\ref{mtrain:fig}). There were two cameras
each, one for the ``Train Mouse'' and one for the April tags mounted on
the poles of the catenary. The 10-bit NIR cameras with a resolution of
1280 by 1024 pixels were used at a frame rate of 60 fps. The cameras
were supported by IR-illuminators to overcome tunnels. All cameras were
calibrated using the Bouguet toolbox with a 9x6 checkerboard with 8cm by
8cm tile size. The SBB telecom measurement wagon in the middle of the
train composition provided the position reference, as it was equipped
with a DGNSS solution combined with an IMU (high-performance ring-laser
gyro). The approx. 20-minute trip between Ostermundigen and Thun was
repeated four times in order to reproduce results with different speeds
up to 140 km/h and different scenarios (e.g. occlusion by other trains).
This route was chosen because it also contains the 8km long fiber optic
sensing (FOS) test track and enabled a comparison of the results of the
different localization sensors. The route consists mostly of two
parallel tracks and 4 railway stations where there are several points and
up to 6 parallel tracks.
\subsection{Accuracy of the Correlation Approach (``Train Mouse'')}
\label{mouse_result}
Fig.~\ref{subpixel:fig} depicts the necessity of including the sub-pixel
motion estimation in the metric motion estimation system. This allows
an early notification about the train setting in motion, even before the
human eye can observe it. It is also very important at higher velocities,
where a change of one pixel in motion between two frames corresponds to
multiple km/h at typical speeds of up to 140 km/h.
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\columnwidth]{./pics/subpixel1.png}
\includegraphics[width=0.48\columnwidth]{./pics/subpixel2.png}
\caption{\label{subpixel:fig}Comparison between velocity estimation
without (blue) and with (red) sub-pixel optimization.}
\end{figure}
The optical correlation system (``Train Mouse'') achieves an accuracy
beyond the capabilities of the existing mechanical and GNSS sensors
(Fig.~\ref{subpixel:fig}). It is possible to see small velocity
changes, which can be used to analyze changes in the dynamic state of
the train if multiple units are distributed over the length of the
train. We can observe changes on the track, e.g. due to oscillations of
the train control system. It is to our knowledge the first system in
the rail domain operating at velocities equal to or higher than 140 km/h,
owing to the successful solution of the matching problem through
correlation.
The estimated profiles tracked over one of our test runs are depicted in
Fig.~\ref{mousedist:fig}. The tracked velocity was confirmed with GNSS
measurements in areas where GNSS reception was available. We show
the comparison in Fig.~\ref{mousedist:fig}~(middle).
\begin{figure}[ht]
\centering
\includegraphics[height=2.5cm]{./pics/google.png}
\includegraphics[width=0.5\columnwidth]{./pics/mouse_shortRange_w1050_nocut.png}
\includegraphics[width=0.3\columnwidth]{./pics/drift_2.png}
\caption{\label{mousedist:fig}(Left) Overlay of estimated route (blue)
and GPS estimate(red);(middle)Velocity plot from the Optical Correlation
(``Train Mouse'') Module. The steps due to the pixel quantization show
the necessity for sub-pixel processing (blue) compared to pixel-accurate
SSD only method (red);
(right) Reduction of the drift accumulation through longer keyframe
sequences in optical correlation (``Train Mouse'').}
\end{figure}
Extending the number of frames in which the same template is
tracked results in a significant reduction of drift
(Fig.~\ref{mousedist:fig}, right).
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{./pics/dist_travel.png}
\caption{\label{drift2:fig}
Agreement between the GPS measurement and distance estimated with
``Train Mouse''.}
\end{figure}
The additional drift accumulated over a distance of 855m was 0.84m
for a system with 1/4 fewer reference frame switches. In 22 seconds the
system made $30\cdot 22$ switches in the (green) case of continuous reference updates
and $15\cdot 22$ switches in the (blue) case of a keyframe sequence length of
4. The decrease in the measured distance is $-1.31$m for the blue and
0.49m for the green case. We see that the blue curve traveled a shorter
distance than the continuously switching case, which corresponds to the
zero line in Fig.~\ref{mousedist:fig}~(right).
Fig.~\ref{drift2:fig} shows a good estimate of the traveled distance
compared to the GPS measurement, based entirely on the ``Train Mouse''
without compensation with April tags. We see how different parameters, like
focal length, height above the ground, and gauge distance, influence the
estimate. We are currently working on an on-line re-calibration of this error.
\subsection{Performance of the Motion Estimator}
The system was run on a quad-core Intel i5 at 3.1GHz with an NVIDIA
GTX1080 for low-level image processing. The system was able to do
per-frame calculations in the range of 7-11ms/frame, which allows online
processing of the 60Hz image streams from the camera.
Fig.~\ref{visodo:fig} shows the system processing along the route for
the case of an open space and a curve motion. The map plotted in red
next to the visualization window corresponds to the shape of the route in the
local region. The accuracy of this part of the processing was already
successfully verified in~\cite{darius_loc_chapter}. Our current test was
to see how many optical flow vectors can explain the current motion of
the train while passing the epipole not further than 0.5 pixels away. The white
line segments in Fig.~\ref{visodo:fig} show a very large number of such
segments with a large spread over the image. This results in a very
small drift in motion orientation~\cite{elmarIros}.
\begin{figure}[ht]
\centering
\includegraphics[height=2.6cm]{./pics/visodo1.png}
\includegraphics[height=2.6cm]{./pics/visodo2.png}
\caption{\label{visodo:fig} Real-time calculation of filtered flow for
higher velocities (left) and curve motion (right).}
\end{figure}
We see in Fig.~\ref{visodo:fig} that the epipole prediction from the
computation in the Optical Correlation (``Train Mouse'') module can
successfully be used to filter correct correspondences that capture the
ego motion of the train with the point of expansion in the intersection
point of the ego-velocity vector with the image plane (epipole).
In comparison to current systems like the one in~\cite{siegwart}, we can keep up with speeds greater than 140 km/h. There is nearly no influence from other objects, since we only rely on a small track area in front of the rail vehicle. Using NIR cameras in our approach weakens the effect of shadows and sunlight. With the ``train mouse'' we derive the gain for the z-axis directly from the known track gauge width, avoiding gain drifts as in~\cite{siegwart}.
\subsection{Fusion of Navigation Data from Global Landmarks}
Using sensor fusion techniques (e.g. Kalman filtering based on a train model), the relative positions can be combined with the world coordinates, thus effectively compensating for drift and filtering out outliers and noise. Using an IMU together with the train model will provide even more robust results. In addition, an accurate and trusted topological map of the tracks can be used to further improve accuracy and to determine the integrity of the position information.
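As an illustration of the fusion idea, consider a one-dimensional Kalman predict/update cycle in which relative visual odometry drives the prediction and an absolute April-tag fix corrects the drift (a minimal sketch, not the real Navigation Unit; all variances are hypothetical):

```python
def kalman_fuse(x, P, d_odo, q, z_lm=None, r=None):
    """One predict/update cycle of a scalar Kalman filter: the relative
    visual-odometry step d_odo drives the prediction (variance grows by
    q); an absolute landmark fix z_lm with variance r, when available,
    corrects the accumulated drift."""
    x, P = x + d_odo, P + q          # predict: integrate relative motion
    if z_lm is not None:             # update: absolute landmark observed
        K = P / (P + r)              # Kalman gain
        x, P = x + K * (z_lm - x), (1.0 - K) * P
    return x, P
```

Between landmarks the variance grows step by step; each absolute fix shrinks it again, which is exactly the drift-compensation behavior described above.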
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{./pics/Tag_results.png}
\caption{\label{tag_results:fig} Results of pose estimation for an April tag.}
\end{figure}
In Figure~\ref{tag_results:fig} the pose estimation results for a given April tag when passing by with the measurement train are shown. It can be seen that the perpendicular distance from the pole to the track (X) is nearly constant, as is the height (Y), while passing an April tag mounted on a pole of the catenary. The table shows the values for 9 consecutive image frames while the train covers a distance of approx. 5 m.
\section{Motivation}
The increasing demand on public transportation requires an increase in
train density, which is beginning to reach the capacity of the
conventional train infrastructure. The infrastructure based on balises
and signals has a fixed segment size that can accommodate just one train,
with an empty segment in-between the trains. The increasing density
requires switching to a more flexible infrastructure, which is able to
localize the train within the train route and check the train
integrity. It is important that the entire train leaves a specific
area and no train cars are left behind. To exploit the potential of new
technologies, SBB, BLS, Schweizerische Südostbahn AG (SOB), Rhätische
Bahn (RhB), Transports publics fribourgeois (TPF) and the Association
of Public Transport (VöV) have joined forces in the SmartRail 4.0
program. With the SmartRail 4.0 program, the Swiss Railways want to
further increase capacity and safety, use the railway infrastructure
more efficiently, save costs, and maintain the competitiveness of
the railways in the long term. SmartRail 4.0 has the ambition to
achieve a substantial improvement in the core of railway production.
Railway production includes all resources, systems and processes for
planning and safely executing movements on the railway infrastructure.
More capacity is to be made available on the existing track
infrastructure, for which a more precise and safe localization of
rail-bound vehicles is absolutely necessary.
Localization is essential in the field of control and safety technology
for the railway operation. Today, the localization of rail-bound
vehicles is based on the artificial infrastructure consisting of track
clearance sensors, balises in the track or signals in the event of a
fault. Disadvantages are the high costs of these outdoor facilities and the
suboptimal use of the line capacity due to the necessity of segment-wise
operation. Today, absolute localization is only solved for specific
use-cases in certain areas, for example in the ETCS Level 2 corridors at
a speed above 40 km/h.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\columnwidth]{./pics/image_no_train.png}
\includegraphics[width=0.48\columnwidth]{./pics/image_passing_train.png}
\caption{\label{teaser::fig} Strong dynamic occlusions in railroad
scenarios: (left) tracking data in static
environment; (right) navigation under strong dynamic occlusion}
\end{figure}
In order to be able to remove the additional track-side infrastructure,
the precise and safe localization unit must be available on the vehicle.
The challenge that requires additional research beyond the current
state-of-the-art methods is that such a unit needs to provide verifiable
results, which does not allow the application of deep learning methods
and forbids even the application of standard methods, like RANSAC, due
to their randomized processing with slightly varying results on the same
inputs. Additionally, the reliable static background information is
limited to planar surfaces with the highly self-similar and
repetitive structure of the gravel. This prevents applications of
stereo-based systems, which rely on clear 3D boundaries of planes, and of
traditional sparse systems like ORB-SLAM, because the local features are
not unique. It also requires powerful Future Railway Mobile Communication System
(FRMCS) technology to monitor train integrity and send the exact
position to the central interlocking~\cite{germann}. The development
of localization is a decisive factor in the evolution of digitalization
in the field of control and safety technology of railway systems.
Additionally, the use of localization can also trigger a performance
boost in today's digital interlocking technology. With the integration
of the new, precise localization technology, the operational performance
of today's control and safety technology can be increased in multiple
domains. A standstill detection would make it possible to avoid track
closures due to a lack of slip paths and thus achieve a more efficient
use of the existing facilities. If today's infrastructure- and
odometry-based localization can be further developed into a continuous,
object-side, autonomous SIL4 localization, enormous opportunities will
open up for increasing efficiency and safety in a large number of
railway applications.
We aim to make it possible to operate within the absolute braking
distance in so-called ``Moving Blocks''. Consequently, a more efficient
handling of rail traffic is possible, leading to more track capacity on
the very densely used rail network.
There are three main obstacles for the safe and precise localization of
rail-bound vehicles:
\begin{enumerate}
\item Finding a sensor-combination and -fusion for a highly available, secure, safe and precise localization.
\item Obtaining a SIL4 approval for the new localization system through the relevant certification bodies.
\item Securing interoperability within the European railways and setting international standardization.
\end{enumerate}
The approach for localization described in this paper is only one out of several possible approaches that are taken into account in SmartRail 4.0.
\subsection{Rail-specific Navigation Problems}
The applications for trains and rail infrastructure have
to comply with highly restrictive standards, like the CENELEC standards
(EN 5012x, EN 50657), in order to be certified by the authorities.
RAMS (Reliability, Availability, Maintainability, Safety) requirements
have to be fulfilled to reach the required SIL (safety integrity level).
Therefore, safety-critical applications such as a SIL 4 localization of
track-bound vehicles must be redundant (to reach availability) and need
to meet diverse constraints in using deterministic algorithms (to reach
the safety level). Thus, machine learning or artificial intelligence
approaches are not suitable for such systems. As we mentioned
earlier, the requirement to provide the same accurate measurement,
together with information about the achieved accuracy, from a specific
image set does not allow the use of any probabilistic methods. It prohibits
even the use of common techniques, like RANSAC, for model verification.
\subsection{Related Work}
Optical navigation systems can be categorized based on the input data that
they rely on. There exist many commercial systems~\cite{realsense,zed}
that provide optical navigation data from 3D reconstructions in a
binocular camera or camera-projector system. These approaches require
static 3D structures like trees, houses or other objects in the scene
that can be tracked over time. There are active 3D navigation systems,
mostly for indoor navigation, like the RealSense camera, and outdoor binocular
stereo systems, like ZED. As we can
see in Fig.~\ref{teaser::fig} such systems fail occasionally in the
specific application field of a railroad scenario, because the strong
occlusions by other trains passing on parallel tracks limit the
available data to just the track bed in front of the train. The other
type of navigation systems relies on the image information itself and can
be subdivided into dense systems using the information of every pixel in the
image~\cite{lsdslam} or matching significant points representing strong
multi-directional brightness changes in the image~\cite{Zisserman,
zinf}. We do not consider learning approaches in our framework, because
the resulting navigation system needs to undergo a strict verification
process to be applied on trains and the current learning approaches do
not meet this requirement.
There exist many optical navigation frameworks developed for the field of service
robotics~\cite{davison,burschka,lsdslam} and for outdoor navigation~\cite{newmann,
burschka, orbslam}. These systems have in common that they rely on matching
of local image information over a time-sequence of images that creates
the so-called {\em optical flow}, which is analyzed for its rotational
and translational effects~\cite{Zisserman}. These approaches fail in
many situations in railroad environments, because of the strong
self-similarity of structures in the track bed, which is often the only
reliable reference to the static environment. This required us to
develop a different matching system that copes with this unstructured,
repetitive property of the environment. It is presented in
Section~\ref{mouse:sec}.
Current monocular or stereo systems, when applied on trains, also suffer from
the unstructured, repetitive environment, drifting gains, and aliasing due to the
limited frame rate (20fps) at a max speed of 52.4 km/h~\cite{siegwart}.
In addition, scenarios with other dynamic objects, like cars and other trams, or fast switching between shadow and sunlight limit the system performance in~\cite{siegwart}.
Internal projects at SBB used cameras of maintenance vehicles to
identify landmarks on or close to the track (e.g. balises, signs).
However, they are using machine learning algorithms, which cannot be
applied in safety-critical applications like train localization. Due to
the high RAMS requirements, most of such vision-based systems are applied
as assistance systems only today, even if autonomous driving is
the final goal~\cite{siemens}. The tram driver still has to override
such assistance systems in order to avoid unnecessary emergency braking.
Applying navigation assistance systems from the automotive domain fails
due to the different environment and safety cases in rail. Odometry supported by wheel encoders, like those used in automotive applications, suffers from the high slip of rail vehicles (modern locomotives intentionally drive with slip).
Many of the available systems are not able to provide any additional
information about the quality of the currently estimated pose change of
the camera. If the result is supposed to be fused in a fusion framework
like a Kalman filter, then the resulting covariance needs to be kept at a
constant, worst-case level. We propose to extend the navigation
approaches to provide the current uncertainty in the estimation of the
pose in parallel to the navigation information. The accuracy may
strongly vary based on the distance to the observed objects and their
distribution in the camera image. The extension based on our
work in~\cite{elmarIros} allows estimating the quality of the
processing (QoS) for each navigation step, enabling a better convergence
of the fusion framework. The proposed processing is described in
Section~\ref{nav:sec} in more detail.
We present our approach to estimating the motion properties in
Section~\ref{nav:sec}. Section~\ref{mouse:sec} presents our approach
to robustly estimating the metric motion of the camera in the highly
unstructured area of the track bed. The possible drifts of the presented
system are compensated through occasional information from the global
infrastructure, which is presented in Section~\ref{drif:sec}. We present in
Section~\ref{result:sec} the achieved accuracy of the motion estimation
and metric measurements. We conclude with a final evaluation of the
achieved system properties and discuss our further directions in
Section~\ref{conclusion:sec}.
Understanding the electric conductance of concentrated electrolytes has posed a great theoretical challenge for over a century. The theory of electrolytic conductivity was pioneered by Debye and H\"uckel~\cite{DH}. They used the notion of an {\it ionic cloud}, where each ion is assumed to be surrounded by a smeared ionic distribution of net opposite charge, which gets distorted upon movement of the central ion. Onsager detected a flaw in the Debye-H\"uckel account of the central ion diffusion~\cite{Collected} and a few years later corrected the theory, yielding the so-called Debye-H\"uckel-Onsager (DHO) equation (also known as the ``Onsager limiting law") for the conductivity of electrolytes~\cite{Onsager,Onsager2}. Due to its elegance and accurate predictions, the DHO equation is considered to be one of the cornerstones of electrolyte theory.
A few decades after it was established, the DHO equation was extended to arbitrary electric-field strengths by Onsager and Kim~\cite{Onsager1957}, who relied on the unpublished thesis of Wilson on binary electrolytes~\cite{Wilson1936}. The modified theory (often called the ``Onsager-Wilson (OW) theory'') captures the {\it Wien effect}~\cite{Wien1, Wien2, Eckstrom1939}, that is, an increase in the conductivity with the electric-field strength, attributed to the destruction of the ionic cloud. A related phenomenon, called ``the second Wien effect''~\cite{Onsager1934,Kaiser2013}, occurs in weak electrolytes. Here, the conductivity increases with the electric-field strength due to a modification in the dissociation kinetics of chemically bound pairs.
While being a remarkable achievement, the DHO and OW theories can be applied only to very dilute electrolyte solutions. They break down when the ion concentration exceeds the threshold of a few millimolar for monovalent ions, and an even lower threshold for multivalent ions~\cite{Bockris,RobinsonStokes}. Since its onset in the 1920s, there have been many attempts to extend the DHO theory to higher concentrations. Initially, by Onsager himself (the ``Onsager-Fuoss'' theory)~\cite{OnsagerFuoss1957,OnsagerFuoss1962}, and later on by others~\cite{Pitts1953,Friedman1983,Bernard1991,Chandra1999,Fraenkel2018,Zhang2020}. However, previous works either used fit parameters that limit their predictive power, or produced very complicated results that are difficult to use and not fully transparent. Moreover, to the best of our knowledge, no previous work in the concentrated regime was generalized to finite electric-field strengths; thus, the Wien effect was not captured.
In recent years, highly concentrated electrolytes have attracted a lot of attention~\cite{Kornyshev2020,Feng2019,Adar2019,Benaglia2021} due to their numerous potential applications and surprising experimental observations~\cite{Israelachvili2013,Perkin2016,Perkin2017}. At the same time, advances in nonequilibrium theories such as stochastic density functional theory (often referred to as the Kawasaki-Dean equation)~\cite{Kawasaki1994,Dean1996,Vrugt2020,Golestanian2018}, have led to a new way of calculating the ionic conductivity in the dilute limit~\cite{Demery2016,Peraud2017,Donev2019}, which is far simpler than the previous ionic-cloud-based approach. Relying on these theoretical advances and using a modified pair-potential to account for the finite ion size, we recently formulated~\cite{Avni2022} a new model for the conductivity of concentrated electrolytes. The model was shown to agree well with experimental data for different aqueous solutions at concentrations as high as $3$\,M but was limited to binary monovalent ions and low electric fields.
In the present work, we extend this model and apply it to multivalent ions and finite electric fields. We derive a general expression for the conductivity of binary electrolytes and then focus on two cases: (i) the weak-field limit with any $z_1{:}z_2$ ionic valencies, and (ii) the symmetric $z{:}z$ electrolyte at finite field intensities, where we recover the Wien effect and provide new predictions for the high concentration regime. Our results compare favorably to experiments and recent simulations.
The outline of this paper is as follows: in Sec.~\ref{model}, we present the model system and derive the conductivity equations for ionic solutions with an arbitrary number of species, at high ionic concentrations. In Sec.~\ref{Two}, we restrict ourselves to binary electrolytes and analyze the low electric-field limit as well as the case of symmetric ions with any finite electric field. In Sec.~\ref{Comparison}, we compare our results to experiments and simulations. Finally, in Sec.~\ref{Conclusions}, we conclude and suggest future experiments to further test our predictions.
\section{The model} \label{model}
\subsection{The Equations of Motion} \label{formalism}
We consider a homogeneous ionic solution composed of $M$ ionic species of charge ${q_\alpha}$ and average concentration $n^0_{\alpha}$, where ${\alpha=1,...,M}$. The ions are embedded in a solvent with dielectric permittivity $\varepsilon$ and viscosity $\eta$ at temperature $T$. The solution is subjected to a constant (static) external electric field ${\boldsymbol{E}_{0}}$ pointing in a fixed direction.
The local ionic concentrations, denoted by $n_\alpha({\bf r},t)$, satisfy the continuity equation
\begin{eqnarray} \label{continuity}
\partial_{t}n_{\alpha} = -\boldsymbol{\nabla}\cdot\boldsymbol{j}_{\alpha}\,,\qquad \alpha=1,...,M,
\end{eqnarray}
where ${\boldsymbol j_\alpha}({\boldsymbol r},t)$ is the ionic flux of the $\alpha$ species, given by,
\begin{eqnarray} \label{j}
\boldsymbol{j}_{\alpha}=n_{\alpha}\boldsymbol{u}-D_{\alpha}\boldsymbol{\nabla}n_{\alpha}+\mu_{\alpha}\boldsymbol{f}_{\alpha}-\sqrt{2D_{\alpha}n_{\alpha}}\boldsymbol{\zeta}_{\alpha}.
\end{eqnarray}
The first and second terms on the right-hand-side of Eq.~(\ref{j}) are advection and diffusion terms, respectively, where $\boldsymbol{u}(\boldsymbol{r},t)$ is the solvent velocity field, and $D_\alpha$ is the diffusion coefficient of the $\alpha$ species at infinite ionic dilution. The third term accounts for the motion due to the external field and inter-ionic forces. Here, $\mu_\alpha$ is the ion mobility at infinite ionic dilution, related to $D_{\alpha}$ by the Einstein relation $\mu_\alpha=D_\alpha/k_BT$, with $k_B$ being the Boltzmann constant, and $\boldsymbol{f}_{\alpha}(\boldsymbol{r},t)$ is the force density given by
\begin{eqnarray} \label{force_density}
\boldsymbol{f}_{\alpha}=n_{\alpha}q_{\alpha}\boldsymbol{E}_{0}-n_{\alpha}\sum_{\beta}\int{\rm d}^{3}r'\,n_{\beta}\left(\boldsymbol{r}',t\right)\boldsymbol{\nabla}v_{\alpha \beta}\left(\left|\boldsymbol{r}-\boldsymbol{r}'\right|\right),
\end{eqnarray}
where $v_{\alpha \beta}$ is the pair interaction energy between ions of species $\alpha$ and $\beta$. Note that for the sake of generality, we do not specify $v_{\alpha \beta}$ until Sec.~\ref{modified}. The last term in Eq.~(\ref{j}) is a stochastic flux, where $\boldsymbol{\zeta}_{\alpha}(\boldsymbol{r},t)$ is a 3D white-noise function, satisfying
\begin{eqnarray}
&& \langle\boldsymbol{\zeta}_{\alpha}\left(\boldsymbol{r},t\right)\rangle =0\\
&&\langle{\zeta}_{\alpha}^{n}\left(\boldsymbol{r},t\right){\zeta}_{\beta}^{m}\left(\boldsymbol{r}',t'\right)\rangle = \delta_{\alpha \beta}\delta_{nm}\delta\left(t-t'\right)\delta\left(\boldsymbol{r}-\boldsymbol{r}'\right),\nonumber
\end{eqnarray}
where $n$ and $m$ denote the cartesian components of the vector $\boldsymbol{\zeta}_{\alpha}$. Equations~(\ref{continuity}) and~(\ref{j}) can be derived by transforming the Langevin equation from individual particle representation to concentration fields using Ito calculus, and it is referred to as {\it stochastic density-functional theory} (SDFT)~\cite{Dean1996}.\footnote{The derivation of the stochastic density-functional theory in Ref.~\cite{Dean1996} was done without considering the advection by the solvent. However, advection can be easily incorporated into the formalism by adding the solvent velocity to the Langevin equation, yielding Eqs.~(\ref{continuity}) and~(\ref{j}) exactly.}
The ion continuity equation is coupled to the Navier-Stokes equation for an incompressible fluid,
\begin{eqnarray} \label{Stokes0}
&&\rho \left[ \partial_t\boldsymbol{u}+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}\right]=\eta\nabla^{2}\boldsymbol{u}-\boldsymbol{\nabla}p+\sum_{\alpha=1}^M \boldsymbol{f}_{\alpha}
\end{eqnarray}
where $p(\boldsymbol{r},t)$ is the local pressure and $\rho$ is the solvent density. The $\rho(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}$ term disappears when we linearize the equations of motion around $\boldsymbol{u}=0$ in Subsection~\ref{conductivity_calc} below. The $\rho\partial_t\boldsymbol{u}$ term can also be neglected as long as the electric field is not too strong. The typical time and length scales that characterize the ionic motion, $\tau$ and $\ell$, satisfy $\ell^{2}/\tau=D_{\alpha}$. Applying this rescaling, we find that
$|\rho\partial_t \boldsymbol{u}|/|\eta\nabla^2\boldsymbol{u}| \sim \rho D_{\alpha}/\eta$, which is the inverse of the Schmidt number~\cite{Bergman2011} and is roughly $\sim 0.001$ for standard electrolytes. Therefore, the resulting equation for the solvent velocity is the Stokes equation for an incompressible fluid,
\begin{eqnarray} \label{Stokes2}
&&\boldsymbol{\nabla}\cdot \boldsymbol{u}=0\nonumber\\
&&\eta\nabla^{2}\boldsymbol{u}-\boldsymbol{\nabla}p+\sum_\alpha \boldsymbol{f}_{\alpha}=0.
\end{eqnarray}
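The order-of-magnitude estimate that justifies dropping the inertial term can be reproduced directly. The sketch below uses representative aqueous values ($\rho \approx 10^{3}$\,kg/m$^3$, $\eta \approx 10^{-3}$\,Pa\,s, $D_{\alpha} \approx 10^{-9}$\,m$^2$/s); these inputs are our assumed typical numbers, not values quoted in this paper.

```python
# Inverse Schmidt number rho*D/eta for a typical aqueous electrolyte.
# The three inputs are representative order-of-magnitude values (assumed).
rho = 1.0e3    # solvent density [kg/m^3]
eta = 1.0e-3   # solvent viscosity [Pa s]
D = 1.0e-9     # ionic diffusion coefficient [m^2/s]

# After rescaling lengths and times by ell and tau = ell^2/D,
# |rho d_t u| / |eta lap u| ~ rho*D/eta.
inv_schmidt = rho * D / eta
print(f"rho*D/eta = {inv_schmidt:.1e}")  # ~ 1e-3, justifying the Stokes limit
```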
\subsection{Calculation of the conductivity} \label{conductivity_calc}
The conductivity of the ionic solution is defined by the ratio
\begin{eqnarray} \label{kappa_def}
\kappa=\langle J_{\parallel}\rangle/E_{0},
\end{eqnarray}
where $\langle ... \rangle$ is the thermodynamic ensemble average,
and $\boldsymbol{J}(\boldsymbol{r},t)$ is the electric current density, which depends on the ionic fluxes, $\boldsymbol{j}_{\alpha}$,
\begin{eqnarray} \label{J}
\boldsymbol{J}=\sum_{\alpha=1}^M q_\alpha\boldsymbol{j}_{\alpha}(\boldsymbol{r},t).
\end{eqnarray}
The subscript ``$\parallel$" in Eq.~(\ref{kappa_def}) denotes the vector projection on the external field direction, $J_{\parallel}={\boldsymbol J}\cdot\hat{\boldsymbol E}_{0}$. Substituting Eq.~(\ref{j}) in Eq.~(\ref{J}) and performing the average in Eq.~(\ref{kappa_def}), we obtain
\begin{eqnarray} \label{Full_kappa1}
&& \kappa = \kappa_{0}+\kappa_{\text{hyd}}+\kappa_{\text{el}} \nonumber\\
&& \kappa_{\text{hyd}} =\sum_\alpha \frac{q_\alpha}{E_{0}} \langle u_{\parallel}\left(\boldsymbol{r},t\right) n_\alpha \left(\boldsymbol{r},t\right)\rangle \nonumber\\
&& \kappa_{\text{el}} = -\sum_{\alpha,\beta}\frac{q_{\alpha} \mu_{\alpha}}{E_{0}}\int{\rm d}^{3}{r}'\, \partial_{r_\parallel}v_{\alpha \beta}\left(\left|\boldsymbol{r}-\boldsymbol{r}'\right|\right) \langle n_{\alpha}\left(\boldsymbol{r},t\right)n_\beta\left(\boldsymbol{r}',t\right)\rangle,
\end{eqnarray}
where $\kappa_{0}$ is the conductivity at infinite dilution,
\begin{eqnarray} \label{kappa_0}
\kappa_{0}=\sum_{\alpha=1}^M q^2_\alpha \mu_\alpha n^0_{\alpha}.
\end{eqnarray}
Note that $\kappa_0$ depends linearly on the concentrations as the ions do not interact in this limit. In order to obtain Eq.~(\ref{Full_kappa1}), we invoke the system homogeneity and the independence between $n_\alpha$ and ${\boldsymbol \zeta}_\alpha$ at equal times.
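As an illustration of Eq.~(\ref{kappa_0}), the limiting (infinite-dilution) molar conductivity of a 1:1 salt follows from $\kappa_0/n_{\rm salt}$ with $\mu_\alpha = D_\alpha/k_BT$. The KCl diffusion coefficients below are standard literature values at $25\,^{\circ}$C that we assume for illustration; they are not data from this paper.

```python
# Limiting molar conductivity Lambda_0 = kappa_0 / c for a 1:1 salt,
# from Eq. (kappa_0) with the Einstein relation mu = D/(k_B T).
e  = 1.602176634e-19   # elementary charge [C]
kB = 1.380649e-23      # Boltzmann constant [J/K]
NA = 6.02214076e23     # Avogadro number [1/mol]
T  = 298.15            # temperature [K]

# Infinite-dilution diffusion coefficients of K+ and Cl- (assumed literature values)
D_K, D_Cl = 1.957e-9, 2.032e-9   # [m^2/s]

# kappa_0 = sum_alpha q_alpha^2 mu_alpha n_alpha; dividing by the molar
# concentration (n = NA*c for both species) gives
Lambda0 = e**2 * NA * (D_K + D_Cl) / (kB * T)   # [S m^2/mol]
print(f"Lambda_0(KCl) = {Lambda0*1e4:.1f} S cm^2/mol")  # ~150, close to the tabulated value
```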
Equation~(\ref{Full_kappa1}) implies that at finite concentrations the conductivity deviates from its dilute-limit behavior due to two effects. The first effect, incorporated in $\kappa_{\text{hyd}}$, is a hydrodynamically mediated interaction between the ions, and it is traditionally referred to as the {\it electrophoretic effect}. We note that in its present form, the average $\langle u_{\parallel}\left(\boldsymbol{r},t\right) n_\alpha \left(\boldsymbol{r},t\right)\rangle$ in $\kappa_{\text{hyd}}$ includes the ion interaction with its own induced velocity field, resulting in a self-interaction that should be subtracted, as will be done later on. The second effect, incorporated in $\kappa_{\text{el}}$, is a direct ionic interaction that is mostly electrostatic but will include finite-size corrections as we explain in Sec.~\ref{modified}, below. This effect is often referred to as the {\it relaxation effect}.
In order to calculate the averages in Eq.~(\ref{Full_kappa1}) we need to solve the equations of motion, Eqs.~(\ref{continuity}), (\ref{j}) and~(\ref{Stokes2}). An exact solution cannot be obtained. Instead, we linearize the equations by writing $n_{\alpha}({\boldsymbol r},t) =n^0_{\alpha} +\delta n_{\alpha}({\boldsymbol r},t)$, $\boldsymbol{u}({\boldsymbol r},t) =\delta\boldsymbol{u}({\boldsymbol r},t)$ and $p({\boldsymbol r},t) =p_0 + \delta{p}({\boldsymbol r},t)$, and keeping only terms up to linear order in $\delta n_{\alpha}$, $\delta \boldsymbol{u}$, $\delta p$, and $\zeta_{\alpha}$. The linearization is justified for small fluctuations around the mean-field values. In Fourier space, the linearized form of the ion equation of motion is
\begin{eqnarray} \label{matrix_equation}
\frac{\partial \delta{\tilde{n}}_{\alpha}(\boldsymbol{k})}{\partial t}=A_{\alpha \beta}(\boldsymbol{k})\delta\tilde{n}_{\beta}(\boldsymbol{k})+B_{\alpha \beta}(\boldsymbol{k})\tilde{\zeta}_{\beta}(\boldsymbol{k}),
\end{eqnarray}
where $\tilde{f}(\boldsymbol{k})=\int {\rm d}^3r\, f(\boldsymbol{r}){\rm e}^{-i\boldsymbol{k}\cdot\boldsymbol{r}}$ is the Fourier transform of the function $f(\boldsymbol{r})$. The matrices $A(\boldsymbol{k})$ and $B(\boldsymbol{k})$ are
\begin{eqnarray} \label{matrices}
A_{\alpha\beta}(\boldsymbol{k}) && =\begin{cases}
-D_{\alpha}k^{2}-i\mu_{\alpha}q_{\alpha}k_{\parallel}E_{0}-\mu_{\alpha} n^0_{\alpha} k^{2}\tilde{v}_{\alpha\alpha}(k) & \alpha=\beta\nonumber\\[3pt]
-\mu_{\alpha} n^0_{\alpha} k^{2}\tilde{v}_{\alpha\beta}(k) & \alpha\neq\beta
\end{cases}\\
B_{\alpha\beta}(\boldsymbol{k}) && =i\sqrt{2D_{\alpha} n^0_{\alpha}}k\delta_{\alpha\beta},
\end{eqnarray}
and $\tilde{\zeta}_{\alpha}(\boldsymbol{k},t)$ is a white-noise scalar function, $\alpha=1,...,M$, satisfying
\begin{eqnarray}
&&\langle\tilde{\zeta}_{\alpha}(\boldsymbol{k},t)\rangle =0\\
&&{\langle\tilde{\zeta}_{\alpha}(\boldsymbol{k},t)\tilde{\zeta}_{\beta}(\boldsymbol{k}',t')\rangle =\left(2\pi\right)^{3}\delta_{\alpha \beta}\delta\left(t-t'\right)\delta\left(\boldsymbol{k}+\boldsymbol{k}'\right)}. \nonumber
\end{eqnarray}
Note that we used the fact that $\boldsymbol{k}\cdot\tilde{\boldsymbol{\zeta}}_{\alpha}(\boldsymbol{k})=\sum\limits_{i=1}^{3} k_{i} \tilde{\zeta}^{i}_{\alpha}(\boldsymbol{k})$ is a sum of three independent white-noise functions with zero mean. Therefore, it can be replaced by a single white-noise function, $k\tilde{\zeta}_{\alpha}(\boldsymbol{k})$, whose variance is the sum of the variances of the three functions.
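The variance-addition argument can be checked with a quick Monte Carlo sample; the wavevector below is an arbitrary test value.

```python
import numpy as np

# k . zeta_vec is a sum of three independent unit-variance white noises
# weighted by k_i; its variance must equal k.k = k^2, so it is statistically
# equivalent to the single scalar noise k*zeta appearing in the linearized
# equation of motion.
rng = np.random.default_rng(0)
k = np.array([0.3, -1.2, 2.0])            # arbitrary wavevector
zeta = rng.standard_normal((3, 200_000))  # three independent white noises
X = k @ zeta                              # the combination k . zeta_vec

var_emp = X.var()
var_theory = np.dot(k, k)                 # = k^2
print(var_emp, var_theory)
```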
The linearized form of the Stokes equation in Fourier space is
\begin{eqnarray} \label{Stokes_Fourier}
k^{2}\eta\delta\tilde{\boldsymbol{u}}\left(\boldsymbol{k}\right)=-i\boldsymbol{k}\delta\tilde{p}\left(\boldsymbol{k}\right)+\sum_{\alpha}q_{\alpha}\boldsymbol{E}_{0}\delta\tilde{n}_{\alpha}\left(\boldsymbol{k}\right)+i\boldsymbol{k}\sum_{\alpha,\beta}n^0_{\alpha}\,\tilde{v}_{\alpha\beta}(k)\delta\tilde{n}_{\beta}(\boldsymbol{k}).
\end{eqnarray}
We use the Fourier transform of the incompressiblity condition (Eq.~(\ref{Stokes2})), $\boldsymbol{k}\cdot\delta\boldsymbol{\tilde{u}}(\boldsymbol{k})= 0$, to eliminate $\tilde{p}(\boldsymbol{k})$ in Eq.~(\ref{Stokes_Fourier}), and obtain $\delta\tilde{u}_{\parallel}(\boldsymbol{k})$ in terms of the concentrations,
\begin{eqnarray} \label{velocity_conc}
\delta\tilde{u}_{\parallel}\left({\boldsymbol k}\right)=\frac{E_{0}}{\eta}\frac{1}{k^{2}}\left(1-\frac{k_{\parallel}^{2}}{k^{2}}\right)\sum_{\alpha}q_{\alpha}\delta\tilde{n}_{\alpha}\left({\boldsymbol k}\right).
\end{eqnarray}
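The pressure elimination leading to Eq.~(\ref{velocity_conc}) can be cross-checked by solving the linearized Stokes system directly at a single Fourier mode. The numbers below are arbitrary test values (a sketch, not part of the derivation); the longitudinal amplitude $G$ stands for the interaction term, which must drop out of the transverse projection.

```python
import numpy as np

# Solve k^2 eta u = -i k p + F together with k.u = 0 at one wavevector,
# and verify u_par = (E0/(eta k^2)) (1 - k_par^2/k^2) * S  [Eq. (velocity_conc)],
# where S stands for sum_alpha q_alpha dn_alpha and E0 points along z.
eta, E0 = 0.7, 1.3                  # arbitrary viscosity and field strength
k = np.array([0.4, -1.1, 0.8])      # arbitrary wavevector
S = 0.9                             # charge-density amplitude (arbitrary)
G = 2.5                             # longitudinal (interaction) amplitude
k2 = k @ k

F = np.array([0.0, 0.0, E0 * S]) + 1j * k * G   # driving force in Fourier space

# Unknowns x = (ux, uy, uz, p): three momentum rows + one incompressibility row.
M = np.zeros((4, 4), dtype=complex)
M[:3, :3] = eta * k2 * np.eye(3)
M[:3, 3] = 1j * k                   # pressure gradient term
M[3, :3] = 1j * k                   # i k . u = 0
rhs = np.array([F[0], F[1], F[2], 0.0])
x = np.linalg.solve(M, rhs)
u_par = x[2]                        # velocity component along E0 (z axis)

u_par_formula = E0 * S / (eta * k2) * (1 - k[2]**2 / k2)
print(u_par, u_par_formula)         # the i*k*G part is purely longitudinal and cancels
```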
Writing Eq.~(\ref{Full_kappa1}) in terms of the fluctuational variables in Fourier space and using Eq.~(\ref{velocity_conc}), we obtain
\begin{eqnarray}
&& \kappa_{\text{hyd}}=\sum_{\alpha,\beta}\frac{q_{\alpha}q_{\beta}}{\eta}\int\frac{{\rm d}^{3}k{\rm d}^{3}k'}{\left(2\pi\right)^{6}}{\rm e}^{i\left(\boldsymbol{k}+\boldsymbol{k}'\right)\cdot\boldsymbol{r}}\frac{1}{k^{2}}\left(1-\frac{k_{\parallel}^{2}}{k^{2}}\right)\langle\delta\tilde{n}_{\beta}\left({\boldsymbol{k}},t\right)\delta\tilde{n}_{\alpha}\left(\boldsymbol{k}',t\right)\rangle\nonumber
\\
&& \label{cor2} \kappa_{\text{el}} = -\sum_{\alpha,\beta}\frac{q_\alpha\mu_{\alpha}}{E_{0}}\int \frac{{\rm d}^{3}k{\rm d}^{3}k'}{\left(2\pi\right)^{6}}{\rm e}^{i\left(\boldsymbol{k}+\boldsymbol{k}'\right)\cdot\boldsymbol{r}}\left(ik_{\parallel}'\right)\tilde{v}_{\alpha\beta}({k'})\langle\delta\tilde{n}_{\alpha}(\boldsymbol{k},t)\delta\tilde{n}_{\beta}(\boldsymbol{k}',t)\rangle.
\end{eqnarray}
In steady state, the set of linear equations in Eq.~(\ref{matrix_equation}) leads to
\begin{eqnarray} \label{correlator_k}
\langle\delta\tilde{n}_{\alpha}(\boldsymbol{k},t)\delta\tilde{n}_{\beta}(\boldsymbol{k}',t)\rangle=\left(2\pi\right)^{3}C_{\alpha \beta}(\boldsymbol{k})\delta\left(\boldsymbol{k}+\boldsymbol{k}'\right),
\end{eqnarray}
where the correlation matrix, $C(\boldsymbol{k})$, is given by the relation~\cite{RobertZwanzig}
\begin{eqnarray} \label{mat_eq}
A(\boldsymbol{k})C(\boldsymbol{k})+C(\boldsymbol{k})A^{\dagger}(\boldsymbol{k})=-B(\boldsymbol{k})B^{\dagger}(\boldsymbol{k}),
\end{eqnarray}
where $\dagger$ is the Hermitian conjugate. In order to subtract the ion self-correlation, we define the following {\it subtracted correlation matrix}~\cite{Supplemental},
\begin{eqnarray} \label{norm}
\widehat{C}_{\alpha \beta}(\boldsymbol{k})= C_{\alpha \beta}(\boldsymbol{k})-n^0_{\alpha} \delta_{\alpha \beta},
\end{eqnarray}
and use $\widehat{C}$ instead of $C$ from here on.
Substituting the subtracted correlation matrix in Eq.~(\ref{cor2}) and recalling that $\kappa_{{\rm hyd}}$, $\kappa_{{\rm el}}$ and $\tilde{v}_{\alpha\beta}(k)$ are real functions, we obtain
\begin{eqnarray} \label{integrals}
\kappa_{{\rm hyd}}&&=\int\frac{{\rm d}^{3}k}{\left(2\pi\right)^{3}}\frac{1}{\eta k^{2}}\left(1-\frac{k_{\parallel}^{2}}{k^{2}}\right)\sum_{\alpha,\beta}q_\alpha q_\beta\,\text{Re}\left[\widehat{C}_{\alpha\beta}(\boldsymbol{k})\right] \nonumber\\
\kappa_{{\rm el}}&&=-\int\frac{{\rm d}^{3}k}{\left(2\pi\right)^{3}}\:\frac{k_{\parallel}}{E_{0}} \sum_{\alpha,\beta}q_\alpha\mu_\alpha\tilde{v}_{\alpha\beta}({k})\,\text{Im}\left[\widehat{C}_{\alpha\beta}(\boldsymbol{k})\right].
\end{eqnarray}
In summary, the correlation matrix is obtained by solving Eq.~(\ref{mat_eq}) and redefining a subtracted correlation matrix $\widehat{C}$ in Eq.~(\ref{norm}). By substituting the matrix $\widehat{C}$ in Eq.~(\ref{integrals}) we can compute the conductivity.
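In practice, Eq.~(\ref{mat_eq}) is a complex continuous Lyapunov equation that can be solved numerically at each wavevector. A minimal vectorized solver is sketched below; the test matrices are random stand-ins for $A(\boldsymbol{k})$ and $B(\boldsymbol{k})$, chosen only so that $A$ is stable, and are not the physical ones.

```python
import numpy as np

def correlation_matrix(A, B):
    """Solve A C + C A^dag = -B B^dag by Kronecker vectorization.

    Uses the row-major identities vec(A X I) = (A kron I) vec(X) and
    vec(I X A^dag) = (I kron conj(A)) vec(X).
    """
    M = A.shape[0]
    I = np.eye(M)
    L = np.kron(A, I) + np.kron(I, A.conj())
    rhs = -(B @ B.conj().T).reshape(M * M)
    return np.linalg.solve(L, rhs).reshape(M, M)

# Random stable test problem: A + A^dag negative definite guarantees that
# all eigenvalues of A have negative real part (steady state exists).
rng = np.random.default_rng(1)
R = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = -(R @ R.conj().T + np.eye(3)) + 1j * np.diag([0.5, -1.0, 2.0])
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

C = correlation_matrix(A, B)
residual = A @ C + C @ A.conj().T + B @ B.conj().T
print(np.abs(residual).max())   # tiny: C solves Eq. (mat_eq)
```

The subtraction of Eq.~(\ref{norm}) then amounts to removing $n^0_{\alpha}$ from the diagonal of the solution.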
\subsection{The modified pair-potential} \label{modified}
Up until now, we did not specify the pair potential, and the results were written in terms of a general $v_{\alpha \beta}(r)$ interaction. For point-like ions, the pair potential equals the Coulomb interaction, $v_{\alpha \beta}\left(r\right)=q_{\alpha} q_{\beta}/(4\pi\varepsilon_{0}\varepsilon r)$, where $\varepsilon_{0}$ is the vacuum permittivity. However, the point-like approximation leads to an unphysical attraction between oppositely charged ions at distances smaller than the ion diameter (see Fig.~\ref{Fig1}). This becomes problematic at high concentrations, where the ions are more likely to get close to each other, leading to an unphysical decrease in the conductivity due to enhanced inter-ionic correlations. This deficiency is present in the DHO theory that assumes point-like ions.
\begin{figure}
\includegraphics[width = 0.45 \columnwidth,draft=false]{Fig1}
\caption{\textsf{A schematic drawing, adapted from Ref.~\cite{Avni2022}, of cations (blue) and anions (red) moving in opposite directions in response to an applied electric field $\boldsymbol{E_0}$. The grey lines represent the fluid velocity field around point-like particles. If the interaction is purely Coulombic, oppositely charged ions are likely to get unrealistically close to one another (right side), thus reducing the conductivity. We use a modified potential to avoid such proximity, which is forbidden by the finite ion size.}}
\label{Fig1}
\end{figure}
The problem can be remedied by including in $v_{\alpha \beta}(r)$ a hard-core potential,
\begin{eqnarray} \label{hardcore}
v_{\alpha\beta}(r)=\begin{cases}
\frac{q_{\alpha}q_{\beta}}{4\pi\varepsilon_{0}\varepsilon r} & r>r_{\alpha}+r_{\beta}\\
\infty & {\rm else}
\end{cases}
\end{eqnarray}
where $r_\alpha$ is the ion radius, and $r_{\alpha}+r_{\beta}$ is the distance of closest approach between two ions. Unfortunately, such a diverging potential breaks down the perturbative approach introduced in Sec.~\ref{conductivity_calc}. Instead, a viable modification is to introduce a low cutoff to the Coulomb interaction~\cite{Adar2019},
\begin{eqnarray} \label{u_co}
v_{\alpha \beta}\left(r\right)=\frac{{q_{\alpha}q_{\beta}}}{4\pi\varepsilon_{0}\varepsilon r}\Theta\left(r-r_\alpha-r_\beta\right),
\end{eqnarray}
where $\Theta(r)$ is the Heaviside function. In Appendix~\ref{Testing}, Eq.~(\ref{u_co}) is shown to approximate well the average distance between two ions interacting via the pair potential as in Eq.~(\ref{hardcore}), in a confined system that corresponds to concentrated electrolytes (with short inter-ionic distances). It is also shown that the approximation becomes less accurate for multivalent ions at high concentrations.
We note that while the modified potential successfully suppresses the short-range electrostatic attraction between oppositely charged ions, it induces an unphysical attraction at very short distances between ions with the same electric charge sign. It might seem that this problem can be circumvented by keeping the standard Coulomb potential (which diverges at small distances) for ions with the same electric charge sign, or by assigning a finite yet large positive value to the potential at $r<r_\alpha+r_\beta$. However, to be consistent within the perturbative approach we need to keep the potential small enough.
Thus, in our approach we keep $v_{\alpha \beta}=0$ for $r<r_\alpha+r_\beta$, for any type of ions, $\alpha$ and $\beta$.
Substituting the Fourier transform of Eq.~(\ref{u_co}), ${\tilde{v}_{\alpha \beta}({k})=q_{\alpha} q_{\beta}\cos\left(kr_{\alpha} + kr_{\beta}\right)/(\varepsilon_{0}\varepsilon k^{2})}$, in Eq.~(\ref{matrices}) and following the analysis of Sec.~\ref{conductivity_calc} leads to a closed-form expression for the conductivity. It is presented in the next Section for binary electrolytes.
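The quoted Fourier transform of the truncated Coulomb potential can be verified symbolically. The sketch below (assuming sympy is available) evaluates the radial Fourier integral $\tilde{v}(k)=(4\pi/k)\int_a^{\infty} r\,v(r)\sin(kr)\,{\rm d}r$ with an exponential regulator ${\rm e}^{-sr}$ and then removes it.

```python
import sympy as sp

# v(r) = A/r for r > a (with A = q_a q_b / 4 pi eps0 eps), zero otherwise.
# 3D radial FT: v~(k) = (4 pi / k) * Int_a^oo r v(r) sin(k r) dr.
# The conditionally convergent integral is defined via exp(-s r), s -> 0+.
k, r, a, s = sp.symbols('k r a s', positive=True)

I_reg = sp.integrate(sp.sin(k*r) * sp.exp(-s*r), (r, a, sp.oo))
I0 = sp.limit(I_reg, s, 0, '+')          # regulator removed

# Expect cos(k a)/k, so that v~(k) = 4 pi A cos(k a)/k^2,
# i.e. q_a q_b cos(k a) / (eps0 eps k^2) as quoted in the text.
print(sp.simplify(I0))

num_val = float(I0.subs({k: 2, a: 1}))   # spot check at k=2, a=1
target = float(sp.cos(2) / sp.Integer(2))
print(num_val, target)
```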
\section{Binary electrolytes}\label{Two}
We consider hereafter a binary electrolyte $z_+{:}z_-$ containing cations of charge $q_+=ez_+$ and anions of charge $q_-=-ez_-$, where $z_{\pm}$ are the valencies (in absolute value) and $e$ is the elementary charge. The electro-neutrality condition implies $z_+ n^0_+=z_- n^0_-$, where $n^0_{+}$ and $n^0_{-}$ are the average cation and anion concentrations. The experimentally controlled salt concentration, $n_{\rm salt}$, is $n_{\rm salt}\equiv n^0_{+}/z_-=n^0_{-}/z_+$. Note that in the case where the two valencies $z_{\pm}$ have a common divisor ({\it e.g.,} 2:4), it is more natural to define the salt concentration as $n_{\rm salt}$ multiplied by the greatest common divisor. This is done later on for symmetric $z{:}z$ salts.
We make a further simplification by replacing the species-dependent cutoff length in Eq.~(\ref{u_co}), $r_{\alpha}+r_{\beta}$, by a single cutoff that equals the sum of the cation and anion radii, $a\equiv r_+ + r_-$. This simplification is motivated by the fact that the primary role of the cutoff is to prevent attraction between oppositely charged ions. In Appendix~\ref{different cutoffs}, we explore the difference between the conductivity when a single cutoff is used as opposed to three different cutoffs ($r_+ + r_-$, $2r_+$ and $2r_-$), and show that the difference is negligible for standard electrolytes.
Under these simplifications, the conductivity is written as follows,
\begin{eqnarray} \label{binary_kappa}
&& \kappa =\kappa_{0}+\kappa_{\text{hyd}}+\kappa_{\text{el}}\nonumber \\
&& \kappa_{{\rm hyd}}=\frac{e^{2}}{\eta}\int\frac{{\rm d}^{3}k}{\left(2\pi\right)^{3}}\frac{1}{k^{2}}\left(1-\frac{k_{\parallel}^{2}}{k^{2}}\right)\left(z_{+}^{2}\widehat{C}_{++}(\boldsymbol{k})+z_{-}^{2}\widehat{C}_{--}(\boldsymbol{k})-2z_{+}z_{-}\,\text{Re}\left[\widehat{C}_{+-}(\boldsymbol{k})\right]\right)\nonumber
\\
&& \label{cond3} \kappa_{{\rm el}} =\frac{e^{3}z_+ z_- (z_+ \mu_++z_-\mu_-)}{E_{0}\varepsilon\varepsilon_{0}}\int\frac{{\rm d}^{3}k}{\left(2\pi\right)^{3}}\:\frac{k_{\parallel}}{k^{2}}\cos\left(ka\right)\text{Im}\left[\widehat{C}_{+-}(\boldsymbol{k})\right],
\end{eqnarray}
where we used the fact that $\widehat{C}_{\alpha\beta}$ is Hermitian. Equation~(\ref{cond3}) indicates that $\kappa_{\rm hyd}$ depends on the difference between the spatial correlations of equal and opposite charges, while $\kappa_{\rm el}$ depends on the spatial correlations between opposite charges only.
The components of the subtracted correlation matrix are
\begin{eqnarray} \label{C12}
&& \widehat{C}_{\pm\pm}(k)=-\frac{n_{\rm salt} z_+ z_-}{\bar{z}}\frac{\cos\left(ka\right)\left[h^{2}\left(k\right)+2\bar{z}\gamma^{2}\lambda_{D}^{2}l_{E}^{-2}\cos^{2}\theta\left(z_{\mp}\cos\left(ka\right)+2\bar{z}k^{2}\lambda_{D}^{2}\right)\right]}{g\left(k\right)+f\left(k\right)\lambda_{D}^{2}l_{E}^{-2}\cos^{2}\theta}\nonumber\\
\nonumber \\ && \widehat{C}_{+-}(k)=\widehat{C}_{-+}^{\,\ast}(k)=\frac{n_{\rm salt} z_+ z_-}{\bar{z}}\frac{\cos\left(ka\right)\left[h^{2}\left(k\right)-2i\bar{z}\gamma h\left(k\right)\lambda_{D}^{2}l_{E}^{-1}k\cos\theta\right]}{g\left(k\right)+f\left(k\right)\lambda_{D}^{2}l_{E}^{-2}\cos^{2}\theta},
\end{eqnarray}
where $\cos \theta={\hat k}\cdot {\hat E}_0$, the Debye screening length is
\begin{eqnarray}
\lambda_{D}=\frac{1}{\sqrt{\left[e^{2}(z_{+}^{2}n_{+}^{0}+z_{-}^{2}n_{-}^{0})/\varepsilon\varepsilon_{0}k_{B}T\right]}}=\frac{1}{\sqrt{\left[e^{2}(z_{+}+z_{-})z_{-}z_{+}n_{\rm salt}/\varepsilon\varepsilon_{0}k_{B}T\right]}},
\end{eqnarray}
and the electric field length is $l_E= k_B T/(e E_0)$ (note that $l_E$ is inversely proportional to the electric-field intensity). We also defined the average valency, $\bar{z}\equiv(z_++z_-)/2$, the parameter $\gamma$,
\begin{eqnarray} \label{gamma}
\gamma \equiv \frac{2\left(\mu_{+}z_{+}+\mu_{-}z_{-}\right)}{\left(\mu_{-}+\mu_{+}\right)\left(z_{+}+z_{-}\right)},
\end{eqnarray}
and the following functions for brevity
\begin{eqnarray}
&&f\left(k\right)=2\gamma^{2}\left[z_{+}\cos\left(ka\right)+2\bar{z}k^{2}\lambda_{D}^{2}\right]\left[z_{-}\cos\left(ka\right)+2\bar{z}k^{2}\lambda_{D}^{2}\right]\nonumber\\
&& g\left(k\right)=2\left[\cos\left(ka\right)+k^{2}\lambda_{D}^{2}\right]\left[\gamma\cos\left(ka\right)+2k^{2}\lambda_{D}^{2}\right]^{2}\nonumber\\
&& h\left(k\right)=\gamma\cos\left(ka\right)+2k^{2}\lambda_{D}^{2}.
\end{eqnarray}
For $z_+\neq z_-$, $\gamma<1$ ($\gamma>1$) if the ion with the larger valency has a smaller (larger) mobility. Typically, multivalent ions have smaller mobilities than monovalent ions; thus, asymmetric salts commonly have $\gamma<1$ (see Table~\Romannum{2} in Sec.~\ref{Comparison}). For symmetric salts with $z_+=z_-\equiv z$, $\gamma=1$ and $n^0_+=n^0_-\equiv n$, and the correlation matrix reduces to
\begin{eqnarray} \label{C_symmetric}
&& \widehat{C}_{\pm\pm}(k)=-\frac{n\cos\left(ka\right)\left[\cos\left(ka\right)+2\lambda_{D}^{2}\left(k^{2}+\bar{z}^{2}l_{E}^{-2}\cos^{2}\theta\right)\right]}{2\left[\cos\left(ka\right)+2k^{2}\lambda_{D}^{2}\right]\left[\cos\left(ka\right)+\lambda_{D}^{2}\left(k^{2}+\bar{z}^{2}l_{E}^{-2}\cos^{2}\theta\right)\right]}\nonumber\\
&& \widehat{C}_{+-}(k)=\widehat{C}_{-+}^{\,\ast}(k)=\frac{n\cos\left(ka\right)\left[\cos\left(ka\right)+2\lambda_{D}^{2}\left(k^{2}-i\bar{z}l_{E}^{-1}k\cos\theta\right)\right]}{2\left[\cos\left(ka\right)+2k^{2}\lambda_{D}^{2}\right]\left[\cos\left(ka\right)+\lambda_{D}^{2}\left(k^{2}+\bar{z}^{2}l_{E}^{-2}\cos^{2}\theta\right)\right]}.
\end{eqnarray}
We note that $n=z n_{\rm salt}$ as explained at the beginning of Sec.~\ref{Two}.
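The closed form in Eq.~(\ref{C_symmetric}) can be cross-checked by solving Eq.~(\ref{mat_eq}) numerically at a single wavevector. The sketch below works in reduced units ($\lambda_D = D = n = z = 1$) with arbitrarily chosen values of $k$, $\cos\theta$, $a$ and $l_E$ (our test point, not parameters from the text).

```python
import numpy as np

# Reduced units: lambda_D = 1, D = 1, n = 1, z = 1 (symmetric z:z salt).
kmag, costh, a, lE = 0.7, 0.6, 0.8, 2.0    # arbitrary test point
ck = np.cos(kmag * a)                      # cos(k a)
b = kmag * costh / lE                      # z k_par / l_E (field term)
c = ck / 2.0                               # mu n k^2 v~_++ / D = cos(ka)/(2 lambda_D^2)

# A and B B^dag from Eq. (matrices) for the (+,-) pair (common factor D dropped).
A = np.array([[-kmag**2 - c - 1j*b,  c                 ],
              [ c,                  -kmag**2 - c + 1j*b]])
Q = 2 * kmag**2 * np.eye(2)                # B B^dag / D

# Solve A C + C A^dag = -Q by vectorization (row-major Kronecker identity).
I = np.eye(2)
L = np.kron(A, I) + np.kron(I, A.conj())
C_num = np.linalg.solve(L, -Q.reshape(4)).reshape(2, 2)

# Closed form, Eq. (C_symmetric), plus the self-term n = 1 on the diagonal.
field2 = (costh / lE)**2                   # zbar^2 cos^2(theta)/l_E^2
d1 = ck + 2 * kmag**2
d2 = ck + kmag**2 + field2
Cpp = 1 - ck * (ck + 2 * (kmag**2 + field2)) / (2 * d1 * d2)
Cpm = ck * (ck + 2 * (kmag**2 - 1j * b)) / (2 * d1 * d2)
C_formula = np.array([[Cpp, Cpm], [np.conj(Cpm), Cpp]])

print(np.abs(C_num - C_formula).max())     # tiny: the closed form solves Eq. (mat_eq)
```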
A visualization of the correlations can be obtained by plotting the {\it pair-correlation function},
\begin{eqnarray} \label{pair_corr}
h_{\alpha\beta}({\bf r})=\frac{1}{n_{\alpha}^{0}n_{\beta}^{0}}\langle\delta n_{\alpha}\left(0\right)\delta n_{\beta}\left({\bf r}\right)\rangle-\frac{\delta_{\alpha\beta}\delta\left({\bf r}\right)}{n_{\alpha}^{0}}=\frac{1}{\left(2\pi\right)^{3}}\int{\rm d}^{3}k\,\frac{\widehat{C}_{\alpha\beta}\left({\bf k}\right)}{n_{\alpha}^{0}n_{\beta}^{0}}{\rm e}^{-i{\bf k}\cdot{\bf r}},\end{eqnarray}
where $\delta n_{\alpha}({\boldsymbol r},t)= n_{\alpha}({\boldsymbol r},t) -n^0_{\alpha}$. The pair-correlation function for symmetric ions is shown in Figs.~\ref{Fig2} and~\ref{Fig3}.
The correlation function is rescaled by $(2e^2z^2/k_B T \varepsilon_0 \varepsilon)^{3/2}\sqrt{n}$ and depends on two dimensionless parameters: $z \lambda_D/l_E$, which is the normalized electric field $E_0$, and $a/\lambda_D$, the rescaled cutoff length.
\begin{figure}
\includegraphics[width = 0.85 \columnwidth,draft=false]{Fig2}
\caption{\textsf{Color plot of the pair-correlation functions, $h_{++}({\bf r})$ (top) and $h_{+-}({\bf r})$ (bottom), rescaled by $(2e^2z^2/k_B T \varepsilon_0 \varepsilon)^{3/2}\sqrt{n}$, for a symmetric ionic solution ($z_+=z_-=z$) at equilibrium. In cylindrical coordinates, the two axes, $r_{\parallel}/\lambda_D$ and $r_{\perp}/\lambda_D$, denote the axial position and radial distance, rescaled by the screening length, $\lambda_{D}$. The three columns differ by the value of $a/\lambda_D$, where $a$ is the finite ion-size parameter, as is indicated above each column.}}
\label{Fig2}
\end{figure}
\begin{figure}
\includegraphics[width = 0.68 \columnwidth,draft=false]{Fig3}
\caption{\textsf{The pair-correlation functions, $h_{++}({\bf r})$ and $h_{+-}({\bf r})$, as in Fig.~\ref{Fig2}, but driven out of equilibrium by an external electric field $E_0$ pointing in the $r_{\parallel}$ direction with rescaled field intensity $z \lambda_D/ l_E= 2$ (${E_0 =2k_B T/e z \lambda_D}$). Left column: $a/\lambda_D= 0.5$; right column: $a/\lambda_D=1.5$.}}
\label{Fig3}
\end{figure}
At equilibrium, $z \lambda_D/l_E=0$, and $h_{\alpha \beta}({\bf r})$ is spherically symmetric (Fig.~\ref{Fig2}). For $a/\lambda_D=0$ (point-like ions), the standard ionic atmosphere, $h_{\alpha \beta}(r) \propto {\rm e}^{-r /\lambda_D}/r$, is obtained. Equal charges are depleted, opposite charges are more abundant around the test charge, and the correlation function diverges at $r \to 0$. For small non-zero $a/\lambda_D$, the pair-correlation function behaves similarly to the point-like case, except that it has a finite value at $r \to 0$. The value is positive for $h_{++}(r)$ and $h_{--}(r)$ and negative for $h_{+-}(r)$. For larger $a/\lambda_D$ values ($1 \lesssim a/\lambda_D\lesssim 2.8$ for symmetric ions), the pair-correlation function decays in an oscillatory manner~(for a full derivation, see Ref.~\cite{Adar2019}). Similar damped oscillations were shown to exist in highly concentrated solutions~\cite{Mezger2008,Bazant2011,Kornyshev2008a}. When $a/\lambda_D$ is very large ($\gtrsim 2.8$ for symmetric ions, not shown in Fig.~\ref{Fig2}), the correlation function diverges with pure oscillatory modes, leading to unphysical long-range order~\cite{Adar2019}. For such high $a/\lambda_D$ values, which are reached only at very high concentrations (as high as $9$\,M for NaCl in water at room temperature, beyond the crystallization limit), the use of the modified cutoff potential cannot be justified.
When an electric field is applied to the system (Fig.~\ref{Fig3}), the pair-correlation function maintains rotational symmetry around the direction of the electric field, $\hat{E}_0$. Whereas $h_{++}(r)$ is symmetric under reflection with respect to the electric-field direction, the symmetry is broken for $h_{+-}(r)$. An ion moving in the direction of the electric field is likely to have an oppositely charged ion behind it, yet far enough from the excluded-volume region. On the other hand, there is a depletion area of oppositely charged ions in front of the moving ion at short distances, and at larger distances, oppositely charged ions are more abundant. For large $a/\lambda_D$ values, the electric field destroys the concentric rings of positive/negative charge density of the equilibrium pair-correlation function. Further analysis of the pair-correlation function in the presence of an applied field can be found in Ref.~\cite{Frusawa2022}.
\pagebreak
Substituting Eq.~(\ref{C12}) in Eq.~(\ref{cond3}), and performing the angular part of the $k$-space integral, we obtain the following expressions for the conductivity corrections $\kappa_{{\rm hyd}}$ and $\kappa_{{\rm el}}$,
\begin{eqnarray} \label{kappa_el}
\kappa_{{\rm hyd}} =&&\frac{2\kappa_{0}}{\pi\gamma}\frac{r_{s}l_{E}^{3}}{\lambda_{D}^{3}}\int\limits _{-\infty}^{\infty}{\rm d}k\,\frac{g\left(k\right)\cos\left(ka\right)}{f^{2}\left(k\right)}\bigg\{3\gamma^{2}\sqrt{\frac{g\left(k\right)}{f\left(k\right)}}\left(1+\frac{\lambda_{D}^{2}}{l_{E}^{2}}\frac{f\left(k\right)}{g\left(k\right)}\right)\tan^{-1}\left(\frac{\lambda_{D}}{l_{E}}\sqrt{\frac{f\left(k\right)}{g\left(k\right)}}\right)\nonumber\\
&& \times\left(z_{+}z_{-}\cos\left(ka\right)+k^{2}\lambda_{D}^{2}\left(z_{+}^{2}+z_{-}^{2}\right)-\frac{f\left(k\right)h^{2}\left(k\right)}{\gamma^{2}g\left(k\right)}\right)\\
&& +\frac{\lambda_{D}}{l_{E}}\left[\frac{3f\left(k\right)h^{2}\left(k\right)}{g\left(k\right)}-3\gamma^{2}\left(z_{+}z_{-}\cos\left(ka\right)+\left(z_{+}^{2}+z_{-}^{2}\right)k^{2}\lambda_{D}^{2}\right)\left(1+\frac{2f\left(k\right)}{3g\left(k\right)}\lambda_{D}^{2}l_{E}^{-2}\right)\right]\bigg\}\nonumber\\\nonumber\\
\kappa_{{\rm el}}=&&-\frac{4\kappa_{0}}{\pi}\gamma z_{+}z_{-}l_{B}l_{E}^{2}\int\limits _{-\infty}^{\infty}{\rm d}k\,\frac{k^{2}\cos^{2}\left(ka\right)h\left(k\right)}{f\left(k\right)}\left[1-\frac{l_{E}}{\lambda_{D}}\sqrt{\frac{g\left(k\right)}{f\left(k\right)}}\tan^{-1}\left(\frac{\lambda_{D}}{l_{E}}\sqrt{\frac{f\left(k\right)}{g\left(k\right)}}\right)\right]\nonumber
\end{eqnarray}
where $r_s= 1/(6\pi\eta\bar{\mu})$ is a reduced Stokes radius with $\bar{\mu}=(\mu_++\mu_-)/2$, and $l_B = e^2/(4\pi \varepsilon_0 \varepsilon k_B T)$ is the Bjerrum length. We can see from Eq.~(\ref{kappa_el}) that the rescaled conductivities, $\kappa_{\rm hyd}/\kappa_0$ and $\kappa_{\rm el}/\kappa_0$, depend on the ratios between the length scales $\lambda_D$, $l_B$, $r_s$, $l_E$ and $a$, on the valencies $z_{\pm}$, and on the asymmetry parameter $\gamma$. Equation~(\ref{kappa_el}) is the main result of this paper. In the next sections, we explore different limits and cases.
\subsection{The conductivity in the weak $E_0$ limit}\label{vanishing}
The first case that we would like to examine is the limit $\lambda_D/l_E \to 0$, {\it i.e.}, $E_0\ll k_B T/(e \lambda_D)$. As an example, for aqueous solutions at room temperature with monovalent ions, $\lambda_D/l_E\approx 100 \,E_0{\rm [V/ \AA]}/\sqrt{n_{\rm salt}{\rm [M]}}$, which means that the $\lambda_D/l_E\to 0$ limit occurs when $E_0 \ll 10^{-4}\,{\rm V/ \AA }=1\,{\rm V/ \mu m }$ for $n_{\rm salt}=1$\,mM and ${E_0 \ll10^{-2}\,{\rm V/ \AA }=100\,{\rm V/ \mu m }}$ for $n_{\rm salt}=1$\,M.
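The order of magnitude quoted above is easy to verify directly from the definitions of $l_E=k_B T/(eE_0)$ and the Debye length of a 1:1 salt. The short script below is only a sanity check (standard physical constants, SI units); it reproduces the $\sim100$ prefactor up to order unity:

```python
# Sanity check of lambda_D/l_E ~ 100 E0[V/A]/sqrt(n[M]) for 1:1 aqueous salts
# at room temperature. All quantities in SI units.
import math

kB_T = 1.380649e-23 * 298.0        # thermal energy at T = 298 K [J]
e    = 1.602176634e-19             # elementary charge [C]
eps  = 78.3 * 8.8541878128e-12     # permittivity of water [F/m]
NA   = 6.02214076e23               # Avogadro number [1/mol]

def debye_length(n_molar):
    """Debye length [m] of a 1:1 electrolyte at concentration n_molar [mol/L]."""
    n = n_molar * 1e3 * NA         # number density of each ionic species [1/m^3]
    return math.sqrt(eps * kB_T / (2.0 * e**2 * n))

def field_ratio(E0_V_per_A, n_molar):
    """The dimensionless ratio lambda_D/l_E, with l_E = kB*T/(e*E0)."""
    l_E = kB_T / (e * E0_V_per_A * 1e10)   # convert V/Angstrom -> V/m
    return debye_length(n_molar) / l_E
```

For $E_0=1$\,V/\AA\ and $n_{\rm salt}=1$\,M this gives a ratio of order $10^2$, consistent with the estimate in the text.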
In this limit, Eq.~(\ref{kappa_el}) reduces to
\begin{eqnarray} \label{result1}
&&\kappa_{\rm hyd}/\kappa_0=-\frac{r_{s}}{\lambda_{D}}\frac{2}{\pi\gamma}\int\limits _{-\infty}^{\infty} {\rm d}x\,\frac{\cos(ax/\lambda_{D})}{\cos(ax/\lambda_{D})+x^{2}}\nonumber\\ \nonumber\\
&&\kappa_{\rm el}/\kappa_0=-\frac{l_{B}}{\lambda_{D}}\frac{z_{+}z_{-}\gamma}{3\pi} \int\limits _{-\infty}^{\infty} {\rm d}x\,\frac{x^{2}\cos^{2}(ax/\lambda_{D})}{\left(\cos(ax/\lambda_{D})+x^{2}\right)\left(\frac{1}{2}\gamma\cos(ax/\lambda_{D})+x^{2}\right)},
\end{eqnarray}
where we used the change of variables $x=\lambda_D k$. Although the integrals in Eq.~(\ref{result1}) cannot be performed analytically, they can be easily computed numerically.
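As an illustration of how these quadratures can be carried out, the sketch below evaluates the two integrals of Eq.~(\ref{result1}) for a symmetric monovalent salt ($\gamma=1$, $z_\pm=1$) with the parameters of Fig.~\ref{Fig4}, using a trapezoidal rule on a truncated grid. The signs and prefactors are taken as printed in Eq.~(\ref{result1}); the grid size and cutoff are illustrative choices:

```python
# Numerical evaluation of Eq. (result1) for a 1:1 salt (gamma = 1, z = 1).
# Lengths in Angstrom; parameters follow Fig. 4.
import numpy as np

l_B, r_s, a = 7.0, 1.5, 3.0

def corrections(n_molar):
    """Return (kappa_hyd/kappa_0, kappa_el/kappa_0) at concentration n_molar [M]."""
    lam = 4.30 / np.sqrt(2.0 * n_molar)       # Debye length of a 1:1 salt [A]
    x = np.linspace(-400.0, 400.0, 400_001)   # truncated grid for the x-integral
    dx = x[1] - x[0]
    c = np.cos(a * x / lam)
    hyd = c / (c + x * x)
    el = x * x * c * c / ((c + x * x) * (0.5 * c + x * x))
    trap = lambda f: dx * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule
    k_hyd = -(2.0 / np.pi) * (r_s / lam) * trap(hyd)
    k_el = -(1.0 / (3.0 * np.pi)) * (l_B / lam) * trap(el)
    return k_hyd, k_el
```

Both corrections come out negative and grow in magnitude with concentration, as in Fig.~\ref{Fig4}; close to the divergence threshold the denominators can vanish, and the simple grid above is no longer adequate.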
The rescaled conductivity correction terms, $\kappa_{\rm hyd}/\kappa_0$ and $\kappa_{\rm el}/\kappa_0$, are shown in Fig.~\ref{Fig4} on a semi-log plot as a function of $n_{\rm salt}$, the salt concentration, for monovalent salts, $z_{\pm}=1$. Both $\kappa_{\rm hyd}/\kappa_0$ and $\kappa_{\rm el}/\kappa_0$ approach zero in the infinite dilution limit ($n_{\rm salt}\to0$). One sees from the figure that $\kappa_{\rm hyd}/\kappa_0$ decreases as the concentration increases until a minimum is reached at $\sim 1\,{\rm M}$. Then, $\kappa_{\rm hyd}/\kappa_0$ increases until it diverges at a finite concentration. The minimum occurs very close to (but not exactly at) the onset of damped oscillations in the pair-correlation function, discussed in the previous subsection, while the divergence occurs exactly when the correlation function diverges. The second correction term, $\kappa_{\rm el}/\kappa_0$, shows a different behavior. It decreases as the salt concentration increases until it diverges to $-\infty$ at the same concentration where the correlation function diverges. We note that for $\gamma>2$, which is very uncommon for small inorganic ions, $\kappa_{\rm el}$ diverges prior to the divergence threshold of the correlation function, due to the term $\frac{1}{2}\gamma\cos(ax/\lambda_{D})+x^{2}$ in the denominator of the $\kappa_{\rm el}/\kappa_0$ expression in Eq.~(\ref{result1}).
The two integrals of Eq.~(\ref{result1}) can be approximated for small $a$ by approximating $\cos(a x/\lambda_{D})\approx 1$ in their denominators. The integrals can then be calculated analytically using the residue theorem, yielding
\begin{eqnarray} \label{approx}
\kappa/\kappa_0=1-\frac{r_{\rm s}}{\gamma \lambda_{\rm D}}{\rm e}^{-a/\lambda_{\rm D}}\,-\,\frac{z_{+}z_{-}\gamma}{12(1-\gamma/2)}\frac{ l_{B}}{\lambda_{\rm D}}\left(1-\sqrt{\frac{\gamma}{2}}+{\rm e}^{-2a/\lambda_{{\rm D}}}-\sqrt{\frac{\gamma}{2}}{\rm e}^{-\sqrt{2\gamma}a/\lambda_{{\rm D}}}\right).
\end{eqnarray}
The divergence that occurs in the exact result at high concentrations is not present in the analytical approximation. By taking $a\to0$ in Eq.~(\ref{approx}), the DHO result for the conductivity is exactly recovered~\cite{OnsagerFuoss1932},
\begin{eqnarray} \label{DHO}
\kappa/\kappa_0=1-\frac{r_{{\rm s}}}{\gamma\lambda_{{\rm D}}}-\frac{z_{+}z_{-}\gamma}{6(1+\sqrt{\gamma/2})}\frac{l_{B}}{\lambda_{\rm D}},
\end{eqnarray}
where both $\kappa_{\rm hyd}/\kappa_0$ and $\kappa_{\rm el}/\kappa_0$ (second and third terms on the right-hand-side, respectively) are inversely proportional to $\lambda_D$.
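For orientation, the DHO law of Eq.~(\ref{DHO}) can be evaluated in a few lines. The sketch below uses NaCl-like parameter values ($\gamma=1$, monovalent ions, with $r_s$ and $l_B$ as quoted later in Sec.~\ref{Comparison}); it is meant only to illustrate the $1/\lambda_D\propto\sqrt{n_{\rm salt}}$ scaling of both correction terms:

```python
# Debye-Hueckel-Onsager limiting law, Eq. (DHO), for a (z+ : 1) aqueous salt.
import math

def kappa_dho(n_molar, gamma=1.0, z_p=1, z_m=1, r_s=1.46, l_B=7.15):
    """Rescaled conductivity kappa/kappa_0; lengths in Angstrom, n in mol/L."""
    lam_D = 4.30 / math.sqrt(z_p * (z_p + 1) * n_molar)   # screening length [A]
    hyd = r_s / (gamma * lam_D)                           # hydrodynamic term
    el = z_p * z_m * gamma * l_B / (6.0 * (1.0 + math.sqrt(gamma / 2.0)) * lam_D)
    return 1.0 - hyd - el
```

Both corrections scale as $\sqrt{n_{\rm salt}}$, so the DHO conductivity decreases monotonically with concentration and misses the structure seen in the exact result at high concentrations.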
\begin{figure}
\includegraphics[width = 0.4 \columnwidth,draft=false]{Fig4}
\caption{\textsf{The rescaled conductivity corrections, $\kappa_{\rm hyd}/\kappa_0$ (blue) and $\kappa_{\rm el}/\kappa_0$ (red), of monovalent salt solutions, as a function of the salt concentration $n_{\rm salt}$ on a semi-log plot. The conductivity corrections are calculated from Eq.~(\ref{result1}) with the parameters $l_B=7 {\rm \AA}$, $r_s=1.5 {\rm \AA}$ and $a=3 {\rm \AA}$. The vertical dotted line is plotted at the concentration where the correlation function displays damped oscillations, and the vertical dotted-dashed line corresponds to the concentration where the correlation function, as well as $\kappa_{\rm hyd}/\kappa_0$ and $\kappa_{\rm el}/\kappa_0$, diverge.}}
\label{Fig4}
\end{figure}
\begin{figure}
\includegraphics[width = 0.42 \columnwidth,draft=false]{Fig5}
\caption{\textsf{The rescaled conductivity of 1:1 (blue) and 2:1 (red) electrolytes, as a function of the salt concentration $n_{\rm salt}=n^0_+$. Solid lines are numerical results, Eq.~(\ref{result1}), dashed lines are the analytic approximation, Eq.~(\ref{approx}) and dotted-dashed lines are DHO theory, Eq.~(\ref{DHO}). For the 1:1 case $\gamma=1$ and $r_s=1.5 {\rm \AA}$, while for the 2:1 case $\gamma=0.89$ and $r_s=2 {\rm \AA}$.
Other system parameters are: $l_B=7 {\rm \AA}$ and $a=3 {\rm \AA}$. Vertical dotted lines are plotted at the concentration where the correlation function displays damped oscillations for the 1:1 case (blue) and 2:1 case (red).}}
\label{Fig5}
\end{figure}
In Fig.~\ref{Fig5}, we explore the effect of multivalency on the conductivity by plotting the rescaled conductivity using our numerical results of Eq.~(\ref{result1}), the analytical approximation (Eq.~(\ref{approx})), and the DHO result (Eq.~(\ref{DHO})) of 1:1 monovalent electrolytes and 2:1 electrolytes ($+2e$ and $-e$ charges). We used a crude approximation that the mobility is inversely proportional to the valency (see Table~\Romannum{1} in Sec.~\ref{Comparison}) in order to estimate $\gamma$ and $r_s$, assigning a smaller $\gamma$ and larger $r_s$ for the 2:1 case. According to our numerical results, shown in Fig.~\ref{Fig5}, the rescaled conductivity decreases with multivalency, as is expected when the correlations become stronger. The threshold of decaying oscillations of the correlation function appears at lower concentrations for multivalent ions as compared to monovalent ions. The analytic approximation for the conductivity is shown to be in very good agreement with the numerical results below $1$\,M for the 1:1 case and below $\sim 0.5\,{\rm M}$ for the 2:1 case. Our results deviate substantially from the DHO result beyond a concentration of $\sim 10$\,mM, where a more pronounced deviation is seen for multivalent ions.
\subsection{The conductivity at finite electric field for symmetric electrolytes} \label{finite_field}
For non-zero values of $E_0$, we keep $\lambda_D/l_E$ finite in Eq.~(\ref{kappa_el}) and assume for simplicity that the ions are symmetric, {\it i.e.} $z_+=z_-=z$, leading from Eq.~(\ref{gamma}) to $\gamma=1$. The rescaled conductivity correction terms become,
\small
\begin{eqnarray} \label{electric_field1}
\kappa_{\rm hyd}/\kappa_0 =&&-\frac{2r_{{\rm s}}}{\pi\lambda_{{\rm D}}}\int\limits _{0}^{\infty}{\rm d}x\frac{\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)}{\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)+2x^{2}}\left[1-\frac{3x^{2}}{2\xi^{2}}\frac{\frac{3}{2}x^{2}\left(\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)+x^{2}+\xi^{2}\right)}{\xi^{3}\sqrt{\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)+x^{2}}}\tan^{-1}\frac{\xi}{\sqrt{\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)+x^{2}}}\right]\nonumber\\ \nonumber\\
\kappa_{\rm el}/\kappa_0= &&-\frac{l_{{\rm B}} z^2}{\pi\lambda_{{\rm D}}}\int\limits _{0}^{\infty}{\rm d}x\frac{x^{2}\cos^{2}\left(\frac{ax}{\lambda_{{\rm D}}}\right)}{\frac{1}{2}\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)+x^{2}}\left[\frac{1}{\xi^{2}} -\frac{1}{\xi^{3}}\sqrt{\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)+x^{2}}\tan^{-1}\frac{\xi}{\sqrt{\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)+x^{2}}}\right],
\end{eqnarray}
\normalsize
where $\xi\equiv z \lambda_D/l_E\propto E_0$.
In the $a\to 0$ limit, the integrals can be performed analytically. A convenient way to perform the integrals is to take the $a\to0$ limit already in the 3-dimensional integral expressions of Eq.~(\ref{binary_kappa}), and then perform the radial integration in $k$-space before the angular part. The Onsager-Wilson (OW) result is then recovered~\cite{Onsager1957},
\begin{eqnarray} \label{ons_E1}
\kappa_{\rm hyd}/\kappa_0=&&-\frac{r_{{\rm s}}}{8\lambda_{{\rm D}}\xi^{3}}\bigg[\left(4\sqrt{2}\xi^{3}-3\sqrt{1+\xi^{2}}+3\sqrt{2}\right)\xi\nonumber\\
&&+\,6\xi^{2}\sinh^{-1}(\xi)-3\left(1+2\xi^{2}\right)\tan^{-1}\left(\sqrt{2}\xi\right)+3\left(1+2\xi^{2}\right)\tan^{-1}\frac{\xi}{\sqrt{1+\xi^{2}}}\bigg]\nonumber\\\nonumber\\
\kappa_{\rm el}/\kappa_0 = && \frac{l_{{\rm B}}z^2}{4\xi^{3}\lambda_{{\rm D}}}\bigg[\xi\left(\sqrt{2}-\sqrt{1+\xi^{2}}\right)-\tan^{-1}(\sqrt{2}\xi)+\tan^{-1}\frac{\xi}{\sqrt{1+\xi^{2}}}\bigg].
\end{eqnarray}
\normalsize
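The field dependence of the electrostatic (relaxation) term in Eq.~(\ref{ons_E1}) is contained entirely in the bracket divided by $4\xi^{3}$. The sketch below isolates that dimensionless shape function; the prefactor $l_{B}z^{2}/\lambda_{D}$ is set to one, which is an illustrative normalization, not the paper's:

```python
# Dimensionless field dependence of kappa_el in the OW result, Eq. (ons_E1).
import math

def ow_el_shape(xi):
    """Bracket of the OW electrostatic term divided by 4*xi^3 (xi = z*lambda_D/l_E)."""
    s = math.sqrt(1.0 + xi * xi)
    bracket = (xi * (math.sqrt(2.0) - s)
               - math.atan(math.sqrt(2.0) * xi)
               + math.atan(xi / s))
    return bracket / (4.0 * xi ** 3)
```

As $\xi\to0$ this tends to $-1/[6(1+1/\sqrt{2})]\approx-0.098$, matching the electrostatic coefficient of the DHO result for $\gamma=1$, and it decays to zero at strong fields, a signature of the Wien effect.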
In Fig.~\ref{Fig6}, we show the rescaled conductivity, $\kappa/\kappa_0=1+\kappa_{\rm hyd}/\kappa_0+\kappa_{\rm el}/\kappa_0$ (both correction terms being negative), according to Eq.~(\ref{electric_field1}), as a function of $E_0$, and in comparison to the OW result, Eq.~(\ref{ons_E1}). Two monovalent electrolyte concentrations are calculated: $n_{\rm salt}=0.01\,{\rm M}$ and $n_{\rm salt}=0.1\,{\rm M}$. As the electric-field effect is more pronounced when the ionic interactions are stronger, we used system parameters that correspond to solvents with a low dielectric constant (compared to $\varepsilon_{\rm water}\simeq80$), such as methanol with $\varepsilon_{\rm methanol}\simeq33$. The figure shows that the conductivity increases when $E_0$ is increased. This is a manifestation of the Wien effect, where the electric field lowers the ionic correlations.
Additionally, $\kappa/\kappa_0$ saturates at high electric fields. Such high fields are often not accessible experimentally, as they introduce other effects such as Joule heating~\cite{Joule_heating}. The relative increase in $\kappa/\kappa_0$, induced by the electric field, is more pronounced at high concentrations as compared to low concentrations. Our results predict that the relative increase in $\kappa/\kappa_0$ is smaller than the increase predicted by the OW theory. This is due to the suppression of the electrostatic interactions at short distances, included in our theory. As in the low electric field case analyzed in the previous subsection (Sec.~\ref{vanishing}), the difference between our results for $\kappa/\kappa_0$ as compared to the $a\to0$ case (OW theory) increases with the ion concentration. We conclude that the Wien effect becomes more pronounced as the ion concentration increases, but to a lesser extent than the OW theory prediction.
\begin{figure}
\includegraphics[width = 0.4 \columnwidth,draft=false]{Fig6}
\caption{\textsf{The rescaled conductivity $\kappa/\kappa_0$ of monovalent electrolytes as a function of the electric field intensity, plotted for two concentrations: $n_{\rm salt}=0.01\,$M in blue and $n_{\rm salt}=0.1\,$M in red. The system parameters are: $l_B = 1.7\,{\rm nm}$ (appropriate for methanol $\varepsilon_{\rm methanol}=32.7$
at room temperature), $r_s = 2.5\,{\rm \AA}$ and $a = 3\,{\rm \AA}$. Full lines are numerical results of Eq.~(\ref{electric_field1}) and dashed lines are OW theory of Eq.~(\ref{ons_E1}).}}
\label{Fig6}
\end{figure}
By expanding Eq.~(\ref{electric_field1}) in powers of $\xi=z \lambda_{D}/l_{E}$ at weak electric fields, we see that the conductivity grows quadratically with $\xi\propto E_0$,
\begin{eqnarray}
\left[\kappa(\xi)-\kappa(0)\right]/\kappa_0=\frac{2}{5\pi\lambda_{{\rm D}}}\int\limits _{0}^{\infty}{\rm d}x\frac{x^{2}\left[r_{{\rm s}}+l_{B}z^2\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)\right]\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)}{\left[x^{2}+\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)\right]^{2}\left[2x^{2}+\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)\right]}\xi^{2}
+\mathcal{O}(\xi^4).
\end{eqnarray}
In the $\xi \to \infty$ limit, $\kappa_{\rm el}/\kappa_0\to 0$, while $\kappa_{\rm hyd}/\kappa_0$ approaches a constant value,
\begin{eqnarray}
\lim_{\xi\to\infty}\kappa_{\rm hyd}/\kappa_0=-\frac{2r_{{\rm s}}}{\pi\lambda_{{\rm D}}}\int\limits _{0}^{\infty}{\rm d}x\frac{\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)}{\cos\left(\frac{ax}{\lambda_{{\rm D}}}\right)+2x^{2}}.
\end{eqnarray}
The system behavior at strong electric fields can be understood from the correlation matrix, Eq.~(\ref{C_symmetric}). Taking the $\xi \to \infty$ limit is equivalent to $l_E\to 0$, and yields $\widehat{C}_{+-}(k)\to0$ and $\widehat{C}_{++}(k),\widehat{C}_{--}(k)\to-n\cos\left(ka\right)/[\cos\left(ka\right)+2k^{2}\lambda_{D}^{2}]$. The external electric field ``tears apart" pairs of oppositely charged ions and the correlation between such pairs vanishes in the $\xi \to \infty$ limit. However, since $E_0$ drags equally charged ions in the same direction, it does not destroy their (anti-) correlations. Thus, $C_{++}$ and $C_{--}$ remain finite. Since $\kappa_{\rm el}/\kappa_0$ is proportional to $C_{+-}$, it vanishes in the $\xi \to \infty$ limit. However, $\kappa_{\rm hyd}/\kappa_0$ depends on the difference between $C_{++}$ and $C_{+-}$. Therefore, it reaches a constant value in this limit.
\section{Comparison with experiments and simulations}\label{Comparison}
In Fig.~\ref{Fig7}, we compare our numerical results for weak electric fields (Sec.~\ref{vanishing}) to experimental data for different aqueous ionic solutions at $T=25^\circ{\rm C}$, taken from Refs.~\cite{Lide_Book} and~\cite{Lobo}, where an extensive body of measurements is summarized. The {\it molar conductivity}, $\kappa/n_{\rm salt}$, which is commonly used in experiments, is plotted as a function of $n_{\rm salt}$ for 1:1 electrolytes (NaCl and KBr), 2:1 electrolytes (BaCl$_2$ and MgCl$_2$), and a 3:1 electrolyte (LaCl$_3$). Note that the molar conductivity is proportional to the rescaled conductivity, $\kappa/\kappa_0$, since $\kappa_0$ is linear in $n_{\rm salt}$.
The electrolytes we consider have monovalent anions, $z_-=1$, and different cationic valencies $z_+$. Therefore, $n_{\rm salt}$ is the concentration of the cations, $n^0_{+}$.
At $T=25^\circ{\rm C}$, water viscosity is $\eta = 0.890\,{\rm mPa\cdot s}$~\cite{Korson1969} and the dielectric permittivity is $\varepsilon=78.3$~\cite{Malmberg1956}, yielding a Bjerrum length of $l_B=e^2/(4\pi \varepsilon_0 \varepsilon k_B T)=7.15\,\rm{\AA}$, and a screening length of $\lambda_{D}=1/\sqrt{4\pi l_{B}(z_+^2 n^0_++z_-^2 n^0_-)}=4.30\,[{\rm \AA}] /\sqrt{z_+(z_++1) n_{\rm salt}\,[{\rm M}]}$. In Table~\Romannum{1}, we summarize the values of the ionic radii and diffusion coefficients at infinite dilution for all the ions considered in Fig.~\ref{Fig7}. In Table~\Romannum{2}, we present the electrolyte asymmetry parameter $\gamma$, the cutoff length $a$, the reduced Stokes radius $r_s$, and the molar conductivity $\kappa_0/n_{\rm salt}$ at infinite dilution. They are all calculated from the parameters in Table~\Romannum{1} and the solution parameters $T$, $\varepsilon$ and $\eta$ mentioned above.
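The two numerical prefactors quoted above follow directly from the definitions. The sketch below reproduces them, using standard CODATA constants in SI units (converted to \AA\ at the end):

```python
# Bjerrum length and Debye screening length for aqueous solutions at 25 C.
import math

e, kB, eps0, NA = 1.602176634e-19, 1.380649e-23, 8.8541878128e-12, 6.02214076e23
T, eps_r = 298.15, 78.3

l_B = e**2 / (4.0 * math.pi * eps0 * eps_r * kB * T)   # Bjerrum length [m]

def lambda_D(z_plus, n_salt_molar):
    """Screening length [Angstrom] of a (z+ : 1) electrolyte; n_salt = n^0_+ [M]."""
    n_p = n_salt_molar * 1e3 * NA          # cation number density [1/m^3]
    n_m = z_plus * n_p                     # anions enforce electroneutrality
    inv_sq = 4.0 * math.pi * l_B * (z_plus**2 * n_p + n_m)   # z_- = 1
    return 1e10 / math.sqrt(inv_sq)
```

This gives $l_B\simeq7.15$\,\AA\ and, for example, $\lambda_D\simeq3.0$\,\AA\ for a 1\,M 1:1 salt, matching the prefactor $4.30\,$\AA$/\sqrt{z_+(z_++1) n_{\rm salt}\,[{\rm M}]}$.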
\begin{table}[h] \label{tab1}
\caption{The ionic radii~\cite{Shannon1976} and diffusion coefficients in the infinite dilution limit for aqueous solutions at $T=25^\circ $C~\cite{Lide_Book}. We use the six-coordinate ``effective ionic radii'' of Shannon. Other sets of ionic radii give very similar results~\cite{Shannon1976}.}
\begin{tabular}{ c c c }
\hline\hline
\,\,\,\,\,\,\,Ion\,\,\,\,\,\,\,\,\,& \,\,\,\,\,\,$r \rm{[\AA]}$\,\,\,\,\,\, & \,\,\,\,\,\,$D [10^{-5} \,{\rm cm}^2\, {\rm s}^{-1}]$\,\,\,\,\,\, \\ [0.5ex]
\hline
Na$^{+}$ & $1.02$ & 1.334\\
K$^{+}$ & $1.38$ & 1.957 \\
Ba$^{2+}$ & $1.35$ & 0.847 \\
Mg$^{2+}$ & 0.72 & 0.706 \\
La$^{3+}$ & 1.03 & 0.619\\
Cl$^{-}$ & 1.81& 2.032\\
Br$^{-}$ & 1.96& 2.080\\
\hline\hline
\end{tabular}
\end{table}
\begin{table}[h] \label{tab2}
\caption{The electrolyte asymmetry parameter $\gamma$ (Eq.~(\ref{gamma})), the cutoff length ${a=r_+ + r_-}$, the reduced Stokes radius ${r_s=k_B T/[3\pi \eta (D_+ + D_-)]}$, and the molar conductivity at infinite dilution $\kappa_0/n_{\rm salt}$ (Eq.~(\ref{kappa_0}) with $n_{\rm salt}=n^0_+$), where S is the siemens unit of electric conductance. All values are calculated from the parameters in Table~\Romannum{1} for aqueous solutions at $T=25^\circ $C.}
\begin{tabular}{ c c c c c}
\hline\hline
\,\,\,\,\,\,\,\,\,\,\,\,Salt\,\,\,\,\,\,\,\,\,\,\,\,\,& \,\,\,\,\,\,\,\,$\gamma$\,\,\,\,\,\,\,\, & \,\,\,\,\,\,\,\,$a[{\rm \AA}]$\,\,\,\,\,\,\,\, & \,\,\,\,\,\,\,\,$r_s[{\rm \AA}]$\,\,\,\,\,\,\,\, & \,\,\,\,\,\,\,\,$\kappa_0/n_{\rm salt} [\rm{cm^2 \cdot S\cdot mol^{-1}}]$\,\,\,\,\,\,\,\,\\ [0.5ex]
\hline
NaCl & $1$ & 2.83 & 1.46 & 126.3\\
KBr & $1$ & 3.34& 1.22 & 151.4\\
BaCl$_2$ & 0.86 & 3.16 &1.70& 279.6 \\
MgCl$_2$ & 0.84 & 2.53 & 1.79& 258.4\\
LaCl$_3$ & 0.73 & 2.84 & 1.85& 437.6\\
\hline\hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width = 1 \columnwidth,draft=false]{Fig7}
\caption{\textsf{The molar conductivity, $\kappa/n_{\rm salt}$, as a function of the salt concentration $n_{\rm salt}$. Two types of $1{:}1$ electrolytes are shown in (a); two types of $2{:}1$ electrolytes in (b); and one $3{:}1$ electrolyte in (c). Triangles are experimental data~\cite{Lide_Book,Lobo}; full lines are our numerical results, Eq.~(\ref{result1}); and dashed lines are the results obtained from DHO theory, Eq.~(\ref{DHO}). The electrolytes physical parameters are specified in Sec.~\ref{Comparison}.}}
\label{Fig7}
\end{figure}
Figure~\ref{Fig7} shows that our numerical results are in good agreement with the experimental data at high concentrations, without any fit parameters. Furthermore, as also shown in the figure, our numerical results present a significant improvement over the DHO theory. However, the results become less accurate for multivalent ions at high concentrations. For 1:1 electrolytes, deviations exceed $ 5 \%$ only at concentrations above $\sim 2\,$M. For 2:1 electrolytes, deviations of $5\%$ emerge at concentrations above $\sim 0.1\,$M, while for 3:1 electrolytes, such deviations occur already at much lower concentrations, above $\sim 0.02\,$M. The inaccuracy of our results for multivalent ions at high concentrations has several causes. First, multivalent ions introduce very strong electrostatic interactions that break the perturbative calculation. Second, the modified potential does not approximate well the Coulomb potential with a hardcore for multivalent ions at high concentrations, as demonstrated in Appendix~\ref{Testing} (see Fig.~\ref{Fig10}(a)). Finally, the high charge density orders the liquid around the ions. This ordering changes the dielectric constant $\varepsilon$ and the viscosity $\eta$, and these extra factors are not accounted for in our theory.
Figure~\ref{Fig8} summarizes different levels of approximation for the conductivity in units of $[{\rm S}/\mu {\rm m}]$ (rather than the rescaled conductivity). It shows experimental measurements of the conductivity of NaCl as a function of the concentration, compared to: (\romannum{1}) infinite dilution limit ($\kappa_0$) that is linear in $n_{\rm salt}$, (\romannum{2}) our numerical results (Eq.~(\ref{result1})), (\romannum{3}) our approximated results (Eq.~(\ref{approx})), and (\romannum{4}) the classical DHO theory (Eq.~(\ref{DHO})). The numerical results are in excellent agreement with experimental measurements for concentrations up to $3\,$M. The analytical approximation also agrees quite well with the experimental data. As expected, the DHO theory and the infinite dilution limit deviate from the experimental measurements at high concentrations (substantial deviations occur above $\sim 0.5\,$M).
\begin{figure}
\includegraphics[width = 0.37 \columnwidth,draft=false]{Fig8}
\caption{\textsf{The conductivity, $\kappa$, of an aqueous solution of NaCl at $T=25^\circ{\rm C}$, as a function of the salt concentration $n_{\rm salt}$. Black triangles are the experimental data \cite{Lide_Book,Lobo}; green dotted line is the conductivity at infinite dilution, $\kappa_0$; full blue line is obtained numerically from Eq.~(\ref{result1}); dotted-dashed purple line is plotted from our analytical approximation, Eq.~(\ref{approx}); and dashed red line is obtained from DHO theory, Eq.~(\ref{DHO}). The electrolyte physical parameters are specified in Sec.~\ref{Comparison}.}}
\label{Fig8}
\end{figure}
Our results in Sec.~\ref{finite_field} for the Wien effect at high concentrations should be compared to conductivity measurements at finite electric fields and high ionic concentrations. However, to the best of our knowledge, no experimental data are available in this regime. Moreover, little experimental data exist on the Wien effect even for dilute solutions. The reason, at least in part, lies in the experimental challenges involved in applying an external field while maintaining the system at a constant temperature.
\begin{figure}
\includegraphics[width = 0.4 \columnwidth,draft=false]{Fig9}
\caption{\textsf{The molar conductivity, $\kappa/n_{\rm salt}$, as a function of the electric field, $E_0$, excluding the hydrodynamic correction term, $\kappa_{\rm hyd}$ (in order to be consistent with the simulations, see Sec.~\ref{Comparison}). The system parameters match the implicit solvent simulation parameters in Ref.~\cite{Lesnicki2021}: $z_+=z_-=1$, $n_{\rm salt}=0.1\,$M, $T=300\,$K, $\varepsilon = 10$, $\mu = 3.14\,{\rm s/kg}$ and $a=r_++r_-=3.49\,{\rm \AA}$. The molar conductivity in our numerical results~(Eq.~(\ref{electric_field1})) and OW theory~(Eq.~(\ref{ons_E1})) is plotted without the respective hydrodynamic correction term, and compared to simulation data with implicit solvent, taken from Ref.~\cite{Lesnicki2021}, and to the infinite dilution limit.}}
\label{Fig9}
\end{figure}
Recently, field-dependent ionic conductivities were calculated from molecular dynamics simulations, using generalized fluctuation-dissipation relations~\cite{Lesnicki2020,Lesnicki2021}. This method yields the differential conductivity, ${\kappa_{\rm diff} \equiv {\rm d}\langle J_{\parallel}\rangle/{\rm d}E_0}$, related to the standard conductivity, ${\kappa=\langle J_{\parallel}\rangle/E_0}$, by ${\kappa=(1/E_{0})\int_{0}^{E_{0}}\kappa_{{\rm diff}}(E)\,{\rm d}E}$. In Fig.~\ref{Fig9}, the simulation results of Ref.~\cite{Lesnicki2021} for the molar conductivity are reproduced, where the differential conductivity is converted by integration to the standard conductivity. The simulations take into account the solvent only implicitly and do not account for the conductivity correction due to the counterflow of the solvent. Thus, they are compared to our numerical results, Eq.~(\ref{electric_field1}) and to the OW theory, Eq.~(\ref{ons_E1}), {\it without} the hydrodynamic correction term, $\kappa_{\rm hyd}$. While our numerical results deviate significantly from the simulations, they describe the same qualitative behavior and are in much better agreement with the simulations than the OW theory is. In particular, the simulations support our prediction that at high ionic concentrations, the relative increase of the conductivity due to the Wien effect is smaller as compared to the increase predicted by the OW theory. We note that for system parameters as in Fig.~\ref{Fig7}, the OW result does not make sense as it predicts negative conductivity for weak electric fields.
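The conversion used here, $\kappa(E_0)=(1/E_{0})\int_{0}^{E_{0}}\kappa_{{\rm diff}}(E)\,{\rm d}E$, amounts to a cumulative average of the differential conductivity over the field. A minimal sketch of this post-processing step (trapezoidal integration on the sampled field grid; the input curve is a placeholder, not the actual simulation data):

```python
# Convert a sampled differential conductivity kappa_diff(E) into the standard
# conductivity kappa(E0) = (1/E0) * integral_0^E0 kappa_diff(E) dE.
import numpy as np

def standard_from_differential(E, kappa_diff):
    """E: increasing field grid starting at 0; returns kappa on the same grid."""
    # cumulative trapezoidal integral of kappa_diff over E
    cum = np.concatenate(([0.0],
        np.cumsum(0.5 * (kappa_diff[1:] + kappa_diff[:-1]) * np.diff(E))))
    kappa = np.empty_like(cum)
    kappa[0] = kappa_diff[0]       # E0 -> 0 limit of the running average
    kappa[1:] = cum[1:] / E[1:]
    return kappa
```

For a constant $\kappa_{\rm diff}$ the two definitions coincide, while for an increasing $\kappa_{\rm diff}$ (as in the Wien effect) the standard conductivity lags behind the differential one.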
\section{Conclusions}\label{Conclusions}
This paper presents a theory for conductivity at high electrolyte concentrations, applicable to multivalent ions and finite electric fields. We used a stochastic density functional theory (SDFT) and a modified electrostatic potential that suppresses the unphysical short-range attraction between oppositely charged ions. At low electric fields, the theory is particularly accurate for monovalent salts, showing excellent agreement with experimental data at concentrations as high as a few molars, with no fit parameters. Its range of applicability decreases for multivalent ions, due to the strong electrostatic interactions that break the perturbative approach and the inaccuracy of the modified potential at high concentrations. Nevertheless, the theory provides accurate predictions for 2:1 and 3:1 electrolytes up to concentrations of $\sim0.1$\,M and $\sim0.02$\,M, respectively, without any fit parameters. This is far beyond the applicability range of the well-known Debye-H\"uckel-Onsager (DHO) theory.
For strong electric fields, we recover the Wien effect and show that, similarly to dilute solutions, the rescaled conductivity at high ionic concentrations displays a sigmoid-like behavior, in which the conductivity transitions between two limiting values as the field strength increases. The relative increase in the rescaled conductivity at high concentrations is smaller than the increase predicted by the OW theory, due to the suppression of the electrostatic interactions at short distances. Recent simulations performed in this concentrated regime with an implicit solvent show that our results present an improvement over the OW theory in capturing the Wien effect. In order to further test the theory, experiments at high concentrations and finite electric fields are needed. The theory can be extended to lower dimensions, with relevance to the Wien effect in nanofluidic pores and slits~\cite{Kavokine2019, Robin2021}.
\bigskip\bigskip
{\em Acknowledgements}~~
We would like to thank V. D\'emery, A. Donev, and G. Yossifon for discussions and suggestions, and B. Rotenberg for presenting us the simulations performed in his group. Y. A. is thankful for the support of the Clore Scholars Programme of the Clore Israel Foundation. This work was supported by the Israel Science Foundation (ISF) under Grant No. 213/19 and by the National Natural Science Foundation of China (NSFC) -- ISF joint program under Grant No. 3396/19.
\bigskip\bigskip
{\em Data Availability}~~
The data that supports the findings of this study are available within the article.
The rise of Social Media platforms has strengthened the interest of researchers in studying human behavior in different contexts. The potential of these platforms relies on the fact that users' behavior can be traced during, or even long after, it has been manifested, which facilitates the work of behavioral scientists in tracking it under different circumstances. This is possible thanks to the possibility of crawling real-time data from the users, and also to the fact that most data remain stored or published for long periods of time \cite{bayerl2014social}. Data, however, must be analyzed with an adequate approach, depending on their type (\textit{e.g.}, based on interactions, text or images) and their source (\textit{e.g.}, the type of social platform).
Taking into consideration that most of the content published on the Internet is textual, it is unsurprising that one of the most frequently used approaches for online pattern extraction comes from Natural Language Processing (NLP). This discipline uses a set of computational methods for making human language accessible to computers, and more specifically for giving computers the ability to understand and generate human language \cite{eisenstein2019introduction}. NLP techniques are used in both academia and industry for text analysis applications in areas such as medicine \cite{wang2018clinical,savova2019use}, mental health \cite{calvo2017natural,stewart2021applied}, economy \cite{fisher2016natural} or crime prevention \cite{schmidt2017survey}.
One of the areas that has benefited from NLP techniques in recent years is the study of extremist discourse, particularly due to the increasing use of Social Media by different extremist groups as a loudspeaker for disseminating their ideologies. While the first relevant extremist movements of the century (\textit{e.g.} the 9/11 attacks) took advantage of emails for their communication and organization, the growth of online platforms such as blogs, forums and finally Social Media platforms (\textit{e.g.} Twitter or Facebook) has changed the way extremists communicate, recruit and disseminate their ideas \cite{dean2012dark}. The rise of groups such as the Islamic State or the Alt-right, together with their use of online Social Media platforms for different objectives \cite{jawhar2016terrorists}, has represented a threat for many countries, especially considering that extremism can facilitate the justification of violent actions to achieve a movement's agenda \cite{thomas2012responding}. This threat led different countries to finance research projects and other initiatives related to the study of the traces that extremist users leave online, with the aim of identifying early behaviors and stopping individuals before they embrace violent extremism. In fact, during the worst days of the jihadist threat (between 2015 and 2018), the European Union invested in several research projects where NLP was applied to track terrorism and online extremism \cite{bouzar2018stages,fernandez2018contextual,florea2019complex,torregrosa2018risktrack}. The core of most of the initiatives aimed at countering this phenomenon was the detection and classification of extremist content that could lead people to adopt these ideologies. Machine learning techniques made a great contribution to this purpose (see, for example, Scanlon \& Gerber \cite{scanlon2014automatic}).
As stated before, the use of NLP techniques has led to several contributions focused on extremism research. After a fruitful period of research from different perspectives aimed at studying and analyzing the extremism phenomenon, a few systematic surveys have approached the specific relationship between NLP and extremism research. These systematic reviews can be divided into two types. The first type has analyzed NLP contributions to areas conceptually related to extremism, such as hate speech \cite{fortuna2018survey} or law enforcement \cite{edwards2015systematic}. The second type gravitates around extremism itself, including NLP as a key part of its identification \cite{aldera2021,Gaikwad2021}. However, reviews belonging to this latter type suffer from two main limitations. On the one hand, their content is restricted to the specific task of detection, not covering the rest of the whole data mining process \cite{Gaikwad2021}. On the other hand, they lack depth when studying the NLP approaches under focus \cite{aldera2021}, failing to provide a thorough description of the diverse spectrum of techniques used in both descriptive and detection processes.
This article aims to cover the gap left by these prior surveys by placing an emphasis on NLP contributions to extremism analysis (including both description and classification/detection tasks), with a more comprehensive and critical approach to the different types of NLP techniques used to date. To this end, a systematic review is conducted to collect and systematically analyze the literature regarding NLP contributions to the study of extremism. This review presents a whole picture of the state of the art of this research field, from both a descriptive and a comparative perspective. The former focuses on describing the features of the articles and the content they study, whereas the latter compares their outcomes to extract useful insights for researchers. To do so, five research questions (summarized in Fig. \ref{fig:index}) are formulated to orchestrate the contributions of this review:
\begin{itemize}
\item \textbf{RQ1. What are the current topics and contributions from NLP to extremism research?}
This question aims to highlight the most relevant topics analysed in the articles, such as the type of extremism addressed or the platform used as a data source, among others. This will help present a general picture of the research field.
\item \textbf{RQ2. What NLP techniques are used on extremism research?}
After the screening process, the NLP techniques used by each of the articles included in the review will be extracted. They will then be briefly described and compared, with the aim of showing their main contributions and differences.
\item \textbf{RQ3. How have NLP techniques been applied in the field of extremism research?}
The different applications of the NLP techniques found in the literature will be categorized according to their approach: either the description of extremist texts or their classification. The main extremist discourse features found by the articles will be highlighted, together with the machine learning algorithms used to identify extremist texts.
\item \textbf{RQ4. What NLP software tools are commonly used on extremism research?}
The objective of this research question is to compare the different NLP tools (open-source or commercial) used in the reviewed articles.
\item \textbf{RQ5. Which publicly available datasets or datasources have authors used to conduct NLP experiments on extremism research?}
This research question addresses the availability of public datasets and datasources including extremist content, with the aim of facilitating researchers' experiments.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=0.65\linewidth]{Images/ResearchQuestions.pdf}
\caption{Summary of the research questions.}
\label{fig:index}
\end{figure}
The main contributions of the article can be summarized in six points:
\begin{enumerate}
\item It provides a general picture of the theoretical foundations behind the concept of "extremism", discussing its differences and similarities with other concepts that are often misused as synonyms in the literature.
\item It briefly defines the concept of extremist discourse, including some key elements that are present in this type of discourse.
\item It presents an updated picture of the NLP techniques (including pre-processing techniques) used in extremism research, together with an analysis and comparison of their advantages and disadvantages.
\item It summarizes the different applications that NLP techniques can have in extremism research, such as discourse description and classification. The main machine learning algorithms used to identify extremist content are also highlighted.
\item It presents different available tools, together with open datasets and datasources regarding extremism, which may be helpful for authors interested in conducting future experiments.
\item It highlights future trends, challenges and directions of this field, based on the conclusions extracted from the analysis.
\end{enumerate}
A summary of the structure of the paper can be seen in Figure \ref{fig:structure}. The paper is organized as follows:
Section \ref{state_of_the_art} defines what is understood as extremism, the differences between extremism and related concepts, and what defines the concept of extremist discourse. Section \ref{methodology} explains how the review was planned and conducted, including the inclusion and exclusion criteria and a brief summary of the process. Section \ref{general} presents a general descriptive analysis of the outcomes of the search conducted, including publication trends and the main keywords associated with the articles. Section \ref{Techniques} describes and compares the different NLP techniques used by the authors. Section \ref{Aplications} focuses on the applications of these techniques, dividing them into two approaches, text description and text classification, and including the machine learning algorithms used for the latter task. Section \ref{Software} describes the NLP open datasets, datasources and tools used by the authors. Finally, Section \ref{Discussion} answers the research questions, presents future trends, challenges and directions of the area, and draws the final conclusions.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{Images/StructurePaper.pdf}
\caption{Structure of the overall review.}
\label{fig:structure}
\end{figure}
\section{Contextualizing the concept of extremist discourse}
\label{state_of_the_art}
The definition of extremism has traditionally led to different misconceptions in the literature, especially for authors with little background in the social sciences. This section deals with the different definitions around this topic. To do so, the first subsection analyses the differences between extremism and radicalisation, two concepts that are frequently used interchangeably \cite{schmid2013radicalisation}. The second subsection briefly presents other concepts related to extremism, including their definitions and relationships with it. Finally, the last subsection presents how the concept of extremism is used in this article, including an operativization of extremist language that will act as a framework against which the different articles reviewed can be compared.
\subsection{Extremism and radicalisation: differences and similarities}
The literature shows that extremism and radicalisation are often used as synonyms or interchangeable terms referring to the same phenomenon, which leads to the false idea that both terms mean the same thing. However, while authors do not usually distinguish between them from a methodological perspective, there are theoretical differences that make both terms conceptually distinct. While there is no academic consensus on the definitions of extremism and radicalisation \cite{van2019subjectivity}, the different perspectives concerning their relationship can be summarized into three main approaches:
\begin{enumerate}
\item \textbf{Both concepts are synonyms}. This could be related to the use of both terms in political discourse, which has turned them into pejorative concepts that are used interchangeably \cite{schmid2013radicalisation}.
\item \textbf{Both concepts are different, but one of them is part of the other}. In this line, several articles use the concept of radicalisation to refer to the psychological process prior to involvement in terrorism and extremism \cite{schuurman2018reconsidering}.
\item \textbf{Both concepts are different, without a necessary relationship between them}. Regarding this approach, Botticher \cite{botticher2017towards} conducted a deep analysis of the historical roots of these concepts in order to define the differences underlying them. Essentially, the term radicalisation was born during the 18th century as a way to define a movement against the establishment, but one that is not inherently violent or positioned against democratic values. Meanwhile, the concept of extremism refers to an anti-democratic movement that stands against "all those who do not embrace its dogmatic recipe for a transformation of society". Another reference to this work can be found in Schuurman and Taylor \cite{schuurman2018reconsidering}, who highlight that radicalisation, understood in its historical context, does not necessarily imply a negative connotation of "change" of the socio-political order, while extremism does.
\end{enumerate}
Concerning the present review, it is necessary to keep an open position towards these three approaches. Extremism will be considered the core concept of this review, and it will therefore be preferred over radicalisation (as all the social movements of interest for this article are, essentially, those against democratic values). However, due to the misconception or confusing use of both terms in the literature, both radicalisation and extremism will be used as keywords when searching the databases during the article gathering process. This decision makes it possible to include articles both from authors considering the terms synonyms and from those using one as part of the other.
\subsection{Extremism and other related concepts}
Similarly to the terms extremism and radicalisation, there are other concepts whose use in the context of extremism research is currently confusing. While some of these terms are closely related, they do not share the same theoretical definition.
Table \ref{tab:concepts_related_extremism} includes some of these terms, their definitions, their differences with the concept of extremism and an example from the literature for each of them. Taking into account that the main characteristic for classifying a movement as extremist is that it goes against democratic values, three different types of concepts related to extremism can be found in this table. The first two terms (supremacism and sectarianism) are actually subtypes of extremism, since both are ideological movements that try to suppress or limit certain fundamental democratic values of other social groups. When these ideological movements use violence to try to achieve their objectives, they constitute a type of terrorism (third term in the table). Finally, the last three terms shown in the table (polarization, nationalism and fundamentalism), although related to extremism, do not necessarily share its main characteristic of going against democratic values.
There are other concepts that, while appearing related to extremism, are in fact manifestations of the violence and discrimination underlying it. Some examples are hate speech \cite{olteanu2018effect}, racism \cite{fuchs2016racism} or stalking/cyber-stalking \cite{kruglanski2020psychology}. The creation of fake news \cite{spohr2017fake} and its relationship with extremism represents another rising problem that has attracted the attention of researchers.
\begin{table}
\centering
\caption{Concepts, definitions and distinction from extremism.}
\begin{tabular}{|l|p{4cm}|p{4cm}|p{3.2cm}|}
\hline
\textbf{Concept} &
\textbf{Definition} &
\textbf{Distinction from extremism} &
\textbf{Example of the concept}\\ \hline
\textit{Supremacism} &
Ideology that assumes that one group is naturally superior to another one, due to their race, sex, economic status, nation, etc. \cite{schaefer1990racial} &
Could be a subtype of extremism, as supremacist groups are contrary to the existence of equal rights.
&
White supremacist movement \cite{kantrowitz2015ben} \\ \hline
\textit{Sectarianism} &
Form of discrimination between groups based on a specific factor. For years it was limited to religion, but nowadays this concept is technically similar to supremacism \cite{phillips2015sectarianism} &
Would be a subtype of extremism, like supremacism, as it is contrary to the existence of equal rights.
&
Conflicts between Nationalists and Unionists in Northern Ireland \cite{cairns1998conflict} or political disparity between Shia and Sunni Muslims \cite{wehrey2017beyond}
\\ \hline
\textit{Terrorism} &
Systematic use of violence, propaganda and fear towards a specific population to achieve ideological objectives \cite{lopez2016boko} &
Always implies violence, while extremism does not necessarily use it. However, both are against one or more fundamental values of a society.
&
IRA in Ireland/North Ireland \cite{pruitt2007readiness}, ETA in Spain \cite{shepard2002eta}, FARC in Colombia \cite{saab2009criminality}, The Islamic State \cite{roy2017jihad} or Al'Qaeda \cite{burke2004qaeda} \\ \hline
\textit{Polarization} &
Ideological movement towards a more extreme point of view in whatever direction is indicated by the member's predeliberation tendency \cite{sunstein1999law}
&
It is not necessarily violent or against fundamental values of a society, as occurs with radicalisation.
&
Political or ``partisan" polarization \cite{prior2013media} \\ \hline
\textit{Nationalism} &
Ideology based on the nodal point ``nation", on which a community is tied to a certain space, and that is structured through the opposition between the nation and different outgroups. \cite{de2017populism} &
Does not necessarily imply a negative connotation. When it turns extremist, it becomes supremacism.
&
Catalonia, Scotland and Canada have some renowned political movements related to nationalism \cite{keating1996nations} \\ \hline
\textit{Fundamentalism} &
Tendency to literally follow certain dogmas or ideologies based on the "fundamental" and unchangeable practices of the past. As with sectarianism, it has a religious connotation \cite{hunsberger1995religion} &
Is not necessarily violent or against democratic values.
&
The ``Amish" (example of christian fundamentalist group) \cite{hill2005psychology}. \\ \hline
\end{tabular}
\label{tab:concepts_related_extremism}
\end{table}
\subsection{Definition and operativization of extremist discourse}
\label{operativization}
Up to this point, a distinction between the concepts of radicalisation and extremism has been presented, choosing extremism as the key concept to justify the aims of this article, and extremism has been compared with other concepts that tend to appear alongside it. As has been stated, this term can have different meanings depending on the approach taken by the author, which is why it is relevant to establish a clear working definition. In this review, our definition of extremism will be "\textit{an ideological movement, contrary to the democratic and ethical values of a society, that uses different methods, including violence (physical or verbal), to achieve its objectives}".
Following this definition, a second step is to clarify what this article means by extremist discourse. While it could be defined as "\textit{the use of language held by people when expressing their extremist views}", it should be noted that authors have highlighted several features that distinguish an extremist narrative from a regular discourse. These features, derived from different authors \cite{ashour2010online,bennett2011war,fortuna2018survey,sakki2016discursive,torregrosa2020linguistic}, can be summarized as follows:
\begin{itemize}
\item \textbf{Types of extremist narrative:} there are several ways in which extremist narratives try to justify their vision and objectives. Ashour \cite{ashour2010online} divided these narratives into five categories: political, historical, socio-psychological, instrumental and theological/moral.
\begin{itemize}
\item \textbf{Political:} the discourse includes references to grievances of one or more groups towards another group.
\item \textbf{Historical:} legitimization of the political grievance narratives through the use of historical examples and similes.
\item \textbf{Socio-psychological:} glorification of acts against the system, whether violent or not.
\item \textbf{Instrumental:} justification of violence and "self-defense" as a way of reaching objectives.
\item \textbf{Theological/moral:} legitimization of actions or reactions against political grievance or social oppression through religion, morality or ethics.
\end{itemize}
\item \textbf{Linguistic style:} the narrative styles or topics previously mentioned are built on a specific vocabulary and style that helps extremists structure their discourse. Several articles have found differences in the linguistic style of radical and extremist texts compared to regular samples of text \cite{cohen2014detecting}. For example, a higher use of first and third person plural pronouns, a more negative tone and the use of more words related to negative topics are common in these texts \cite{torregrosa2020linguistic}.
\item \textbf{Use of discursive resources such as hate speech, otherness or war narrative:} extremist texts tend to use discursive resources to justify their actions and ideas towards others. Some of these techniques have been studied in depth, such as hate speech \cite{fortuna2018survey}, otherness \cite{sakki2016discursive} or the use of war terminology to create "enemies" and a "call to action" for others \cite{bennett2011war}.
\end{itemize}
At this point, both the definition and the operativization of extremist discourse have been stated. This type of discourse is characterized by the use of specific narratives, an aggressive and polarized linguistic style, and several techniques oriented towards justifying a feeling of superiority or inferiority towards another group. The following sections of this article review how authors have used NLP to detect and describe extremist discourse on Social Media, and the outcomes they have achieved.
\section{Methodology}
\label{methodology}
This section describes the process carried out to conduct the survey of the articles that apply NLP to extremism research. This process was conducted through a systematic approach, extracting all the articles from four databases: Scopus, ScienceDirect, IEEE Xplore and Web of Science.
Concerning the thesaurus used for the search, it was decided to include both the terms extremism and radicalisation. The reason behind this decision is that, as stated before, authors quite commonly misuse these concepts as synonyms \cite{botticher2017towards,schmid2013radicalisation}.
Second, while the thesaurus "Natural Language Processing" was included, it was decided to expand the search with different subtopics, such as "Sentiment analysis", "Topic detection" and "Semantic analysis". Finally, due to the recent contributions of deep learning to natural language processing \cite{young2018recent}, it was decided to also include the subtopic "Deep learning" in the search.
Therefore, the search terms finally included in the searching process are presented below:
\\
\textit{("Natural Language Processing" OR "Sentiment Analysis" OR "Topic Detection" OR "Semantic Analysis" OR "Deep Learning") AND ("Extremism" OR "Radicalization")}
\\
No time limits were set when conducting the review, meaning that the articles could have been published in any year. The extraction was conducted in January 2021, and 729 documents were found in the different databases. Table \ref{screen_databases} shows the distribution of articles found per database. After deleting duplicates and non-scientific documents (e.g. indexes), 675 articles remained in the survey.
\begin{table}
\centering
\caption{Articles extracted from the different databases that apply NLP to extremism research.}
\begin{tabular}{|l|r|}
\hline
\textbf{Datasource} & \textbf{No. Articles} \\ \hline
ScienceDirect & 95 \\ \hline
Scopus & 573 \\ \hline
Web of Science & 41 \\ \hline
IEEE Xplore & 20 \\ \hline
\end{tabular}
\label{screen_databases}
\end{table}
After the searching process, a general screening of the articles was conducted. This screening included checking the title, the abstract and the methodology to find out whether the articles met the inclusion criteria of the review. These criteria can be summarized as:
\begin{enumerate}
\item The documents shall empirically apply NLP to extremism description or classification.
\item The analysis conducted in the documents shall be quantitative.
\item The documents shall clearly state the NLP techniques used to conduct the analysis.
\item The documents shall present a clear methodology, including all the scores and the process followed to conduct the analysis.
\item The documents shall be written in English.
\end{enumerate}
After this general screening, 70 documents remained in the review. Next, a more exhaustive review was conducted over those 70 articles, reading the full content of each document and excluding those not meeting the criteria presented above. After this second screening, 6 articles were discarded, and the remaining 64 articles were finally included in the review.
\section{General descriptive analysis of the articles}
\label{general}
This section presents a general descriptive analysis of the articles finally included in the review. First, a general introduction is presented, reviewing the publication years and the types of extremism addressed. Then, to identify the most relevant NLP-related topics covered by the selected articles, a textual analysis of their indexing keywords has been performed. This description is also used to structure the following sections of the paper, as it provides a general picture of the main topics addressed by the reviewed papers.
Analyzing the timeline of the publications reviewed and the type of extremism addressed, it can be seen that interest in applying NLP to study extremism has been increasing in recent years, as shown in Fig. \ref{fig:type_radical}. This supports the ideas presented in the introduction of this article: most of the articles were published during or after 2015, which overlaps with the period when ISIS was most active.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{Images/Publication_year.png}
\caption{Publication year and type of extremism addressed by the articles included in the survey.}
\label{fig:type_radical}
\end{figure}
Besides, as Fig. \ref{fig:type_radical} confirms, the most frequently addressed type of extremism in the articles is jihadi extremism, with a significant gap with respect to the rest. In general terms, five types of extremism are approached in the literature reviewed: religious (all of the articles concerning jihadism), political (far-right), political mixed (concerning far-right/far-left), war (concerning conflicts in different countries, such as Afghanistan) and mixed (studying both religious and political extremism). Since 2015, the number of studies combining NLP and extremism has grown year by year, with a substantial increase in recent years. In this last period, while jihadi extremism has attracted more interest, political extremism has remained relatively steady. Therefore, it can be concluded that the two predominant types of extremism analyzed are religious and political.
Continuing the preliminary analysis, to determine the most common topics associated with the search terms, a textual analysis of the keywords of the reviewed articles has been performed. For this purpose, Fig. \ref{fig:WordCloudKeywords} shows a word cloud with the 30 most frequently used keywords in the articles (keywords used as search terms were excluded from the count).
\begin{figure}[h]
\centering
\includegraphics[width=0.65\linewidth]{Images/WordCloudKeywords.png}
\caption{Word cloud of keywords extracted from the articles analysed.}
\label{fig:WordCloudKeywords}
\end{figure}
As can be seen, the different keywords can be grouped under four broad concepts. First, different NLP techniques are mentioned in the keywords (e.g. topic modeling, sentiment classification or semantics). Second, the sources of the data analyzed also appear frequently among the keywords (e.g. Twitter, social media, YouTube, web pages, or Dabiq, a jihadi magazine), as well as specific tools (e.g. LIWC). Third, different keywords related to extremism are mentioned (e.g. terrorism, ISIS, far-right, extreme right, hate speech, online radicalisation, or radicalism). Finally, some keywords are related to the methods applied to detect extremist content, including classification techniques (machine learning, classification, data/text, logistic regression or feature engineering). It should be mentioned that, while not the objective of this survey, Social Network Analysis (SNA) appeared among the most used keywords (as Social Network Analysis, Social Networks and Network Analysis), being a concurrent approach when conducting NLP analyses.
\section{NLP techniques for extremism research}
\label{Techniques}
The main objective of NLP techniques is to transform free text into structured data by capturing its lexical, syntactic and semantic information to acquire or infer new knowledge. Considering this, the NLP process can be divided into two main phases: text pre-processing (simplifying and preparing the text for its analysis) and feature generation (transforming the text into a structured data representation suitable to be used by the different algorithms or methods of analysis). According to this division, the following subsections present a detailed analysis of the techniques used by the articles reviewed for each of those specific phases.
\subsection{Text pre-processing}
\label{pre-processing Techniques}
The pre-processing of textual data is a key part of NLP, as it helps to identify and establish the fundamental units that will be assessed during the analysis \cite{kannan2014preprocessing,vijayarani2015preprocessing}. This process includes a set of techniques that allow NLP algorithms to compute and analyse words, simplifying and preparing the text for its analysis. The pre-processing techniques mentioned in the articles are:
\begin{itemize}
\item \textbf{Tokenization:} Process of dividing a sentence into smaller units (tokens), such as words.
\item \textbf{Cleaning:} Removal of strange or non-informative characters from the text, such as URLs, symbols, punctuation, hashtags or other special characters.
\item \textbf{Stop-words:} Removal of words that occur frequently and do not carry relevant information in most contexts (such as articles or prepositions).
\item \textbf{Lowercasing:} Process of converting capital letters to lowercase (some NLP algorithms do not discriminate between lower- and uppercase).
\item \textbf{Lemmatization:} Process of reducing inflected words to their roots (lemmas).
\item \textbf{Stemming:} Process of removing prefixes or suffixes from a word to obtain its stem.
\end{itemize}
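To make these steps concrete, the pipeline above can be sketched in a few lines of Python. This is a toy illustration only: the stop-word list and the suffix-stripping rules below are simplified assumptions chosen for demonstration, not the pipeline of any reviewed article (real systems typically rely on curated stop-word lists and stemmers such as Porter's).

```python
import re

# Toy stop-word list (assumption for illustration; real lists are much larger).
STOP_WORDS = {"the", "a", "an", "of", "in", "and", "or", "to", "is", "are"}

def preprocess(text):
    """Toy pipeline: cleaning, lowercasing, tokenization,
    stop-word removal and naive suffix stemming."""
    # Cleaning: drop URLs, hashtags/mentions and punctuation.
    text = re.sub(r"https?://\S+|[#@]\w+", " ", text)
    text = re.sub(r"[^\w\s]", " ", text)
    # Lowercasing.
    text = text.lower()
    # Tokenization: split on whitespace.
    tokens = text.split()
    # Stop-word removal.
    tokens = [t for t in tokens if t not in STOP_WORDS]
    # Naive stemming: chop a few common English suffixes.
    def stem(token):
        for suffix in ("ing", "ed", "es", "s"):
            if token.endswith(suffix) and len(token) > len(suffix) + 2:
                return token[: -len(suffix)]
        return token
    return [stem(t) for t in tokens]

print(preprocess("Extremists are spreading #hate in the forums! http://example.org"))
# → ['extremist', 'spread', 'forum']
```

The naive suffix rules both over- and under-stem compared to a real stemmer; they are kept minimal so that the example remains self-contained.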
Table \ref{tab:pre-processing} shows all the reviewed articles that explicitly mention using one or more of these pre-processing techniques. Tokenization is an essential task in natural language processing, used to break up a string of words into semantically useful units called tokens, so all the articles, even those not explicitly mentioning it, perform this pre-processing step as part of their NLP process.
The rest of the pre-processing tasks may or may not be performed depending on the application to be carried out. As shown in Table \ref{tab:pre-processing}, most of the works reviewed apply cleaning processes to the texts to eliminate unnecessary characters and words, while few of them apply lowercasing. Regarding the transformation of words into their stems or lemmas, stemming is clearly the most used technique. The aim of both techniques is to reduce words to a common base form, but, as opposed to stemming, lemmatization does not simply chop off inflections: it uses lexical knowledge bases to obtain the correct base forms of words. It is also more computationally expensive, which may be the reason for its less frequent application.
\begin{table}[h]
\centering
\caption{NLP pre-processing techniques applied in reviewed articles.}
\begin{tabular}{|l|r|p{8cm}|}
\hline
\textbf{Pre-processing techniques} & \textbf{Percentage Use} & \textbf{Articles explicitly mentioning them} \\ \hline
Tokenization & 100\% & All the articles use tokenization. \\ \hline
Stop-words & 35.93\% &
\cite{masood15using,abddetecting2020,nouh2019understanding,johnston2017identifying,ahmad2019detection,sharif2019empirical,hartung2017identifying,mariconti2019you,o2012analysis,ben2016hate,saif2017semantic,mirani2016sentiment,rehman2021understanding,rekik2020recursive,sharif2020detecting,heidarysafa2020women,rekik2019violent,zahra2018framework,fernandez2018understanding,kinney2018theming,sabbah2015hybridized,alghamdi2012topic,bermingham2009combining} \\ \hline
Cleaning & 37.5\% &
\cite{abddetecting2020,nouh2019understanding,ahmad2019detection,hartung2017identifying,o2012analysis,saif2017semantic,mirani2016sentiment,rehman2021understanding,rekik2020recursive,sharif2020detecting,heidarysafa2020women,rekik2019violent,zahra2018framework,fernandez2018understanding,kinney2018theming,sabbah2015hybridized,ottoni2018analyzing,araque2020approach,gomes2017profiling,agarwal2015using,torregrosa2020analyzing,alizadeh2019psychology,bisgin2019analyzing,hall2020machines} \\ \hline
Lowercasing & 9.37\% &
\cite{zahra2018framework,fernandez2018understanding,nouh2019understanding,araque2020approach,hall2020machines,kursuncu2019modeling} \\ \hline
Lemmatization & 4.68\% &
\cite{ottoni2018analyzing,nouh2019understanding,figea2016measuring} \\ \hline
Stemming & 20.31\% &
\cite{bermingham2009combining,alghamdi2012topic,o2012analysis,sabbah2015hybridized,mirani2016sentiment,zahra2018framework,sharif2019empirical,mariconti2019you,masood15using,rehman2021understanding,figea2016measuring,stankov2010contemporary,yang2011social} \\ \hline
\end{tabular}%
\label{tab:pre-processing}
\end{table}
\subsection{Feature Extraction}
\label{FeatureGeneration}
After the pre-processing of the textual data, different text mining techniques are used to transform tokens into structured data by capturing their lexical, syntactic and semantic information. These structured data can eventually be used as input for different algorithms to acquire or infer new knowledge. Table \ref{NLP_technique_Summary} presents all the feature generation techniques found in the review, together with the articles that have applied them as part of their methodological approach. These techniques can be grouped into three different categories according to the type of linguistic information captured, which are explained in detail in the following subsections. A descriptive analysis of the techniques is conducted in each subsection; afterwards, a comparative analysis of these techniques is carried out within the area of extremism research, highlighting the advantages and disadvantages of each technique within this specific domain.
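As an illustration of the most common lexical weighting scheme in this category, TF-IDF, the sketch below computes term weights from scratch using the plain formulation TF = raw count / document length and IDF = log(N / document frequency). The three-document corpus is invented for demonstration purposes only.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """TF-IDF weights for every term in every document of a corpus.
    TF = raw count / document length; IDF = log(N / document frequency)."""
    tokenized = [doc.lower().split() for doc in corpus]
    n_docs = len(tokenized)
    # Document frequency: number of documents containing each term.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    weights = []
    for tokens in tokenized:
        counts = Counter(tokens)
        weights.append({term: (count / len(tokens)) * math.log(n_docs / df[term])
                        for term, count in counts.items()})
    return weights

# Invented toy corpus for demonstration.
docs = ["jihad propaganda online",
        "online recruitment propaganda",
        "weather report online"]
weights = tf_idf(docs)
# "online" occurs in every document, so its IDF (and thus its weight) is 0;
# "jihad" occurs in a single document, so it receives the highest weight.
```

Note that widely used implementations (e.g. scikit-learn's \texttt{TfidfVectorizer}) apply smoothed IDF and vector normalization, so their exact values differ from this plain formulation.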
\begin{table} [h]
\centering
\caption{Summary of NLP techniques for feature generation used in the articles reviewed.}
\begin{tabular}{|p{3.1cm}|p{2.5cm}|r|p{6cm}|}
\hline
\textbf{Approach} &
\textbf{NLP technique} &
\textbf{\begin{tabular}[c]{@{}l@{}} Percentage Use \end{tabular}} &
\textbf{Articles} \\ \hline
\multirow{6}{*}{Lexical or Vectorial} &
N-grams &
28.12\% &
\cite{de2020radical,rehman2021understanding,sharif2019empirical,kinney2018theming,masood15using,kim2017empirical,hartung2017identifying,saif2017semantic,ben2016hate,prentice2012language,rekik2019violent,rekik2020recursive,fernandez2018understanding,sharif2020detecting,abddetecting2020,kursuncu2019modeling,nouh2019understanding,hall2020machines} \\ \cline{2-4}
&
Dictionaries &
37.5\% &
\cite{scrivens2020measuring,alizadeh2019psychology,devyatkin2017exploring,mirani2016sentiment,saif2016role,bisgin2019analyzing,rowe2016mining,scanlon2015forecasting,gomes2017profiling,johnston2017identifying,johnston2020identifying,ottoni2018analyzing,hall2020machines,saif2017semantic,abdelzaher2019systematic,dillon2020comparison,klein2019online,owoeye2018classification,rekik2019violent,rekik2020recursive,torregrosa2020analyzing,wei2018detecting,fernandez2018understanding,smith2020detecting} \\ \cline{2-4}
& TF &
50\% &
\cite{abdelzaher2019systematic,agarwal2015using,ben2016hate,bisgin2019analyzing,chen2008sentiment,de2020radical,dillon2020comparison,figea2016measuring,hartung2017identifying,kinney2018theming,klein2019online,macnair2018changes,owoeye2018classification,owoeye2019classification,rekik2019violent,rekik2020recursive,rowe2016mining,scanlon2015forecasting,scrivens2015sentiment,scrivens2020measuring,torregrosa2020analyzing,wei2016identification,wei2018detecting,alizadeh2019psychology,fernandez2018understanding,smith2020detecting,bermingham2009combining,araque2020approach,kursuncu2019modeling,prentice2012language,alghamdi2012topic,devyatkin2017exploring,stankov2010contemporary} \\ \cline{2-4}
&
TF-IDF &
23.43\% &
\cite{alghamdi2012topic,ahmad2019detection,heidarysafa2020women,mariconti2019you,o2012analysis,rehman2021understanding,sabbah2015hybridized,sharif2019empirical,sharif2020detecting,yang2011social,zahra2018framework,abddetecting2020,kim2017empirical,masood15using,nouh2019understanding} \\ \cline{2-4}
&
Dichotomous appearance &
1.56\% &
\cite{wadhwa2015approach} \\ \cline{2-4}
&
Log-likelihood &
3.12\% &
\cite{stankov2010contemporary,prentice2012language} \\ \hline
\multirow{3}{*}{Neural Language Models} &
Word2Vec &
9.37\% &
\cite{abddetecting2020,araque2020approach,johnston2020identifying,kim2017empirical,kursuncu2019modeling,masood15using,nouh2019understanding,ottoni2018analyzing} \\ \cline{2-4}
&
FastText &
4.68\% &
\cite{ahmad2019detection,araque2020approach,devyatkin2017exploring} \\ \cline{2-4}
&
GloVe &
3.12\% &
\cite{araque2020approach,gomes2017profiling} \\ \hline
\multirow{8}{*}{Syntactic and Semantic} &
Part-of-speech &
25\% &
\cite{devyatkin2017exploring,owoeye2018classification,mariconti2019you,masood15using,wignell2018natural,macnair2018changes,figea2016measuring,skillicorn2015empirical,scrivens2016sentiment,scrivens2018searching,weir2016positing,de2020radical,owoeye2019classification,scrivens2015sentiment,sikos2014authorship,yang2011social} \\ \cline{2-4}
&
NER &
7.81\% &
\cite{bisgin2019analyzing,saif2017semantic,saif2016role,fernandez2018contextual,hartung2017identifying} \\ \cline{2-4}
&
LSF &
4.68\% &
\cite{kim2017empirical,masood15using,hartung2017identifying} \\ \cline{2-4}
&
Parse trees &
1.56\% &
\cite{sikos2014authorship} \\ \cline{2-4}
&
LDA &
15.62\% &
\cite{bisgin2019analyzing,scanlon2015forecasting,ottoni2018analyzing,hall2020machines,saif2017semantic,kursuncu2019modeling,heidarysafa2020women,alizadeh2019psychology,kinney2018theming,kim2017empirical} \\ \cline{2-4}
&
NMF &
4.68\% &
\cite{heidarysafa2020women,o2015down,o2012analysis} \\ \cline{2-4}
&
Sentiment scoring &
37.49\% &
\cite{wignell2018natural,chen2008sentiment,saif2017semantic,hartung2017identifying,masood15using,heidarysafa2020women,hall2020machines,owoeye2018classification,macnair2018changes,figea2016measuring,scrivens2016sentiment,scrivens2018searching,weir2016positing,owoeye2019classification,scrivens2015sentiment,mirani2016sentiment,rowe2016mining,dillon2020comparison,torregrosa2020analyzing,araque2020approach,scrivens2020measuring,wei2016identification,bermingham2009combining,ahmad2019detection} \\ \cline{2-4}
&
Semantic tagging &
12.50\% &
\cite{wignell2018natural,saif2017semantic,saif2016role,fernandez2018contextual,ottoni2018analyzing,devyatkin2017exploring,abdelzaher2019systematic,prentice2012language} \\ \cline{2-4}
&
Word/sentence length &
7.81\% &
\cite{stankov2010contemporary,yang2011social,sikos2014authorship,weir2016positing,scrivens2018searching} \\ \cline{2-4}
&
Use of emoticons &
3.12\% &
\cite{agarwal2015using,wei2016identification} \\ \cline{2-4}
&
Use of punctuation &
3.12\% &
\cite{sikos2014authorship,yang2011social} \\ \hline
\end{tabular}%
\label{tab:NLP_technique_Summary}
\end{table}
\subsubsection{Lexical or Vectorial Based Features}
\label{LexicalVectorial}
The tokens extracted in the pre-processing phase have to be transformed into more complex data structures representing the final textual features to be further processed. For this purpose, different text representation modeling techniques can be applied. The Vector Space Model (VSM) \cite{turney2010frequency} is one of the most widely used text representations in classical NLP approaches. The idea of the VSM is to represent each text or document in a collection as a point in a space (a vector in a vector space) based on the tokens extracted. After the tokenization process, the first step to generate this type of representation consists of defining the weighting technique used to compute the frequency of appearance of the tokens (terms) in a text. The articles reviewed mention several different techniques to generate this vector representation:
\begin{itemize}
\item \textbf{N-grams:} the pre-processing of free text yields tokens of size 1, each representing a single word. However, sentences generally contain compound terms (such as living room or coffee machine) formed by several words with a single meaning. Grouping multiple tokens together to represent that inherent meaning can be very beneficial for subsequent NLP tasks, and this is what n-gram models provide \cite{sidorov2012syntactic}. A uni-gram is any single element of the text, while a bi-gram or a tri-gram is composed of two or three elements, respectively, that appear sequentially in the text. A skip-gram is a special version of an n-gram: it works the same way, but considers tokens that are not necessarily juxtaposed in the text. Therefore, an analysis based on n-grams considers n elements as a single token. One of the main advantages of this approach is that large ``n'' sizes help provide context for words \cite{fortuna2018survey}. Table \ref{tab:ngram_type} summarizes the types of n-gram used in the articles reviewed; uni-grams are not shown since, as mentioned above, they are the 1-sized tokens already obtained through the pre-processing techniques.
\begin{table}
\centering
\caption{Type of n-gram used in the articles reviewed.}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|l|p{8cm}|}
\hline
\textbf{N-gram type} & \textbf{Percentage Use} &
\textbf{Articles using it} \\ \hline
Bi-gram & 15.62\% &
\cite{de2020radical,rehman2021understanding,sharif2019empirical,kinney2018theming,masood15using,kim2017empirical,hartung2017identifying,saif2017semantic,ben2016hate,prentice2012language} \\ \hline
Bi-gram + Tri-gram & 6.25\% &
\cite{rekik2019violent,rekik2020recursive,fernandez2018understanding,sharif2020detecting} \\
\hline
Bi-gram + Tri-gram + Skip-gram & 4.68\% &
\cite{abddetecting2020,kursuncu2019modeling,nouh2019understanding} \\
\hline
Tri-gram + Tetra-gram + Penta-gram & 1.56\% &
\cite{hall2020machines} \\ \hline
\end{tabular}%
}
\label{tab:ngram_type}
\end{table}
\item \textbf{Dictionaries:} use pre-established lists of lexicons (words or sentences) to filter or group the pre-processed tokens. Any term found inside the lexicon is considered a final token for generating the final text representation. Dictionaries can also group the frequency of terms as a whole token, thus computing the frequency of occurrence of the dictionary itself. The main advantage of dictionaries is that they capture concepts defined by different terms. However, they are also very vulnerable to words not previously included in the lexicon.
\item \textbf{Term frequency (TF):} is the most basic weighting technique in NLP, and consists of the raw count of the occurrences of each token found in the text. It can be represented as \textit{tf(t, d)}, which denotes the number of times token \textit{t} appears in document \textit{d}.
\item \textbf{Term Frequency - Inverse Document Frequency (TF-IDF):} is an evolution of the aforementioned TF. While TF just counts the frequency of occurrence of a token in a text, TF-IDF also weights it by the inverse of the frequency of occurrence of the word in the whole corpus. When a word is more frequent in a text than in the whole corpus, this word is relevant for that text, and therefore it receives a higher score \cite{chen2008using}. It is useful to discriminate between relevant words and words with no relevant meaning, such as stop-words \cite{fortuna2018survey}.
\item \textbf{Dichotomous appearance:} this technique does not consider the frequency of a token, but only its presence or absence. It is computed as 0 if the term does not appear, and 1 if it does.
\item \textbf{Log-likelihood:} this association metric \cite{dunning1993accurate} is used to compute the significance of the co-occurrence of two variables (for example, two tokens, or a token and the group used for classification). Therefore, this technique is not focused on the frequency of a single token, but on the frequency with which two conditions appear together, which may involve one or two tokens.
\end{itemize}
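To make the weighting schemes above concrete, the following is a minimal, stdlib-only Python sketch of n-gram extraction, raw TF and TF-IDF. The toy corpus and all function names are illustrative and not taken from any reviewed article; real systems would typically rely on an NLP library.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) from a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tf(tokens):
    """Raw term frequency: token -> number of occurrences in the document."""
    return Counter(tokens)

def tf_idf(corpus):
    """TF-IDF per document: tf(t, d) * log(N / df(t)), where df(t) is the
    number of documents containing token t and N the corpus size."""
    n_docs = len(corpus)
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    return [
        {t: f * math.log(n_docs / df[t]) for t, f in tf(doc).items()}
        for doc in corpus
    ]

docs = [
    ["the", "holy", "war", "begins"],
    ["the", "war", "on", "terror"],
]
print(ngrams(docs[0], 2))   # bi-grams of the first document
scores = tf_idf(docs)
# "the" and "war" occur in both documents, so their idf = log(2/2) = 0
print(scores[0]["the"], scores[0]["holy"])
```

Note how tokens shared by every document ("the", "war") are zeroed out by the IDF factor, which is precisely the stop-word discounting effect described above.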
Focusing on Table \ref{tab:NLP_technique_Summary}, the first point to note is the heavy use of the n-gram and dictionary techniques, exceeding 25\% in both cases. This is because the text pre-processing phase yields tokens of size 1 representing the text, and in many cases, before applying more complex techniques that transform such tokens into complex data structures, it is beneficial to apply some basic NLP techniques. These techniques allow the tokens to be grouped or filtered, adding a first level of lexical information.
The major advantage of the n-gram approach is that it is independent of any predefined lexicon. This means that all the text can be vectorized using these techniques, whether or not its terms appear in a lexicon. This is especially useful when applying NLP to extremism research, as texts usually combine terms in different languages. However, this versatility also has a handicap: the terms vectorized may have no relevant meaning for the researcher, and therefore extra work must be conducted to identify which terms are relevant.
On the other hand, the use of dictionaries is helpful to detect and classify tokens into meaningful psycho-linguistic categories \cite{fernandez2018understanding,figea2016measuring}. This is a great advantage in the field of extremism research, given the psychological background that motivates extremist behaviour. In fact, one of the main dictionary-based tools, LIWC, was created with the aim of conducting psychological research on texts, and has frequently been applied to extract psychological insights and extremist slang from extremist texts \cite{torregrosa2020linguistic}. However, dictionaries require a previous effort from the researchers to prepare the lexicons or to adapt them to other languages \cite{sikos2014authorship}. This last point is especially relevant in the case of jihadi extremism, as texts usually combine Islamic terminology (written in Arabic) with different languages \cite{sikos2014authorship}.
Continuing with the analysis of the Vector Space Models applied in the reviewed articles, TF and TF-IDF are the most used techniques. The second is an evolution of the first, using IDF to discount common terms of the language that are not relevant for categorizing texts (in this case, extremist content). Taking into consideration that several articles in the review apply filtering pre-processing techniques to eliminate irrelevant terms (such as stop-words), there is not a huge difference between them within the extremism research field. The main advantage of these techniques is their simplicity and broad use, which makes them the most commonly applied techniques. However, they have the major disadvantage of not providing semantic information about the terms.
Dichotomous appearance was only used in one article. While it presents a clear advantage (it is quite easy to implement), it has one main disadvantage: as stated in the previous section, some terms are used with different semantic meanings in regular and extremist texts \cite{fernandez2018contextual,gomes2017profiling,saif2016role,wei2018detecting}. Analyzing only the appearance of a term can therefore be poorly informative for the researcher. Finally, log-likelihood can be used to analyse the association between terms, which provides more contextual information, but it is still rarely used within this field of study.
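The log-likelihood association score of \cite{dunning1993accurate} can be sketched directly from a 2x2 contingency table of co-occurrence counts. The counts in the example below are invented for illustration:

```python
import math

def log_likelihood_ratio(k11, k12, k21, k22):
    """Dunning's G^2 statistic for a 2x2 contingency table of counts.

    k11: both conditions occur together; k12/k21: one occurs without the
    other; k22: neither occurs.  A higher G^2 means a stronger association
    between the two conditions (e.g. two tokens co-occurring).
    """
    n = k11 + k12 + k21 + k22
    row1, row2 = k11 + k12, k21 + k22
    col1, col2 = k11 + k21, k12 + k22
    g2 = 0.0
    for obs, exp in [
        (k11, row1 * col1 / n),
        (k12, row1 * col2 / n),
        (k21, row2 * col1 / n),
        (k22, row2 * col2 / n),
    ]:
        if obs > 0:                     # 0 * log(0) is taken as 0
            g2 += obs * math.log(obs / exp)
    return 2.0 * g2

# e.g. two tokens co-occurring in 30 of 10,000 context windows
print(log_likelihood_ratio(30, 70, 120, 9780))
```

When the observed counts match the expected counts under independence, the score is zero; the further the co-occurrence departs from independence, the larger the score.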
A brief summary of the advantages and disadvantages of all these techniques appears in Table \ref{tab:token_featurization_comparison}:
\begin{table}[h]
\centering
\caption{Comparison of Vector Space Model based techniques used to generate features in the articles reviewed.}
\begin{tabular}{|c|p{5.3cm}|p{5.3cm}|}
\hline
\textbf{Technique} &
\multicolumn{1}{c|}{\textbf{Advantages}} &
\multicolumn{1}{c|}{\textbf{Disadvantages}} \\ \hline
N-grams &
-Able to keep semantic information. &
-Captures only basic semantic information. \\
&
-High versatility, due to its independence from any lexicon (useful for multi-language texts). &
-The tokens detected may not be of interest to the researcher. \\ \hline
Dictionaries &
-Useful to conduct meaningful psycho-linguistic analysis. &
-Low versatility (vulnerable to changes in the language and word structure). \\
\multicolumn{1}{|l|}{} &
-Useful to detect and classify specific slang and terminology. &
-Highly dependent on the lexicons included. \\ \hline
TF/TF-IDF &
-Simple and widely used.&
-Does not capture semantic context information. \\ & & -TF needs previous stop-word filtering. \\ \hline
Dichotomous appearance &
-The simplest technique. &
-Does not capture semantic context information. \\ \hline
Log Likelihood &
-Captures information of association among terms. &
-Rarely applied in the area. \\ \hline
\end{tabular}%
\label{tab:token_featurization_comparison}
\end{table}
\subsubsection{Neural Language Models (Word embedding)}
\label{embeddings}
Techniques based on neural models comprise a set of methods that transform the tokens obtained in the pre-processing phase into meaningful vectors through the use of neural networks, capturing the relationships among them \cite{levy2014dependency} and, therefore, information about semantically related words. In recent years, the application of these models in the field of extremism research has gained relevance, as they are useful for preserving information about the semantic meaning of terms. This is, precisely, the advantage of this type of model for extracting textual features compared with the classical models seen in the previous section. This aspect is especially relevant for classification tasks and the use of deep learning to identify extremist content \cite{johnston2020identifying,johnston2017identifying}. The most common neural models found in the reviewed articles are:
\begin{itemize}
\item \textbf{Word2Vec:} predicts words depending on their context, maintaining the semantic meaning of the sentence. To do so, the model creates a vector for each word through a single-layer neural network, which can be interpreted as a space. Words that are more likely to appear together in the text will appear closer in that space, therefore sharing semantic context \cite{mikolov2013efficient}. Among the different versions of this technique, the Continuous Bag-of-Words model and the Skip-Gram model are the most commonly used \cite{goldberg2014word2vec,rong2014word2vec}.
\item \textbf{FastText:} a technique developed by Facebook \cite{bojanowski2017enriching} that works similarly to Word2Vec's skip-gram, but overcomes a limitation of that model: it incorporates subwords in the embedding process, and can therefore represent words not contained in the original text lexicon \cite{schmitt2018joint}.
\item \textbf{GloVe:} standing for Global Vectors for Word Representation, this technique was developed at Stanford \cite{pennington2014glove}, and relies on a word co-occurrence matrix to which factorization techniques are applied to extract the vectors associated with each word. While Word2Vec appears to perform better than this technique, GloVe has the advantage of offering more pre-trained models to work with \cite{mikolov2017advances}.
\end{itemize}
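The skip-gram variant of Word2Vec trains a shallow network to predict a word's neighbours; the part that can be shown without any neural-network library is the generation of the (target, context) training pairs. The following is a minimal stdlib sketch over a toy sentence (the sentence and function name are illustrative only):

```python
def skipgram_pairs(tokens, window=2):
    """(target, context) training pairs as used by the skip-gram objective:
    each word is paired with every neighbour within `window` positions."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

sentence = ["join", "the", "holy", "war"]
for target, context in skipgram_pairs(sentence, window=1):
    print(target, "->", context)
```

A larger window produces more distant context pairs, trading syntactic precision for broader topical similarity, which is one of the tuning decisions behind the embeddings used in the reviewed articles.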
Analysing the application of these approaches in the reviewed articles on extremism, three different purposes can be identified: bias analysis (how pejorative terms are related to some entities and not to others) \cite{ottoni2018analyzing}, checking how two texts use similar tokens but with different meanings \cite{kursuncu2019modeling,gomes2017profiling}, and creating new lexicons based on an already validated text \cite{araque2020approach,nouh2019understanding}. Another advantage of these techniques, beyond the variety of applications they have, is that they can be used to overcome language limitations in extremism detection \cite{johnston2017identifying}.
Regarding the frequency of use of these techniques in the field of extremism, as shown in Table \ref{tab:NLP_technique_Summary}, the most used technique (Word2Vec) does not reach 10\%, a value much lower than that of most of the classical techniques based on vector space models. This is because this type of approach has become important only in the last few years, and its application in the field of extremism is currently being extended.
Only one article reported a comparison among FastText, Word2Vec and GloVe on an extremism classification task, in which FastText performed slightly better than the other two. However, Word2Vec and its variations (doc2vec, graph2vec, etc.) still remain the most used word embedding techniques. Table \ref{tab:comparison-word-embedding} summarizes the advantages and disadvantages of these techniques in the context of extremism research.
\begin{table}[h]
\centering
\caption{Comparison of neural model based techniques to generate features used in the articles reviewed.}
\begin{tabular}{|l|p{6cm}|p{6cm}|}
\hline
\textbf{Technique} &
\textbf{Advantages} &
\textbf{Disadvantages} \\ \hline
Word2Vec &
-Allows predicting words depending on the context. &
-Does not recognize words not included in the trained lexicon (problematic in multilingual approaches). \\ \hline
\multirow{2}{*}{FastText} &
-Allows incorporating words not contained in the trained lexicon. & -Rarely applied in the area. \\ \hline
GloVe &
-High number of pre-trained models to work with. & -Rarely applied in the area. \\ \hline
\end{tabular}%
\label{tab:comparison-word-embedding}
\end{table}
\subsubsection{Syntactic and Semantic Features}
\label{sytactic_semantic}
Other NLP techniques generate features representing the text by analysing the data according to a particular context \cite{krippendorff2018content}. The type of contextual information assessed depends on the NLP technique applied, but common approaches include sentiment analysis, topic detection and semantic analysis, among others. Techniques of this type used by the reviewed articles include:
\begin{itemize}
\item \textbf{Part-of-Speech (POS):} tags every word with its grammatical category (e.g. noun, verb or adjective) depending on the structure of the text where it is found \cite{cutting1992practical}.
\item \textbf{Lexical Syntactic Feature-based (LSF):} captures the dependence between two terms inside a sentence or a text \cite{benito2019design}. These two terms are later compared to determine the context and the direction of the expression.
\item \textbf{Named Entity Recognition (NER):} deals with the identification of entities (e.g. names, organizations or locations) in the text, tagging them as relevant subjects \cite{ritter2011named}.
\item \textbf{Parse trees (PT):} construct a representation of how concepts can be used recursively in a sentence. Parse trees include all the tokens and their relationships, along with a set of rules that allow a token to be substituted while maintaining the syntactic rules.
\item \textbf{Latent Dirichlet Allocation (LDA):} is one of the most popular topic detection techniques in Natural Language Processing. It extracts topics from a corpus of text based on word probabilities: for each latent topic, it extracts the probability distribution of a combination of words, which helps to identify the main topics \cite{jelodar2019latent}.
\item \textbf{Non-Negative Matrix Factorization (NMF):} is a topic modeling technique that relies on the use of linear algebra algorithms on a TF-IDF document matrix to define topics \cite{chen2019experimental}.
\item \textbf{Sentiment Scoring (SS):} provides a score for every text unit (e.g. sentence or text) based on its latent emotional valence, with the aim of understanding the author's opinion or emotional state about something \cite{feldman2013techniques}. This score can be computed as dimensional (a single score for the valence) or categorical (specifying which emotions are expressed in the text). Table \ref{tab:sentiment_type} summarizes how both approaches are distributed among the reviewed articles:
\begin{table}[h]
\caption{Type of sentiment analysis approach used in the reviewed articles on extremism.}
\centering
\begin{tabular}{|l|l|p{7cm}|}
\hline
\textbf{Sentiment analysis approach} & \textbf{Percentage Use} & \textbf{Articles using it} \\ \hline
Sentiment scoring (dimensional) & 32.81\% &
\cite{wignell2018natural,owoeye2018classification,scrivens2015sentiment,hall2020machines,chen2008sentiment,macnair2018changes,figea2016measuring,scrivens2016sentiment,scrivens2018searching,weir2016positing,owoeye2019classification,mirani2016sentiment,rowe2016mining,dillon2020comparison,torregrosa2020analyzing,scrivens2020measuring,wei2016identification,bermingham2009combining,masood15using,saif2017semantic,ahmad2019detection} \\ \hline
Emotion scoring (categorical) & 9.37\% &
\cite{wignell2018natural,chen2008sentiment,heidarysafa2020women,araque2020approach,hartung2017identifying,ahmad2019detection} \\ \hline
\end{tabular}%
\label{tab:sentiment_type}
\end{table}
\item \textbf{Semantic tagging (ST):} the process of automatically extracting concepts, entities or topics from the tokens in a text \cite{jovanovic2014automated}.
\item \textbf{Word/sentence length:} analyses the length of words (in characters) and/or sentences (in words) \cite{stankov2010contemporary,yang2011social,sikos2014authorship,weir2016positing,scrivens2018searching}.
\item \textbf{Use of emoticons:} emoticons are graphical figures built from combinations of characters to express emotions or behaviours in the text \cite{agarwal2015using,wei2016identification}.
\item \textbf{Use of punctuation:} this approach analyses the use of punctuation signs as part of the syntactic structure of the sentence \cite{sikos2014authorship,yang2011social}.
\end{itemize}
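A lexicon-based sentiment scorer illustrates the dimensional vs. categorical distinction introduced above. The tiny \texttt{VALENCE} and \texttt{EMOTIONS} lexicons below are hypothetical stand-ins for real resources such as LIWC or SentiWordNet:

```python
# Hypothetical toy lexicons; real studies use resources such as LIWC.
VALENCE = {"victory": 1.0, "peace": 0.8, "attack": -0.9, "hate": -1.0}
EMOTIONS = {"attack": "anger", "hate": "anger", "fear": "fear",
            "victory": "joy"}

def dimensional_score(tokens):
    """Dimensional approach: a single valence score for the whole text,
    here the mean valence of the matched tokens."""
    hits = [VALENCE[t] for t in tokens if t in VALENCE]
    return sum(hits) / len(hits) if hits else 0.0

def categorical_score(tokens):
    """Categorical approach: counts of the emotions expressed in the text."""
    counts = {}
    for t in tokens:
        if t in EMOTIONS:
            emo = EMOTIONS[t]
            counts[emo] = counts.get(emo, 0) + 1
    return counts

text = ["they", "hate", "us", "attack", "now"]
print(dimensional_score(text))   # negative overall valence
print(categorical_score(text))   # anger counted twice
```

The dimensional variant collapses the text into one number, while the categorical variant keeps a per-emotion breakdown, which matches the two rows of Table \ref{tab:sentiment_type}.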
These types of techniques go a step further in the representation of texts, taking advantage of the tokens to conduct more complex analyses. This is especially useful in a field such as extremism research, in which simple token use or frequency can be misleading in the interpretation of outcomes \cite{fernandez2018contextual}.
The first four techniques mentioned, POS, NER, LSF and PT, are used to analyze, tag and extract information about the syntactic structure underlying the tokens. POS and NER are used to identify the nouns and entities present in the text. This information is then used to determine which nouns in the text are actual people, organizations or locations \cite{hartung2017identifying,saif2017semantic,saif2016role,fernandez2018contextual,bisgin2019analyzing}, among others. In particular, according to the articles reviewed, the NER technique shows that using a combination of noun semantic categories was statistically more accurate at determining whether a text included extremist content than using token analysis, sentiment or topic features \cite{saif2017semantic,saif2016role}. Analyzing the frequency of application of these four techniques in the field of extremism, shown in Table \ref{tab:NLP_technique_Summary}, it can be noticed that the most commonly used technique is POS with 25\%, while the rest are rarely used in comparison.
On the other hand, LSF and PT take into consideration the syntax and the dependencies among tokens. In this case, LSF analyzes the relationship between two syntactically dependent tokens \cite{kim2017empirical,masood15using}, while parse trees build representations of several tokens and use their syntactic structure to find tokens combined in the same way \cite{sikos2014authorship}. LSF was compared with Vector Space Models as a classification feature, but it did not perform better than the latter \cite{hartung2017identifying}.
Concerning topic extraction, LDA and NMF have been the main techniques used in the reviewed articles. LDA has the advantage of relying on a statistical basis and of being commonly used in the literature \cite{heidarysafa2020women}. However, as one study states \cite{alizadeh2019psychology}, it performs poorly with short texts (e.g. tweets). Taking into account that most of the articles reviewed use Twitter to extract their extremist datasets, this is an important disadvantage. NMF appears as an alternative to LDA, as it seems to present more readily interpretable results \cite{o2012analysis,o2015down} and to perform better on short texts \cite{chen2019experimental}, although in the reviewed articles it is used much less frequently (see Table \ref{tab:NLP_technique_Summary}).
Adding an ``emotional value'' to a topic can help form a representative idea of the author's agreement with that topic \cite{bermingham2009combining,scrivens2018searching}. For example, two studies focused on the Arabic general population found that Twitter users' tone was more negative when ISIS committed a murder, won a battle or made a public call or movement \cite{mirani2016sentiment,ceron2019isis}. Sentiment scoring techniques are divided into two different approaches: a dimensional approach, based on a single score, and a categorical approach, based on the classification of tokens into one or more emotions (such as anger, fear or happiness). A combination of both strategies can be found in some of the articles reviewed \cite{wignell2018natural,figea2016measuring}. These techniques can be used to measure the emotions expressed in the text, together with the opinion of the writer towards a specific token in the text \cite{bakshi2016opinion}. The main difference between them is their theoretical approach, but also how they are applied: dimensional scoring usually involves selecting a token, around which the scoring process takes place. On the other hand, categorical scoring usually classifies tokens depending on the emotion they represent, and is therefore more focused on single tokens. In the case of extremism research, both approaches can be useful, as they can be used to identify how extremist texts approach different topics \cite{wignell2018natural,macnair2018changes}, the valence of their tone \cite{wei2016identification} or the connotations of the terms they use \cite{chen2008sentiment}. Finally, semantic tagging was used in the reviewed articles to tag tokens with semantic information regarding their context. This strategy, very similar to NER (and sometimes using it), tags the tokens with entities, but also with concepts and categories \cite{wignell2018natural}.
Focusing on the use of this type of technique in the reviewed articles, Table \ref{tab:NLP_technique_Summary} shows that sentiment analysis techniques are the most used among those extracting syntactic and semantic features, exceeding 37\% in the case of sentiment scoring.
The last three techniques focus on the analysis of text formatting characteristics, to build features that capture information beyond that provided by the text itself: for example, the length and quantity of texts, sentences or words, the number of characters in a word, or the use of punctuation or emoticons. In all cases, text characteristic features have been used as a complement to other text features, never as the single feature extracted from the free text. Moreover, they have shown little power to describe or predict extremism in texts, and in general all of them are applied in few of the reviewed works (as can be seen in the last three rows of Table \ref{tab:NLP_technique_Summary}).
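Such text-formatting features are straightforward to compute. The sketch below is a minimal stdlib illustration; the emoticon regular expression is a simplified stand-in for a real emoticon inventory:

```python
import re
import string

def formatting_features(text):
    """Stylistic features used as complements to content features:
    word/sentence length, punctuation and emoticon counts."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "avg_sentence_len": len(words) / len(sentences),
        "punctuation": sum(c in string.punctuation for c in text),
        # simplified pattern covering emoticons like :) ;-( :D :P
        "emoticons": len(re.findall(r"[:;]-?[\)\(DP]", text)),
    }

print(formatting_features("We will win!!! Join us now :)"))
```

Each value would typically be appended to a content-based feature vector rather than used on its own, as noted above.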
Table \ref{tab:comparison_content_analysis} presents a summary of all the techniques used to generate syntactic and semantic features, showing their advantages and disadvantages both in general and in the extremism literature.
\begin{table}[h]
\centering
\caption{Comparison of Syntactic and Semantic based techniques to generate features for text representation.}
\begin{tabular}{|l|p{6cm}|p{6cm}|}
\hline
\multicolumn{1}{|c|}{\textbf{Technique}} &
\multicolumn{1}{c|}{\textbf{Advantages}} &
\multicolumn{1}{c|}{\textbf{Disadvantages}} \\ \hline
\multirow{2}{*}{POS} &
-Allows detecting the grammatical category of tokens. &
-Regarding nouns, not as informative as NER. \\
&
-Widely used in the area with different applications (term disambiguation or classification). &
\\ \hline
NER &
-Detects entities, categorizing them. Useful to identify the main actors in an extremist discourse. &
-Not as widespread as POS; limited to nouns and to a trained lexicon. \\ \hline
LSF &
-Provides a meaningful relationship among tokens. &
-Does not perform better than simpler features in applications within the area. \\ \hline
PT &
-Finds sentences with a grammatically similar structure. &
-Does not inform about the tokens themselves. Not commonly used in the extremism literature. \\ \hline
\multirow{2}{*}{LDA} &
-Widely used on extremism research. &
-Performs poorly on short texts, such as tweets (widely used in extremism analysis). \\
&
-Performs closer to a human topic classifier than other techniques. &
-Tends to over-generalize topics. \\ \hline
NMF &
-Alternative for LDA showing a good performance on short texts. &
-Not commonly used by authors, who tend to use LDA. \\ \hline
\multirow{2}{*}{SS (Dim.)} &
-Simple way of measuring a sentence's emotional value. &
-Does not provide elaborate information about emotions in the sentence. \\
&
-Useful to detect opinions, especially when combined with the detection of entities in the radical discourse. &
\\ \hline
SS (Cat.) &
-Provides information about emotions in the sentence, tagging tokens and sentences with emotional categories (Happiness, sadness, anger...) &
-Not so useful to detect opinions or tone towards a token. \\ \hline
ST &
-As an evolution of NER, this approach ``tags'' nouns with their entity, concept and category, making it possible to disambiguate a word thanks to its context, which is very valuable in extremism research. &
\\ \hline
Text formatting &
-Captures information beyond that provided by the text itself. &
-Has to be used as a complement to other text features. \\ \hline
\end{tabular}%
\label{tab:comparison_content_analysis}
\end{table}
\section{Applications of NLP in extremism research}
\label{Aplications}
The previous section detailed all the NLP techniques used in the reviewed works on extremism to process data in text form and generate features as structured data. Depending on the objectives to be achieved in each of the reviewed works, one or several of these generated features are used to acquire new knowledge. In general, two main purposes have been identified in the reviewed papers:
\begin{itemize}
\item As features for classification models generated with machine learning algorithms to discriminate between extremist and non-extremist content.
\item To conduct a descriptive analysis characterizing extremism: for example, detecting specific extremist slang.
\end{itemize}
Based on these two main approaches, next subsections present a descriptive and comparative analysis of the works that apply each one, highlighting their outcomes.
\subsection{Classification approaches}
\label{Machine}
As can be derived from the general analysis of the reviewed articles presented in Section \ref{general}, classification was one of the main topics of interest regarding NLP applications on extremism. This is unsurprising, as one of the key objectives of this research field is to help Law Enforcement Agencies identify extremist content. More than half of the articles included in the review (54.68\%) applied one or more classification algorithms, especially during the first years of ISIS activity. As shown in Fig. \ref{fig:methodological}, 2015 and 2018 were the only years after the beginning of ISIS activity in which articles not using classification techniques outnumbered those using them. The common use of classification approaches shows that there was a greater interest in detecting extremism than in defining it.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{Images/Machine_Learning_Year}
\caption{Frequency of articles using classification techniques vs those not using them.}
\label{fig:methodological}
\end{figure}
With the objective of training classification models based on NLP features to discriminate between extremist and non-extremist content, different Machine Learning (ML) algorithms have been applied in the reviewed works. These works use ML approaches to address tasks ranging from sentiment tagging (using a pre-labelled dataset) to user classification (extremist vs non-extremist). Fig. \ref{fig:machine_learning} shows the frequency of application of every ML algorithm found in the articles reviewed, where it can be seen that Support Vector Machine (SVM) is the most commonly used model, followed by Random Forest, Naïve Bayes and Decision Tree (J48).
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{Images/Machine_Learning}
\caption{Type of Machine Learning algorithm used among the articles}
\label{fig:machine_learning}
\end{figure}
Concerning the models used by each article, Table \ref{tab:ML_algorithm} summarizes the kinds of Machine Learning algorithms used by all the articles including classification tasks. It also highlights the NLP features directly (or indirectly) involved in the generation of the classification models.
Apart from these classification tasks, five articles conducted other predictive learning tasks. These include predicting how the radicalization process takes place \cite{fernandez2018understanding}, how extremist behavioral changes occur among the members of a group \cite{smith2020detecting}, the daily level of online recruitment activities conducted by extremist groups \cite{scanlon2015forecasting}, the risk of a video being raided by extremist groups \cite{mariconti2019you} and the risk of pro-ISIS terms becoming part of a person's vocabulary \cite{rowe2016mining}.
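To make the feature pipeline behind these classifiers concrete, the following is a minimal sketch of TF-IDF weighting, the most frequently used feature in Table \ref{tab:ML_algorithm}. The toy corpus and tokenisation are invented for illustration; real studies feed such vectors into SVM, Random Forest or Naïve Bayes models.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weights for a list of tokenised documents.

    tf  = raw term count within the document;
    idf = log(N / df), where df is the number of documents
          containing the term.
    """
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({term: count * math.log(n_docs / df[term])
                        for term, count in tf.items()})
    return vectors

# Hypothetical three-document corpus (tokens already lowercased/split).
corpus = [
    "join the fight brothers".split(),
    "the weather is nice today".split(),
    "brothers unite and fight".split(),
]
vecs = tfidf_vectors(corpus)
# Terms shared across documents ("the", "fight", "brothers") receive
# lower weights than terms unique to one document ("join", "unite"),
# which is what makes TF-IDF useful for discriminating content.
```

Each document becomes a sparse mapping from term to weight, which can then be aligned into a fixed-length vector over the full vocabulary before training a classifier.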
\begin{table}[!h]
\vspace{-0.1cm}
\caption{Type of features input to the ML models employed in the reviewed articles (SVM: Support Vector Machine, KNN: K-Nearest Neighbors, NB: Naïve Bayes, RF: Random Forest, Log R: Logistic Regression, LMM: Linear Mixed Models, RNN: Recurrent Neural Networks, CNN: Convolutional Neural Networks, FCNN: Fully-Connected Neural Networks, SGD: Stochastic gradient descent).}
\vspace{-0.1cm}
\footnotesize
\resizebox{0.9\columnwidth}{!}{\begin{tabularx}{\textwidth}{@{}X@{\hphantom{0}}X@{\hphantom{1}}X@{\hphantom{1}}X@{\hphantom{1}}X@{\hphantom{1}}X@{\hphantom{1}}l@{\hphantom{1}}X@{\hphantom{1}}l@{\hphantom{1}}X@{\hphantom{1}}X@{\hphantom{1}}X@{\hphantom{1}}l@{\hphantom{1}}l@{}}
\toprule
\multirow{2}{*}{\textbf{\shortstack[l]{ML\\method}}} &
\multicolumn{13}{c}{\textbf{Features}} \\
& N-grams & Dic. & TF-IDF & TF & POS &
NER &
LSF &
PT &
SS &
LDA &
Emb. &
ST &
Others \\ \midrule
\multirow{5}{*}{SVM} &
\cite{hartung2017identifying,masood15using,saif2017semantic,rehman2021understanding,sharif2019empirical,abddetecting2020} &
\cite{figea2016measuring,sikos2014authorship,yang2011social,agarwal2015using,rehman2021understanding} &
\cite{yang2011social,rehman2021understanding,sharif2019empirical,masood15using,kim2017empirical,abddetecting2020} &
\cite{hartung2017identifying,masood15using,scanlon2015forecasting,ahmad2019detection,devyatkin2017exploring,kim2017empirical,mirani2016sentiment,chen2008sentiment,figea2016measuring,wei2016identification,agarwal2015using,araque2020approach,fernandez2018contextual} &
\cite{figea2016measuring,sikos2014authorship,yang2011social,devyatkin2017exploring} &
\cite{hartung2017identifying,yang2011social} &
\cite{hartung2017identifying,masood15using,kim2017empirical} &
\cite{sikos2014authorship} &
\cite{figea2016measuring,mirani2016sentiment,wei2016identification,masood15using,saif2017semantic,yang2011social,ahmad2019detection,araque2020approach,hartung2017identifying} &
\cite{saif2017semantic,scanlon2015forecasting,kim2017empirical} &
\cite{araque2020approach,masood15using,devyatkin2017exploring,kim2017empirical,abddetecting2020} &
\cite{saif2017semantic,fernandez2018contextual,devyatkin2017exploring} &
\cite{sikos2014authorship,yang2011social} \\ \midrule
\multirow{2}{*}{KNN} &
\cite{sharif2019empirical,abddetecting2020} &
\cite{agarwal2015using} &
\cite{sharif2019empirical,sharif2020detecting,abddetecting2020} &
\cite{ahmad2019detection,wei2016identification,agarwal2015using} &
&
&
&
&
\cite{wei2016identification,ahmad2019detection} &
&
&
&
\\ \midrule
\multirow{5}{*}{NB} &
\cite{masood15using,rehman2021understanding,sharif2019empirical,sharif2020detecting,abddetecting2020} &
\cite{yang2011social,rehman2021understanding,fernandez2018understanding} &
\cite{yang2011social,zahra2018framework,rehman2021understanding,sharif2019empirical,masood15using,abddetecting2020} &
\cite{masood15using,scanlon2015forecasting,ahmad2019detection,saif2016role,devyatkin2017exploring,sharif2020detecting,wei2016identification,fernandez2018understanding,kursuncu2019modeling,fernandez2018contextual} &
\cite{yang2011social,devyatkin2017exploring} &
\cite{yang2011social} &
\cite{masood15using} &
&
\cite{wei2016identification,masood15using,yang2011social,ahmad2019detection} &
\cite{scanlon2015forecasting} &
\cite{masood15using,devyatkin2017exploring,kursuncu2019modeling,abddetecting2020} &
\cite{saif2016role,fernandez2018contextual,devyatkin2017exploring} &
\cite{yang2011social} \\ \midrule
\multirow{2}{*}{Boosting} &
&
&
&
\cite{scanlon2015forecasting,devyatkin2017exploring} &
&
&
&
&
&
\cite{scanlon2015forecasting} &
\cite{devyatkin2017exploring} &
\cite{devyatkin2017exploring} &
\\ \midrule
\multirow{4}{*}{J48} &
\cite{sharif2019empirical,rekik2020recursive,sharif2020detecting,abddetecting2020} &
\cite{fernandez2018understanding} &
\cite{sharif2019empirical,sharif2020detecting,masood15using,abddetecting2020} &
\cite{sharif2020detecting,mirani2016sentiment,owoeye2019classification,rekik2020recursive,owoeye2018classification,fernandez2018understanding,fernandez2018contextual} &
\cite{owoeye2018classification} &
&
&
&
\cite{owoeye2018classification,scrivens2016sentiment,weir2016positing,owoeye2019classification,mirani2016sentiment} &
&
\cite{abddetecting2020} &
\cite{fernandez2018contextual} &
\cite{weir2016positing} \\ \midrule
\multirow{5}{*}{RF} &
\cite{masood15using,de2020radical,rehman2021understanding,sharif2019empirical,sharif2020detecting,abddetecting2020,nouh2019understanding} &
\cite{figea2016measuring,rehman2021understanding,nouh2019understanding} &
\cite{ahmad2019detection,mariconti2019you,rehman2021understanding,sharif2019empirical,sharif2020detecting,abddetecting2020,nouh2019understanding} &
\cite{masood15using,mariconti2019you,ahmad2019detection,devyatkin2017exploring,sharif2020detecting,mirani2016sentiment,figea2016measuring,de2020radical,kursuncu2019modeling} &
\cite{figea2016measuring,devyatkin2017exploring,de2020radical} &
&
\cite{masood15using} &
&
\cite{figea2016measuring,weir2016positing,mirani2016sentiment,masood15using,ahmad2019detection,nouh2019understanding} &
&
\cite{masood15using,devyatkin2017exploring,kursuncu2019modeling,abddetecting2020,nouh2019understanding} &
\cite{devyatkin2017exploring} &
\cite{weir2016positing,de2020radical} \\ \midrule
Adaboost &
&
\cite{figea2016measuring,yang2011social} &
\cite{yang2011social} &
\cite{figea2016measuring} &
\cite{figea2016measuring,yang2011social} &
\cite{yang2011social} &
&
&
\cite{figea2016measuring,yang2011social} &
&
&
&
\cite{yang2011social} \\ \midrule
\multirow{4}{*}{Log R} &
\cite{masood15using,sharif2020detecting,abddetecting2020} &
\cite{smith2020detecting,fernandez2018understanding} &
\cite{sharif2020detecting,masood15using,abddetecting2020} &
\cite{masood15using,devyatkin2017exploring,sharif2020detecting,wei2016identification,smith2020detecting,fernandez2018understanding,araque2020approach} &
\cite{devyatkin2017exploring} &
&
\cite{masood15using} &
&
\cite{wei2016identification,masood15using,araque2020approach} &
&
\cite{araque2020approach,masood15using,johnston2020identifying,devyatkin2017exploring,abddetecting2020} &
\cite{devyatkin2017exploring} &
\\ \midrule
LMM &
&
\cite{smith2020detecting} &
&
\cite{smith2020detecting} &
&
&
&
&
&
&
&
&
\\ \midrule
XGBoost &
&
&
\cite{kim2017empirical} &
&
&
&
\cite{kim2017empirical} &
&
&
\cite{kim2017empirical} &
\cite{kim2017empirical} &
&
\\ \midrule
Maximum Entropy &
&
&
&
\cite{mirani2016sentiment} &
&
&
&
&
\cite{mirani2016sentiment} &
&
&
&
\\ \midrule
Bagging &
&
&
&
\cite{mirani2016sentiment} &
&
&
&
&
\cite{mirani2016sentiment} &
&
&
&
\\ \midrule
RNN &
&
&
&
\cite{mariconti2019you} &
&
&
&
&
\cite{ahmad2019detection} &
&
\cite{johnston2020identifying,ahmad2019detection} &
&
\\ \midrule
CNN &
&
&
&
&
&
&
&
&
\cite{ahmad2019detection} &
&
\cite{ahmad2019detection} &
&
\\ \midrule
FCNN &
&
&
&
&
&
&
&
&
&
&
\cite{johnston2017identifying} &
&
\\ \midrule
Extra Random Trees &
&
&
\cite{mariconti2019you} &
\cite{mariconti2019you} &
&
&
&
&
&
&
&
&
\\ \midrule
Ensemble methods &
\cite{sharif2019empirical} &
&
\cite{sharif2019empirical} &
&
&
&
&
&
&
&
&
&
\\ \midrule
SGD &
\cite{sharif2020detecting} &
&
\cite{sharif2020detecting} &
\cite{sharif2020detecting} &
&
&
&
&
&
&
&
&
\\ \midrule
\end{tabularx}}%
\vspace{-0.1cm}
\label{tab:ML_algorithm}
\end{table}
Focusing on the use of basic features based on vectorial space models, such as n-grams and dictionaries (shown in Table \ref{tab:ML_algorithm}), the former \cite{bisgin2019analyzing,hartung2017identifying,kursuncu2019modeling,owoeye2018classification,rekik2019violent,scanlon2015forecasting,sharif2019empirical,zahra2018framework} have been used more often than the latter \cite{ahmad2019detection,araque2020approach,fernandez2018understanding,kursuncu2019modeling}.
It would be difficult to determine which of these two techniques performs better. In fact, the study of Figea et al. \cite{figea2016measuring} found no relevant difference between using text-dependent techniques (such as n-grams) and text-independent ones (such as LIWC) when creating a classification model. A general limitation of both techniques is that similar terms can be used with different meanings in two texts, leading to confusion during data interpretation \cite{saif2016role,fernandez2018contextual,wei2018detecting,gomes2017profiling}. This is common in the context of religious radicalization, where religious terms can appear in regular religious texts, but also in extremist texts \cite{gomes2017profiling}. While using n-grams (when $n > 1$) is a way to overcome this limitation, they are a primitive option for keeping semantic information \cite{hall2020machines,sharif2019empirical}. There are, however, techniques that are more informative for conducting complex NLP analysis. For example, n-grams were found to be less able than LDA or dictionaries to identify topics in radical texts \cite{hall2020machines}.
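As a minimal illustration of why n-grams with $n > 1$ retain some local context that unigrams lose, the sketch below extracts word n-grams from a sentence. The example sentence is hypothetical.

```python
def ngrams(tokens, n):
    """Return the list of word n-grams (as tuples) of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the islamic state released a statement".split()
unigrams = ngrams(tokens, 1)
bigrams = ngrams(tokens, 2)
# A unigram model loses the distinction between the entity
# "islamic state" and the standalone adjective "islamic"; the
# bigram ("islamic", "state") keeps that piece of local context,
# at the cost of a much larger and sparser feature space.
```

This sparsity is one reason n-grams remain a primitive way of keeping semantic information compared to topic models or dictionaries.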
Regarding sentiment features, they are not usually used as a single feature to detect extremist content, especially concerning political radicalisation \cite{scrivens2015sentiment}. While this type of feature does not perform badly, and in fact performs better than other less complex features \cite{ahmad2019detection}, classification models trained with more features usually perform better than those using only sentiment features \cite{weir2016positing,hartung2017identifying,saif2017semantic,owoeye2018classification,owoeye2019classification,araque2020approach}. In fact, classifiers based exclusively on semantic features performed better than those based exclusively on sentiment features \cite{saif2017semantic,araque2020approach}. For example, a study conducted by Weir et al. \cite{weir2016positing} compared the usefulness of two classification tools, one based on sentiment features and the other using POS features together with text formatting features such as number of sentences, average length or number of characters. The second showed better performance, but this could be due to the high number of features used in it. Three other articles \cite{sikos2014authorship,yang2011social,stankov2010contemporary} also used text formatting features together with other text features, as models to describe and classify extremist content. None showed a significant difference from classifiers that only use features extracting information from the text itself. However, several works conclude that text formatting features (such as sentence length \cite{yang2011social} or emoticons \cite{agarwal2015using,wei2016identification}) are a good add-on to improve the accuracy of classification models.
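The dimensional sentiment features discussed above can be sketched as a lexicon lookup: each word carries a valence score, and the sentence score is their sum. The miniature lexicon below is invented for illustration; the reviewed works rely on resources such as LIWC or SentiStrength with thousands of scored entries.

```python
# Hypothetical miniature valence lexicon (negative < 0 < positive).
LEXICON = {"hate": -3, "kill": -3, "enemy": -2, "peace": 2, "hope": 2}

def sentiment_score(text):
    """Dimensional sentiment: sum the valence of every lexicon word found.

    Words absent from the lexicon contribute 0, which is why such
    scores are coarse and usually combined with other features.
    """
    return sum(LEXICON.get(tok, 0) for tok in text.lower().split())

# A negative total suggests a hostile tone; a positive one the opposite.
print(sentiment_score("we hate the enemy"))   # negative
print(sentiment_score("hope and peace"))      # positive
```

The single scalar output explains both the appeal of the approach (cheap, interpretable) and its limitation: it says nothing about which entity the tone is directed at.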
Finally, the best classification outcomes are achieved using features based on Neural Language Models (word embeddings). Articles using this type of textual representation as a classification feature found that it tends to perform better than other classical features such as vectorial space models \cite{devyatkin2017exploring,kursuncu2019modeling,masood15using}, or syntactic and semantic features \cite{kim2017empirical,araque2020approach}. One article, however, pointed out that word embeddings tend to perform worse than n-grams on short pieces of text \cite{abddetecting2020}. As with other NLP features, combining word-embedding-based features with the other types also yielded better classification outcomes than using them in isolation \cite{araque2020approach,nouh2019understanding}.
The main purpose of most articles using features based on Neural Language Models in classification tasks is the detection of extremist content. Like other types of features, they are quite dependent on the type of machine learning algorithm used \cite{masood15using,kim2017empirical,johnston2020identifying,devyatkin2017exploring}, but they work especially well when combined with neural networks of different types \cite{ahmad2019detection,johnston2017identifying}. They are also good at detecting radical users, but have been found to perform worse than n-grams at detecting extremism in small pieces of text \cite{abddetecting2020}.
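The property exploited by these embedding-based features is that related terms lie close together in vector space, usually measured with cosine similarity. The sketch below uses invented 3-dimensional vectors as stand-ins for real embeddings (word2vec or GloVe vectors typically have 100-300 dimensions).

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm

# Hypothetical embeddings: the two domain-related terms point in a
# similar direction, the unrelated term in a different one.
emb = {
    "jihad":      [0.9, 0.1, 0.0],
    "mujahideen": [0.8, 0.2, 0.1],
    "weather":    [0.0, 0.1, 0.9],
}
# Related terms score near 1, unrelated terms near 0, which lets a
# classifier generalise across vocabulary that never co-occurs in
# the training data (unlike n-gram features).
```

This generalisation across surface forms is what the reviewed works credit for the stronger performance of embedding features over vectorial space models.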
\subsection{Descriptive approaches}
\label{Descriptive}
A second application of NLP techniques in extremism research is the characterization and study of the phenomenon of extremism from a descriptive perspective. Within these works, five different descriptive focuses can be identified:
\begin{itemize}
\item \textit{Terms}: descriptive analysis on the terms commonly used by extremists. Characterization of the type of extremist vocabulary.
\item \textit{Topics}: detection of the most common topics discussed by extremist texts.
\item \textit{Sentiment}: analysis of the sentiment and tone of an extremist discourse.
\item \textit{Semantic}: analysis of the contextual information around terms inside an extremist text.
\item \textit{Punctuation}: descriptive analysis of the text format commonly used in the extremist environment.
\end{itemize}
Table \ref{tab:descriptive_approach} summarizes the type of descriptive analysis performed in each of the articles reviewed. The simplest descriptive approach focuses on the terms, while the other approaches (topics, sentiment, semantic or punctuation) add extra layers to the description of the discourse. This is why the terms approach is the most common. In addition, almost all the other descriptive analyses first perform a term analysis, showing that the approaches are complementary. Sentiment analysis is the only one occasionally performed independently.
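The term-focused analyses that most of these works start from amount to frequency counting over a cleaned corpus. A minimal sketch, with an invented corpus of short posts standing in for the magazines, forum posts or tweets actually analysed:

```python
from collections import Counter

# Hypothetical corpus; real studies tokenise and clean the text first.
posts = [
    "the caliphate calls the brothers",
    "brothers of the caliphate unite",
    "nice weather in the city",
]
terms = Counter(tok for post in posts for tok in post.split())
top = terms.most_common(3)
# The stop word "the" dominates the ranking, which is why stop-word
# removal usually precedes this kind of frequency-based term
# characterisation of a discourse.
```

The resulting ranked list is the raw material for the "Terms" row of the descriptive analyses summarised above; topic, sentiment and semantic analyses then add interpretation layers on top of it.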
\begin{table}[h]
\centering
\caption{Descriptive linguistic approach used by the reviewed articles.}
\begin{tabular}{|l|l|p{8cm}|}
\hline
\textbf{Descriptive linguistic approach} & \textbf{Percentage Use} &
\textbf{Articles using it} \\ \hline
Terms & 67.85\% &
\cite{heidarysafa2020women,rekik2019violent,kinney2018theming,gomes2017profiling,torregrosa2020analyzing,alizadeh2019psychology,bisgin2019analyzing,hall2020machines,stankov2010contemporary,prentice2012language,ben2016hate,alghamdi2012topic,bermingham2009combining,klein2019online,abdelzaher2019systematic,wei2018detecting,wignell2018natural,macnair2018changes,skillicorn2015empirical} \\ \hline
Topics & 46.42\% &
\cite{heidarysafa2020women,kinney2018theming,alizadeh2019psychology,bisgin2019analyzing,hall2020machines,ben2016hate,alghamdi2012topic,bermingham2009combining,klein2019online,o2012analysis,ottoni2018analyzing,o2015down,wadhwa2015approach} \\
\hline
Sentiment & 39.28\% &
\cite{heidarysafa2020women,bermingham2009combining,torregrosa2020analyzing,wignell2018natural,macnair2018changes,chen2008sentiment,scrivens2020measuring,dillon2020comparison,scrivens2018searching,scrivens2015sentiment,alizadeh2019psychology} \\
\hline
Semantic & 17.85\% &
\cite{wignell2018natural,ottoni2018analyzing,gomes2017profiling,prentice2012language,abdelzaher2019systematic} \\
\hline
Punctuation & 3.57\% &
\cite{stankov2010contemporary} \\ \hline
\end{tabular}%
\label{tab:descriptive_approach}
\end{table}
Regarding the insights about extremism found in the reviewed works, Sections \ref{insight_religious} and \ref{insight_farright} highlight the main patterns observed, classified by the two predominant types of extremism found in Section \ref{general}: religious and political. Table \ref{tab:comparison-extremist-discourse} introduces a summary and comparison of the two most studied extremist movements, Jihadism and the Far-right, which are explained in detail below.
\begin{table}[h]
\centering
\caption{Comparison of discourse insights from the most commonly mentioned extremist groups' discourse.}
\begin{tabular}{|p{1.5cm}|p{6.3cm}|p{6.3cm}|}
\hline
\textbf{Insights} &
\textbf{Jihadi extremism} &
\textbf{Far-right extremism} \\ \hline
\multirow{2}{*}{Terms} &
-Religious terms, geographical references. &
-Supremacist, racist, anti-immigration and anti-left terms. \\
&
-Specific slang related to the religious conflict (e.g. ``Crusaders", ``Kaffir", etc.). &
-Specific slang regarding the previously mentioned terms (e.g. ``Illegal Aliens", ``WhiteGenocide", ``14-88", etc.) \\ \hline
Topics &
-Religion, war, geopolitics, extremist philosophy, recruitment, military. &
-Politics, racial topics, immigration and war. \\ \hline
\multirow{4}{*}{Sentiment} &
-Jihadi women tend to be more extreme on their messages than men. &
-Negative messages directed against Jews, LGBT and black people. \\
&
-General presence of negative tone. &
-General presence of negative tone \\
&
-Words related with emotions of fear, hate and violence, except when talking about topics such as paradise or martyrdom. &
-Use of anger, disgust and negativity related terms. \\
&
-A positive tone towards ISIS can be related to a complicity with this group. &
\\ \hline
Semantic &
-Preference for terms such as ``Islamic State" or ``Caliphate", instead of ISIS. Entities are a good way to discriminate between a regular or an extremist use of a term. &
-Common semantic categories include ``violence" and ``anger" \\ \hline
Punctuation &
-Frequent use of Arabic terms, even in non-Arabic texts. &
\\ \hline
\end{tabular}%
\label{tab:comparison-extremist-discourse}
\end{table}
\subsubsection{Literature insights about religious extremism}
\label{insight_religious}
Concerning common terms used in religious extremism, the name ``ISIS" was mentioned more by neutral users than by extremist users \cite{wignell2018natural,gomes2017profiling,bisgin2019analyzing}, who preferred the terms ``Islamic State" or ``Caliphate". The most frequent terms found in the extremist texts analyzed in the articles were related to religion (e.g. Allah, Jihad or Islam) or geographical references (e.g. Syria, Raqqa, America or Iraq) \cite{wignell2018natural,gomes2017profiling,wei2018detecting,bisgin2019analyzing,skillicorn2015empirical}. The descriptive analysis of the texts also detected the common use of specific slang terms, such as ``Crusaders", ``Mujahideen" or ``Abu" \cite{gomes2017profiling,wei2018detecting}.
The works carrying out a descriptive analysis focused on topics show that the most frequent topic related to Jihadi extremism was, unsurprisingly, religion \cite{scanlon2015forecasting,bermingham2009combining,kinney2018theming}. The most easily identifiable topics in Jihadi magazines were war, geopolitics, religious speech, government and administration \cite{bisgin2019analyzing}. Inspire (Al Qaeda's magazine) was more focused on conflict legitimisation and philosophy, while Dabiq and Rumiyah (ISIS magazines) were more focused on the geopolitical conflict \cite{kinney2018theming}. Some topics, such as recruitment, were found hidden among topics referring to religious and military aspects of the Syrian conflict \cite{scanlon2015forecasting}.
Combining sentiment analysis and topic detection, jihadi women were found to be more extreme than men in their messages on nearly every relevant topic \cite{bermingham2009combining}. Concerning the journals, it was found that most of their texts had a negative tone and used terms related to fear, except when they discussed topics such as paradise or martyrdom \cite{wignell2018natural,macnair2018changes}. Words such as Allah or Islamic State were also found to have negative connotations when analyzed through a sentiment analysis approach; the authors hypothesize that this may be due to their use as a justification of violent behaviours. A study concerning jihadi radical forums also found that the most extremist texts scored higher on negative dimensions, using violence and hate terms, than the more moderate ones \cite{chen2008sentiment}. Finally, a study hypothesized that radical users presenting a positive tone towards ISIS in their tweets were in fact showing complicity with it \cite{wei2016identification}.
While the descriptive term analysis approach helps provide a first insight, it should be remembered that context can change the meaning of a token \cite{wei2018detecting}. From this perspective, articles focused on semantic discrimination made it possible to check how these keywords are used depending on the intention of the text. For example, Gomes et al. \cite{gomes2017profiling} stated that the background of the terms ``ISIS", ``Islamic" and ``Syria" changes depending on the origin of the text analysed (neutral or extremist). A study analyzing divergences in the semantic meaning of words, conducted by Fernandez et al. \cite{fernandez2018contextual}, classified terms into different semantic groups (category, entity and type of entity). It was found that similar words were used differently by radical and non-radical users, including the names of radical groups. Entities were found to be a good way of discriminating the semantic meaning of a term. Finally, the study of Kursuncu et al. \cite{kursuncu2019modeling} conducted a comparative analysis between extremist and non-extremist religious users. They found that, while both groups shared terminology when referring to religious concepts, the extremist group used many more terms related to radical Islamism and hate speech. This is why using token analysis techniques combined with other strategies can be more informative than using them in isolation.
As these insights show, and taking into consideration the features of an extremist discourse presented in Section \ref{operativization}, Jihadi extremism presents several of these features. Their use of specific slang and expressions, together with a negative tone, shows a specific linguistic style. They also build their discourse with a special emphasis on a theological and moral narrative, but also with the glorification of religious acts of violence against a common enemy (Western society and non-believers). While it is difficult to determine how much their use of war topics is related to a specific narrative rather than to the geopolitical situation of the territories in which they operated, it can be stated that war (and its instrumentalization) is a key element in the construction of their narrative.
\subsubsection{Literature insights about political extremism}
\label{insight_farright}
Regarding the reviewed works conducting a descriptive analysis of the terms most commonly used in far-right extremism, an article analyzing an Alt-right community \cite{torregrosa2020analyzing} found that they used racist (BlackMagic, WhitesLivesMatter), anti-immigration (BuildTheWall, IllegalAliens), supremacist (WhiteGenocide, WhitePeople, ChasingDownWhites) and anti-left (AntifaTerrorists) terms and hashtags in their tweets. This work also found specific slang used to refer to racial minorities, such as ``aliens" for immigrants. Among a sample of videos massively attacked by far-right groups from 4chan, some of the most mentioned keywords were ``black", ``police", ``white", ``shot", ``gun", ``world", ``war", ``American", ``government" or ``law" \cite{mariconti2019you}. Other relevant keywords of far-right extremist groups include the numbers ``14" (a reference to the ``fourteen words", a white nationalist slogan) and ``88" (meaning ``Heil Hitler", as H is the 8th letter of the alphabet), but also references to genocide, Nazism, and anti-Islamic and anti-Jewish groups \cite{o2012analysis,o2015down}.
Concerning the analysis of topics in political extremist groups, it was found that the most common topics discussed by far-right groups were racial topics \cite{ottoni2018analyzing,ben2016hate,alizadeh2019psychology,o2015down}, immigration \cite{ottoni2018analyzing,ben2016hate} and war \cite{ottoni2018analyzing}, with a very aggressive treatment of these topics \cite{mariconti2019you}. This is unsurprising, as racial content, war and immigration are topics commonly found in the far-right discourse \cite{panizo2019describing}. An interesting pattern was that non-institutional groups were more focused on a racial and anti-immigration discourse \cite{ben2016hate,klein2019online} than institutional far-right groups, such as political parties. Those parties were occasionally found to have a populist discourse directed against the elites \cite{klein2019online}. The only article analysing far-left groups found that they discussed feeling-related topics more than other groups \cite{alizadeh2019psychology}.
Regarding sentiment analysis, one of the reviewed articles \cite{torregrosa2020analyzing} found that higher relevance in a far-right community was related to a significantly higher use of negative and aggressive terminology. Similarly, the study of Figea et al. \cite{figea2016measuring} found that anger words can be useful to identify emotional concepts related to political extremist content, such as aggressiveness and concerns about other groups. Also, highly negative messages were commonly directed against Jews, LGBT and black people (especially the first two) \cite{scrivens2020measuring}.
Only one article \cite{alizadeh2019psychology} focused on analyzing differences between far-right and far-left discourses, using a dictionary-based approach (both the LIWC and Moral Foundations dictionaries). For this purpose, the authors combined different NLP features to conduct a descriptive analysis from different perspectives: terms, topics and feelings. They found that the far-right used more positive words, together with terms regarding obedience to authority and pureness, while the far-left used more negative terms, anxiety words and terms related to justice and harm avoidance. Concerning the sentiment approach, this study found that both groups used a generally negative tone compared to non-extremist political groups. However, of all the previously mentioned outcomes, only obedience-to-authority words showed a significant difference.
Finally, the only reference to semantic analysis in articles related to political extremism appears in Ottoni et al. \cite{ottoni2018analyzing}, who detected that terms from extremist groups tend to be classified into ``negative" categories using the semantic tagger from Empath. Within this category, the most relevant were anger and violence.
As with religious extremism, far-right extremism also presents several features of the extremist discourses presented in Section \ref{operativization}. One of its most relevant traits is the use of specific and aggressive slang to refer to other groups. However, this is not especially surprising, considering that some of these groups are very active on the Internet. They rely on political and historical narratives to build their discourse, also including a component of ``self-victimization" in it. They also use hate speech and otherness as discursive resources (especially the former, compared to religious extremism), and frequently include references to war narratives.
\section{NLP Datasets \& Tools}
\label{Software}
In the analysis carried out in Section \ref{general}, it was noted that the data sources and the specific tools used for NLP appear frequently as relevant keywords of the articles. This is because they are a fundamental part of any research work related to the study of a particular domain, in this case the extremism phenomenon. The following subsections present a detailed description of both the data sources and the tools used in the works reviewed.
\subsection{Datasets and datasources}
\label{Datasets}
Obtaining a dataset is a key part of the NLP research process. In the case of online extremism, this step becomes especially difficult, as most of the information represents a risk for security or anonymity. Therefore, finding open datasets online becomes a hard task.
\begin{table}[h]
\centering
\caption{Publicly available extremist datasets.}
\begin{tabular}{|p{3.5cm}|l|l|p{2.5cm}|p{2cm}|}
\hline
\textbf{Dataset} &
\textbf{Size} &
\textbf{Language} &
\textbf{Source} &
\textbf{Articles using this source} \\ \hline
Al-Firdaws \cite{AZSecure-AlFirdaws} &
39,715 posts - 2,187 users &
Arabic &
Dark web forum &
\cite{chen2008sentiment} \\ \hline
Montada \cite{AZSecure-Montada} &
1,865,807 posts - 52,546 users &
Arabic &
Dark web forum &
\cite{chen2008sentiment} \\ \hline
Ansar1 \cite{AZSecure-Ansar1} &
29,492 posts - 382 users &
English &
Dark web forum &
\cite{scanlon2015forecasting} \\ \hline
How ISIS uses Twitter (Kaggle) \cite{KaggleHowIsis} &
17,410 tweets - 112 users &
English &
Twitter &
\cite{araque2020approach,zahra2018framework,fernandez2018contextual,rehman2021understanding,kursuncu2019modeling,fernandez2018understanding,abddetecting2020,nouh2019understanding,gomes2017profiling} \\ \hline
Automated Hate Speech Detection and the Problem of Offensive Language \cite{Davidson-dataset} &
24,802 tweets - N/A users &
English &
Twitter &
\cite{johnston2020identifying} \\ \hline
Crisis Lex Dataset (not specified) \cite{Crisis-dataset} &
Not specified &
English &
Twitter &
\cite{zahra2018framework} \\ \hline
UDI-TwitterCrawl-Aug2012 \cite{UDI-dataset2012} &
50,000,000 tweets - 147,909 users &
English &
Twitter &
\cite{agarwal2015using} \\ \hline
Dataset-ATM-TwitterCrawl-Aug2013 \cite{UDI-dataset2013} &
5,000,000 tweets - N/A users &
English &
Twitter &
\cite{agarwal2015using} \\ \hline
Religious Texts Used By ISIS \cite{KaggleReligiousTexts} &
2,685 religious texts &
English &
Religious texts
&
\cite{rehman2021understanding,abddetecting2020} \\ \hline
Tweets targeting ISIS \cite{KaggleTargeting} &
122,000 tweets - 95,725 users &
English &
Twitter &
\cite{rehman2021understanding,abddetecting2020,nouh2019understanding} \\ \hline
Gawaher \cite{AZSecure-Gawaher} &
372,499 posts - 9,629 users &
English & Dark web forum &
\cite{scrivens2015sentiment,scrivens2018searching} \\ \hline
Turn to Islam \cite{AZSecure-Turn} &
335,338 posts - 10,858 users &
English &
Dark web forum &
\cite{scrivens2015sentiment,scrivens2018searching} \\ \hline
\end{tabular}%
\label{tab:datasets}
\end{table}
Many of the articles included in the review use their own datasets. The reader is encouraged to contact the authors of the different articles to ask for their data. However, this section deals with the articles which used datasets that are either public or can be obtained from their original source. Table \ref{tab:datasets} shows a summary of the publicly available datasets used in the literature. This table contains the name of the dataset, an approximation of its size, the original language, the source of the data, the articles using those datasets and a bibliographic reference including a link to the dataset itself.
There are also data sources which are often used to extract texts, but which are not pre-processed in the way datasets are. Table \ref{tab:data_sources} presents the different extremist magazines used in the literature to conduct NLP analysis. The data from these sources, however, must be curated before conducting any analysis.
It shall be stated that, besides the already mentioned datasets (which are part of this review), there are other sources that might be useful for anyone interested in obtaining more textual data related to extremism and radicalisation. While these datasets are not used by the reviewed documents, and therefore remain outside the scope of this article, the authors consider it interesting to highlight some of them in order to help researchers find more publicly available data. As with the type of extremism of the articles in this review, they will be divided into two groups: political and religious extremism.
Concerning political extremism, a dataset of the far-right forum Stormfront \cite{gibert2018hate} can be found on GitHub\footnote{https://github.com/Vicomtech/hate-speech-dataset}. Also, a dataset of alt-right users was validated by Thorburn et al. \cite{thorburn2018measuring}, which is publicly available upon request to the authors. Besides, speeches from different political parties can be found on the webpage of the Manifesto Project Database\footnote{https://manifestoproject.wzb.eu/}, with textual data from parties with different ideologies.
Finally, related to religious extremism, the Global Terrorism Research Project (which is the source to download the Inspire magazine in Table \ref{tab:data_sources}) offers much more content than previously stated, including more magazines and datasets\footnote{http://gtrp.haverford.edu/resources/}. The same happens with the AZSecure webpage, which contains datasets from dark web jihadist forums in different languages\footnote{https://www.azsecure-data.org/dark-web-forums.html}.
\begin{table}[h]
\centering
\caption{Publicly available extremist data sources.}
\begin{tabular}{|l|l|l|}
\hline
\textbf{Data source} &
\textbf{Type of source} &
\textbf{\begin{tabular}[c]{@{}l@{}}Articles using this \\ source\end{tabular}} \\ \hline
Dabiq \cite{GTRP-Dabiq} &
Extremist magazine &
\cite{macnair2018changes,kinney2018theming,wignell2018natural,bisgin2019analyzing,araque2020approach,johnston2017identifying,johnston2020identifying,de2020radical,heidarysafa2020women,skillicorn2015empirical} \\ \hline
Rumiyah \cite{GTRP-Rumiyah} &
Extremist magazine &
\cite{macnair2018changes,kinney2018theming,wignell2018natural,araque2020approach,johnston2017identifying,johnston2020identifying,de2020radical,heidarysafa2020women} \\ \hline
Inspire \cite{GTRP-Inspire} &
Extremist magazine &
\cite{sikos2014authorship,johnston2017identifying,johnston2020identifying,skillicorn2015empirical} \\ \hline
Azan \cite{Archive-Azan} &
Extremist magazine &
\cite{skillicorn2015empirical} \\ \hline
\end{tabular}%
\label{tab:data_sources}
\end{table}
\subsection{Tools}
While conducting a research work, authors must consider which tools they are using for their experiments, along with which knowledge bases they use, for example, to create a lexicon. In this section, the most frequently used NLP tools in the study of extremism and radicalisation are reviewed.
\begin{figure}[h]
\centering
\includegraphics[width=0.65\linewidth]{Images/Imagen_NLP_Tools.png}
\caption{NLP tools used among the articles.}
\label{fig:software}
\end{figure}
Fig. \ref{fig:software} shows the frequency of use of different NLP tools. Only those used in three or more articles have their own category, while the rest are included under the ``others'' category. Also, the category ``not specified'' includes all those articles not clearly mentioning the software tools they used \cite{chen2008sentiment,alghamdi2012topic,rowe2016mining,wei2018detecting,scanlon2015forecasting,hartung2017identifying,zahra2018framework,sharif2019empirical,fernandez2018understanding}.
The most frequently used NLP tools are:
\begin{itemize}
\item \textbf{SentiStrength\footnote{http://sentistrength.wlv.ac.uk/}:} this tool, developed in 2010 \cite{thelwall2010sentiment}, was created to analyse the emotional valence (sentiment) of short texts. It uses a dictionary with sentiment-related terms, from which it calculates the ``strength'' of the tone of different expressions. SentiStrength can report binary (positive vs negative), trinary (positive/negative/neutral) and single-scale (-4 to +4) sentiment results. Among the reviewed articles, it was the most commonly used tool to determine sentiment \cite{weir2016positing,scrivens2016sentiment,wei2016identification,saif2017semantic,owoeye2019classification,scrivens2015sentiment,macnair2018changes,scrivens2020measuring,scrivens2018searching}.
\item \textbf{Linguistic Inquiry Word Count\footnote{http://liwc.wpengine.com/}:} this tool, also known as LIWC \cite{pennebaker2001linguistic}, was created in 2007 with the aim of studying language through a psychological perspective. LIWC relies on pre-established dictionaries (which can be expanded with dictionaries made by the researcher) that are used to identify categories of words and psycho-linguistic processes underlying a text \cite{tausczik2010psychological}. Eight articles used it to conduct their analysis \cite{alizadeh2019psychology,hall2020machines,smith2020detecting,sikos2014authorship,figea2016measuring,nouh2019understanding,torregrosa2020analyzing,rehman2021understanding}.
\item \textbf{OpenNLP\footnote{https://opennlp.apache.org/}:} the Apache OpenNLP library is a machine learning based toolkit for the processing of natural language text\footnote{https://opennlp.apache.org/docs/}, coded in Java. It supports different NLP tasks, providing several options to analyse texts. Four articles in the review used OpenNLP \cite{scrivens2018searching,scrivens2015sentiment,scrivens2016sentiment,weir2016positing}.
\item \textbf{IBM Watson Natural Language Understanding\footnote{https://www.ibm.com/watson/natural-language-processing}:} this software, developed by IBM, in fact includes several packages, which allow conducting NLP analysis from different approaches (for example, open analysis vs questions and answers). This software can apply several NLP techniques to texts, such as semantic tagging, sentiment scoring or keyword and topic extraction. It was used by two articles in the review \cite{ahmad2019detection,wignell2018natural}. Also, the software AlchemyAPI, which was used by two other articles from the review \cite{saif2017semantic,saif2016role}, was incorporated into the core of Watson NLU in 2015\footnote{https://www.ibm.com/cloud/blog/announcements/bye-bye-alchemyapi}.
\item \textbf{Natural Language Toolkit\footnote{https://www.nltk.org/}:} the Natural Language Toolkit (NLTK) is an NLP Python library created in 2002 \cite{loper2002nltk}. It performs NLP tasks very similar to those of OpenNLP. Four articles used this library \cite{ben2016hate,heidarysafa2020women,kinney2018theming,klein2019online}.
\item \textbf{Stanford CoreNLP\footnote{https://stanfordnlp.github.io/CoreNLP/}:} Stanford CoreNLP is another Java-based NLP tool, developed at Stanford \cite{manning2014stanford}. It can perform analysis in different languages, and one of its main features is that it is quite easy to set up and run \cite{pinto2016comparing}. Three articles used this NLP tool \cite{wei2016identification,kim2017empirical,bisgin2019analyzing}.
\end{itemize}
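The dictionary-based scoring that tools such as SentiStrength rely on can be illustrated with a minimal sketch. The lexicon, booster words and clipping behaviour below are simplified assumptions for illustration, not SentiStrength's actual dictionaries or algorithm.

```python
# Toy dictionary-based sentiment-strength scorer in the style of
# SentiStrength. LEXICON and BOOSTERS are invented placeholders.
LEXICON = {"love": 3, "great": 2, "good": 1, "bad": -1, "awful": -2, "hate": -3}
BOOSTERS = {"very": 1, "extremely": 2}

def sentiment_strength(text):
    """Return (positive, negative) strengths on a +1..+4 / -1..-4 scale."""
    pos, neg = 1, -1          # neutral text is reported as (+1, -1)
    boost = 0
    for word in text.lower().split():
        if word in BOOSTERS:  # boosters amplify the next sentiment word
            boost = BOOSTERS[word]
            continue
        score = LEXICON.get(word, 0)
        if score > 0:
            pos = max(pos, min(4, score + boost))
        elif score < 0:
            neg = min(neg, max(-4, score - boost))
        boost = 0
    return pos, neg
```

The binary or trinary outputs mentioned above can then be derived by comparing the two strengths.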
Even though Fig. \ref{fig:software} summarizes the most used NLP tools, other tools are used by fewer than three reviewed articles. These tools include WordNet \cite{bermingham2009combining}, the Stanford Maximum Entropy Part-of-speech Tagger \cite{bermingham2009combining}, Vader \cite{wei2016identification,torregrosa2020analyzing}, WMatrix \cite{prentice2012language}, Gensim \cite{ottoni2018analyzing}, iSA \cite{ceron2019isis}, the Arules Package \cite{rekik2019violent}, MALLET \cite{hall2020machines}, the Language Detection Library for Java \cite{agarwal2015using}, POSIT \cite{weir2016positing,owoeye2018classification}, TextRazor \cite{fernandez2018contextual}, the Language Model Toolkit \cite{mariconti2019you}, ConceptNet \cite{mariconti2019you}, the TensorFlow Vocabulary Processor \cite{johnston2020identifying} and the Python-based tone analyzer API \cite{ahmad2019detection}.
\section{Discussion and Conclusion}
\label{Discussion}
This review aimed to explain the contributions that NLP has provided to extremism research so far. This interest was divided into several research questions presented in the introduction, regarding the different NLP issues analyzed. Throughout the article, those issues have been both descriptively and comparatively analyzed based on the literature included in the review. This last section presents three topics: the answers to the research questions previously presented, a summary of future trends, challenges and directions, and a brief conclusion.
\subsection{Answer to research questions}
The different research questions, regarding the state of the literature, were presented in the introduction as a justification for conducting the survey. These research questions can now be answered through the detailed analysis and insights drawn from the literature review process conducted in this article. Figure \ref{fig:RQ_summary} shows a summary of the conclusions reached after the exhaustive review, highlighting the main findings for each of the questions posed at the beginning of the review. Each of these answers is explained in more detail below.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{Images/ResearchQuestionsAnswer.pdf}
\caption{Research questions' answer highlights.}
\label{fig:RQ_summary}
\end{figure}
\begin{itemize}
\item \textbf{RQ1. What are the current topics and contributions from NLP to extremism research?}
Literature related to NLP approaches to extremism research has grown over the last few years. Religious extremism remains the most covered topic, followed by far-right extremism.
Terrorism (especially Jihadist terrorism) and counter-terrorism appear to be key motivations behind the interest in these topics, as detecting extremist content can help prevent radicalisation processes and, therefore, avoid attacks such as those experienced in recent years \cite{johansson2017detecting}.
The interest in extremism detection is reflected in the many mentions of machine learning algorithms, as their combination with NLP approaches can be useful to create classification models that identify extremist content. Finally, even though it is beyond the scope of this review, SNA also appears as an analytical approach commonly linked to the study of language in extremism research.
\item \textbf{RQ2. What NLP techniques are used on extremism research?}
Section \ref{Techniques} highlights that n-grams, TF/TF-IDF and sentiment analysis are the techniques most commonly used to study extremist discourse. It is unsurprising to see the first two approaches as the most common, taking into account that they are a preliminary step toward more complex analyses, such as sentiment analysis itself.
However, it shall be considered that the use of neural network models (word embeddings) is becoming more common in the study of extremist discourse, and therefore should be considered by researchers interested in this topic. This is especially relevant, as authors have pointed out that detecting the most common terms used in a specific domain is not enough to understand in what sense they are being used in the text. Therefore, techniques capturing information about the context and the meaning of the terms (e.g. embeddings or semantic tagging) should also be considered an important part of any textual analysis, especially taking into account that extremist texts use words from regular discourse, but with different objectives.
\item \textbf{RQ3. How have NLP techniques been applied in the field of extremism research?}
54.68\% of the articles reviewed performed classification tasks using ML approaches, as stated in Section \ref{Machine}. Again, this is unsurprising, as the main objective of extremism researchers is to detect and prevent that content. Among the ML algorithms, SVM was the most commonly used, followed by Random Forests, Naïve Bayes and Decision Trees. Concerning the best models for classification, SVM reported a generally good performance. However, in the most recent research works, neural network approaches performed especially well compared to other models, and appear as a promising trend in the detection of extremism.
The rest of the articles (see Section \ref{Descriptive}) focused on describing the main features that differentiate between regular and extremist texts, with the interest of characterizing this type of discourse. This provided insights that could help future researchers identify which textual features are most useful to analyse in order to detect (and prevent) extremism on social media.
\item \textbf{RQ4. What NLP software tools are commonly used on extremism research?}
Section \ref{Software} highlights SentiStrength as the most used tool to conduct NLP analysis. Specifically, this tool is used to conduct sentiment scoring, through automatic tagging of words around a token. The second one is LIWC, a tool based on dictionaries with a psycholinguistic approach.
Two points shall be stated here. First, 25 articles did not report the software tool they used to conduct the analysis. Second, 17 articles used a software tool used by fewer than three articles. Therefore, while several NLP software tools were used, it can be stated that there is no single tool commonly used in the literature to conduct NLP analysis.
\item \textbf{RQ5. Which publicly available datasets or data sources have authors used to conduct NLP experiments on extremism research?}
Most of the articles included in the review relied on their own private datasets to conduct their research. However, some of the datasets, especially those concerning religious radicalisation and Twitter, forums or radical magazines, are currently public. A summary of those public datasets, together with extra datasets suggested by the authors, is presented in Section \ref{Datasets}.
\end{itemize}
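The term weighting and n-gram extraction identified above as the most common techniques can be sketched with a minimal, self-contained example; the toy corpus below is an invented placeholder.

```python
import math
from collections import Counter

# Toy corpus (invented placeholder documents).
corpus = [
    "the group posted the message",
    "the message was removed",
    "the new forum was created",
]

def tf_idf(term, doc, docs):
    """Plain TF-IDF: term frequency in doc times log inverse document
    frequency over the corpus (assumes the term occurs in some document)."""
    words = doc.split()
    tf = Counter(words)[term] / len(words)
    df = sum(1 for d in docs if term in d.split())
    return tf * math.log(len(docs) / df)

def ngrams(text, n):
    """Contiguous word n-grams, the other preliminary technique noted above."""
    words = text.split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
```

Terms that appear in every document (here, "the") receive a zero weight, while domain-specific terms are weighted higher, which is why TF-IDF is a common preliminary step before classification.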
\subsection{Future trends and challenges}
The research questions and their answers provide a general picture of the current state of the art concerning contributions of NLP to extremism research. However, the analysis conducted in this survey also provides different insights concerning the future of the area. This section presents the future trends that the literature will follow given its current state, the challenges that will be faced, and the directions to confront those challenges (see Figure \ref{fig:future}).
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{Images/TrendsChallanges.pdf}
\caption{Future trends and challenges of NLP approaches applying on extremism research area.}
\label{fig:future}
\end{figure}
As shown in Figure \ref{fig:future}, there are 3 main trends that can be derived from the research questions and 3 future challenges for NLP applications to extremism research, which are explained in detail as follows:
\begin{itemize}
\item \textit{Future Trends}:
\begin{enumerate}
\item Interest in political extremism will grow in the short term. At the time this survey is being written, the Capitol assault and the shutdown of Parler (one of the most famous online bastions of far-right groups) have attracted the interest of both the general public and researchers. In fact, several datasets concerning online political extremism are being released nowadays, which will also increase the opportunities to study this phenomenon, also taking advantage of the lessons learnt in the study of religious extremism. Therefore, this research field remains promising for future years.
\item Concerning machine learning for extremism prediction, neural network based techniques have shown promising outcomes in the articles reviewed. Therefore, and given how little of the literature has approached extremist classification from this perspective, the use of these techniques remains a promising trend for the future. The use of deep learning NLP approaches (based on neural language models) also provides a way to overcome the lack of semantic information extracted from the texts, which is a key challenge in the study of extremist discourse. However, overcoming their main limitation in this area (the explainability of the model) will represent a turning point in the use of this type of approach, leading to more accurate discriminant models.
\item Multivariate classification models (those using different types of features to discriminate) achieve better results in the reviewed papers. Furthermore, the general analysis carried out in Section \ref{general} shows that some of the works reviewed use Social Network Analysis to pursue research in the area of extremism. This approach, based on the analysis of interactions among users, could be a good complement to the study of extremist dynamics in online environments \cite{camacho2020four}. Indeed, approaches combining NLP and SNA have been used in other research fields, such as fake news \cite{zhou2020survey}, and also in some articles in the extremism area \cite{torregrosa2020analyzing}, providing good results. Therefore, applying approaches that combine techniques from both areas to analyze extremist behaviors based not only on discourse (text), but also on their dynamics in online social media, will be another relevant trend to address in the short term.
\end{enumerate}
\item \textit{Future Challenges}:
\begin{enumerate}
\setcounter{enumi}{3}
\item The presence of multiple languages in extremist texts is one of the first limitations of the research area (especially concerning religious extremism). This limitation, very common in this type of text, can bias an analysis depending on the techniques used (for example, dictionaries vs n-grams), and therefore implies a lot of extra interpretative or preparatory work for the researcher. The use of new approaches, such as word embeddings (and, especially, those that recognize word variations), could be the right direction to follow here, together with the creation of specific lexicons for different types of extremism.
\item The explainability of the classification models is one of the most important challenges currently facing the area, due to the psychological, criminological and sociological roots of extremism. The interest in detecting extremist content relies not only on the detection itself, but on the extraction of insights to understand more about the mind of extremists. With this understanding, classification can be fine-tuned, discourses can be countered, and first signs can be identified. Therefore, a balance shall be found between the accuracy of the model and its explainability.
\item The relative absence of public data sources will remain one of the most challenging issues to confront in extremism research. Even though there is a lot of data that can be extracted from online platforms, such as Twitter or web forums, the ethical concerns related to anonymity and the private nature of most of the stored data cause researchers to avoid sharing their datasets. This ultimately leads researchers to create new datasets each time they want to conduct a new experiment, instead of enriching already stored data with new information. Therefore, creating and sharing full datasets with other researchers, always respecting the ethical steps to do so, will facilitate the access of new researchers to this field, improving the quality and quantity of the outcomes.
\end{enumerate}
\end{itemize}
\subsection{Conclusion}
\label{Conclusion}
Currently, extremism represents a security and ideological challenge for Europe. Different kinds of movements, such as jihadi terrorism and far-right groups, have changed the political and social agenda of several countries, introducing hot topics that are now discussed as relevant issues for those countries \cite{ali2021far}. To confront this phenomenon, it is first necessary to understand the discourse, which is a reflection of the ideology of extremist groups. Only through this understanding can these movements be countered.
NLP, with its limitations, offers technical resources to describe these discourses, together with ways of extracting insights regarding how extremists use language compared to non-extremist groups. Through the descriptive and comparative analysis of techniques, software tools, classification approaches and datasets, this survey aims to provide the reader with a global picture of the applications NLP can provide to the study of extremism. This, ultimately, will help authors identify future research directions, relevant trends and challenges to overcome in the study of extremist discourse.
\section*{Acknowledgements}
\label{sec:acknowledgements}
This research has been supported by Ministry of Science and Education under DeepBio (TIN2017-85727-C4-3-P) and CHIST-ERA 2017 BDSI PACMEL projects (PCI2019-103623), by Comunidad Aut\'{o}noma de Madrid under S2018/ TCS-4566 (CYNAMON) and S2017/BMD-3688 grants and by the project DeepSCOP-Ayudas Fundación BBVA a Equipos de Investigación Científica en Big Data 2018. Eugenio Martínez-Cámara is supported by the Spanish Government fellowship program Juan de la Cierva Incorporación (IJC2018-036092-I). Javier Del Ser acknowledges funding support received from the Basque Government (Consolidated Research Group MATHMODE, ref. IT1294-19).
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
As voice assistant systems like Amazon Alexa, Google Assistant, Apple Siri and Microsoft Cortana become more ubiquitous and integrated into modern life, so does the risk of antagonists taking control of devices that we depend on.
Adversarial attacks on audio \cite{fgsm} are one way that a voice assistant could be subverted to send an unwanted message, make a bank transfer, or unlock a home.
All the mentioned voice assistants rely on wake-word detection to initiate their automatic speech recognition (ASR) systems that give control over their functions.
The wake word detection system is both a voice assistant's first line of defense against adversarial attacks and its least complex feature.
\begin{figure}[t!]
\centering
\centering
\includegraphics[width=1\linewidth]{figures/adv_channel.png}
\caption{The flow of information for a voice assistant wake-word detector and two attack vectors. The speaker generates audio for their intended command. The environment is the acoustic scene containing the speaker, other sources of audio like background and other speakers, distortions like reverberations, and the voice assistant's microphone and audio hardware including the ADC. The data (assumed to be 16 bit integer and 16kHz) is then available to a model which ultimately makes the decision whether or not to ``wake up'' its general ASR capability.}
\label{fig:adv_channel}
\end{figure}
A number of recent works have achieved adversarial examples for automatic speech recognition.
The Carlini-Wagner attack \cite{cw} can produce an adversarial waveform that is 99.9\% similar to the original, but causes the ASR to produce any other intended sentence.
Adversarial examples have since developed and improved in several ways.
An important attribute for practical adversarial audio is that it be causal.
Often an adversarial example is a function of the complete original audio that it is meant to modify, so original audio from hundreds of milliseconds in the future would have to be accessed to play an adversarial sound in the present.
One solution is {\it universal adversarial examples} \cite{univadv}, examples that can behave adversarially regardless of what original audio they are superimposed on.
Another important attribute for practical adversarial audio is that it be robust, that is the adversarial example remains effective despite some level of distortions.
This goal is not completely orthogonal to universal adversarial examples, since examples cannot be universal without also being invariant to translation in time and to superposition with other audio.
The ultimate test of robustness is over-the-air attacks, where, as shown in \cref{fig:adv_channel}, the adversarial example is introduced into the environment generating the data for the model rather than into the data directly.
Robust adversarial examples have been demonstrated to exist for simulated environment ASR systems \cite{qin_2019}, and more recently robust and universal examples have been generated for ASR systems, and even commercial wake word systems \cite{adv_music}.
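The gradient-sign construction behind many of these attacks \cite{fgsm} can be sketched on a toy linear detector; the weights and waveform below are random placeholders, and a real attack would differentiate through the full model rather than a dot product.

```python
import numpy as np

# FGSM-style sketch on a toy linear "wake-word score" s(x) = w . x,
# whose gradient with respect to the input x is simply w.
# Weights and audio are random placeholders, not a real detector.
rng = np.random.default_rng(0)
w = rng.standard_normal(16000)    # toy weights over 1 s of 16 kHz audio
x = rng.standard_normal(16000)    # original waveform (placeholder)

eps = 1e-3                        # L-infinity perturbation budget
x_adv = x + eps * np.sign(w)      # step in the direction that raises the score

# The perturbation is bounded by eps per sample, and the toy score rises by
# eps * sum(|w|) > 0, i.e. a tiny, broadband change flips the decision margin.
```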
\begin{figure*}[t!]
\begin{minipage}[b]{.7\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/corticalnet}}
\centerline{(a) Our proposed cortical network}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.29\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/cortical_complex_enhanced}}
\vspace{.75cm}
\centerline{(b) Complex colorized STRFs}\medskip
\end{minipage}
\caption{Our proposed cortical network (a), meant to emulate the first two stages of audio processing in the brain (labeled in light blue). Integrating this feature extraction directly into the network gives us two advantages over pre-processing: 1) the feature representation can be enhanced with learning (dropout with 1x1 convolutions); 2) we can backpropagate through these layers and find adversarial audio directly without any lossy transformations (such as Griffin-Lim). The complex STRFs of (a) are shown in (b).}
\label{fig:cortical_net}
\end{figure*}
Across many languages speech has a pattern of spectrotemporal modulations that differ from most environmental noise and distortions \cite{ding_2017}, and the brain has developed a hierarchical neural system that uniquely responds to these signals \cite{lyon_book}.
These specialized neuronal structures are in the auditory cortex, and for example are largely unresponsive to pure tones \cite{lyon_book}.
It has been demonstrated in animals that the auditory cortex encodes vocalization features while suppressing or being invariant to noise features, and invasive experiments in humans have also shown that auditory cortex neurons selectively suppress the acoustic features of noise in their neural responses \cite{khalighinejad_2019}.
This characteristic of speech-noise separability found in the auditory cortex is exactly the characteristic we would like in artificial intelligence voice assistants.
For example, if a wake-word system had an intermediate representation that was tuned to speech the same way our brains are, then it would be necessary to perceptibly modify speech to change that representation and ultimately the output of the wake-word model, making imperceptible adversarial examples impossible.
Indeed the aforementioned biological findings have inspired a computational model of the mammalian auditory cortex to create such a representation \cite{chi_2005,nsl_doc}.
Our aim in this work is to incorporate the advantages of this cortical representation into a voice assistant and show the efficacy of this representation for defending against adversarial audio attacks.
\section{Background}
\subsection{Cortical transform}
The details of human hearing have been studied for centuries. In this section we highlight some of the consensus of the last few decades of modeling for the first stages of how the brain makes sense of speech.
In the early stage, acoustic signals are transformed into two-dimensional spectrograms in the cochlea, which acts as a filter bank, and are then filtered by the hair cell and lateral inhibitory network \cite{lyon_book,mesgarani_2011}.
These lower auditory areas are tonotopically organized: each cochlear location is associated with a best frequency it responds to, with time as the other axis this representation can be interpreted as an auditory image \cite{lyon_book}.
As signals move out to the auditory cortex (A1), what was essentially 1 dimensional processing in the cochlea becomes 2 dimensional in A1 \cite{nsl_doc}.
Invasive experiments have measured the responses of ferrets' A1 neurons and characterized them with corresponding spectro-temporal receptive fields (STRFs) \cite{mesgarani_2008}.
An STRF of a neuron indicates the frequencies and time lags that correlate with an increased response of that neuron; these two dimensions are often referred to as scale (spectral) and rate (temporal).
A computational model for these receptive fields can be estimated from the measurements of these neurons responses \cite{shamma_95,nsl_doc}.
On the scale axis the transfer function of these neurons are well approximated by the second derivative of a Gaussian function \cite{nsl_doc}.
Neurons are tuned to be responsive to a variety of rates and scales \cite{mesgarani_2008}.
For scale features we parameterize the approximation of the neural response as:
\begin{equation}
R_{2,\psi}(y) = (\Omega_2 y / \psi)^2\exp{\left(1-(\Omega_2 y / \psi)^2\right)}
\end{equation}
\noindent
where $R_{2,\psi}(y) = \mathcal{F}\{ r_{2,\psi}(f) \}$ \cite{nsl_doc}.
On the rate axis, the response is modeled by a function with a central excitatory band surrounded by inhibitory side bands; this is approximated as:
\begin{equation}
r_{1,\omega}(t) = (t/\Omega_1\omega)^2\omega\exp{(-\alpha t / \Omega_1\omega)}\sin (2\pi t \, \Omega_1\omega)
\end{equation}
\noindent
where $\alpha=3.5$ was chosen empirically, as detailed in \cite{nsl_doc}.
Finally the 2D STRF filters are
\begin{equation}
f(\omega,\psi,\phi) = \begin{cases}
r_{2,\psi} \otimes r_{1,\omega} \ , & \phi = 1 \\
r_{2,\psi} \otimes \overline{r_{1,\omega}} \ , & \phi = -1
\end{cases}
\label{eq:strfs}
\end{equation}
\noindent
When phase is $-1$ the rate filter is conjugated.
These filters with $\psi\in\{.25, .5, 1, 2, 4, 8\}$ (cyc/oct) and $\omega\in \{4, 8, 16, 32\}$ (Hz) are displayed in \cref{fig:cortical_net}.
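A numerical sketch of the bank in \cref{eq:strfs} can be written directly from the formulas above. The sampling grids and the choice $\Omega_1=\Omega_2=1$ are assumptions for illustration; the spectral response is kept in its transform domain rather than inverted, and the conjugated $\phi=-1$ copies (which would double the bank to 48 filters) are omitted.

```python
import numpy as np

# Sketch of the STRF bank of Eq. (3): outer products of the spectral
# response (Eq. (1)) and the temporal impulse response (Eq. (2)).
# Omega_1 = Omega_2 = 1 and the sampling grids are assumptions.
ALPHA = 3.5
RATES = [4.0, 8.0, 16.0, 32.0]             # omega, Hz
SCALES = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]   # psi, cyc/oct

def rate_response(omega, t):
    """Temporal impulse response r_{1,omega}(t) with Omega_1 = 1."""
    return ((t / omega) ** 2 * omega * np.exp(-ALPHA * t / omega)
            * np.sin(2.0 * np.pi * t * omega))

def scale_response(psi, y):
    """Spectral transfer function R_{2,psi}(y) with Omega_2 = 1."""
    u = (y / psi) ** 2
    return u * np.exp(1.0 - u)

t = np.arange(100) / 100.0     # 1 s at an assumed 100 Hz frame rate
y = np.linspace(0.0, 8.0, 64)  # spectral modulation axis, cyc/oct

strfs = np.stack([np.outer(scale_response(psi, y), rate_response(omega, t))
                  for psi in SCALES for omega in RATES])
```

Each of the resulting 24 real-valued filters is separable in scale and rate, mirroring the outer-product structure of \cref{eq:strfs}.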
Taking these biophysical observations, we can build an audio processing pipeline that transforms sound into a two-dimensional auditory spectrogram, and then applies the STRFs in \cref{eq:strfs} to form a higher dimensional {\it cortical representation} from the magnitude response. This abstraction captures important physiological observations: selectivity to combined spectral and temporal features, and, with proper choice of $\mathcal{W}$, the temporal dynamics of phase locking, which decrease to less than 30 Hz in the cortex \cite{chi_2005} (something only possible with large filters).
This pipeline can be seen in \cref{fig:cortical_net}.
The cortical transform of audio is a high dimensional tensor with dimensions (time $\times$ frequency $\times$ rate $\times$ scale).
This can be reduced to two 3D tensors by reducing/max pooling across the rate ({\it scalegram}) or scale ({\it rategram}) \cite{mesgarani_2011}.
Such a pre-processing feature extraction pipeline has previously been applied successfully to speaker detection \cite{mesgarani_2006} and phoneme classification \cite{mesgarani_2011}.
The cortical representation captures the distinctive dynamics of phonemes, which are easily visually discernible in rategrams and scalegrams \cite{mesgarani_2011}.
The magnitude cortical response carries enough information to reconstruct the spectrogram that generated it \cite{chi_2005}.
Because of its ability to capture speech specific features, inverting the magnitude cortical response has been applied to speech enhancement since it has high fidelity in the reconstruction of speech and low fidelity in the reconstruction of background noise \cite{mesgarani_2007}.
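The rategram/scalegram reduction described above amounts to max pooling over one filter axis of the 4-D cortical tensor; a minimal numpy sketch (the axis layout is our assumption):

```python
import numpy as np

# hypothetical cortical tensor: (time, frequency, rate, scale)
cortical = np.abs(np.random.randn(100, 128, 4, 6))

rategram = cortical.max(axis=3)    # pool away scale -> (time, freq, rate)
scalegram = cortical.max(axis=2)   # pool away rate  -> (time, freq, scale)
print(rategram.shape, scalegram.shape)
```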
\subsection{Adversarial attack methods}
We attack wake-word networks $f: x \mapsto y\in \{-1,0,1\}$, which predict other speech, no speech, or wake-word at each time step.
In the targeted adversarial attack threat model, the goal is to find a small adversarial noise $\delta$ s.t. $f(x+\delta)=y'\neq f(x)$, where $y'$ is a target label.
That is, we minimize $l(f(x+\delta), y')$ s.t. $\norm{\delta}_\infty \leq \epsilon$, where $l$ is cross-entropy.
To solve this minimization problem, the Fast Gradient Sign Method (FGSM) finds the direction of the sign of the gradient and takes a corresponding step in this direction:
$x = x - \epsilon \ \mathrm{sign}(\nabla l(f(x),y'))$
\cite{fgsm}.
More advanced adversarial example generation algorithms have adopted an iterative scheme to solve the optimization task \cite{pgd,deepfool,cw,univadv}, including the projected gradient descent (PGD) attack, the DeepFool attack, and the Carlini-Wagner (CW) attack. The DeepFool attack iteratively makes a linear approximation of the decision boundary and then takes a step normal to the tangent plane \cite{deepfool}. The CW attack minimizes a single objective $l(f(x+\delta), y')+c\|\delta\|$, where the loss function $l$ has been carefully chosen \cite{cw}. The PGD attack projects the perturbed input back onto the $\epsilon$-ball to enforce the $\ell_\infty$ constraint. In our paper, we consider these three attack algorithms as ``strong'' attacks, and the goal is to show the efficacy of the cortical network in defending against these adversarial attacks.
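As a concrete illustration of the projection step, here is a minimal targeted PGD loop on a toy differentiable objective. The toy loss, `grad_fn`, and all hyperparameters are ours for illustration; the actual experiments use the ART implementations:

```python
import numpy as np

def pgd_attack(grad_fn, x, eps, step, iters):
    """Targeted PGD sketch: descend the target-loss gradient and
    project the perturbation back into the L-infinity eps-ball."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv - step * np.sign(grad_fn(x_adv))
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # projection onto the ball
    return x_adv

# toy quadratic target loss l(x) = ||x - x_target||^2 / 2, so grad = x - x_target
x_target = np.full(8, 0.5)
grad_fn = lambda z: z - x_target
x0 = np.zeros(8)
x_adv = pgd_attack(grad_fn, x0, eps=0.1, step=0.02, iters=50)
print(np.max(np.abs(x_adv - x0)))  # <= 0.1 by construction
```

With `iters=1` and `step=eps`, the same loop reduces to FGSM.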
\subsection{Wake-word detection}
A variety of networks have been published by team members at Amazon, and the time-delayed bottleneck highway network (TDB-HW) is the latest in the literature \cite{amazon_HW}.
We choose this network as a baseline because it is a larger and more powerful network than some others in the literature \cite{tinyml_from}.
It consists of a feature extractor stage and a classifier stage.
Before the feature extraction stage, auditory spectrogram (a log-mel filter bank spectrogram, abbreviated LFBE) features are fed into a two-layer FC prenet with dropout.
The feature extraction stage of TDB-HW consists of four highway blocks.
Then, a bottleneck with 20 left (200ms) and 10 right (100ms) contexts is applied.
Finally, six highway blocks and one FC layer are applied to provide a classification.
This is the network we refer to as the ``baseline''.
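The highway blocks mentioned above follow the standard highway-layer form $y = T(x)\odot H(x) + (1-T(x))\odot x$; a minimal numpy sketch (weights, activations, and gate bias here are illustrative, not the TDB-HW parameters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, Wh, bh, Wt, bt):
    # standard highway layer: gate T blends a transform H(x) with the input
    H = np.tanh(x @ Wh + bh)        # candidate transform
    T = sigmoid(x @ Wt + bt)        # transform gate in (0, 1)
    return T * H + (1.0 - T) * x    # carry the rest of x through unchanged

rng = np.random.default_rng(0)
d = 16
x = rng.standard_normal((4, d))
out = highway_layer(x,
                    rng.standard_normal((d, d)) * 0.1, np.zeros(d),
                    rng.standard_normal((d, d)) * 0.1, -2.0 * np.ones(d))
print(out.shape)  # (4, 16)
```

A negative gate bias (here $-2$) initializes the layer close to the identity, which is the usual trick for training deep highway stacks.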
\begin{figure}[t!]
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/univ_fgsm}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/univ_pgd}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/univ_deepfool}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/univ_carlini}}
\end{minipage}
\caption{
Three baseline and three cortical networks are attacked in four trials each with various SOTA attacks from IBM's ART toolbox \cite{ibmart}.
Each point in each figure corresponds to the hyperparameters for a single type of attack constrained to an $\ell_\infty$ ball.
Each point is the mean of 12 trials, showing the mean test mask accuracy for the best (minimum-accuracy) adversarial noise found during the attack trial on a different attack training set. One standard deviation is shaded.}
\label{fig:defense1}
\end{figure}
\begin{figure}[t!]
\begin{minipage}[b]{.32\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/hard_FA_MR}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.32\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/nirvana_FA_MR}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.32\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/nirvana_cm}}
\end{minipage}
\caption{False alarm and miss rates of the networks that we compare. One line is plotted for each of three training trials for each of the two architectures. Left: test set performance; Middle: performance with adversarial music \cite{adv_music}; Right: the AUC of the baseline networks decreases the most with adversarial music.}
\label{fig:perf}
\end{figure}
\section{Proposed Architecture}
For our cortical network, the input LFBE features are convolved with 48 complex $32\times32$ STRF filters from \cref{eq:strfs}. These 48 filters are shown in \cref{fig:cortical_net}, and have parameters which mimic the responses measured in auditory cortex neurons \cite{nsl_doc}. After applying an absolute value and dropout, max-pooling is applied along the rate and scale axes of the features to yield a scalegram and a rategram. These are concatenated, a 1x1 convolution is applied, and the original spectrogram with substantial dropout (0.9) is added to this 3D tensor. A small prenet is applied before the TDB-HW network. The residual addition is motivated by the loss of 2D high-frequency features by the STRFs, and is a way to recover some of that information.
The STRFs are not learned during training, so the total learnable parameters are similar between the two models.
\section{Experimental Results and Discussion}
\subsection{Methods}
We created several monaural 15-hour training datasets and 3-hour validation and test sets.
We create our train/val/test splits from different speakers of wake-words from the Kaggle Alexa dataset and speakers in the Librispeech dataset.
For additional background noise we use the kitchen, living room, laundry room, hallway, office, cafeteria, and cafe environments of the DEMAND dataset.
We extract words from our positive and negative datasets, pitch-shift and time-stretch these words, and mix them in a room, randomizing all these variables according to reasonable distributions.
We evaluate the merit of the cortical network as a defense to adversarial examples on Universal FGSM, PGD, Deepfool, and Carlini-Wagner attacks \cite{fgsm,pgd,deepfool,cw,univadv}. We use the standard implementation of these in the IBM Adversarial Robustness Toolbox (ART) \cite{ibmart}.
The FGSM, Deepfool, and Carlini-Wagner attacks were performed for 4000 iterations and evaluated every 400, PGD was performed for 250 examples for 100 iterations each, evaluated every 25 examples.
For each trial, the minimum accuracy noise was recorded on a test set, and the average minimum accuracy of 12 trials over the three networks is shown in \cref{fig:defense1}.
\begin{comment}
We developed an algorithm to extract separate words from speech, which we use to isolate single words from the Librispeech dataset for negative example words. We use this algorithm and fine tune it manually when necessary on the Alexa dataset to extract start/end times positive words (but keep surrounding audio including noise), and we use it unsupervised on Librispeech to extract negative words. Positive words are augmented randomly to have duration between 400ms and 900ms (drawn from a distribution fitted on the training speakers), and to have a base frequency drawn from $\mathcal{U}( [80,350] \cap [ f_{\mathrm{source}} - 60, f_{\mathrm{source}} + 60 ] )$ Hz using the high quality C++ sound library Rubberband \cite{rubberband}. Negative words have modified length multiplied by $\mathcal{U}([0.8,1.2])$, and modified pitch multiplied by $\mathcal{U}([0.8,1.4])$ using the same library. To form a single sound clip, words are randomly selected to be positive or negative, and arranged with silence ($\mathcal{U}(50,150)$ ms) to form a multi channel clip. Background noise form DEMAND is added as an additional channel. To form the single channel audio the channels are normalized to -15 dBFS and simulated in a bedroom or living room with absorption $\mathcal{U}([0.4,0.9])$ with random distances from the simulated microphone \cite{pyroom_acoustics}. Finally the single channel is normalized -15 dBFS and quantized to 16-bits.
\end{comment}
\begin{figure}[t!]
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/fig2noises_deepfool_005_base}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/fig2noises_deepfool_005_cort}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/fig2noises_carlini_01__base}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/fig2noises_carlini_01__cort}}
\end{minipage}
\caption{Qualitative differences between the adversarial attacks on baseline and cortical networks for the Deepfool and Carlini-Wagner attacks in the mel spectrogram domain. Each subfigure concatenates the best noise from 6 trials, and reports the SNR and accuracy on these 6 noises on the x-axis. The cortical network attacks appear to pulsate more in time while the baseline network attacks are more stationary. The cortical attacks also exhibit more diagonal lines at resonant frequencies.}
\label{fig:qual2}
\end{figure}
\begin{comment}
\medskip
\noindent
{\bf The word extraction algorithm.}
To extract words from audio in an unsupervised method on Librispeech we used the following algorithm.
Original audio was filtered for speech base frequencies $85-255$ Hz and normalized, then the energy of this signal was convolved with a 100ms Hann window at 25ms hop, giving an envelope for the signal's energy . Local min/max extrema were found, and segments were marked around triplets of low-high-low extrema points. Neighboring segments were joined if less than 350ms, and segments longer than 1.2s were discarded. Then words segments were sorted by the quietest silences preceding and following the word, and were extracted from the original unmodified audio.
\end{comment}
\subsection{Analysis}
The baseline and cortical networks were both able to train to similar competitive rates as seen in the Detection Error Tradeoff (DET) curves in \cref{fig:perf}.
These curves are specific to the dataset that we created, but for comparison,
the company PicoVoice has created a dataset in a similar manner to ours and tested the CMU Sphinx Open Source Toolkit PocketSphinx, the open source KITT.AI, and their own commercial product PicoVoice.
They advertise a miss rate of 6\% at 1 false alarm per 10 hours for their own product, 32\% for KITT.AI, and 48\% for PocketSphinx.
On our dataset our miss rate is 7\% at 1 false alarm per 10 hours for our baseline and 10\% for the cortical network.
However, this is an extreme mode of operation; at 1 false alarm per hour our miss rate is 5\% for both models.
The FGSM and PGD attacks have the slowest accuracy decay rate as the strength of the noise grows on the cortical network.
This is likely due to the more dispersed patterns of attacks that these two methods create.
Because of the STRFs' selectivity for changing phoneme dynamics \cite{mesgarani_2011}, the cortical network should be more invariant to stationary noises as is the human auditory cortex \cite{khalighinejad_2019}, and is only slowly impacted by this type of noise.
We see the noises generated by the Deepfool and Carlini-Wagner attacks are very textured in \cref{fig:qual2}.
For the same search within a given $\ell_\infty$ ball, the cortical network requires a more elaborate noise on the auditory spectrogram than the baseline network does (and still performs better at the same SNR).
The noise for the baseline is more stationary and has a higher impact on speaking frequencies.
However, none of the adversarial noises can be said to be imperceptible to the human ear: though speech is still clearly heard and understood by a human, the fricative-like quality of the adversarial noises is perceptible even at 25 dB SNR.
On the right-hand side of \cref{fig:perf} we create adversarial music attacks \cite{adv_music}. These are also defended against by the cortical representation, as seen by the greater decrease in AUC for the baseline. These sounds are much more subtle since they are hidden in guitar-like plucks.
\section{Conclusion \& Further Work}
We apply several white-box iterative optimization-based adversarial attacks to an implementation of Amazon Alexa's HW network, and a modified version of this network with an integrated cortical representation, and show that the cortical features help defend against universal adversarial examples.
At the same level of distortion, the adversarial noises found for the cortical network are always less effective for universal attacks.
Further work could start by refining the auditory preprocessing: a more faithful auditory spectrogram would simulate the transduction of the hair cells and the reduction of the lateral inhibitory network.
We also constrained our analysis to highly effective but not imperceptible adversarial noise. Further work could create more subtle noises that degrade performance gradually, enforcing imperceptibility with perceptual masking \cite{qin_2019}.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{Introduction}
Suppose that $x=\left( x_{n}\right) _{n=1}^{\infty }$ is an absolutely
summable sequence with infinitely many nonzero terms and let
\begin{equation*}
E( x) =\Big\{\sum_{n=1}^\infty\varepsilon
_{n}x_{n}:(\varepsilon_{n})_{n=1}^{\infty }\in \{ 0,1\}^{\mathbb{N}}\Big\}
\end{equation*}
denote the set of all subsums of the series $\sum_{n=1}^{\infty }x_{n},$
called \emph{the achievement set} (or \emph{a partial sumset}) of $x$. The
investigation of topological properties of achievement sets was initiated
almost one hundred years ago.\ In 1914 Soichi Kakeya \cite{K} presented the
following result:
\begin{theorem}[Kakeya]
\label{kakeya} For any sequence $x\in l_{1}\setminus c_{00}$
\begin{enumerate}
\item $E(x)$ is a perfect compact set.
\item If $|x_{n}|>\sum_{i>n}|x_{i}|$ for almost all $n$, then $E(x)$ is
homeomorphic to the ternary Cantor set.
\item If $|x_{n}|\leq \sum_{i>n}|x_{i}|$ for almost all $n$, then $E(x)$ is
a finite union of closed intervals. In the case of a non-increasing sequence
$x$, the last inequality is also necessary for $E(x)$ to be a finite union
of intervals.
\end{enumerate}
\end{theorem}
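Kakeya's dichotomy can be illustrated numerically: truncate the series, enumerate all subset sums, and check whether the largest gap can survive the addition of the tail. A rough sketch (the truncation depth and the conservative gap test are our choices):

```python
def subsums(terms):
    # all subset sums of a finite truncation of the series
    sums = {0.0}
    for t in terms:
        sums |= {s + t for s in sums}
    return sorted(sums)

def largest_gap(points):
    return max(b - a for a, b in zip(points, points[1:]))

n = 12
geom = lambda q: [q ** k for k in range(1, n + 1)]
tail = lambda q: q ** (n + 1) / (1 - q)   # sum of the omitted terms

# q = 1/3: each term exceeds the sum of all later terms, so gaps survive
# in the limit (a Cantor set); q = 1/2: the subsums fill out an interval.
for q in (1/3, 1/2):
    survives = largest_gap(subsums(geom(q))) > 2 * tail(q)  # conservative test
    print(q, survives)
```

For $q=1/3$ a gap of width about $1/6$ persists, while for $q=1/2$ every finite gap is swallowed by the tail, in accordance with statements (2) and (3) of the theorem.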
Moreover, Kakeya conjectured that $E(x)$ is either nowhere dense or a finite
union of intervals. Probably, the first counterexample to this conjecture
was given by Weinstein and Shapiro (\cite{WS}) and, independently, by Ferens
(\cite{F}). The simplest example was presented by Guthrie and Nymann \cite
{GN}: for the sequence $c=\big(\frac{5+(-1)^n}{4^n}\big)_{n=1}^\infty$, the
set $T=E(c)$ contains an interval but is not a finite union of intervals. In
the same paper they formulated the following theorem, finally proved in \cite
{NS2}:
\begin{theorem}
\label{Guthrie Nymann} For any sequence $x\in l_{1}\setminus c_{00}$, $E(x)$
is one of the following sets:
\begin{enumerate}
\item a finite union of closed intervals;
\item homeomorphic to the Cantor set;
\item homeomorphic to the set $T$.
\end{enumerate}
\end{theorem}
Note that the set $T=E(c)$ is homeomorphic to $C\cup \bigcup_{n=1}^{\infty
}S_{2n-1}$, where $S_{n}$ denotes the union of the $2^{n-1}$ open middle
thirds which are removed from $[0,1]$ at the $n$-th step in the construction
of the Cantor ternary set $C$. Such sets are called Cantorvals (to emphasize
their similarity to unions of intervals and to the Cantor set
simultaneously). Formally, a \emph{Cantorval} (more precisely, an $\mathcal{M
}$-Cantorval, see \cite{MO}) is a non-empty compact subset $S$ of the real
line such that $S$ is the closure of its interior, and both endpoints of any
non-degenerated component are accumulation points of one-point components of
$S$. A non-empty subset $C$ of the real line $\mathbb{R}$ will be called a
\emph{Cantor set} if it is compact, zero-dimensional, and has no isolated
points.
Let us observe that Theorem \ref{Guthrie Nymann} says that $l_{1}$ can be
divided into four sets: $c_{00}$ and the sets connected with cases (1), (2) and
(3). Some algebraic and topological properties of these sets have been
recently considered in \cite{BBGS}.
We will describe the sequences constructed by Weinstein and Shapiro, Ferens,
and Guthrie and Nymann using the notion of a multigeometric sequence. We call a
sequence \emph{multigeometric} if it is of the form
\begin{equation*}
(k_{0},k_{1},\dots ,k_{m},k_{0}q,k_{1}q,\dots
,k_{m}q,k_{0}q^{2},k_{1}q^{2},\dots ,k_{m}q^{2},k_{0}q^{3}\dots )
\end{equation*}
for some positive numbers $k_{0},\dots ,k_{m}$ and $q\in \left( 0,1\right)
.\ We will denote such a sequence by $(k_{0},k_{1},\dots ,k_{m};q)$. Keeping
in mind that the type of $E\left( x\right) $ is the same as $E\left( \alpha
x\right) $, for any $\alpha >0$, we can describe the Weinstein-Shapiro
sequence as
\begin{equation*}
a=(8,7,6,5,4;\tfrac{1}{10}),
\end{equation*}
the Ferens sequence as
\begin{equation*}
b=(7,6,5,4,3;\tfrac{2}{27})
\end{equation*}
and the Guthrie-Nymann sequence as
\begin{equation*}
c=(3,2;\tfrac{1}{4}).
\end{equation*}
Another interesting example of a sequence $d$ with $E(d)$ being a Cantorval
was presented by R. Jones in \cite{J}. The sequence is of the form
\begin{equation*}
d=(3,2,2,2;\tfrac{19}{109}).
\end{equation*}
In fact, Jones constructed continuum many sequences generating Cantorvals,
indexed by a parameter $q$, by proving that, for any positive number $q$
with
\begin{equation*}
\frac{1}{5}\leqslant \sum_{n=1}^{\infty }q^{n}<\frac{2}{9}
\end{equation*}
(i.e. $\frac{1}{6}\leqslant q<\frac{2}{11}$) the achievement set of the
sequence
\begin{equation*}
(3,2,2,2;q)
\end{equation*}
is a Cantorval.
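The equivalence of the two formulations of Jones's condition is a one-line geometric-series computation, $\sum_{n\geq 1}q^{n}=q/(1-q)$, which can be sanity-checked numerically:

```python
# sum_{n>=1} q^n = q / (1 - q); the bounds 1/5 and 2/9 correspond to
# q = 1/6 and q = 2/11 respectively
geom_sum = lambda q: q / (1 - q)
print(abs(geom_sum(1/6) - 1/5) < 1e-12,
      abs(geom_sum(2/11) - 2/9) < 1e-12)   # True True
```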
The structure of the achievement sets $E(x)$ for multigeometric sequences $x$
was studied in the paper \cite{BFS}, which contains a necessary condition
for the achievement set $E(x)$ to be an interval and sufficient conditions
for $E(x)$ to contain an interval or have Lebesgue measure zero. In the case
of a Guthrie-Nymann-Jones sequence
\begin{equation*}
x_{q}=(3,2,\dots ,2;q),
\end{equation*}
of rank $m$ (i.e., with $m$ repeated 2's), the set $E(x_{q})$ is an
interval if and only if $q\geqslant \frac{2}{2m+5}$, $E(x_{q})$ is a Cantor
set of measure zero if $q<\frac{1}{2m+2}$, and $E(x_{q})$ is a Cantorval if
$q\in \{\frac{1}{2m+2}\}\cup \big[\frac{1}{2m},\frac{2}{2m+5}\big)$. In this
paper we reveal some structural properties of the sets $E(x_{q})$ for $q$
belonging to the \textquotedblleft mysterious\textquotedblright\ interval $\big(\frac{1}{2m+2},\frac{1}{2m}\big)$. In particular, we shall show that for almost
all $q$ in this interval the set $E(x_{q})$ has positive Lebesgue measure
and there is a decreasing sequence $(q_{n})$ convergent to $\frac{1}{2m+2}$
for which $E(x_{q_{n}})$ is a Cantor set of zero Lebesgue measure. The above
description of the structure of $E(x_{q})$ can be presented as follows:
\setlength{\unitlength}{1mm}
\begin{picture}(96,25)(-25,0)
\put(3,12){\line(1,0){100}}
\put(2,6){0}
\put(13,16){$\mathcal{C}_0$}
\put(25,6){$\frac{1}{2m+2}$}
\put(25,16){$\MC$}
\put(38,16){$\lambda^+$}
\put(51,6){$\frac{1}{2m}$}
\put(61,16){$\MC$}
\put(75,6){$\frac{2}{2m+5}$}
\put(89,16){$\mathcal{I}$}
\put(102,6){$1$}
\put(3,11){\line(0,1){2}}
\put(103,11){\line(0,1){2}}
\put(28,12){\circle*{1}}
\put(29,12){\circle{1}}
\put(31,12){\circle{1}}
\put(35,12){\circle{1}}
\put(43,12){\circle{1}}
\put(53,12){\circle*{1}}
\put(78,12){\circle*{1.5}}
\end{picture}\newline
where $\mathcal{C}_{0}$ (resp. $\mathcal{MC}$, $\mathcal{I}$) indicates sets
of numbers $q$ for which the set $E(x_{q})$ is a Cantor set of zero Lebesgue
measure (resp. a Cantorval, an interval). The symbol $\lambda ^{+}$
indicates that for almost all $q$ in a given interval the sets $E(x_{q})$
have positive Lebesgue measure, which means that the set $Z=\{q\in \big(\frac{1}{2m+2},\frac{1}{2m}\big):\lambda (E(x_{q}))=0\}$ has Lebesgue
measure $\lambda (Z)=0$. We use similar diagrams later in this paper.
The achievement sets of multigeometric sequences are special cases of
self-similar sets of the form
\begin{equation*}
K(\Sigma;q)=\Big\{\sum_{n=0}^\infty a_nq^n:(a_n)_{n=0}^\infty\in\Sigma^\omega
\Big\}
\end{equation*}
where $\Sigma\subset\mathbb{R}$ is a set of real numbers and $q\in(0,1)$.
The set $K(\Sigma;q)$ is self-similar in the sense that $K(\Sigma;q)=
\Sigma+q\cdot K(\Sigma;q)$. Moreover, the set $K(\Sigma;q)$ can be found as
a unique compact solution $K\subset\mathbb{R}$ of the equation $K=\Sigma+qK$.
It follows that for a multigeometric sequence $x_{q}=(k_{0},\dots ,k_{m};q)$
the achievement set $E(x_{q})$ coincides with the self-similar set $K(\Sigma ;q)$
for the set
\begin{equation*}
\Sigma =\Big\{\sum_{n=0}^{m}k_{n}\varepsilon _{n}:(\varepsilon
_{n})_{n=0}^{m}\in \{0,1\}^{m+1}\Big\}
\end{equation*}
of all possible sums of the numbers $k_{0},\dots ,k_{m}$. This makes it
possible to apply the theory of self-similar sets developed in \cite{H},
\cite{Schief} and, first of all, in \cite{Fa} to the study of the achievement
sets $E(x_{q})$.
In this paper we shall describe some topological and measure properties of
the self-similar sets $K(\Sigma ;q)$ depending on the value of the
similarity ratio $q\in (0,1)$, and shall apply the obtained results to
establish topological and measure properties of achievement sets of
multigeometric progressions. To formulate the principal results we need to
introduce some number characteristics of compact subsets $A\subset \mathbb{R}
$.
Given a compact subset $A\subset \mathbb{R}$ containing more than one point,
let
\begin{equation*}
\mathrm{diam}\,A=\sup \{|a-b|:a,b\in A\}
\end{equation*}
be the diameter of $A$ and
\begin{equation*}
\delta (A)=\inf \{|a-b|:a,b\in A,\;a\neq b\}\mbox{ and }\Delta (A)=\sup
\{|a-b|:a,b\in A,\;(a,b)\cap A=\emptyset \}
\end{equation*}
be the smallest and largest gaps in $A$, respectively. Observe that $A$ is
an interval (equal to $[\min A,\max A]$) if and only if $\Delta (A)=0$.
Also put
\begin{equation*}
I(A)=\frac{\Delta (A)}{\Delta (A)+\mathrm{diam}\,A}\mbox{ \ \ and \ \ }
i(A)=\inf \{I(B):B\subset A,\;\;2\leq |B|<\omega \}.
\end{equation*}
In particular, given a finite subset $\Sigma \subset \mathbb{R}$ of
cardinality $|\Sigma |\geq 2$, we will write it as $\Sigma =\{\sigma
_{1},\dots ,\sigma _{s}\}$ for real numbers $\sigma _{1}<\dots <\sigma _{s}$. Then we have
\begin{equation*}
\mathrm{diam}(\Sigma )=\sigma _{s}-\sigma _{1},\;\;\delta (\Sigma
)=\min_{i<s}(\sigma _{i+1}-\sigma _{i}), \mbox{ \ and \ }\Delta (\Sigma
)=\max_{i<s}(\sigma _{i+1}-\sigma _{i}).
\end{equation*}
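For finite sets these characteristics are straightforward to compute by brute force; a small illustrative script (the example set is ours, not from the paper):

```python
from itertools import combinations

def diam(A):  return max(A) - min(A)
def delta(A): return min(b - a for a, b in zip(A, A[1:]))  # smallest gap (A sorted)
def Delta(A): return max(b - a for a, b in zip(A, A[1:]))  # largest gap (A sorted)
def I(A):     return Delta(A) / (Delta(A) + diam(A))

def i(A):
    # infimum of I(B) over subsets B of the finite set A with |B| >= 2
    return min(I(sorted(B))
               for r in range(2, len(A) + 1)
               for B in combinations(A, r))

Sigma = [0, 2, 3, 5]                  # illustrative finite set, sorted
print(diam(Sigma), delta(Sigma), Delta(Sigma), I(Sigma), i(Sigma))
print(I(list(range(6))))              # for {0,...,s-1}, I = 1/s
```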
\begin{theorem}
\label{main} Let $\Sigma=\{\sigma_1,\dots,\sigma_s\}$ for some real numbers
$\sigma_1<\dots<\sigma_s$. The self-similar sets $K(\Sigma;q)$ where
$q\in(0,1)$ have the following properties:
\begin{enumerate}
\item $K(\Sigma;q)$ is an interval if and only if $q\ge I(\Sigma)$;
\item $K(\Sigma;q)$ is not a finite union of intervals if $q<I(\Sigma)$ and
$\Delta(\Sigma)\in\{\sigma_2-\sigma_1,\sigma_s-\sigma_{s-1}\}$;
\item $K(\Sigma;q)$ contains an interval if $q\ge i(\Sigma)$;
\item If $d=\frac{\delta(\Sigma)}{\mathrm{diam}(\Sigma)}<\frac1{3+2\sqrt{2}}$
and $\frac1{|\Sigma|}<\frac{\sqrt{d}}{1+\sqrt{d}}$, then for almost all $q\in
\big(\frac1{|\Sigma|},\frac{\sqrt{d}}{1+\sqrt{d}}\big)$ the set $K(\Sigma;q)$
has positive Lebesgue measure and the set $K(\Sigma;\sqrt{q})$ contains an
interval;
\item $K(\Sigma;q)$ is a Cantor set of zero Lebesgue measure if
$q<\frac1{|\Sigma|}$ or, more generally, if $q^n<\frac1{|\Sigma_n|}$ for some
$n\in\mathbb{N}$, where $\Sigma_n=\big\{
\sum_{k=0}^{n-1}a_kq^k:(a_k)_{k=0}^{n-1}\in\Sigma^n\big\}$.
\item If $\Sigma\supset\{a,a+1,b+1,c+1,b+|\Sigma|,c+|\Sigma|\}$ for some
real numbers $a,b,c\in\mathbb{R}$ with $b\ne c$, then there is a strictly
decreasing sequence $(q_n)_{n\in\omega}$ with $\lim_{n\to\infty}q_n=\frac1{|\Sigma|}$ such that the sets $K(\Sigma;q_n)$ have Lebesgue measure zero.
\end{enumerate}
\end{theorem}
The statements (1)--(3) from this theorem will be proved in Section~\ref{s:int}, the statement (4) in Section~\ref{s:pos}, and (5),(6) in Section~\ref{s:null}. Writing that for almost all $q$ in an interval $(a,b)$ some
property $\mathcal{P}(q)$ holds, we have in mind that the set $Z=\{q\in (a,b):
\mathcal{P}(q)$ does not hold$\}$ has Lebesgue measure $\lambda (Z)=0$.
\section{Intervals and Cantorvals}
\label{s:int}
In this section we generalize results of \cite{BFS} detecting the
self-similar sets $K(\Sigma ;q)$ which are intervals or Cantorvals. In the
following theorem we prove the statements (1)--(3) of Theorem~\ref{main}.
\begin{theorem}
\label{th3} Let $q\in(0,1)$ and $\Sigma=\{\sigma_1,\dots,\sigma_s\}\subset
\mathbb{R}$ be a finite set with $\sigma_1<\dots<\sigma_s$. The self-similar
set $K(\Sigma;q)=\big\{\sum_{i=0}^{\infty }a_{i}q^{i}:(a_{i})_{i\in\omega}\in \Sigma ^{\omega}\big\}$
\begin{enumerate}
\item is an interval if and only if $q\ge I(\Sigma)$;
\item contains an interval if $q\ge i(\Sigma)$;
\item is not a finite union of intervals if $q<I(\Sigma)$ and
$\Delta(\Sigma)\in\{\sigma_2-\sigma_1,\sigma_s-\sigma_{s-1}\}$.
\end{enumerate}
\end{theorem}
\begin{proof}
1. Observe that $\mathrm{diam} K(\Sigma;q)=\mathrm{diam}(\Sigma)/(1-q)$.
Assuming that $q\geq I(\Sigma)=\Delta(\Sigma)/(\Delta(\Sigma)+\mathrm{diam}
\Sigma)$, we conclude that $\Delta(\Sigma)\le q\cdot \mathrm{diam}
(\Sigma)/(1-q)=q\cdot\mathrm{diam} K(\Sigma;q)$, which implies that
\begin{equation*}
\Delta(K(\Sigma;q))=\Delta(\Sigma+q\cdot K(\Sigma;q))\le \Delta(q\cdot
K(\Sigma;q))=q\cdot \Delta(K(\Sigma;q)).
\end{equation*}
Since $q<1$, this inequality is possible only in the case $\Delta(K(\Sigma;q))=0$, which means that $K(\Sigma;q)$ is an interval.
If $q<\Delta(\Sigma)/(\Delta(\Sigma)+\mathrm{diam}\Sigma)$, then
$\Delta(\Sigma)>q\cdot \mathrm{diam}(\Sigma)/(1-q)=q\cdot \mathrm{diam}
(K(\Sigma;q))$ and we can find two consecutive points $a<b$ in $\Sigma$ with
$b=a+\Delta(\Sigma)>a+\mathrm{diam}(q K(\Sigma;q))$ and conclude that
$[a,b]\cap K(\Sigma;q)=[a,b]\cap(\Sigma+qK(\Sigma;q))\subset [a,a+\mathrm{diam}(q\,K(\Sigma;q))]\ne [a,b]$, so $K( \Sigma ;q) $ is not an interval.
\smallskip
2. Now assume that $q\ge i(\Sigma)$ and find a subset $B\subset \Sigma$ such
that $I(B)=i(\Sigma)\le q$. By the preceding item, the self-similar set
$K(B;q)=B+q K(B;q)$ is an interval. Consequently, $K(\Sigma;q)$ contains the
interval $K(B;q)$. \smallskip
3. Finally assume that $\Delta(\Sigma)=\sigma_2-\sigma_1$ and $q<I(\Sigma)$.
Since for every $a\in\Sigma$ we get $K(\Sigma-a;q)=K(\Sigma;q)-\frac{a}{1-q}
, we can replace $\Sigma$ by its shift and assume that $\sigma_1=0$ and
hence $\Delta(\Sigma)=\sigma_2-\sigma_1=\sigma_2$. It follows from
$q<I(\Sigma)=\sigma_2/(\sigma_2+\mathrm{diam} \Sigma)$ that for any $j\in
\mathbb{N}$, the interval $\big( \sum_{n=j+1}^{\infty }q^{n}\sigma
_{s},q^{j}\sigma _{2}\big) $ is nonempty and disjoint from $K\left( \Sigma
;q\right) $. Hence, no interval of the form $\left[ 0,\varepsilon \right] $
is included in $K\left( \Sigma ;q\right) $. But $0\in K\left( \Sigma
;q\right) $, so $K\left( \Sigma ;q\right) $ is not a finite union of closed
intervals. By analogy we can consider the case $\Delta(\Sigma)=\sigma_s-
\sigma_{s-1}$.
\end{proof}
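The dichotomy of part 1 can be illustrated numerically on level-$n$ approximations of $K(\Sigma;q)$. The following sketch is ours (the helper names and the sample set $\Sigma=\{0,1,2\}$, for which $I(\Sigma)=\frac13$, are illustrative): a gap of the approximation larger than twice the tail bound $q^n\,\mathrm{diam}(\Sigma)/(1-q)$ certifies a gap of $K(\Sigma;q)$ itself.

```python
from itertools import product

def K_approx(sigma, q, n):
    """Level-n approximation of K(sigma;q): all sums sum_{i<n} a_i q^i."""
    return sorted({sum(a * q**i for i, a in enumerate(w))
                   for w in product(sigma, repeat=n)})

def max_gap(points):
    return max(b - a for a, b in zip(points, points[1:]))

sigma, n = (0, 1, 2), 10          # Delta(Sigma) = 1, diam(Sigma) = 2, I(Sigma) = 1/3
for q in (1/3, 0.3):
    pts = K_approx(sigma, q, n)
    tail = q**n * 2 / (1 - q)     # every point of K lies within `tail` of pts
    print(q, max_gap(pts) <= 2 * tail)   # True at q = 1/3, False at q = 0.3
```

At $q=I(\Sigma)=\frac13$ the approximation has no gap beyond the tail bound (the set is the interval $[0,3]$), while at $q=0.3<I(\Sigma)$ a macroscopic gap below $1$ appears, as in the proof above.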
In particular, Theorem \ref{th3} implies:
\begin{corollary}
For $\Sigma =\{0,1,2,\dots ,s-1\}$ the set $K\left( \Sigma ;q\right) $ is an
interval if and only if $q\geq I(\Sigma)=\frac{1}{|\Sigma|}$.
\end{corollary}
\begin{corollary}
If $\{k,k+1,\dots ,k+n-1\}\subset \Sigma $, then $i(\Sigma)\le \frac1n$ and
for every $q\ge \frac1n$ the set $K(\Sigma;q)$ contains an interval.
\end{corollary}
In particular, for the Guthrie-Nymann-Jones multigeometric sequence
$x_{q}=(3,2,\dots ,2;q)$ of rank $m$ the sumset $\Sigma =\{0,2,\dots
,2m+1,2m+3\}$ has cardinality $|\Sigma |=2m+2$, $I(\Sigma )=\frac{\Delta
(\Sigma )}{\Delta (\Sigma )+\mathrm{diam}\Sigma }=\frac{2}{2m+5}$, $i(\Sigma
)=\min \big\{ \frac{1}{2m},\frac{2}{2m+5}\big\} $, and $d=\frac{\delta
(\Sigma )}{\mathrm{diam}(\Sigma )}=\frac{1}{2m+3}$. So, for $q\in \big[\frac{
2}{2m+5},1\big)$ the set $E(x_{q})=K(\Sigma ;q)$ is an interval and for
$q\in \big[\frac{1}{2m},\frac{2}{2m+5}\big)$ a Cantorval.
\section{Sets of positive measure}
\label{s:pos}
In this section we shall prove the statement (4) of Theorem~\ref{main}
detecting numbers $q$ for which the self-similar set $K(\Sigma;q)$ has
positive Lebesgue measure $\lambda(K(\Sigma;q))$. For this we shall apply
the deep results of Boris Solomyak \cite{S} related to the distribution of
the random series $\sum_{n=0}^{\infty }a_n\lambda ^{n},$ where the
coefficients $a_n\in\Sigma$ are chosen independently with probability
$\frac{1}{|\Sigma|}$ each.
Given a finite subset $\Sigma\subset\mathbb{R}$ consider the number
\begin{equation*}
\alpha(\Sigma)=\inf\big\{x\in(0,1):\exists
(a_n)_{n\in\omega}\in(\Sigma-\Sigma)^\omega\setminus\{0\}^\omega \mbox{ such
that }\sum_{n=0}^\infty a_nx^n=0\mbox{ and }\sum_{n=1}^\infty na_nx^{n-1}=0
\big\}.
\end{equation*}
The first part of the following theorem was proved by Solomyak in
\cite[1.2]{S}:
\begin{theorem}
\label{soloma} Let $\Sigma\subset\mathbb{R}$ be a finite subset. If
$\frac1{|\Sigma|}<\alpha(\Sigma)$, then for almost all $q$ in the interval
$\big(\frac1{|\Sigma|},\alpha(\Sigma)\big)$ the self-similar set $K(\Sigma;q)$
has positive Lebesgue measure and the set $K(\Sigma;\sqrt{q})$ contains an
interval.
\end{theorem}
\begin{proof}
By Theorem 1.2 of \cite{S}, for almost all
$q\in\big(\frac1{|\Sigma|},\alpha(\Sigma)\big)$ the self-similar set
$K(\Sigma;q)$ has positive Lebesgue measure. Since
$K(\Sigma;\sqrt{q})=K(\Sigma;q)+\sqrt{q}\cdot K(\Sigma;q)$, the set
$K(\Sigma;\sqrt{q})$ contains an interval, being the sum of two sets of
positive Lebesgue measure (according to the famous Steinhaus Theorem \cite{St}).
\end{proof}
The definition of Solomyak's constant $\alpha(\Sigma)$ does not suggest any
efficient way of its calculation. In \cite{S} Solomyak found an efficient
lower bound on $\alpha(\Sigma)$ based on the notion of a $(*)$-function,
i.e., a function of the form
\begin{equation*}
g(x)=-\sum_{k=1}^{n-1}x^k+\gamma x^n+\sum_{k=n+1}^\infty x^k
\end{equation*}
for some $n\in\mathbb{N}$ and $\gamma\in[-1,1]$. In Lemma~3.1 \cite{S}
Solomyak proved that every $(*)$-function $g(x)$ has a unique critical point
on $[0,1)$ at which $g$ takes its minimal value. Moreover, for every $d>0$
there is a unique $(*)$-function $g_d(x)$ such that $\min_{[0,1)}g_d=-d$.
The unique critical point $x_d\in g_d^{-1}(-d)\cap[0,1)$ of $g_d$ will be
denoted by $\underline{\alpha}(d)$. The following lower bound on the number
$\alpha(\Sigma)$ follows from Proposition 3.2 and inequality (15) in \cite{S}.
\begin{lemma}
For every finite set $\Sigma\subset\mathbb{R}$ of cardinality $|\Sigma|\ge 2$
we get
\begin{equation*}
\alpha(\Sigma)\ge\underline{\alpha}(d)\mbox{ \ where \ }
d=\frac{\delta(\Sigma)}{\mathrm{diam}(\Sigma)}.
\end{equation*}
\end{lemma}
The function $\underline{\alpha}(d)$ can be calculated effectively (at least
for $d\le\frac12$).
\begin{lemma}
\label{bound} If $0<d\le\frac1{3+2\sqrt{2}}$, then
\begin{equation*}
\underline{\alpha}(d)=\frac{\sqrt{d}}{1+\sqrt{d}}.
\end{equation*}
\end{lemma}
\begin{proof}
Observe that the minimal value of the $(*)$-function
$g(x)=-x+\sum_{k=2}^\infty x^k=-x+\frac{x^2}{1-x}$ is equal to
$-\frac1{3+2\sqrt{2}}$, which
implies that for $d\in \big(0,\frac1{3+2\sqrt{2}}\big]$ the number
$\underline{\alpha}(d)$ is equal to the critical point of the unique $(*)
-function $g(x)=\gamma x+\sum_{k=2}^\infty x^k=-1+(\gamma-1)x+\frac1{1-x}$
with $\min_{[0,1)}g=-d$. This $(*)$-function has derivative
$g^{\prime}(x)=(\gamma-1)+\frac1{(1-x)^2}$. If $x$ is the critical point of
$g$, then $1-\gamma=\frac1{(1-x)^2}$ and the equality
\begin{equation*}
-d=-1+(\gamma-1)x+\frac1{1-x}=-1-\frac{x}{(1-x)^2}+\frac1{1-x}
\end{equation*}
has the solution
\begin{equation*}
x=1-\frac1{1+\sqrt{d}}=\frac{\sqrt{d}}{1+\sqrt{d}}
\end{equation*}
which is equal to $\underline{\alpha}(d)$.
\end{proof}
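The closed form can be cross-checked numerically by minimizing the $n=1$ $(*)$-function on a grid (a sketch, with an illustrative sample value $d=0.1$ below $\frac1{3+2\sqrt2}\approx0.17157$; the variable names are ours):

```python
d = 0.1                                    # sample value below 1/(3+2*sqrt(2))
x0 = d**0.5 / (1 + d**0.5)                 # claimed critical point
gamma = 1 - 1 / (1 - x0)**2                # gamma fixed by g'(x0) = 0

def g(x):                                  # the n = 1 (*)-function
    return gamma * x + x**2 / (1 - x)

xs = [k / 10**5 for k in range(99000)]     # grid on [0, 0.99)
vals = [g(x) for x in xs]
m_val = min(vals)
print(m_val, xs[vals.index(m_val)])        # close to (-d, x0)
```

The grid minimum reproduces $\min_{[0,1)}g=-d$ and is attained at the claimed point $\sqrt{d}/(1+\sqrt{d})$.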
For $d>\frac1{3+2\sqrt{2}}$ the formula for $\underline{\alpha}(d)$ is more
complex.
\begin{lemma}
If $\frac1{3+2\sqrt{2}}\le d\le \frac12$, then the value
\begin{equation*}
\underline{\alpha}(d)=\frac{1+d}3+\frac{\sqrt[3]{2}\cdot R}{6}+\frac{
2d^2-8d-1}{3\sqrt[3]{2}\cdot R}
\end{equation*}
where
\begin{equation*}
R=\sqrt[3]{4d^3-24d^2+21d-5+3\sqrt{3}\sqrt{1-8d^3+39d^2-6d}}
\end{equation*}
can be found as the unique real solution of the cubic equation
\begin{equation*}
2(x-1)^3+(4-2d)(x-1)^2+3(x-1)+1=0.
\end{equation*}
\end{lemma}
\begin{proof}
Since the minimal values of the $(*)$-functions $g_1(x)=-x+\sum_{k=2}^\infty
x^k$ and $g(x)=-x-x^2+\sum_{k=3}^\infty x^k$ are equal to
$-\frac1{3+2\sqrt{2}}$ and $-\frac12$, respectively, for
$d\in\big[\frac1{3+2\sqrt{2}},\frac12\big]$ the number
$\underline{\alpha}(d)$ is equal to the critical point of
a unique $(*)$-function
\begin{equation*}
g(x)=-x+\gamma x^2+\sum_{k=3}^\infty x^k=-1-2x+(\gamma-1)x^2+\frac1{1-x}
\end{equation*}
with $\min_{[0,1)}g=-d$. At the critical point $x$ the derivative of $g$
equals zero:
\begin{equation*}
0=g^{\prime}(x)=-2+2(\gamma-1)x+\frac1{(1-x)^2}
\end{equation*}
which implies that
\begin{equation*}
\gamma-1=\frac1{2x}\Big(2-\frac1{(1-x)^2}\Big)=\frac{2x^2-4x+1}{2x(1-x)^2}.
\end{equation*}
After substituting $\gamma-1$ into the formula for the function $g(x)$, we
get
\begin{equation*}
-d=-1-2x+\frac{2x^3-4x^2+x}{2(1-x)^2}+\frac1{1-x}.
\end{equation*}
This equation is equivalent to the cubic equation
\begin{equation*}
2(x-1)^3+(4-2d)(x-1)^2+3(x-1)+1=0.
\end{equation*}
Solving this equation with the Cardano formulas we can get the solution
$\underline{\alpha}(d)$ written in the lemma.
\end{proof}
\begin{remark}
Calculating the value $\underline{\alpha}(d)$ for some concrete numbers $d$,
we get
\begin{equation*}
\underline{\alpha}(\tfrac15)\approx 0.32482,\;\;
\underline{\alpha}(\tfrac14)\approx 0.37097,\;\;
\underline{\alpha}(\tfrac13)\approx 0.42773,\;\;
\underline{\alpha}(\tfrac12)=0.5 .
\end{equation*}
\end{remark}
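The tabulated values are easy to reproduce without the Cardano formulas, by solving the cubic from the preceding lemma with bisection (a sketch; the function name is ours; note $f(-1)=-2d<0<1=f(0)$, so the root lies in $(-1,0)$ in the variable $y=x-1$):

```python
def cubic_root(d):
    """Unique real root x of 2(x-1)^3 + (4-2d)(x-1)^2 + 3(x-1) + 1 = 0."""
    f = lambda y: 2 * y**3 + (4 - 2 * d) * y**2 + 3 * y + 1
    lo, hi = -1.0, 0.0                     # f(lo) < 0 < f(hi)
    for _ in range(80):                    # bisection in y = x - 1
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 1 + (lo + hi) / 2

for d in (1/5, 1/4, 1/3, 1/2):
    print(d, cubic_root(d))
```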
Theorem~\ref{soloma} and Lemma~\ref{bound} imply:
\begin{corollary}
\label{cor8} Let $\Sigma\subset\mathbb{R}$ be a finite subset containing
more than three points and $d=\delta(\Sigma)/\mathrm{diam}(\Sigma)$. If
$d\le \frac1{3+2\sqrt{2}}$ and $\frac{\sqrt{d}}{1+\sqrt{d}}>\frac1{|\Sigma|}$,
then for almost all $q$ in the interval
$\big(\frac1{|\Sigma|},\frac{\sqrt{d}}{1+\sqrt{d}}\big)$ the self-similar set
$K(\Sigma;q)$ has positive Lebesgue measure and the set $K(\Sigma;\sqrt{q})$
contains an interval.
\end{corollary}
\begin{remark}
Theorem~\ref{th3} says that for $q\in \lbrack i(\Sigma ),1)$ the set
$K(\Sigma ;q)$ contains an interval. By Theorem~\ref{soloma} under certain
conditions the same is true for almost all $q\in \big[\frac{1}{\sqrt{|\Sigma
|}},\sqrt{\alpha (\Sigma )}\big)$. Let us remark that the numbers $i(\Sigma
) $ and $\frac{1}{\sqrt{|\Sigma |}}$ are incomparable in general. Indeed,
for the multigeometric sequence $(1,\dots ,1;q)$ containing $k>1$ units the
set $\Sigma =\{0,\dots ,k\}$ has
\begin{equation*}
i(\Sigma )=I(\Sigma )=\frac{1}{k+1}=\frac{1}{|\Sigma |}<\frac{1}{\sqrt{|\Sigma |}}.
\end{equation*}
On the other hand, for the multigeometric sequence $(3^{k-1},3^{k-2},\dots
,3,1;q)$ the set $\Sigma =\{\sum_{n=0}^{k-1}3^{n}\varepsilon
_{n}:(\varepsilon _{n})_{n<k}\in \{0,1\}^{k}\}$ has cardinality $|\Sigma
|=2^{k}$, diameter $\mathrm{diam}(\Sigma )=(3^{k}-1)/2$, $d=\frac{\delta
(\Sigma )}{\mathrm{diam}(\Sigma )}=\frac{2}{3^{k}-1}$ and $i(\Sigma
)=I(\Sigma )=\frac{1}{4}+\frac{1}{4\cdot 3^{k-1}}>\frac{1}{\sqrt{2}^{k}}=
\frac{1}{\sqrt{|\Sigma |}}$. Corollary~\ref{cor8} guarantees that for almost
all $q\in \big(\frac{1}{\sqrt{2}^{k}},\frac{\sqrt[4]{d}}{\sqrt{1+\sqrt{d}}}
\big)$ the set $K(\Sigma ;q)$ contains an interval.
\end{remark}
Multigeometric sequences of the form
\begin{equation*}
\left( k+m,\dots ,k+1,k;q\right)
\end{equation*}
with $m\geq k$ we will call, after \cite{P-W}, \emph{Ferens-like sequences}.
The achievement set $E\left( x\right) $\ for a Ferens-like sequence
coincides with the self-similar set $K(\Sigma ;q)$\ for the set
\begin{equation*}
\Sigma =\{0,k,k+1,\dots ,n-k,n\}\text{.}
\end{equation*}
where $n=(m+1)(2k+m)/2$.\ Sets $K\left( \Sigma ;q\right) $ with $\Sigma $ of
this form will be called \emph{Ferens-like fractals}.
Note that the Guthrie-Nymann-Jones sequence of rank $m$ generates a Ferens-like
fractal (with $\Sigma =\{0,2,3,\dots ,2m+1,2m+3\}$). There are also
Ferens-like fractals which are not originated by any multigeometric sequence
(for example $K(\Sigma ;q)$ with $\Sigma =\left\{ 0,4,5,6,7,11\right\} $).
However, as an easy consequence of the main theorem of \cite{NS}, we obtain
for Ferens-like fractals \textquotedblleft trichotomy" analogous to that
formulated in Theorem \ref{Guthrie Nymann}. Moreover, some theorems
formulated for multigeometric sequences are in fact proved for $K(\Sigma ;q)$
(see for example Theorem 2 in \cite{BFS}).
\begin{example}
\label{ex9} For the Ferens-like sequence $x_{q}=(4,3,2;q)$ we get $\Sigma
=\{0,2,3,4,5,6,7,9\}$,
\begin{equation*}
d=\frac{\delta (\Sigma )}{\mathrm{diam}(\Sigma )}=\frac{1}{9}<\frac{1}{3+2\sqrt{2}}\mbox{ \ and \ }\frac{\sqrt{d}}{1+\sqrt{d}}=\frac{1}{4}>\frac{1}{6}=i(\Sigma ).
\end{equation*}
By Corollary~\ref{cor8} (and Theorem~\ref{th3}), for almost all numbers
$q\in \big(\frac{1}{8},1\big)$ the achievement set $E(x_{q})=K(\Sigma ;q)$
has positive Lebesgue measure (for $q<\frac{2}{11}=I(\Sigma )$ it is not a
finite union of intervals). By Theorem~\ref{th3}, for any $q\in \lbrack
i(\Sigma ),I(\Sigma ))=[\frac{1}{6},\frac{2}{11})$ the set $K(\Sigma ;q)$ is
a Cantorval. The structure of the sets $E(x_{q})=K(\Sigma ;q)$ is described
in the diagram:
\setlength{\unitlength}{1mm}
\begin{picture}(56,23)(-30,0)
\put(0,12){\vector(1,0){74}}
\put(5,15){$\mathcal{C}_0$}
\put(12,7){$\frac{1}{8}$}
\put(21,15){$\lambda^+$}
\put(32,7){$\frac16$}
\put(40,15){$\MC$}
\put(51,7){$\frac2{11}$}
\put(63,15){$\mathcal{I}$}
\multiput(13,12)(20,0){3}{\circle*{1}}
\end{picture}\smallskip
More generally, for any Ferens-like fractal, $|\Sigma |=n-2k+3$, $\Delta
(\Sigma )=k$, $\delta \left( \Sigma \right) =1$, $I(\Sigma )=\frac{k}{n+k}$,
$i(\Sigma )=\min \big(\frac{1}{\left\vert \Sigma \right\vert -2},I(\Sigma )\big)$
and $d=\frac{1}{n}$. Moreover, if $n\geq 7$ then
$\underline{\alpha}(d)=\frac{1}{\sqrt{n}+1}$. Therefore, one can check that for any Ferens-like
sequence we have $\underline{\alpha }(d)>i(\Sigma )$, and we can draw an
analogous diagram. The same result can be obtained for any Ferens-like fractal
with $k=2$ (even if it is not originated by any Ferens-like sequence).
However, there are Ferens-like fractals with $\underline{\alpha}
(d)<i(\Sigma )$ (for example $K(\Sigma ;q)$ with $\Sigma =\left\{
0,3,4,7\right\} $ or $\Sigma =\left\{ 0,4,5,6,7,11\right\} $).
\end{example}
\begin{example}
\label{ex9a} For the Guthrie-Nymann-Jones sequence $x_{q}=(3,2,\dots ,2;q)$
of rank $m\geq 2$ we get $\Sigma =\{0,2,3,\dots ,2m+1,2m+3\}$, $|\Sigma
|=2m+2$, $I(\Sigma )=\frac{2}{2m+5}$, $i(\Sigma)=\min \big\{\frac{1}{2m},
\frac{2}{2m+5}\big\}$, $d=\frac{1}{2m+3}$ and $\underline{\alpha }(d)=1/(1+
\sqrt{2m+3})$. Moreover, we have $d<\frac{1}{3+2\sqrt{2}}$ and
$\underline{\alpha }(d)\geq i(\Sigma )>\frac{1}{2m+2}=\frac{1}{|\Sigma |}$. So, we can
apply Corollary~\ref{cor8} and conclude that for almost all numbers $q\in
\big(\frac{1}{2m+2},\frac{1}{2m}\big)$ the self-similar set $K(\Sigma ;q)$
has positive Lebesgue measure. By Theorem~\ref{th3}, for any $q\in \lbrack
i(\Sigma ),\frac{2}{2m+5})$ the set $K(\Sigma;q)$ is a Cantorval and for all
$q\in \lbrack \frac{2}{2m+5},1)$ it is an interval.\newline
For $m=1$ we obtain $\underline{\alpha }(d)=\underline{\alpha }(\frac{1}{5})>
\frac{2}{7}$. Therefore, for almost all numbers $q\in \big(\frac{1}{4},\frac{2}{7}\big)$ the set $K(\Sigma ;q)$ has positive Lebesgue measure.
\end{example}
\section{Self-similar sets of zero Lebesgue measure}
\label{s:null}
The results of the preceding section yield conditions under which for
almost all $q$ in an interval $\big[\frac{1}{|\Sigma|},\alpha(\Sigma)\big)$
the set $K( \Sigma ;q) $ has positive Lebesgue measure. In this section we
shall show that this interval can contain infinitely many numbers $q$ with
$\lambda (K(\Sigma ;q)) =0$, thus proving the statements (5) and (6) of
Theorem~\ref{main}.
\begin{theorem}
\label{th9}If there exists $n\in \mathbb{N}$ such that
\begin{equation*}
\Big\vert \sum_{i=0}^{n-1}q^{i}\Sigma \Big\vert \cdot q^{n}<1
\end{equation*}
then the set $K(\Sigma ,q)$ has measure zero.
\end{theorem}
\begin{proof}
Denote $K:=K(\Sigma ,q)$. From the equality $K=\Sigma +qK$ we obtain, by
induction, that
\begin{equation*}
K=\sum_{i=0}^{n-1}q^{i}\Sigma +q^{n}K.
\end{equation*}
Let $\Sigma _{n}=\sum_{i=0}^{n-1}q^{i}\Sigma $. If $|\Sigma _{n}|\cdot
q^{n}<1$, then
\begin{equation*}
\lambda (K)\leq |\Sigma _{n}|\cdot q^{n}\cdot \lambda (K)<1\cdot \lambda (K)
\end{equation*}
which is possible only if $\lambda (K)=0$.
\end{proof}
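The criterion of the theorem is directly machine-checkable with exact rational arithmetic (a sketch; the function name is ours, and the two sample inputs are classical illustrations, not taken from the paper):

```python
from fractions import Fraction

def zero_measure_witness(Sigma, q, n_max=8):
    """Smallest n with |Sigma_n| * q^n < 1 (the criterion above), else None."""
    Sigma_n = {Fraction(0)}
    for n in range(1, n_max + 1):
        # Sigma_n = Sigma + q*Sigma + ... + q^(n-1)*Sigma, built incrementally
        Sigma_n = {s + q**(n - 1) * a for s in Sigma_n for a in Sigma}
        if len(Sigma_n) * q**n < 1:
            return n
    return None

print(zero_measure_witness([0, 2], Fraction(1, 3)))  # -> 1 (middle-thirds Cantor set)
print(zero_measure_witness([0, 1], Fraction(1, 2)))  # -> None (here K is an interval)
```

For the middle-thirds Cantor set the criterion already fires at $n=1$, while for $\Sigma=\{0,1\}$, $q=\frac12$ one has $|\Sigma_n|\cdot q^n=1$ for all $n$, so no witness exists.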
To use the latter theorem we need a technical lemma:
\begin{lemma}
\label{l2}For any integer numbers $s>1$ and $n>1$ the unique positive
solution $q$ of the equation
\begin{equation}
x+x^{2}+\dots+x^{n-1}=\frac{1}{s-1} \label{1}
\end{equation}
is greater than $\frac{1}{s}$. Moreover, there is $n_{0}\in\mathbb{N}$ such
that for any $n>n_{0}$\
\begin{equation}
\left( s^{n}-2^{n-1}\right) \cdot q^{n}<1\text{.} \label{2}
\end{equation}
\begin{proof}
Clearly
\begin{equation*}
\sum_{i=1}^{n-1}\left( \frac{1}{s}\right) ^{i}=\frac{1}{s-1}\cdot \left( 1-
\frac{1}{s^{n-1}}\right) <\frac{1}{s-1}\text{,}
\end{equation*}
so $q>\frac{1}{s}$. From the equality
\begin{equation*}
\frac{1}{s-1}=\sum_{i=1}^{n-2}\left( \frac{1}{s}\right) ^{i}+\frac{1}{\left(
s-1\right) s^{n-2}}
\end{equation*}
we obtain
\begin{equation*}
q^{n-1}=\frac{1}{s-1}-\sum_{i=1}^{n-2}q^{i}<\frac{1}{s-1}-\sum_{i=1}^{n-2}
\left( \frac{1}{s}\right) ^{i}=\frac{1}{\left( s-1\right) s^{n-2}}\text{.}
\end{equation*}
Using the latter inequality and the equality
\begin{equation*}
\frac{1}{s-1}=\frac{q-q^{n}}{1-q}
\end{equation*}
we have
\begin{equation*}
\frac{1-q}{s-1}=q\left( 1-q^{n-1}\right) >q\left( 1-\frac{1}{\left(
s-1\right) s^{n-2}}\right) \text{.}
\end{equation*}
Therefore
\begin{equation*}
1-q>\left( s-1\right) q-\frac{q}{s^{n-2}}
\end{equation*}
(which means that $sq-\frac{q}{s^{n-2}}<1$)\ and finally
\begin{equation}
q<\frac{1}{s\left( 1-\frac{1}{s^{n-1}}\right) }\text{.} \label{3}
\end{equation}
From Bernoulli's inequality it follows that
\begin{equation*}
\left( 1-\frac{1}{s^{n-1}}\right) ^{n}\geq 1-\frac{n}{s^{n-1}}
\end{equation*}
and, by (\ref{3}), we have
\begin{equation*}
q^{n}<\frac{1}{s^{n}\cdot \left( 1-\frac{n}{s^{n-1}}\right) }\text{.}
\end{equation*}
Consequently
\begin{equation*}
\left( s^{n}-2^{n-1}\right) \cdot q^{n}<\frac{s^{n}\cdot \left( 1-\frac{2^{n-1}}{s^{n}}\right) }{s^{n}\cdot \left( 1-\frac{n}{s^{n-1}}\right) }
\end{equation*}
Obviously, for $n$ greater than some $n_{0}$
\begin{equation*}
\frac{2^{n-1}}{s}>n
\end{equation*}
and hence
\begin{equation*}
\frac{2^{n-1}}{s^{n}}>\frac{n}{s^{n-1}}
\end{equation*}
which proves (\ref{2}).
\end{proof}
\end{lemma}
\begin{theorem}
\label{Th2} If a finite subset $\Sigma\subset\mathbb{R}$ contains the set
$\{a,a+1,b+1,c+1,b+|\Sigma|,c+|\Sigma|\}$ for some real numbers $a,b,c$ with
$b\ne c$, then there is a decreasing sequence $(q_{n})_{n=1}^{\infty }$
tending to $\frac{1}{|\Sigma|}$\ such that, for any $n\in\mathbb{N}$, the
self-similar set $K(\Sigma,q_{n})$ has Lebesgue measure zero.
\end{theorem}
\begin{proof}
Let $s=|\Sigma|$ and for every $n$ denote by $q_{n}$ the unique positive
solution of the equation (\ref{1}) from Lemma \ref{l2}. Let $n_{0}$ be a
natural number such that
\begin{equation*}
\left( s^{n}-2^{n-1}\right) \cdot \left( q_{n}\right) ^{n}<1
\end{equation*}
for any $n>n_{0}$. Clearly $(q_{n}) _{n=n_{0}}^{\infty }$ is a decreasing
sequence and $\lim_{n\rightarrow \infty }q_{n}=\frac{1}{s}$. It suffices to
show that $K(\Sigma ,q_{n})$ has measure zero for $n>n_{0}$.\newline
Taking into account that each $q_{n}$ is a solution of (\ref{1}), we
conclude that
\begin{equation*}
a+\sum_{i=1}^{n-1}(s-1+\varepsilon _{i})( q_{n})
^{i}=(a+1)+\sum_{i=1}^{n-1}\varepsilon _{i}( q_{n}) ^{i}
\end{equation*}
for any $\varepsilon _{i}\in \{b+1,c+1\}\subset\Sigma$. Therefore
\begin{equation*}
\left\vert \sum_{i=0}^{n-1}\left( q_{n}\right) ^{i}\Sigma \right\vert \leq
s^{n}-2^{n-1}\text{.}
\end{equation*}
Hence, by Lemma \ref{l2},
\begin{equation*}
\left\vert \sum_{i=1}^{n-1}\left( q_{n}\right) ^{i}\Sigma \right\vert \cdot
\left( q_{n}\right) ^{n}<1.
\end{equation*
and we can apply Theorem \ref{th9} to conclude that $K(\Sigma ,q_{n})$ has
Lebesgue measure zero.
\end{proof}
The condition
\begin{equation}
\big\{a,a+1,b+1,c+1,b+|\Sigma|,c+|\Sigma|\big\} \subset \Sigma \tag{$*$}
\label{gw}
\end{equation}
looks a bit artificial but it can be easily verified for many sumsets
$\Sigma $ of multigeometric sequences.
In particular, for the Guthrie-Nymann-Jones sequence of rank $m\geq 1$
\begin{equation*}
x_{q}=(3,2,\dots ,2;q),
\end{equation*}
the sumset $\Sigma =\{0,2,3,\dots ,2m+1,2m+3\}$ has cardinality $|\Sigma
|=2m+2$. Observe that for the set $\Sigma $ the condition $(\ast )$ holds
for $a=2$, $b=1$ and $c=-1$. Because of that Theorem~\ref{Th2} yields a
sequence $(q_{n})_{n=1}^{\infty }\searrow \frac{1}{2m+2}$ such that for
every $n\in \mathbb{N}$ the self-similar set $E(x_{q_{n}})$ is a Cantor set
of zero Lebesgue measure.
By \cite{BFS}, for $q=\frac{1}{2m+2}$ the achievement set $E(x_{q})$ is a
Cantorval. Therefore, if $m>2$, there are three ratios $p<q<r$ such that
$E(x_{p})$ and $E(x_{r})$ are Cantor sets while $E(x_{q})$ is a Cantorval. To
the best of our knowledge this is the first result of this type for
multigeometric sequences.
Now we will focus on Ferens-like sequences $x_{q}=(m+k,\dots ,k;q)$ where
$m\geq k$. \smallskip
For $k=1$ the Ferens-like sequence $x_{q}=(m+1,\dots ,2,1;q)$ has
\begin{equation*}
\Sigma =\big\{0,1,2,\dots ,(m+2)\left( m+1\right) /2\big\}\text{.}
\end{equation*}
The set $E(x_{q})$ is a Cantor set (for $q<\frac{1}{|\Sigma |}$) or an
interval (for $q\geq \frac{1}{|\Sigma |}$); see Theorem 7 in \cite{BFS},
Theorem \ref{kakeya} or Theorem~\ref{th3}. \smallskip
For $k=2$, the \textquotedblleft shortest" Ferens-like sequence is
$x_{q}=(4,3,2;q)$. For this sequence
\begin{equation*}
\Sigma =\left\{ 0,2,3,4,5,6,7,9\right\} \text{.}
\end{equation*}
Note that the Guthrie-Nymann-Jones sequence $(3,2,2,2;q)$ has the same sumset
$\Sigma $ (see Example \ref{ex9a}). It follows that $E(x_{q})$ is a Cantor set
for $q\in \big(0,\frac{1}{8}\big)$ and $E(x_{q})$ is a Cantorval for
$q=\frac{1}{8}$. By Theorem~\ref{th3}, $K(\Sigma ;q)$ is an interval for
$q\geq I(\Sigma )=\frac{2}{11}$ and a Cantorval for
$q\in \big[\frac{1}{6},\frac{2}{11}\big)$. As shown in Example~\ref{ex9a},
for almost all $q\in \big(\frac{1}{8},\frac{1}{6}\big)$ the set
$K(\Sigma ;q)$ has positive Lebesgue measure.
Using Theorem~\ref{Th2}, we can find a decreasing sequence $(q_{n})$ tending
to $\frac{1}{8}$ for which the sets $K(\Sigma ;q_{n})$ have zero Lebesgue
measure.
\smallskip
For $k=3$ the \textquotedblleft shortest" Ferens-like sequence is
$x_{q}=(6,5,4,3;q)$. For this sequence
\begin{equation*}
\Sigma =\left\{ 0,3,\dots ,15,18\right\}
\end{equation*}
and $|\Sigma |=15$. Since $1\in \frac{1}{15}\Sigma $, the set $\Sigma
_{2}=\Sigma +\frac{1}{15}\Sigma $ has less than $15^{2}$ elements (for
example, $4$ can be represented as $4+0$ or as $3+1$).
Therefore $\frac{1}{15^{2}}|\Sigma _{2}|<1$ and for $q=\frac{1}{15}$ the set
$E(x_{q})$ is a Cantor set according to Theorem \ref{th9}. Moreover,
calculating for $q=\frac{1}{14}>\frac{1}{15}$ the cardinality
\begin{equation*}
|\Sigma_3|=|\Sigma +q\Sigma +q^{2}\Sigma |=2655<14^{3}
\end{equation*}
and applying Theorem \ref{th9}, we conclude that the achievement set
$E(x_{q})$ is a Cantor set of zero Lebesgue measure for $q=\frac{1}{14}$. On
the other hand, Corollary~\ref{cor8} implies that for almost all
$q\in \big(\frac{1}{15},\frac{1}{1+\sqrt{18}}\big)$ the achievement set
$E(x_{q})$ has positive Lebesgue measure. The set $\Sigma $ has
$i(\Sigma )=\frac{1}{13}$ and $I(\Sigma )=\frac{3}{21}=\frac{1}{7}$. So, in
this case we have the
diagram:
\setlength{\unitlength}{1mm}
\begin{picture}(56,23)(-20,0)
\put(-7,12){\line(1,0){120}}
\put(2,15){$\mathcal{C}_0$}
\put(11.5,7){$\frac{1}{15}$}
\put(21,15){$\lambda^+$}
\put(41,15){$\lambda^+$}
\put(32,15){$\mathcal{C}_0$}
\put(31,7){$\frac{1}{14}$}
\put(51,7){$\frac{1}{13}$}
\put(60,15){$\MC$}
\put(72,7){$\frac17$}
\put(92,15){$\mathcal{I}$}
\put(13,12){\circle{1}}
\put(-7.7,7){$0$}
\put(33,12){\circle{1}}
\put(53,12){\circle*{1}}
\put(73,12){\circle*{1.5}}
\put(-7,11){\line(0,1){2}}
\put(113,11){\line(0,1){2}}
\put(112.3,7){$1$}
\end{picture}\newline
As in the previous case, we can use Theorem~\ref{Th2} (taking $a=b=3$ and
$c=-1$) and find a decreasing sequence $(q_{n})$ tending to $\frac{1}{15}$
such that all $E(x_{q_{n}})$ have zero Lebesgue measure.
\smallskip
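The cardinalities used above for the sumset of $(6,5,4,3;q)$ can be reproduced with exact rational arithmetic (a sketch; the function name is ours):

```python
from fractions import Fraction

Sigma = [0] + list(range(3, 16)) + [18]      # {0,3,...,15,18}, |Sigma| = 15

def card_sigma_n(q, n):
    """Cardinality of Sigma + q*Sigma + ... + q^(n-1)*Sigma, exactly."""
    pts = {Fraction(0)}
    for i in range(n):
        pts = {p + q**i * a for p in pts for a in Sigma}
    return len(pts)

print(card_sigma_n(Fraction(1, 15), 2))      # -> 201  (less than 15^2 = 225)
print(card_sigma_n(Fraction(1, 14), 3))      # -> 2655 (less than 14^3 = 2744)
```

Since $2655\cdot(1/14)^3<1$, Theorem~\ref{th9} applies at $q=\frac1{14}$, as claimed.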
Suppose now that $k>3$. For the Ferens-like sequence $x_{q}=(k+m,\dots
,k+1,k;q)$ its sumset $\Sigma $ contains the number $|\Sigma |$, which
implies that $|\Sigma +q\Sigma |<|\Sigma |^{2}$ for $q=\frac{1}{|\Sigma |}$
and therefore $E(x_{q})$ is a Cantor set of zero measure according to
Theorem~\ref{th9}.
\section{Rational ratios}
For a contraction ratio $q\in\{\frac1{n+1}:n\in\mathbb{N}\}$ self-similar
sets of positive Lebesgue measure can be characterized as follows:
\begin{theorem}
\label{t12} Let $\Sigma \subset \mathbb{Z}$ be a finite set,
$q\in\{\frac1{n+1}:n\in\mathbb{N}\}$ and $\Sigma
_{n}=\sum_{i=0}^{n-1}q^{i}\Sigma $ for $n\in\mathbb{N}$. For the compact set
$K=K(\Sigma;q)$ the following conditions are equivalent:
\begin{itemize}
\item[(i)] $|\Sigma_n| \cdot q^n \geq 1$ for all $n\in\mathbb{N}$;
\item[(ii)] $\inf_{n\in\mathbb{N}}|\Sigma_{n}|\cdot q^n>0$;
\item[(iii)] $\lambda (K)>0.$
\end{itemize}
\end{theorem}
\begin{proof}
The implication (iii)$\Rightarrow $(i) follows from Theorem \ref{th9} while
(i)$\Rightarrow $(ii) is trivial. It remains to prove (ii)$\Rightarrow $
(iii). Suppose that $\lambda (K)=0$. Given any $r>0$ consider the $r
-neighborhood $H(K,r) =\{ h\in \mathbb{R}:\mathrm{dist}(h,K) <r\}$ of the
set $K=K(\Sigma;q)$. Take any point $z\in \big\{\sum_{i=n}^\infty
x_iq^i:\forall i\ge n\;x_i\in\Sigma\big\}$ and observe that
$\Sigma_n+z\subset K=\big\{\sum_{i=0}^\infty
x_iq^i:(x_i)_{i\in\omega}\in\Sigma^\omega\big\}$, which implies that
$H(\Sigma _{n}+z,r) \subset H(K,r)$ for all $r>0$. The continuity of the
Lebesgue measure implies that $\lambda(H(K,r))\rightarrow 0$ when $r$ tends
to zero. It follows from $\Sigma\subset\mathbb{Z}$ and $\frac1q\in\mathbb{N}$
that
\begin{equation*}
\Sigma _{n}\subset q^{n-1}\cdot \mathbb{Z}\text{. }
\end{equation*}
Hence, for any two different points $x$ and $y$ from $\Sigma _{n}$, the
distance between $x$ and $y$ is no less than $q^{n-1}>q^n$. Therefore, for
any $n\in \mathbb{N}$
\begin{equation*}
|\Sigma_n|\cdot q^n=\lambda \big( H\big( \Sigma_{n},\tfrac{1}{2}q^{n}\big)
\big)=\lambda\big(H\big(\Sigma_n+z,\tfrac12q^n\big)\big)\le\lambda\big(H\big(K,
\tfrac12q^n\big)\big)
\end{equation*}
which means that $\lim_{n\rightarrow \infty }|\Sigma _{n}|\cdot q^n=0$.
\end{proof}
Theorem~\ref{t12} combined with Corollary 2.3 of \cite{Schief} implies the
following corollary.
\begin{corollary}
For a finite subset $\Sigma\subset\mathbb{Z}$ and the number
$q=\frac{1}{|\Sigma|}<1$ the following conditions are equivalent:
\begin{itemize}
\item[(1)] $K(\Sigma;q)$ has positive Lebesgue measure;
\item[(2)] $K(\Sigma;q)$ contains an interval;
\item[(3)] for every $n\in\mathbb{N}$ the set $\Sigma
_{n}=\sum_{k=0}^{n-1}q^k\Sigma$ has cardinality $|\Sigma _{n}|=|\Sigma|^{n}$.
\end{itemize}
\end{corollary}
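Condition (3) amounts to asking that no digit collision ever occurs. A small sketch (sample sets ours): for $\Sigma=\{0,1,2\}$ and $q=\frac13$ the sums are just base-3 expansions, so $|\Sigma_n|=3^n$ for all $n$; for $\Sigma=\{0,1,3\}$ and $q=\frac13$ already $1+0\cdot\frac13=0+3\cdot\frac13$, so $|\Sigma_2|<9$ and the corollary gives measure zero.

```python
from fractions import Fraction

def sigma_n(Sigma, q, n):
    """The set Sigma + q*Sigma + ... + q^(n-1)*Sigma, computed exactly."""
    pts = {Fraction(0)}
    for i in range(n):
        pts = {p + q**i * a for p in pts for a in Sigma}
    return pts

print(len(sigma_n([0, 1, 2], Fraction(1, 3), 4)))   # -> 81 = 3^4, no collisions
print(len(sigma_n([0, 1, 3], Fraction(1, 3), 2)))   # -> 8 < 3^2, a collision
```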
\begin{problem}
Is it true that for a finite set $\Sigma\subset\mathbb{Z}$ and any
(rational) $q\in(0,1)$ the self-similar set $K(\Sigma;q)$ has positive
Lebesgue measure if and only if it contains an interval?
\end{problem}
\begin{remark}
According to \cite{Ex}, there exists a 10-element set $\Sigma $ on the
complex plane $\mathbb{C}$ such that for $q=\frac{1}{3}$ the self-similar
compact set $K(\Sigma ;q)=\Sigma +qK(\Sigma ;q)\subset \mathbb{C}$ has
positive Lebesgue measure and empty interior in $\mathbb{C}$.
\end{remark}
\section{Introduction}
The generalization of quantum electrodynamics to include non-abelian gauge
fields produces the asymptotically free gauge theory called quantum
chromodynamics (QCD) which describes the strong interactions. The natural forum
to construct the properly gauge fixed (renormalizable) Lagrangian with which to
perform calculations is provided by the path integral machinery. For instance,
in the Landau gauge, which we concentrate on here, the Faddeev-Popov ghosts
naturally emerge as a consequence of the non-gauge invariance of the path
integral measure. Whilst the resulting Lagrangian more than adequately
describes the ultraviolet structure of asymptotically free quarks and gluons
the infrared behaviour has not been fully established. For instance, it is
evident that as a result of confinement gluons and quarks cannot have
propagators of a fundamental type. Over the last few years there has been
intense activity into measuring gluon and ghost form factors using lattice
methods and the Dyson Schwinger formalism. Denoting these respectively by
$D_A(p^2)$ and $D_c(p^2)$ a general picture emerges in that there is gluon
suppression with $D_A(0)$~$=$~$0$ and ghost enhancement where
$D_c(p^2)$~$\sim$~$1/(p^2)^\lambda$ as $p^2$~$\rightarrow$~$0$ with
$\lambda$~$>$~$0$. Such behaviour is not inconsistent with general
considerations from confinement criteria\cite{1,2,3,4,5,6,7,8,9}. Ideally,
given that these properties are now accepted, it is important that they can be
explained from general field theory considerations. This was the approach of
Zwanziger\cite{4,5,7,8} in treating the Gribov problem from the path integral
point of view. Therefore we will briefly review the construction of the
Gribov-Zwanziger Lagrangian before giving a summary of recent results obtained
using it in the Landau gauge.
\section{Gribov-Zwanziger Lagrangian}
Gribov pointed out\cite{1} that in non-abelian gauge theories it is not
possible to uniquely fix the gauge globally due to the existence of copies of
the gauge field. To handle this the path integral was restricted to the first
Gribov region, $\Omega$, where $\partial \Omega$ is defined by the place where
the Faddeev-Popov operator ${\cal M}$~$=$~$-$~$\partial^\mu D_\mu$ first
vanishes. Within $\Omega$ ${\cal M}$ is always positive and in the Landau
gauge it is hermitian. Moreover $\Omega$ is convex and bounded\cite{3} and all
gauge copies transit\cite{3} $\Omega$. Any copy in the subsequent regions
defined by the other zeroes of ${\cal M}$ can be mapped into $\Omega$. Whilst
the path integral is constrained to $\Omega$, within $\Omega$ there is a
region, $\Lambda$, known as the fundamental modular region where there are no
gauge copies and the gauge is properly fixed. Although $\Lambda$ is difficult
to define, for practical purposes expectation values over $\Lambda$ or $\Omega$
give the same values\cite{10}. Consequently the gluon form factor is modified
to $D_A(p^2)$~$=$~$(p^2)^2/[(p^2)^2+C_A\gamma^4]$ where $\gamma$ is the Gribov
mass, whence suppression emerges\cite{1}. The parameter $\gamma$ is not
independent and satisfies a gap equation. The theory can only be interpreted as
a gauge theory when $\gamma$ takes the value defined in the gap equation.
Computing the one loop ghost propagator, one then finds that it is enhanced
precisely when the gap equation is satisfied\cite{1}.
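The infrared suppression encoded in the Gribov form factor is immediate to check numerically (a minimal sketch; the values of $C_A$ and $\gamma$ are illustrative only, not fixed by the gap equation):

```python
CA, gamma = 3.0, 1.0            # illustrative values only

def D_A(p2):
    """Gribov gluon form factor (p^2)^2/((p^2)^2 + C_A gamma^4)."""
    return p2**2 / (p2**2 + CA * gamma**4)

print(D_A(0.0))                 # -> 0.0 : infrared suppression, D_A(0) = 0
print(D_A(1.0e6) > 0.999999)    # -> True: the free propagator is recovered in the UV
```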
Gribov's revolutionary analysis was based on a semi-classical approach and then
Zwanziger\cite{4,5} extended it to a path integral construction by modifying
the measure to restrict the integration region to $\Omega$ via the defining
criterion known as the horizon condition,
\begin{equation}
\int A^a_\mu(x) \frac{1}{\partial^\nu D_\nu} A^{a\,\mu}(x) ~=~
\frac{d N_A}{C_A g^2}
\label{hordef}
\end{equation}
where $d$ is the dimension of spacetime and $N_A$ is the adjoint representation
dimension\cite{5}. For the Landau gauge the convexity and ellipsoidal
properties of $\Omega$ allow one to modify the usual Yang-Mills Lagrangian to
include the horizon condition, (\ref{hordef}), producing the non-local
Yang-Mills Lagrangian\cite{4,5}
\begin{equation}
L^\gamma ~=~ -~ \frac{1}{4} G_{\mu\nu}^a
G^{a \, \mu\nu} ~+~ \frac{C_A\gamma^4}{2} A^a_\mu \, \frac{1}{\partial^\nu
D_\nu} A^{a \, \mu} ~-~ \frac{d \NA \gamma^4}{2g^2} ~.
\label{nloclag}
\end{equation}
Again (\ref{nloclag}) only has meaning when $\gamma$ satisfies (\ref{hordef})
which is equivalent to the Gribov gap equation. Finally the non-locality can
be handled by using localizing fields to produce the Gribov-Zwanziger
Lagrangian\cite{5}
\begin{eqnarray}
L^Z &=& L^{QCD} ~+~ \bar{\phi}^{ab \, \mu} \partial^\nu
\left( D_\nu \phi_\mu \right)^{ab} ~-~ \bar{\omega}^{ab \, \mu} \partial^\nu
\left( D_\nu \omega_\mu \right)^{ab} \nonumber \\
&& -~ g f^{abc} \partial^\nu \bar{\omega}^{ae}_\mu \left( D_\nu c \right)^b
\phi^{ec \, \mu} \nonumber \\
&& +~ \frac{\gamma^2}{\sqrt{2}} \left( f^{abc} A^{a \, \mu} \phi^{bc}_\mu ~+~
f^{abc} A^{a \, \mu} \bar{\phi}^{bc}_\mu \right) ~-~
\frac{d \NA \gamma^4}{2g^2}
\label{gzlag}
\end{eqnarray}
where $\phi^{ab}_\mu$ and $\omega^{ab}_\mu$ are localizing ghost fields with
the latter anti-commuting. This Lagrangian is renormalizable\cite{7,11,12}
and reproduces Gribov's one loop gap equation and ghost enhancement\cite{8}.
For (\ref{gzlag}) the horizon condition equates to
\begin{equation}
f^{abc} \langle A^{a \, \mu}(x) \phi^{bc}_\mu(x) \rangle ~=~
\frac{d \NA \gamma^2}{\sqrt{2}g^2} ~.
\label{hordefgz}
\end{equation}
\section{Calculations}
As the Zwanziger construction has produced a renormalizable Lagrangian with
extra fields incorporating infrared features without upsetting ultraviolet
properties, such as asymptotic freedom, it is possible to extend the earlier
one loop analysis\cite{1,8}. For instance in $\MSbar$ the two loop gap equation
results from (\ref{hordefgz}) after computing $17$ vacuum bubble graphs,
giving\cite{13},
\begin{eqnarray}
1 &=& C_A \left[ \frac{5}{8} - \frac{3}{8} \ln \left(
\frac{C_A\gamma^4}{\mu^4} \right) \right] a \nonumber \\
&& +~ \left[ C_A^2 \left( \frac{2017}{768} - \frac{11097}{2048} s_2
+ \frac{95}{256} \zeta(2)
- \frac{65}{48} \ln \left( \frac{C_A\gamma^4}{\mu^4} \right) \right. \right.
\nonumber \\
&& \left. \left. ~~~~~~~~~~~~+~ \frac{35}{128} \left( \ln \left(
\frac{C_A\gamma^4}{\mu^4} \right) \right)^2 + \frac{1137}{2560} \sqrt{5}
\zeta(2) - \frac{205\pi^2}{512} \right) \right. \nonumber \\
&& \left. ~~~~~+~ C_A T_F \Nf \left( -~ \frac{25}{24} - \zeta(2)
+ \frac{7}{12} \ln \left( \frac{C_A\gamma^4}{\mu^4} \right) \right. \right.
\nonumber \\
&& \left. \left. ~~~~~~~~~~~~~~~~~~~~~~~-~ \frac{1}{8} \left( \ln \left(
\frac{C_A\gamma^4}{\mu^4} \right) \right)^2 + \frac{\pi^2}{8} \right) \right]
a^2 +~ O(a^3)
\label{gap2}
\end{eqnarray}
where $s_2$~$=$~$(2\sqrt{3}/9) \mbox{Cl}_2(2\pi/3)$ with $\mbox{Cl}_2(x)$ the
Clausen function, $\zeta(n)$ is the Riemann zeta function and
$a$~$=$~$\alpha_S/(4\pi)$. To appreciate the non-perturbative nature of
$\gamma$ one can formally solve for it with the ansatz
\begin{equation}
\frac{C_A \gamma^4}{\mu^4} ~=~ c_0 [ 1 + c_1 C_A \alpha_S ]
\exp \left[ - \frac{b_0}{C_A \alpha_S} \right]
\end{equation}
giving
\begin{equation}
b_0 ~=~ \frac{32\pi\left[3C_A - \sqrt{79C_A^2-32C_A T_F \Nf}\right]}
{[35C_A-16T_F\Nf]}
\end{equation}
\begin{equation}
c_0 = \exp \!\! \left[ \frac{1}{[105C_A - 48 T_F \Nf]} \! \!
\left[ 260 C_A - 112 T_F \Nf
- \frac{[255C_A - 96 T_F \Nf] C_A}{\sqrt{79 C_A^2 - 32 C_A T_F \Nf}} \right]
\! \right]
\end{equation}
and
\begin{eqnarray}
c_1 &=& \left[ 8940981420 \sqrt{5} C_A^4 \zeta(2)
- 11330632512 \sqrt{5} C_A^3 \Nf T_F \zeta(2)
\right. \nonumber \\
&& \left. +~ 4778237952 \sqrt{5} C_A^2 \Nf^2 T_F^2 \zeta(2)
- 670629888 \sqrt{5} C_A \Nf^3 T_F^3 \zeta(2)
\right. \nonumber \\
&& \left. -~ 8060251500 \pi^2 C_A^4 - 109078793775 s_2 C_A^4
\right. \nonumber \\
&& \left. +~ 7470477000 C_A^4 \zeta(2) + 19529637400 C_A^4
\right. \nonumber \\
&& \left. +~ 12730881600 \pi^2 C_A^3 \Nf T_F + 138232221840 s_2 C_A^3 \Nf T_F
\right. \nonumber \\
&& \left. -~ 29598076800 C_A^3 \Nf T_F \zeta(2) - 32025280640 C_A^3 \Nf T_F
\right. \nonumber \\
&& \left. -~ 7496478720 \pi^2 C_A^2 \Nf^2 T_F^2
- 58293872640 s_2 C_A^2 \Nf^2 T_F^2
\right. \nonumber \\
&& \left. +~ 29503733760 C_A^2 \Nf^2 T_F^2 \zeta(2)
+ 19655024640 C_A^2 \Nf^2 T_F^2
\right. \nonumber \\
&& \left. +~ 1949368320 \pi^2 C_A \Nf^3 T_F^3
+ 8181596160 s_2 C_A \Nf^3 T_F^3
\right. \nonumber \\
&& \left. -~ 11318722560 C_A \Nf^3 T_F^3 \zeta(2)
- 5351014400 C_A \Nf^3 T_F^3
\right. \nonumber \\
&& \left. -~ 188743680 \pi^2 \Nf^4 T_F^4
+ 1509949440 \Nf^4 T_F^4 \zeta(2)
+ 545259520 \Nf^4 T_F^4 \right] \nonumber \\
&& \times \frac{1}{46080 \pi [ 79 C_A - 32 T_F \Nf ]^{5/2}
[ 35 C_A - 16 T_F \Nf ] \sqrt{C_A}} ~.
\end{eqnarray}
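To make the non-perturbative role of $\gamma$ concrete, note that the one-loop truncation of (\ref{gap2}), $1 = C_A a\left[5/8 - (3/8)\ln(C_A\gamma^4/\mu^4)\right]$, can be inverted exactly as $\ln x = 5/3 - 8/(3C_A a)$ with $x = C_A\gamma^4/\mu^4$. The sketch below (an illustrative aside with hypothetical parameter values, not part of the original analysis) solves the truncated equation numerically by bisection and checks the result against this closed form:

```python
import math

def gap_one_loop(x, CA, a):
    """One-loop truncation of the MSbar gap equation, evaluated at
    x = C_A gamma^4 / mu^4; the gap equation reads gap_one_loop(x, ...) = 1."""
    return CA * a * (5.0 / 8.0 - (3.0 / 8.0) * math.log(x))

def solve_gap_bisection(CA, a, lo=1e-12, hi=1.0):
    """Solve gap_one_loop(x) = 1 for x; the left-hand side is monotonically
    decreasing in x, so we bisect (in log x, since x spans many decades)."""
    f = lambda x: gap_one_loop(x, CA, a) - 1.0
    while f(hi) > 0.0:      # widen the bracket until f changes sign
        hi *= 10.0
    while f(lo) < 0.0:
        lo /= 10.0
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

def solve_gap_closed_form(CA, a):
    """Exact inversion of the one-loop equation: ln x = 5/3 - 8/(3 C_A a)."""
    return math.exp(5.0 / 3.0 - 8.0 / (3.0 * CA * a))
```

For $C_A=3$ and the (hypothetical) value $a=\alpha_S/(4\pi)=0.1$, both routes give $x\simeq7.3\times10^{-4}$; the same strategy applies at two loops once the $O(a^2)$ bracket of (\ref{gap2}) is included in $f$.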
So in principle one could now compute with a gluon propagator which includes
renormalon type singularities. Further, with (\ref{gap2}) there is two loop
ghost enhancement with the Kugo-Ojima confinement criterion\cite{9} precisely
fulfilled at this order consistent with Zwanziger's all orders proof\cite{7}.
Also at one loop it has been shown\cite{14} that $D_A(0)$~$=$~$0$. The final
quantity of interest is the renormalization group invariant effective coupling
constant $\alpha^{\mbox{eff}}_S (p^2)$~$=$~$\alpha_S(\mu) D_A(p^2)
\left( D_c(p^2) \right)^2$ which is believed to freeze at zero momentum. From
the $\MSbar$ one loop form factors it was shown\cite{14} that
$\alpha^{\mbox{eff}}_S (0)$~$=$~$\frac{50}{3\pi C_A}$.
Whilst the previous expressions have all been in the $\MSbar$ scheme it is
worth considering other renormalization schemes such as MOM. Given that one
loop calculations\cite{14} produced exact form factors the derivation of the
one loop MOM gap equation is straightforward, giving
\begin{eqnarray}
1 &=& \left[ \frac{5}{8} + \frac{3}{8} \ln \left(
\frac{C_A\gamma^4}{[C_A\gamma^4+\mu^4]} \right) - \frac{C_A\gamma^4}{8\mu^4}
\ln \left( \frac{C_A\gamma^4}{[C_A\gamma^4+\mu^4]} \right) -
\frac{3\pi\sqrt{C_A}\gamma^2}{8\mu^2} \right. \nonumber \\
&& \left. + \left[ \frac{3\sqrt{C_A}\gamma^2}{4\mu^2}
- \frac{\mu^2}{4\sqrt{C_A}\gamma^2} \right] \tan^{-1} \left[
\frac{\sqrt{C_A}\gamma^2}{\mu^2} \right] \right] C_A a + O(a^2) ~.
\label{momgap}
\end{eqnarray}
For later we formally define this as
$1$~$=$~$\mbox{gap}(\gamma,\mu,\mbox{MOM}) C_A a$~$+$~$O(a^2)$. Central to
deriving this was the preservation of the Slavnov-Taylor identities in MOM.
For instance defining $Z_A$ and $Z_c$ from the respective gluon and ghost
$2$-point functions in MOM, then the coupling constant and $\gamma$
renormalization constants are already fixed and these must be used in computing
the horizon function. Given (\ref{momgap}) we have reproduced the one loop
ghost enhancement in MOM and the {\em same} freezing value for
$\alpha^{\mbox{eff}}_S (0)$. Since the numerical structure is different from
the $\MSbar$ calculation, we record that the analogous\cite{14} computation is
\begin{equation}
\alpha^{\mbox{eff}}_S (0) ~=~ \lim_{p^2 \rightarrow 0} \left[
\frac{ \alpha_S(\mu) \left[ 1 - C_A \left( \mbox{gap}(\gamma,\mu,\mbox{MOM})
+ \frac{5}{8} - \frac{265}{384} \right) a
\right] (p^2)^2 }
{ C_A \gamma^4 \left[ 1 - C_A \left( \mbox{gap}(\gamma,\mu,\mbox{MOM})
- \frac{\pi p^2}{8 \sqrt{C_A} \gamma^2} \right) a \right]^2 } \right]
\end{equation}
whence $\alpha^{\mbox{eff}}_S (0)$~$=$~$\frac{50}{3\pi C_A}$.
\section{Discussion}
To conclude we note that we have reviewed the path integral construction of
Zwanziger's localised renormalizable Lagrangian for the Landau gauge which
incorporates the restriction of gauge configurations to the first Gribov
region. A picture emerges of the infrared structure which is consistent with
the gluon being confined. Crucial to the analysis was the geometry of the
Gribov region. This can be appreciated from another point of view given recent
work in trying to extend the path integral construction to other
gauges\cite{15,16,17}. For linear covariant gauges other than Landau the
Faddeev-Popov operator is not hermitian\cite{15} and convexity of the Gribov
region is only valid when the covariant gauge fixing parameter is
small\cite{15}. Moreover, given that the Faddeev-Popov operator in this
instance would involve the transverse part of the gauge field then the
non-local operator of (\ref{nloclag}) would itself contain a non-locality in
the covariant derivative\cite{15}. Another example is the construction of a
Gribov-Zwanziger type Lagrangian for $SU(2)$ Yang-Mills fixed in the maximal
abelian gauge\cite{16,17}. Whilst a localised renormalizable Lagrangian
analogous to (\ref{gzlag}) can be constructed the algebraic renormalization
analysis demonstrates that there is an additional free parameter which has no
analogue in the Landau gauge\cite{17}. Given these recent considerations it
would seem therefore that in the Gribov context the Landau gauge is peculiarly
special.
\section*{Acknowledgements}
The author thanks Prof. S. Sorella, Prof. D. Zwanziger and Dr D. Dudal for
useful discussions concerning the Gribov problem.
\section{Introduction}
The structural analogies between traditional oxide-containing frameworks and the class of materials known as metal--organic frameworks (MOFs) are certainly well established.\cite{Hoskins_1990} A topical example is the family of zeolitic imidazolate frameworks (ZIFs), which are MOF analogues of zeolitic SiO$_2$ polymorphs.\cite{Lehnert_1980,Tian_2003,Banerjee_2008} Replacing Si$^{4+}$ and O$^{2-}$ ions by tetrahedral Zn$^{2+}$ centres and bridging imidazolate linkers respectively, it is possible to generate a wide range of zeolite-like framework structures with increased pore volume. Like zeolites, ZIFs offer attractive gas storage and catalytic functionality,\cite{Banerjee_2008} but they couple these properties with the characteristic chemical versatility for which MOFs are renowned (\emph{e.g.}\ the imidazolate linker can be replaced by substituted derivatives and the transition metal dication varied). The extensive range of functionalities exhibited by ZIFs and many other MOF families has generated a strong and sustained interest in studying structure/property relationships in such `hybrid' inorganic-organic materials;\cite{Cheetham:2007kx,Ma:2010vn,Dybtsev:2006ys,Halder:2002zr,Qiu:2009ly} consequently, there is a great deal now known regarding their structural chemistry.
What has become increasingly evident is that the structural behaviour of MOFs is often significantly more complex than might have been anticipated. A surprisingly large number---including the canonical MOF-5 (Ref.~\onlinecite{Li_1999})---exhibit negative thermal expansion (NTE; \emph{i.e.}, their lattices expand on cooling).\cite{Zhou:2008uq, Wu:2008kx} Some undergo pressure- and temperature-induced amorphisation processes.\cite{Bennett:2010fk} Yet others show unprecedented structural flexibility, being capable of dramatic changes in crystallite dimensions during absorption or desorption of guest species.\cite{Serre:2002vn} While it is the case that similar mechanical phenomena have been documented for oxide-containing frameworks (\emph{e.g.}\ NTE in ZrW$_2$O$_8$, see Ref.~\onlinecite{Mary:1996}), the effects observed in MOFs are almost always much more extreme. The increased structural flexibility of molecular metal--ligand--metal linkages is heavily implicated in the fundamental difference in magnitude of behaviour: metal--ligand geometries are more easily distorted when the ligand is large and flexible than when it is a single oxide ion.
The prevalence of low-energy deformation mechanisms in MOFs should result in an increased propensity for structural distortion, which if incoherent could resemble the static disorder often found in zeolites\cite{Wragg:2008} and perovskite frameworks\cite{Rodriguez:2009}---or when coherent may give rise to an extreme symmetry lowering observed for some complex framework oxides.\cite{Lister:2004} Yet it seems that reports of static disorder in MOFs are relatively few in number: there is, for example, some discussion of ligand orientation disorder and strongly anisotropic displacement parameters in Cu(4-oxopyrimidinate)$_2$;\cite{Barea_2003} likewise molecular dynamics studies of MOF-5 and its isoreticular congeners have also pointed to the existence of low-barrier enthalpy landscapes thought to be responsible for both static and dynamic disorder.\cite{Amirjalayer:2008fk,Jhon:2007uq} A traditional emphasis within the field on average structure studies---which are also usually performed at temperatures insufficiently low to distinguish static disorder from vibrational motion---has perhaps meant that the degree of static disorder within MOFs has remained poorly understood.
Here, as part of a broader study into the local structure and dynamics of mineralomimetic MOFs, we report a combined average- and local-structure investigation into the existence and nature of static disorder in zinc(II) isonicotinate, Zn(ISN)$_2$. Our approach has been to collect neutron total scattering data for Zn(ISN)$_2$ at 10\,K, which we analyse using Rietveld refinement (average structure) and reverse Monte Carlo (RMC) refinement of the corresponding pair distribution function (PDF) data (local structure). We find strong evidence in both types of refinement for the existence of static disorder that resembles local transverse displacements of the isonicotinate ligands.
The Zn(ISN)$_2$ framework structure contains Zn$^{2+}$ centres that are fourfold-coordinated by isonicotinate anions, with each anion bridging two Zn$^{2+}$ cations to form a tetrahedral net with the quartz topology [Fig.~\ref{fig1}]. The coordination sphere of each Zn$^{2+}$ centre consists of two pyridinyl N donor atoms and two carboxylate O atoms---an arrangement that is inconsistent with the corresponding site symmetry imposed by the $\beta$-quartz structure. Instead, the material adopts the lower-symmetry $\alpha$-quartz structure, crystallising in the hexagonal space group $P6_2$.\cite{Wu:2009} As in quartz itself, the structure of Zn(ISN)$_2$ contains one-dimensional pores parallel to the crystallographic $\mathbf c$ axis. In as-prepared samples these pores are occupied by solvent molecules, but the pores can be evacuated by heating to 100\,$^\circ$C.\cite{Sun:2002} There remains some controversy as to whether the correct space group for Zn(ISN)$_2$ is actually $P3_1$, which is a subgroup of $P6_2$ and which demands the existence of two crystallographically distinct ISN units in the asymmetric unit.\cite{Wu:2009, Sun:2002}
\begin{figure}
\includegraphics{fig1.jpg}
\caption{Crystal structure of (a) Zn(ISN)$_2$ with [ZnN$_2$O$_2$] tetrahedra connected via isonicotinate ligands, and (b) $\alpha$-quartz with connected [SiO$_4$] tetrahedra. Both structures are viewed down the \emph{c} axis with their unit cell shown. Local coordination environment of (c) Zn(ISN)$_2$ and (d) $\alpha$-quartz. (Zn, green; N, light blue; O, red; C, black; Si, dark blue, and isonicotinate hydrogens have been removed for clarity).\label{fig1}}
\end{figure}
Our paper begins by describing the experimental methods used in our study, together with our approach to RMC refinements of the neutron PDF data we have collected. We note that this is somewhat of a test-case for the use of our {\sc rmcprofile} code\cite{Tucker:2007} in the study of structural disorder in crystalline MOFs, and we have encountered some difficulties through an unquantifiable degree of absorption of atmospheric H$_2$O into our deuterated sample. Mindful of the ensuing limitations, we present the results of our average-structure and RMC studies and discuss the influence of the included solvent on the conclusions drawn. We finish with a discussion on the nature of static disorder in this material in the context of correlated (thermal) displacements within $\alpha$-quartz itself.
\section{Materials and Methods}
\subsection{Synthesis}
A polycrystalline sample of zinc(II) isonicotinate was prepared by mixing stoichiometric quantities of zinc(II) acetate and $d^4$-isonicotinic acid (QMx, 98\% D) dissolved in dimethylsulfoxide (DMSO). The white powder formed on mixing was filtered, washed and dried \emph{in vacuo} (100\,$^{\circ}$C, 24\,h) in order to remove all solvent from the framework pores. Working within a glovebox, the dried sample (\emph{ca} 2\,g) was transferred to a vanadium can suitable for neutron total scattering measurements.
\subsection{Neutron total scattering}
Neutron total scattering data were collected at 10\,K for Zn(ISN)$_2$ using the time-of-flight diffractometer GEM at ISIS. \cite{Williams:1997,Day:2004,AlexC:2005} For the experiment, approximately 2\,g of Zn(ISN)$_2$, prepared as described above, was placed within a cylindrical vanadium can of 8.3\,mm diameter and 6\,cm height. This can was loaded at room temperature inside a closed cycle helium refrigerator and the temperature was lowered slowly to 10\,K. Total scattering data were collected over a large range of scattering vectors of magnitudes $0.6\leq Q\leq 40$\,\AA$^{-1}$, giving a real-space resolution of order $\Delta r\simeq 0.09$\,\AA.
The total scattering data were corrected using standard methods, taking into account the effects of background scattering, absorption, multiple scattering within the sample, beam intensity variations, the Placzek inelastic correction, and hydrogen content corrections.\cite{Keen:2001fk} These corrected data were then converted to experimental $G(r)$ and $F(Q)$ functions:\cite{Dove:2002kx,Keen:2001fk}
\begin{eqnarray}
F(Q)&=&\rho_0 \int_0^\infty4\pi r^2 G(r)\frac{\sin Qr}{Qr}\mathrm{d}r\label{fq}\\
G(r)&=&\sum_{i,j=1}^nc_ic_j\bar{b}_i\bar{b}_j[g_{ij}(r)-1],
\end{eqnarray}
where
\begin{equation}
g_{ij}(r)=\frac{n_{ij}(r)}{4\pi r^2\mathrm{d}r\rho_0},\label{gr}
\end{equation}
$n_{ij}(r)$ is the number of pairs of atoms of type $i$ and $j$ separated by distance $r$, $\rho_0$ is the number density, $c_i$ the concentration of each species $i$ and $b_i$ the corresponding neutron scattering length. The Bragg profiles for each data set were extracted from the scattering data collected using the detector banks centred on scattering angles $2\theta = 34.96^{\circ}, 63.62^{\circ}, 91.30^{\circ},$ and $154.40^{\circ}$.
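Eq.~(\ref{fq}) is a radial Fourier transform that is straightforward to evaluate numerically. As a hedged sketch (using an artificial Gaussian model $G(r)=e^{-r^2}$ and an arbitrary density, not our experimental data), the quadrature below can be checked against the closed form $F(Q)=\rho_0\,\pi^{3/2}e^{-Q^2/4}$ that this model admits:

```python
import math

def f_of_q(Q, G, rho0, r_max=12.0, n=12000):
    """F(Q) = rho0 * int_0^inf 4 pi r^2 G(r) sin(Qr)/(Qr) dr, evaluated by
    composite Simpson's rule on [0, r_max]; G must decay well before r_max."""
    def integrand(r):
        if r == 0.0 or Q == 0.0:
            sinc = 1.0                     # sin(Qr)/(Qr) -> 1 as Qr -> 0
        else:
            sinc = math.sin(Q * r) / (Q * r)
        return 4.0 * math.pi * r * r * G(r) * sinc
    h = r_max / n                          # n must be even for Simpson's rule
    s = integrand(0.0) + integrand(r_max)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * integrand(i * h)
    return rho0 * (h / 3.0) * s

def gaussian_G(r):
    return math.exp(-r * r)

def gaussian_F_exact(Q, rho0):
    """Closed form of the transform for the model G(r) = exp(-r^2)."""
    return rho0 * math.pi ** 1.5 * math.exp(-Q * Q / 4.0)
```

In practice the experimental integral is of course performed on measured, binned $G(r)$ data rather than an analytic model; the Gaussian simply provides an exact reference for the quadrature.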
Because the number density $\rho_0$ enters Eqs.~\eqref{fq} and \eqref{gr}, it is possible during data normalisation to check for consistency between the expected value of $\rho_0$ and the value for which normalisation is most robust. In the present case, a value of $\rho_0$ corresponding to guest-free Zn(ISN)$_2$ was found not to be the value most consistent with the observed data; moreover, by comparing the incoherent scattering levels associated with each detector bank it was possible to deduce that the sample as measured was actually contaminated with a significant quantity of hydrogenous material, despite the care taken to evacuate the framework completely.
This unquantifiable degree of solvation had three consequences for our (usually quantitative) RMC modelling of the PDF data. First, we found that quantitative normalisation of the $G(r)$ function was not possible, and so for RMC refinement a smoothly-varying function was subtracted from the normalised data---this did not affect the final model but improved visually the quality of our fits to data. Second, it was not possible to fit the very lowest-$r$ region of the PDF since this was likely to contain a well-structured contribution from the included solvent (\emph{e.g.}\ the O--H distance if the solvent were H$_2$O). Third, because the value of $\rho_0$ was not known accurately it was necessary to fit the PDF only while allowing refinement of an overall scale parameter. We note that such an approach is quite common for other PDF fitting procedures.\cite{Farrow:2007}
\subsection{Average structure refinement}
The experimental Bragg diffraction profiles were fitted with the {\sc gsas} Rietveld refinement program\cite{VonDreele:2000,Toby:2001} using both the published $P3_1$ and $P6_2$ structural models.\cite{Sun:2002,Wu:2009} Atomic coordinates of the organic ligand were refined using rigid body constraints in order to minimise the number of refinable parameters; likewise a single set of anisotropic displacement parameters was refined for all atoms in the isonicotinate group. Zn atom coordinates were refined while fixing the $z$ component (there is no unique origin in $z$ for either $P3_1$ or $P6_2$) and an isotropic displacement parameter was allowed to refine freely. Solvent occupancy within the pore network was treated using a number of different approaches, and these are discussed in more detail in the results section below.
\subsection{Reverse Monte Carlo refinement}
RMC refinements were carried out using the {\sc rmcprofile} code.\cite{Tucker:2007} To the best of our knowledge, this study represents the first RMC study of a crystalline MOF and we detail below some of the difficulties we have encountered in the process. As for all RMC studies, the basic refinement objective is to produce atomistic configurations that fit simultaneously the experimental $G(r)$, $F(Q)$, and Bragg profile $I(t)$ functions. This is achieved by accepting or rejecting random atomic moves produced by the Metropolis Monte Carlo algorithm, where in this case the Monte Carlo `energy' function is determined by the quality of the fits to data. The motivation behind fitting both real-space $G(r)$ and reciprocal-space $F(Q)$/$I(t)$ functions is to probe local distortions in the framework in a manner that is inherently consistent with the long-range periodic order reflected in the Bragg intensities. Similar refinements of crystalline materials in which reciprocal-space data are not used have a tendency to become unphysically disordered,\cite{Tucker:2001} and we were keen to assess the degree of structural disorder in Zn(ISN)$_2$ on the most realistic level possible.
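The accept/reject logic is easiest to see in a deliberately simplified one-dimensional toy model (this is not the {\sc rmcprofile} implementation): atoms on a periodic line receive random single-atom moves, a pair-distance histogram plays the role of $G(r)$, and a move that worsens the misfit $\chi^2$ is accepted with the Metropolis probability $\exp(-\Delta\chi^2/\sigma^2)$; setting $\sigma=0$ reduces the scheme to a greedy downhill search.

```python
import math
import random

def pair_histogram(xs, L, bins, r_max):
    """Histogram of minimum-image pair distances for positions xs on a ring of length L."""
    h = [0] * bins
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            d = abs(xs[i] - xs[j])
            d = min(d, L - d)              # periodic (minimum-image) distance
            if d < r_max:
                h[int(d / r_max * bins)] += 1
    return h

def chi2(h, target):
    return sum((a - b) ** 2 for a, b in zip(h, target))

def rmc_refine(xs, target, L, bins, r_max, sigma, steps, max_move, rng):
    """Metropolis-style RMC: accept a random move if it improves the fit to the
    target histogram, or with probability exp(-d_chi2/sigma^2) if it worsens it."""
    xs = list(xs)
    c = chi2(pair_histogram(xs, L, bins, r_max), target)
    for _ in range(steps):
        i = rng.randrange(len(xs))
        old = xs[i]
        xs[i] = (old + rng.uniform(-max_move, max_move)) % L
        c_new = chi2(pair_histogram(xs, L, bins, r_max), target)
        delta = c_new - c
        if delta <= 0 or (sigma > 0 and rng.random() < math.exp(-delta / sigma ** 2)):
            c = c_new                      # accept the move
        else:
            xs[i] = old                    # reject: restore the old position
    return xs, c
```

The real refinement differs in scale and detail (three dimensions, multiple weighted data sets, constraints), but the central loop has this shape.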
Our starting configurations for the RMC process were based on an orthogonal supercell related to the crystallographic cell by the transformation
\begin{equation}
\left[ \begin{matrix}
a \\ b \\ c
\end{matrix} \right]_{\textrm{RMC}} =
\left[ \begin{array}{ccc}
4 & 0 & 0 \\
2 & 4 & 0 \\
0 & 0 & 10 \\
\end{array} \right] \times
\left[ \begin{matrix}
a \\ b \\ c \\
\end{matrix} \right]_{P6_2},
\end{equation}
giving a cell of dimensions 62.14\,\AA\ $\times$ 53.82\,\AA\ $\times$ 61.36\,\AA. The use in RMC of orthogonal axes for hexagonal systems means the configurations can be prepared with dimensions approximately equal in each direction. This maximises the pair distribution cut-off value $r_{\textrm{max}}$ for a given number of atoms (and hence minimises computational cost).
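The geometry implied by this transformation is easy to verify explicitly. The sketch below (plain Python, with the hexagonal cell parameters of Table~\ref{table2}) builds Cartesian cell vectors, applies the matrix, and confirms that the resulting supercell vectors are mutually orthogonal:

```python
import math

def hexagonal_cell(a, c):
    """Cartesian vectors of a hexagonal cell (a = b, gamma = 120 degrees)."""
    return [
        [a, 0.0, 0.0],
        [-0.5 * a, 0.5 * math.sqrt(3.0) * a, 0.0],
        [0.0, 0.0, c],
    ]

def supercell(M, cell):
    """Supercell vectors: each row of the integer matrix M combines the cell vectors."""
    return [[sum(M[i][k] * cell[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def length(u):
    return math.sqrt(dot(u, u))
```

With $a=15.53615$\,\AA, $c=6.13556$\,\AA\ and the matrix above, the three supercell vectors are pairwise orthogonal with lengths $4a\simeq62.14$\,\AA, $2\sqrt{3}\,a\simeq53.82$\,\AA\ and $10c\simeq61.36$\,\AA.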
In the results section below we discuss in more detail the various tests we performed to determine the best way of modelling solvent occupancy. The key set of RMC configurations contained 18\,720 atoms, of which 12\,960 are part of the framework and 5\,760 are part of the included solvent. We note that the configuration contained six different atom types (C, D, H, N, O, Zn), giving rise to a total of 21 partial $g_{ij}(r)$ contributions---we understand this to be the largest number refined within {\sc rmcprofile} to date. A number of soft constraints and restraints were applied throughout the refinement process: closest-approach constraints stopped atom pairs from being separated by unphysically short distances; `distance-window' constraints maintained the framework connectivity without prejudicing the bond-bending and bond-stretching terms [Table~\ref{table1}];\cite{Goodwin:2005} and, finally, a set of empirical geometric restraints were applied to maintain the geometry of the isonicotinate moiety.\cite{Bennett:2010fk}
\begin{table}
\caption{\label{table1} `Distance window' parameters $d_{\textrm{min}}$, $d_{\textrm{max}}$ used in our Zn(ISN)$_2$ RMC refinements and the corresponding mean pair separations $\bar d$ (\AA) and their standard deviations $\sigma$.}
\begin{center}
\begin{tabular}{@{\extracolsep{5mm}}lcccc}
\hline\hline Atom pair&$d_{\textrm{min}}$ (\AA)&$d_{\textrm{max}}$ (\AA)&$\bar d$ (\AA) & $\sigma$ (\AA)\\\hline
O--H & 0.8 & 1.1 & 1.006 & 0.076\\
C--D & 0.9 & 1.2 & 1.039 & 0.072\\
C--C & 1.1 & 1.7 & 1.431 & 0.062\\
C--N & 1.1 & 1.7 & 1.321 & 0.055 \\
C--O & 1.1 & 1.7 & 1.268 & 0.040 \\
Zn--O & 1.8 & 2.4 & 2.004 & 0.128 \\
Zn--N & 1.8 & 2.4 & 2.081 & 0.104 \\\hline\hline
\end{tabular}
\end{center}
\end{table}
The refinement process included fitting the experimental $F(Q)$ ($0.6\leq Q\leq40$\,\AA$^{-1}$), $I(t)$ and $G(r)$ ($1.6\leq r\leq25.2$\,\AA) functions. We note here that the minimum value of $r$ used for the $G(r)$ fitting is larger than the nearest-neighbour bond length for the vast majority of solvents, including H$_2$O. A total of five independent RMC refinements were performed in parallel; each refinement was allowed to continue until no further improvements in the fits to data were observed. The absolute atomic coordinates differ amongst the five final configurations, but the corresponding fits to data are essentially identical. Wherever possible, our results are averaged over all five configurations.
\section{Results}
\subsection{Average structure}\label{averagestructure}
We used as the basis of our Rietveld refinements the models of Refs.~\onlinecite{Wu:2009} and \onlinecite{Sun:2002}, both of which include a number of water molecules within the framework pore structure. Our first set of refinements involved simply removing the solvent component and refining framework coordinates. We obtained essentially identical fits for both $P3_1$ and $P6_2$ models, neither of which was entirely convincing [Fig.~\ref{fig2}(a)]. A considerable improvement to the quality of these fits was obtained by including scattering density within the framework pores. We tested a number of different ways of modelling this scattering density---including the coordinates of Refs.~\onlinecite{Wu:2009} and \onlinecite{Sun:2002}---but found the best fits were obtained for a slightly different structure model containing four H$_2$O molecules per formula unit [Fig.~\ref{fig2}(b)]. The corresponding crystallographic details are summarised in Table~\ref{table2}.
\begin{figure}
\includegraphics{fig2.jpg}
\caption{Representative Rietveld fits to the 10\,K neutron diffraction pattern of Zn(ISN)$_2$: (a) using an evacuated framework model, and (b) using a model containing four water molecules per formula unit. Experimental data are given as small filled circles, the fitted profile is shown as a solid red line, and the difference (fit$-$data) is shown beneath each curve.\label{fig2}}
\end{figure}
It is certainly feasible that the sample absorbed atmospheric H$_2$O during setup of the GEM experiment or that some H$_2$O or DMSO (used in the synthesis) remained from incomplete desolvation during sample preparation. Because all of the various models for adsorbed solvent we tested involved large displacement parameters and fractional occupancies of multiple equivalent sites, we are certainly not claiming that the diffraction intensities are sufficiently sensitive to identify the chemical composition of the included component. Instead, we were keen to assess the extent to which the structural parameters of the framework itself were affected by the different models. The positions and atomic displacement parameters obtained for the `four H$_2$O' and solvent-free models are given in Table~\ref{table2}; their close correspondence illustrates that the framework geometry is robustly defined by the diffraction data, even if those of the solvent component are not. In all our refinements, we found no evidence to support symmetry lowering to the $P3_1$ model of Ref.~\onlinecite{Sun:2002}.
\begin{table}
\begin{center}
\caption{\label{table2}Crystallographic parameters, atomic coordinates
and isotropic equivalent displacement parameters determined using
Rietveld refinement of neutron scattering data for Zn(ISN)$_2$. The ISN
coordinates are given as the centre-of-mass of the ligand. The first set
of positional and displacement parameters correspond to the final
structural model containing four H$_2$O molecules per formula unit; the
second (marked with an asterisk) correspond to the solvent-free model
discussed in the text.}
\begin{tabular}{lcccc}
\hline\hline
\multicolumn{1}{l}{Crystal system}&\multicolumn{4}{l}{Hexagonal}\\
\multicolumn{1}{l}{Space group}&\multicolumn{4}{l}{$P6_2$}\\
\multicolumn{1}{l}{$a$ (\AA)}&\multicolumn{4}{l}{15.53615(25)}\\
\multicolumn{1}{l}{$c$ (\AA)}&\multicolumn{4}{l}{6.13556(26)}\\
\multicolumn{1}{l}{$V$ (\AA$^3$)}&\multicolumn{4}{l}{1282.54(6)}\\
\multicolumn{1}{l}{$Z$}&\multicolumn{4}{l}{3}\\
\multicolumn{1}{l}{$T$ (K)}&\multicolumn{4}{l}{10}\\
\hline\hline
Atom&$x$&$y$&$z$&$U_{\rm iso}$ (\AA$^2$)\\\hline
Zn&0.5& 0 &0.1319(16) & 0.0199(22)\\
ISN&0.26272(8)&0.79764(9)&0.4963(12)&0.0229(3)\\
H$_2$O1&0.7702(11)&0.8083(6)&0.310(4)&0.438(5)\\
H$_2$O2&0.8143(8)&1.0246(7)&0.3744(20)&0.438(5)\\\hline
Zn$^\ast$&0.5& 0 &0.132(3)&0.016(4)\\
ISN$^\ast$&0.26305(15)&0.79501(17)&0.514(3)&0.0204(7)\\\hline\hline
\end{tabular}
\end{center}
\end{table}
It was possible to refine reliably a set of anisotropic displacement parameters for the ISN ligand. The values obtained correspond to thermal ellipsoids that are strongly elongated in a direction perpendicular to the plane of the ligand itself [Fig.~\ref{fig3}(a)]. This sort of behaviour is not uncommon for molecular framework materials, since the lowest-energy vibrational modes usually involve transverse displacements of the bridging ligand; examples include the cyanide bridges of Zn(CN)$_2$ (Ref.~\onlinecite{Goodwin_2005}) and the terephthalate linkers in MOF-5 (Ref.~\onlinecite{Lock_2010}). What is unusual here is the magnitude of the thermal ellipsoids given that the data were collected at 10\,K. We note that by using a wide range of GEM detector banks our refinements include data at sufficiently low $d$-spacing values to give confidence in the determination of anisotropic displacement parameters. The issue of the magnitude of displacement parameters is one to which we return in Section~\ref{discussion}.
\begin{figure}
\includegraphics{fig3.jpg}
\caption{Unit cell projections of Zn(ISN)$_2$ viewed down the \emph{c} axis, showing (a) the average structure and (b) the atomic distributions obtained by collapsing the RMC configuration onto a single unit cell. In both cases, water molecules have been removed for clarity.\label{fig3}}
\end{figure}
\subsection{Local structure}
With the strong preference in Rietveld refinements for a structure model containing four water molecules per unit cell, our RMC study of local structure in Zn(ISN)$_2$ used this same model as its starting point. RMC refinement gave fits to the time-of-flight Bragg intensity function $I(t)$ [Fig.~\ref{fig4}] that were essentially identical to those obtained by Rietveld refinement. As anticipated, the solvent molecules were strongly disordered in our final RMC configurations and consequently our analysis focusses on the local structure of the Zn(ISN)$_2$ framework itself.
\begin{figure}
\includegraphics{fig4.jpg}
\caption{RMC fit to the Bragg profile, with experimental data shown as points and the fit (solid line) obtained using {\sc rmcprofile} as described in the text. The difference curve is shown beneath the fit.\label{fig4}}
\end{figure}
By `collapsing' the atomic coordinates of each refined RMC configuration onto a single unit cell, it is possible to visualise the average structure model to which these configurations correspond [Fig.~\ref{fig3}(b)]. There is a clear similarity to the results of our Rietveld refinement. In particular, there are large anisotropic displacements of the ISN ligand that reflect well the displacement parameters determined above. Whereas the Rietveld refinement employed rigid-body constraints and a single set of displacement parameters for all atoms within the ISN ligand, the RMC refinement allows the scattering distribution for each atomic site to assume whatever form is demanded by the data. We find that the differences amongst distributions for the various atoms in the ISN ligand are small, and certainly in all cases the displacements are much stronger in a direction perpendicular to the ligand plane.
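In practice, collapsing amounts to reducing each atom's position, expressed in fractional coordinates of the crystallographic cell, modulo one along each axis, and then accumulating the folded coordinates. A minimal sketch (with hypothetical coordinates and grid size):

```python
def collapse(frac_coords):
    """Fold fractional coordinates (in units of the crystallographic cell,
    possibly lying outside [0, 1)) back into a single unit cell."""
    return [(x % 1.0, y % 1.0, z % 1.0) for x, y, z in frac_coords]

def cloud_projection(frac_coords, nx=64, ny=64):
    """Accumulate the folded (x, y) coordinates on a grid, mimicking a
    projected scattering cloud such as that of Fig. 3(b)."""
    grid = [[0] * nx for _ in range(ny)]
    for x, y, _ in collapse(frac_coords):
        # clamp guards against the rare x % 1.0 == 1.0 floating-point edge case
        grid[min(int(y * ny), ny - 1)][min(int(x * nx), nx - 1)] += 1
    return grid
```

The same folding, applied to all five configurations and all symmetry-equivalent cells, produces the atomic distributions shown in Fig.~\ref{fig3}(b).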
\begin{figure}
\includegraphics{fig5.jpg}
\caption{Calculated and experimental $G(r)$ functions for Zn(ISN)$_2$. (Bottom) Experimental $G(r)$ data (filled black circles) and RMC fit (red line). (Middle) Calculated $G(r)$ function for the static disordered model obtained from Rietveld refinement. (Top) Calculated $G(r)$ function for the low-energy-dynamics model described in the text.\label{fig5}}
\end{figure}
Given the ambiguity regarding included solvent composition, it is not at all surprising that we found the lowest-$r$ region of the $G(r)$ particularly difficult to normalise robustly. For most of the $G(r)$, the contribution from a disordered solvent network would be expected to be a smoothly-varying function of $r$; however, at the lowest values of $r$ there must exist well-defined contributions corresponding to the interatomic separations within individual solvent molecules. Consequently our RMC fits to $G(r)$ were performed only for $r\geq1.7$\,\AA\ in order that the particular method employed for modelling solvent inclusion (in this case four H$_2$O molecules per unit cell) would not affect refinement of the framework. The fits obtained are not at the quantitative level usually achieved using {\sc rmcprofile}, but nonetheless all qualitative features of the $G(r)$ are certainly captured well [Fig.~\ref{fig5}].
We were struck by the absence of pronounced features in the PDF beyond $r\simeq5$\,\AA, and especially so for data collected at a temperature of 10\,K. Even for a system with a disordered component, the framework itself would be expected to give rise to a well-structured $G(r)$ function: the superposition of two functions---one strongly varying and the other weakly varying---still varies strongly. Consequently, the $G(r)$ function could be considered consistent with a large degree of disorder in the framework geometry of Zn(ISN)$_2$.
It is straightforward to determine bond-angle distributions directly from the RMC configurations, and we concentrate here on two particular distribution functions. The first corresponds to the angles within the [ZnN$_2$O$_2$] coordination tetrahedra [Fig.~\ref{fig6}(a)]. The second type of angle is based on the centres of mass of the ISN ligands (which we represent by the symbol `X'): then the X--Zn--X angle distribution [Fig.~\ref{fig6}(b)] is analogous to the O--Si--O tetrahedral angle distribution in $\alpha$-quartz itself. We find in both cases that the distributions are very broad, suggesting that Zn(ISN)$_2$ is flexible both on the scale of the framework itself and also in terms of the individual coordination polyhedra.
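Extracting such distributions from an RMC configuration reduces to evaluating, for each centre, the angle between the bond vectors to pairs of coordinated atoms. A minimal sketch (hypothetical coordinates; the neighbour lists are assumed known):

```python
import math

def bond_angle(center, p1, p2):
    """Angle p1-center-p2 in degrees, from the dot product of the bond vectors."""
    u = [a - b for a, b in zip(p1, center)]
    v = [a - b for a, b in zip(p2, center)]
    du = math.sqrt(sum(x * x for x in u))
    dv = math.sqrt(sum(x * x for x in v))
    cosang = sum(x * y for x, y in zip(u, v)) / (du * dv)
    cosang = max(-1.0, min(1.0, cosang))   # guard against rounding outside [-1, 1]
    return math.degrees(math.acos(cosang))

def angle_distribution(centers, neighbour_lists):
    """All intratetrahedral angles for each centre and its list of neighbours."""
    angles = []
    for c, nbrs in zip(centers, neighbour_lists):
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                angles.append(bond_angle(c, nbrs[i], nbrs[j]))
    return angles
```

For an ideal tetrahedral centre all six angles equal $\cos^{-1}(-1/3)\simeq109.47^\circ$; the widths of the measured distributions in Fig.~\ref{fig6} quantify the departure from this ideal.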
\begin{figure}
\includegraphics{fig6.jpg}
\caption{ RMC bond angle distributions: (a) intratetrahedral angles within the [ZnN$_2$O$_2$] coordination environment (N--Zn--N in blue, O--Zn--O in red, and N--Zn--O in black); (b) intratetrahedral X--Zn--X angles reflecting geometric flexing of the framework itself.\label{fig6}}
\end{figure}
The degree of structural distortion can be quantified further using the language of geometric algebra (GA).\cite{Wells_2002} Analysis of RMC configurations using the GA-based code {\sc gasp}\cite{Wells_2002,Wells_2002b,Wells_2004} allows the atomic displacements to be understood in terms of (i) translations, (ii) rotations, and (iii) distortions of fundamental geometric units. In this particular system, there are two types of geometric unit of special interest. The first corresponds to the [ZnN$_2$O$_2$] coordination tetrahedra, the behaviour of which reflects the rigidity of Zn--N and Zn--O bonding interactions. The second type of unit involves the centres of mass of the ISN ligands: here the behaviour of the [ZnX$_4$] tetrahedra describes the flexing and deformation of the framework structure as a whole.
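The essence of this decomposition can be conveyed with a simplified sketch: fit the best rigid-body translation and rotation of each polyhedron and treat whatever residual remains as distortion. Here we use the Kabsch (SVD) algorithm for the rotation fit, which is our choice for illustration; {\sc gasp} itself works in the geometric-algebra formalism and additionally separates bond bending from bond stretching.

```python
import numpy as np

def decompose(ref, obs):
    """Split the displacement of a polyhedron (rows = vertex coordinates)
    into a centroid translation, a best-fit rotation, and an RMS residual
    distortion -- a simplified analogue of the GASP analysis."""
    t = obs.mean(axis=0) - ref.mean(axis=0)       # rigid-body translation
    P = ref - ref.mean(axis=0)
    Q = obs - obs.mean(axis=0)
    # Kabsch: optimal proper rotation aligning the reference unit to the
    # observed one, with a determinant check to exclude reflections
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    residual = Q - P @ R.T                        # what rigid motion cannot explain
    return t, R, np.sqrt((residual ** 2).mean())

# Illustrative check: a tetrahedron rotated 30 degrees about z and
# translated should show zero residual distortion.
ref = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
theta = np.radians(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
obs = ref @ Rz.T + t_true
t_fit, R_fit, mismatch = decompose(ref, obs)
```

Applied to every [ZnN$_2$O$_2$] and [ZnX$_4$] unit in the RMC configurations, this type of decomposition yields the component magnitudes plotted in Fig.~\ref{fig7}.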
In Fig.~\ref{fig7} we plot the relative components of translations, rotations, and deformation (stretching and bending) determined for the two types of polyhedral unit across our five RMC configurations. What we find is that the translational component is very large indeed for the Zn coordination polyhedra. These translations are evidently allowed by flexing of the Zn--ISN--Zn linkages, reflected in the large angle bending component of the [ZnX$_4$] values. These distortions will correspond to increased displacement of the Zn atoms away from their average sites, and also an apparent increase in the transverse component of the anisotropic displacement parameters for the ISN ligands. These two effects are precisely those observed in Rietveld refinement.
\begin{figure}
\includegraphics{fig7.jpg}
\caption{Average mismatch scores obtained using {\sc gasp} for $\alpha$-quartz (black), the [ZnO$_2$N$_2$] coordination environment in Zn(ISN)$_2$ (dark grey), and the [ZnX$_4$] framework geometry of Zn(ISN)$_2$ (light grey). The bar chart shows the type of distortions present in the polyhedra: (left--right) translations, rotations, bond angle bending and bond stretching.\label{fig7}}
\end{figure}
Finally, we put these results in context by comparing the magnitude of the translation, rotation and distortion components with those obtained for an RMC refinement of $\alpha$-quartz itself (also at 10\,K) [Fig.~\ref{fig7}].\cite{Tucker:2001_aquartz} Not only are the rigid-body-type translations and rotations many times larger for Zn(ISN)$_2$, but the distortions are also an order of magnitude more extreme. This result is consistent with the general perception that MOF-type materials are substantially more flexible than their oxide-based counterparts.
\section{Discussion}\label{discussion}
That the ISN ligands in Zn(ISN)$_2$ are displaced in a transverse direction to a very large degree even at 10\,K seems clear both from the average- and local-structure analysis performed so far. The central question is whether or not the magnitude of this displacement is consistent with low-energy vibrational motion of the linkages or with static disorder. Here we make use of the following lattice-dynamical expression, which links the magnitude of atomic displacement to phonon mode energies:
\begin{equation}
\langle u_j^{2}\rangle=\frac{\hbar}{m_j\omega_{\textrm E}}\left[\frac{1}{2} + n(\omega_{\textrm E},T)\right],\label{u2}
\end{equation}
where $n(\omega_{\textrm E},T)=1/\{\exp[\hbar \omega_{\textrm E}/k_{\textrm B}T] - 1\}$ is the Bose--Einstein occupation number and $m_j$ the mass of atom $j$.
Because 10\,K is low with respect to typical phonon frequencies, the displacements in Eq.~\eqref{u2} will be dominated at this temperature by the energy of the first dispersionless branch of the phonon spectrum. Phonon frequencies have not been determined for Zn(ISN)$_2$, but we might expect that the relevant span of energies will be roughly similar to that in other MOFs; \emph{e.g.}\ MOF-5, the structure of which is also based on Zn$^{2+}$ centres but connected via terephthalate ligands (similar in size to isonicotinate).\cite{Zhou:2006} The first dispersionless branch in MOF-5 is calculated to occur at $\omega=4.6$\,THz. Using this value as a conservative estimate for $\omega_{\textrm E}$ and substituting into Eq.~\eqref{u2}, one obtains $\langle u_{\textrm{Zn}}^2\rangle=0.0113$\,\AA$^2$ and $\langle u_{\textrm{ISN}}^2\rangle=0.0059$\,\AA$^2$. Comparing these values to the $U_{\textrm{iso}}$ values in Table~\ref{table2}, it is clear that the experimental Zn and ISN displacements are, respectively, two and four times larger than can be accounted for by thermal motion alone. This is clear evidence that the displacements observed for Zn(ISN)$_2$ are consistent only with static disorder of the framework.
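The estimate is easily reproduced numerically. The sketch below is our own check, and two inputs are assumptions on our part rather than statements from the text: the 4.6\,THz figure is inserted directly as $\omega_{\textrm E}$ in s$^{-1}$ (\emph{i.e.}\ without a $2\pi$ factor), and the ISN mass is taken as that of the perdeuterated ligand ($\sim$126\,u), as appropriate for a neutron experiment; with those choices the quoted values are recovered to within about 1\%.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K
amu = 1.66053907e-27     # kg

def u_sq(mass_amu, omega, T):
    """Einstein-mode mean-square displacement, Eq. (u2), in Angstrom^2."""
    m = mass_amu * amu
    n = 1.0 / (np.exp(hbar * omega / (kB * T)) - 1.0)  # Bose-Einstein occupation
    return hbar / (m * omega) * (0.5 + n) * 1e20       # m^2 -> Angstrom^2

omega_E = 4.6e12   # s^-1: the MOF-5 estimate, used directly (our assumption)
T = 10.0           # K

u_Zn = u_sq(65.38, omega_E, T)    # Zn
u_ISN = u_sq(126.1, omega_E, T)   # perdeuterated isonicotinate (assumed mass)
```

At 10\,K the occupation term $n\simeq0.03$ is nearly negligible, so the result is dominated by the zero-point contribution $\hbar/2m\omega_{\textrm E}$.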
As a further check of this conclusion, we calculated using {\sc pdfgui} (Ref.~\onlinecite{Farrow_2007}) the $G(r)$ functions expected from the Rietveld model of Section~\ref{averagestructure} using both the as-refined $U_{ij}$/$U_{\textrm{iso}}$ parameters (\emph{i.e.}\ with static disorder) and using equivalent parameters re-scaled according to the $\langle u_i^2\rangle$ values estimated above (\emph{i.e.}\ for a comparable model where displacements are due only to low-energy vibrational modes). The results of both calculations are shown in Fig.~\ref{fig5}, from which it is clear that (i) the dynamic model gives a PDF that is much more typical of 10\,K data, and (ii) the static disorder model reflects well the actual experimental $G(r)$.
In conclusion, we have used a combination of Rietveld and RMC refinement of neutron total scattering data to study the local and average structure of the metal--organic framework zinc(II) isonicotinate. While our study has unquestionably been complicated by the inclusion of hydrogen-containing solvent molecules within the pore structure of the framework, we do find robust evidence for the existence of large-scale static disorder in the material at 10\,K. The role of structural flexibility in driving the unusual dynamic properties of MOFs is increasingly appreciated; what we find here is that the same flexibility may also affect the ground-state structural properties of the same materials.
\section*{Acknowledgements}
This research was supported financially by the EPSRC (grant EP/G004528/2) and the ERC (project 279705), and by the STFC in the form of access to the GEM instrument at ISIS.